TUFLOW Forum
ajafari

GPU solver output is different from CPU solver


Dear Admin/ Members,

 

I ran exactly the same setup with the GPU and CPU solvers (2-hour flooding event, rain on grid, water level BC).

 

Surprisingly, the water levels (extracted from PO lines) in the GPU run are constant over the 2-hour event and differ from what the CPU solver returns! The differences are between 1.2 m and 2 m!

 

Any comment or suggestion is highly appreciated.

 

Cheers,

Ali


A picture of the GPU and CPU results may shed some light on the problem. Looking at the flood surface results will tell you more than PO plots.

 

You say the GPU results are static throughout the entire event?  From experience this typically means one of two things:

1) The cell at the PO location is dry (TUFLOW reports the ground level in the PO output in this instance);

2) Downstream boundary issues (drowning out the model). If your PO location is near the downstream boundary of your model, it might be worth reviewing the boundary condition definition and inputs in your GPU model.

 

Cheers,

Chris


Further to Chris' post, if you email the log files (.tlf and .gpu.tlf) for the simulations to support@tuflow.com, we will take a look at what might be causing the static water levels in the GPU run.

 

Cheers

Phil


Just some feedback for anyone else reading this. The model was a direct rainfall model with a downstream water level boundary. The model provided had a wet/dry depth of 0.01 m (10 mm) set, which appears to have been causing a significant volume error.

 

After getting the model inputs and logs, we made the following changes, and the results are now consistent between the CPU and GPU:

  • Set the wet/dry depth to 0.0002 m (0.2 mm), changed from 0.01 m (10 mm). I believe this was the main reason for the differences. Note that the mapping cutoff depth was left unchanged at 20 mm (Map Cutoff Depth == 0.02);
  • For the GPU model we used the single precision engine (for speed reasons); for the CPU model we used the double precision engine, which is required for direct rainfall modelling, particularly at high elevations. There is further discussion on this in the release notes (see the heading "Single and Double Precision" in the GPU module section); and
  • Revised the plot output. This does not affect the computations, only the output: "H_" (water level) type outputs were being specified on lines, but the Q_ (flow) type plot output should be along lines (flow across the line) and the H_ (water level) outputs should be at points (the water level in the cell the point is within).
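For anyone wanting to replicate the depth settings above, they correspond to control-file commands along these lines. This is a sketch only; the values are the ones quoted in this thread, so check the command names and sensible defaults against the TUFLOW manual for your release:

```
! .tcf fragment - wet/dry and mapping depths (values from this thread)
Cell Wet/Dry Depth == 0.0002    ! 0.2mm, reduced from 0.01 (10mm)
Map Cutoff Depth == 0.02        ! 20mm, left unchanged
```

The wet/dry depth controls when a cell participates in the computation, while the map cutoff depth only filters the mapped output, which is why the two can differ without affecting the solution.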

 

The results show the peak water levels are generally consistent. A histogram of the difference in peak water level (GPU – CPU) is below. As outlined in the release notes (see the heading "Will the Results be the Same as TUFLOW Classic?"), the results are not identical, and there are a number of reasons for this; however, the histogram shows that the majority of the results fall within a narrow band.

 

Histogram of peak water level differences

[attached image: post-220-0-66420200-1400467774_thumb.jpg]
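The comparison behind a histogram like this can be reproduced with a few lines of post-processing. A minimal sketch, assuming the peak water levels from each run have been exported to arrays sampled at the same cells (the array names and values here are hypothetical):

```python
import numpy as np

# Hypothetical peak water levels (m) at the same cells in each run
h_gpu = np.array([10.02, 9.87, 10.11, 9.95])
h_cpu = np.array([10.00, 9.90, 10.08, 9.97])

diff = h_gpu - h_cpu  # GPU minus CPU, as in the post above

# Bin the differences into 5 cm bands between -0.1 m and +0.1 m
counts, edges = np.histogram(diff, bins=np.arange(-0.1, 0.11, 0.05))
print(counts)  # most differences fall in a narrow band around zero
```

In practice the arrays would come from the gridded peak water level outputs of each run rather than being typed in by hand.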

 

The flows at a mid-catchment location are shown below; these show that the timing, peak flow and volume are consistent across the simulations.

[attached image: post-220-0-56400100-1400467922_thumb.jpg]

 

Edit: In terms of runtimes, the model has over 2.2 million cells. On CPU the run time is approximately 15.5 hours; on GPU (a GTX 680 with 1536 CUDA cores) it is approximately 20 minutes. In this case the GPU solver is ~46 times as fast as the CPU solver.
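The quoted speed-up follows directly from the two run times:

```python
# Speed-up implied by the run times quoted above
cpu_runtime_min = 15.5 * 60   # 15.5 hours on CPU, in minutes
gpu_runtime_min = 20          # ~20 minutes on a GTX 680
speedup = cpu_runtime_min / gpu_runtime_min
print(speedup)  # 46.5, i.e. the "~46 times as fast" quoted above
```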

 

Cheers

Phil

On 5/19/2014 at 8:23 AM, par said:

Reviving an old thread because it is relevant. I'm experiencing the same issue outlined above: depths differ by more than 1 m between Classic and the 2016 GPU solver. I have adjusted the parameters you mention above, and this makes a minimal impact on water levels in the GPU run, about 5 cm; the difference is still huge compared to Classic. I also note that the manual states you might expect a 0.2 m difference in 10 m of water at 3 m/s, and that a sensible thing to do is to vary the roughness to calibrate the GPU.

Is this still recommended?

For context, the Classic 1D/2D model uses a 2d_SA derived from PO lines in the 2D GPU model to truncate the model and save on run time. Could this be a source of error, or a reason why the water levels produced are so different? It 'feels' like there is too little volume in the Classic run and too much in the GPU run!

 

