Pre-release testing for the forthcoming update to the 2016 version of TUFLOW has identified a potential issue in the 2016-03-AA version of TUFLOW when using TUFLOW GPU with SA inflows and the latest version of the NVidia drivers (368.81). The issue occurs when the SA inflows are proportioned to depth (the default behaviour). Previous versions of the NVidia driver produce the correct flows, but with the latest driver an issue can occur when cells are wetting / drying within the SA boundary, causing an incorrect inflow volume. The NVidia drivers were released on the 14th of July 2016; the 2016-03-AA version of TUFLOW was released in March 2016.

Our SA inflow test results are shown below for the same model run with the latest NVidia drivers (368.81) and with an older set of drivers. The results highlight how artificial volume is created by the simulation run using the new NVidia drivers (368.81). Our testing has shown this error is resolved by using the "Read GIS SA ALL ==" command instead of "Read GIS SA ==".

If you are using the 2016-03-AA version with SA inflows within a TUFLOW GPU model, the following is recommended:
1) Quality check your flow / volume results to determine whether the NVidia drivers on your computer trigger the above-mentioned issue.
2) Use the "Read GIS SA ALL" option.

A 2016-03-AB update is to be released shortly which will address the issue. For future NVidia driver updates, we are planning to run a series of benchmark models to check driver compatibility.

Please contact email@example.com if you have any questions about the above.

Regards
TUFLOW Team
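As a sketch of the workaround described above, the change is a one-line edit in the TUFLOW boundary control file. The layer name below is hypothetical; substitute your own SA inflow GIS layer:

```
! Affected command (default behaviour: inflow proportioned to depth
! across wet cells; can mis-volume under NVidia driver 368.81):
! Read GIS SA == gis\2d_sa_inflows_001.shp

! Workaround: apply the inflow to all cells within the SA region,
! avoiding the wetting / drying sensitivity:
Read GIS SA ALL == gis\2d_sa_inflows_001.shp
```

Note this changes how the inflow is distributed within the SA polygon, so quality check flows and volumes after switching, as recommended above.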
Q: We have hit our memory limit with a very large TUFLOW GPU model and were interested in adding another graphics card to our machine. Does SLI increase the memory available for TUFLOW GPU? If we add a 6GB card (GTX Titan) to our existing 3GB card (GTX 780), will this mean we have 9GB available, or do the cards need to match?

A: SLI is a method for enabling rendering on multiple cards. The TUFLOW GPU extension is written with NVIDIA CUDA, which does not support the use of SLI. However, you can run a large model on up to four cards on the same motherboard (data is transferred between cards via the motherboard using the PCI bus). Essentially, the model is split between the cards, so the memory requirement is shared amongst them. This means that running on multiple GPUs increases the size of the model that can be simulated. Note that you will need a GPU licence for each GPU card you wish to access. For example, if you have more than one GPU card and you wish to run the model across both cards, you will need two (free) GPU licences.

In terms of the cards matching, you should be able to use mismatched cards; however, the model is currently split evenly between the cards. This means that:
* The slowest card will limit the speed, as the cards have to synchronise every timestep.
* The card with the smallest RAM will limit the model size to N x Smallest RAM, where N is the number of cards being used.

If there is interest, we can look at allowing the user to define the split between the cards; however, an uneven split may not be ideal. For example, if you were to run a very large model (e.g. a 9GB model) split unevenly across the two cards (3GB 780 and 6GB Titan) for memory reasons, the Titan would need to process twice as much of the model as the 780. Given the Titan and the 780 have similar CUDA core counts (identical, depending on whether you are using Ti or Black variants), the 780 would be left waiting for the Titan.

Regards
TUFLOW Support Team
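The N x Smallest RAM rule above can be sketched in a few lines. This is purely illustrative arithmetic (the helper function is hypothetical, not part of TUFLOW); the card sizes are the ones from the question:

```python
# Illustrative sketch of the even-split memory rule described above.
# With an even split, each of the N cards must hold 1/N of the model,
# so the card with the least RAM caps the total usable model size.

def max_model_size_gb(card_ram_gb):
    """Return the largest model (in GB) that fits when the model is
    split evenly across the given cards: N x min(RAM)."""
    n = len(card_ram_gb)
    return n * min(card_ram_gb)

# Mismatched pair from the question: 3GB GTX 780 + 6GB GTX Titan.
# The 780's 3GB caps each half, so only 2 x 3 = 6GB is usable,
# not the 9GB of total installed memory.
print(max_model_size_gb([3, 6]))

# Matched pair for comparison: two 6GB Titans give 2 x 6 = 12GB.
print(max_model_size_gb([6, 6]))
```

This is why adding the 6GB Titan to the 3GB 780 yields 6GB of usable model memory rather than 9GB under the current even split.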