Showing results for tags 'gpu'.

Found 18 results

  1. Dear Admin/Members, I ran the exact same setup with the GPU and CPU solvers (a 2-hour flood event, rain on grid, water level BC). Surprisingly, the water levels (extracted from PO lines) in the GPU run are constant over the 2-hour event and are different from what the CPU solver returns! Differences are between 1.2 m and 2 m! Any comment or suggestion is highly appreciated. Cheers, Ali
  2. Hi, We are running some TUFLOW models using the GPU hardware and HPC solution scheme (Build 2017-09-AC). We found that some of the runs fail without warning (well into the simulation). It seems that they are 'exiting without prompt', so we are unable to view the dos window to see what has caused the models to fail. We have tried disabling the 'quick edit mode' in the dos window in case the issue was related to the 'TUFLOW pause mid simulation? cause and solution' topic posted by Chris Huxley, but this didn't make any difference. Three runs (15, 25, and 540 minute) completed without any apparent issues. Two runs (90 and 120 minute) failed, but when re-started they completed successfully. However three longer runs (720, 2400 and 2880 minute) failed and will not complete on re-start. We have reviewed the .tlf files (both standard .tlf and hpc.tlf). We appreciate that the adaptive timestep will mask potential instabilities, but there is nothing in the tlf files to indicate that TUFLOW is having instability issues (i.e. the timesteps are consistent at the time of failure). We would appreciate some guidance on how to resolve this issue. We are managing the runs via TRIM. Kind regards, Francis Lane
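One way to preserve the console messages when a run exits abruptly is to launch the simulation from a batch file that redirects the console output to a text file and pauses, rather than relying on the DOS window staying open. The paths and file names below are placeholders, and the -b batch switch should be checked against your build's documentation:

```
:: Hypothetical run_720min.bat - keeps the console output after an abrupt exit
"C:\TUFLOW\Releases\2017-09-AC\TUFLOW_iSP_w64.exe" -b M01_720min.tcf > M01_720min_console.log 2>&1
pause
```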
  3. Hi Admin, Do you have an example set of model text files incorporating the HPC scheme (.tcf), a 1D domain (.ecf), initial losses via materials, and direct rainfall? Thanks in advance
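A minimal sketch of such a .tcf is shown below. The command names are standard TUFLOW commands, but every file name and path is a placeholder, and whether 1D/ESTRY linking is available on the GPU depends on the build, so treat this as a starting point rather than a working model:

```
! Hypothetical .tcf sketch - placeholder file names throughout
Solution Scheme == HPC                         ! HPC solver
Hardware == GPU                                ! run HPC on the GPU
Geometry Control File == ..\model\M01.tgc
BC Control File == ..\model\M01.tbc            ! direct rainfall applied here, e.g. Read GIS RF ==
ESTRY Control File == ..\model\M01.ecf         ! 1D domain (support is build-dependent on GPU)
BC Database == ..\bc_dbase\bc_dbase.csv
Read Materials File == ..\model\materials.csv  ! initial/continuing losses set per material
```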
  4. I've got multiple runs going on my computer, one of which is a GPU run. It seems to have a drastic effect on the run times of my CPU models. Has anyone else experienced this?
  5. I've set up downstream boundaries in GPU models a number of times now using a 2d_bc HT boundary where a fixed water level below ground level is used. According to the documentation, this causes the model to apply 'normal flow' at this point, effectively behaving as an automatic HQ boundary. However, the results usually look like water is simply pouring out of the boundary, with water level and depth decreasing as they approach the boundary. This does not look like normal flow as I would have expected it and can have effects quite some distance upstream. The attached image illustrates the point. Is someone able to expand on how exactly this boundary works?
  6. We have recently become aware of an issue affecting TUFLOW GPU simulations using the new gridded rainfall inputs. If the map output interval exceeds the interval in the rainfall grids, the applied rainfall boundary condition is not updated. Choosing a map output interval that matches or divides into the rainfall data interval will alleviate the issue. For example, if the gridded rainfall has a grid every 30 minutes, a Map Output Interval of 30, 15 or 10 minutes will all work. It is also important that the start map output time matches the gridded rainfall input start time. This occurs by default, though it may be altered if a user-defined Start Map Output time is specified. This does not affect TUFLOW "classic" simulations, and the issue will be corrected in the 2016-03-AB release, which will be available within the next month. If you are concerned about large file sizes resulting from your map output interval being reduced to fit your rainfall data interval, you can use an output zone. For example, with a 10 minute rainfall interval, an output zone can be defined with an output interval of 10 minutes while the whole-model output interval remains 1 hour. Again, this work-around will only be required until the 2016-03-AB release. The 2016-03-AA release can output the instantaneous rainfall rate and the cumulative rainfall with the RFR and RFC output types respectively. These can be used to cross-check the results. For example, to output the depth, levels, velocities, rainfall rate and cumulative rainfall, the .tcf command would be: Map Output Data Types == d h V RFR RFC Please contact support@tuflow.com if you have any queries relating to the above. Regards, TUFLOW Support Team
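As an illustration of the work-around above, a hypothetical .tcf excerpt for 10-minute gridded rainfall might look like the following (the layer and zone names are placeholders; check the output zone syntax against your manual):

```
! Whole-model output kept at 1 hour; an output zone matches the 10-minute rainfall grids
Map Output Interval == 3600                       ! seconds
Map Output Data Types == d h V RFR RFC            ! RFR/RFC available from 2016-03-AA

Model Output Zones == ZoneA
Define Output Zone == ZoneA
  Read GIS Output Zone == gis\2d_oz_ZoneA_R.mif   ! placeholder layer
  Map Output Interval == 600                      ! 10 minutes, divides into the rainfall interval
End Define
```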
  7. Pre-release testing for the forthcoming update to the 2016 version of TUFLOW has identified a potential issue in the 2016-03-AA version when using TUFLOW GPU with SA inflows and the latest version of the NVidia drivers (368.81). The issue occurs when the SA inflows are proportioned to depth (the default behaviour). Previous versions of the NVidia driver show the correct flows, but with the latest driver an issue can occur when cells are wetting/drying within the SA boundary, causing an incorrect inflow volume. The NVidia drivers were released on the 14th of July 2016 and the 2016-03-AA version of TUFLOW was released in March 2016. Our SA inflow test results are shown below for the same model with the latest NVidia drivers (368.81) and an older set of drivers. The results highlight how artificial volume is created by the simulation run using the new NVidia drivers (368.81). Our testing has shown this error is resolved by using the "Read GIS SA ALL ==" command instead of "Read GIS SA ==". If you are using the 2016-03-AA version with SA inflows in a TUFLOW GPU model, the following is recommended: 1) Quality check your flow/volume results to determine whether the NVidia drivers your computer uses create the above-mentioned issue. 2) Use the "Read GIS SA ALL" option. A 2016-03-AB update is to be released shortly which will address the issue. For future NVidia driver updates, we are planning on running a series of benchmark models to check compatibility of the drivers. Please contact support@tuflow.com if you have any questions about the above. Regards, TUFLOW Team
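For reference, the recommended work-around is a one-line change in the control file (the layer name here is a placeholder):

```
! Before (affected by the driver issue):
! Read GIS SA == gis\2d_sa_inflows_R.mif
! After (work-around):
Read GIS SA ALL == gis\2d_sa_inflows_R.mif
```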
  8. Hi, I'd like to know how to apply cell flow width factors (CFW) in TUFLOW GPU. What command should be used - "Read GIS FC Shape == "? I have a model with a 15 m cell size and the bridge span is only 5 m, and I am trying to use the cell flow width factors to simulate the flow through the bridge. Any advice is much appreciated. Regards, Henry
  9. Question: I am currently running a GPU model and I want to apply a constriction to represent pier losses from a bridge. My understanding is that within the GPU I can only use a 2d_FLC file, which reads just one GIS field for the constriction. I have calculated the constrictions I wish to apply, however I am having difficulty finding out how they will be applied within the model. If you could please assist me with the following questions it would be very helpful. - Is the constriction applied to the centre of the cell or at the outer edges? - When applying the constriction, do I apply it per metre or per cell (by using a negative value, as with the layered flow constrictions in CPU), similar to what occurs in the CPU flow constrictions? Or do I simply apply the full constriction to the cell centre, if I want the constriction to apply to one grid cell? Is the same process used for the Cell Width Factors? Answer: Within the GPU solver, only the cell centre values are available and used. You can check the FLC or CFW applied to each cell by reviewing the 2d_grd check file. More info is available here: http://wiki.tuflow.com/index.php?title=Check_Files_2d_grd Regarding how the FLC is applied: · You provide the loss value that will be applied to the cell centre/s. It is essentially applied as a minor loss based on the velocity head of that cell centre: FLC * (V^2/2g). · If you use a polygon, all the cell centres within the polygon will be selected and the loss applied. · You can also use a line, which will allow you to select a small row of cells based on the 'crosshair' principle. · Using the Read GIS FLC == command there is no metre-by-metre application of losses as seen when using flow constrictions (which, notably, are not available in the GPU yet); i.e. you directly apply the loss coefficient to each cell of interest.
Regarding how the CFW is applied: · The Read GIS CFW == command is applied in a similar way but acts to reduce the available cross-sectional area of the cell. For example, a value of 0.9 will reduce the cell to 90% of full capacity. This factor is applied at all depths of flow through the cell/s. Whenever using these approaches we recommend cross-checking the losses through your structure against other model outputs, such as HEC-RAS, or with reference to bridge design documents such as Bradley: http://www.ciccp.es/ImgWeb/Castilla y Leon/Documentación Técnica/Hydraulics of Bridge Waterways (1978).pdf to assess the sensitivity of your assumed loss approach. Another good post detailing FLCs can be seen here: http://www.tuflow.com/forum/index.php?/topic/1130-flc-values-in-hx-links/ Regards, The TUFLOW Team.
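To make the loss formula above concrete, here is a small sketch (not TUFLOW code, just the arithmetic) of the head loss an FLC implies, plus the CFW that a 5 m span in a 15 m cell (post 8 above) would suggest. The function name is ours, for illustration only:

```python
# Sketch of the relationships described above - illustrative arithmetic only.
G = 9.81  # gravitational acceleration (m/s^2)

def flc_head_loss(flc, velocity):
    """Minor loss (m) applied at a cell centre: FLC * V^2 / (2g)."""
    return flc * velocity ** 2 / (2.0 * G)

# An FLC of 0.5 with a cell-centre velocity of 2 m/s costs ~0.10 m of head:
loss = flc_head_loss(0.5, 2.0)

# A CFW to represent a 5 m clear span in a 15 m cell reduces the cell
# to a third of its full conveyance width:
cfw = 5.0 / 15.0
```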
  10. Is it possible to apply HT boundaries within the 2D domain to withdraw flows out of the model and avoid using 1D ESTRY components? I'd like to keep using the GPU module. Cheers, O
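One common approach (discussed in post 5 above) is a 2d_bc HT line digitised along the outflow edge with a fixed level set below ground, which the GPU solver treats as an approximately normal-flow outlet. A hypothetical .tbc excerpt, with a placeholder layer name:

```
! In the .tbc - HT line along the downstream edge; level set below ground
Read GIS BC == gis\2d_bc_HT_outlet_L.mif
```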
  11. Hi TUFLOWers, We hope you have survived the dash to the end of the financial year. We frequently get asked, "What is the minimum or recommended hardware for TUFLOW modelling?" This is always a tricky question, as the answer depends on the type and size of the models you are going to be running! For a small model, TUFLOW should run on any modern PC or laptop capable of running Windows XP or later. However, large models may require a hefty computer running a 64-bit version of Windows. To assist you, we have prepared a new Wiki page and download so that you can compare run performance on a range of computers, including your own, for both CPU and GPU (if you have one). For more details please refer to: http://wiki.tuflow.com/index.php?title=Hardware_Benchmarking Regards, the TUFLOW Team.
  12. Hi, I was hoping to model the effects of an embankment failing using the GPU Solver. When I run the model, it appears that it can read the vzsh file. However, the following line shows up in the tlf file: WARNING 2320 - GPU Solver ignoring Read GIS Variable Z Shape commands. Also, a vzsh check file is not created. Am I doing something wrong, or is vzsh not supported by the GPU Solver at this time? If it isn't, is there hope for this function anytime soon? Thanks, Julian
  13. Hi TUFLOW admins, We are trying to run a GPU solver model. It is purely 2D. Strangely, the simulation prematurely exits stating: Allocating 13600 Kb of temporary 1D domain memory (RAM) Determining 1D array sizes SORRY - 1D linking not available yet with adaptive time-stepping. I have compared the setup files to another GPU solver model and they are consistent with each other. I can only think the error is in one of the layers? We have checked our BCs and other layers. We are really not sure. Cheers, O-Dogs
  14. Hi All, I have done some test runs of a rain-on-grid model using the GPU. The details are: Boundary condition - 2d_bc HT type (static tailwater). Run 1 - 5m cell size. Run 2 - 3m cell size. Run 3 - 2m cell size. All runs are the same except for the cell size in the TGC file. When checking the water level at a location near the boundary, it was found that the level drops after the peak for the 3m and 5m cell size runs. However, the level for the 2m cell size run continually increases; it seems the 2d_bc is not functioning. See the attached file for more information. I'd appreciate it if someone can help. Regards, Hai CGPU check.pdf
  15. Hi admins, I have just been running a ~50 ha ROG model, a pure 2D domain, with the GPU solver. We really wanted to run it using 1m cells so we could pick up small overland flow paths within the site, but the model continued to be unstable until we adopted 4m cells. The 4 hour simulation ran very fast, in about 5 minutes. Am I right in thinking that the time step may have some impact on the instability? Are there any commands that can be applied to slow the time step and accept (force) a smaller grid, such as 1m or even 2m? Cheers
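On the timestep question: explicit schemes such as the GPU solver are limited by a Courant-type condition, so halving the cell size roughly halves the allowable timestep. A rough sketch of the relationship, with illustrative numbers only (this is not how TUFLOW computes its adaptive timestep, just the underlying scaling):

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def stable_dt(dx, depth, velocity=0.0, courant=1.0):
    """Rough Courant-style timestep limit: dt ~ Cr * dx / (|V| + sqrt(g*d))."""
    return courant * dx / (abs(velocity) + math.sqrt(G * depth))

# For 0.5 m of still water, a 4 m cell allows ~1.8 s, a 1 m cell only ~0.45 s:
dt_4m = stable_dt(4.0, depth=0.5)
dt_1m = stable_dt(1.0, depth=0.5)
```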
  16. Q: We have hit our memory limit with a very large TUFLOW GPU model and are interested in adding another graphics card to our machine. Does SLI increase the memory available for TUFLOW GPU? If we add a 6GB card (GTX Titan) to our existing 3GB card (GTX 780), will this mean we have 9GB available, or do the cards need to match? A: SLI is a method for enabling rendering on multiple cards. The TUFLOW GPU extension is written with NVIDIA CUDA, which does not support the use of SLI. However, you can run a large model on up to four cards on the same motherboard (data is transferred between cards via the motherboard using the PCI bus). Essentially the model is split between the cards, so the memory requirement is shared amongst them. This means that running on multiple GPUs increases the size of the model that can be simulated. Note that you will need a GPU licence for each GPU card you wish to access. For example, if you have more than one GPU card and you wish to run the model across both cards, you will need two (free) GPU licences. In terms of the cards matching, you should be able to use mismatched cards; however, the model is currently split evenly between the cards. This means that: * The slowest card will limit the speed, as the cards have to synchronise every timestep. * The card with the smallest RAM will limit the model size to N x Smallest RAM, where N is the number of cards being used. If there is interest, we can look at allowing the user to define the split between the cards; however, an uneven split may not be ideal. For example, if you were to run a very large model (e.g. a 9GB model) split unevenly across the two cards (3GB 780 and 6GB Titan) for memory reasons, the Titan would need to process twice as much of the model as the 780. Given the Titan and the 780 have similar CUDA core counts (identical, depending on whether you are using the Ti or Black variants), this would mean the 780 would be waiting for the Titan. Regards, TUFLOW Support Team
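To select which cards a simulation runs on, the .tcf device-selection command (as we understand it; check your build's release notes for the exact syntax) lists the CUDA device IDs:

```
! Hypothetical .tcf excerpt - split the model across two cards
GPU Device IDs == 0 1   ! device numbers as reported by the NVIDIA tools
```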
  17. Q: I have a TUFLOW GPU simulation in which I am using "Read RowCol RF == <layer.mid>" to vary my rainfall factors (f1 and f2) on a cell-by-cell basis. However, when I review the results it would appear that the multiplication factors are not being used (or are used as 1). A: As background for other readers, to spatially vary direct rainfall in both TUFLOW classic and TUFLOW GPU, you can use the Read RowCol RF == <gis_layer> command and alter the f1 and f2 scaling factors. The hyetograph weights are multiplied together before being applied to the rainfall. When running TUFLOW GPU, the GPU module performs a check that these are within a valid range, currently 0 - 1. The range limit is applied in the GPU to limit the amount of memory required and maintain accuracy of resolution. If the factors are in the range 0 - 1 they are used as expected. However, if they are outside the range the following error occurs in the log file: Adding hydrograph weight layer 1 ... ERROR: Hydrograph weight data not in range [0..1] If the error above is generated, the simulation discards the weighting factors and proceeds; however, the results should be treated as suspect and not used. Currently, if this occurs the message is logged to the .gpu.tlf file but the simulation is not stopped. It is likely that a future release will force the simulation to stop. TUFLOW classic allows these weighting factors to add to more than 1, whereas the GPU solver is capped at 1. To maintain consistency with TUFLOW classic, this capped weighting limit is currently under review within the GPU module; we will ensure that users are notified should this change. In the interim, to use weighting factors greater than 1 for a TUFLOW GPU simulation, you will need to modify the rainfall boundary so that the f1 factors are less than 1, e.g. multiply the rainfall boundary by 2 and divide the factors by 2 so that they are less than 1.
For future releases, we have been enhancing TUFLOW to support a wider range of rainfall boundaries (e.g. a series of radar images). As part of this we are also adding more outputs. We will be including the following, to make it easier to track rainfall on a cell-by-cell basis: · Rainfall rate (output as mm/hr) · Cumulative rainfall (output as mm) Regards, TUFLOW Support Team
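The interim work-around above (scale the boundary up, scale the weights down, so the product is unchanged) can be sketched as follows. The function and its interface are ours for illustration, not a TUFLOW API:

```python
# Sketch of the work-around: bring f1/f2 weights into [0..1] and boost the
# rainfall boundary by the same factor so the applied depth is unchanged.
def rescale(rain_mm, weights, limit=1.0):
    """Return (rainfall series, cell weights) with all weights <= limit."""
    scale = max(weights) / limit
    if scale <= 1.0:
        return rain_mm, weights                      # already in range
    return [r * scale for r in rain_mm], [w / scale for w in weights]

rain, weights = rescale([10.0, 20.0], [0.5, 2.0])
# weights -> [0.25, 1.0], rainfall -> [20.0, 40.0];
# weight * rainfall is identical to the original at every cell.
```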
  18. Hi TUFLOW, Not sure why we are getting this error: ERROR 0103 - BC Groups not supported for .csv formats. We have not come across this before. Our 2D GPU model bc_dbase files are set up the same. We are using 2d_sa buffers. Thanks, O-Dogs