Showing results for tags 'grid'.
Found 4 results
The 2017 TUFLOW release notes mention that input grid extents are checked when using the "Read GRID ==" command: if a grid's extent falls outside the 2D domain, the grid is skipped to reduce simulation startup time. I have noticed that for quadtree models with nesting, input grids are processed for each refinement level even if some grids do not fall within that refinement region. For example, take a quadtree model with a 10 m base grid size and one 5 m refinement area, using DEM tiles to set Zpts. Every tile in the DEM is processed to the 5 m refinement level (GRID_TILE_X.5m.xf4) even if that tile does not intersect any part of the 5 m refinement region. Does this unnecessarily increase startup times by generating xf files that aren't needed?
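For anyone hitting this while iterating on a model, one possible (hedged) workaround is to suppress xf file generation entirely with the `XF Files ==` .tcf command, at the cost of losing the cached pre-processed grids on subsequent runs:

```
! .tcf sketch -- suppress writing of pre-processed xf files.
! Note: this avoids generating per-refinement-level xf files at startup,
! but repeat runs will re-process all input grids each time.
XF Files == OFF
```

Whether this is a net saving depends on how often the same geometry is re-run; for one-off test runs it may help, for production runs the cached xf files are usually worth keeping.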
I've trawled the manual and I can't seem to find the answer to this - if it is in there please let me know! If I have multiple grids listed in the .tgc via the Read GRID == command and some cells overlap, does TUFLOW treat the first grid as 'truth' (then the second, third, etc.), or does the last grid overwrite overlapping cells from the grids read on the lines above it? I could merge them in Esri, but that creates more files, uses more disk space, etc. Thanks.
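For context, TUFLOW processes the .tgc from top to bottom, so the usual layering behaviour (worth confirming against the manual for your version) is that later commands overwrite earlier ones where they overlap. A minimal sketch with hypothetical file names:

```
! .tgc sketch -- commands are applied in file order,
! so where the grids overlap, the LAST Read GRID command wins.
Read GRID Zpts == grid\DEM_coarse.asc   ! applied first, base elevations
Read GRID Zpts == grid\DEM_survey.asc   ! applied second, overwrites overlapping Zpts
```

On this reading, ordering the grids from least to most trusted in the .tgc avoids the need to merge them in Esri first.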
Q: I've got substantial differences between my time series and plot output data and the max grids. Why is this?

A: There are a few things that could be going on here, depending on the differences you're seeing.

Big differences (>0.1 m)

The max WSL is a tracked maximum: at every timestep TUFLOW checks the current water level at the ZC and saves it if it is the maximum so far for the run, so it is effectively a maximum calculated at every timestep. Compare that to the TS output, which is sampled only at your time series output interval. Similarly, a WaterRide output is an envelope of the maximum grids at every output time, not of every cell at every timestep. If you're seeing a big difference between the gridded results and the TS results, the temporal resolution of WaterRide and your TS layer may be missing a point of instability. Investigate the mass balance at that location to see if it spikes, and also try substantially reducing the time series output interval (down to something close to your timestep if possible) to see whether the TS can reproduce the gridded results.

Small differences

There are two mechanisms used to create ASCII files that can contribute to differences between the grid and PO output. The first is the extrapolation from cell centres to cell corners. TUFLOW calculates the water surface level at the ZC; when you grid results using TUFLOW_to_GIS, it extrapolates out to the ZHs at the cell corners and does not preserve the ZC values. This is why the mapped grid cell size is half your modelled cell size. This isn't normally an issue except where you have very steep topography or incised channels. What you can do to help is write the ASCII files from within TUFLOW: you then get the ZC values, i.e. no extrapolation, which can reduce the difference. (You could also use the HIGH RES output option. This gives output at cell centres, mid-sides and corners, but be aware that your output will be about four times the size, and some programs (such as TUFLOW_to_GIS and dat_to_dat) don't yet recognise this format.)

The second is the north/south orientation of the ASCII grid. ASCII grids are always north/south orientated, so the ASCII cells do not line up with your modelled orientation and the grids must be interpolated to rotate them. This can also cause issues over steep topography where there are large changes in water level or ground level. Unfortunately, there is nothing you can do to fix this one. You can look at the TUFLOW .dat files for the raw results and confirm to yourself that the modelled results are hitting the mark and that only the presentation of the results differs.
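The "reduce the time series output interval" suggestion above translates into a couple of .tcf commands; the values here are illustrative only (pick an interval close to your timestep):

```
! .tcf sketch -- hypothetical values, units in seconds.
! Tighten the TS sampling so point output approaches the
! per-timestep tracked maximum captured in the max grids.
Time Series Output Interval == 1     ! sample PO/TS output every second
Map Output Interval == 300           ! gridded map output every 5 minutes
```

If the TS results converge towards the max grids as the interval shrinks, the original discrepancy was a sampling artefact rather than a model problem.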
Hi admins, I have just been running a ~50 ha ROG model, pure 2D domain with the GPU solver. We really wanted to run it with 1 m cells so we could pick up small overland flow paths within the site, but the model remained unstable until we adopted 4 m cells. The 4-hour simulation then ran very fast, in about 5 minutes. Am I right in thinking that the timestep may have some impact on the instability? Are there any commands that can be applied to slow the timestep and accept (force) a smaller grid, such as 1 m or even 2 m? Cheers
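For the HPC/GPU solver, the timestep is normally adaptive, and (as a hedged sketch, values assumed, check your manual version) it can be reined in via the control number factor or an explicit timestep cap in the .tcf:

```
! .tcf sketch -- HPC/GPU run with a more conservative adaptive timestep.
! Values are illustrative only.
Solution Scheme == HPC
Hardware == GPU
Control Number Factor == 0.8   ! scale the adaptive timestep down by 20%
Timestep Maximum == 0.5        ! cap the timestep (seconds)
```

If the 1 m model only stabilises with a heavily reduced control number factor, that usually points to a localised issue (topography, boundary, or inflow) worth finding rather than masking with a small timestep.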