Controlling maximum number of iterations

Miscellaneous discussion topics about Code_Saturne (development, ...)
Antech
Posts: 207
Joined: Wed Jun 10, 2015 10:02 am

Controlling maximum number of iterations

Post by Antech »

Hello. It's an old question, but it has arisen again, so I needed to cope with it somehow...
In some situations, such as the start of a calculation or convergence problems, Saturne makes lots of linear solver iterations (10000+). This is incompatible with practice, where we just need to pass through this period, not converge ideally to the "classic" 10^-5 tolerance. The calculation becomes so long that it really just cannot be performed in realistic time.
One can use large tolerances, but:
1. It requires an additional run with a high tolerance.
2. The program will not automatically converge to 10^-5 when possible and drop linear iterations otherwise.
3. It does not guarantee, for example, that the solver will converge to a 0.1 error.
So I prefer to limit the number of linear iterations to 100...300. In older versions there was a GUI option for this, but it has been removed due to the more complex solver settings.
I tried to experiment with setting the number of iterations in a user routine. The calculation was just a temperature field (frozen flow). For testing, I set up multigrid for temperature and set the maximum number of coarse iterations to 100 (the default is 10000) in cs_multigrid_set_solver_options. As a result, the first iteration lasted "forever" instead of ~30 min with the defaults, so I just stopped the process. Another problem is that, even if this gave a positive result, the user setting would override Saturne's automatic solver selection, which is not optimal.
Currently, I use another simple method: I set _n_max_iter_default to 300 in cs_sles_param.c and recompile. It gives exactly the required effect, but it's hard-coded and the same for all fields. Would you please add this variable (_n_max_iter_default) to the GUI? If I add it myself, the change will be lost with each new version.
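For a per-field cap without recompiling, something along these lines in cs_user_linear_solvers() (in cs_user_parameters.c) should also work. A minimal sketch, assuming the field is named "temperature" and a 300-iteration cap; note it still overrides the automatic solver selection for that field, which is the drawback mentioned above:

Code: Select all

void
cs_user_linear_solvers(void)
{
  /* PCG with diagonal (Jacobi) preconditioning, capped at 300
     iterations; applies only to this field, others keep defaults. */
  cs_sles_it_define(cs_field_by_name("temperature")->id,
                    NULL,          /* name (NULL: use field id) */
                    CS_SLES_PCG,   /* solver type */
                    0,             /* precond. poly. degree (0: diagonal) */
                    300);          /* n_max_iter */
}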
Yvan Fournier
Posts: 4254
Joined: Mon Feb 20, 2012 3:25 pm

Re: Controlling maximum number of iterations

Post by Yvan Fournier »

Hello,

Which version of the code are you using, and which linear solver are you using with multigrid (assuming you are using multigrid as a preconditioner, not as a solver)?

Using default settings, this issue has mostly disappeared in the computations that I am aware of. The main issue was that we were using multigrid as a preconditioner for a simple PCG solver, which does not guarantee convergence when the preconditioner has small fluctuations in its behavior, so switching to a flexible conjugate gradient was necessary. Since then, this issue seems to have mostly disappeared.

We do have convergence issues on some meshes at some time steps, related especially to difficult convergence of some turbulence variables.

For this, after too many (200 to 400) Gauss-Seidel or Jacobi iterations with the default settings, the code switches to GMRES.
This usually helps, though it is not always sufficient.

In any case, I concur that the low-level settings for linear solvers are getting too complex and unwieldy, so I plan on switching to a tree/dictionary type approach for solver settings, which would make it possible to change one setting at a time (such as the top-level number of iterations) instead of redefining everything in user-defined functions (and possibly to pass some settings as key/value strings through the GUI). This would certainly help in your case.
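To make the idea concrete, a purely hypothetical illustration (neither the function nor the key names exist in the code today, they are invented for this sketch):

Code: Select all

/* Hypothetical future API, for illustration only: override a single
   setting by key/value instead of redefining the whole solver. */
cs_sles_set_param("temperature", "n_max_iter", "300");

rather than the full redefinition currently needed in user functions.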

Although this would also make things simpler for my own tests on linear solvers (and I have been doing quite a few of those lately for GPU performance tuning), I can't guarantee this will be in 9.1 in December. I definitely hope I can do this by 9.2 (June 2026), and it might make it into 9.1, but I can't promise it, as we also have other features of the code we need to finish before that.

Regards,

Yvan
Antech
Posts: 207
Joined: Wed Jun 10, 2015 10:02 am

Re: Controlling maximum number of iterations

Post by Antech »

Thanks for your answer.

I use Saturne 8.0.4 now.

I have never written any CFD code myself, so I don't know the details. My idea was to take a user example and modify the number of iterations. The code is as follows:

Code: Select all

cs_multigrid_t *mg;
mg = cs_multigrid_define(-1, "TempC", CS_MULTIGRID_V_CYCLE);

cs_multigrid_set_coarsening_options(mg,
                                    3,    /* aggregation_limit (default 3) */
                                    0,    /* coarsening_type (default 0) */
                                    10,   /* n_max_levels (default 25) */
                                    30,   /* min_g_cells (default 30) */
                                    0.95, /* P0P1 relaxation (default 0.95) */
                                    0);   /* postprocessing (default 0) */

cs_multigrid_set_solver_options
  (mg,
   CS_SLES_PCG,  /* descent smoother type (default: CS_SLES_PCG) */
   CS_SLES_PCG,  /* ascent smoother type (default: CS_SLES_PCG) */
   CS_SLES_PCG,  /* coarse solver type (default: CS_SLES_PCG) */
   100,          /* n max cycles (default 100) */
   2,            /* n max iter for descent (default 2) */
   10,           /* n max iter for ascent (default 10) */
   100,          /* n max iter coarse solver (default 10000) */
   0,            /* polynomial precond. degree descent (default 0) */
   0,            /* polynomial precond. degree ascent (default 0) */
   0,            /* polynomial precond. degree coarse (default 0) */
   -1.0,         /* precision multiplier descent (< 0 forces max iters) */
   -1.0,         /* precision multiplier ascent (< 0 forces max iters) */
   1);           /* requested precision multiplier coarse (default 1) */
As I understand it, multigrid first solves on the coarse mesh, then interpolates and solves on the finer meshes... I selected the default solvers (all PCG) and reduced the maximum number of iterations from 10000 to 100. Anyway, it's also not the best way, because it replaces Saturne's internal optimal solver selection (I use the Auto solver settings now).

Regarding variables/GUI: when I was working on my relatively simple programs, I used the following rule. I had one big structure/class for the entire program, holding all the data. Subroutines were fed this structure (by pointer) plus the particular substructure to work on (by pointer), so everything was accessible from everywhere. Then, in the GUI, all the variables from this big structure were exposed; this was mandatory. So there was zero possibility (bugs aside) of any global variable not being editable/visible from the GUI. For more complex programs I used a configuration: the same kind of structure holding all the numerical and physical settings, such as the reaction list and rate parameters, numbers of iterations, tolerances, etc. This structure was likewise passed to every function as an argument (pointer), so it could be accessed anywhere. Config files were plain text with a specific format like:
VariableName {Path} > Value<EOL Mark>
although you could use XML with the same result. The Path is an identifier within the structure (for example: mdl.rcn.rcn(1).actEng). Something like this is definitely useful for Saturne, but it would require a lot of rewriting. Also, my interface was more complex than the Saturne GUI, even for non-CFD (classic engineering) software, just for one "big" element (a universal heat exchanger, or a chemical reactor / furnace zone), so applied to CFD it would require lots of windows. To make the GUI more usable, I used "elevation": many variables were copied from lower to upper levels in the document-view interface, making them accessible to the user without digging too deep into the tree view (elevated items had a light-green background to distinguish them). The benefit is absolute control over the global structures, which eliminates the problem of accessing settings/results from anywhere.
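To make the format concrete, here is a minimal standalone sketch of parsing one such line (illustration only, not Saturne code; the names are invented):

Code: Select all

#include <stdio.h>

/* Parse a line such as "ActEng {mdl.rcn.rcn(1).actEng} > 1.2e8";
   returns 1 on success, 0 on a malformed line. */
static int
parse_cfg_line(const char *line, char *name, char *path, char *value)
{
  return sscanf(line, " %63[^ {] {%127[^}]} > %63s", name, path, value) == 3;
}

int
main(void)
{
  char name[64], path[128], value[64];
  if (parse_cfg_line("ActEng {mdl.rcn.rcn(1).actEng} > 1.2e8",
                     name, path, value))
    printf("%s at %s = %s\n", name, path, value);
  return 0;
}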
An intermediate approach that you could use is a global configuration class, fully stored for the case in XML (or my name/path/value format) and partially accessible from the GUI. The quickest workaround is to add one GUI parameter to the solver settings table to replace the hard-coded maximum of 10000 iterations.

The other major thing is related to solver settings and the GUI. Having used Saturne many times, I figured out that two things must be available:
1. Individual min/max limits for all fields.
2. Individual relaxation for all fields.
This is much more important than starting with UPWIND and switching to SOLU later; it's of primary importance. Would you please add these parameters to the GUI? I have my user functions, but many users would benefit from the possibility of just setting them in the GUI. Example default relaxations are 0.1...0.5 (0.3).
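For reference, what my user functions do is along these lines; a sketch for cs_user_parameters() (the field name "temperature" and the numeric values are just examples):

Code: Select all

cs_field_t *f = cs_field_by_name("temperature");

/* Individual min/max clipping limits for this field */
cs_field_set_key_double(f, cs_field_key_id("min_scalar_clipping"), 273.15);
cs_field_set_key_double(f, cs_field_key_id("max_scalar_clipping"), 2000.);

/* Individual relaxation factor for this field */
cs_equation_param_t *eqp = cs_field_get_equation_param(f);
eqp->relaxv = 0.3;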

I also want to briefly mention another issue, so as not to start a full topic (sorry, I don't have time now). When you couple solid/fluid on an inflated boundary (prism layer in the fluid zone), the program gives lots of joining errors and diverges in the omega solver (partly my mistake: I forgot to relax omega, but that's not the root cause). The joiner cannot couple the first layer of prisms with the relatively thick tetras in the solid, and the settings do not help. You can reproduce this on any finned-tube case with internal coupling and a non-conformal mesh at the interface. The remedy is to use a conformal mesh (no need to join), but what if such a mesh cannot be built due to meshing issues? Please check it if you have some spare time (for example, take a 20 mm O.D. pipe, 1 mm fins with a 3 mm fin step, and a non-conformal mesh with inflation in the fluid zone).
Yvan Fournier
Posts: 4254
Joined: Mon Feb 20, 2012 3:25 pm

Re: Controlling maximum number of iterations

Post by Yvan Fournier »

Hello,

Replacing all settings in the code with a tree would be a huge undertaking, and to avoid possible performance issues, we will keep specific structures in many places. But the XML file is transformed into a simple tree when read, and if we can make this editable (which requires work due to the way this structure is optimized), that would be similar to what you describe.
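For illustration, querying that tree from user code already looks like this (the exact path depends on the setup file):

Code: Select all

/* The XML setup file is read into the global cs_glob_tree;
   nodes can be looked up by slash-separated path. */
cs_tree_node_t *tn = cs_tree_get_node(cs_glob_tree,
                                      "numerical_parameters");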

Regarding the relaxation factors, I need to check with colleagues (using the issue tracker on GitHub might be better to follow through with suggestions).

Regarding the joining issues, I am not sure where they come from without a visualization or diagram. But in any case, if you have a mix of fine boundary cells and a curvature with coarse tangential refinement, the joining algorithm will fail (the theory elements in the documentation can explain this, as we try to halt the algorithm before it risks tangling some elements).
In that situation, if possible, it is better to add the boundary layer after joining. We also have a few possible improvements to our boundary layer insertion, for which the handling of side surfaces and the robustness fallbacks are not quite complete.

Regards,

Yvan
Antech
Posts: 207
Joined: Wed Jun 10, 2015 10:02 am

Re: Controlling maximum number of iterations

Post by Antech »

Hello, thanks for the info. I hope there will be more settings in your new XML and solver control approach. It's good that Saturne keeps developing, unlike almost all other software (I mean real development, not version number changes or killing the interface with black-and-white "flatness").
Maybe some features could be taken from the CFX style. We consider it the best available program for general aerodynamics and simple gas combustion (yes, it's proprietary and expensive, and it was effectively dropped around 2010, but the approach is great in many aspects, and the solver is super-stable). CFX uses the following data model. On-disk storage is binary files; case data is stored in preprocessor, results, solver definition and intermediate results (backup) files, any of which may be read back into the preprocessor.
The solver naturally uses its own data structures; we know nothing about them because it's not open, but there seems to be a kind of tree structure inside it.
Besides the binary format, there are text files called CCL (CFX Command Language), an old proprietary variation on the XML style. CCL is used to export and import part of, or the whole of, the case data except the mesh, but it's only used by the preprocessor. It's an analog of your XML case files, so here you are already similar to CFX. Putting the mesh + case in one file (as in CFX) is not really needed.
When it comes to the solver, it's a different story. It does not take any text files, only binary (definition) files, but there is a great add-on called the Solver Manager (GUI). It can read the definition file while the solver is running and show a tree-view structure of the case data. Many parameters are editable (not all), and some new ones can be inserted into the tree via a menu. This tree view in the Solver Manager is a representation of the case data (a kind of XML visualization, in modern words), and it seems to be what you are going to introduce. After variables are changed in the Solver Manager, they are written to a temporary file that is read by the solver on the fly, applied to the calculation process, and then stored in the results file (yes, you can open that with the Solver Manager too). The Solver Manager also handles monitors, which can be applied to points, surface and volume named selections (not just points).
I think this is the best you can implement in Saturne. It does not require a full replacement of the solver data structures. You already have the XML + preprocessor GUI. If you add a kind of configuration structure and an editor for it in the existing or an additional GUI, plus a runtime Solver Manager, it will be a significant improvement. It would replace the simple control_file with the case XML files. The editor opens the XML, you change, for example, a boundary temperature for ignition or the iteration number for results writing, then you save the XML and the solver re-reads it on the fly and updates the calculation. You could use a separate config file or add sections to the case XML; extend the preprocessor GUI or add a separate Solver Manager; it will be significant functionality progress either way.
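For comparison, the current control_file only covers a few on-the-fly commands. As far as I know, the syntax handled by cs_control.c looks like this (written into "control_file" in the run directory while the solver runs; it picks the commands up during the run):

Code: Select all

max_time_step 200
checkpoint_time_step 150
flush

Here max_time_step sets when to stop, checkpoint_time_step forces a checkpoint, and flush forces log/output flushing. A Solver Manager would generalize this to arbitrary settings.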

Regarding mesh inflation layers and joining: I attached a picture with the mesh cross-section for the finned tube. There are the tube wall, the fins and the air volume with an inflation layer (marked with different colours). As I understand it, Saturne can turn an internal named selection into an internal wall (boundary insertion in mesh preprocessing) but cannot build inflation layers. Is there a function I'm not aware of? I cannot find layer parameters such as the first layer thickness and the number of layers under Boundary insertion in the Saturne GUI.
As for how the joining errors look: I don't have the joining files ready (I didn't keep the non-conformal variants), but if you look at the joining.case in ParaView, you will see that most of the inflation (air) and tetra (fin metal) faces are joined, while there are some "holes". The joiner cannot understand the "flat" pyramids of the first inflation layer contacting the tube fin metal tetras. Maybe it's an algorithm limit that cannot be fixed. The mesh itself is fine; CFX runs on it.
Attachments
Example of mesh with layers