
Controlling maximum number of iterations

Posted: Mon Sep 15, 2025 10:05 am
by Antech
Hello. It's an old question, but it has arisen again, so I needed to cope with it somehow...
In some situations, such as the start of a calculation or convergence problems, Saturne performs a huge number of linear solver iterations (10000+). This is impractical when we just need to pass through this period rather than converge all the way to the "classic" 10^-5 tolerance. The calculation becomes so long that it simply cannot be performed in a realistic time.
One can use large tolerances, but:
1. It requires an additional run with the high tolerance.
2. The program will not automatically converge to 10^-5 when possible and cut the linear iterations short otherwise.
3. It does not guarantee, for example, that the solver will converge to a 0.1 error.
So I prefer to limit the number of linear iterations to 100...300. In older versions there was a GUI option for this, but it has been removed now that the solver settings are more complex.
I tried to experiment with setting the number of iterations in a user routine. The calculation was just the temperature field (frozen flow). For testing I set multigrid for temperature and limited the coarse solver to 100 iterations (the default is 10000) in cs_multigrid_set_solver_options. As a result, the first iteration lasted "forever" instead of ~30 min with the defaults, so I just stopped the process. Another problem is that, even if this did give a positive result, the user setting would override Saturne's automatic solver selection, which is not optimal.
Currently, I use another simple method: I set _n_max_iter_default to 300 in cs_sles_param.c and recompile. It gives exactly the required effect, but it's hard-coded and the same for all fields. Would you please add this variable (_n_max_iter_default) to the GUI? If I patch it myself, the change will be lost with every new version.
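
To illustrate the kind of per-field override I would like to avoid (since it bypasses the automatic solver selection), something like the following could be placed in the cs_user_linear_solvers user function. This is only a sketch: the cs_sles_it_define signature is written from memory, and the "temperature" field name and the 300-iteration cap are just examples.

Code:

/* Sketch only: force one field to a flexible conjugate gradient solver
   capped at 300 iterations (field name and cap are examples). */
#include "cs_headers.h"

void
cs_user_linear_solvers(void)
{
  cs_sles_it_define(cs_field_id_by_name("temperature"),
                    NULL,          /* name: not needed when a field id is given */
                    CS_SLES_FCG,   /* flexible conjugate gradient */
                    0,             /* polynomial preconditioning degree */
                    300);          /* maximum number of iterations */
}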

Re: Controlling maximum number of iterations

Posted: Mon Sep 15, 2025 2:01 pm
by Yvan Fournier
Hello,

Which version of the code are you using, and which linear solver are you using with multigrid (assuming you are using multigrid as a preconditioner, not as a solver)?

The main issue was that we were using multigrid as a preconditioner for a simple PCG solver, which does not guarantee convergence when the preconditioner has small fluctuations in its behavior, so switching to a flexible conjugate gradient was necessary. Since then, using the default settings, this issue has mostly disappeared on the computations I am aware of.

We do have convergence issues on some meshes at some time steps, related especially to difficult convergence of some turbulence variables.

For this, after too many (200 to 400) Gauss-Seidel or Jacobi iterations, using default settings, the code switches to GMRES.
This usually helps, though it is not always sufficient.

In any case, I concur that low-level settings for linear solvers are getting too complex and unwieldy, so I plan on switching to a tree/dictionary type approach for solver settings, which would make it possible to change one setting at a time (such as the maximum number of iterations) instead of redefining everything in user-defined functions (and possibly pass some settings as key/value strings through the GUI). This would certainly help in your case.
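
To fix ideas (this is purely hypothetical, no such syntax exists yet), a key/value setting passed through the GUI might look something like:

Code:

temperature/solver/n_max_iter = 300

so that only this entry of the settings tree would be overridden, with everything else left to the automatic selection.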

Although this would also make things simpler for my own tests on linear solvers (and I have been doing quite a few of those lately for GPU performance tuning), I can't guarantee this will be in 9.1 in December. I definitely hope I can do this by 9.2 (June 2026), and it might make it into 9.1, but I can't promise it, as we also have other features of the code we need to finish before that.

Regards,

Yvan

Re: Controlling maximum number of iterations

Posted: Mon Sep 15, 2025 3:40 pm
by Antech
Thanks for your answer.

I use Saturne 8.0.4 now.

I have never written any CFD code myself, so I don't know the details. My idea was to take a user example and modify the number of iterations. The code is as follows:

Code:

cs_multigrid_t *mg;

mg = cs_multigrid_define(-1, "TempC", CS_MULTIGRID_V_CYCLE);

cs_multigrid_set_coarsening_options(mg,
                                    3,    /* aggregation_limit (default 3) */
                                    0,    /* coarsening_type (default 0) */
                                    10,   /* n_max_levels (default 25) */
                                    30,   /* min_g_cells (default 30) */
                                    0.95, /* P0/P1 relaxation (default 0.95) */
                                    0);   /* postprocessing (default 0) */

cs_multigrid_set_solver_options
  (mg,
   CS_SLES_PCG,  /* descent smoother type (default: CS_SLES_PCG) */
   CS_SLES_PCG,  /* ascent smoother type (default: CS_SLES_PCG) */
   CS_SLES_PCG,  /* coarse solver type (default: CS_SLES_PCG) */
   100,          /* n max cycles (default 100) */
   2,            /* n max iter for descent (default 2) */
   10,           /* n max iter for ascent (default 10) */
   100,          /* n max iter coarse solver (default 10000) */
   0,            /* polynomial precond. degree descent (default 0) */
   0,            /* polynomial precond. degree ascent (default 0) */
   0,            /* polynomial precond. degree coarse (default 0) */
   -1.0,         /* precision multiplier descent (< 0 forces max iters) */
   -1.0,         /* precision multiplier ascent (< 0 forces max iters) */
   1);           /* requested precision multiplier coarse (default 1) */
As I understand it, multigrid solves first on the coarse mesh, then interpolates and solves on the finer meshes... I selected the default solvers (all PCG). The maximum number of coarse-solver iterations was reduced from 10000 to 100. Anyway, this is also not the best way, because it replaces Saturne's internal optimal solver selection (I use the Auto solver settings now).

Regarding variables/GUI. When I was working on my relatively simple programs, I used to follow this rule: I had one big structure/class for the entire program, holding all the data. Subroutines were fed with a pointer to this structure and with pointers to the particular substructures to work on, so everything was accessible from everywhere. Then, in the GUI, exposing all the variables from this big structure was mandatory, so there was zero possibility (barring bugs) of any global variable not being visible/editable from the GUI. For more complex programs I used a configuration: the same kind of structure holding all the numerical and physical settings, such as the reaction list and rate parameters, numbers of iterations, tolerances, etc. This structure was also passed to every function as an argument (pointer), so it could be accessed anywhere. Config files were text with a specific format like:
VariableName {Path} > Value<EOL Mark>,
although you could use XML with the same result. The path is an identifier within the structure (for example: mdl.rcn.rcn(1).actEng). Something like this is definitely useful for Saturne, but it requires a lot of rewriting work. Also, my interface was more complex than the Saturne GUI, even for non-CFD (classic engineering) software, just for one "big" element (universal heat exchanger, chemical reactor / furnace zone), so applied to CFD it would require lots of windows. To make the GUI more usable, I used "elevation": many variables were copied from lower to upper levels in the document-view interface, making them accessible to the user without digging too deep into the tree view (the names had a light-green background to distinguish elevated items). The benefit is absolute control over the global structures, which eliminates the problem of accessing settings/results from anywhere.
An intermediate approach that you could use is a global configuration class, fully stored for the case in XML (or in my name/path/value format) and partially accessible from the GUI. The quickest workaround is to add one GUI parameter to the solver settings table to replace the hard-coded maximum of 10000 iterations.

The other major thing is related to solver settings and the GUI. Having used Saturne many times, I figured out that two things must be available:
1. Individual min/max limits for all fields.
2. Individual relaxation for all fields.
This is much more important than starting with UPWIND and then switching to SOLU; it is of primary importance. Would you please add these parameters to the GUI? I have my user functions (a sketch of what I mean is below), but many users would benefit from the possibility to just set them in the GUI. Example default relaxations are 0.1...0.5 (0.3).
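
For reference, the kind of user-side settings I mean looks roughly like this (a sketch only, e.g. in cs_user_parameters once the fields exist; the keyword names and the relaxv member are written from memory for 8.0, so please check them against the headers):

Code:

/* Sketch only: per-field clipping limits and relaxation for one scalar.
   Keyword names ("min_scalar_clipping", "max_scalar_clipping") and the
   relaxv member are from memory; the field name and values are examples. */
cs_field_t *f = cs_field_by_name("temperature");

cs_field_set_key_double(f, cs_field_key_id("min_scalar_clipping"), 0.);
cs_field_set_key_double(f, cs_field_key_id("max_scalar_clipping"), 2000.);

cs_equation_param_t *eqp = cs_field_get_equation_param(f);
eqp->relaxv = 0.3;  /* example relaxation factor */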

I also want to briefly mention another issue, so as not to start a full topic (sorry, I don't have time now). When you couple solid/fluid on an inflated boundary (prism layer in the fluid zone), the program gives lots of join errors and diverges in the omega solver (partly my mistake: I forgot to relax omega, but that is not the root cause). The joiner cannot couple the first layer of prisms with the relatively thick tetras in the solid, and the joining settings do not help. You can reproduce this on any finned-tube case with internal coupling and a non-conformal mesh at the interface. The remedy is to use a conformal mesh (no need to join), but what if such a mesh cannot be built due to meshing issues? Please check it if you have some spare time (for example, take a 20 mm O.D. pipe with 1 mm fins at a 3 mm fin step, and a non-conformal mesh with inflation in the fluid zone).

Re: Controlling maximum number of iterations

Posted: Mon Sep 15, 2025 7:12 pm
by Yvan Fournier
Hello,

Replacing all settings in the code with a tree would be a huge undertaking, and to avoid possible performance issues, we will keep specific structures in many places. But the XML file is transformed into a simple tree when read, and if we can make this editable (which requires work due to the way this structure is optimized), that would be similar to what you describe.

Regarding the relaxation factors, I need to check with colleagues (using the issue tracker on GitHub might be better to follow through with suggestions).

Regarding the joining issues, I am not sure where they come from without a visualization or diagram. But in any case, if you have a mix of fine boundary cells and a curvature with coarse tangential refinement, the joining algorithm will fail (the theory elements in the documentation can explain that, as we try to halt the algorithm before it risks tangling some elements).
In that situation, if possible, it is better to add the boundary layer after joining. We also have a few possible improvements planned for our boundary layer insertion, for which the handling of side surfaces and the robustness fallbacks are not quite complete.

Regards,

Yvan