Long CPU time in gradients reconstruction

Posted: Tue Oct 29, 2019 12:58 pm
by alen12345
Dear all,

When I perform CFD calculations, I double check my results by using both code_saturne and OpenFOAM.

Considering the same geometry and approximately the same number of cells, code_saturne performs slightly better in the resolution of the linear systems, but a lot of additional time (which is not evident in OpenFOAM) is spent in gradient reconstruction. Here is part of timer_stats.csv:


 iteration, total, mesh processing, checkpoint/restart, post-processing, linear solvers, gradients reconstruction
      20,  2.1939129e+01,  0.0000000e+00,  0.0000000e+00,  5.8790508e-02,  8.0743166e+00,  1.2563782e+01
      21,  2.1283359e+01,  0.0000000e+00,  0.0000000e+00,  5.8800029e-02,  7.4036324e+00,  1.2583527e+01
      22,  2.1091649e+01,  0.0000000e+00,  0.0000000e+00,  5.8853104e-02,  7.2070620e+00,  1.2587694e+01
      23,  2.1059643e+01,  0.0000000e+00,  0.0000000e+00,  5.8795778e-02,  7.1749788e+00,  1.2551120e+01
      24,  2.0471423e+01,  0.0000000e+00,  0.0000000e+00,  5.8830510e-02,  6.2789714e+00,  1.2885422e+01
      25,  2.0188398e+01,  0.0000000e+00,  0.0000000e+00,  5.8818247e-02,  6.1303528e+00,  1.2751858e+01
      26,  2.0765154e+01,  0.0000000e+00,  0.0000000e+00,  5.8970040e-02,  6.0776807e+00,  1.3364816e+01
      27,  1.9669992e+01,  0.0000000e+00,  0.0000000e+00,  6.0362150e-02,  5.4630235e+00,  1.2888543e+01
      28,  1.9488124e+01,  0.0000000e+00,  0.0000000e+00,  5.9289848e-02,  5.3613498e+00,  1.2806488e+01
      29,  1.9651522e+01,  0.0000000e+00,  0.0000000e+00,  5.8751223e-02,  5.6594369e+00,  1.2689682e+01
The linear solvers time decreases gradually as the number of iterations for the velocity decreases, but the time for gradient reconstruction remains constant.

Can anyone give me some insight into this issue?

Thanks in advance.

AN

Re: Long CPU time in gradients reconstruction

Posted: Tue Oct 29, 2019 2:12 pm
by Yvan Fournier
Hello,

Which gradient reconstruction option do you use? Depending on the mesh, the iterative method can be relatively fast, but it becomes slow on meshes with less regular cells (such as tetrahedra).

The least-squares options should be much faster, at least when not using extended neighbourhoods (which improve robustness but are costly).

I am currently trying to improve settings and defaults for gradients, so version 6.1 will hopefully have faster default settings.

In any case, I'm interested in your feedback.

Best regards,

Yvan

Re: Long CPU time in gradients reconstruction

Posted: Tue Oct 29, 2019 2:59 pm
by alen12345
Dear Yvan,

Here is a hopefully accurate comparison of the situation. The mesh is a tetrahedral mesh produced with SMESH, with a maximum aspect ratio of 8:

- Extrapolation:
  - Least squares with extended neighbourhood: 12 s
  - Iterative: 9 s
  - Least squares: immediate divergence

- Neumann:
  - Least squares with extended neighbourhood: 12 s
  - Iterative: 9 s
  - Least squares: 2 s; the simulation is sufficiently stable

I can confirm your suggestion: Neumann with least squares is the fastest. However, in my case, with a tetrahedral mesh with a maximum aspect ratio of 8, extrapolation (second-order gradient calculation) with least squares does not converge.
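For anyone reading along, this choice can also be made outside the GUI, via the user sources. The following is only a sketch assuming the v6.0 C API: `cs_user_parameters` and the global `cs_glob_space_disc` with its `imrgra` member exist in recent versions, but the numeric codes listed in the comment are assumptions and should be checked against the `cs_space_disc_t` documentation of the installed version.

```c
/* Sketch of a cs_user_parameters.c fragment (check against your
 * version's headers and reference examples before use). */
#include "cs_headers.h"

void
cs_user_parameters(cs_domain_t *domain)
{
  /* Gradient reconstruction choice; assumed codes:
     0: iterative,
     1: least squares,
     2/3: least squares with extended neighbourhood,
     4: iterative with least-squares initialization */
  cs_glob_space_disc->imrgra = 1;  /* plain least squares */
}
```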

I would say that I am satisfied with the result!

Please feel free to share any other suggestions or thoughts.

Thanks.

AN

Re: Long CPU time in gradients reconstruction

Posted: Tue Oct 29, 2019 3:39 pm
by Yvan Fournier
Hello,

Thanks for the feedback. With version 6, you might also try "Iterative with least-squares initialization", which is wrongly named and will be renamed in 6.0.1. This mode should really be called "Green-Gauss with least-squares face values" or something of the sort: it uses an initial least-squares gradient to compute face values, which are then used in the Stokes/Green-Gauss-based gradient. It should be a good compromise between the iterative method (also based on the Green-Gauss theorem) and least squares: slightly slower than least squares, but better from a theory standpoint and possibly more consistent, at least for some of the computation stages.

The feedback on extrapolation is also interesting. In version 6.0, that option is actually active only for the least-squares model; I just re-enabled it for the iterative option, but we have no recent case in mind where this option helps, so we may consider dropping it if it is not useful.

Best regards,

Yvan