faster convergence

Questions and remarks about Code_Saturne usage
meb15aa
Posts: 15
Joined: Wed Feb 27, 2019 2:32 pm

faster convergence

Post by meb15aa » Sun Mar 31, 2019 1:01 pm

Hi everyone, I have two questions. Firstly, my current simulation comprises only 3,200 elements, but convergence of both energy and momentum takes 50,000 iterations. Is there any way I can reduce this? I am carrying out a mesh-independence study, and the finer meshes will require smaller time steps and much more computational power.
(I have attached my steady state time step parameters if it helps)
Thank you in advance
Attachments
Screenshot from 2019-03-31 09-00-37.png

meb15aa
Posts: 15
Joined: Wed Feb 27, 2019 2:32 pm

Re: faster convergence

Post by meb15aa » Sun Mar 31, 2019 2:47 pm

Just a quick update: I have had to decrease the time step to 1e-05 to avoid divergence for my finer mesh models, due to a very low y+ (10e-06) at the boundary. As a result, I believe several hundred thousand iterations will now be necessary for convergence. Any suggestions?

Yvan Fournier
Posts: 2625
Joined: Mon Feb 20, 2012 3:25 pm

Re: faster convergence

Post by Yvan Fournier » Sun Mar 31, 2019 8:02 pm

Hello,

Posting more info (as per the forum rules/recommendations) would help get a bigger picture of what is slowing you down. Info on the mesh quality (or even the mesh, as it is very small) would also be useful for starters.

But in any case, such a small y+ is not useful, and it will force you to slow the computation. Having a locally small y+ (due to the velocity being small or zero in re-circulation regions, for example) is OK, but excessive refinement in the wall region is not useful. So the issue is probably related to your mesh.
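(As a rough sanity check before meshing, the first-cell height that yields a given y+ can be estimated from a flat-plate skin-friction correlation. A minimal Python sketch; the flow values U, L, rho and nu below are illustrative assumptions, not values from this thread:)

```python
# Rough first-cell-height estimate for a target y+, using the
# Schlichting flat-plate skin-friction correlation. All flow values
# are illustrative assumptions, not taken from this thread.
import math

def first_cell_height(y_plus, U=10.0, L=0.5, rho=1.2, nu=1.5e-5):
    """Wall-normal height of the first cell giving the target y+."""
    re_x = U * L / nu                  # Reynolds number at distance L
    cf = 0.0592 * re_x ** -0.2         # flat-plate skin-friction coefficient
    tau_w = 0.5 * cf * rho * U * U     # wall shear stress
    u_tau = math.sqrt(tau_w / rho)     # friction velocity
    return y_plus * nu / u_tau         # y = y+ * nu / u_tau

# A wall-function mesh (y+ ~ 30) tolerates much coarser wall cells
# than a low-Reynolds mesh (y+ ~ 1):
print(first_cell_height(30.0))
print(first_cell_height(1.0))
```

(The point of the sketch: the wall-cell height scales linearly with the target y+, so a y+ several orders of magnitude below 1 means wall cells far smaller than the turbulence model needs, and a correspondingly throttled time step.)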

Regards,

Yvan

Antech
Posts: 119
Joined: Wed Jun 10, 2015 10:02 am

Re: faster convergence

Post by Antech » Mon Apr 01, 2019 8:31 am

Hello, meb15aa.
The convergence speed we can expect depends on the formulation. If you need dynamics, you will need a lot of time for realistic meshes (starting from 5...10 million cells) on a workstation, so in that case you need a cluster. I know it sounds like an advert, but I cannot put it another way. We mostly use CFX and, in some cases, Fluent for our work tasks (I also use Saturne for some tasks). I can say that, if you use CFX, you can obtain transient results quite quickly because of its robust solver (time steps may be large). With any SIMPLE-like solver, it's very unlikely that you will get a stable calculation with large time steps on realistic geometry and meshes. In static cases, a local time step saves you time; in dynamics the only physically meaningful option is a constant time step, so you need a robust solver. CFX has a fully coupled aerodynamic solver, but its license is very expensive (for our country). Fluent, which uses the same ANSYS CFD license, has a SIMPLE-like solver, so it's not as stable as CFX and, in some cases, it doesn't even approach Saturne's stability in statics (with high-velocity gas-turbine outlet profiles).

Therefore, if you're on a "standard" SIMPLE-like solver (that is only partially coupled), like in Saturne, it's unlikely that you will reach a fantastic convergence speed: with large time steps the solution will either diverge or take too much time per iteration. At EDF, they use powerful clusters for dynamics... We use a small cluster for statics and dynamics, even with CFX.

To be more practical: for statics, please try my recommendations from your previous topic. For cases with particles, use a static gas-phase flow field when possible (the first run is for the gas phase and static; the second is for the particulate phase and dynamic, with a carefully chosen time step to avoid large uncertainties in particle tracking). For dynamics, select the default gradient-reconstruction scheme (if the mesh is good and there are no convergence problems) and reduce the precision for all equations to 10^-5. Use static results as initial conditions when possible. Consider obtaining or renting a cluster in the future.

Yvan Fournier: sorry for the CFX "advert" here...
I remember: "there's no miracle for a workstation" :)

Yvan Fournier
Posts: 2625
Joined: Mon Feb 20, 2012 3:25 pm

Re: faster convergence

Post by Yvan Fournier » Mon Apr 01, 2019 8:55 am

Hello,

Yes, for unsteady computations there is no miracle solution in Code_Saturne. We should be able to use somewhat larger time steps with CDO, but this is still work in progress. Having stability while still capturing nonlinear physical behavior is tricky.

In any case this underscores the importance of using sufficient resolution for good-quality results, while avoiding very small cells where they are not useful, as they may constrain the time step.

A second tricky case is natural convection, where the CFL and Fourier criteria may be in conflict (as the thermal diffusivity is influenced by the turbulence model and can have local peaks), in which case adapting the time step to "expected" velocities may work best (this is a rare case, and we can only notice it with a full log file).
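(To make the two criteria concrete, here is a minimal Python sketch of the convective and diffusive time-step limits; the cell size, velocity, and effective diffusivity below are illustrative assumptions, not values from this thread:)

```python
# Convective (CFL) and diffusive (Fourier) time-step limits for a cell
# of size dx. All numbers are illustrative assumptions.

def dt_cfl(dx, u, cfl=1.0):
    """Convective limit: dt <= CFL * dx / u."""
    return cfl * dx / u

def dt_fourier(dx, alpha, fo=0.5):
    """Diffusive limit: dt <= Fo * dx**2 / alpha."""
    return fo * dx * dx / alpha

dx = 1e-3      # cell size [m]
u = 0.05       # expected velocity [m/s] (small, as in natural convection)
alpha = 1e-4   # effective thermal diffusivity [m^2/s], turbulence-enhanced

print(dt_cfl(dx, u))                             # convective limit
print(dt_fourier(dx, alpha))                     # diffusive limit
print(min(dt_cfl(dx, u), dt_fourier(dx, alpha))) # binding constraint
```

(With a small expected velocity and a locally peaked diffusivity, the Fourier limit can undercut the CFL one, which is why sizing the time step from "expected" velocities alone can fail in these cases.)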

Regards,

Yvan

Antech
Posts: 119
Joined: Wed Jun 10, 2015 10:02 am

Re: faster convergence

Post by Antech » Tue Apr 02, 2019 7:35 am

Hello.
We should also take into account that, while CFX is very stable, that doesn't mean it produces equally accurate results. In one of our studies we needed to carry out an "academic-like" cyclone investigation, because we found significant differences between various turbulence models. We found that this was because, with strongly swirling flows, one has to use an appropriate turbulence model (RSM in any code or, at least, SST in CFX; I don't know whether it will give good results for cyclones in other codes, because of possible differences in implementation). But, as a side effect, I also found that CFX gave a significantly different (~1.5 times) local maximum tangential velocity in the cyclone annular channel than Fluent and Code_Saturne. I checked the BCs and the selected turbulence models and they were identical, the spatial discretisation for velocity was second order, mesh dependency was also checked and was not the issue, and all programs used the same geometry model. So there may be some uncertainty introduced by the more stable numerical scheme in CFX, although that's only a guess.
