Hello, thanks for your response. I have no time for a deep analysis of the problem right now, sorry, but using the old gradient reconstruction option is fine for me.
I have another question related to this topic. With both meshes (whole domain with the "All adjacent" gradient option, or partial geometry with the "Non-orthogonal faces threshold" gradient option) the case diverges after some iterations. The target CFL is 1.0 (the actual value is always < 1.5), fan flow rates are quite low, fan pressures are 3000 Pa at the beginning, later lowering to 1800...2800 Pa, and turbulence is RSM SSG.
At some iteration, a rapid pressure ripple appears in some cell and the calculation quickly diverges. The cells are arbitrary; it may even be a cell on the bottom boundary with no features and no resistance areas around, just ordinary volume mesh... The pressure maximum is around 5000...6000 Pa and then jumps to 10^5 Pa in one iteration!
Is this expected with RSM, which is known to have convergence problems, or do I need to change the case settings? The XML file is in the first post of the thread. It looks strange, because there is no visible cause for these pressure ripples with such low velocities at the start of the calculation... Below is a table with the velocity maximum and pressure extrema per iteration.
Code:
======================================================
| Itr (abs) | VelMag Max |     Prs Min |     Prs Max |
|     [---] |      [m/s] |        [Pa] |        [Pa] |
======================================================
|         1 |    0.72942 |     -6021.1 |      6211.2 |
|         2 |     1.5544 |     -6246.7 |      6567.1 |
|         3 |     2.4442 |     -6374.8 |      6763.4 |
|         4 |     3.4138 |     -6234.3 |      6571.7 |
|         5 |     4.7079 |       -6184 |      6515.6 |
|         6 |     6.1844 |     -6173.2 |      6490.8 |
|         7 |     7.8699 |     -6152.3 |      6461.8 |
|         8 |     9.7741 |     -6128.2 |      6420.8 |
|         9 |     11.906 |       -6094 |        6374 |
|        10 |     14.274 |     -6058.2 |      6322.2 |
|        11 |     16.888 |     -6013.7 |      6271.2 |
|        12 |     19.751 |     -5966.8 |      6216.4 |
|        13 |     22.863 |     -5915.1 |        6152 |
|        14 |     26.216 |     -5857.8 |      6077.8 |
|        15 |      29.79 |     -5795.1 |      5994.9 |
|        16 |     33.549 |     -5725.3 |      5904.5 |
|        17 |     37.434 |       -5650 |      5809.4 |
|        18 |     41.351 |     -5566.1 |      5708.3 |
|        19 |     45.167 |     -5477.9 |      5596.3 |
|        20 |       48.7 |     -5378.9 |      5477.2 |
|        21 |     51.705 |     -5274.3 |        5348 |
|        22 |      53.59 |     -5158.2 |        5210 |
|        23 |     53.856 |     -5036.2 |       73912 |
|        24 |     56.805 | -1.1282e+05 |      7400.5 |
|        25 |     63.765 | -1.0855e+05 |       52389 |
|        26 |     65.198 |     -4616.3 |  1.5016e+05 |
|        27 |     57.838 | -1.1346e+06 |  6.8951e+05 |
|        28 |     53.622 | -1.3136e+06 |  1.6121e+06 |
|        29 |     83.346 | -3.5274e+06 |  2.6567e+06 |
|        30 |     2754.5 | -1.5123e+07 |  1.8698e+07 |
======================================================
I also attached a picture of the divergence area. The background is semi-transparent and coloured by pressure; the vectors are coloured by velocity. As you can see, the divergence is confined to just 1-2 cells while the surrounding area is still intact. The pressure in the "diverged cells" reaches 10^6 Pa, although it is in the -600...-500 Pa range in the area around, and velocities are low except in the "diverged cells", where the velocity components reach 42...117 m/s (the background is at ~0.5 m/s at this iteration).
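In case it helps, here is a minimal cs_user_extra_operations sketch of how I could log the runaway cells between outputs. The 5e4 Pa threshold and the message format are my own choices (well above the normal ~6000 Pa range), not anything from the case setup.
Code:
/* Sketch: log every cell whose pressure magnitude exceeds a threshold,
 * to pinpoint where the runaway starts (threshold is arbitrary). */

#include <math.h>
#include "cs_headers.h"

void
cs_user_extra_operations(cs_domain_t  *domain)
{
  const cs_mesh_t *m = domain->mesh;
  const cs_real_t *cell_cen
    = (const cs_real_t *)domain->mesh_quantities->cell_cen;
  const cs_real_t *cvar_pr = CS_F_(p)->val;  /* pressure field values */

  for (cs_lnum_t c_id = 0; c_id < m->n_cells; c_id++) {
    if (fabs(cvar_pr[c_id]) > 5.e4)
      bft_printf("runaway cell %ld at (%g, %g, %g): p = %g Pa\n",
                 (long)c_id,
                 cell_cen[3*c_id], cell_cen[3*c_id + 1], cell_cen[3*c_id + 2],
                 cvar_pr[c_id]);
  }
}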
========================================
The strange thing is that the calculation does not start again, although I opened exactly the same XML from the RESU directory of a successful run (one that diverged later, but started normally). When I switch to k-epsilon it is OK; if I switch back to RSM SSG or EBRSM it fails. It seems the bug is intermittent, and the reason is not the gradient reconstruction option but RSM with this particular mesh (it is around 20 million cells; the calculation runs on a desktop machine with a Xeon 2678v3 CPU). Unfortunately, I have no time to install debug tools and dig deeper into this issue right now... The error message only contains a call stack; two examples:
Code:
SIGTERM signal (termination) received.
--> computation interrupted by environment.
Call stack:
1: 0x7f86f4a68bbe <+0x485bbe> (libsaturne-7.0.so)
2: 0x7f86f4a8eddd <cs_convection_diffusion_tensor+0x10cd> (libsaturne-7.0.so)
3: 0x7f86f4a393fb <cs_balance_tensor+0x4db> (libsaturne-7.0.so)
4: 0x7f86f471b49e <cs_equation_iterative_solve_tensor+0x58e> (libsaturne-7.0.so)
5: 0x7f86f4e70e3f <__cs_c_bindings_MOD_coditts+0x389> (libsaturne-7.0.so)
6: 0x7f86f4c6ea69 <resssg2_+0x3b59> (libsaturne-7.0.so)
7: 0x7f86f4c7caf6 <turrij_+0x35f6> (libsaturne-7.0.so)
8: 0x7f86f4836b91 <tridim_+0x4171> (libsaturne-7.0.so)
9: 0x7f86f469ddf7 <caltri_+0x1e77> (libsaturne-7.0.so)
10: 0x7f86f57859ba <main+0x70a> (libcs_solver-7.0.so)
11: 0x7f86f1f8f555 <__libc_start_main+0xf5> (libc.so.6)
12: 0x400c99 <> (cs_solver)
End of stack
Code:
SIGTERM signal (termination) received.
--> computation interrupted by environment.
Call stack:
1: 0x7fefbc3e5adb <+0x3adb> (mca_btl_vader.so)
2: 0x7fefc1292d2a <opal_progress+0x4a> (libopen-pal.so.6)
3: 0x7fefc31a8005 <ompi_request_default_wait_all+0x225> (libmpi.so.1)
4: 0x7fefc31d874f <PMPI_Waitall+0x9f> (libmpi.so.1)
5: 0x7fefc46b4cc9 <cs_halo_sync_var_strided+0x459> (libsaturne-7.0.so)
6: 0x7fefc4a6d6b8 <cs_matrix_pre_vector_multiply_sync+0x28> (libsaturne-7.0.so)
7: 0x7fefc4aac02b <+0x54602b> (libsaturne-7.0.so)
8: 0x7fefc4aaec52 <cs_sles_it_solve+0x152> (libsaturne-7.0.so)
9: 0x7fefc4a9c5ca <cs_sles_solve+0x28a> (libsaturne-7.0.so)
10: 0x7fefc4a9d824 <cs_sles_solve_native+0x514> (libsaturne-7.0.so)
11: 0x7fefc469f137 <cs_equation_iterative_solve_tensor+0x1227> (libsaturne-7.0.so)
12: 0x7fefc4df3e3f <__cs_c_bindings_MOD_coditts+0x389> (libsaturne-7.0.so)
13: 0x7fefc4bf1a69 <resssg2_+0x3b59> (libsaturne-7.0.so)
14: 0x7fefc4bffaf6 <turrij_+0x35f6> (libsaturne-7.0.so)
15: 0x7fefc47b9b91 <tridim_+0x4171> (libsaturne-7.0.so)
16: 0x7fefc4620df7 <caltri_+0x1e77> (libsaturne-7.0.so)
17: 0x7fefc57089ba <main+0x70a> (libcs_solver-7.0.so)
18: 0x7fefc1f12555 <__libc_start_main+0xf5> (libc.so.6)
19: 0x400c99 <> (cs_solver)
End of stack
The common element is the cs_equation_iterative_solve_tensor subroutine. The error then occurs in different functions called from it: matrix multiplication or "balancing" (sorry, I don't know what that means). So I don't think I will find what causes this error: it looks like it originates somewhere else and here we only see the consequence, as is typical for memory access problems. I will also check whether free memory is a problem.
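For that memory check, a quick Linux-only helper I could call from a user function is to read the VmHWM (peak resident memory) line of /proc/self/status; this is my own sketch, not a code_saturne facility:
Code:
#include <stdio.h>
#include <string.h>

/* Print the per-process peak resident memory (Linux-only). */
static void
log_peak_memory(void)
{
  FILE *f = fopen("/proc/self/status", "r");
  char line[256];

  if (f == NULL)
    return;

  while (fgets(line, sizeof(line), f) != NULL) {
    if (strncmp(line, "VmHWM:", 6) == 0) {  /* high-water mark of RSS */
      printf("%s", line);
      break;
    }
  }
  fclose(f);
}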
Oops! It seems that it simply runs out of memory. The mesh is 22M cells, which is fine for simple turbulence models, but with RSM and its bunch of Rij fields it consumes almost all the memory. After a reboot (I had some problem with KDE) it has now run twice with RSM, but the peak memory usage is almost the full 64 GB the system has. Sorry for the many words; it is 99% certain that the mesh is simply too large for this system with RSM.
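A back-of-envelope estimate of why RSM is so much heavier than k-epsilon here (the per-variable array counts below are my rough guesses, not measured code_saturne numbers):
Code:
#include <stdio.h>

int main(void)
{
  const double n_cells = 22.0e6;                   /* mesh size */
  const double scalar_gb = n_cells * 8.0 / 1.0e9;  /* one double array */

  /* k-epsilon: 2 turbulence variables; RSM: Rij (6 components) + epsilon */
  const double n_turb_keps = 2.0, n_turb_rsm = 7.0;

  /* each solved variable needs at least current + previous values
   * plus a 3-component gradient: ~5 scalar arrays (rough guess) */
  const double arrays_per_var = 5.0;

  printf("one scalar array:      %5.2f GB\n", scalar_gb);
  printf("k-epsilon turbulence: ~%5.1f GB\n",
         scalar_gb * n_turb_keps * arrays_per_var);
  printf("RSM turbulence:       ~%5.1f GB\n",
         scalar_gb * n_turb_rsm * arrays_per_var);
  return 0;
}
Even this rough count gives several extra GB for the turbulence variables alone, before the matrices, halos and solver work arrays are added on top, which fits the observed ~64 GB peak.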
But the question about the divergence (pressure runaway) remains. Maybe I need to tweak the numerical settings? Or make the fan curves "softer"?
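By "softer" I mean something like the sketch below: ramping the zero-flow pressure rise over the first iterations instead of applying the full 3000 Pa from iteration 1. The quadratic fan law and all the names here are mine, for illustration only, not code_saturne API:
Code:
#include <stdio.h>

/* quadratic fan law: dp(q) = dp0 * (1 - (q / q_max)^2) */
static double
fan_dp(double q, double dp0, double q_max)
{
  const double r = q / q_max;
  return dp0 * (1.0 - r*r);
}

int main(void)
{
  const double dp0_final = 3000.0;  /* Pa, target zero-flow pressure rise */
  const double q_max = 1.0;         /* m3/s, cut-off flow rate (example) */
  const int n_ramp = 200;           /* iterations over which to ramp dp0 */

  for (int it = 50; it <= n_ramp; it += 50) {
    const double dp0 = dp0_final * (double)it / n_ramp;  /* linear ramp */
    printf("it %3d: dp at q = 0.5 m3/s: %6.0f Pa\n",
           it, fan_dp(0.5, dp0, q_max));
  }
  return 0;
}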