Blocking access to mesh_input file

finzeo
Posts: 53
Joined: Fri Sep 09, 2022 4:23 pm

Blocking access to mesh_input file

Post by finzeo »

Hi all,

I recently installed code_saturne 8.0-beta on my workplace cluster. I have not been able to complete any run yet: after initialization, cs_solver starts but gets stuck at the stage of reading the mesh_input file (specifically, a file named mesh_input.csm-1271136257-129995.lock appears in the run folder under RESU/). I attach the log files.

What could be the problem? I suspect it is related to the handling of .med files (the format of my mesh file) and HDF5.

Thank you in advance,
Attachments
run_solver.log
(5.69 KiB) Downloaded 45 times
preprocessor.log
(6.01 KiB) Downloaded 53 times
compile.log
(2.31 KiB) Downloaded 41 times
Yvan Fournier
Posts: 4070
Joined: Mon Feb 20, 2012 3:25 pm

Re: Blocking access to mesh_input file

Post by Yvan Fournier »

Hello,

This might be due to MPI-IO locking, or mesh redistribution locking.

In the GUI's "performance settings" section, you can try setting an MPI rank step of 4 or 8 for the Input/Output.

If that is not enough, try serial I/O. If this makes a difference, it means there is an issue with the MPI-IO library on your cluster (this is unfortunately common at high rank counts, but should not occur at 80 MPI ranks).

Otherwise, in the MPI algorithms section, you can also try setting the Crystal Router for MPI_Alltoallv.

We have had similar locking issues on one of our clusters, which appeared mostly at the end of long-running computations, with a combination of Open MPI 4.0 or 4.1 and some version of the MOFED driver... I hope you do not have a buggy MPI stack...

Are other MPI libraries or versions installed on the cluster ? It might be interesting to test with those...
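
If your cluster uses environment modules, a quick way to check is something like the following (the module names below are only illustrative examples, not taken from your machine):

    module avail 2>&1 | grep -i mpi    # list the MPI-related modules installed
    module load openmpi/3.1.3          # switch to another MPI stack

You would then need to reconfigure and rebuild code_saturne against the newly loaded MPI library.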

Best regards,

Yvan
finzeo
Posts: 53
Joined: Fri Sep 09, 2022 4:23 pm

Re: Blocking access to mesh_input file

Post by finzeo »

Hi Yvan,

Thank you for pointing me toward the problem. I see that I don't have PT-SCOTCH / SCOTCH or ParMETIS / METIS enabled, whereas in my previous installation of code_saturne (v6) I did. I don't know whether this is relevant to the problem. If it is, how can I enable them? I installed code_saturne 8 from the git source code, compiling it myself.
Yvan Fournier
Posts: 4070
Joined: Mon Feb 20, 2012 3:25 pm

Re: Blocking access to mesh_input file

Post by Yvan Fournier »

Hello,

The installation method has not changed much between v6.0 and v8.0, at least not regarding PT-Scotch and ParMETIS detection. So configuring with the same install paths for those libraries as in v6.0 should work.
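
For example, when building from source with the Autotools scripts, the relevant configure options look roughly like this (the paths are placeholders for your own installs; check ./configure --help for the exact option names in your version):

    ./configure --prefix=/opt/code_saturne-8.0 \
                --with-hdf5=/path/to/hdf5 \
                --with-med=/path/to/med \
                --with-scotch=/path/to/pt-scotch \
                --with-metis=/path/to/parmetis

The summary printed at the end of configure should then list PT-Scotch and ParMETIS as detected.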

But that does not explain your issue. The built-in (Morton curve) partitioner may lead to lower-quality partitions, but should be just as robust (or more robust) regarding code initialization. Though it is possible that the partitioning with PT-Scotch "almost blocks" while the one with a Morton curve does block.

Did you try the options I suggested ?

Best regards,

Yvan
finzeo
Posts: 53
Joined: Fri Sep 09, 2022 4:23 pm

Re: Blocking access to mesh_input file

Post by finzeo »

Hi Yvan,

I managed to solve the problem (the run stopping while reading the mesh_input file) simply by compiling code_saturne with Open MPI 3.1.3 (it was available on the cluster; previously I was using 4.1.1). After that, I also managed to compile code_saturne with the METIS and SCOTCH libraries; they seem to be recognized.
However, another problem arose: the runs are now extremely slow. I am attaching the relevant files so you can see what the problem may be. Since the original problem was already solved by the change above, I made no other modifications, just in case (I didn't change the MPI rank step or enable the Crystal Router, but I can try those if you think it would be useful).
Attachments
archives.zip
(32.91 KiB) Downloaded 55 times
Yvan Fournier
Posts: 4070
Joined: Mon Feb 20, 2012 3:25 pm

Re: Blocking access to mesh_input file

Post by Yvan Fournier »

Hello,

You seem to be oversubscribing the nodes:

You have 10 cores (20 hardware threads, probably with hyperthreading), but are running 20 MPI ranks * 10 OpenMP threads = 200 threads per node. This can only make things much slower...
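
You can check this directly on a compute node (a generic Linux sketch, nothing code_saturne specific):

    lscpu | grep -E 'Socket|Core|Thread'   # sockets, cores per socket, threads per core
    # total threads launched = MPI ranks per node * OMP_NUM_THREADS
    # here: 20 ranks * 10 threads = 200 threads on ~20 hardware threads (10x oversubscribed)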

In addition, your compiler version is very old (gcc 4.9.4 was released in August 2016), and if I remember correctly, OpenMP support was slower in older versions (higher fork-join latency).

So you should probably use only 1 thread per MPI rank (basically running in pure MPI mode). Also, if you have blocking issues with a recent Open MPI version but not an older one, you should check the options I suggested (and possibly report this to your cluster administrators and ask whether a library update is planned, as the issues we had with some OFED/MOFED versions also caused problems for other (commercial) codes using Open MPI, though it is not certain the issue is the same here).

If you were already using 200 threads per node with Open MPI 4, you should first test it with default options and 20 threads per node (20 MPI ranks * 1 thread per rank).

Best regards,

Yvan
finzeo
Posts: 53
Joined: Fri Sep 09, 2022 4:23 pm

Re: Blocking access to mesh_input file

Post by finzeo »

Yvan,

Thanks! I was able to solve the problem; the runs are now as fast as they should be.
Following your advice, I simply set "export OMP_NUM_THREADS=1" in the script I submit with sbatch.
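
For reference, the relevant part of my submission script now looks roughly like this (the Slurm resource values are specific to my case):

    #!/bin/bash
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=20    # 20 MPI ranks, one per core
    #SBATCH --cpus-per-task=1
    export OMP_NUM_THREADS=1        # one OpenMP thread per rank (pure MPI)
    code_saturne run --param setup.xml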

However, I still don't understand one thing. Specifically, quoting you...
You have 10 cores (20 hardware threads, probably with hyperthreading), but are running 20 MPI ranks * 10 OpenMP threads = 200 threads per node. This can only make things much slower...
I was requesting 20 MPI ranks, but where was I setting 10 OpenMP threads per rank?
Sorry, I have always understood the theory behind MPI well, but not OpenMP's.
I will also look into the other observations you made.
Yvan Fournier
Posts: 4070
Joined: Mon Feb 20, 2012 3:25 pm

Re: Blocking access to mesh_input file

Post by Yvan Fournier »

Hello,

When running with MPI, if you use 20 MPI ranks on a node with 20 cores, you are using 20 processes, each with 1 thread. Actually, I think you have 10 cores plus hyperthreading, but the result is similar.

So you are using all the compute power of the node, as MPI places one process per core (I'll ignore the possibility of processes moving from one core to another, and process pinning options, for simplicity).

When using OpenMP, you run OMP_NUM_THREADS threads per MPI process. So if you want to combine the two, you should use several cores per MPI process (and thus fewer MPI processes). This is useful in some cases, but the best performance is usually obtained in pure MPI mode.
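
As an illustration, on a node with 20 hardware threads (a sketch using a generic mpiexec launch; your run scripts may wrap this differently):

    # pure MPI: 20 ranks * 1 thread each = 20 threads (usually fastest)
    export OMP_NUM_THREADS=1
    mpiexec -n 20 ./cs_solver

    # hybrid MPI+OpenMP: 4 ranks * 5 threads each = 20 threads
    export OMP_NUM_THREADS=5
    mpiexec -n 4 ./cs_solver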

If you use the GUI to launch the code (with a proper post-install / code_saturne.cfg setup), it will show you how many physical cores you are using (the product of MPI processes and OpenMP threads), and set OMP_NUM_THREADS automatically.

Best regards,

Yvan