Maximum number of elements in Saturne
Re: Maximum number of elements in Saturne
And here is the log file for 12 CPUs (1 node).
- Attachments
- nameandcase.txt (566.28 KiB)
Re: Maximum number of elements in Saturne
Hello,
Could you please post the performance.log file I mentioned in my previous post, instead of an execution log?
At least, the execution log shows one thing: who suggested you run with the "--log 0 --logp 0" options for a performance test?
Those options are intended only as a debugging aid, for directing the output of all "listings" to separate terminals or files. Redirecting them all to a single output only makes it unreadable, in addition to multiplying the text output I/O by the number of cores (and, in this mode, probably adding some extra overhead).
This mode is definitely not for production runs or performance benchmarking.
In addition, you still did not tell me whether you checked that your OpenMPI install is built/configured to use InfiniBand correctly.
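As a quick check (a minimal sketch, assuming a standard OpenMPI install with "ompi_info" on your PATH):

   ompi_info | grep btl    # the "openib" BTL should appear if OpenMPI was built with InfiniBand support

If no "openib" component is listed, inter-node traffic most likely falls back to TCP over Ethernet, which would explain poor scaling across nodes.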
If you are unable to check this, please also reinstall Code_Saturne using the cluster's default "mpicc", which should be configured correctly. I recall you needed a more recent version of gfortran than the one on your cluster, but nothing prevents you from using CC=mpicc and FC=<your_gfortran>.
First, did you check the modules available on your cluster, using "module avail"? There seem to be Intel compilers installed, so there may already be a recent enough Intel compiler, or a more recent gfortran with support for iso_c_binding... You also seem to have the choice between OpenMPI and MVAPICH. If you have the patience, you may try both and compare the performance (I am interested in the feedback).
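To make that concrete, here is a minimal sketch of the kind of rebuild meant above; the module names, paths and version placeholders below are assumptions, so adapt them to what "module avail" actually reports on your cluster:

   module avail                                # list the available compiler and MPI modules
   module load openmpi                         # or mvapich2, to compare the two stacks
   cd code_saturne-<version>
   ./configure CC=mpicc FC=<your_gfortran> --prefix=$HOME/opt/code_saturne
   make && make install

If you do compare OpenMPI and MVAPICH, keep everything else (compiler, mesh, partitioning) identical so the timings are comparable.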
But please check all of these options before running any further performance tests, and run with the default "listing" output options (only one rank outputs a log, to a standard file, not to an unbuffered output).
Regards,
Yvan
Re: Maximum number of elements in Saturne
Thanks Yvan,
I switched to the normal listing mode; it was a great idea. The performance is around 80% on 4 nodes (48 CPUs), which is a huge improvement, thank you Yvan. However, when I submitted the same task on 12 nodes, it gave about 60% performance. We used OpenMPI 1.6.4 compiled with GCC 4.6.3, and on this cluster all the MPI libraries are compiled with InfiniBand support. However, we could not compile the METIS library properly with Code_Saturne (I attached the compilation log file), so we are using the SCOTCH library instead. Does this change the performance (I also attached the performance file)? Any recommendation for the installation of this library?
Thank you
- Attachments
- performance.log (18.16 KiB)
- install_saturne.log.gz (336.02 KiB)