Hello
I am using version 2.1.5. I am trying to call a subroutine in which MPI communication is used; I just want to test whether I can use a parallelized subroutine. When I start the calculation, there is a fatal error:
********************************************************************************************************************************
/home/chengan/Saturne-calculation/magnetic_Ra_105/RESU/20130412-1049/src_saturne/pair_impair.f90:4.9:
USE MPI
1
Fatal Error: Can't open module file 'mpi.mod' for reading at (1): No such file or directory
*********************************************************************************************************************************
Could someone give me some suggestions?
Thanks a lot.
about the use of MPI
Attachments:
- compile.log (14.01 KiB) Downloaded 230 times
- pair_impair.f90 (429 Bytes) Downloaded 198 times
- runcase.log (1.65 KiB) Downloaded 208 times
Re: about the use of MPI
Hello,
The answer is quite simple: do not use MPI directly from Fortran in Code_Saturne; only use the wrappers in cs_parall.h (and add your own wrappers if necessary).
This is done for several reasons, mainly related to the build system:
[*] The code is linked with a C or C++ (not Fortran) compiler, adding the libraries required for Fortran (this is because there are no matching macros in autoconf to do the opposite, and in some cases, such as static builds with the MED library, we need all of C, Fortran, and C++). This does not play well with libtool (used by the Autotools) when using Fortran MPI wrappers.
[*] More visibly to users, there are now 3 ways of defining Fortran bindings for MPI: the old (Fortran 77-style) bindings, the incomplete Fortran 90 bindings, and the new bindings, which are much cleaner but require very recent Fortran compilers and MPI libraries.
As we are trying to slowly do more things in C and less in Fortran, we are not planning on adding MPI bindings to Fortran. So you should check the user subroutines for examples (probably in usproj.f90, but as version 2.1 is obsolete, I am not going to check).
Regards,
Yvan
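As a hedged illustration of the advice above (not checked against version 2.1.5, and the exact wrapper names may differ between versions), a user subroutine can stay free of direct MPI calls by relying on the `parall` module: `irangp` gives the MPI rank of the current process (or -1 in a serial run), and wrapper subroutines such as `parcpt` (global sum of an integer counter) and `parsom` (global sum of a real) hide the underlying MPI reductions. The subroutine name and the local values below are placeholders for illustration:

```fortran
subroutine my_parallel_test

  use parall   ! provides irangp, nrangp and the parallel wrappers

  implicit none

  integer          :: ncells_local
  double precision :: vol_local

  ! Rank-local values (placeholders standing in for real per-rank data)
  ncells_local = 100 + irangp
  vol_local    = 1.d0

  ! Global reductions through the wrappers instead of direct MPI calls;
  ! irangp.ge.0 is true only when running in parallel.
  if (irangp.ge.0) then
    call parcpt(ncells_local)
    call parsom(vol_local)
  endif

  ! Print only once: irangp.le.0 covers both rank 0 and serial runs.
  if (irangp.le.0) then
    write(*,*) 'total cells: ', ncells_local, ' total volume: ', vol_local
  endif

end subroutine my_parallel_test
```

This keeps the user code linkable by the C/C++ toolchain described above, since no Fortran MPI module is ever referenced.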
Re: about the use of MPI
Hello
Thank you for your reply, but I do not understand how to use the wrappers in cs_parall.h. Are there any examples?
Regards
Chengan
-
Re: about the use of MPI
Hello,
chengan.wang wrote: ...
Thank you for your reply. But I do not understand how to use the wrappers in cs_parall.h? If there are some exemples?
...
Yes, check my last paragraph (usproj.f90).
Regards,
Yvan
Re: about the use of MPI
Hello Yvan,
Finally I made it work. I use 'use parall' and 'irangp' directly, without any reference to MPI. I chose 4 processes. When I run 1 iteration, there is no problem, but when I increase the number of iterations, for example to 10, it shows errors like:
***********************************************************************************************
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpiexec.openmpi has exited due to process rank 0 with PID 8182 on
node chengan-System-Product-Name exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpiexec.openmpi (as reported here).
--------------------------------------------------------------------------
solver script exited with status 1.
Error running the calculation.
Check code_saturne log (listing) and error* files for details.
*************************************************************************************************
In the error file, it shows
*************************************************************************************************
SIGSEGV signal (forbidden memory area access) intercepted!
Call stack:
1: 0xb7768400 ? (?)
2: 0x805117f <usclim_+0x23b> (cs_solver)
3: 0xb6651fdf <tridim_+0x263b> (libsaturne.so.0)
4: 0xb6559a19 <caltri_+0x2c59> (libsaturne.so.0)
5: 0xb6553472 <cs_run+0x722> (libsaturne.so.0)
6: 0xb6552c70 <main+0x240> (libsaturne.so.0)
7: 0xb4dc74d3 <__libc_start_main+0xf3> (libc.so.6)
8: 0x8049a91 <> (cs_solver)
End of stack
*****************************************************************************************************
Actually, I only want to parallelize the 'pair_impair.f90' file over 4 processes; the other files use a single process. Could you give me some suggestions?
Thanks a lot.
Chengan
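For reference, a minimal rank-guarded routine in the spirit of what is described above could look like the following. This is a hypothetical sketch, not the attached pair_impair.f90, and it assumes only that `irangp` from the `parall` module holds the MPI rank (or -1 in serial runs):

```fortran
subroutine pair_impair_test

  use parall   ! irangp: MPI rank of this process, or -1 in a serial run

  implicit none

  ! Each rank reports whether its rank number is even (pair) or odd (impair)
  if (irangp.ge.0) then
    if (mod(irangp,2).eq.0) then
      write(*,*) 'rank ', irangp, ' is even (pair)'
    else
      write(*,*) 'rank ', irangp, ' is odd (impair)'
    endif
  else
    write(*,*) 'serial run, no rank'
  endif

end subroutine pair_impair_test
```

A routine like this only reads `irangp` and prints, so it cannot itself cause a segmentation fault; the crash reported above would have to come from elsewhere.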
Attachments:
- pair_impair.f90 (249 Bytes) Downloaded 203 times
Re: about the use of MPI
Hello,
Your routine is very simple and only prints to terminal, without modifying anything.
So the crash is probably due to something else (probably another user subroutine, or possibly a bug).
Regards,
Yvan
Re: about the use of MPI
Hello Yvan
Thank you very much for your reply. Maybe I should try testing my program with another version.
Best regards,
Chengan