CS 2.0.1 on OpenSUSE 11.2
Posted: Tue Jun 21, 2011 6:40 pm
Hello,
I've installed the latest release on a server with OpenSUSE 11.2, using the automatic install procedure. For MPI, I pointed the setup file at a LAM/MPI installation already present on the machine (used by NEPTUNE_CFD). For the partitioner, I installed METIS manually and pointed the Code_Saturne setup file at the METIS installation folder.
The first tests I performed were only partially successful:
- Code_Saturne couldn't find and use METIS (and so fell back to the unoptimized internal partitioner). I can't see what mistake I made.
- LAM/MPI failed to boot on remote nodes (of the same cluster the above machine belongs to). NOTE 1: the remote nodes have the same hardware but a different operating system (Fedora). NOTE 2: multiple-processor runs on the local machine were OK. NOTE 3: I have seen LAM/MPI work properly on just such a heterogeneous cluster with NEPTUNE_CFD (provided the master node is NOT the SUSE one!)
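For the METIS issue, one sanity check would be to verify that the prefix given in the setup file actually contains the files configure looks for. A minimal sketch (the `/opt/metis` prefix is just a placeholder; substitute the real installation folder):

```shell
#!/bin/sh
# Sanity check: does the METIS prefix from the Saturne setup file
# actually contain the header and library that configure tests for?
# /opt/metis is a placeholder; substitute the real installation folder.
METIS_PREFIX="${METIS_PREFIX:-/opt/metis}"
for f in include/metis.h lib/libmetis.a; do
    if [ -e "$METIS_PREFIX/$f" ]; then
        echo "found:   $METIS_PREFIX/$f"
    else
        echo "MISSING: $METIS_PREFIX/$f"
    fi
done
```

If either file is reported missing, configure would silently fall back to the internal partitioner.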
Then, I decided to reinstall the code and let the installer automatically build and use Open MPI 1.4.3 instead of LAM/MPI.
Well, Open MPI installed correctly, but I now get an "Error during configure stage of FVM"; the file ...fvm-0.15.2.build/config.log reports the following message: "configure:14128: error: MPI support is requested, but test for MPI failed!"
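To narrow down where the FVM configure check fails, I could try reproducing its MPI test by hand: compile a trivial MPI program with the wrapper compiler the automatic install produced. A sketch (the install prefix below is a guess; substitute the real Open MPI 1.4.3 prefix):

```shell
#!/bin/sh
# Reproduce configure's MPI check by hand: compile a minimal MPI
# program with the Open MPI wrapper compiler.
# /opt/openmpi-1.4.3 is a placeholder prefix; substitute the real one.
MPICC="${MPICC:-/opt/openmpi-1.4.3/bin/mpicc}"
cat > /tmp/mpitest.c <<'EOF'
#include <mpi.h>
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    MPI_Finalize();
    return 0;
}
EOF
if [ -x "$MPICC" ]; then
    "$MPICC" /tmp/mpitest.c -o /tmp/mpitest && echo "MPI compile OK"
else
    echo "mpicc not found or not executable at $MPICC"
fi
```

If this compile fails by hand, the problem is in the Open MPI installation itself rather than in FVM.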
I'd appreciate any advice on how to solve the above issues. Thanks a lot.
Regards,
fabio
EDIT 22/06/2011:
Sorry, Open MPI was NOT correctly installed after all! Its config.log shows messages like "configure: failed program was: confdefs.h". However, the install_saturne.log file doesn't report any error during the MPI configure and install steps.
So, I have a problem with openmpi. Any ideas? Thanks
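In a config.log, the actual compiler or linker error usually sits just above each "failed program was" marker, so pulling those lines out with some context might show what really broke. A sketch (CONFIG_LOG is a placeholder; point it at the config.log in the Open MPI or FVM build directory):

```shell
#!/bin/sh
# Show the compiler/linker errors that precede each "failed program was"
# marker in a config.log. CONFIG_LOG is a placeholder path; point it at
# the config.log in the failing build directory.
CONFIG_LOG="${CONFIG_LOG:-config.log}"
if [ -r "$CONFIG_LOG" ]; then
    grep -n -B 10 "failed program was" "$CONFIG_LOG"
else
    echo "no readable config.log at $CONFIG_LOG"
fi
```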