CS 2.0.1 on OpenSUSE 11.2

All questions about installation
Forum rules
Please read the forum usage recommendations before posting.
Post Reply
Fabio Moretti

CS 2.0.1 on OpenSUSE 11.2

Post by Fabio Moretti »

Hello,
 
I've installed the latest release on a server with OpenSUSE 11.2, using the automatic install procedure. For MPI, I made the setup file point to a LAM/MPI installation already present on the machine (used by NEPTUNE_CFD). For the partitioner, I manually installed METIS and made the Saturne setup file point to the METIS installation folder.
The first tests I performed were only partially successful:
- Saturne could not find and use METIS (and therefore fell back on the unoptimized internal partitioner). I can't see what mistake I made.
- LAM/MPI failed to boot on remote nodes (of the same cluster to which the above machine belongs). NOTE 1: the remote nodes have the same hardware but a different operating system (Fedora). NOTE 2: multiple-processor runs on the local machine were fine. NOTE 3: I have experience of LAM/MPI working properly on such a heterogeneous cluster with NEPTUNE_CFD (provided that the master node is NOT the SuSE one!)
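One thing worth checking for the METIS issue is whether the library and header actually sit where the setup file says they are; detection usually fails if the layout is not what the build expects. A minimal sketch, assuming a hypothetical install prefix `/opt/metis` (adjust to your actual folder):

```shell
# Hypothetical prefix: replace /opt/metis with the folder the setup file
# points at. Older METIS source trees keep the header under Lib/ rather
# than include/, so check both.
ls /opt/metis/lib/libmetis.a       # the static library
ls /opt/metis/include/metis.h      # the header
ls /opt/metis/Lib/metis.h          # alternative header location in older trees
```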
 
Then, I decided to reinstall the code and automatically install and use openmpi 1.4.3 instead of lam/mpi.
OpenMPI installed correctly, but I then get an "Error during configure stage of FVM"; the file ...fvm-0.15.2.build/config.log reports the following message: "configure:14128: error: MPI support is requested, but test for MPI failed!"
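The "test for MPI failed" line is only a summary; the compiler invocation and error that actually failed are recorded a little earlier in the same config.log. A quick way to pull them out (the path prefix is truncated in the report above, so adjust it to your actual build tree):

```shell
# Show the 20 lines of context before the summary error, where configure
# logs the failing compile/link command and its diagnostics.
grep -n -B 20 "test for MPI failed" config.log

# List every failed test program configure recorded.
grep -n "failed program was" config.log
```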
 
I'd appreciate any advice on how to solve the above issues. Thanks a lot.
Regards,
fabio
 
EDIT 22/06/2011: 
Sorry, OpenMPI was NOT correctly installed! Its config.log file shows messages like "configure: failed program was: confdefs.h". However, the install_saturne.log file does not report any error during the MPI configure and install steps.
So, I have a problem with openmpi. Any ideas? Thanks
Yvan Fournier

Re: CS 2.0.1 on OpenSUSE 11.2

Post by Yvan Fournier »

Hello,
Did you configure the code (FVM and NCS) using --with-mpi=... or using CC=mpicc? I am not sure what the automated installer uses, but if you go into the build directories and check the top of the config.log files, you will see. Passing CC=mpicc (with an absolute path to the correct MPI compiler wrapper) is less elegant, but it is actually more robust and safer in the case of multiple MPI installs on your machine.
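The two configuration styles mentioned above can be sketched as follows (the OpenMPI prefix `/opt/openmpi-1.4.3` is an example; substitute the path of your actual install):

```shell
# Option 1: tell configure where MPI lives.
./configure --with-mpi=/opt/openmpi-1.4.3

# Option 2: build everything through the MPI compiler wrapper; giving the
# absolute path avoids picking up the wrong wrapper when several MPI
# implementations are installed.
./configure CC=/opt/openmpi-1.4.3/bin/mpicc
```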
If you re-run configure (with the adjusted options), then make && make install in the FVM and then the NCS build directories, things should be better.
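The rebuild sequence, sketched for one build directory (the directory name is a placeholder for your actual build tree, and you should re-add whatever other options the installer originally passed to configure):

```shell
# Repeat in the FVM build directory first, then in the NCS one.
cd fvm-0.15.2.build
./configure CC=mpicc    # plus your other original configure options
make && make install
```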
 
If things still fail, your config.log probably contains detailed information which can help understand the issue, so you may post it or send it to me.
 
Best regards,
  Yvan
Post Reply