error running in parallel (runs in serial)

Posted: Sat Aug 16, 2014 9:03 pm
by lovedaypeter
Hello,

I set up the CFDStudy tutorial case. It runs in serial (I haven't looked at the results yet), but in parallel it produces this error:

"
SIGSEGV signal (forbidden memory area access) intercepted!

Call stack:
1: 0x7f24b0ed6d1b <PMPI_Comm_size+0x4b> (libmpi.so.1)
2: 0x7f24b1ffb962 <_SCOTCHdgraphInit+0x72> (libptscotch.so)
3: 0x7f24b363d991 <cs_partition+0x63e1> (libsaturne.so.0)
4: 0x7f24b33cbc05 <cs_preprocessor_data_read_mesh+0x2b5> (libsaturne.so.0)
5: 0x7f24b33c4d45 <cs_preprocess_mesh+0x125> (libsaturne.so.0)
6: 0x7f24b332e2bb <cs_run+0x12b> (libsaturne.so.0)
7: 0x7f24b332e07a <main+0x14a> (libsaturne.so.0)
8: 0x7f24b2cbeec5 <__libc_start_main+0xf5> (libc.so.6)
9: 0x4008c9 <> (cs_solver)
End of stack
"

The output file shows it was invoking Scotch at the time. What is the real problem, and how do I fix it?

Thanks in advance,

Peter

Re: error running in parallel (runs in serial)

Posted: Sat Aug 16, 2014 9:41 pm
by Yvan Fournier
Hello,

Did you install PT-Scotch or Scotch with the code, or did you use a packaged version? I recommend avoiding precompiled versions of Scotch, as some build options may differ. Also, if you have a packaged version of Scotch but not PT-Scotch, Code_Saturne will try to use it (doing all the partitioning on 1 processor, which is not optimal on a cluster, but OK on a workstation), but this has not been tested much recently, so you may have hit a real bug there.
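
If you want to check whether the installed libptscotch matches the MPI library you actually run with, a small standalone test along the lines of the sketch below can help (the file name, compile line, and error handling here are my own assumptions, not something shipped with the code). Your stack trace fails in MPI_Comm_size called from SCOTCH's dgraph initialization, and a PT-Scotch built against a different MPI than the one you run with tends to crash in the same place.

/* Sanity-check sketch: initialize and free a PT-Scotch distributed graph
 * using the MPI library found at run time.  Compile with something like
 *   mpicc check_ptscotch.c -lptscotch -lscotch -lptscotcherr -o check_ptscotch
 * (library names and order depend on your installation). */
#include <stdio.h>
#include <mpi.h>
#include <ptscotch.h>

int main(int argc, char *argv[])
{
    SCOTCH_Dgraph dgraph;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* The original stack trace fails inside this call
       (dgraphInit -> MPI_Comm_size). */
    if (SCOTCH_dgraphInit(&dgraph, MPI_COMM_WORLD) != 0) {
        fprintf(stderr, "rank %d: SCOTCH_dgraphInit failed\n", rank);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    SCOTCH_dgraphExit(&dgraph);

    if (rank == 0)
        printf("PT-Scotch initialized and freed cleanly.\n");

    MPI_Finalize();
    return 0;
}

If this small program also crashes under mpirun with the packaged PT-Scotch, the problem is in the Scotch/MPI combination rather than in Code_Saturne itself.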

In the GUI, under "performance tuning", you can change the partitioning options, so you can at least test a partitioning algorithm other than Scotch (for example, Morton curve-based partitioning) before reinstalling anything.

Regards,

Yvan

Re: error running in parallel (runs in serial)

Posted: Sat Aug 16, 2014 9:54 pm
by lovedaypeter
Hi,

Thanks for the answer. I'm not sure whether I installed Scotch or PT-Scotch... I told the setup file to download and install Scotch.

I changed to ParMETIS and the calculation runs... so for me it is OK; I'm not picky about using METIS or Scotch ;)

Regards,

Peter