High-Performance Computing

code_saturne is based on a massively parallel architecture, using the MPI (Message Passing Interface) paradigm as the primary level of parallelism, and optionally shared-memory parallelism through OpenMP.
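
As a minimal sketch of this hybrid model (illustrative C, not code_saturne source; the rank-local cell count and loop body are placeholder assumptions), each MPI rank owns a share of the mesh cells, and OpenMP threads loop over that rank-local share:

    /* Minimal hybrid MPI + OpenMP sketch (illustrative only, not
     * code_saturne source). Build with e.g.: mpicc -fopenmp hybrid.c */

    #include <stdio.h>
    #include <stdlib.h>

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
      int provided, rank, n_ranks;

      /* Request thread support compatible with OpenMP regions. */
      MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &n_ranks);

      const int n_cells = 100000;  /* hypothetical rank-local cell count */
      double *u = malloc(n_cells * sizeof(double));

      /* Second level of parallelism: OpenMP threads over local cells. */
      double local_sum = 0.0, global_sum = 0.0;
      #pragma omp parallel for reduction(+:local_sum)
      for (int i = 0; i < n_cells; i++) {
        u[i] = 1.0;               /* placeholder cell update */
        local_sum += u[i];
      }

      /* Primary level of parallelism: reduction across MPI ranks. */
      MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE,
                    MPI_SUM, MPI_COMM_WORLD);

      if (rank == 0)
        printf("sum over %d ranks: %g\n", n_ranks, global_sum);

      free(u);
      MPI_Finalize();
      return 0;
    }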

code_saturne and neptune_cfd are used extensively on HPC machines at different sites:

  • EDF clusters (Intel-based)
  • PRACE machines
    • Archer 2 (EPCC), Jean Zay (IDRIS)
  • DOE machines (through INCITE access)
    • Summit (ORNL)

In the past, they have also been used on the following architectures:

  • IBM Blue Gene L/P/Q series (at EDF, STFC Daresbury, ANL)

Tests with up to several billion cells have been run by STFC Daresbury and IMFT, leading to intensive work with EDF on parallel optimization and debugging.

Work is in progress on porting to GPU, though this is not yet used in production.

Typical production studies use 10 to 50 million cells, with a few in the 200-400 million cell range.

  • First run with over 1 billion cells, by STFC and EDF Energy UK in 2013, on 4 Blue Gene/Q racks
  • Several production runs with over 1 billion cells and 10,000 MPI ranks on EDF clusters

code_saturne is one of the 12 codes selected for the PRACE and DEISA Unified European Application Benchmark Suite (UEABS), and one of two CFD codes in that list, along with ALYA.

Parallelism is based mostly on a classical domain partitioning scheme (using ParMETIS, PT-SCOTCH, or an internal Morton space-filling curve) combined with any current MPI library. Input/output is partition-independent. Both parallelism and periodicity are based on a classical “ghost cell” method. Most operations require only ghost cells sharing faces; the extended neighborhoods used for gradient calculations also require ghost cells sharing vertices.
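
The ghost cell update can be sketched as a generic halo exchange (this is an illustration, not the actual code_saturne API; the halo_t structure, buffer layout, and neighbor lists are assumptions). Each rank sends its boundary-cell values to each neighboring subdomain and receives that neighbor's boundary values into its own ghost cells:

    /* Illustrative ghost-cell (halo) exchange sketch, not the actual
     * code_saturne implementation. Receives are posted first, then
     * sends, and both are completed together. */

    #include <mpi.h>

    /* Hypothetical halo description for one rank. */
    typedef struct {
      int        n_neighbors;   /* number of adjacent subdomains */
      const int *neighbor_rank; /* MPI rank of each neighbor */
      const int *send_count;    /* boundary cells sent to each neighbor */
      const int *recv_count;    /* ghost cells received from each one */
      double   **send_buf;      /* packed boundary-cell values */
      double   **recv_buf;      /* ghost-cell values, per neighbor */
    } halo_t;

    /* After this returns, recv_buf holds the neighbors' boundary-cell
     * values, to be copied into the local field's ghost cells. */
    void halo_exchange(const halo_t *h, MPI_Comm comm)
    {
      MPI_Request req[2 * h->n_neighbors];

      for (int i = 0; i < h->n_neighbors; i++)
        MPI_Irecv(h->recv_buf[i], h->recv_count[i], MPI_DOUBLE,
                  h->neighbor_rank[i], 0, comm, &req[i]);

      for (int i = 0; i < h->n_neighbors; i++)
        MPI_Isend(h->send_buf[i], h->send_count[i], MPI_DOUBLE,
                  h->neighbor_rank[i], 0, comm, &req[h->n_neighbors + i]);

      MPI_Waitall(2 * h->n_neighbors, req, MPI_STATUSES_IGNORE);
    }

Since most operations only need face-sharing ghost cells, such an exchange stays local to a few neighboring subdomains, which is what keeps the scheme scalable to very high rank counts.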
 
Example test case: cross-flow in a tube bundle
  • mesh with a repeatable pattern for weak scaling benchmarks (see the sketch after this list)
  • tested on variants ranging from 12 million to 3.2 billion cells
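
A minimal sketch of how such a repeatable pattern supports weak scaling (the pattern size and per-rank cell count below are hypothetical figures, not the benchmark's actual parameters): the pattern is repeated so the global mesh grows with the rank count while the load per rank stays roughly constant.

    /* Weak-scaling sizing sketch with illustrative numbers only. */

    #include <stdio.h>

    int main(void)
    {
      const long long cells_per_pattern = 3000000; /* hypothetical */
      const long long cells_per_rank    = 100000;  /* hypothetical target */

      for (long long n_ranks = 128; n_ranks <= 32768; n_ranks *= 4) {
        long long n_cells    = n_ranks * cells_per_rank;
        long long n_patterns = n_cells / cells_per_pattern;
        printf("%6lld ranks: %11lld cells (~%lld pattern repetitions)\n",
               n_ranks, n_cells, n_patterns);
      }
      return 0;
    }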

See the presentations from the user meetings for more recent examples.
