PLE (Parallel Location and Exchange) documentation

Introduction

PLE is a library designed to simplify the coupling of distributed parallel computational codes. It is maintained as a part of Code_Saturne, EDF's general purpose Computational Fluid Dynamics (CFD) software, but it may also be used with other tools, and is distributed under a more permissive licence (LGPL instead of GPL).

PLE provides support for two categories of tasks: synchronizing parallel codes at predefined points, and mapping points to meshes in parallel, then transferring variables using this mapping.

PLE Coupling API

The ple_coupling_...() functions allow identifying applications and defining the MPI communicators needed by the ple_locator_...() functions, as well as providing each of a set of coupled codes with information on the other codes' time steps, convergence status, and other synchronization data at predefined points (usually once per time step).
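As an illustration, a code might declare itself and query its coupled partners as in the following minimal sketch. The initial flag value, the info-structure field names, and the application type and name strings used here are assumptions to be checked against ple_coupling.h.

    #include <mpi.h>
    #include "ple_coupling.h"

    /* Declare this code within the set of coupled applications and list
       the other applications' rank ranges.  "CFD" and "FLUID" are
       hypothetical type and name strings used only for this sketch. */
    static ple_coupling_mpi_set_t *
    declare_application(MPI_Comm top_comm, MPI_Comm app_comm)
    {
      ple_coupling_mpi_set_t *cpl_set
        = ple_coupling_mpi_set_create(PLE_COUPLING_NO_SYNC, /* initial sync flag */
                                      "CFD",                /* application type */
                                      "FLUID",              /* application name */
                                      top_comm,
                                      app_comm);

      int n_apps = ple_coupling_mpi_set_n_apps(cpl_set);
      int app_id = ple_coupling_mpi_set_get_app_id(cpl_set);

      for (int i = 0; i < n_apps; i++) {
        if (i == app_id)
          continue;
        ple_coupling_mpi_set_info_t ai = ple_coupling_mpi_set_get_info(cpl_set, i);
        /* ai.app_name, ai.root_rank and ai.n_ranks identify where the
           coupled application lives inside the top-level communicator. */
        (void)ai;
      }

      return cpl_set;
    }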

PLE Locator subset

The ple_locator_...() functions allow mapping points to a mesh in parallel, given serial functions providing this functionality for the associated data structures, then exchanging variables using this mapping.
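A rough sketch of the locator's lifecycle follows. It assumes that a communicator spanning both coupled codes and the distant code's rank range are already known, leaves the longer argument list of ple_locator_set_mesh as a comment, and follows the ple_locator_create argument order of recent PLE versions.

    #include <mpi.h>
    #include "ple_locator.h"

    /* Create a locator for exchanges with a distant code occupying
       n_dist_ranks ranks starting at dist_root_rank of exchange_comm. */
    static ple_locator_t *
    build_locator(MPI_Comm exchange_comm, int n_dist_ranks, int dist_root_rank)
    {
      ple_locator_t *locator
        = ple_locator_create(exchange_comm, n_dist_ranks, dist_root_rank);

      /* ple_locator_set_mesh(locator, ...) is called next, passing:
         - the local mesh structure and the coordinates of the local points
           to be located on the distant code's mesh;
         - the serial "mesh extents" and "locate points" functions for the
           local mesh structure, which PLE calls on whichever rank receives
           points from the distant code. */

      return locator;
    }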

Example

For example, consider the case described in the following figure. Fluid flows along a channel (from left to right), and heat is exchanged at the domain boundaries on the lower side of the channel. In addition, in this example, we add a separate inlet channel, in which we use a periodicity boundary condition to simulate an infinite upstream channel, and base the inlet flow conditions on those along a section inside the inlet domain.

ple_coupling_example_domains.svg
Example coupling case

For clarity, we will limit ourselves to this simple case, which allows illustrating the main concepts without adding unneeded complexity.

Communicators and ple_coupling API.

Let us in addition assume that we assign the first np_inlet MPI processes to the inlet domain, the next np_fluid processes to the fluid domain, and the last np_solid processes to the solid domain. All computational codes are started simultaneously by the MPI runtime environment.
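With this contiguous rank layout, each process can for instance derive its local comm_domain from the top-level communicator with a plain MPI_Comm_split. This is only one possible construction; PLE merely requires that each code provide a communicator covering its own ranks.

    #include <mpi.h>

    /* Split the top-level communicator into per-domain communicators,
       based on the layout described above: ranks [0, np_inlet[ form the
       inlet domain, [np_inlet, np_inlet + np_fluid[ the fluid domain, and
       the rest the solid domain.  np_inlet and np_fluid are assumed known
       to every process. */
    static MPI_Comm
    build_comm_domain(MPI_Comm top_comm, int np_inlet, int np_fluid)
    {
      int rank, color = 2;                 /* 2: solid domain by default */
      MPI_Comm comm_domain;

      MPI_Comm_rank(top_comm, &rank);
      if (rank < np_inlet)
        color = 0;                         /* inlet domain */
      else if (rank < np_inlet + np_fluid)
        color = 1;                         /* fluid domain */

      MPI_Comm_split(top_comm, color, rank, &comm_domain);
      return comm_domain;
    }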

Each domain can thus use the following communicators:

  1. The inlet domain can access the common top-level and local comm_domain communicators (the latter including ranks [0, np_inlet[ of the top communicator). It will also use a communicator for exchanges with the fluid domain, including ranks [0, np_inlet + np_fluid[ of the top communicator.
  2. The fluid domain can access the common top-level and local comm_domain communicators (the latter including ranks [np_inlet, np_inlet + np_fluid[ of the top communicator). It will also use a communicator for exchanges with the inlet domain, including ranks [0, np_inlet + np_fluid[ of the top communicator, and a separate communicator for exchanges with the solid domain, including ranks [np_inlet, np_inlet + np_fluid + np_solid[ of the top communicator.
  3. The solid domain can access the common top-level and local comm_domain communicators (the latter including ranks [np_inlet + np_fluid, np_inlet + np_fluid + np_solid[ of the top communicator). It will also use a communicator for exchanges with the fluid domain, including ranks [np_inlet, np_inlet + np_fluid + np_solid[ of the top communicator. One possible way of building these pairwise exchange communicators is sketched after this list.
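The pairwise exchange communicators can be built from these rank ranges. PLE provides a helper for this purpose (ple_coupling_mpi_intracomm_create); the plain-MPI sketch below is shown only to make the rank ranges explicit, and assumes MPI-3's MPI_Comm_create_group is available.

    #include <mpi.h>

    /* Build a communicator over the contiguous rank range
       [first, first + count[ of top_comm (e.g. inlet + fluid, or
       fluid + solid).  Only the processes inside the range call this. */
    static MPI_Comm
    build_exchange_comm(MPI_Comm top_comm, int first, int count)
    {
      MPI_Group top_group, sub_group;
      MPI_Comm exchange_comm;
      int range[1][3] = {{first, first + count - 1, 1}}; /* first, last, stride */

      MPI_Comm_group(top_comm, &top_group);
      MPI_Group_range_incl(top_group, 1, range, &sub_group);
      MPI_Comm_create_group(top_comm, sub_group, 0, &exchange_comm);

      MPI_Group_free(&sub_group);
      MPI_Group_free(&top_group);
      return exchange_comm;
    }

    /* A fluid-domain process, for example, would build:
         inlet_fluid_comm = build_exchange_comm(top_comm, 0,
                                                np_inlet + np_fluid);
         fluid_solid_comm = build_exchange_comm(top_comm, np_inlet,
                                                np_fluid + np_solid);      */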

These steps are shown in the following figure, with collective operations highlighted using frames.

ple_coupling_example_init.svg
Example initialization and MPI communicators

Note also that when ending a computation, the additional communicators must be freed, and the coupling structures finalized and destroyed using the appropriate functions.
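A minimal tear-down sketch could look as follows. The variable names are those of this example, not part of the PLE API; ple_coupling_mpi_set_destroy and MPI_Comm_free are the calls actually doing the work.

    #include <mpi.h>
    #include "ple_coupling.h"
    #include "ple_locator.h"

    /* Tear-down, mirroring the initialization: locators first, then the
       extra exchange communicators, then the coupling set itself. */
    static void
    finalize_coupling(ple_locator_t **locator,
                      MPI_Comm *exchange_comm,
                      ple_coupling_mpi_set_t **cpl_set)
    {
      ple_locator_destroy(*locator);
      *locator = NULL;

      if (*exchange_comm != MPI_COMM_NULL)
        MPI_Comm_free(exchange_comm);

      ple_coupling_mpi_set_destroy(cpl_set);
    }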

Mesh location, variable exchange, and ple_locator API.

We then locate discretization points from each computational domain relative to its coupled domains, using a ple_locator_t object and the ple_locator_set_mesh and ple_locator_exchange_point_var functions. These steps are shown in the next figure, with collective operations highlighted using frames. Note that the use of a global ple_coupling_mpi_set_t synchronization object is not required, but when there are more than two domains, as here, it is very useful for avoiding deadlocks or crashes due to the computations stopping at different time steps. A sketch of one such synchronized exchange is given at the end of this section.

ple_coupling_example_exchange.svg
Example exchanges and MPI communicators
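For example, a single coupled time step, as seen from one of the codes, could be organized as in the following sketch. The flag names and query functions follow ple_coupling.h, the exchange-direction convention (reverse = 0) should be checked against ple_locator.h, and the two buffers are hypothetical, application-sized arrays.

    #include "ple_coupling.h"
    #include "ple_locator.h"

    /* One synchronized, coupled time step.  Returns 1 if the coupled set
       requests a stop, 0 otherwise. */
    static int
    coupled_time_step(ple_coupling_mpi_set_t *cpl_set,
                      ple_locator_t *locator,
                      double dt,
                      double *dist_vals,   /* values interpolated at distant points */
                      double *local_vals)  /* values received at local points */
    {
      /* Announce this code's status and time step to all coupled codes
         (collective over the top-level communicator). */
      ple_coupling_mpi_set_synchronize(cpl_set, PLE_COUPLING_NEW_ITERATION, dt);

      /* If any coupled application requests a stop, stop as well, so that
         no code is left waiting in a later exchange. */
      int n_apps = ple_coupling_mpi_set_n_apps(cpl_set);
      const int *status = ple_coupling_mpi_set_get_status(cpl_set);
      for (int i = 0; i < n_apps; i++) {
        if (status[i] & PLE_COUPLING_STOP)
          return 1;
      }

      /* Exchange a scalar field through the locator: values defined at the
         distant points located on the local mesh are sent, values at the
         local points are received. */
      ple_locator_exchange_point_var(locator,
                                     dist_vals,
                                     local_vals,
                                     NULL,            /* no indirection list */
                                     sizeof(double),
                                     1,               /* stride: scalar */
                                     0);              /* standard direction */
      return 0;
    }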