PLE
Parallel Location and Exchange

PLE is a library designed to simplify the coupling of distributed parallel computational codes. It is maintained as part of Code_Saturne, EDF's general-purpose Computational Fluid Dynamics (CFD) software, but it may also be used with other tools, and is distributed under a more permissive licence (LGPL instead of GPL).
PLE provides support for two categories of tasks: synchronizing parallel codes at predefined points, and mapping points to meshes in parallel so that variables may be transferred using this mapping.
The ple_coupling_...() functions allow identifying coupled applications and defining the MPI communicators required by the ple_locator_...() functions, as well as providing each of a set of coupled codes with information on the other codes' time steps, convergence status, and other synchronization data at predefined points (usually once per time step).
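As a conceptual illustration (not the actual PLE API), this per-time-step synchronization amounts to each code announcing its status and proposed time step, after which all codes adopt a common value, typically the minimum, so that no code advances past another. The helper below models that reduction in plain C; in PLE, this information travels through the coupling structures, and the function name here is purely illustrative.

```c
#include <stddef.h>

/* Conceptual model of time-step synchronization between coupled
   codes: each code proposes a time step, and all adopt the smallest
   one so that no code advances further in time than another.
   This helper is illustrative and is not part of the PLE API. */
static double
synchronized_time_step(const double *proposed, size_t n_apps)
{
  double ts = proposed[0];
  for (size_t i = 1; i < n_apps; i++) {
    if (proposed[i] < ts)
      ts = proposed[i];
  }
  return ts;
}
```

In an actual coupled run, this reduction is performed collectively over all applications at each predefined synchronization point.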
The ple_locator_...() functions allow mapping points to a mesh in parallel, given serial functions providing this functionality for the associated data structures, then exchanging variables using this mapping.
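To make the idea of point location concrete, the sketch below locates points in a trivial 1-D mesh of segments. PLE generalizes this pattern to distributed 2-D/3-D meshes, delegating the actual geometric tests to the serial callbacks each code supplies for its own mesh structures; all names in this sketch are illustrative.

```c
#include <stddef.h>

/* Locate each point in a 1-D mesh whose element i spans
   [vtx[i], vtx[i+1]]; loc[j] receives the containing element id,
   or -1 if point j lies outside the mesh. This mimics, in the
   simplest possible setting, what a serial location function
   provides for a code's own mesh data structures. */
static void
locate_points_1d(const double *vtx, size_t n_elts,
                 const double *pts, size_t n_pts, int *loc)
{
  for (size_t j = 0; j < n_pts; j++) {
    loc[j] = -1;
    for (size_t i = 0; i < n_elts; i++) {
      if (pts[j] >= vtx[i] && pts[j] <= vtx[i+1]) {
        loc[j] = (int)i;
        break;
      }
    }
  }
}
```

Once such a mapping is established, exchanging a variable reduces to gathering, for each local point, the value attached to the distant element containing it.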
For example, consider the case described in the following figure. Fluid flows along a channel (from left to right), and heat is exchanged at the domain boundaries on the lower side of the channel. In addition, in this example, we add a separate inlet channel, in which we use a periodicity boundary condition to simulate an infinite upstream channel, and base the inlet flow conditions on those along a section inside the inlet domain.
For clarity, we will limit ourselves to this simple case, which allows illustrating the main concepts without adding unneeded complexity.
Let us in addition assume that we assign the first np_inlet MPI processes to the inlet domain, the next np_fluid processes to the fluid domain, and the last np_solid processes to the solid domain. All computational codes are started simultaneously by the MPI runtime environment.
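With this rank layout, each process can derive the application it belongs to from its rank in MPI_COMM_WORLD alone, and pass the result as the color argument of MPI_Comm_split to build its domain-local communicator. The helper below sketches that arithmetic (the function name and the use of a fixed layout are illustrative; PLE's coupling functions can also determine the grouping without hard-coding it).

```c
/* Map a rank in MPI_COMM_WORLD to an application color:
   0 = inlet, 1 = fluid, 2 = solid. In an MPI code, each process
   would then call
     MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &comm_domain);
   so that all ranks of one application share one local communicator.
   Illustrative helper, not part of the PLE API. */
static int
app_color(int world_rank, int np_inlet, int np_fluid, int np_solid)
{
  if (world_rank < np_inlet)
    return 0;                                       /* inlet  */
  else if (world_rank < np_inlet + np_fluid)
    return 1;                                       /* fluid  */
  else if (world_rank < np_inlet + np_fluid + np_solid)
    return 2;                                       /* solid  */
  return -1;                                        /* outside layout */
}
```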
All processes initially share the same MPI_COMM_WORLD communicator. Each code must first build a communicator local to its own domain, which we will call comm_domain, though each tool may of course use its own variable name to refer to it. So finally, each domain can use the following communicators:

- The inlet domain will use its comm_domain communicator (including ranks [0, np_inlet[ of the top communicator). It will also use a communicator for exchanges with the fluid domain, including ranks [0, np_inlet + np_fluid[ of the top communicator.
- The fluid domain will use its comm_domain communicator (including ranks [np_inlet, np_inlet + np_fluid[ of the top communicator). It will also use a communicator for exchanges with the inlet domain, including ranks [0, np_inlet + np_fluid[ of the top communicator, and a different communicator for exchanges with the solid domain, including ranks [np_inlet, np_inlet + np_fluid + np_solid[.
- The solid domain will use its comm_domain communicator (including ranks [np_inlet + np_fluid, np_inlet + np_fluid + np_solid[ of the top communicator). It will also use a communicator for exchanges with the fluid domain, including ranks [np_inlet, np_inlet + np_fluid + np_solid[ of the top communicator.

These steps are shown on the following figure, with collective operations highlighted using frames.
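The rank ranges of these pairwise coupling communicators follow directly from the layout. The sketch below computes the half-open range [start, end[ of top-communicator ranks in each of them (an illustrative helper, not a PLE function); in MPI terms, such a communicator could be built with MPI_Comm_split using a color shared by both domains involved in the exchange.

```c
/* Half-open range [start, end[ of top-communicator ranks belonging
   to the coupling communicator between two adjacent domains, given
   the layout inlet | fluid | solid.
   pair: 0 = inlet-fluid, 1 = fluid-solid. Illustrative helper. */
static void
coupling_range(int pair, int np_inlet, int np_fluid, int np_solid,
               int *start, int *end)
{
  if (pair == 0) {            /* inlet-fluid: inlet and fluid ranks */
    *start = 0;
    *end   = np_inlet + np_fluid;
  }
  else {                      /* fluid-solid: fluid and solid ranks */
    *start = np_inlet;
    *end   = np_inlet + np_fluid + np_solid;
  }
}
```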
Note also that when ending a computation, additional communicators must be freed and the coupling structures finalized and destroyed using the appropriate functions.
We will then locate discretization points from each computational domain relative to its coupled domains, using a ple_locator_t object, and the ple_locator_set_mesh and ple_locator_exchange_point_var functions. These steps are shown on the next figure, with collective operations highlighted using frames. Note that use of a global ple_coupling_mpi_set_t synchronization object is not required, but when there are more than two domains, as here, it is very useful for avoiding deadlocks or crashes due to computations stopping at different time steps.
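The exchange step itself can be pictured as a gather through the location indices established when the mesh was set: each local point receives the value attached to the distant element in which it was located. The fragment below models this with plain arrays in a single process; it is only a conceptual stand-in for what ple_locator_exchange_point_var performs across MPI ranks, and its name is illustrative.

```c
#include <stddef.h>

/* Conceptual model of variable exchange through a locator:
   loc[j] is the index of the distant element in which local point j
   was located (or -1 if unlocated), so local_var[j] receives
   distant_var[loc[j]]. In PLE this gather happens across MPI ranks;
   here both sides live in one process. Illustrative only. */
static void
exchange_point_var_model(const double *distant_var,
                         const int *loc, size_t n_pts,
                         double *local_var)
{
  for (size_t j = 0; j < n_pts; j++) {
    if (loc[j] >= 0)
      local_var[j] = distant_var[loc[j]];
  }
}
```

Unlocated points (loc[j] < 0) are simply left untouched here; a real coupling would decide explicitly how to handle them.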