Toolkit for Adaptive Stochastic Modeling and Non-Intrusive ApproximatioN: Tasmanian v8.1
MPI Send/Receive/Bcast/Scatter for Sparse Grids

Functions

template<bool binary = TasGrid::mode_binary>
int TasGrid::MPIGridSend (TasmanianSparseGrid const &grid, int destination, int tag, MPI_Comm comm)
 Send a grid to another process in the MPI comm.
 
template<bool binary = TasGrid::mode_binary>
int TasGrid::MPIGridRecv (TasmanianSparseGrid &grid, int source, int tag, MPI_Comm comm, MPI_Status *status=MPI_STATUS_IGNORE)
 Receive a grid from another process in the MPI comm.
 
template<bool binary = TasGrid::mode_binary>
int TasGrid::MPIGridBcast (TasmanianSparseGrid &grid, int root, MPI_Comm comm)
 Broadcast a grid to all processes in an MPI comm.
 
template<bool binary = TasGrid::mode_binary>
int TasGrid::MPIGridScatterOutputs (TasmanianSparseGrid const &source, TasmanianSparseGrid &destination, int root, int tag, MPI_Comm comm)
 Split the grid across the comm where each rank receives an equal portion of the total outputs.
 

Detailed Description

Methods to send/receive TasGrid::TasmanianSparseGrid objects. The syntax mimics the raw MPI_Send and MPI_Recv calls, and the templates require Tasmanian_ENABLE_MPI=ON.
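For orientation, a minimal sketch contrasting a raw MPI point-to-point call with its grid counterpart (the destination rank and tag are hypothetical, and a TasmanianSparseGrid named grid is assumed to already exist on rank 0):

int me, tag = 0;
MPI_Comm_rank(MPI_COMM_WORLD, &me);
std::vector<double> data(8, 0.0);
// raw MPI: send a buffer of doubles from rank 0 to rank 1
if (me == 0) MPI_Send(data.data(), (int) data.size(), MPI_DOUBLE, 1, tag, MPI_COMM_WORLD);
// Tasmanian: the same calling pattern, but the payload is an entire sparse grid
if (me == 0) TasGrid::MPIGridSend(grid, 1, tag, MPI_COMM_WORLD);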

Function Documentation

◆ MPIGridSend()

template<bool binary = TasGrid::mode_binary>
int TasGrid::MPIGridSend(TasmanianSparseGrid const &grid, int destination, int tag, MPI_Comm comm)

Send a grid to another process in the MPI comm.

Send the grid to the destination rank across an MPI comm. The grid is first written to a stream in either binary or ASCII format and then sent with a single MPI_Send() call. Binary mode results in smaller messages and less computational overhead; ASCII mode is provided mostly for debugging purposes.

Template Parameters
binary: defines whether to use binary (true) or ASCII (false) mode; recommended for use with the constexpr constants TasGrid::mode_binary and TasGrid::mode_ascii.
Parameters
grid: is the grid to send.
destination: is the rank of the recipient MPI process.
tag: is the tag to use for the MPI message.
comm: is the MPI comm where the source and destination reside.
Returns
the error code of the MPI_Send() command.

Note: this call must be mirrored by TasGrid::MPIGridRecv() on the destination process.

Example usage, process 0 creates a grid and sends it to process 1:

int me, tag = 0;
MPI_Comm_rank(MPI_COMM_WORLD, &me);
MPI_Status status;
TasGrid::TasmanianSparseGrid grid;
// construct the grid on rank 0 only; the specific grid parameters here are illustrative
if (me == 0) grid.makeGlobalGrid(2, 1, 4, TasGrid::type_level, TasGrid::rule_clenshawcurtis);
if (me == 0) MPIGridSend(grid, 1, tag, MPI_COMM_WORLD);
else if (me == 1) MPIGridRecv(grid, 0, tag, MPI_COMM_WORLD, &status);
// at this line, process 1 has a grid equivalent to that of process 0
// processes with rank 2 and above do nothing, i.e., they have an empty grid
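The transfer mode can also be selected explicitly through the template parameter; continuing the example above, a minimal sketch using ASCII mode for easier debugging:

if (me == 0) MPIGridSend<TasGrid::mode_ascii>(grid, 1, tag, MPI_COMM_WORLD);
else if (me == 1) MPIGridRecv<TasGrid::mode_ascii>(grid, 0, tag, MPI_COMM_WORLD, &status);
// both sides must use the same mode, since it determines the format of the message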

◆ MPIGridRecv()

template<bool binary = TasGrid::mode_binary>
int TasGrid::MPIGridRecv(TasmanianSparseGrid &grid, int source, int tag, MPI_Comm comm, MPI_Status *status = MPI_STATUS_IGNORE)

Receive a grid from another process in the MPI comm.

Receive a grid that has been sent with TasGrid::MPIGridSend(); the received message is reconstructed into a sparse grid object.

Template Parameters
binary: defines whether to use binary (true) or ASCII (false) mode; recommended for use with the constexpr constants TasGrid::mode_binary and TasGrid::mode_ascii.
Parameters
grid: is the output grid; it will be overwritten with the grid sent by the source rank, similar to TasGrid::TasmanianSparseGrid::read().
source: is the rank of the process in the MPI comm that issued the send command.
tag: is the tag used in the MPI send command.
comm: is the MPI comm where the source and destination reside.
status: is the status of the MPI_Recv() command.
Returns
the error code of the MPI_Recv() command.
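Example usage (a minimal sketch with a hypothetical source rank and tag), receiving a grid sent by rank 0 and checking the MPI outcome:

TasGrid::TasmanianSparseGrid grid;
MPI_Status status;
int tag = 11; // must match the tag used by the sender
if (MPIGridRecv(grid, 0, tag, MPI_COMM_WORLD, &status) != MPI_SUCCESS){
    // handle the MPI error here
}
// on success, grid holds the object written by rank 0, as if loaded with read()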

◆ MPIGridBcast()

template<bool binary = TasGrid::mode_binary>
int TasGrid::MPIGridBcast(TasmanianSparseGrid &grid, int root, MPI_Comm comm)

Broadcast a grid to all processes in an MPI comm.

Make the grid on every process in the comm match the grid on the root process. This call uses two MPI_Bcast() calls: one for the grid size (in memory units) and one for the actual grid data. The transfer can be done in either binary or ASCII format; binary results in smaller messages and less computational overhead, while ASCII is provided mostly for debugging purposes.

Template Parameters
binary: defines whether to use binary (true) or ASCII (false) mode; recommended for use with the constexpr constants TasGrid::mode_binary and TasGrid::mode_ascii.
Parameters
grid: is the grid to broadcast across the MPI comm; the grid on the root process will not be modified (i.e., treat as const), while on all other processes the grid will be overwritten, similar to TasGrid::TasmanianSparseGrid::read().
root: is the process that holds the data that needs to be sent across.
comm: is the MPI comm of all processes that need to share the grid.
Returns
the error code of the first failed MPI_Bcast() command (corresponding to either the size or the data message); if MPI_SUCCESS is returned, then both messages were successful.

Example usage, process 0 reads a grid from a file and sends it to all processes:

int me;
MPI_Comm_rank(MPI_COMM_WORLD, &me);
auto grid = (me == 0) ? TasGrid::readGrid("foo") : TasGrid::TasmanianSparseGrid();
MPIGridBcast(grid, 0, MPI_COMM_WORLD);
// at this line, every process has the same grid as if they all read it from "foo"
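Since the return value corresponds to the first broadcast that fails, a minimal error-handling sketch (MPI_Abort is used here purely for illustration):

if (MPIGridBcast(grid, 0, MPI_COMM_WORLD) != MPI_SUCCESS)
    MPI_Abort(MPI_COMM_WORLD, 1); // the grid state on non-root ranks cannot be trusted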

◆ MPIGridScatterOutputs()

template<bool binary = TasGrid::mode_binary>
int TasGrid::MPIGridScatterOutputs(TasmanianSparseGrid const &source, TasmanianSparseGrid &destination, int root, int tag, MPI_Comm comm)

Split the grid across the comm where each rank receives an equal portion of the total outputs.

Split the grid across the ranks in the comm so that each rank receives a grid with the same type, rule, points, etc., but with a subset of the total outputs. The distribution is as close to even as possible; if there are fewer outputs than ranks, some ranks will receive an empty grid.

Note: this does not use MPI_Scatter(), instead it makes multiple calls to MPIGridSend() and MPIGridRecv().

Template Parameters
binary: defines whether to use binary (true) or ASCII (false) mode, see TasGrid::MPIGridSend().
Parameters
source: is the grid located on the root rank to be distributed across the comm; on all other ranks the source will not be referenced.
destination: is the grid where the local portion of the scatter will be stored; the existing grid will be overwritten. If the source has fewer outputs than the number of comm ranks, then some of the destination grids will be empty.
root: is the rank that holds the source sparse grid.
tag: same as in TasGrid::MPIGridSend().
comm: is the MPI comm of all processes that need to share a portion of the grid.

Example usage, rank 0 creates a large grid and scatters it across the comm:

int me, tag = 0;
MPI_Comm_rank(MPI_COMM_WORLD, &me);
auto source = (me == 0) ? TasGrid::readGrid("grid_with_many_outputs") : TasGrid::TasmanianSparseGrid();
TasGrid::TasmanianSparseGrid grid;
MPIGridScatterOutputs(source, grid, 0, tag, MPI_COMM_WORLD);
// at this line, every process holds its portion of the source in grid
// if the comm has 3 ranks,
// then 7 outputs will be split into 3 2 2
// and 2 outputs will become 1 1 empty
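After the scatter, each rank can inspect its local portion; a short sketch, assuming the regular grid API calls TasmanianSparseGrid::empty() and getNumOutputs():

if (grid.empty()){
    // this rank received no outputs (happens when there are fewer outputs than ranks)
}else{
    int local_outputs = grid.getNumOutputs(); // this rank's share of the total outputs
}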