Functions
template<class Likelihood>
int TasDREAM::MPILikelihoodSend (Likelihood const &likely, int destination, int tag, MPI_Comm comm, int outputs_begin = 0, int outputs_end = -1)
    Send a likelihood to another process in the MPI comm. More...

template<class Likelihood>
int TasDREAM::MPILikelihoodRecv (Likelihood &likely, int source, int tag, MPI_Comm comm, MPI_Status *status = MPI_STATUS_IGNORE)
    Receive a likelihood from another process in the MPI comm. More...

template<class Likelihood>
int TasDREAM::MPILikelihoodScatter (Likelihood const &source, Likelihood &destination, int root, int tag, MPI_Comm comm)
    Split the likelihood across the comm where each rank receives an equal portion of the total outputs. More...
Methods to send/receive DREAM likelihood objects. The syntax mimics the raw MPI_Send and MPI_Recv calls, and the templates require Tasmanian_ENABLE_MPI=ON.
template<class Likelihood>
int TasDREAM::MPILikelihoodSend (Likelihood const &likely,
                                 int destination,
                                 int tag,
                                 MPI_Comm comm,
                                 int outputs_begin = 0,
                                 int outputs_end = -1)
Send a likelihood to another process in the MPI comm.
Works with both the isotropic and anisotropic Gaussian likelihood objects implemented in Tasmanian. The usage is very similar to MPI_Send() and TasGrid::MPIGridSend(). Optionally, only a sub-range of the outputs can be sent.
Template Parameters
    Likelihood       is a Tasmanian likelihood class, currently TasDREAM::LikelihoodGaussIsotropic and TasDREAM::LikelihoodGaussAnisotropic.

Parameters
    likely           is the likelihood to send.
    destination      is the rank of the recipient MPI process.
    tag              is the tag to use for the MPI message.
    comm             is the MPI comm where the source and destination reside.
    outputs_begin    same as in LikelihoodGaussIsotropic::write().
    outputs_end      same as in LikelihoodGaussIsotropic::write().
Note: this call must be mirrored by TasDREAM::MPILikelihoodRecv() on the destination process.
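A minimal sketch of the mirrored send/receive pattern follows. It is not taken from the official Tasmanian examples: the umbrella header name, the (variance, data) constructor, and the default-constructibility of the likelihood are assumptions about the API, and the ranks, tag, and data values are purely illustrative.

#include <vector>
#include <mpi.h>
#include "TasmanianDREAM.hpp" // assumed umbrella header; requires Tasmanian_ENABLE_MPI=ON

void example_send_recv(MPI_Comm comm){
    int me;
    MPI_Comm_rank(comm, &me);

    if (me == 0){ // rank 0 owns the likelihood and ships it to rank 1
        std::vector<double> data = {0.5, 1.5, 2.5}; // hypothetical observed data
        TasDREAM::LikelihoodGaussIsotropic likely(0.1, data); // assumed (variance, data) constructor
        TasDREAM::MPILikelihoodSend(likely, 1, 42, comm); // default outputs_begin/outputs_end send all outputs
    }else if (me == 1){
        TasDREAM::LikelihoodGaussIsotropic likely; // assumed default-constructible; the receive overwrites it
        TasDREAM::MPILikelihoodRecv(likely, 0, 42, comm); // mirrors the send: same tag and comm
    }
}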
template<class Likelihood>
int TasDREAM::MPILikelihoodRecv (Likelihood &likely,
                                 int source,
                                 int tag,
                                 MPI_Comm comm,
                                 MPI_Status *status = MPI_STATUS_IGNORE)
Receive a likelihood from another process in the MPI comm.
Works with both the isotropic and anisotropic Gaussian likelihood objects implemented in Tasmanian. The usage is very similar to MPI_Recv() and TasGrid::MPIGridRecv().
Template Parameters
    Likelihood    is a Tasmanian likelihood class, currently TasDREAM::LikelihoodGaussIsotropic and TasDREAM::LikelihoodGaussAnisotropic.

Parameters
    likely    is the output likelihood; it will be overwritten with the one sent.
    source    is the rank of the process in the MPI comm that issued the send command.
    tag       is the tag used in the MPI send command.
    comm      is the MPI comm where the source and destination reside.
    status    is the status of the MPI_Recv() command.
Note: see TasDREAM::MPILikelihoodSend().
template<class Likelihood>
int TasDREAM::MPILikelihoodScatter (Likelihood const &source,
                                    Likelihood &destination,
                                    int root,
                                    int tag,
                                    MPI_Comm comm)
Split the likelihood across the comm where each rank receives an equal portion of the total outputs.
Splits both the data and the variance across a comm.
Note: this does not use MPI_Scatter(), instead it makes multiple calls to MPILikelihoodSend() and MPILikelihoodRecv().
Template Parameters
    Likelihood     is a Tasmanian likelihood class, currently TasDREAM::LikelihoodGaussIsotropic and TasDREAM::LikelihoodGaussAnisotropic.

Parameters
    source         is the likelihood located on the root rank that will be distributed across the comm; on all other ranks the source will not be referenced.
    destination    is the likelihood where the local portion of the scatter will be stored; the existing likelihood will be overwritten. If the source has fewer outputs than the number of comm ranks, some of the destination likelihoods will be empty.
    root           is the rank that will hold the source likelihood.
    tag            same as in TasDREAM::MPILikelihoodSend().
    comm           is the MPI comm of all processes that need to share a portion of the likelihood.
Note: see TasGrid::MPIGridScatterOutputs() for the way the outputs will be distributed.
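A minimal sketch of a scatter, under the same API assumptions as the send/receive example above (header name, constructor, default-constructibility, and assignability of the likelihood are all assumptions). Only the root needs a meaningful source object, but every rank in the comm must make the call.

#include <vector>
#include <mpi.h>
#include "TasmanianDREAM.hpp" // assumed umbrella header; requires Tasmanian_ENABLE_MPI=ON

void example_scatter(MPI_Comm comm){
    int me;
    MPI_Comm_rank(comm, &me);

    TasDREAM::LikelihoodGaussIsotropic source; // referenced only on the root rank
    if (me == 0){
        std::vector<double> data = {0.5, 1.5, 2.5, 3.5}; // hypothetical observed data
        source = TasDREAM::LikelihoodGaussIsotropic(0.1, data); // assumed (variance, data) constructor
    }

    TasDREAM::LikelihoodGaussIsotropic local; // overwritten with this rank's portion
    TasDREAM::MPILikelihoodScatter(source, local, /*root=*/0, /*tag=*/11, comm);
    // e.g., 4 outputs scattered over 2 ranks should leave each rank with a
    // 2-output likelihood, following the TasGrid::MPIGridScatterOutputs() pattern
}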