Re: [AMBER-Developers] Sending Structs in MPI

From: Scott Brozell <>
Date: Mon, 26 Sep 2011 15:01:05 -0400


This is inaccurate:
> On MPI_Type_struct ... No need to mess
> with aligning / padding anything.

As your web link points out,
there are still compiler-dependent padding and alignment issues;
some of them are hidden inside the MPI implementation,
and some of them may still be visible in the C structs.
So it still makes sense to order struct members from biggest
down to smallest in size. And arrays of structs may still carry
alignment-related costs.

As for the cleanest software engineering approach, it depends:
if your data structures and the data to be communicated
fit naturally into small chunks of logically related info,
then structs and MPI_Type_struct may be the cleanest;
but if you have large chunks of data to send in batches,
such as several quantities for a bunch of atoms, then linear arrays
over atoms of the individual quantities may be not only the most
efficient but also the cleanest.

In addition, you can hide your implementation details
and then cleanly support multiple implementations, i.e.,
a simple, universally correct one and one optimized for some
platform. Then you can later unravel your own layer
and profile to see its cost; ahhh, the joy of computer
programming :)


On Mon, Sep 26, 2011 at 05:11:15PM +0000, Duke, Robert E Jr wrote:
> 3) there is a third option - send the data in sequential arrays.
> Thus, say you logically have several ints and dbls that relate to
> one atom. Well, if you are sending data relating to 100's of atoms,
> you can just send all the ints first, and then all the dbls.
> On the other hand, the cleanest software engineering approach is the one that Ross refers to.
> - Bob
> ________________________________________
> From: []
> Sent: Monday, September 26, 2011 9:35 AM
> To: AMBER Developers Mailing List
> Subject: Re: [AMBER-Developers] Sending Structs in MPI
> Hrmm, I seem to have gotten conflicting advice now; Bob said that
> converting everything to byte streams is costly, you say that sending
> things as structs is non-optimal. Should I convert the integers in my
> structs to doubles for sending? That seems like it might be costly as
> well. Alternatively, I could even devise a way (actually, I have, and
> it's been in the code for a very long time anticipating that I might want
> to do this) to pack doubles of small real numbers into ints and ship 'em
> off for greater precision than floating point conversions would allow.
> Just let me know what you think would give the best performance, and I'll
> keep that in mind as I develop the parallel implementation. Much better
> to get things right in the beginning.

Ross wrote:
> > Take a look at the Paramfit source code in AMBER 11 (not the git tree
> > version unless you go back in time as we have since ripped out all the MPI
> > stuff and just use OpenMP), it will show you exactly how to do this in C.
> > Essentially you just build your own MPI Datatypes containing all the
> > offsets
> > etc and you can send structs around to your heart's content. No need to
> > mess
> > with aligning / padding anything. It works great.

> mpi_data->MPI_atom_struct_type_blocklen,
> mpi_data->MPI_atom_struct_type_disp,
> mpi_data->MPI_atom_struct_type_construct,
> &mpi_data->MPI_atom_struct_type );
> MPI_Type_commit(&mpi_data->MPI_atom_struct_type);

> > Now, should you be sending structs etc? - Probably not for performance
> > reasons, but it can be useful during startup to broadcast parameters etc.
> > The pain is you HAVE TO remember to update the data types etc if you ever
> > modify the structures.

AMBER-Developers mailing list
Received on Mon Sep 26 2011 - 12:30:03 PDT