RE: amber-developers: MPI 2 for Amber 10?

From: Ross Walker <ross.rosswalker.co.uk>
Date: Wed, 18 Oct 2006 10:11:18 -0700

Hi Scott,

I don't think it is as simple to do as this, since the calls are different
depending on whether you want to do the operation in place or not. What is
more, the "in place" convention is not the same between different MPI calls.
E.g.:

FOR MPI_ALLREDUCE
The "in place" option for intracommunicators is specified by passing the
value MPI_IN_PLACE to the argument sendbuf on each task. In this case, the
input data is taken at each task from the receive buffer, where it will be
replaced by the output data.

FOR MPI_REDUCE
The "in place" option for intracommunicators is specified by passing the
value MPI_IN_PLACE to the argument sendbuf at the root. In this case, the
input data is taken at the root from the receive buffer, where it will be
replaced by the output data.

So for an MPI_ALLREDUCE the code would look for example like this:

#ifdef USE_MPI_IN_PLACE
         call mpi_allreduce(MPI_IN_PLACE, reduced_data, nsend, &
               MPI_DOUBLE_PRECISION,mpi_sum,commsander,ierr)
#else
         call mpi_allreduce(reduced_data, tmpbuf, nsend, &
               MPI_DOUBLE_PRECISION,mpi_sum,commsander,ierr)
         do i=1, nsend
            reduced_data(i) = tmpbuf(i)
         end do
#endif

But for MPI_REDUCE it would look like:

# ifdef USE_MPI_IN_PLACE
        if (master) then
          call mpi_reduce(MPI_IN_PLACE,reduced_data,nsend, &
                        MPI_DOUBLE_PRECISION,mpi_sum,0,commsander,ier)
        else
          call mpi_reduce(reduced_data,0,nsend, &
                        MPI_DOUBLE_PRECISION,mpi_sum,0,commsander,ier)
        end if
# else
        call mpi_reduce(reduced_data,tmpbuf,nsend, &
                        MPI_DOUBLE_PRECISION,mpi_sum,0,commsander,ier)
        if (master) &
           reduced_data(1:nsend)=tmpbuf(1:nsend)
# endif

So how do we deal with this based on the module approach you suggested? The
problem I see is that we are replacing an array argument with the
MPI_IN_PLACE constant in some cases (and on some tasks) but not in others.
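For what it's worth, here is a rough sketch of how a wrapper module might
absorb both conventions. The names amber_mpi, amber_allreduce_sum and
amber_reduce_sum are placeholders I am making up for illustration; the
caller passes a single buffer, and the in-place versus scratch-copy choice
is hidden behind one ifdef per reduction pattern:

```fortran
! Hypothetical sketch only: amber_mpi, amber_allreduce_sum and
! amber_reduce_sum are made-up names. Assumes the usual mpif.h interface.
module amber_mpi
   implicit none
   include 'mpif.h'
contains

   subroutine amber_allreduce_sum(buf, n, comm, ierr)
      integer, intent(in)             :: n, comm
      integer, intent(out)            :: ierr
      double precision, intent(inout) :: buf(n)
#ifdef USE_MPI_IN_PLACE
      ! MPI 2: every task reduces directly into its receive buffer.
      call mpi_allreduce(MPI_IN_PLACE, buf, n, &
                         MPI_DOUBLE_PRECISION, MPI_SUM, comm, ierr)
#else
      ! MPI 1 fallback: reduce into scratch, then copy back.
      double precision :: tmp(n)
      call mpi_allreduce(buf, tmp, n, &
                         MPI_DOUBLE_PRECISION, MPI_SUM, comm, ierr)
      buf(1:n) = tmp(1:n)
#endif
   end subroutine amber_allreduce_sum

   subroutine amber_reduce_sum(buf, n, root, comm, ierr)
      integer, intent(in)             :: n, root, comm
      integer, intent(out)            :: ierr
      double precision, intent(inout) :: buf(n)
      integer :: myrank
#ifndef USE_MPI_IN_PLACE
      double precision :: tmp(n)
#endif
      call mpi_comm_rank(comm, myrank, ierr)
#ifdef USE_MPI_IN_PLACE
      ! MPI 2: only the root passes MPI_IN_PLACE; off-root tasks pass
      ! their data as sendbuf (recvbuf is ignored off-root).
      if (myrank == root) then
         call mpi_reduce(MPI_IN_PLACE, buf, n, &
                         MPI_DOUBLE_PRECISION, MPI_SUM, root, comm, ierr)
      else
         call mpi_reduce(buf, 0, n, &
                         MPI_DOUBLE_PRECISION, MPI_SUM, root, comm, ierr)
      end if
#else
      ! MPI 1 fallback: reduce into scratch; only root copies back.
      call mpi_reduce(buf, tmp, n, &
                      MPI_DOUBLE_PRECISION, MPI_SUM, root, comm, ierr)
      if (myrank == root) buf(1:n) = tmp(1:n)
#endif
   end subroutine amber_reduce_sum

end module amber_mpi
```

A call site would then shrink to something like
call amber_allreduce_sum(reduced_data, nsend, commsander, ierr),
so the differing in-place conventions live in exactly one place per call
pattern instead of at every call site.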

Suggestions?

All the best
Ross

/\
\/
|\oss Walker

| HPC Consultant and Staff Scientist |
| San Diego Supercomputer Center |
| Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk |
| http://www.rosswalker.co.uk | PGP Key available on request |

Note: Electronic Mail is not secure, has no guarantee of delivery, may not
be read every day, and should not be used for urgent or sensitive issues.

> -----Original Message-----
> From: owner-amber-developers.scripps.edu
> [mailto:owner-amber-developers.scripps.edu] On Behalf Of Scott Brozell
> Sent: Tuesday, October 17, 2006 20:10
> To: amber-developers.scripps.edu
> Subject: RE: amber-developers: MPI 2 for Amber 10?
>
> Hi,
>
> Ok, just thought of this as I pressed send:
> In this case, if you want an all-or-none MPI_IN_PLACE, then use a macro:
> #define AMBER_MPI_IN_PLACE MPI_IN_PLACE,
>
> call mpi_reduce(AMBER_MPI_IN_PLACE sendbuf, 100, MPI_REAL, MPI_SUM, 0,
>
> Scott
>
> On Tue, 17 Oct 2006, Scott Brozell wrote:
>
> > Hi,
> >
> > On Mon, 16 Oct 2006, Ross Walker wrote:
> >
> > > For Amber 10 we will be requiring a Fortran 95 compiler. Will we
> > > also be requiring an implementation of MPI v2?
> >
> > On Mon, 27 Mar 2006 11:58:01 -0800 Dave indicated that a Fortran 95
> > compiler was a requirement for sander. So Amber 9 requires Fortran 95.
> > We should have put a comment in the Amber 9 manual.
> >
> > On Mon, 16 Oct 2006, Ross wrote, Dave wrote, then Ross wrote:
> >
> > > > > There are some functions of MPI 2 that I would like to use
> > > >
> > > > what are these functions?
> > >
> > > Initially the main ones I would like to use are MPI_IN_PLACE
> > > versions of many MPI v1 commands.
> > >
> > > E.g.
> > > if (master) then
> > >    call mpi_reduce(MPI_IN_PLACE, sendbuf, 100, MPI_REAL, MPI_SUM, 0, commworld, ier)
> > > else
> > >    call mpi_reduce(sendbuf, 0, 100, MPI_REAL, MPI_SUM, 0, commworld, ier)
> > > end if
> >
> > Use of MPI_IN_PLACE may improve efficiency. (I wouldn't be surprised
> > if some or many MPI v1 implementations already have this optimization
> > (in particular, I mean without having to specify MPI_IN_PLACE),
> > especially for reduce operations. Furthermore, I wouldn't be surprised
> > if some or all MPI v2 implementations do not have the MPI_IN_PLACE
> > optimization for all instances mandated by the standard.)
> > Thus, I encourage you to share your profiling data and to make a
> > persuasive case before you optimize.
> >
> > > While messy, this can be worked around with some ifdefs to do both
> > > versions. However, I would ultimately like to be able to use
> > > single-sided messaging operations (mpi_get and mpi_put) as this
> > > makes diagonalization in parallel significantly more efficient.
> > > However, there is no MPI v1 equivalent of this and so it would be
> > > very difficult to make both MPI v1 and MPI v2 compliant versions
> > > of the code.
> >
> > There are a number of approaches. ifdef-ing is perhaps the most
> > familiar, but otherwise does not have much going for it. At the very
> > least, use just one ifdef by hiding the MPI v1 or v2 selection inside
> > the guts of a module. The Trace module is an example of this
> > approach. But here are some details, e.g.:
> >
> > # file.f
> > call mpi_reduce(sendbuf, 0, 100, MPI_REAL, MPI_SUM, 0, commworld, ier)
> > ->
> > use ambermpi
> > ...
> > call ambermpi_mpi_reduce(sendbuf, 0, 100, MPI_REAL, MPI_SUM, 0, commworld, ier)
> >
> >
> > # ambermpi.f
> > module ambermpi
> >
> > #ifdef AMBER_MPI_IN_PLACE
> > logical :: use_mpi_in_place = .true.
> > #else
> > logical :: use_mpi_in_place = .false.
> > #endif
> >
> > subroutine ambermpi_mpi_reduce(sendbuf, ...
> >
> > if ( use_mpi_in_place ) then
> > call mpi_reduce(MPI_IN_PLACE, sendbuf, ...
> > else
> > call mpi_reduce(sendbuf, ...
> > endif
> > end subroutine ambermpi_mpi_reduce
> > end module ambermpi
> >
> > Since this is the 21st century, any Fortran compiler will optimize
> > away the if inside ambermpi_mpi_reduce.
> >
> >
> > Sorry if I have overkilled you on the details, but I agree with
> > Bob Duke that we can and should do a better job of specifying our
> > interfaces.
> > The technique above preserves the mpi v1 interface,
> > hides an implementation detail inside a module,
> > enables compile time control of the guts,
> > and does not reduce efficiency.
> >
> > See my next post with subject
> > interfaces booklist
> > for references.
> >
> >
> > Date: Mon, 27 Mar 2006 11:58:01 -0800
> > From: Scott Brozell <sbrozell.scripps.edu>
> > To: amber-developers.scripps.edu
> > Subject: Re: amber-developers: Re: More sun issues
> >
> > Hi,
> >
> > On Mon, 27 Mar 2006, David A. Case wrote:
> >
> > > On Mon, Mar 27, 2006, Scott Brozell wrote:
> > >
> > > > > > integer :: system_coord_id = 0
> > >
> > > > Agreed, but is it or is it not Fortran standard compliant ?
> > >
> > > It *is* standard-compliant in F95: see p. 139 of Metcalf, Reid
> > > and Cohen, "Fortran 95/2003 Explained."
> > >
> > > This syntax is not required to be supported in Fortran 90. I guess
> > > this means that an F95-compliant compiler is required for sander.
> > > Since we are now 11 years past 1995, this seems like a reasonable
> > > requirement. Other compilers that are called "F90" (such as from
> > > SGI) allow this construct.
> >
> > Mongan> f90: Sun Fortran 95 8.2 2005/10/13
> >
> > Wow, ok; thanks for the language lawyering.
> >
> > Aside: interestingly, this means that every such object is getting
> > initialized even as a local variable.
> >
> > Scott
> >
>
>
Received on Sun Oct 22 2006 - 06:07:08 PDT