Re: [AMBER-Developers] Suggestions for dealing with mpich2-1.2.1p1

From: Jason Swails <jason.swails@gmail.com>
Date: Fri, 16 Apr 2010 00:28:41 -0400

On Fri, Apr 16, 2010 at 12:09 AM, Ross Walker <ross@rosswalker.co.uk> wrote:
> Hi All,
>
> I am trying to address the mpich2-1.2.1p1 issue regarding pmemd (and
> also, I believe, parts of sander). This version of mpich2 does not
> accept aliasing of send and receive buffers unless the MPI-2
> MPI_IN_PLACE argument is used. The fix I propose is:
>
>  use parallel_dat_mod
>
>  implicit none
>
> ! Formal arguments:
>
>  integer               :: atm_cnt
>  double precision      :: vec(3, atm_cnt)
>
> ! Local variables:
>  if ( MPI_VERSION == 2 ) then
>    if ( master ) then
>      call mpi_gatherv(MPI_IN_PLACE, &
>                       vec_rcvcnts(mytaskid), mpi_double_precision, &
>                       vec, vec_rcvcnts, vec_offsets, &
>                       mpi_double_precision, 0, mpi_comm_world, err_code_mpi)
>    else
> ! Prior to MPI-2, one officially cannot have send and receive buffers
> ! that alias each other; however, this has worked fine for gatherv for
> ! many years. It is only the recent error checking in mpich2-1.2.1p1
> ! that has caused problems.
>      call mpi_gatherv(vec(1, atm_offsets(mytaskid) + 1), &
>                       vec_rcvcnts(mytaskid), mpi_double_precision, &
>                       vec, vec_rcvcnts, vec_offsets, &
>                       mpi_double_precision, 0, mpi_comm_world, err_code_mpi)
>    end if
>  else
>    call mpi_gatherv(vec(1, atm_offsets(mytaskid) + 1), &
>                     vec_rcvcnts(mytaskid), mpi_double_precision, &
>                     vec, vec_rcvcnts, vec_offsets, &
>                     mpi_double_precision, 0, mpi_comm_world, err_code_mpi)
>  end if
>  return
>
> However, there are a few problems with this.
>
> 1) I do not know whether MPI_VERSION is always defined in every MPI
> implementation. I suspect it is, but I am not sure.
>
> 2) MPI_IN_PLACE is only defined if this is an MPI-2 implementation.
>
> Any ideas on how best to address this?
>
> I thought about:
>
> #ifndef MPI_VERSION
> #  define MPI_VERSION 1
> #endif
>
> #ifndef MPI_IN_PLACE
> #  define MPI_IN_PLACE 0
> #endif
>
> However, it looks like not all MPI implementations define these as
> preprocessor macros. Some specify them as Fortran 'parameters', in
> which case I do not believe the above will work; see the illustration
> below.
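>
> For example, an mpif.h along these lines (purely illustrative; the
> exact declarations vary between implementations) is invisible to the C
> preprocessor, so "#ifndef MPI_VERSION" would always fire and the
> #define would then shadow the real value:
>
>   ! Illustrative mpif.h fragment -- these are Fortran parameters and
>   ! variables, not cpp macros, so cpp's #ifndef tests never see them.
>   integer MPI_VERSION, MPI_SUBVERSION
>   parameter (MPI_VERSION = 2, MPI_SUBVERSION = 0)
>   integer MPI_IN_PLACE
>   common /mpipriv/ MPI_IN_PLACE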
>
> Ideas?

A very naive proposal: perhaps add an -mpich2 flag to configure that
puts -DMPICH2 into the preprocessor flags (or something more general,
like -mpi2, if we don't want to single out mpich2). Then we could just
put the different calls into an "#ifdef MPICH2/MPI2 ... #else"
structure, as in the sketch below. That way, if any more MPIs crop up
that have this issue, their users can be told to use that flag (and it
can at least be documented upon release that the alternative flag must
be used for mpich2).
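
Concretely, something like this (an untested sketch built from Ross's
snippet above; MPI2 here is the hypothetical symbol configure would
define):

#ifdef MPI2
  if ( master ) then
    ! MPI-2 path: the root gathers in place, so no buffers alias.
    call mpi_gatherv(MPI_IN_PLACE, &
                     vec_rcvcnts(mytaskid), mpi_double_precision, &
                     vec, vec_rcvcnts, vec_offsets, &
                     mpi_double_precision, 0, mpi_comm_world, err_code_mpi)
  else
    call mpi_gatherv(vec(1, atm_offsets(mytaskid) + 1), &
                     vec_rcvcnts(mytaskid), mpi_double_precision, &
                     vec, vec_rcvcnts, vec_offsets, &
                     mpi_double_precision, 0, mpi_comm_world, err_code_mpi)
  end if
#else
  ! Traditional aliased call; accepted everywhere except by the new
  ! error checking in mpich2-1.2.1p1.
  call mpi_gatherv(vec(1, atm_offsets(mytaskid) + 1), &
                   vec_rcvcnts(mytaskid), mpi_double_precision, &
                   vec, vec_rcvcnts, vec_offsets, &
                   mpi_double_precision, 0, mpi_comm_world, err_code_mpi)
#endif

Since everything is resolved at preprocessing time, an MPI-1 build
never even sees the MPI_IN_PLACE symbol.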

People would certainly write to the list asking about it, but I'm
guessing they will regardless (unless a transparent solution can be
found, which is probably what you're holding out for...)
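
For what it's worth, a transparent route might be for configure to try
compiling a trivial probe against the user's MPI and add -DMPI2 only if
that succeeds (a sketch; the file name and the mpif90 wrapper are just
assumptions about how configure would drive it):

! test_in_place.f90: compiles only if this MPI's mpif.h declares
! MPI_IN_PLACE. If "mpif90 -c test_in_place.f90" succeeds, configure
! would append -DMPI2 to the preprocessor flags.
program test_in_place
  implicit none
  include 'mpif.h'
  integer :: dummy
  dummy = MPI_IN_PLACE   ! fails under implicit none on an MPI-1 header
  print *, 'MPI_IN_PLACE value:', dummy
end program test_in_place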

All the best,
Jason

>
> All the best
> Ross
>
> /\
> \/
> |\oss Walker
>
> | Assistant Research Professor |
> | San Diego Supercomputer Center |
> | Tel: +1 858 822 0854 | EMail: ross@rosswalker.co.uk |
> | http://www.rosswalker.co.uk | http://www.wmd-lab.org/ |
>
> Note: Electronic Mail is not secure, has no guarantee of delivery, may not
> be read every day, and should not be used for urgent or sensitive issues.
>
> _______________________________________________
> AMBER-Developers mailing list
> AMBER-Developers@ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber-developers
>



-- 
---------------------------------------
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Graduate Student
352-392-4032
_______________________________________________
AMBER-Developers mailing list
AMBER-Developers@ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber-developers
Received on Thu Apr 15 2010 - 21:30:03 PDT