Re: [AMBER-Developers] MPI_HOME

From: Scott Brozell <sbrozell.rci.rutgers.edu>
Date: Thu, 3 Mar 2011 23:29:02 -0500

Hi,

On Wed, Mar 02, 2011 at 08:09:15AM -0500, Jason Swails wrote:
>
> I had looked into this before, and my conclusion was that MPI_HOME is not
> really necessary as long as cpp is not responsible for locating and
> inserting mpif.h. For Fortran files that have the preprocessor directive
>
> #include "mpif.h"
>
> instead of just
>
> include "mpif.h"
>
> the lack of MPI_HOME proves fatal to the build process. I don't think any such
> directives exist anymore, though, as Professor Case went through and changed
> all #include "mpif.h" to include "mpif.h".

Yes, now they are all include "mpif.h".
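
To see why the plain Fortran include removes the dependence on MPI_HOME,
here is a quick throwaway test (just a sketch, nothing from the Amber tree);
the mpif90 wrapper supplies its own include path, so no -I flag is needed:

# throwaway sketch, not an Amber source file
cat > probe_mpif.F90 << 'EOF'
program probe
   implicit none
   include 'mpif.h'
   integer :: ierr
   call MPI_Init(ierr)
   call MPI_Finalize(ierr)
end program probe
EOF
# builds with no -I$MPI_HOME/include; the wrapper knows where mpif.h lives
mpif90 -o probe_mpif probe_mpif.F90
# only the old "#include" form would have needed something like
#   mpif90 -I$MPI_HOME/include ...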

> However, I still think it's useful to have the defined MPI_HOME. I think
> what we should actually do is define the MPI compilers as
> $(MPI_HOME)/bin/mpif90, mpicc, mpiCC, etc.

I agree that it might be useful to be able to specify an alternative
mpi to the one that is first in one's path. However, the vast majority
of installers won't need that ability, so the default should be to use
whatever is in the path. The configure option could easily be modified
to support alternative mpis:
configure -mpi=/my/great/mpi bla bla

Thus, I suggest that if MPI_HOME is not needed then we remove it completely.
And if someone wants alternative mpis then we add that configure
command-line option.
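
Something like the following in configure would be enough (a rough sketch
only; the -mpi= option and the variable names here are hypothetical, not
what the current script does):

#!/bin/sh
# hypothetical sketch of the option handling, not the actual Amber configure
mpi_home=''                        # empty => use whatever is first in PATH
for arg in "$@"; do
    case "$arg" in
        -mpi=*) mpi_home="${arg#-mpi=}" ;;
    esac
done
if [ -n "$mpi_home" ]; then
    MPIF90="$mpi_home/bin/mpif90"  # e.g. configure -mpi=/my/great/mpi ...
    MPICC="$mpi_home/bin/mpicc"
else
    MPIF90=mpif90
    MPICC=mpicc
fi
echo "using MPIF90=$MPIF90 and MPICC=$MPICC"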

> A number of systems have multiple MPIs installed (especially Macs, whose
> Developer package comes pre-installed with a Fortran-disabled version that
> cannot work for Amber), and this is an effective way of avoiding the need to
> define PATH in a special order.

An industrial-strength solution is the environment modules package:
http://modules.sourceforge.net/
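
For example (the module names are site-specific; just an illustration of the idea):

# illustrative only; the module names depend on what a site provides
module avail mpi                  # list the installed MPI stacks
module load mvapich/1.1-pgi       # puts that stack's mpif90/mpicc first in PATH
which mpif90                      # confirm which wrapper the build will pick up
module switch mvapich/1.1-pgi openmpi/1.4   # swap stacks without editing PATH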

scott

> > On Mon, Feb 28, 2011 at 12:17:28PM -0500, Tyler Luchko wrote:
> > > Is the MPI_HOME environment variable necessary in the configure script?
> > > It appears that it is only used to add a directory to the include path.
> > > However, MPI compilers should know where the MPI headers are. Setting
> > > MPI_HOME=. works for me though I have not tried building CUDA PMEMD nor
> > > compiling on a Cray XT5.
> > >
> >
> > It is also used in pmemd_cu_includes.
> > Note that MPI_HOME=. is not a good test since some mpif90's, etc.,
> > make symbolic links in the current directory:
> > $ ~/amber/qa/amber mpif90 -show
> > ln -s /usr/local/mpi/mvapich-1.1-fixes-pgi/include/mpif.h mpif.h
> > pgf90 -noswitcherror -fPIC -L/usr/lib64 -Wl,-rpath-link
> > -Wl,/usr/local/mpi/mvapich-1.1-fixes-pgi/lib/shared
> > -L/usr/local/mpi/mvapich-1.1-fixes-pgi/lib/shared
> > -L/usr/local/mpi/mvapich-1.1-fixes-pgi/lib -lmpichf90nc -lmpichfarg -lmpich
> > -L/usr/lib64 -Wl,-rpath=/usr/lib64 -libverbs -libumad -lpthread -lpthread
> > -lrt
> > rm -f mpif.h
> >
> > I'm attempting more testing, but it would be nice to hear from the
> > pmemd cuda people.

_______________________________________________
AMBER-Developers mailing list
AMBER-Developers.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber-developers
Received on Thu Mar 03 2011 - 20:30:03 PST