[AMBER-Developers] Fwd: [AMBER] Error Compiling pmemd.cuda.MPI (i.e. Multiple GPUs)

From: Jason Swails <jason.swails.gmail.com>
Date: Sun, 25 Mar 2012 11:43:05 -0400

Hello,

As Scott pointed out, it is possible (although in my experience unusual) to
install an MPI in such a way that the include and lib directories are _not_
actually $MPI_HOME/include and $MPI_HOME/lib even though the compiler
wrappers are in $MPI_HOME/bin.

On such "strange" systems, building pmemd.cuda.MPI will always require
hand-editing the config.h file (since not even setting MPI_HOME will fix
it).
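
For concreteness, the hand-edit amounts to pointing the nvcc include flags
at wherever the MPI headers actually live, e.g. (the variable name below is
from my recollection of a stock config.h and the path is made up, so adjust
both to your tree):

    PMEMD_CU_INCLUDES=-I$(CUDA_HOME)/include -I/opt/mpi/real/include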

I think I have a long-term solution that will work with any (correctly)
configured MPI, but I'm not positive that it's fully general. Basically, it
parses the output of "mpicc -show" and dumps any include-path entries
(-I/bridge/to/nowhere) into the NVCC include path. Does anyone know of a
setup where this would _not_ work? Would people expect it to work better
than just adding an include path based on the location of mpif90?
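
A minimal sketch of the idea in Bourne shell (note: Open MPI spells the
flag "-showme", so a real patch would have to try both; the variable names
here are made up):

    # collect every -I flag the MPI wrapper would pass to the compiler
    mpi_flags=`mpicc -show 2>/dev/null || mpicc -showme 2>/dev/null`
    nvcc_inc=''
    for f in $mpi_flags; do
        case "$f" in
            -I*) nvcc_inc="$nvcc_inc $f" ;;
        esac
    done
    # then splice $nvcc_inc into the include flags configure writes
    # for nvcc (e.g. the PMEMD_CU_INCLUDES line in config.h)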

(Note that this _only_ affects pmemd.cuda.MPI, since the rest of Amber uses
mpif90 and mpicc).

Thanks!
Jason

---------- Forwarded message ----------
From: Scott Le Grand <varelse2005.gmail.com>
Date: Sat, Mar 24, 2012 at 12:49 PM
Subject: Re: [AMBER] Error Compiling pmemd.cuda.MPI (i.e. Multiple GPUs)
To: Adam Jion <adamjion.yahoo.com>, AMBER Mailing List <amber.ambermd.org>


MPI is installed in a manner the configure script doesn't recognize. A
long-term fix is needed for this, but in the meantime, what Jason said will
work once you locate the correct mpi.h for whatever MPI (if any) is
installed on your machine.
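
For example, two quick ways to hunt it down (assuming a more or less
conventional layout):

    mpicc -show | tr ' ' '\n' | grep '^-I'    # ask the wrapper where it looks
    find /usr /opt -name mpi.h 2>/dev/null    # or search the usual prefixes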

Scott

-- 
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Candidate
352-392-4032
_______________________________________________
AMBER-Developers mailing list
AMBER-Developers.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber-developers
Received on Sun Mar 25 2012 - 09:00:02 PDT