Re: [AMBER-Developers] PMEMD now built by default

From: Robert Duke <>
Date: Tue, 2 Mar 2010 08:30:26 -0500

Okay, good! Please don't remove the old framework though. I guess I am
wondering about the delta in performance between the two different build
methods. I have been absorbed with other things and not had a lot of
machine access, but have you guys now got the basic amber build so optimized
that this generic build is as fast as the tuned builds? In particular, how
do you handle the issue of all the performance-related defines in pmemd?
One thought I have had would be to include a define in this generic build
that causes it to print a notice that it is a generic, not fully optimized
build. The other thought on the old framework - it was
basically there not only to allow a different level of effort on
optimization, but also because pmemd had to hit a whole bunch of real
supercomputer targets, and frequently these sites would just be interested
in deploying pmemd, not the entire amber package on a given supercomputer.
We still need to maintain that. When are we shipping again? Do I have time
to do a bit of tweaking around some of these issues on the pmemd side?
Regards - Bob
----- Original Message -----
From: "Ross Walker" <>
To: "'AMBER Developers Mailing List'" <>
Sent: Monday, March 01, 2010 11:41 PM
Subject: [AMBER-Developers] PMEMD now built by default

> Hi All,
> I have updated the configure and makefiles to build pmemd as part of the
> regular AMBER build. I have tested this in parallel with gfortran and
> intel, with and without MKL, but this is by no means exhaustive so I would
> appreciate it if people can test this on their systems and see if it
> builds okay. E.g. I have no access to Solaris or OSX.
> If you just want to build pmemd then you can do:
> cd $AMBERHOME/src
> ./configure gnu
> cd pmemd
> make -f Makefile.amber
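
A sketch of the serial recipe quoted above, plus a guess at the parallel
variant. The `-mpi` configure switch and the `pmemd.MPI` output name are
assumptions not confirmed in this message; check `./configure --help` in
your tree before relying on them:

```shell
# Serial build, exactly as described in the message above:
cd $AMBERHOME/src
./configure gnu
cd pmemd
make -f Makefile.amber     # produces pmemd

# Parallel build (ASSUMPTION: configure takes a -mpi switch and the
# result is named pmemd.MPI, as the message implies; verify locally):
cd $AMBERHOME/src
./configure -mpi gnu
cd pmemd
make -f Makefile.amber     # produces pmemd.MPI
```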
> All the old pmemd framework is still there and will remain for the time
> being. Right now one can build serial and parallel using the configure
> script. I have not added cuda support yet. Also note that serial pmemd is
> built and called pmemd while the parallel version is called pmemd.MPI.
> Any problems please let me know.
> All the best
> Ross
> Ross Walker
> | Assistant Research Professor |
> | San Diego Supercomputer Center |
> | Tel: +1 858 822 0854 |
> Note: Electronic Mail is not secure, has no guarantee of delivery, may not
> be read every day, and should not be used for urgent or sensitive issues.
> _______________________________________________
> AMBER-Developers mailing list

Received on Tue Mar 02 2010 - 06:00:02 PST