Hi All,
I am getting ready for the release of the AMBER 11 patch that will add
support for running on multiple GPUs, as well as fix a number of
outstanding bugs. I believe what is currently in the git tree is
release-ready code, but I would appreciate it if people could try a wide
range of real-world simulations with it, both on a single GPU and in
parallel, and see if they find any issues.
Currently you need an MPI v2 installation; you can then check out and
build the serial and parallel pmemd.cuda executables with:
cd ~
git clone gitosis@git.ambermd.org:amber.git
export AMBERHOME=~/amber
cd $AMBERHOME/AmberTools/src
./configure -cuda intel
cd ../../src
make cuda
cd ../test
make test.cuda
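Once the serial build passes the test suite, a quick "real world" sanity run
on a single GPU might look something like the sketch below. The &cntrl
settings are purely illustrative example values (not a recommendation), and
prmtop/inpcrd stand in for whatever system you normally run:

# Illustrative single-GPU run - substitute your own topology, coordinates
# and usual simulation settings.
cat > mdin <<EOF
 Short NVT sanity run (example settings only)
 &cntrl
  imin=0, irest=1, ntx=5,
  nstlim=50000, dt=0.002,
  ntc=2, ntf=2, cut=8.0,
  ntb=1, ntt=3, gamma_ln=2.0, temp0=300.0,
  ntpr=1000, ntwx=1000, ntwr=10000,
 /
EOF
$AMBERHOME/bin/pmemd.cuda -O -i mdin -o mdout -p prmtop -c inpcrd \
  -r restrt -x mdcrd

For the parallel build and tests: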
cd $AMBERHOME/AmberTools/src
make clean
./configure -cuda -mpi intel
cd ../../src
make clean
make cuda_parallel
cd ../test
export DO_PARALLEL='mpirun -np 2'
...plus whatever else your MPI installation needs - e.g. mpdboot etc...
make test.cuda.parallel
(The dhfr ntb=2 ntt=3 test case differences are expected. All other
differences should be minor).
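For testing real-world simulations across 2 GPUs, the equivalent run with the
parallel build might look like this - again just a sketch, assuming the
parallel executable is installed as $AMBERHOME/bin/pmemd.cuda.MPI; adjust the
mpirun line for your own MPI setup and GPU assignment:

# Two-GPU run reusing the mdin/prmtop/inpcrd from the single-GPU example.
mpirun -np 2 $AMBERHOME/bin/pmemd.cuda.MPI -O -i mdin -o mdout.2gpu \
  -p prmtop -c inpcrd -r restrt.2gpu -x mdcrd.2gpu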
Any feedback is greatly appreciated.
All the best
Ross
/\
\/
|\oss Walker
---------------------------------------------------------
| Assistant Research Professor |
| San Diego Supercomputer Center |
| Adjunct Assistant Professor |
| Dept. of Chemistry and Biochemistry |
| University of California San Diego |
| NVIDIA Fellow |
| http://www.rosswalker.co.uk | http://www.wmd-lab.org/ |
| Tel: +1 858 822 0854 | EMail:- ross@rosswalker.co.uk |
---------------------------------------------------------
Note: Electronic Mail is not secure, has no guarantee of delivery, may not
be read every day, and should not be used for urgent or sensitive issues.