Hi Jason,
Works fine for me. The files I used to build, along with my environment
config files, are attached.
Here's what I did:
tar xvjf AmberTools-1.4.tar.bz2
tar xvjf Amber11.tar.bz2
cd $AMBERHOME
wget http://ambermd.org/bugfixes/AmberTools/1.4/bugfix.all
patch -p0 < bugfix.all
rm -f bugfix.all
wget http://ambermd.org/bugfixes/11.0/bugfix.all
wget http://ambermd.org/bugfixes/apply_bugfix.x
chmod 755 apply_bugfix.x
./apply_bugfix.x bugfix.all
cd AmberTools/src/
./configure -cuda -mpi intel
cd ../../src
make cuda_parallel
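
For reference, the parts of the environment that matter for the build look
roughly like this (csh sketch; the paths below are placeholders, and the
real values are in the attached .soft and .cshrc):

# Sketch only; see the attached files for the actual values.
setenv AMBERHOME ~/amber11
setenv CUDA_HOME /usr/local/cuda    # placeholder: wherever the toolkit lives
setenv PATH ${CUDA_HOME}/bin:${PATH}
setenv LD_LIBRARY_PATH ${CUDA_HOME}/lib64:${LD_LIBRARY_PATH}
# The Intel compilers and an MPI providing mpicc/mpif90 must also be on PATH.

Then, to run your test case: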
cd ~/
mkdir parallel_fail
cd parallel_fail
tar xvzf ../parallel_fail.tgz
qsub -I -l walltime=0:30:00 -q Lincoln_debug
cd parallel_fail
mpirun -np 2 ~/amber11/bin/pmemd.cuda.MPI -O -p hairpin_0.mbondi2.parm7 \
    -ref hairpin_0.mbondi2.heat.rst7 -c hairpin_0.mbondi2.heat.rst7 < /dev/null
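
(Since -i and -o aren't given, pmemd picks up ./mdin and writes ./mdout by
default.) If you'd rather rerun it as a batch job, something like this
should work; it's a sketch only, and I've left out whatever node/GPU
resource request Lincoln wants:

#!/bin/csh
#PBS -l walltime=0:30:00
#PBS -q Lincoln_debug
# add the node/GPU resource request Lincoln expects here
cd ~/parallel_fail
mpirun -np 2 ~/amber11/bin/pmemd.cuda.MPI -O -p hairpin_0.mbondi2.parm7 \
    -ref hairpin_0.mbondi2.heat.rst7 -c hairpin_0.mbondi2.heat.rst7 < /dev/null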
The output file from my run is attached.
All the best,
Ross
> -----Original Message-----
> From: Jason Swails [mailto:jason.swails.gmail.com]
> Sent: Saturday, December 04, 2010 3:21 PM
> To: AMBER Developers Mailing List
> Subject: [AMBER-Developers] more pmemd.cuda.MPI issues
>
> Hello,
>
> I ran a GB simulation on NCSA Lincoln using 2 GPUs with a standard nucleic
> acid system, and every energy term was ***********. Running in serial,
> all results were reasonable. I've attached the mdin, restart, and prmtop
> files for this error.
>
> All the best,
> Jason
>
> --
> Jason M. Swails
> Quantum Theory Project,
> University of Florida
> Ph.D. Graduate Student
> 352-392-4032
- application/octet-stream attachment: .soft
- application/octet-stream attachment: .cshrc
- application/octet-stream attachment: mdout
- application/octet-stream attachment: mdinfo