Jason Swails wrote:
> On Tue, Apr 13, 2010 at 11:44 AM, Mark Williamson <mjw.sdsc.edu> wrote:
>> We think it is something to do with the latest version of mpich2; we are
>> actively investigating. Jason, what version of mpich2 are you using?
>
> 1.2.1 -- I hadn't thought to check the MPI implementation, as I have
> always used OpenMPI on my Mac, but for convenience I used MPICH2 when
> I recently reconfigured my system. I'm verifying that everything works
> with OpenMPI on Ubuntu right now, and if that works I'll move it over
> to my Mac.
I am seeing this in vanilla AMBER 10 as well :( Here are my exact steps
to reproduce:
tar xfj Amber10.tar.bz2
export AMBERHOME="/server-home/mjw/code/AMBER/amber10"
cd $AMBERHOME ; mkdir exe
cd $AMBERHOME/src/pmemd
export MPI_HOME=/server-home/mjw/code/mpi/mpich2-1.2.1p1
export PATH=$MPI_HOME/bin:$PATH
./configure linux_em64t ifort mpich2 nobintraj
# answer no to MKL
make clean && make install
cd ../../test/gb_rna/
export DO_PARALLEL="mpirun -np 2"
export TESTsander="../../exe/pmemd"
./Run.gbrna
| WARNING: Stack usage limited by a hard resource limit of 536870912 bytes!
| If segment violations occur, get your sysadmin to increase the limit.
Assertion failed in file helper_fns.c at line 337: 0
memcpy argument memory ranges overlap, dst_=0x81f7c0 src_=0x81f7c0 len_=7680
internal ABORT - process 0
rank 0 in job 12120 bunny.sdsc.edu_33162 caused collective abort of all ranks
exit status of rank 0: killed by signal 9
I'm going to see if I can persuade mpich2 to give me an offending line
number within the Fortran code....
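For what it's worth, the assertion in helper_fns.c looks like the stricter
argument checking in recent MPICH2 releases: it typically fires when a
collective is handed the same memory as both its send and its receive
buffer, a pattern older MPI stacks silently copied through. Below is a
minimal C sketch of that pattern (hypothetical buffer names and counts, and
assuming MPI_Allgather; the actual offending call in pmemd may well be a
different collective), together with the MPI_IN_PLACE form that avoids the
overlap:

/* Minimal sketch (hypothetical, not the actual pmemd call) of the usage
 * pattern that trips MPICH2 1.2.x's overlap check: the same buffer passed
 * as both the send and the receive argument of a collective.             */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Arbitrary per-rank count; 960 doubles happen to be the 7680 bytes
     * shown as len_ in the abort message above.                          */
    int n = 960;
    double *buf = malloc((size_t)n * nprocs * sizeof(double));

#ifdef SHOW_THE_ABORT
    /* Overlapping send/recv regions: rank 0's send pointer equals the
     * receive pointer, so MPICH2 1.2.x aborts here with
     * "memcpy argument memory ranges overlap".                           */
    MPI_Allgather(&buf[rank * n], n, MPI_DOUBLE,
                  buf, n, MPI_DOUBLE, MPI_COMM_WORLD);
#else
    /* Standard-conforming form: tell MPI the data is already in place.   */
    MPI_Allgather(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                  buf, n, MPI_DOUBLE, MPI_COMM_WORLD);
#endif

    free(buf);
    MPI_Finalize();
    return 0;
}

If that is indeed the pattern, grepping the pmemd Fortran for collectives
whose send buffer is a slice of the receive buffer may be quicker than
coaxing a line number out of mpich2.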