Hi,
icc version 16.0.3 (gcc version 4.8.5 compatibility)
mvapich2/2.2
pmemd.MPI passes the rxsgld_4rep test with 8, 12, and 16 MPI threads
(4 is not an allowed count).
pmemd.MPI passes the mwabmd test with 8 threads.
pmemd.MPI segfaults on multid_remd with 8 and 16 threads (the test run immediately before rxsgld_4rep).
pmemd.MPI fails badly on cd gbsa_xfin && ./Run.gbsa3, with Etot off by more than 500 kcal/mol:
60c60
< Etot = -789.3474 EKtot = 308.9410 EPtot = -1098.2884
---
> Etot = -254.2740 EKtot = 308.9410 EPtot = -563.2151
These are the only significant pmemd.MPI problems with this combination.
scott
Apr 10 4:46:20pm ruby01.osc.edu 637$ /tmp/amber/test/rxsgld_4rep ./Run.rxsgld
This test case requires 8, 12, or 16 MPI threads!
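For anyone trying to reproduce this, a minimal sketch of the usual invocation follows. It assumes the standard Amber DO_PARALLEL hook and an AMBERHOME pointing at the build under test; the -np value must be one the test accepts (8, 12, or 16 for rxsgld_4rep), and the exact mpirun spelling may differ under mvapich2.

```shell
# Sketch: run the failing tests in parallel via Amber's DO_PARALLEL hook.
# Assumes AMBERHOME is set and the parallel build is installed.
export DO_PARALLEL="mpirun -np 8"   # rxsgld_4rep allows only 8, 12, or 16

cd "$AMBERHOME/test/rxsgld_4rep" && ./Run.rxsgld
cd "$AMBERHOME/test/gbsa_xfin"   && ./Run.gbsa3
```

Repeating with -np 12 and -np 16 (and with a serial build for gbsa3) helps separate thread-count-dependent failures from compiler/MPI issues.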
On Fri, Apr 06, 2018 at 09:37:28PM +0200, Gerald Monard wrote:
> Hi Dan,
>
> I've quickly checked (Intel 2017.4 + IntelMPI, 8 threads): no segfault
> on rxsgld_4rep but mwabmd hangs forever...
>
> I've double-checked with gcc 6.3.0 + openmpi 2.1.0, 8 threads, no problem.
>
> Gerald.
>
> On 04/06/2018 07:34 PM, Daniel Roe wrote:
> > Hi All,
> >
> > I've been trying to test Amber and AmberTools in parallel with higher
> > thread counts. Currently for me (using Intel 17.0.4 / mvapich 2.2)
> > pmemd.MPI will segfault and hang on the rxsgld_4rep test with 8
> > threads (4 threads is ok).
> >
> > Also, all of the mwabmd tests appear to have issues with 8 threads
> > (sander as well). A few more details are here:
> > http://ambermd.org/pmwiki/pmwiki.php/Main/Amber18Test
> >
> > Has anyone else seen this behavior? Is anyone else testing with higher
> > thread counts?
_______________________________________________
AMBER-Developers mailing list
AMBER-Developers@ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber-developers
Received on Tue Apr 10 2018 - 14:00:02 PDT