Re: [AMBER-Developers] infinite ptraj.MPI, was: First AmberTools release candidate

From: Lachele Foley <lfoley.ccrc.uga.edu>
Date: Tue, 16 Mar 2010 21:54:41 -0400

Are you getting a slowdown, or does it hang forever? For me, it never completes -- or, at least, it hasn't completed after 45 minutes on four processors. Compared to two seconds, that's close enough to forever for me.

I'm not sure if you've seen bug 126, but we've been getting corrupted output files. For example, every two million characters or so, the file will be missing a couple of characters. Matt is setting up tests that use the different mount points (file systems) and will run them tomorrow.

I was hoping these were related...

:-) Lachele
--
B. Lachele Foley, PhD '92,'02
Assistant Research Scientist
Complex Carbohydrate Research Center, UGA
706-542-0263
lfoley.ccrc.uga.edu
----- Original Message -----
From: Daniel Roe [mailto:daniel.r.roe.gmail.com]
To: AMBER Developers Mailing List [mailto:amber-developers.ambermd.org]
Sent: Tue, 16 Mar 2010 21:37:45 -0400
Subject: Re: [AMBER-Developers] infinite ptraj.MPI, was: First AmberTools release candidate
> On Tue, Mar 16, 2010 at 8:05 PM, Lachele Foley <lfoley.ccrc.uga.edu> wrote:
> 
> > It is related to the filesystem... Yay for every little piece of
> > information...
> >
> 
> Confirmed. I don't see any slowdown on a local disk, but over NFS it becomes
> quite slow. Still looking into specifics of the slowdown and whether this
> can be 'fixed'...
> 
> No slowdown for ptraj vs. ptraj.MPI with 1 process on a local disk (whew), so
> I don't think it's anything particularly related to MPI.
> 
> As to the why, this is just my wild conjecture, but it might have to do with
> the way ptraj.MPI is currently set up to read frames: each thread reads a
> frame and then skips ahead by an offset, which, combined with a network
> filesystem, might be too much seeking around for the filesystem to handle
> efficiently. I am in the process of implementing a contiguous division of
> frames (e.g. with 2 threads, thread 0 reads frames 0 up to N/2 and thread 1
> gets the rest), which *might* be more efficient -- we'll see.
> 
> -Dan
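
For illustration, here is a minimal C sketch of the two frame-distribution
schemes described above (this is not ptraj's actual code; the frame count,
rank count, and function names are made up). The interleaved scheme has each
rank read one frame and then seek past the other ranks' frames, while the
contiguous scheme gives each rank one consecutive block so it seeks once and
then reads linearly.

/*
 * Hypothetical sketch, not ptraj.MPI internals: which frame indices each
 * rank would read under the interleaved vs. contiguous schemes.
 */
#include <stdio.h>

/* interleaved: rank r reads frames r, r+nranks, r+2*nranks, ... */
static void interleaved(int nframes, int nranks, int rank)
{
    printf("rank %d (interleaved):", rank);
    for (int f = rank; f < nframes; f += nranks)
        printf(" %d", f);
    printf("\n");
}

/* contiguous: rank r reads one block of roughly nframes/nranks frames */
static void contiguous(int nframes, int nranks, int rank)
{
    int base  = nframes / nranks;
    int extra = nframes % nranks;                       /* leftover frames */
    int start = rank * base + (rank < extra ? rank : extra);
    int count = base + (rank < extra ? 1 : 0);

    printf("rank %d (contiguous): ", rank);
    for (int f = start; f < start + count; f++)
        printf(" %d", f);
    printf("\n");
}

int main(void)
{
    const int nframes = 10, nranks = 2;
    for (int rank = 0; rank < nranks; rank++) {
        interleaved(nframes, nranks, rank);
        contiguous(nframes, nranks, rank);
    }
    return 0;
}

With 10 frames and 2 ranks this prints frames 0,2,4,6,8 / 1,3,5,7,9 for the
interleaved case and 0-4 / 5-9 for the contiguous case; the latter turns many
small seeks into one seek per rank, which is presumably what matters over NFS.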
_______________________________________________
AMBER-Developers mailing list
AMBER-Developers.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber-developers
Received on Tue Mar 16 2010 - 19:00:05 PDT