Re: [AMBER-Developers] infinite ptraj.MPI, was: First AmberTools release candidate

From: Daniel Roe <daniel.r.roe.gmail.com>
Date: Tue, 16 Mar 2010 21:37:45 -0400

On Tue, Mar 16, 2010 at 8:05 PM, Lachele Foley <lfoley.ccrc.uga.edu> wrote:

> It is related to the filesystem... Yay for every little piece of
> information...
>

Confirmed. I don't see any slowdown on a local disk, but over NFS it becomes
quite slow. I'm still looking into the specifics of the slowdown and whether
it can be 'fixed'...

There is no slowdown for ptraj vs. ptraj.MPI with 1 process on a local disk
(whew), so I don't think it's anything specifically related to MPI.

As to why, this is just wild conjecture, but it may have to do with the way
ptraj.MPI is currently set up to read frames: each thread reads one frame and
then skips ahead by an offset equal to the number of threads, which over a
network filesystem may mean too much seeking around for the filesystem to
handle efficiently. I am in the process of implementing a division of frames
into contiguous blocks (e.g. with 2 threads, thread 0 reads frames 0 up to
N/2 and thread 1 gets the rest), which *might* be more efficient - we'll see.
A rough sketch of the two schemes is below.
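For illustration only, here is a minimal sketch of the two frame-distribution
schemes (this is not the ptraj source; the frame count and variable names are
made up, and the printf calls stand in for the actual trajectory reads):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, nprocs;
    const int n_frames = 10;  /* hypothetical trajectory length */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Current (interleaved) scheme: each rank reads one frame, then
     * skips ahead by the number of ranks, so every rank ends up
     * seeking across the entire file. */
    for (int f = rank; f < n_frames; f += nprocs)
        printf("interleaved: rank %d reads frame %d\n", rank, f);

    /* Proposed (block) scheme: each rank reads a contiguous chunk,
     * e.g. with 2 ranks, rank 0 reads frames 0..N/2-1 and rank 1
     * reads the rest, so each rank seeks only within its own region. */
    int chunk = n_frames / nprocs;
    int start = rank * chunk;
    int stop  = (rank == nprocs - 1) ? n_frames : start + chunk;
    for (int f = start; f < stop; f++)
        printf("block: rank %d reads frame %d\n", rank, f);

    MPI_Finalize();
    return 0;
}

The idea is that the block scheme keeps each rank's reads roughly sequential,
which should be friendlier to NFS than having every rank stride through the
whole file.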

-Dan
_______________________________________________
AMBER-Developers mailing list
AMBER-Developers.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber-developers
Received on Tue Mar 16 2010 - 19:00:02 PDT