Re: [AMBER-Developers] Please read: we need to look at pmemd.cuda pair list fidelity

From: David Cerutti <dscerutti.gmail.com>
Date: Tue, 16 Jan 2018 22:45:36 -0500

A lot of study, and posts to a subset of the developers, happened before I
made this thread on the actual listserv. I was able to make a case, for
example, where the box volume equilibrates under NPT and we have the
problem of shrinking hash cells that eventually get too small for the
cutoff. The problem was fixed for the case of rectangular unit cells, but
the non-orthorhombic case isn't correctly covered. The simulation settles
out to a volume where each hash cell is in fact less than the cutoff in
thickness, irrespective of any pair list margin, but the code thinks it's
much thicker than that and therefore keeps right on going. I'll make an
NVE case to
study the energy conservation effects when one of these things happens.
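
To put numbers on the geometry, here is a quick Python sketch (illustrative
only, not the pmemd code). The distance between the pair of cell faces
spanned by b and c is V / |b x c|, i.e. 1 / |a*|; for the truncated-
octahedron parallelepiped with all angles at 109.47 that comes out to only
~0.8165 times the edge length, so any sizing logic that worked from the
edge lengths alone would overestimate the available thickness by about 22%:

    import numpy as np

    def cell_vectors(a, b, c, alpha, beta, gamma):
        """Build unit cell row vectors from lengths (A) and angles (deg)."""
        al, be, ga = np.radians([alpha, beta, gamma])
        av = np.array([a, 0.0, 0.0])
        bv = np.array([b * np.cos(ga), b * np.sin(ga), 0.0])
        cx = c * np.cos(be)
        cy = c * (np.cos(al) - np.cos(be) * np.cos(ga)) / np.sin(ga)
        cz = np.sqrt(c * c - cx * cx - cy * cy)
        return np.array([av, bv, [cx, cy, cz]])

    def plane_separations(cell):
        """Distance between opposite cell faces: d_i = V / |u_j x u_k|,
        which equals 1 / |reciprocal vector i|."""
        vol = abs(np.linalg.det(cell))
        crosses = [np.cross(cell[1], cell[2]),
                   np.cross(cell[2], cell[0]),
                   np.cross(cell[0], cell[1])]
        return np.array([vol / np.linalg.norm(x) for x in crosses])

    # Truncated octahedron as a parallelepiped; the 40A edge is just an example.
    cell = cell_vectors(40.0, 40.0, 40.0, 109.4712, 109.4712, 109.4712)
    print(plane_separations(cell))   # ~32.66A in each direction, not 40A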

In the meantime, I've been looking into the other issue I raised.
Apologies--I had kept the first issue off of the listserv until I had a
better idea of what I was seeing, but I fired off the second issue without
as much discussion. Nonetheless, it seems that there is something going
on, although it may not quite be what I had thought. The attached plot
shows the problem. I constructed a cubic box containing 216 TIP3P
waters--familiar to anyone who's ever tiled such a thing using TIP3PBOX in
tleap. It's about 18.6A on a side--just big enough to handle an 8A cutoff
with a 1A margin. Equilibrating and then simulating that in the NVE
ensemble, I see the energy increase as shown on the attached image. Red
line = old code. Black line = new code. I need to run it further to
verify that the new code isn't any more prone to this than the old code,
but it doesn't appear that my changes to the precision model are doing
anything detrimental. Furthermore, the occurrence and the size of the
jumps are random. Something is weird. I've tried various permutations on
this--pushing the cutoff up to 8.7A with a 0.6A skin, dialing up the
reciprocal space settings and cutting the time step down to 0.5fs (from
1fs) to remove as much noise as possible. I have still seen one of these
jumps in that situation, but it's too early yet to say whether the trend
has been affected. If I switch to double precision, I can only go 1/10th
as fast, and I haven't YET seen a jump, but I'm only out to 28ns so one
still might appear. Some of you may notice that, in between jumps, there
is a slight
DOWNWARD trend to the energy. We had thought that, if the energy isn't
flat, errors of any sort should drive it UP, but there may be something
about the fixed-precision rounding that imparts an overall tendency to
lose energy at a very slow rate (not sure on that yet, but the effect is
real).
I tried making a larger box, one that has three hash cells in each
direction, and so far I haven't seen any jumps in the energy but there is
still that slight downward pressure on the overall energy. If I then take
that larger box and increase the cutoff, so that there are now only 2x2x2
hash cells once again (but each of them is highly oversized), I am not (yet)
seeing any jumps of this sort, but that kind of jibes with my original
hypothesis given what I know about how the pairlist works. You're going to
see this problem when the box is really small (< 20A thick) and only two
hash cells thick.
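
For reference, here is the arithmetic on the cell layout (a sketch of the
usual spatial-decomposition sizing, not necessarily the exact pmemd.cuda
scheme): if every hash cell has to be at least cutoff + margin thick, the
18.6A box never fits more than two cells per dimension, and the 8.7A
cutoff with the 0.6A skin sits exactly at the 9.3A limit:

    import math

    def hash_cell_layout(box_len, cutoff, skin):
        # Fit as many cells as possible while keeping each at least
        # (cutoff + skin) wide -- a sketch, not the pmemd.cuda source.
        ncell = max(1, math.floor(box_len / (cutoff + skin)))
        return ncell, box_len / ncell

    # 216 TIP3P waters in a cubic box about 18.6A on a side
    for cutoff, skin in [(8.0, 1.0), (8.7, 0.6)]:
        n, width = hash_cell_layout(18.6, cutoff, skin)
        print(f"cutoff {cutoff}A + skin {skin}A: {n} cells/dim, "
              f"each {width:.2f}A wide")
    # cutoff 8.0A + skin 1.0A: 2 cells/dim, each 9.30A wide
    # cutoff 8.7A + skin 0.6A: 2 cells/dim, each 9.30A wide

With only two cells in each dimension, every cell's periodic image is also
its immediate neighbor, which is presumably part of why such a tiny box
stresses the pair list harder than a larger one does.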

By tomorrow I'll hopefully have more to report on the first problem I
brought up. For now, it looks like I wasn't crying wolf over the second
one.

Dave


On Tue, Jan 16, 2018 at 10:57 AM, Daniel Roe <daniel.r.roe.gmail.com> wrote:

> Hi,
>
> On Mon, Jan 15, 2018 at 10:11 PM, David Cerutti <dscerutti.gmail.com>
> wrote:
> > What's happening is that, for non-orthogonal unit cells (and the
> > only case tleap really knows how to construct is the truncated
> > octahedron, translating it into a parallelepiped with all angles at
> > 109.47), sander and pmemd do not know how to properly gauge the distance
> > between the box faces, and therefore the number of hash cells to create
> > when building their pair lists.
>
> I don't think you are right about this. Sander (and cpptraj and I
> think pmemd as well) determines the number of grid cells in each
> dimension using the length of the *reciprocal* cell vectors, which is
> the inverse of the fractional coordinate system. Because this
> determination is done in reciprocal (fractional) space, it doesn't
> matter if the unit cell is orthorhombic or not. All of the "gridding"
> during the pair list build is also done in fractional space, so I
> think the pair list builds are fine. Maybe I'm not understanding what
> you're saying the problem is though. Do you have a specific test case
> where the pair list is not properly set up (i.e. an atom that is
> within the cutoff of another atom is skipped)?
>
> -Dan
>
> --
> -------------------------
> Daniel R. Roe
> Laboratory of Computational Biology
> National Institutes of Health, NHLBI
> 5635 Fishers Ln, Rm T900
> Rockville MD, 20852
> https://www.lobos.nih.gov/lcb
>
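
(For anyone following along, here is a minimal sketch of the gridding Dan
describes: the cell counts come from 1 / |reciprocal vector|, which is the
true face-to-face distance, and atoms are binned by their fractional
coordinates. It assumes the cell matrix is stored as row vectors and is
illustrative only, not the actual sander/pmemd source.)

    import numpy as np

    def grid_in_fractional_space(cell, coords, cutoff, margin=0.0):
        # Cell counts from the reciprocal (fractional) lengths, atom
        # assignment by fractional coordinate -- insensitive to cell skew.
        recip = np.linalg.inv(cell).T                # rows: reciprocal vectors
        dsep = 1.0 / np.linalg.norm(recip, axis=1)   # face-to-face distances
        ncell = np.maximum(1, np.floor(dsep / (cutoff + margin)).astype(int))

        frac = coords @ np.linalg.inv(cell)          # Cartesian -> fractional
        frac -= np.floor(frac)                       # wrap into [0, 1)
        cell_index = np.minimum((frac * ncell).astype(int), ncell - 1)
        return ncell, cell_index

    # Example: a cubic 18.6A cell with an 8A cutoff and 1A margin.
    cell = np.diag([18.6, 18.6, 18.6])
    ncell, idx = grid_in_fractional_space(cell, np.array([[0.5, 9.0, 18.0]]),
                                          8.0, 1.0)
    print(ncell, idx)   # [2 2 2] [[0 0 1]]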


_______________________________________________
AMBER-Developers mailing list
AMBER-Developers.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber-developers

Received on Tue Jan 16 2018 - 20:00:02 PST