Re: [AMBER-Developers] On the precision of SPFP in pmemd.cuda

From: Jason Swails <jason.swails.gmail.com>
Date: Tue, 16 May 2017 23:11:21 -0400

Hi Dave,

This is a lot of cool information and a nicely complete investigation -- it
was an enjoyable and informative read. Maybe I could encourage you to start
a Wiki page with these findings so the work survives a bit longer than this
discussion thread.

On Tue, May 16, 2017 at 10:10 PM, David Cerutti <dscerutti.gmail.com> wrote:

> (In SP modes pmemd.cuda
> can represent positions to a precision of 1/1048576 A, a number which won't
> change unless the cutoff gets needlessly large or unadvisably small.)


What does the internal representation of the coordinates look like for a
PME simulation in pmemd.cuda when iwrap is set to 0 (i.e., no wrapping is
done)? As coordinates grow, the precision you pointed out degrades: the
spacing between adjacent real numbers that can be represented in a
fixed-size floating-point format grows as the absolute values of the
numbers themselves grow. Obviously with a nicely packed box (the way Amber
simulations usually start, or when restarting from a simulation run with
iwrap=1), the precision won't vary much across 64 A. But if coordinates are
allowed to grow without bound, I suspect this precision could quickly
become the largest source of error.
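
To put a number on that, here is a throwaway C sketch (just an
illustration of single-precision spacing, not anything from the actual
pmemd.cuda code; the 1/1048576 A figure is simply copied from your
message) comparing the gap between adjacent representable values at a few
coordinate magnitudes:

/* Print the spacing of adjacent single-precision values at several
 * coordinate magnitudes, next to the 2^-20 A resolution quoted above. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double quoted_res = 1.0 / 1048576.0;   /* 1/2^20 A, from the quote */
    const float coords[] = { 32.0f, 64.0f, 512.0f, 4096.0f, 32768.0f };

    printf("quoted resolution: %.3e A\n", quoted_res);
    for (int i = 0; i < 5; i++) {
        float x = coords[i];
        /* gap between x and the next representable float above it */
        float gap = nextafterf(x, INFINITY) - x;
        printf("coordinate %8.1f A -> float spacing %.3e A\n", x, gap);
    }
    return 0;
}

The spacing roughly doubles every time the coordinate doubles, which is
the degradation I mean.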

Probably the safest way to run an iwrap=0 simulation is to maintain the
"compact" representation of coordinates internally (so as to avoid losing
precision when computing energies and/or forces) as well as a set of
translations for each atom so that the "unwrapped" representation can be
returned for printing in the corresponding output files. I'm pretty sure
this is what OpenMM does, but I have no clue about pmemd.cuda.
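
Here is roughly what I have in mind, in skeleton form (my guess at the
general bookkeeping only, not OpenMM's or pmemd.cuda's actual data
structures; orthorhombic box, a single dimension shown):

/* Keep coordinates wrapped into the primary box for the force/energy
 * work, plus a per-atom image count so the unwrapped coordinate can be
 * reconstructed when writing the trajectory. */
#include <stdio.h>

typedef struct {
    double x;       /* wrapped coordinate, always kept in [0, box) */
    int    image;   /* number of box lengths subtracted while wrapping */
} WrappedCoord;

/* Re-wrap after a dynamics step, updating the image count. */
static void rewrap(WrappedCoord *c, double box)
{
    while (c->x >= box) { c->x -= box; c->image += 1; }
    while (c->x <  0.0) { c->x += box; c->image -= 1; }
}

/* Unwrapped coordinate for the output files. */
static double unwrap(const WrappedCoord *c, double box)
{
    return c->x + (double)c->image * box;
}

int main(void)
{
    const double box = 64.0;
    WrappedCoord c = { 2.5, 0 };

    c.x += 63.0;    /* the atom drifts across the box boundary */
    rewrap(&c, box);
    printf("wrapped = %.3f A, unwrapped = %.3f A\n", c.x, unwrap(&c, box));
    return 0;
}

The forces and energies only ever see the wrapped value, so their
precision stays put, while the output files still report the diffusive
path.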


> representation. In this format I can handle forces up to 2000 kcal/mol-A.
>
> I'd conclude this by saying "I don't think that any simulation that's
> stable at all is going to break that," but it sounds too ominous and will
> no doubt invite comments of "famous last words..." If really necessary, I'm
> pretty confident that I can push to 19 bits of precision past the decimal
> (force truncation is then getting us slightly more error than the
> coordinate rounding, but still way way below any of the other sources), and
> let the accumulators take up to 4000 kcal/mol-A forces. About one in a
> million individual forces gets above 64 kcal/mol-A, and it becomes
> exponentially less likely to get forces of larger and larger sizes the
> higher up you go.
>

An obvious counterexample is minimization, although it's not all that bad
to make people minimize on the CPU. Another, perhaps less obvious,
application that may result in overflowing forces or energies is hybrid
MD/MC methodology (e.g., H-REMD) with aggressive move sets, where a few
forces and/or energies can become extremely large. The obvious thing to do
there is simply to reject the move and soldier on, but if the simulation
instead crashes or becomes corrupted, that may prove limiting for these
kinds of applications.
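
For concreteness, here is a quick sketch of the range/precision trade-off
in the numbers you quoted, assuming the per-contribution forces go into a
signed 32-bit fixed-point word (an illustration of the arithmetic only,
not the actual SPFP accumulation; to_fixed and fbits are made up for the
example). With f fractional bits the largest representable magnitude is
about 2^(31-f), i.e. roughly 2048 kcal/mol-A at 20 bits and 4096 at 19:

/* Convert a force (kcal/mol-A) to fixed point with 'fbits' fractional
 * bits and report whether it fits in a signed 32-bit word; saturate if
 * it does not, which is where a "reject the move" branch could hang. */
#include <stdint.h>
#include <stdio.h>

static int32_t to_fixed(double force, int fbits, int *overflowed)
{
    double scaled = force * (double)(1 << fbits);
    const double limit = 2147483647.0;          /* INT32_MAX */

    *overflowed = (scaled > limit || scaled < -limit - 1.0);
    if (*overflowed)
        scaled = (scaled > 0.0) ? limit : -limit - 1.0;
    return (int32_t)scaled;
}

int main(void)
{
    const double forces[] = { 64.0, 1999.0, 2500.0 };   /* kcal/mol-A */
    for (int i = 0; i < 3; i++) {
        int ovf20, ovf19;
        to_fixed(forces[i], 20, &ovf20);
        to_fixed(forces[i], 19, &ovf19);
        printf("%8.1f kcal/mol-A: 20 bits %s, 19 bits %s\n", forces[i],
               ovf20 ? "overflows" : "fits", ovf19 ? "overflows" : "fits");
    }
    return 0;
}

An overflow test like that is cheap, so detecting the blow-up and handing
the decision back to the MC machinery seems preferable to silently
wrapping the accumulator.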

Just some thoughts.

All the best,
Jason

-- 
Jason M. Swails
_______________________________________________
AMBER-Developers mailing list
AMBER-Developers.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber-developers
Received on Tue May 16 2017 - 20:30:03 PDT