Re: [AMBER-Developers] Oddities in runmd.F90 in pmemd

From: Scott Le Grand <>
Date: Fri, 14 Dec 2012 14:59:15 -0800

And it appears I'm correct, as my fix no longer shows this behavior for at
least one test case. It's also ~4x faster than the previous code for
pressure-scaling the constraint coordinates, at no additional charge...

On Fri, Dec 14, 2012 at 8:33 AM, Scott Le Grand <> wrote:

> I'm working on the fix, if this is indeed a result of FPRE. My belief
> right now is that NTP scaling of the restraint data causes it to contract
> and expand, and this cumulatively shrinks the box it occupies. That in
> turn causes a pressure bleed by pushing the real atom coordinates towards
> the origin, which raises the pressure. The real atom coordinates have the
> PE function operating to prevent this from happening as a result of
> pressure scaling, but the restraint coordinates do not. I admit it's a
> hypothesis, and if it's true, it also means I would ultimately expect to
> see this after hundreds of millions of iterations in full DP. But it's
> currently the only thing I can think of that fits the bizarre behavior
> here, which doesn't manifest at all until 200K iterations or so and then
> gradually heats things up.
> I've seen similar behavior with transformation matrices in the past that
> required frequent renormalization. The solution I'm trying is to always
> use the original, pre-centered restraint data, with the centers of mass
> pre-transformed into fractional coordinates, and then regenerate the
> restraint coordinates each step from the unit cell matrix.
> Scott
> On Thu, Dec 13, 2012 at 2:27 PM, Scott Le Grand <> wrote:
>> As noted in the bug report:
>> 1) Pressure scaling is all done in one place for the GPU:
>> kCalculateSoluteCOM(gpu);
>> kPressureScaleCoordinates(gpu);
>> if (gpu->sim.constraints > 0)
>> {
>>     kCalculateSoluteConstraintsCOM(gpu);
>>     kReduceSoluteConstraintsCOM(gpu);
>>     kPressureScaleConstraintCoordinates(gpu);
>> }
>> Whatever's going on (and I suspect it's ultimately FPRE), it's not that,
>> or it would blow up a lot sooner...
>> 2) While the else looks strange, it appears to be the end case for the
>> chain starting at line 1214 with if (ntp .eq. 1), with an else if
>> (ntp .eq. 2) at line 1221, finished by the else at line 1257 (not my
>> code, though)...
>> On Thu, Dec 13, 2012 at 11:40 AM, Duke, Robert E Jr <> wrote:
>>> Hmmm, all past my tenure on pmemd itself, as I expect you know. I am
>>> buried in the amoeba code for right now, but will be interested in going
>>> back to see what happened to pmemd in the time period I have not been
>>> working on it when I get time. I would think the origin of npt could be
>>> tracked down, and I would agree it is not the correct way to fix a typo...
>>> It did not exist in Amber 10. Scott?
>>> - Bob
>>> ________________________________________
>>> From: David A Case []
>>> Sent: Thursday, December 13, 2012 5:54 AM
>>> To:
>>> Subject: [AMBER-Developers] Oddities in runmd.F90 in pmemd
>>> (1) About the problem that occurs when ntr=1, ntp=1 on GPU:
>>> In the CPU code, when ntp>0 and ntr=1, there is a call to
>>> pressure_scale_restraint_crds(). But there is no GPU equivalent to
>>> this. Assuming that the restraint coordinates actually used are the
>>> ones on the GPU(?), this looks like an error.
>>> (2) There are several places where a specific "else if" has been
>>> commented out and replaced with a generic else, e.g. at about line 1257:
>>> !else if (npt .gt. 2) then
>>> else
>>> Is there a reason for this? The effect of the above change is that
>>> the following section is run even when ntp = 0, which looks wrong.
>>> Is it just that "npt" above is a typo (should be "ntp"), and was
>>> fixed in a funny way? A similar thing happens (but with no typo) for
>>> the csurften==3 option.
>>> ...thx...dac
>>> _______________________________________________
>>> AMBER-Developers mailing list
Received on Fri Dec 14 2012 - 15:00:02 PST