Re: amber-developers: amber performance

From: Robert Duke <rduke.email.unc.edu>
Date: Thu, 1 Mar 2007 17:07:37 -0500

Well, I am trying to figure out what is immediately best for me and
everyone. I do think the community can use better sampling, and I have some
credibility in the area of being able to churn out code that will do the
job. There does come a point where beating on things is a waste of time,
and I sometimes get caught in tight loops working on an optimization that is
going nowhere, but mostly, except when I am totally burned out, it is
productive. I probably need to publish a pmemd paper on some of the basic
algorithms. It is going to be like root canal work, and I really don't
think it is all that interesting. This whole issue of getting code to run
fast is an art, involving as much intuition about how to do the various
tradeoffs as anything. So Shaw et al. produce some nice papers with all
kinds of math that show why certain things work so well. What they don't
tell you is that there is a whole herd of elephants in the room that they
are ignoring. So I should do this paper, but it is one of those career
markers, not really useful (anybody who thinks I read "how I did it" papers
to do all the stuff I do just doesn't understand the kind of s/w developer I
am - I mostly glance at the papers, noting all the half-truths and
omissions). On the chemistry/forcefield front, I actually find that more
interesting from a paper perspective, but man, there is a ton of work to be
done to say definitive things, and I think saying things that are not
definitive is not all that useful. So I have plans to extend the current
work, and it may make a nice paper, but nothing really important is waiting
on it. It would be a different story if I thought a smooth
cutoff method would produce the exact same or higher quality results and be
faster too. Then I would be in a box to validate why I want to use a smooth
cutoff, and why it is okay to do so. But this is not the case. Ultimately,
though, I'll try to figure out whether this stuff is truly useful, and also do
a bit of debunking of less carefully constructed methods.
Best Regards - Bob

----- Original Message -----
From: "Scott Brozell" <sbrozell.scripps.edu>
To: <amber-developers.scripps.edu>
Sent: Thursday, March 01, 2007 4:17 PM
Subject: Re: amber-developers: amber performance


> Hi,
>
> If Dave's comments are only worth $0.02 then mine surely will be
> valueless, due to inflation, by the time you finish reading this :)
>
> I started the Sunday night conversation between Ross, Tom C, and Mike
> because I am an Amber representative and I do not have enough expertise
> to explain in detail to people like Kent Milfeld and other higher-ups
> why the well-advertised NAMD and Gromacs are not the best-performing MD
> codes. Of course, I can mention throughput vs scaling and everybody gets
> it, but the tradeoffs between single and double precision vis-a-vis
> energy conservation, and other good vs dubious science issues are
> more involved.
>
> Thus, I think that we do need publications, not just benchmarks,
> that underscore the tradeoffs. I see the pedagogical value of such
> work as much greater than its Amber advertising value.
>
> Scott
>
> P.S. And in the category of free advice,
>
> It reads like Bob has quite a bit of work that could be
> published - this could be a case of not heeding Knuth's maxim:
> premature optimization is the root of all evil.
> In other words, you may do more communal good by publishing than
> by optimizing pmemd for feature x on platform y or beating NAMD
> performance yet again.
>
>
> On Thu, 1 Mar 2007, David A. Case wrote:
>
>> On Wed, Feb 28, 2007, Ross Walker wrote:
>> >
>> > Tom C and I spent some time talking about this on Sunday evening.
>> > We pretty much came to the conclusion that what we want to do is
>> > design a series of calculations that attempt to address the various
>> > issues that people simply "believe" at present, i.e., find something
>> > that is sensitive to single vs double precision and see if we can
>> > address whether single precision is okay or not.
>>
>> In my view, this could very easily become a time sink where you don't
>> get any definitive results and just waste time and create controversy.
>> The way in which Gromacs and Desmond use single precision is quite
>> different, and both would be different from trying to compile sander in
>> single precision.
>>
>>
>> > In addition, we thought about addressing PME vs force-switch
>> > simulations. Here a salt-water solution simulation might be useful,
>> > as this would be an extreme test of the electrostatics, and we could
>> > address whether a force-switch cutoff is actually okay or if you
>> > really need PME.
>>
>> Bob has already answered this, pointing out quite eloquently ways in
>> which this could become another wild-goose chase. And, since he already
>> has so much data in this area, starting some new effort here doesn't
>> seem very productive.
>>
>> >
>> > A straight benchmark paper is unlikely to get published, so I think
>> > wrapping it up in a paper that attempts to address issues regarding
>> > PME, time step, single vs double precision, etc. is a good way to go.
>>
>> We don't necessarily need a paper on benchmarks: a good web site that
>> collects numbers we already have lying around (plus some new
>> calculations) would go a long way in my view. A key need: someone in
>> the Amber community needs to be able to run Gromacs and NAMD, so we can
>> do our own side-by-side comparisons.
>>
>> The usual $.02 disclaimers go here.
>
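As a rough illustration of the single- vs double-precision point raised in the
thread above, here is a minimal sketch. It is illustrative only: it is not
Amber, pmemd, Gromacs, or Desmond code, and the magnitudes are made up. It
simply shows how a single-precision sum of many small energy-like
contributions drifts away from the double-precision result, which is the kind
of rounding behavior behind the energy-conservation concern.

/* Illustrative only: not Amber/pmemd code; magnitudes are made up.
 * Accumulate many small "energy" contributions in float and double
 * to show how single-precision rounding error builds up. */
#include <stdio.h>

int main(void)
{
    const long   nsteps = 10000000;   /* number of small contributions */
    const double term   = 1.0e-7;     /* size of each contribution     */

    float  e_single = 0.0f;
    double e_double = 0.0;

    for (long i = 0; i < nsteps; ++i) {
        e_single += (float)term;      /* ~7 significant decimal digits  */
        e_double += term;             /* ~16 significant decimal digits */
    }

    /* The exact sum is nsteps * term = 1.0.  The single-precision total
     * drifts because late contributions are small relative to the running
     * sum and are rounded, step after step. */
    printf("double-precision sum: %.9f\n", e_double);
    printf("single-precision sum: %.9f\n", (double)e_single);
    printf("drift (single - exact): %.3e\n", (double)e_single - 1.0);
    return 0;
}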