On 2/12/14, 9:48 AM, "Jason Swails" <jason.swails.gmail.com> wrote:
>On Wed, Feb 12, 2014 at 12:16 PM, Ross Walker <ross.rosswalker.co.uk>
>wrote:
>
>> Hi Jason,
>>
>> I agree that pairs_calc.i is getting out of control. This is part of
>> the reason I removed the vectorized version of it and have been
>> attempting to clean things up. The ultimate goal is to simplify this as
>> much as possible and then go through it with the aim to aggressively
>> optimize for x86_64 - I think it is crazy trying to keep different code
>> paths for different architectures these days. That said, all of the
>> code duplication is necessary for performance and to avoid if
>> statements - although I think with modern compilers and branch
>> prediction it might not be as bad as it used to be. This is something
>> that needs to be done once everything else has settled down though, so
>> it will likely be a goal for between the code freeze (which is looking
>> more and more like a feature freeze given the amount of changes in the
>> last few weeks) and release.
>>
>
>I wasn't saying do away with code duplication. I was simply saying that
>at some point it's easier to have the same code in 5 different
>subroutines than to auto-generate those 5 subroutines from the same
>header by playing with preprocessor defines. In any case, Scott's way in
>CUDA is cleaner (i.e., leave the subroutine definitions out and just have
>the code in the header -- that would probably help reduce the #ifdef
>clutter).
I 110% agree with you! - And that is the plan. Having preprocessor
directives everywhere, and include files of code that include other
include files multiple times, is just hellish for someone trying to
understand it. Hence I am beginning to clean it up - the idea being,
though, to strip as much legacy stuff as possible first to make the
expansion into duplicated routines as simple as possible.
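To make the pattern concrete for anyone following along, here is a rough
sketch in C of the kind of multiple-inclusion trick I mean - hypothetical
file, macro and function names, not the actual pairs_calc.i code:

  /* pair_body.i - shared inner loop, specialized at compile time.
   * Everything here is illustrative only.                            */
  static double PAIR_FN(const double *r2, int n)
  {
      double e = 0.0;
      for (int i = 0; i < n; ++i) {
  #ifdef USE_CUTOFF
          if (r2[i] > CUT2) continue;  /* compiled into this copy only */
  #endif
          e += 1.0 / r2[i];            /* stand-in for the real kernel */
      }
      return e;
  }

  /* pairs_calc.c - expands the body twice to get two specialized
   * routines with no runtime if statement in the inner loop.         */
  #define CUT2 81.0

  #define PAIR_FN pair_energy_cutoff
  #define USE_CUTOFF
  #include "pair_body.i"
  #undef USE_CUTOFF
  #undef PAIR_FN

  #define PAIR_FN pair_energy_nocut
  #include "pair_body.i"
  #undef PAIR_FN

The duplicated-routine alternative is simply to write pair_energy_cutoff
and pair_energy_nocut out in full - more lines, but far easier to read
and to optimize per routine, which is where I want to end up.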
>
>
>> From my understanding, 12-6-4 does not involve any algorithmic clashes
>> with the various methods, so I think it should be supported for GB,
>> IPS, TI and PME and should not be enabled until that is true. This is
>> just my opinion; others may differ on this.
>>
>
>Well I'll start by saying it only makes sense in explicit solvent (it
>fixes divalent metal ions interacting with water), so GB is unnecessary
>to support. That said, enforcing all-or-nothing implementations
>universally is silly -- a compromise should be struck here. Do as
>complete a job as possible given time and expertise constraints and
>document what works and what doesn't (and what you think might work but
>aren't sure about). Nobody is an expert in everything, and such policies
>will hinder progress in our science (or drive developers toward packages
>like OpenMM ;).
Agreed - but it doesn't look to me like it would be much work to add it
to GB, although as you say it isn't designed for that, so that should be
documented in the code comments and around the protection code - so
someone realizes specifically why it was never added. Note IPS is
explicit solvent, so it makes sense for it to be there, but again, if it
is not, the reason should be well documented. If someone has time
constraints on completing a full implementation then that's OK, but there
should be a clear plan for adding the missing parts later - e.g. a
feature request in bugzilla detailing what still needs to be completed
and why.
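For reference, my understanding of the 12-6-4 term is that it simply adds
an attractive r^-4 (ion-induced dipole) contribution on top of the usual
12-6 Lennard-Jones, i.e. a pair energy of roughly the form below - a
sketch only, with made-up coefficient names rather than pmemd's actual
variables:

  /* Sketch of a 12-6-4 pair energy: standard 12-6 Lennard-Jones plus an
   * extra attractive r^-4 term.  c12, c6 and c4 are illustrative names. */
  static double pair_energy_1264(double r, double c12, double c6, double c4)
  {
      double r2  = r * r;
      double r4  = r2 * r2;
      double r6  = r4 * r2;
      double r12 = r6 * r6;
      return c12 / r12 - c6 / r6 - c4 / r4;
  }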
>
>> Note, I agree the IPS implementation sucks; it should never have been
>> done as separate pairs_calc and I am looking to try and simplify this.
>> The main thing though: if the new method works with TI in sander, it
>> should be made to work with TI in pmemd - that way we keep the
>> consistency across sander and pmemd for our users as much as possible.
>>
>
>This is not at all the model we have currently. There is a lot of
>functionality that sander has that pmemd does not. It would be less usual
>to support something in pmemd that is unsupported in sander.
Yes, although these are mostly BIG methods - QM/MM for example. Small
things, like barostats or tweaks to VDW terms, I would like to keep as
consistent as possible between the codes.
>
>Also, the TI code in pmemd is so _drastically_ different from that in
>sander (and harder to make new potential terms consistent with), that
>making a new model compatible with TI is a much bigger undertaking. This
>is done to speed up TI in pmemd (a lot in explicit solvent), but comes at
>a cost (it won't work with as much stuff as it will in sander).
>Incremental progress here is key to steady improvement.
Agreed - Joe Kaus understands it beautifully - you should have Pengfei
contact him with details of what is being added, and Joe will hopefully
take care of it under Pengfei's guidance. This is why we need people who
add new code (such as Joe's TI code) to commit to supporting it and
helping others with it for a decent amount of time - beyond simply adding
it for a PhD, publishing, taking a job and then never being heard from
again. I think if people add stuff to the code base there should be a
reasonable expectation that they will donate their time for a decent
length of time to keep it maintained and/or help others learn and
maintain it.
>> Also note that this was added to the trunk and yet there are messages
>> like: "This ...it very well may be supported." - Ideally these things
>> should have been properly tested and addressed before this was checked
>> in.
>>
>
>I disagree here. If you're not sure whether or not it works, disable it
>and let an expert in the method who wants to use your modification
>determine if it works or not. The fastest way to address this complaint
>is to have the developer say "it doesn't work" if they're unsure. Why not
>give an interested party a head start by saying "it might work, but I'm
>not sure"?
Agreed, BUT make sure you contact the developer responsible for that
code, make them aware of it, and work with them to help them check your
new method. Simply delegating that responsibility without getting an OK
on accepting the delegation is not going to work.
>This is done already, I believe, with TI + REMD in pmemd, which is
>disabled but probably works fine. ;)
So there should be a bug filed for this in bugzilla - assigned to the
relevant people so that we have tracking of when and if this will get
resolved. Otherwise it will just fall through the cracks and get
forgotten.
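As a concrete (entirely made-up) example of the kind of protection code
and documentation I have in mind - pmemd is Fortran, so take this as a C
sketch of the idea only:

  #include <stdio.h>
  #include <stdlib.h>

  /* Hypothetical input-validation guard.  The point is that a disabled
   * combination should fail loudly, say why it is disabled, and point at
   * the tracking item that covers enabling it.                          */
  static void check_supported(int using_ti, int using_remd)
  {
      if (using_ti && using_remd) {
          /* Disabled pending validation, not known to be broken -
           * see the corresponding bugzilla entry for status.            */
          fprintf(stderr, "ERROR: TI + REMD is not yet validated in "
                          "pmemd; see the tracking bug for status.\n");
          exit(EXIT_FAILURE);
      }
  }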
>P.S. TI (only) in pmemd is broken and failing most of the tests. I'm
>trying to figure out why.
I stripped a bunch of stuff around it in get_nb_energy.i yesterday, but
the pmemd test cases were working at that point - unless running
./test_amber_serial.sh does not test everything in pmemd. :-(
On a related note, we need to update the test targets such that one can
test an individual package, say sander or pmemd, in serial or parallel
(or GPU) with a single make target. E.g. "make test.pmemd.serial" should
test EVERY pmemd feature that is supported in serial. If you just wanted
to test TI, for example, then this should probably be a target like
test.pmemd.serial.TI that is included under test.pmemd.serial (or better,
test.pmemd.TI, which is agnostic to serial or parallel and included under
both high-level test targets) - roughly as sketched below.
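Something along these lines in the test Makefile - a sketch only, with
made-up directory and script names, and assuming the usual convention of
an empty DO_PARALLEL for serial runs:

  # Hypothetical sketch of the proposed hierarchy - directory, script and
  # target names are illustrative, not the real AMBER test suite layout.

  # High-level targets: same feature targets, different launch wrapper.
  test.pmemd.serial:
          $(MAKE) test.pmemd.TI test.pmemd.PME DO_PARALLEL=""

  test.pmemd.parallel:
          $(MAKE) test.pmemd.TI test.pmemd.PME DO_PARALLEL="mpirun -np 2"

  # Feature-level targets, agnostic to serial vs parallel; each Run script
  # prefixes the pmemd command with $DO_PARALLEL when it is non-empty.
  test.pmemd.TI:
          cd ti_example && ./Run.ti

  test.pmemd.PME:
          cd pme_example && ./Run.pme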
All the best
Ross
/\
\/
|\oss Walker
---------------------------------------------------------
| Associate Research Professor |
| San Diego Supercomputer Center |
| Adjunct Associate Professor |
| Dept. of Chemistry and Biochemistry |
| University of California San Diego |
| NVIDIA Fellow |
| http://www.rosswalker.co.uk | http://www.wmd-lab.org |
| Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk |
---------------------------------------------------------
Note: Electronic Mail is not secure, has no guarantee of delivery, may not
be read every day, and should not be used for urgent or sensitive issues.