Re: amber-developers: Fw: How many atoms?

From: Ken Merz <merz@qtp.ufl.edu>
Date: Tue, 4 Dec 2007 20:46:35 -0500

Hi,
  If it costs us nothing, then why not scale PMEMD beyond 999,999
atoms? Someone out there might want to do a 1MM+ atom simulation
with the AMBER program suite! Kennie

On 4 Dec 2007, at 2:14 PM, Robert Duke wrote:

> Hello folks!
> I am working hard on high-scaling pmemd code, and in the course of
> the work it became clear to me, due to large async i/o buffers and
> other issues, that going to very high atom counts may require a
> bunch of extra work, especially on certain platforms (BG/L in
> particular...). I posed the question below to Dave Case; he
> suggested I bounce it off the list, so here it is. The crux of the
> matter is how people feel about having an MD capability in pmemd
> for systems bigger than 999,999 atoms in the next release. Please
> respond to the dev list if you have strong feelings in either
> direction.
> Thanks much! - Bob
>
> ----- Original Message ----- From: "Robert Duke" <rduke@email.unc.edu>
> To: "David A. Case" <case@scripps.edu>
> Sent: Tuesday, December 04, 2007 8:45 AM
> Subject: How many atoms?
>
>
>> Hi Dave,
>> Just thought I would pulse you about how strong the desire is to
>> go above 1,000,000-atom systems in the next release. I personally
>> see this as more an advertising issue than real science; it's
>> hard to get good statistics/good science on 100,000 atoms, let
>> alone 10,000,000 atoms. However, we do have competition.
>>
>> The prmtop is not an issue, but the inpcrd format is, and one
>> thing that could be done is to move to supporting the same type
>> of flexible format in the inpcrd as we do in the new-style prmtop
>> (sketched below the quoted message). Tom D. has an inpcrd format
>> in amoeba that would probably do the trick; I can easily read
>> this in pmemd but not yet write it (I actually have pulled the
>> code out - left it in the amoeba version, of course, but can put
>> it back in as needed).
>>
>> I ask the question now because I am already hitting size issues
>> on BG/L on something like cellulose. Some of this I can fix; some
>> of it really is more appropriately fixed by running on 64-bit
>> systems where there actually is multi-GB physical memory. The
>> problem is particularly bad with some new code I am developing,
>> due to extensive async i/o and its requirement for buffers that,
>> at least theoretically, could be pretty big (up to natom entries;
>> by spending a couple of days writing really complicated code I
>> can actually handle this in a small amount of space with
>> effectively no performance impact - but it is the sort of thing
>> that will be touchy and require additional testing; see the
>> bounded-buffer sketch below).
>>
>> Anyway, I do want to gauge the desire to move up past 999,999
>> atoms, and make the point that on something like BG/L it would
>> actually require a lot more work to be able to run multi-million-
>> atom problems: basically, we have to go back and look at all the
>> allocations, make them dense rather than sparse by doing all
>> indexing through lists, allow for adaptive minimal i/o buffers,
>> etc. - messy stuff, some of it stemming from having to allocate
>> lots of arrays dimensioned by natom (the dense-indexing sketch
>> below illustrates the idea).
>> Best Regards - Bob
>
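The 999,999 ceiling Bob refers to comes from holding natom in a
fixed-width integer field in the inpcrd header, while the new-style
prmtop sidesteps the problem with self-describing %FORMAT lines.
Below is a minimal Fortran 90 sketch of the difference; the six- and
ten-digit widths and the header strings are illustrative assumptions,
not the exact AMBER formats.

    program inpcrd_header_sketch
      implicit none
      integer           :: natom
      character(len=80) :: line

      ! Old-style header: natom lives in a fixed-width integer field.
      ! A six-digit field caps the count at 999,999; anything larger
      ! cannot be written back into the same field.  (Assumed width,
      ! for illustration only.)
      line = '999999'
      read (line, '(i6)') natom
      print *, 'fixed-width natom =', natom

      ! Flexible header in the spirit of the new-style prmtop: a
      ! %FORMAT line names the edit descriptor, so a reader adapts
      ! to whatever width the writer chose.
      call read_flexible('%FORMAT(i10)', '   1234567', natom)
      print *, 'flexible natom    =', natom

    contains

      subroutine read_flexible(fmt_line, data_line, n)
        character(len=*), intent(in)  :: fmt_line, data_line
        integer,          intent(out) :: n
        character(len=32) :: fmt
        ! Recover the edit descriptor from between the parentheses.
        fmt = '(' // fmt_line(9:index(fmt_line, ')') - 1) // ')'
        read (data_line, fmt) n
      end subroutine read_flexible

    end program inpcrd_header_sketch

With a header like this, going past 1,000,000 atoms is just a wider
descriptor in the %FORMAT line; nothing in the reader has to change.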
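The "small amount of space" trick for the async i/o buffers can be
pictured as bounded staging: stream the natom-sized output through a
fixed-size chunk instead of allocating one buffer dimensioned by
natom. A serial Fortran 90 illustration follows; all names and sizes
are invented for the sketch, and a real async version would overlap
draining one chunk with filling the next, which is omitted here.

    program chunked_output_sketch
      implicit none
      integer, parameter :: chunk = 4096        ! fixed, independent of natom
      integer :: natom, first, last, i
      double precision, allocatable :: crd(:,:) ! the real data, 3 x natom
      double precision :: buf(3, chunk)         ! bounded staging buffer

      natom = 100000
      allocate (crd(3, natom))
      crd = 0.0d0

      open (unit=10, file='crd_sketch.bin', form='unformatted', &
            status='replace')

      ! Stream the coordinates in fixed-size pieces; buffer memory
      ! stays O(chunk) no matter how large natom grows.
      do first = 1, natom, chunk
        last = min(first + chunk - 1, natom)
        do i = first, last
          buf(:, i - first + 1) = crd(:, i)
        end do
        write (10) buf(:, 1:last - first + 1)
      end do

      close (10)
    end program chunked_output_sketch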
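The "dense rather than sparse" allocation Bob describes can be
sketched as follows: instead of every rank allocating per-atom arrays
dimensioned by the global natom, each rank keeps a list of the atoms
it owns and dimensions its arrays by that local count. A Fortran 90
sketch, with an invented ownership pattern purely for illustration:

    program dense_indexing_sketch
      implicit none
      integer :: natom, n_owned, i
      integer,          allocatable :: owned(:)       ! global ids of local atoms
      double precision, allocatable :: vel_dense(:,:) ! 3 x n_owned, not 3 x natom

      natom   = 1000000       ! global atom count
      n_owned = natom / 512   ! this rank's share on, say, a 512-way run

      ! Sparse approach (the memory problem): every rank allocates
      !   allocate (vel_sparse(3, natom))
      ! so per-rank memory grows with the global system size.

      ! Dense approach: dimension by the local count and go through
      ! an ownership list, so per-rank memory grows as natom / nproc.
      allocate (owned(n_owned), vel_dense(3, n_owned))
      do i = 1, n_owned
        owned(i) = (i - 1) * 512 + 1   ! invented ownership pattern
      end do
      vel_dense = 0.0d0

      ! Local slot i holds global atom owned(i); a global-to-local
      ! lookup replaces the direct natom-sized subscript.
      print *, 'dense storage for', n_owned, 'of', natom, 'atoms'
    end program dense_indexing_sketch

The cost is the extra indirection through a global-to-local map,
which is the "indexing through lists" Bob mentions.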

Professor Kenneth M. Merz, Jr.
Department of Chemistry
Quantum Theory Project
2328 New Physics Building
PO Box 118435
University of Florida
Gainesville, Florida 32611-8435

e-mail: merz@qtp.ufl.edu
http://www.qtp.ufl.edu/~merz

Phone: 352-392-6973
FAX: 352-392-8722
Cell: 814-360-0376