Folks -
I just committed changes in cvs for pmemd 9 that should complete all the
planned functionality for this release. The comments I added to the commit
were incomplete, so here is a summary.
The functionality added includes:
1) igb == 7 - John Mongan's new generalized Born stuff.
2) alpb support (analytical linearized Poisson-Boltzmann).
3) netcdf binary trajectory file support (bintraj) - This stuff is so cool
that I dropped my own binary trajectory file scheme, which was simple and
fast but relied on plain fortran binary i/o, which is anything but portable.
We get a full nsec of additional throughput on an SP5 for factor ix using
this stuff (talking top end, 320 procs here); see the netcdf sketch after
this list.
4) a new -suffix command line option. If you put -suffix 020906.1.4proc on
the command line, then all the output files will have this appended to their
default names unless you explicitly specified a name for them on the command
line. So this applies to mdout, mdinfo, mden, mdcrd, mdvel, restrt, and
logfile (pmemd now also permits naming the logfile with a -l flag). This is
a really simple, easy-to-use command line interface enhancement you all may
want to pick up for sander. The code is in get_cmdline.fpp for pmemd; a
rough sketch of the idea follows this list.
5) massive configuration file changes, mostly a refactoring, plus the
addition of an em64t config file for ifort that is basically like the
opteron one.
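
For anyone curious what the bintraj switch looks like at the code level,
here is a minimal sketch of appending one coordinate frame through the
Fortran 90 netcdf interface (nf90_* calls). This is just an illustration,
not the actual bintraj code; the routine name, file name, and the
dimension/variable names are made up to suggest the general shape of a
netcdf trajectory, and error checking is omitted:

    ! Sketch only - illustrative names, error checking omitted.
    subroutine write_nc_frame_sketch(natom, crd, frame)
      use netcdf
      implicit none
      integer, intent(in)          :: natom, frame
      double precision, intent(in) :: crd(3, natom)
      integer, save :: ncid, crd_varid
      integer       :: frame_did, spatial_did, atom_did, stat

      if (frame .eq. 1) then
        ! One-time setup: create the file and define dims/variable.
        stat = nf90_create('mdcrd.nc', NF90_CLOBBER, ncid)
        stat = nf90_def_dim(ncid, 'frame',   NF90_UNLIMITED, frame_did)
        stat = nf90_def_dim(ncid, 'spatial', 3,              spatial_did)
        stat = nf90_def_dim(ncid, 'atom',    natom,          atom_did)
        stat = nf90_def_var(ncid, 'coordinates', NF90_FLOAT, &
                            (/ spatial_did, atom_did, frame_did /), crd_varid)
        stat = nf90_enddef(ncid)
      end if

      ! Append one frame; netcdf handles byte order and record layout,
      ! which is what buys the portability over raw fortran binary i/o.
      stat = nf90_put_var(ncid, crd_varid, crd, &
                          start=(/ 1, 1, frame /), count=(/ 3, natom, 1 /))
    end subroutine write_nc_frame_sketch

The win over fortran unformatted i/o is that the file is self-describing
and readable on any platform with netcdf installed.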
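
And here is a rough sketch of the -suffix idea from get_cmdline.fpp - not
the actual code, and the routine/argument names are invented, but it shows
the behavior: a default output file name only picks up the suffix if the
user did not name that file explicitly on the command line:

    ! Sketch only - illustrative names, not the real get_cmdline.fpp code.
    subroutine apply_suffix_sketch(fname, default_name, user_named, suffix)
      implicit none
      character(len=*), intent(inout) :: fname
      character(len=*), intent(in)    :: default_name, suffix
      logical, intent(in)             :: user_named

      if (.not. user_named .and. len_trim(suffix) .gt. 0) then
        ! e.g. default 'mdout' with -suffix 020906.1.4proc becomes
        ! 'mdout.020906.1.4proc' (assuming a '.' separator).
        fname = trim(default_name) // '.' // trim(suffix)
      end if
    end subroutine apply_suffix_sketch

The same check would simply be repeated for mdout, mdinfo, mden, mdcrd,
mdvel, restrt, and logfile.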
Okay, I need to complete the manual doc and the readme, but code
functionality for 9 should be done. I also need to do more configuration
work, especially in two areas: 1) getting the stuff that relates to em64t
right (I may be missing some lib64 vs. lib issues), and 2) moving forward to
correct library selection for the latest open source mpi implementations (I
think only mpich2 and mvapich are affected). I encounter a fair bit of grief
with nonstandard mpi installs every time I build pmemd someplace new, but
there is not a lot we can do when system folks whack the mpi trees. If
anyone has any input/suggestions regarding config changes or additions, let
me know.
I have not added g95/gfortran because coverage of slow compilers is not in
my mission statement, as it were (plus I can do other things that are more
critical with the time). Mac coverage I leave to somebody with a Mac, but
you will note that, just like the stuff done for optimizing GB on the altix
for sander, if the changes get too machine-specific they tend not to survive
the release of a new version.

I also have a "request for comments" on both Intel MPI and OpenMPI. I dinked
with Intel MPI, and 1) it was not all that easy to install, and 2) I could
not easily figure out how to make it anywhere near as fast as mpich on my
machines (some of this has to do with pointing at netcards dedicated to the
mpi interconnect task - xo, so no switch delay, and fast cards to boot - but
who the heck wants to dink with a slow card through a slow ethernet hub,
i.e. the typical card associated with your hostname). So on Intel MPI, I am
left wondering why the heck anyone would pay several hundred dollars for
something that works worse than something that is free. It DOES look highly
configurable on the performance front, but you have to really want to tune
it to wade through the poor doc. On OpenMPI, I just looked at their site and
noticed no documentation other than a several hundred line readme, so I
thought we may be a little early on the curve to throw wholehearted support
behind this stuff. If it is actually the greatest thing since sliced bread,
somebody tell me and I'll check it out and hack in a configuration.
Best Regards - Bob