> I'm not sure what the "this" refers to in your final sentence. The
> machine I was using has 8 physical cores, but "cat /proc/cpuinfo" reports
> 16, since hyperthreading is turned on. Both pmemd and desmond perform
> better if I ask for 16 processes (the same as threads, at least as I am
> using the term) instead of 8; to be specific, I used "-P 16" for desmond,
> and "mpirun -np 16" for pmemd. I will try "-P 8 -tpp 2" for desmond, but
> hyperthreading makes the OS and programs act as though there were
> really 16 cores. [Apologies for any loose language here: I really don't
> know the correct words to use to give a good description of hyperthreaded
> systems....]
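Since hyperthreading makes /proc/cpuinfo report 16 "processors" on an
8-core machine, it can help to count logical and physical cores
separately. A minimal sketch, using a hypothetical sample of
/proc/cpuinfo fields (the parsing logic is the same against the real
file):

```shell
# Hypothetical /proc/cpuinfo excerpt from a 2-core, 4-thread
# (hyperthreaded) machine, for illustration only.
cat > cpuinfo.sample <<'EOF'
processor	: 0
physical id	: 0
core id	: 0
processor	: 1
physical id	: 0
core id	: 1
processor	: 2
physical id	: 0
core id	: 0
processor	: 3
physical id	: 0
core id	: 1
EOF

# Logical CPUs: what the OS (and mpirun -np / desmond -P) see.
logical=$(grep -c '^processor' cpuinfo.sample)

# Physical cores: unique (physical id, core id) pairs, since each
# hyperthread repeats its parent core's ids.
physical=$(grep -E '^(physical id|core id)' cpuinfo.sample \
           | paste - - | sort -u | wc -l)

echo "logical=$logical physical=$physical"
```

On the real machine described above, the same counts would come out as
logical=16, physical=8.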
This makes it clear, thanks. In my case I ran Desmond on
8-physical-core machines as well, but without hyperthreading;
therefore, -P 16 meant two nodes connected with Infiniband, which is
different from your setup.
Sorry for being sloppy: by "this" I meant that with the current jac
test, -noopt doesn't make a difference. It is a good safeguard though,
e.g., when one sets a non-power-of-two PME grid resolution, where the
Schrodinger desmond script (not Desmond itself) will (or at least used
to) reset it to the closest power-of-two resolution.
> Reminder: I don't want people to think these are serious benchmark
> comparisons. My basic conclusion is what I have thought since the
> original announcement of desmond results on commodity hardware: at low
> amounts of parallelism, pmemd and desmond are not that different in
> performance -- i.e. much less than a factor of 2. Of course, a 30-40%
> speed difference can be important, and results at higher levels of
> parallelism (probably) favor desmond by increasing amounts over pmemd.
Agreed. Desmond is most useful when run across a large number of nodes
with an Infiniband interconnect.
_______________________________________________
AMBER-Developers mailing list
AMBER-Developers.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber-developers
Received on Mon Jul 26 2010 - 04:30:04 PDT