[AMBER-Developers] Help needed with MPI parallelism

From: <dcerutti.rci.rutgers.edu>
Date: Thu, 28 Feb 2013 12:24:25 -0500 (EST)

Hello Amber Devs,

Does anyone out there have strong MPI experience and an ability to help me
over the next few weeks, perhaps through phone conversations? I am making
another push to get the molecular dynamics engine of mdgx to run on 64 or
more processors so that I can do more science with the code over the
coming year.

Currently there is a parallel implementation, and in some respects it is done
"the right way," but there are weak links that I will need to shore up one by
one before the code is truly parallel. There are steps I will be taking over
the next week, but before I make big changes to the code I want a better
understanding of why the current code gives the performance I see.

Things I'm trying to understand include why I have to call MPI_Barrier()
after my MPI_Waitall() calls, and whether my attempt to mask the
communication time of a certain message is really working as intended. I've
got a system in the code for collecting timing data and sorting the time
spent into an arbitrary number of categories, so it's convenient to make and
test hypotheses.


AMBER-Developers mailing list
Received on Thu Feb 28 2013 - 09:30:04 PST