case wrote:
> On Mon, Oct 12, 2009, Ben Roberts wrote:
>
> [This discussion should be moved over to amber-developers.]
>
> Right now, the -openmp flag only applies to NAB. I'll update the configure
> script to make this clear.
>
> ...dac
Dave,
Not a problem. I had only sent it to the main list because it seemed
that non-developers with SMP machines might be tempted to give it a go.
Ross,
I've responded here for ease of further discussion.
> Simple answer is no, it does not. Parts of AmberTools (NAB) support OpenMP
> parallelization, but that is all.
>
> Long answer: The -openmp flag is there but undocumented, since it is work
> in progress and meant only for developmental use. It actually turns on
> hybrid OpenMP/MPI support, but ONLY for matrix diagonalization within the
> QM/MM code. It is mainly intended to allow support for the OpenMP
> parallelization of LAPACK routines such as dspevd and dsyevd inside
> Intel's MKL libraries. The idea is to use these in concert with an MPI run
> so that all 8 cores of an SMP node can be used by the master MPI rank to
> do threaded diagonalization in QM/MM, while the other 7 MPI ranks running
> on that node sit idle at a barrier waiting for the diagonalization to
> finish. In short, it is VERY experimental and best left alone unless you
> want to experiment with it to improve QM/MM performance. If you do, it is
> probably best to contact me directly and I can give you some examples of
> how to use it. It is tricky: a LOT of MPI libraries set thread affinity,
> so if you just naively turn it on, you get all 8 OpenMP threads locked to
> the same core as the master rank, and performance goes through the floor.
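To make that pattern concrete, here is a minimal C sketch of the hybrid
scheme Ross describes (purely illustrative, not code from the AMBER tree):
the master MPI rank fans out across the node's cores with an OpenMP
parallel region, standing in for the threaded MKL dspevd/dsyevd call,
while every other rank waits at a barrier.

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* Master rank: use all of the node's cores. In sander this
             * is where the threaded MKL diagonalizer would run. */
            #pragma omp parallel
            printf("rank 0: OpenMP thread %d of %d\n",
                   omp_get_thread_num(), omp_get_num_threads());
        }

        /* Every other rank sits idle here until rank 0 is done. */
        MPI_Barrier(MPI_COMM_WORLD);
        MPI_Finalize();
        return 0;
    }

Build with something like "mpicc -fopenmp hybrid.c" and set OMP_NUM_THREADS
(and, for MKL, MKL_NUM_THREADS) before launching. The affinity trap Ross
mentions shows up when the MPI launcher pins each rank to a single core:
rank 0's eight OpenMP threads then all inherit that one core. The knob for
relaxing the pinning depends on the MPI library, so check its
processor-affinity settings.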
OK, so the upshot of all that is that OpenMP is only supported in a few
parts of the code. That makes sense given my results the other day.
Specifically, I tried increasing OMP_NUM_THREADS, but the programs would
still only use one CPU; I thought it might be because I was running things
the wrong way, but clearly that's not the case. (Or, at least, that's not
the only reason things might not work.)
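A quick way to tell "this binary was not built with OpenMP" apart from
"thread affinity is pinning everything to one core" is a trivial
standalone test, along these lines (again just an illustration, nothing to
do with the AMBER build):

    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        /* Report how many threads a parallel region actually gets;
         * this should match OMP_NUM_THREADS if threading works. */
        #pragma omp parallel
        #pragma omp master
        printf("running with %d OpenMP threads\n", omp_get_num_threads());
        return 0;
    }

If this prints 1 despite OMP_NUM_THREADS=8, the OpenMP compiler flag
(-fopenmp for gcc, -openmp for Intel) was missing; if it prints 8 but top
shows all the work on one core, you are in the affinity trap Ross
describes.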
Since I'm not running many simulations with Amber's QM/MM engine at the
moment, I wasn't too worried about the matrix diagonalisation code, so
I'll leave -openmp out of my builds for the time being. I was mostly
interested in whether sander MD could take advantage of OpenMP at present;
you and Dave have answered that question quite definitively.
Are there any views on the desirability of OpenMP parallelisation in
general vs. MPI? If it's likely to be a lot of gain for not much pain, I
suppose I (or someone else, of course) could start looking into it. But
if the reverse, I would be quite content to stick with an MPI
implementation.
Cheers,
Ben