RE: [AMBER-Developers] Re: [AMBER] OpenMP

From: Ross Walker <>
Date: Mon, 12 Oct 2009 14:23:58 -0700

Hi Ben,

> OK, so the upshot of all that is that it's only supported for very few
> parts of the code. That makes sense given my results the other day.
> Specifically, I tried increasing OMP_NUM_THREADS but the programs would
> still only access one CPU; I thought it might be because I was trying to
> run things the wrong way, but clearly that's not the case. (Or, at
> least, that's not the only reason why things might not work.)

Indeed, although as usual it is not quite so simple. If you turn on openMP,
the code will ALWAYS ignore OMP_NUM_THREADS and set the number of openMP
threads to 1. This is because MKL now includes SMP parallelization of the
vector routines. Thus if you ran 8 MPI tasks on a dual quad-core machine and
linked in the parallel MKL (needed for the SMP diagonalization), you would
otherwise get 64 threads (8 MPI tasks x 8 MKL threads each) for every
vdinvsqrt call, which is, as you can imagine, a disaster!

Thus right now if you turn on openMP it switches the MKL linking to the
parallel version but deliberately sets the number of MKL threads to 1, to
'protect' you from thrashing when mixing MPI and openMP. The number of
openMP threads to use for diagonalization in QM/MM is set in the qmmm
namelist with:

#ifdef OPENMP
  integer :: qmmm_omp_max_threads !Maximum openMP threads to use inside QMMM
                                  !routines. If diag_routine /= 0 then this
                                  !value will be used as the argument to
                                  !omp_set_num_threads for all threaded QMMM
                                  !functions. If diag_routine = 0 then the
                                  !code will test the performance from
                                  !1 thread to this number to find the
                                  !optimum value to use.
#endif

Thus if you do not change this you get the default of 1, so you would see
nothing from enabling openMP.

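For what it's worth, a hypothetical sander QM/MM input fragment showing where
that option lives (qmmm_omp_max_threads and diag_routine are the variables
discussed above; the other keywords are just the usual illustrative qmmm
namelist entries, so check them against the manual):

    &qmmm
      qm_theory = 'PM3',         ! illustrative semi-empirical choice
      qmcharge = 0,
      diag_routine = 0,          ! 0 = time each routine/thread count and pick
      qmmm_omp_max_threads = 4,  ! try 1..4 openMP threads for diagonalization
    /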
> Since I'm not running many QM/MM simulations using Amber's QM/MM engine
> at the moment, I wasn't too worried about the matrix diagonalisation
> code. So I'll leave -openmp out of my builds for the time being. I was
> mostly interested in whether it was something sander MD could take
> advantage of at present; you and Dave have answered that question quite
> definitively.

Mike Crowley and I considered this a while ago. The big issue is
maintenance. The openMP code is VERY fragile and, when mixed with MPI code,
whether independent or in a hybrid fashion, it gets extremely difficult to
maintain. With lots of people checking code into sander I think it would get
broken in a matter of days. The other BIG issue with openMP is that it is
VERY intolerant of cache thrashing, and it does not fail gracefully. As soon
as one of your openMP loops goes below the cache size, so that two threads
write to the same cache line, performance gets destroyed. Not just a little
bit: you go from a calculation taking seconds to taking years, literally.
This is the MAJOR flaw in openMP that the chip manufacturers 'conveniently'
ignore when they talk about mixing openMP with MPI etc. The net result is
that MPI, perhaps surprisingly, is actually much easier to program and
obtain good performance with than openMP.

> Are there any views on the desirability of OpenMP parallelisation in
> general vs. MPI? If it's likely to be a lot of gain for not much pain, I
> suppose I (or someone else, of course) could start looking into it. But
> if the reverse, I would be quite content to stick with an MPI
> implementation.

Please feel free to try, given the caveats above... I have not been
successful with it, but others have. E.g. NAB, I believe, works well with
openMP. But then its code turnover rate is a lot lower.

All the best

|\oss Walker

| Assistant Research Professor |
| San Diego Supercomputer Center |
| Tel: +1 858 822 0854 |
| PGP Key available on request |

Note: Electronic Mail is not secure, has no guarantee of delivery, may not
be read every day, and should not be used for urgent or sensitive issues.

AMBER-Developers mailing list
Received on Mon Oct 12 2009 - 14:30:03 PDT