RE: amber-developers: State of PIMD / NEB test cases.

From: Ross Walker <ross.rosswalker.co.uk>
Date: Tue, 8 May 2007 20:11:04 -0700

Hi Francesco,

> i just updated the tests for PIMD/NMPIMD/CMD and NEB.
> Everything is in
> test/PIMD.

You didn't need to add NEB test cases. I created a new directory called neb
under test and have started putting NEB test cases in there - tests for both
classical and QMMM NEB with a series of different options. Since NEB runs
under the regular sander executable, like PIMD does, it didn't seem to make
sense to have it under a PIMD directory. So could you remove the NEB test
cases that you added?

Note there is still a problem with NEB, and I assume PIMD as well, in parallel
when you have more processors than groups. This doesn't show up with ifort, I
think because arrays are zeroed when allocated, but with xlf90 it is
disastrous. I have told Wei about it and I think he is looking into it. I
think the force array is not zeroed correctly on all the nodes, but I figured
it would be quicker for Wei to track down the exact problem.

Anyway, this does not show up in the test cases for PIMD since they get
hard-wired to have nproc = ngroup, which means the case of more cpus than
groups never gets tested. So I think you might want to edit the PIMD test
cases and remove the following:

  set MY_DO_PARALLEL="$DO_PARALLEL"
  set numprocs=`echo $DO_PARALLEL | awk -f numprocs.awk `
  if ( $numprocs != 4 ) then
    echo "this test is set up for 4 nodes only, changing node number to
4..."
    set MY_DO_PARALLEL=`echo $DO_PARALLEL | awk -f chgprocs.awk`
  endif

Have a look at how I have done it in test/neb/neb_gb/ which I think is more
generic:

if( ! $?DO_PARALLEL ) then
  echo " NEB can only be run in parallel. "
  echo " This test case requires a minimum of 8 mpi threads to run."
  echo " set env var DO_PARALLEL"
  echo " Not running test, exiting....."
  exit(0)
else
  set numprocs=`echo $DO_PARALLEL | awk -f ../../numprocs.awk `
  if ( $numprocs != 8 && $numprocs != 16 && $numprocs != 24 ) then
    echo " This test case requires a least 8 mpi threads."
    echo " The number of mpi threads must also be a multiple of 8 and not
more than 24."
    echo " Not running test, exiting....."
    exit(0)
  endif

Having to have at least the same number of mpi threads as groups is a bit of
a pain for testing, as it makes it tempting to create unrealistic test
cases - like having just 4 groups. Ideally for NEB this should be something
like 32 replicas, but then it would require the user to set DO_PARALLEL to
use 32 cpus, which could play havoc with all the other test cases. So I'm not
sure what the best option is here - for the moment I have been settling on 8
replicas for NEB, which is at least more realistic than 4, but of course this
still requires running the parallel test cases with at least 8 cpus.
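
For what it's worth, a more generic way a Run script could phrase the check
would be to accept any thread count that is a positive multiple of the number
of replicas. This is only a sketch - the variable names are made up and I'm
just reusing the ../../numprocs.awk call from the snippet above:

set nreplicas = 8
if ( ! $?DO_PARALLEL ) then
  echo " NEB can only be run in parallel - set env var DO_PARALLEL."
  exit(0)
endif
set numprocs=`echo $DO_PARALLEL | awk -f ../../numprocs.awk `
@ remainder = $numprocs % $nreplicas
if ( $numprocs < $nreplicas || $remainder != 0 ) then
  echo " This test needs a multiple of $nreplicas mpi threads, not running."
  exit(0)
endif

That would let people run with 8, 16, 24, 32, etc. without us having to
enumerate every allowed count in each test case.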

Anyway, just something for you to try - if you can think of a better
approach than the above please let me know, since it is far from perfect...

I have also added a numprocs.awk to the root test directory - it is
probably best if all test cases use this instead of having a local copy in
their own test directory. That way, when we work out how to deal with systems
that don't specify nprocs with "-np", we will only have one file to update.
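
Just to illustrate what I mean by the "-np" parse (this is only a rough
equivalent written inline - the real numprocs.awk may well differ), the gist
is to scan DO_PARALLEL for "-np N" and print N, defaulting to 1:

set numprocs=`echo $DO_PARALLEL | awk '{ n = 1; for (i = 1; i < NF; i++) if ($i == "-np") n = $(i+1); print n }'`
echo "DO_PARALLEL requests $numprocs mpi threads"

Systems whose launchers don't take "-np" are exactly the case this doesn't
cover, which is why keeping a single shared copy to update seems safest.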
 
> so, i think that the following tests can be removed:
> pimd_water/P=8/pimd
> pimd_water/P=16/pimd
> pimd_water/P=32/pimd
> neb_gb
> pimd_helium/P=4/pimd
> pimd_spcfw

Okay, I'll leave it to Dave to purge these from the master tree.
 
> tests for pimd_gb and qmmm+pimd are still missing. for these
> i need your
> help because i do not know how to generate the proper prmtop files to run
> with the new implementation of PIMD within sander.MPI.

Ah okay, email me a suitable system that you would run classically and I'll
see about setting it up for QMMM. In terms of prmtop generation it should be
no different from running classically. That is, if you can build a prmtop
file for running classically you don't have to change anything to run QM/MM -
just edit the mdin file and add the necessary options.
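
For reference, the kind of thing I mean is just a Run-script style mdin with
ifqnt=1 in &cntrl plus a &qmmm namelist. The snippet below is only a sketch -
the mask, charge, file names and the sander variable are placeholders, and
the &qmmm keywords should be checked against the manual for the sander
version you are building against:

# sketch only: generate a QM/MM mdin on the fly, as the Run scripts do;
# the &qmmm values below are placeholders, not a recommended setup
cat > mdin << EOF
 illustrative QM/MM input - placeholder values only
 &cntrl
   imin = 0, ntb = 0, cut = 999.0,
   nstlim = 25, dt = 0.0005,
   ifqnt = 1,
 /
 &qmmm
   qmmask = ':1',
   qmcharge = 0,
 /
EOF
$DO_PARALLEL $TESTsander -O -i mdin -p prmtop -c inpcrd -o mdout

The prmtop and inpcrd are exactly the ones you would use classically; only
the mdin changes.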

All the best
Ross

/\
\/
|\oss Walker

| HPC Consultant and Staff Scientist |
| San Diego Supercomputer Center |
| Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk |
| http://www.rosswalker.co.uk | PGP Key available on request |

Note: Electronic Mail is not secure, has no guarantee of delivery, may not
be read every day, and should not be used for urgent or sensitive issues.
Received on Wed May 09 2007 - 06:07:39 PDT