Hi Scott,
I finally have time to get back to this.
Thanks for the info. I didn't mean to imply that you did something wrong,
just pointing out (as you had already done, I guess!) that the NEB test case
didn't work and apparently had not been run in a long time. I didn't make
the test case (or write the code), but in trying to use it I wanted to make
sure everyone knows it does not work.
The LES/PIMD/NEB code is convoluted: it's not clear which parts of the code
are for which method, and I'm not sure who knows how any of it works.
Some of the "LES" code is really for PIMD or NEB, and there's not enough
information in the code for me to know how to fix it (as you said).
Perhaps Wei, or whoever wrote this, could look at it and either add more
comments or fix the code?
Thanks again,
Carlos
On 9/6/07, Scott Brozell <sbrozell.scripps.edu> wrote:
>
> Hi,
>
> On Thu, 6 Sep 2007, Carlos Simmerling wrote:
>
> > Scott- I tracked this problem down to something you recently changed in
> > sander.f
> >
> > ! SRB 07/2007 added pimd initialization for neb; this should be verified!
> > if ( ipimd > 0 .or. ineb > 0 ) then
> >
> > this causes the same variables to be allocated in pimd_init and neb_init.
> >
> > was there a reason for needing this, or should I revert it back?
>
> The reasons were given in the cvs logs:
> RCS file: /thr/loyd/case/cvsroot/amber10/src/sander/sander.f,v
> ----------------------------
> revision 9.27
> date: 2007/07/20 21:58:53; author: sbrozell; state: Exp; lines: +4 -1
> Added pimd initialization for the case ipimd == 0 and ineb > 0.
> Pimd initialization for neb is necessary because neb uses
> several allocated pimd variables; pimd_vars:: nrg_all and
> full_pimd_vars:: xall for example.
> However, I have no idea whether I have done this correctly !
> In addition, the role of neb in subroutine do_pme_recip looks
> very suspicious.
> There are currently no test cases with LES and neb.
> Thus, pimd, neb, and les related code should be carefully inspected
> by their developers !!!!!
> ----------------------------
> RCS file: /thr/loyd/case/cvsroot/amber10/src/sander/ew_force.f,v
> revision 9.18
> date: 2007/07/20 21:21:04; author: sbrozell; state: Exp; lines: +7 -5
> Corrected the bug exercised by test/LES/Run.PME_LES.
> This bug caused a seg fault because allocated variable
> part_pimd_vars::pimd_mmchg was used without being defined.
> I added the same guarding-if-statement used in another section
> of subroutine do_pme_recip. That seems correct; however, do_pme_recip
> should be carefully examined because the interplay between pimd, neb,
> les, and variable mpoltype is not clear to me; in particular,
> I wonder whether the last les ifdef code fragment should be inside
> the mpoltype == 0 if-statement ????? Here is the fragment:
> #ifdef LES
> if( ipimd > 0 ) then
> eer = eer_sum/ncopy
> frc = frc + ftmp/ncopy
> frcx_copy = frcx_copy/ncopy
> end if
> #endif
> Regardless, the logic in do_pme_recip should be less convoluted or
> at least clarified with comments.
> In addition, the role of neb in do_pme_recip looks very suspicious.
> There are currently no test cases with LES and neb.
>
>
> And the mess was reported to amber-developers:
> Date: Fri, 20 Jul 2007 15:18:46 -0700
> From: Scott Brozell <sbrozell.scripps.edu>
> Subject: amber-developers: PIMD, NEB, LES - request for code inspection
> and tests
>
>
> My commits were patches to get the code to compile and to not crash.
> This was in the context of the nightly testing.
> In addition to my requests above for more tests of these features,
> let us have some tests with small numbers of processors - 1 or 2 or maybe 4.
> The nightly tests are not running the neb tests because they use only 2
> processors:
> cd neb/neb_gb; ./Run.neb_classical
> This test case requires a least 8 mpi threads.
> The number of mpi threads must also be a multiple of 8 and not more than 24.
> Not running test, exiting.....
> cd neb/neb_gb_large_system; ./Run.neb_ls_classical
> This test case requires a least 32 mpi threads.
> The number of mpi threads must also be a multiple of 32 and not more than 128.
> Not running test, exiting.....
> export TESTsander=/tmp/amber10/exe/sander.LES.MPI; make test.sander.PIMD.partial
> make[1]: Entering directory `/tmp/amber10/test'
> cd PIMD/part_pimd_water; ./Run.pimd
> This test not set up for parallel
> cannot run in parallel with #residues < #pes
> cd PIMD/part_nmpimd_water; ./Run.nmpimd
> This test not set up for parallel
> cannot run in parallel with #residues < #pes
>
>
> The ball is in the court of the PIMD, NEB, LES developers.
>
> Scott
>
>
> > On 9/6/07, Carlos Simmerling <carlos.simmerling.gmail.com> wrote:
> > >
> > > is the neb test case failing for anyone else with the current amber10
> > > CVS version?
> > > The test case as well as my own runs are giving me
> > >
> > > ASSERTion 'ierr.eq.0' failed in pimd_init.f at line 240.
> > >
> > > I don't see anything obviously wrong with that code.
> > >
> > > just checking to see if anyone else is seeing this.
> > > carlos
> >
>
>
--
===================================================================
Carlos L. Simmerling, Ph.D.
Associate Professor Phone: (631) 632-1336
Center for Structural Biology Fax: (631) 632-1555
CMM Bldg, Room G80
Stony Brook University E-mail: carlos.simmerling.gmail.com
Stony Brook, NY 11794-5115 Web: http://comp.chem.sunysb.edu
===================================================================
Received on Wed Sep 19 2007 - 06:07:42 PDT