Re: [AMBER-Developers] lmod xmin problems with sander and sqm

From: Andreas Goetz <agoetz.sdsc.edu>
Date: Wed, 09 Dec 2009 17:31:15 -0800

Hi Scott,

Thanks for creating the bug report!

The optimized geometries obtained with sqm are OK. I checked AM1
geometry optimizations with sqm against mopac2009 and orca 2.6.35
(which I particularly trust), testing both Sustiva and NMA.

Looking into the AMBER10 sustiva test
($AMBERHOME/test/antechamber/sustiva) shows that the mopac6 geometry
optimization actually does not converge, which explains the difference
in optimized structure with respect to this old test. sqm gives the
right answer.

This still leaves the question of why sqm requires so many steps to
converge. For my NMA test, for example, I needed 51/87/188 steps with
orca/mopac2009/sqm (all optimizations in Cartesian coordinate space
with tight convergence criteria, see below).

Convergence criteria seem to be OK. From the ab initio/DFT world I am
used to obtaining well converged geometries with Cartesian gradient
convergence criteria on the order of
GRMS = 1.0E-04 au = 0.12 kcal/(mol*A)
GMAX = 3.0E-04 au = 0.36 kcal/(mol*A)
or extremely well converged geometries with GRMS/GMAX one order of
magnitude tighter (which is what I used for orca).

At the moment sqm uses by default
GRMS = 2.0E-02 kcal/(mol*A) = 1.7E-05 au
which seems reasonable to me (if one wants very well converged
geometries).

For comparison, mopac2009 with the PRECISE keyword uses
GRMS = 5.0E-02 kcal/(mol*A) = 4.2E-05 au
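
For anyone who wants to double check these conversions, here is a
minimal sketch (my own, not taken from the sqm sources), using
1 hartree = 627.509 kcal/mol and 1 bohr = 0.529177 A:

  program grad_units
    implicit none
    ! 1 hartree/bohr expressed in kcal/(mol*Angstrom)
    real(8), parameter :: au2kcal = 627.509d0 / 0.529177d0
    write(*,'(a,es9.2)') 'GRMS 1.0E-04 au in kcal/(mol*A):  ', 1.0d-4*au2kcal
    write(*,'(a,es9.2)') 'sqm 2.0E-02 kcal/(mol*A) in au:   ', 2.0d-2/au2kcal
    write(*,'(a,es9.2)') 'mopac 5.0E-02 kcal/(mol*A) in au: ', 5.0d-2/au2kcal
  end program grad_units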

Thanks and all the best,
Andy

Scott Brozell wrote:
> Hi,
>
> On Sat, Dec 05, 2009 at 09:56:34PM -0500, case wrote:
>> On Fri, Dec 04, 2009, Andreas Goetz wrote:
>>> Charges in mol2 file are different (sustiva.mol2 vs sustiva.mol2.save).
>> These small diffs in charges (about 0.001 electron) have been seen for
>> divcon, mopac and now sqm. I did have hopes that using sqm would allow for
>> completely reproducible minimizations with different compilers, but that does
>> not seem to be the case. Of course, it is possible that we can fix things
>> (perhaps by forcing a very small gradient?) in a way that would remove
>> diffs between compilers; but I don't think the current results are a
>> show-stopper for AmberTools.
>
> I created a bug for the sander issues; these have also been reported before.
> http://bugzilla.ambermd.org/show_bug.cgi?id=120
>
> As far as sqm is concerned, it's not clear those issues merit a bug.
> We may still try to tidy up some tests for AmberTools 1.3.
>
> thanks,
> Scott
>
> On Sat, Dec 05, 2009 at 06:41:46PM +0100, istvan.kolossvary.hu wrote:
>> Hi Andreas,
>>
>> Ben also reported similar issues when using xmin. It is only very
>> recently that Sander and SQM have started using xmin and lmod from
>> AmberTools. Sander used to have its own lmod/xmin and SQM didn't use
>> either. The problems you guys see must be related to the new lmod/xmin
>> drivers. The lmod and xmin libraries are completely self-contained and
>> thoroughly tested in AmberTools. If they get the right input, they
>> should work fine. I will look at the new drivers to see what might be
>> going on and will probably ask for Dave's help since I am not familiar
>> with either Sander or SQM. It would also make sense to have a single
>> set of lmod/xmin tests, preferably those already in AmberTools. I'll
>> work on this and will get back to you.
>>
>> Istvan
>>
>> Quoting Andreas Goetz <agoetz.sdsc.edu>:
>>
>>> Hi,
>>>
>>> I have been looking into geometry optimizations with sander using
>>> xmin (ntmin=3) and with sqm (which uses xmin by default). There are
>>> two problems with the output, and the test jobs for sander and sqm
>>> fail. This affects both the Amber11 development tree (sander and sqm)
>>> and the AmberTools 1.3 release candidate (sqm).
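>>>
>>> For reference, this is the kind of input involved (a minimal mdin
>>> sketch, not the actual test inputs: imin=1 requests a minimization,
>>> ntmin=3 selects the xmin minimizer, and drms is the RMS gradient
>>> convergence criterion in kcal/(mol*A)):
>>>
>>>   geometry optimization with xmin
>>>   &cntrl
>>>     imin=1, ntmin=3, maxcyc=1000, drms=0.01,
>>>   /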
>>>
>>> I am running opensuse 11.2:
>>> uname -a
>>> Linux gecko 2.6.31.5-0.1-default #1 SMP 2009-10-26 15:49:03 +0100
>>> x86_64 x86_64 x86_64 GNU/Linux
>>>
>>> I compiled with Intel 11.0.074 with MKL:
>>> ifort -V
>>> Intel(R) C Intel(R) 64 Compiler Professional for applications
>>> running on Intel(R) 64, Version 11.0 Build 20081105 Package ID:
>>> l_cproc_p_11.0.074
>>>
>>> and gnu compilers without MKL:
>>> gcc --version
>>> gcc (SUSE Linux) 4.4.1 [gcc-4_4-branch revision 150839]
>>>
>>> Ross confirmed problems described below for Amber11 with Intel
>>> Compiler version 10.1.018 on RHEL 5.
>>>
>>>
>>> A) sander:
>>> ==========
>>> Amber11 cvs tree (updated Dec 3rd, 2pm)
>>>
>>> $AMBERHOME/test/dhfr/Run.dhfr.lmodxmin
>>> --------------------------------------
>>> I am attaching my output file (intel compiler) for reference
>>> (mdout.dhfr.lmodxmin)
>>>
>>> There are three issues:
>>> 1)
>>> The RMS value of the gradient is printed as being zero for all steps
>>> of the geometry optimization. The relevant code is in subroutine
>>> run_xmin ($AMBERHOME/src/sander/lmod.f). The RMS value should be
>>> returned by function xminc (variable grms), so something goes wrong
>>> here - any help is appreciated.
>>>
>>> In addition, for printing during the geometry optimization, this is
>>> the *wrong place* to calculate the RMS value of the gradient: the
>>> gradient is calculated in subroutine gradient_calc *after* the call
>>> to xminc, and the progress of the minimization is printed there by
>>> subroutine report_min_progress once the gradient has been calculated.
>>> Thus either report_min_progress has to be called outside of
>>> gradient_calc, or the RMS of the gradient has to be calculated in
>>> gradient_calc. Comments?
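>>>
>>> (For reference, the quantity to print is simply the root mean square
>>> over all Cartesian gradient components - a minimal sketch, not the
>>> actual sander code:
>>>
>>>   function grad_rms(g, n) result(grms)
>>>     implicit none
>>>     integer, intent(in) :: n      ! n = 3*natom
>>>     real(8), intent(in) :: g(n)   ! Cartesian gradient
>>>     real(8) :: grms
>>>     grms = sqrt(sum(g*g) / dble(n))
>>>   end function grad_rms
>>>
>>> so it can be computed wherever the current gradient is available,
>>> e.g. inside gradient_calc.)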
>>>
>>> 2)
>>> The second error is that the number of steps NSTEP printed for the
>>> FINAL RESULTS is wrong. The reason is that the variable xmin_iter (see
>>> subroutine run_xmin in $AMBERHOME/src/sander/lmod.f) does not count
>>> the number of iterations correctly. xmin_iter is updated by function
>>> xminc - any idea what it counts? The variable n_force_calls contains
>>> the correct number of geometry optimization steps (one force call per
>>> geometry optimization step) and should probably be used instead. Comments?
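>>>
>>> One possible explanation (purely a schematic toy with made-up
>>> numbers, not the real xminc logic): if xmin_iter counts only outer
>>> minimizer iterations while the line search makes several force calls
>>> per iteration, the two counters drift apart, and only n_force_calls
>>> maps one-to-one onto the printed optimization steps.
>>>
>>>   program counter_demo
>>>     implicit none
>>>     integer :: n_force_calls, xmin_iter
>>>     n_force_calls = 0
>>>     xmin_iter = 0
>>>     do while (n_force_calls < 9)
>>>       n_force_calls = n_force_calls + 1   ! one per energy/gradient call
>>>       ! pretend every third force call completes one outer iteration
>>>       if (mod(n_force_calls, 3) == 0) xmin_iter = xmin_iter + 1
>>>     end do
>>>     ! prints: force calls = 9, outer iterations = 3
>>>     print *, 'force calls:', n_force_calls, 'outer iterations:', xmin_iter
>>>   end program counter_demo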
>>>
>>> 3)
>>> The geometry optimizer takes different steps compared with the saved
>>> test output: the energies and the maximum gradient element already
>>> differ after the first step.
>>>
>>>
>>> I will file a bug report on the Amber Bugzilla about this.
>>>
>>>
>>> According to the cvs log, the test
>>> $AMBERHOME/test/dhfr/Run.dhfr.lmodxmin was created only very recently
>>> (2009/08/18). The Amber10 manual already describes the xmin method
>>> for geometry optimization, so there should be other test jobs that
>>> verify the implementation. I found only one
>>> ($AMBERHOME/test/gbrna/Run.gbrna.xmin), but it is not invoked by the
>>> test Makefiles - any idea why? When invoked manually, this test fails
>>> for the same reasons as given above.
>>>
>>>
>>> B) sqm:
>>> =======
>>> AmberTools 1.3 RC (AmberTools.24nov09.tar.bz2):
>>> AmberTools cvs tree (updated Dec 3rd, 2pm)
>>>
>>> $AMBERHOME/test/antechamber/sustiva
>>> -----------------------------------
>>> I am attaching my output files (intel compiler) for reference
>>> (sustiva.mol2 and sqm.out)
>>>
>>> Charges in mol2 file are different (sustiva.mol2 vs sustiva.mol2.save).
>>> Reason: sqm needs 105 additional geometry optimization steps to
>>> converge, and the energies already differ during the first couple of
>>> steps. As a consequence the geometry and the charges are different.
>>> My guess is that this is related to the problems I observed for
>>> sander, since sqm uses xmin for geometry optimization.
>>>
>>> I also compared the structures obtained from sqm with mopac (test
>>> output in Amber10) - the structures (and the resulting charges) are
>>> different. I find this worrisome. Also, sqm requires many more steps
>>> than mopac to converge the geometry - this, however, may be related
>>> to the coordinate set (Cartesian vs internal or redundant internal?)
>>> in which the optimization is performed. I will set up different test
>>> cases and look into this.
>>>
>>> $AMBERHOME/test/antechamber/ash
>>> -------------------------------
>>> Charges in mol2 file are different (ash.mol2 vs ash.mol2.save).
>>> Reason: Probably the same as for $AMBERHOME/test/antechamber/sustiva
>>> (there is no sqm.out.save here to check).
>>>
>>> All other antechamber tests that use sqm for charge generation
>>> pass. My guess is that the problem persists but the optimized
>>> geometries are close enough to generate identical charges (there is
>>> no way to check because there is no sqm.out.save).
>>>
>>> $AMBERHOME/test/sqm/AM1
>>> -----------------------
>>> This test is not invoked by "make -f Makefile_at test" (is this
>>> intentional?). It does fail when invoked manually: again, the
>>> energies of the geometry optimization steps are different, and an
>>> additional 23 steps are required for convergence, which results in a
>>> different geometry and different charges.

_______________________________________________
AMBER-Developers mailing list
AMBER-Developers.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber-developers
Received on Wed Dec 09 2009 - 17:30:03 PST