Re: [AMBER-Developers] Looking for another volunteer--make a "fast" test suite?

From: Hai Nguyen <nhai.qn.gmail.com>
Date: Sun, 10 Sep 2017 03:27:03 -0400

Hi all,

Please see the attached file for the timing of the 546 serial tests for AT.
It would be great if you could have a look and try to reduce the time for
the package(s) that you maintain.

Top 10:

cd ../src/mm_pbsa/Examples && AMBER_SOURCE=/home/haichit/amber_git/amber
./Run.mmpbsa.test: 1800.06 (s)

cd nmropt && make all : 763.98 (s)

cd antechamber && make -k test : 595.61 (s)

cd abfqmmm/abfqmmm_lysozyme_md && ./Run.abfqmmm_lysozyme_md: 567.46 (s)

cd rism3d.periodic/2igd && ./Run.2igd.kh.pme: 410.24 (s)

cd rism3d.periodic/1d23 && ./Run.1d23.kh.pme: 389.76 (s)

cd mmpbsa_py && make test : 363.11 (s)

cd abfqmmm/abfqmmm_dmpoh_md && ./Run.abfqmmm_dmpoh_md: 318.02 (s)

cd qmmm_DFTB/aladip_tip3p_ewaldpme && ./Run.aladip_ewald_ntb1_link_atoms:
244.31 (s)

cd sander_pbsa_frc && ./test : 181.40 (s)

cd qmmm2/MG_QM_water_MM_AM1_periodic && ./Run.notimaged_md_pme_qmewald:
146.09 (s)

cd qmmm2/MG_QM_water_MM_AM1_periodic &&
./Run.notimaged_md_pme_qmewald_lowmem: 142.08 (s)

cd /home/haichit/amber_git/amber/AmberTools/src/cpptraj/test && make -k
test: 140.24 (s)

cd qmmm_DFTB/MG_QM_water_MM_DFTB_periodic &&
./Run.notimaged_md_pme_qmewald: 138.89 (s)

cd rism3d/ala && ./Run.ala : 133.76 (s)

cd rism3d.periodic/4lzta && ./Run.4lzta_5.kh.pme: 111.82 (s)

cd qmmm_DFTB/aladip_tip3p_ewaldpme && ./Run.aladip_ewald_ntb1_qmewald2:
107.47 (s)

cd amd && make -k test : 106.39 (s)

cd qmmm_DFTB/aladip_tip3p_ewaldpme && ./Run.aladip_ewald_ntb1: 105.20 (s)

cd qmmm_DFTB/aladip_tip3p_ewaldpme && ./Run.aladip_ewald_ntb2: 98.91 (s)

cheers
Hai

On Fri, Sep 8, 2017 at 12:33 AM, Hai Nguyen <nhai.qn.gmail.com> wrote:

>
>
> On Fri, Sep 8, 2017 at 12:23 AM, Hai Nguyen <nhai.qn.gmail.com> wrote:
>
>> Just FYI for all: below are the slowest tests (>= 60 s) for serial AT:
>>
>> test.mm_pbsa : 1885.67 (s)
>>
>> test.sander.BASIC : 1174.36 (s)
>>
>> test.sander.DFTB : 1125.35 (s)
>>
>> test.sander.ABFQMMM : 1084.86 (s)
>>
>> test.rism3d.periodic : 946.75 (s)
>>
>> test.sander.QMMM : 830.86 (s)
>>
>> test.antechamber : 660.70 (s)
>>
>> test.sander.RISM : 459.25 (s)
>>
>> test.nab : 416.18 (s)
>>
>> test.mmpbsa : 394.37 (s)
>>
>> test.pbsa : 362.49 (s)
>>
>> test.sander.CHARMM : 348.36 (s)
>>
>> test.rism1d : 269.22 (s)
>>
>> test.serial.sander.SEBOMD : 258.01 (s)
>>
>> test.cpptraj : 146.83 (s)
>>
>> test.sander.PIMD.partial : 135.31 (s)
>>
>> test.parmed : 122.80 (s)
>>
>> test.sander.GB : 106.94 (s)
>>
>> test.serial.sander.AMD : 102.74 (s)
>>
>> test.amoeba : 87.58 (s)
>>
>> test.FEW : 64.08 (s)
>>
>>
>>
> I have an experimental script that runs all the serial tests in parallel
> and measures the timing.
>
> - do "git pull" and "git submodule update" to get the script
> - install AmberTools and "source amber.sh"
> - run the tests: python
> $AMBERHOME/AmberTools/src/ambertools-binary-build/conda_tools/amber.run_tests -t all -n 4
> - some files will appear in the working folder: test_dif.log, test_out.log,
> test_summary.log, test_timing.log
> (check test_timing.log)
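A rough sketch of what such a parallel timing driver does, for anyone curious before pulling the script (illustrative only: the real logic lives in amber.run_tests, and every name below is made up):

```python
import subprocess
import time
from concurrent.futures import ThreadPoolExecutor

def run_one(test_cmd):
    """Run a single serial test command and return (command, elapsed seconds)."""
    start = time.time()
    subprocess.run(test_cmd, shell=True, check=False,
                   stdout=subprocess.DEVNULL, stderr=subprocess.STDOUT)
    return test_cmd, time.time() - start

def run_all(test_cmds, n_workers=4):
    """Run independent serial tests concurrently (like '-t all -n 4') and
    return test_timing.log-style lines, slowest first."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(run_one, test_cmds))
    results.sort(key=lambda pair: pair[1], reverse=True)
    return ["%s: %.2f (s)" % (cmd, elapsed) for cmd, elapsed in results]
```

Each test is still a serial (single-process) test; only the tests themselves run side by side, which is why the wall-clock total drops with -n.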
>
> It is all still experimental, but it's useful (I have been using it for the
> nightly build/test).
> Hai
>
>
>>
>> On Thu, Sep 7, 2017 at 2:27 PM, Scott Brozell <sbrozell.rci.rutgers.edu>
>> wrote:
>>
>>> Hi,
>>>
>>> Quality assurance is more important than quality control in
>>> scientific programming. This thread demonstrates that we are doing
>>> a competent job at QA, and it always helps to have a reminder regarding
>>> best practices, such as running the tests before committing.
>>>
>>> Although i almost dozed off reading Jason's long post :-O, i think he
>>> hits the right note, namely that our process should be designed for
>>> more (ie, QA) than just keeping out a bad commit (ie, QC).
>>>
>>> Some good QC ideas in this thread...
>>>
>>> scott
>>>
>>> On Thu, Sep 07, 2017 at 12:41:29PM +0000, B. Lachele Foley wrote:
>>> >
>>> > Er.... not to annoy anyone, but another plug for pre-push tests:
>>> >
>>> >
>>> > * Does not need to impact the current testing setup.
>>> > * Can be a small subset of existing tests or new tests or
>>> whatever.
>>> > * Does not require a special server or software: only needs git.
>>> >        * Will require some minor modification of the make script to
>>> use our way of doing it
>>> > * You might want to find another way to implement it
>>> > * Blocks pushes if the requirements (tests) fail - so the code
>>> never reaches the main repo
>>> > * Not terribly hard to implement in any case, *but*
>>> > * Devs would need to each specify a small set of critical tests
>>> from their domain or there's no point.
>>> > * Has helped us a lot and is not specific to what we do
>>> > * And, we already have experience doing this!
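(For concreteness: the pre-push idea above needs no server-side support, since git runs .git/hooks/pre-push locally and aborts the push if the hook exits non-zero. A minimal sketch, in which the test commands and paths are entirely made up and each maintainer would substitute a few fast, critical tests from their own domain:)

```python
#!/usr/bin/env python3
# Sketch of a .git/hooks/pre-push hook: git invokes the hook before a
# push and aborts the push when the hook exits with a non-zero status.
import subprocess
import sys

# Hypothetical example entries; the real list would be contributed
# by the developers, a few critical tests per package.
CRITICAL_TESTS = [
    "cd $AMBERHOME/test && ./Run.some_critical_test",
]

def run_critical_tests(commands, runner=subprocess.call):
    """Run each test command; return the list of commands that failed."""
    return [cmd for cmd in commands if runner(cmd, shell=True) != 0]

def main(commands):
    failed = run_critical_tests(commands)
    for cmd in failed:
        sys.stderr.write("pre-push: blocked, test failed: %s\n" % cmd)
    return 1 if failed else 0  # non-zero exit tells git to abort the push
```

(git would call this via `sys.exit(main(CRITICAL_TESTS))`; as Lachele notes, without each dev contributing critical tests there's no point.)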
>>> >
>>> > I'll stop now if no one shows interest.
>>> >
>>> > :-) Lachele
>>> >
>>> > Dr. B. Lachele Foley
>>> > Associate Research Scientist
>>> > Complex Carbohydrate Research Center
>>> > The University of Georgia
>>> > Athens, GA USA
>>> > lfoley.uga.edu
>>> > http://glycam.org
>>> > ________________________________
>>> > From: Ross Walker <ross.rosswalker.co.uk>
>>> > Sent: Thursday, September 7, 2017 5:44:06 AM
>>> > To: AMBER Developers Mailing List
>>> > Subject: Re: [AMBER-Developers] Looking for another volunteer--make a
>>> "fast" test suite?
>>> >
>>> > Just a note from experience here. Be careful just shortening things
>>> since it can often have unexpected side effects. For example back in the
>>> days of Amber 7 there was an issue where the list builder was broken and it
>>> stayed that way for a long time because all the test cases were passing so
>>> it was assumed nothing was wrong. Turns out none of the tests were running
>>> long enough to actually trigger a list build. Took a long time to figure
>>> that out.
>>> >
>>> > Just something to keep in mind.
>>> >
>>> > All the best
>>> > Ross
>>> >
>>> > > On Sep 6, 2017, at 19:33, David Cerutti <dscerutti.gmail.com> wrote:
>>> > >
>>> > > One other thing we might do is have each of us go into our respective
>>> > > projects and ensure that the tests are running as efficiently as
>>> possible.
>>> > > For example, rather than 50 steps of MD printing every ten
>>> iterations, can
>>> > > the same quality assurance be achieved in ten steps printing every two
>>> > > iterations? For sander/pmemd, startup time is still a significant
>>> > > overhead, but shave 40% off many of the test cases and that'll take
>>> the
>>> > > edge off the problem. (Another thing to mention is that if your test
>>> needs
>>> > > 50 steps to monitor numbers with four places after the decimal and
>>> ensure
>>> > > the code is not subtly corrupted, a more sensitive metric needs to be
>>> > > devised to get at lower significant figures.) I think that part of
>>> the
>>> > > problem here is like pollution: each test contributing an extra few
>>> seconds
>>> > > goes a long way to making the suite as a whole bloated.
>>> > >
>>> > > Dave
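(One way to read Dave's "more sensitive metric" point: instead of diffing output already rounded to four decimal places over many steps, compare relative error directly, so a much tighter tolerance can catch subtle corruption after only a few steps. A sketch, not anything the test suite currently does; the numbers are invented:)

```python
def max_relative_error(reference, trial):
    """Largest element-wise relative error between two runs' numbers
    (e.g. per-step energies from a reference run and a trial run)."""
    worst = 0.0
    for ref, val in zip(reference, trial):
        scale = max(abs(ref), 1e-30)  # guard against division by zero
        worst = max(worst, abs(val - ref) / scale)
    return worst

def passes(reference, trial, tol=1e-9):
    """A ~1e-9 relative-error tolerance is far more sensitive than a
    fixed four-decimal-place diff, so fewer MD steps are needed to
    expose a subtle code change."""
    return max_relative_error(reference, trial) <= tol
```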
>>> > >
>>> > >
>>> > > On Wed, Sep 6, 2017 at 12:14 PM, Daniel Roe <daniel.r.roe.gmail.com>
>>> wrote:
>>> > >
>>> > >> On Wed, Sep 6, 2017 at 10:30 AM, Jason Swails <
>>> jason.swails.gmail.com>
>>> > >> wrote:
>>> > >>> My suggestion is to move from gitosis to a tool that implements a
>>> > >>> PR/CI-gating workflow like GitLab (which can be self-hosted).
>>> Disable
>>> > >>> pushing directly to master and make every change pass through a
>>> gated
>>> > >> pull
>>> > >>> request that enforces some level of quality before merging is
>>> permitted.
>>> > >>
>>> > >> Yes, let's do this! But only once we've come up with a far more
>>> > >> compact test suite per DAC's previous request. The full test suite
>>> can
>>> > >> still be run nightly.
>>>
>>> _______________________________________________
>>> AMBER-Developers mailing list
>>> AMBER-Developers.ambermd.org
>>> http://lists.ambermd.org/mailman/listinfo/amber-developers
>>>
>>
>>
>




Received on Sun Sep 10 2017 - 00:30:03 PDT