Re: [AMBER-Developers] Looking for another volunteer--make a "fast" test suite?

From: Scott Brozell <>
Date: Thu, 7 Sep 2017 14:27:56 -0400


Quality assurance is more important than quality control in
scientific programming. This thread demonstrates that we are doing
a competent job at QA, and it always helps to have a reminder regarding
best practices, such as running the tests before committing.

Although I almost dozed off reading Jason's long post :-O, I think he
hits the right note, namely that our process should be designed for
more (ie, QA) than just keeping out a bad commit (ie, QC).

Some good QC ideas in this thread...


On Thu, Sep 07, 2017 at 12:41:29PM +0000, B. Lachele Foley wrote:
> Er.... not to annoy anyone, but another plug for pre-push tests:
> * Does not need to impact the current testing setup.
> * Can be a small subset of existing tests or new tests or whatever.
> * Does not require a special server or software: only needs git.
> * Will require some minor modification of the make script to use our way of doing it
> * You might want to find another way to implement it
> * Blocks pushes if the requirements (tests) fail - so the code never reaches the main repo
> * Not terribly hard to implement in any case, *but*
> * Devs would each need to specify a small set of critical tests from their domain, or there's no point.
> * Has helped us a lot and is not specific to what we do
> * And, we already have experience doing this!
> I'll stop now if no one shows interest.
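A minimal sketch of the pre-push idea above, as a git `pre-push` hook: git runs `.git/hooks/pre-push` before transferring anything and blocks the push when the hook exits non-zero. The `FAST_TESTS` command here is a placeholder (it defaults to `true`); each developer would substitute their own small set of critical tests, since no such target exists in the repository today.

```shell
#!/bin/sh
# Hypothetical .git/hooks/pre-push hook (make it executable with chmod +x).
# FAST_TESTS is a placeholder command standing in for a developer's small
# set of critical tests; "true" always passes and is for illustration only.
FAST_TESTS="${FAST_TESTS:-true}"

echo "pre-push: running fast test subset..."
if $FAST_TESTS; then
    status=0
    echo "pre-push: fast tests passed; push allowed."
else
    status=1
    echo "pre-push: fast tests FAILED; push blocked." >&2
    echo "pre-push: bypass (not recommended) with: git push --no-verify" >&2
fi
# A real hook would end with: exit $status
# (git aborts the push on any non-zero exit code).
```

Because the hook lives in each clone's `.git/hooks/`, it never reaches the main repo on its own; a line in the make or configure script would need to install it, which is the "minor modification" mentioned above.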
> :-) Lachele
> Dr. B. Lachele Foley
> Associate Research Scientist
> Complex Carbohydrate Research Center
> The University of Georgia
> Athens, GA USA
> ________________________________
> From: Ross Walker <>
> Sent: Thursday, September 7, 2017 5:44:06 AM
> To: AMBER Developers Mailing List
> Subject: Re: [AMBER-Developers] Looking for another volunteer--make a "fast" test suite?
> Just a note from experience here. Be careful when shortening things, since it can have unexpected side effects. For example, back in the days of Amber 7 the list builder was broken and stayed that way for a long time, because all the test cases were passing and it was assumed nothing was wrong. It turned out none of the tests ran long enough to actually trigger a list build. It took a long time to figure that out.
> Just something to keep in mind.
> All the best
> Ross
> > On Sep 6, 2017, at 19:33, David Cerutti <> wrote:
> >
> > One other thing we might do is have each of us go into our respective
> > projects and ensure that the tests are running as efficiently as possible.
> > For example, rather than 50 steps of MD printing every ten iterations, can
> > the same quality assurance be achieved in ten steps printing every two
> > iterations? For sander/pmemd, startup time is still a significant
> > overhead, but shave 40% off many of the test cases and that'll take the
> > edge off the problem. (Another thing to mention is that if your test needs
> > 50 steps to monitor numbers with four places after the decimal and ensure
> > the code is not subtly corrupted, a more sensitive metric needs to be
> > devised that catches errors in the lower significant figures.)  I think
> > that part of the problem here is like pollution: each test contributing
> > an extra few seconds adds up, and the suite as a whole becomes bloated.
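As a sketch of the tuning described above, a shortened sander/pmemd input deck might look like the following (all values illustrative, not taken from any existing test): `nstlim` sets the step count, `ntpr` the energy-print interval, and `ntwx=0` skips trajectory output the test never compares.

```
short MD test: ten steps, printing every two
 &cntrl
   imin=0, nstlim=10, ntpr=2,
   ntwx=0, cut=8.0,
 /
```

Ten steps printed every two still yields five energy records to diff against the saved output, the same number as fifty steps printed every ten.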
> >
> > Dave
> >
> >
> > On Wed, Sep 6, 2017 at 12:14 PM, Daniel Roe <> wrote:
> >
> >> On Wed, Sep 6, 2017 at 10:30 AM, Jason Swails <>
> >> wrote:
> >>> My suggestion is to move from gitosis to a tool that implements a
> >>> PR/CI-gating workflow like GitLab (which can be self-hosted). Disable
> >>> pushing directly to master and make every change pass through a gated
> >> pull
> >>> request that enforces some level of quality before merging is permitted.
> >>
> >> Yes, let's do this! But only once we've come up with a far more
> >> compact test suite per DAC's previous request. The full test suite can
> >> still be run nightly.
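A gated workflow of the kind sketched in the two posts above could look roughly like this in GitLab CI. Everything here is an assumption for illustration: the `test.fast` target does not exist yet, `./configure gnu` stands in for whatever the real build step is, and the nightly job relies on a scheduled pipeline being configured in the GitLab UI.

```yaml
# Hypothetical .gitlab-ci.yml sketch; "test.fast" is an assumed compact
# target, not something that exists in the Amber tree today.
stages:
  - build
  - test

build:
  stage: build
  script:
    - ./configure gnu        # assumption: a standard serial GNU build
    - make -j4 install

fast-tests:
  stage: test
  script:
    - make test.fast         # compact subset gates every merge request

full-suite:
  stage: test
  only:
    - schedules              # full test suite still runs nightly
  script:
    - make test
```

With "pipelines must succeed" enabled on the project, merging into master is blocked until `fast-tests` passes, while the full suite keeps running on the nightly schedule.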

AMBER-Developers mailing list
Received on Thu Sep 07 2017 - 11:30:04 PDT