Re: [AMBER-Developers] General test failure question

From: Lachele Foley <lfoley.ccrc.uga.edu>
Date: Tue, 23 Feb 2010 18:40:36 -0500

I like:

1. Tests that cannot run for some reason do not stop the testing altogether.

2. A log file is written that lists, with a brief description, any test that does anything other than pass.

3. The log file is named differently for the AT, AT-parallel, serial and parallel tests.

4. I like the summary report idea, and I'd like that written to the log file, too. (A rough sketch of what I mean by 1, 2, and 4 follows this list.)
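
Something along these lines is all I'm picturing for 1, 2, and 4. The test names, the ./Run scripts, and the .save comparison below are made-up stand-ins, not the real AT machinery:

  #!/bin/sh
  # Sketch only: never abort, and log anything that is not a clean pass.
  log=at_serial_tests.log
  : > "$log"
  pass=0; fail=0; err=0
  for t in test_a test_b test_c; do
      if ! ( cd "$t" && ./Run > run.out 2>&1 ); then
          err=`expr $err + 1`
          echo "ERROR:  $t (program error, see $t/run.out)" >> "$log"
          continue                          # point 1: keep testing anyway
      fi
      if diff -q "$t/run.out" "$t/run.out.save" > /dev/null 2>&1; then
          pass=`expr $pass + 1`
      else
          fail=`expr $fail + 1`
          echo "FAILED: $t (output differs from run.out.save)" >> "$log"
      fi
  done
  echo "$pass passed, $fail failed diffs, $err program errors" >> "$log"  # point 4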

The main issues with skipped parallel tests are:

* You have to run the entire test suite multiple times because some tests have mutually exclusive processor-count requirements (np must equal 2, or must be >= 4, etc.), but they aren't in separate test sets (I think Jason fixed that).

* Unless you employ undocumented procedures, tests are skipped silently (see the sketch below).
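
For the silent-skip point, even a crude per-test guard would be enough, as long as it said so out loud. How I pull np out of DO_PARALLEL here, and the log file name, are just placeholders:

  # Near the top of a 2-processor-only test script (sketch):
  numprocs=`echo "$DO_PARALLEL" | awk '{print $NF}'`   # e.g. "mpirun -np 4" -> 4
  log=../skipped_tests.log
  if [ "$numprocs" != "2" ]; then
      echo "SKIPPED: this test needs exactly 2 processors (got $numprocs)" | tee -a "$log"
      exit 0
  fi
  # ... run the real test here ...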

On that note:

5. Can TEST_FAILURES get a new name each time? I like to keep them all around for users to inspect if they suspect a run is bad. Currently, I rename them, which is acceptable, but it would be nice to set it all up, go home, come back the next morning and read the report.
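
Roughly what my renaming amounts to, as a sketch (the timestamp suffix is just my habit, not a proposal for the actual name):

  # Tuck each run's failure file away under a unique name so overnight
  # runs don't clobber one another.
  stamp=`date +%Y%m%d_%H%M%S`
  for f in TEST_FAILURES*; do
      [ -f "$f" ] && mv "$f" "$f.$stamp"
  done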

And I realize that, being the person who can't help for a month, I shouldn't complain at all. I'll try to make up for it...

:-) Lachele
--
B. Lachele Foley, PhD '92,'02
Assistant Research Scientist
Complex Carbohydrate Research Center, UGA
706-542-0263
lfoley.ccrc.uga.edu
----- Original Message -----
From: Ben Roberts [mailto:roberts.qtp.ufl.edu]
To: AMBER Developers Mailing List [mailto:amber-developers.ambermd.org]
Sent: Tue, 23 Feb 2010 18:05:05 -0500
Subject: [AMBER-Developers] General test failure question
> All,
> 
> Is there a consensus or preferred approach when it comes to test failures?
> At the moment, some tests can fail and the suite keeps going, while if
> others fail (especially with "Program error") the testing stops entirely.
> From my point of view this is unhelpful, since establishing precisely which
> tests fail may require several runs through the suite.
> 
> I know I brought this up briefly at the meeting, but forget what the
> preferred solution was.
> 
> One that occurs to me is to precede each test command with a - (in the
> Makefile), so that make ignores its exit status. But this seems like a
> blunt instrument: done by itself, the user may not become aware of
> catastrophic problems.
> 
> What would people think about the possibility of a summary report at the
> end? For example, something like this:
> 
> 73 tests were requested. Of these:
> 8 tests were skipped (system/environment requirements not met)
> 58 tests passed
> 4 tests failed diff - check output
> 3 tests encountered errors
> 
> (Traffic on the earlier thread about processor requirements for parallel
> tests suggested that skipping tests is a bad idea; however, I would expect
> that the real problem is not so much the skipping itself as the fact that
> it is done silently.)
> 
> What do people think? Bad idea, must-have, or good if someone puts in the
> time?
> 
> Cheers,
> Ben
_______________________________________________
AMBER-Developers mailing list
AMBER-Developers.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber-developers
Received on Tue Feb 23 2010 - 16:00:02 PST