All,
Is there a consensus or preferred approach when it comes to test failures? At the moment, some tests fail and the run keeps going, while others (especially those that fail with "Program error") stop the whole run entirely. From my point of view this is unhelpful, since establishing precisely which tests fail may require several passes through the suite.
I know I brought this up briefly at the meeting, but I forget what the preferred solution was.
One option that occurs to me is to precede each test with a "-" in the Makefile. But that seems like a blunt instrument: on its own, it could leave the user unaware of catastrophic problems.
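For concreteness, here is a minimal sketch of what I mean (the target and script names are purely illustrative, not the actual AMBER test targets; recipe lines are tab-indented as usual):

    # Hypothetical fragment: the leading "-" tells make to ignore a non-zero
    # exit status from that recipe line and carry on with the remaining tests.
    test.featureA:
            -cd featureA && ./Run.featureA

    test.featureB:
            -cd featureB && ./Run.featureB

That keeps the run going past a "Program error", but by itself it gives no indication afterwards of how bad the damage was.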
What would people think about the possibility of a summary report at the end? For example, something like this:
73 tests were requested. Of these:
  8 tests were skipped (system/environment requirements not met)
  58 tests passed
  4 tests failed the output diff - check output
  3 tests encountered errors
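A minimal sketch of how the tally might be produced, assuming (purely for illustration) that each test appends a single status word - PASSED, SKIPPED, FAILED, or ERROR - to a log file as it completes; neither the log file name nor those status words are anything the current test scripts actually write:

    # Hypothetical summary target; LOGFILE and the status words are assumptions.
    LOGFILE = test_results.log

    summary:
            @echo "`wc -l < $(LOGFILE)` tests were requested. Of these:"
            @echo "  `grep -c SKIPPED $(LOGFILE)` tests were skipped"
            @echo "  `grep -c PASSED $(LOGFILE)` tests passed"
            @echo "  `grep -c FAILED $(LOGFILE)` tests failed the output diff"
            @echo "  `grep -c ERROR $(LOGFILE)` tests encountered errors"

Running "make summary" at the end (or calling it automatically from "make test") would print the counts without changing how the individual tests behave.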
(Traffic on the earlier thread about processor requirements for parallel tests suggested that skipping tests is a bad idea; however, I would expect that the real problem is not so much the skipping itself as the fact that it is done silently.)
What do people think? Bad idea, must-have, or good if someone puts in the time?
Cheers,
Ben