Re: [AMBER-Developers] General test failure question

From: Gustavo Seabra <gustavo.seabra.gmail.com>
Date: Fri, 26 Feb 2010 13:06:14 -0300

OK, now we know why no one has made this change before, even though
everyone agrees it is worthwhile ;-)

I suggest a simple "quick-and-dirty" solution: just put a minus sign
("-") before each test command, so the test run doesn't stop at any
failure, and redirect the output to a file. Then, in the
"finished.serial" and "finished.parallel" rules, add a call to a
"count.errors" rule that just awks the output file, looking for error
messages and failure markers, and prints them.
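
Just as a sketch of what I mean (the target names, the log file name,
and the "PASSED"/"FAILURE" strings are all made up here, and would have
to match whatever the test scripts actually print; recipe lines need
real tabs, of course):

    TESTLOG = test.log

    test.serial:
            rm -f $(TESTLOG)
            -$(MAKE) duplex_test >> $(TESTLOG) 2>&1
            -$(MAKE) gbrna_test  >> $(TESTLOG) 2>&1
            $(MAKE) finished.serial

    finished.serial: count.errors

    count.errors:
            @awk '/PASSED/ {p++} /FAILURE/ {f++} END {printf "%d tests passed, %d failed\n", p, f}' $(TESTLOG)
            -@grep FAILURE $(TESTLOG)

The "-" in front of each test is what lets make carry on past a
failure, and the grep at the end would list *which* tests failed, which
is part of what Sally asks for below.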

How does it sound?

Gustavo.


On Fri, Feb 26, 2010 at 12:55 PM, Ben Roberts <roberts.qtp.ufl.edu> wrote:
> OK, following feedback, I've started working on this kind of thing,
> beginning in the nab makefile.
>
> One problem I've run into is that it seems I can't set a variable from
> within a Makefile rule. For example:
>
> requested_tests = ""
>
> duplex_test::
>        <do stuff>
>        requested_tests += "duplex_test"
>
> doesn't seem to achieve anything. Instead, I get a grumble about how
> requested_tests is not a valid shell command. [Gee, sh, ya think?]
>
> Extensive Googling hasn't helped all that much. Indeed, my forays into
> Google have suggested (without actually saying so explicitly) that Make
> expects all variables to be firmly and finally set before one starts
> actually running anything. Does anyone know whether there is a way to
> achieve this? Another option would be to re-cast the whole set of test
> scripts into another language, like Perl or Python. But that seems like a
> lot of work, not to mention making it harder for the end user/developer to run
> specific tests.
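>
> (The closest I have come is to do the bookkeeping at the shell level
> rather than in make, along the lines of
>
> duplex_test::
>        <do stuff>
>        echo "duplex_test" >> requested_tests.log
>
> where requested_tests.log is just a scratch file name made up for this
> example. Make hands each recipe line to /bin/sh, which is why the "+="
> above gets treated as a shell command and rejected. But a scratch file
> feels like a workaround rather than a real answer, hence the question.)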
>
> - Ben
>
> On 24/02/2010, at 12:00 PM, Sally Pias wrote:
>
>>> What would people think about the possibility of a summary report at the
>>> end? For example, something like this:
>>>
>>> 73 tests were requested. Of these:
>>> 8 tests were skipped (system/environment requirements not met)
>>> 58 tests passed
>>> 4 tests failed diff - check output
>>> 3 tests encountered errors
>>
>> I think such a report would be helpful.  It would additionally be
>> useful to know *which* tests were skipped or encountered errors (and
>> what the errors were).
>>
>
>

_______________________________________________
AMBER-Developers mailing list
AMBER-Developers.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber-developers
Received on Fri Feb 26 2010 - 08:30:03 PST