bcosta
Occasional Visitor
6 years ago
Status: New Idea

Differentiate between a failure and an error

A failure in a check step is not the same thing as an error. A failure simply means that the check/test did not pass; an error should be reserved for unexpected exceptions, such as the application under test crashing. This should also be configurable on a per-step basis. In other words, some failures at check steps can cause the test to stop while others don't. This way we can design tests that continue executing on some failures while stopping on critical "ensure"-style checks. Likewise, my test should continue executing on most failures but default to stopping when the AUT crashes.
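
Until such a per-step option exists, this behavior can be approximated in script: log non-critical check failures as warnings so the run continues, and reserve errors for critical checks, with the project's "On error: Stop execution" playback option enabled. A minimal sketch, assuming TestComplete's Python engine; CheckImage is a hypothetical helper and the stored-region name is illustrative:

```python
# A minimal sketch, assuming the project playback option "On error" is set to
# "Stop execution"; CheckImage is a hypothetical helper, "Region1" illustrative.
def CheckImage(screenObject, storedRegionName, critical):
    # ReportDifference=False so we control the logging ourselves
    if Regions.Compare(screenObject, storedRegionName, False, False, False):
        Log.Checkpoint("Image check passed: " + storedRegionName)
    elif critical:
        Log.Error("Critical check failed: " + storedRegionName)  # run stops here
    else:
        Log.Warning("Check failed: " + storedRegionName)  # run continues
```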

4 Comments

  • tristaanogre
    Esteemed Contributor

    It is up to you, as the developer of the automation, to make that determination in how you've implemented your code. You should handle errors in code to determine "Is this a test case failure or is this a code exception?"

    IMO, technically, a code exception IS a test case failure... the test case failed to complete properly with a "Green" status.  This could be due to an unhandled situation (new dialog, redesigned UI, etc) or an actual code error (type mismatch, etc).  Something happened that was counter to the expected results.

    If a human being were following a documented manual test and the documentation didn't match up to the application, that would ALSO be a failed test case... in that situation, the "code" is the documented test case that was incorrect... sure, it wasn't an actual PROBLEM with the application under test, but it WAS a failed test case indicating a required mitigation.
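
    One way to draw that line in script, per the comment above (a hedged sketch; the alias and property names are all illustrative):

    ```python
    # Sketch of separating "check did not pass" from "code/application blew up",
    # assuming a TestComplete Python script unit and a mapped Aliases.AUT.
    def CheckStatusLabel(expected):
        try:
            actual = Aliases.AUT.MainWindow.StatusLabel.WndCaption  # raises if the UI changed
            if actual == expected:
                Log.Checkpoint("Status is '%s' as expected" % actual)
            else:
                Log.Warning("Check failed: expected '%s', got '%s'" % (expected, actual))
        except Exception as e:
            # Unhandled situation (missing window, redesigned UI, etc.): a true error
            Log.Error("Unexpected exception during check: %s" % str(e))
            Runner.Stop()  # halt the run on genuine errors
    ```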

  • bcosta
    Occasional Visitor

    I can see your point, but what I am referring to is a keyword test for a desktop application. From what you are suggesting, it sounds like you are referring to a unit test? In this case we have an application that is exhibiting aberrant behavior intermittently. The problem stemmed from a race condition, and the only indication to the user was a delay in the output. So the test was designed to click on something, do an image check, and click away, over and over. Sometimes the problem happens, sometimes it doesn't. So just because the image check failed, the test shouldn't stop. I need the ability to run the check some X thousand times and know the number of occurrences in that execution, but I also need to tell the test to stop if the application under test crashes.

    I know this isn't proving anything, since I'm effectively trying to prove a negative, but when your client is the DoD, logic doesn't always apply. I need to be able to say I haven't seen the problem again in X tests after fix Y. Likewise, I don't think I should have to write code to run on every execution of a built-in feature. Adding these couple of parameters would greatly increase the flexibility of the software and put it more in line with something like TestStand.
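
    A hedged sketch of that loop as a TestComplete Python script unit; the alias names, the stored region "ExpectedOutput", and the 5,000-iteration count are all illustrative:

    ```python
    # Count check failures as warnings; stop the run only on a crash.
    def RepeatImageCheck():
        failures = 0
        for i in range(5000):
            panel = Aliases.AUT.MainWindow.OutputPanel
            panel.Click()
            # ReportDifference=False so a mismatch posts our warning, not an error
            if not Regions.Compare(panel, "ExpectedOutput", False, False, False):
                failures += 1
                Log.Warning("Image check failed on iteration %d" % i)
            # A vanished process means a crash: that IS worth stopping for
            if not Aliases.AUT.Exists:
                Log.Error("AUT crashed after %d iterations, %d failures" % (i, failures))
                Runner.Stop()
        Log.Message("Observed the issue %d times in 5000 iterations" % failures)
    ```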

  • tristaanogre
    Esteemed Contributor

    A keyword test = code... just of a different format. So, what you will need to do is build into your keyword test the same sort of checks with for loops/while loops, etc., checking whether or not a particular criterion is met within a time frame. If it is not met in the time frame, log a message (error, warning, custom priority, etc.) indicating the failure to meet the conditions and report it. Then, "clean up" the situation and move on to the next part of the test. It's not a "unit test", it's just a matter of a well-designed automated test that a) tests what you need it to test, b) accounts for potential anticipated fail points, and c) handles "unexpected" scenarios and then logs/documents the results for human interpretation.
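
    A hedged sketch of that pattern (wait for a criterion within a time frame, log, clean up, move on), assuming TestComplete's Python engine; the names and the 10-second timeout are illustrative:

    ```python
    # Timed-criterion check: WaitProperty polls for up to 10000 ms and
    # returns False on timeout instead of raising.
    def WaitForReadyState():
        label = Aliases.AUT.MainWindow.StatusLabel
        if label.WaitProperty("WndCaption", "Ready", 10000):
            Log.Checkpoint("Application reached the Ready state in time")
        else:
            # Criterion not met in the time frame: log it, clean up, move on
            Log.Warning("Ready state not reached within 10 seconds")
            Aliases.AUT.MainWindow.Keys("[Esc]")  # illustrative cleanup: dismiss a stray dialog
    ```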

  • cunderw
    Community Hero

    If you are using images to do the comparison, you could have them log a warning instead of an error, or write out the number of times you see it. You can also use event handlers for the OnError event to suppress image-check errors as opposed to application errors. There is a lot of built-in functionality that should be able to accomplish what you are trying to do.
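
    A sketch of that event-handler approach, assuming the OnLogError event is wired to this routine through a TestComplete event control; filtering on the word "image" is an assumption about how the comparison messages are worded in your project:

    ```python
    # Downgrade image-comparison errors to warnings so the run continues;
    # anything else (e.g., the AUT crashing) stays an error and behaves normally.
    def GeneralEvents_OnLogError(Sender, LogParams):
        if "image" in LogParams.MessageText.lower():
            Log.Warning(LogParams.MessageText)
            LogParams.Locked = True  # suppress the original error entry
    ```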