Contributions
Re: Differentiate between a failure and an error
I can see your point, but what I am referring to is a keyword test for a desktop application. From what you are suggesting, it sounds like you are referring to a unit test? In this case we have an application that is intermittently exhibiting aberrant behavior. The problem stemmed from a race condition, and the only indication to the user was a delay in the output. So the test was designed to click on something, do an image check, and click away, over and over. Sometimes the problem happens, sometimes it doesn't. So just because the image check failed, the test shouldn't stop. I need the ability to run the check some X thousand times and know the number of occurrences in that execution, but I also need to tell the test to stop if the application under test crashes.

I know this isn't conclusive proof, since you can't prove a negative, but when your client is the DoD, logic doesn't always apply. I need to be able to say I haven't seen the problem again in X tests after fix Y. Likewise, I don't think I should have to write code to run on every execution of a built-in feature. Adding these couple of parameters would greatly increase the flexibility of the software and put it more in line with something like TestStand.
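A minimal sketch of the soak-test behavior described above: a check that does not pass is a *failure* (count it and keep looping), while the AUT crashing is an *error* (stop immediately). The functions `check_output_image` and `aut_is_running` are hypothetical placeholders for whatever the keyword-test framework actually provides; this is not any particular tool's API.

```python
def soak_test(iterations, check_output_image, aut_is_running):
    """Run a check repeatedly; count failures, abort only on error.

    check_output_image: hypothetical callable, True if the check passes.
    aut_is_running: hypothetical callable, True while the AUT is alive.
    """
    failures = 0
    for i in range(iterations):
        if not aut_is_running():
            # Error: the application under test crashed -> stop the whole test.
            raise RuntimeError(
                f"AUT crashed after {i} iterations with {failures} failure(s)"
            )
        if not check_output_image():
            # Failure: the check did not pass -> record it and continue.
            failures += 1
    return failures
```

Run against a fix, a result like `soak_test(5000, ...) == 0` is the "I haven't seen the problem again in X tests after fix Y" statement, while a nonzero count gives the occurrence rate for the intermittent race.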
Differentiate between a failure and an error
A failure in a check step is not the same thing as an error. A failure simply means that the check/test did not pass; an error should be reserved for unexpected exceptions, for instance the application under test crashing. This should also be configurable on a per-step basis. In other words, some failures at check steps can cause the test to stop while others don't. This way we can design tests that continue executing on some failures while stopping on critical "ensure"-style checks. Likewise, my test should continue executing on most failures but default to stopping when the AUT crashes.
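The per-step configuration being requested can be sketched as follows. This is an illustrative design, not any existing tool's feature: each step carries its own `stop_on_failure` flag, so ordinary checks continue on failure while critical "ensure"-style checks halt the run, and any unexpected exception (an error) always stops execution by propagating.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class CheckStep:
    """One check step; stop_on_failure is the proposed per-step setting."""
    name: str
    check: Callable[[], bool]
    stop_on_failure: bool = False  # True for critical "ensure"-style checks

def run_steps(steps: List[CheckStep]) -> Dict[str, bool]:
    """Run check steps in order.

    A failed check stops the run only if that step says so; an exception
    raised by a check is an *error* and always propagates, stopping the run.
    """
    results: Dict[str, bool] = {}
    for step in steps:
        passed = step.check()  # an exception here is an error, not a failure
        results[step.name] = passed
        if not passed and step.stop_on_failure:
            break  # critical check failed -> stop executing further steps
    return results
```

With this shape, a failed non-critical image check is just recorded and the test moves on, while a failed "ensure" step ends the run, which matches the stop/continue split the post asks for.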