Forum Discussion
Hi,
Initial assumption: we are talking about automated functional (end-to-end) regression tests.
By definition, a regression test verifies that a given version of the tested application behaves exactly like the previous one. Nothing more. Period.
From the above definition it follows that:
a) A test used for regression must pass if the examined version of the tested application behaves exactly like the previous one, and fail otherwise. Note that the criterion is 'the same behavior', not 'correct behavior' (or 'expected behavior', or 'behaves as required');
b) During its creation, a regression test must match the current behavior of the tested application. Again, the current behavior may not be entirely correct, may contain problems, and/or may not correspond to the requirements, but the test must pass for this version of the tested application. Obviously, if the tested application contains critical blockers, or functionality that works incorrectly and has no workaround, those parts of the application should not be included in the regression set (simply because that functionality should not be delivered to production), and the problems found must be reported during test creation (or earlier, during manual testing).
When executed, a regression test must not fail and stop because of known problems. Even better, known problems should not be reported as errors at all, so that the log is not congested with what are effectively false alarms and an actual change of behavior is harder to miss. (Which is the goal of regression testing.)
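As a minimal sketch of keeping known problems out of the error log, here is one way to do it with Python's standard unittest module (the issue ID and the simulated application behavior are hypothetical stand-ins): a test covering a known, unresolved problem is marked as an expected failure, so the run stays green while the problem persists, and is flagged the moment the behavior changes.

```python
import unittest

# Hypothetical known-issue reference; in practice this would point at
# the corresponding case in the issue tracking system.
KNOWN_ISSUE = "APP-1234"  # settings page shows the wrong login label

class SettingsPageTests(unittest.TestCase):
    @unittest.expectedFailure  # known problem: logged as expected, not as an error
    def test_login_label(self):
        # Simulated application behavior for this sketch: the label is wrong.
        label = "Login"  # stand-in for a real UI query
        self.assertEqual(label, "Log in")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(SettingsPageTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
# The known problem is recorded as an expected failure, so the run
# reports no errors and no failures.
```

If the underlying bug gets fixed, the test turns into an "unexpected success", which is the prompt to remove the marker and restore the test to the normal regression path.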
It is up to you how to implement the above requirement. One of the most intelligent approaches I have seen was designed like this:
-- When a test failed for the first time because of an error in the tested application (or when the problem was already known at the moment of test creation), a reference to the corresponding case in the issue tracking system was inserted into the test code;
-- If a workaround existed, it was programmed into the test code so that the test could proceed;
-- During subsequent runs, the test code queried the issue tracking system for the state of the problem. If the problem was still unresolved, the test code followed the workaround path; if it was resolved, the test code followed the primary expected path. If any problem occurred on either of these paths, it was reported as an error to the test log, indicating either a regression or a new issue.
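The flow above can be sketched roughly like this. Everything here is a hypothetical placeholder, assuming nothing about any particular tracker or test framework: `issue_is_open` stands in for a real query to your issue tracking system, and the two path functions stand in for real test steps.

```python
# Sketch of the issue-tracker-driven workaround pattern described above.
KNOWN_ISSUE_ID = "APP-1234"  # reference inserted when the test first failed

def issue_is_open(issue_id: str) -> bool:
    """Stand-in for a query to the issue tracking system."""
    return True  # pretend the problem is still unresolved

def run_primary_path() -> None:
    """The originally expected behavior (placeholder)."""
    pass

def run_workaround_path() -> None:
    """The workaround programmed in when the problem was found (placeholder)."""
    pass

def test_feature() -> str:
    if issue_is_open(KNOWN_ISSUE_ID):
        # Problem still unresolved: follow the workaround so the rest
        # of the scenario can still be exercised.
        run_workaround_path()
        return "workaround"
    # Problem resolved: the primary path must now work.
    run_primary_path()
    return "primary"

# Any exception raised on either path propagates to the test log as an
# error, indicating either a regression or a new issue.
```

The design choice worth noting is that the branch condition lives in the tracker, not in the test code, so no one has to remember to re-enable the primary path after the fix ships.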
Update: Funny, Robert and I wrote practically the same thing 🙂
SuperTester what a great conversation idea, thank you!
Thanks Alex, Robert!