Test result analysis with automation tools: "first detection" vs. "already known" defects
Hello,
After executing our test cases we need to analyze the results and review the defects detected by tools such as TestComplete, Tricentis Tosca, Selenium, or others. In the first iteration this is easy: failed test steps are highlighted in "Red" and passed test steps in "Green". In the following iterations, however, it is very difficult to repeat the same analysis and focus the testers' and developers' attention on the "first detection" defects, because they are highlighted in "Red" together with the "already known" defects. Is there any tool, add-on, or other means that allows aggregating test results from one iteration to the next and distinguishing between "first detection" and "already known" defects? The idea is to have something like this: "first detection" defects highlighted in "Red", and "already known" defects (i.e. already detected by the automation tool in a previous iteration) highlighted in orange, for example.
Thank you in advance for your help.
Ismail Agr
Not within the TestComplete tool itself, no. The tool is what it is: a development and execution tool for automated functional tests. The output from the tool is aimed at simply reporting the results of a test run as executed.
For test result analysis, that's what tools like QAComplete are for. Investigate in that direction.
Hi Ismail,
One possible approach (the exact implementation depends on the tools/components used; see the sketch after this list):
-- Prerequisite: a) an issue tracking system (ITS) is used to register problems; and b) it is possible to query the ITS via some means (DCOM, HTTP, TCP, etc.);
-- Within your common reusable test code, implement an assert method that accepts a condition (true/false) and an issue identifier as parameters;
-- Within this assert method, check the condition. If the condition is true, then everything is fine;
-- If the condition fails, then check the issue identifier. If it is empty (or zero, or null, ...), this is a 'first detection' that must be reported;
-- After you analyse the test log and decide that the reported problem is indeed a problem, report it to the ITS and provide its identifier in the test code as a parameter to the relevant assert call;
-- On subsequent test runs, when the condition fails again, query the ITS for the issue status. If the status is not a final one (e.g. 'fixed/deployed'), the problem is known but not fixed yet, so you may report it as 'already known';
-- If the condition fails but the status of the issue is a final one, this is a 'regression' detection.
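To make the idea concrete, here is a minimal sketch in Python of such an assert helper. It is not tied to any particular tool: the ITS URL, the JSON response shape, the status names, and the get_issue_status/check function names are all assumptions for illustration, and plain print calls stand in for whatever reporting your automation tool provides.

```python
# Minimal sketch of a "known issue" assert, assuming a hypothetical ITS
# that can be queried over HTTP and returns the issue status as JSON.
import requests

ITS_URL = "https://its.example.com/api/issues"   # hypothetical ITS endpoint
FINAL_STATUSES = {"fixed/deployed", "closed"}    # assumed "final" statuses


def get_issue_status(issue_id):
    """Query the ITS for the current status of an issue (assumed JSON API)."""
    response = requests.get(f"{ITS_URL}/{issue_id}", timeout=10)
    response.raise_for_status()
    return response.json()["status"]


def check(condition, issue_id=None, message=""):
    """Classify a failed check as first detection, already known, or regression,
    based on the ITS issue linked to it (if any)."""
    if condition:
        print(f"PASS: {message}")
        return

    if not issue_id:
        # No issue linked yet: this failure has never been analysed/reported.
        print(f"FAIL (first detection): {message}")
    elif get_issue_status(issue_id) in FINAL_STATUSES:
        # The linked issue was already fixed/closed, yet the check fails again.
        print(f"FAIL (regression of {issue_id}): {message}")
    else:
        # The linked issue is still open: known but not fixed yet.
        print(f"FAIL (already known, {issue_id}): {message}")


# Example usage inside a test (names are illustrative):
# check(page.Title == "Dashboard", issue_id="PRJ-123", message="dashboard title")
```

In practice you would replace the print calls with your tool's logging, e.g. in TestComplete the helper could call Log.Error for first detections and Log.Warning for already known issues, which would give roughly the red vs. orange distinction asked about above.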