Florian_Hofmann
11 years ago · Contributor
Log analysis by groups rather than items
Hi everybody,
maybe I'm lacking some imagination and just can't find the right words that would lead me to an answer in existing sources (or make for a more descriptive subject) - sorry if so.
I understand that the TestComplete log contains information on how many project items were run, and how many of them passed/failed.
I'm wondering if there's a way to have the log count the number of (sub)groups as set in the "Organize Tests" view (Ctrl-Alt-T). To make this information useful, it might make sense to include only some of the groups in the count.
What I want to achieve is the following:
Recently our company started using TestComplete to automate JIRA test cases for GUI testing, which means I "translate" workflows intended for human readers with a high level of knowledge about the tested application into something TestComplete is able to execute.
Not surprisingly, there is no 1:1 mapping from JIRA keys to test items. Many items are called in multiple test cases, but only to set up the preconditions needed to run the item(s) that make up the test case proper.
I can name groups and subgroups after JIRA keys, and put into them the sequence of items that form the associated JIRA test case. In the log tree, I can then easily see which JIRA test cases passed and which ones failed, but I can't seem to find a way to use that information beyond...well, seeing it in the log tree.
The most basic thing I might want to achieve is a statistic that says "X out of Y (relevant) groups passed the test" rather than "X out of Y items passed the test".
A more sophisticated goal might be that JIRA test cases are reported as "passed" if all the items included in the group passed, and "failed" if at least one of them fails (of course I have to exclude the possibility of reporting something as failed purely due to a previous failure).
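That group-level aggregation is straightforward to express in code. Below is a minimal, hypothetical Python sketch of the logic: every function and the sample data are invented for illustration, and in TestComplete you would collect the per-item results yourself (for example in an event handler or at the end of each item) rather than hard-code them.

```python
def summarize_groups(item_results):
    """Aggregate per-item results into per-group (JIRA key) results.

    item_results: list of (group_key, item_name, passed) tuples.
    A group counts as passed only if every one of its items passed.
    """
    groups = {}
    for group, _item, passed in item_results:
        # AND together all item outcomes belonging to the same group.
        groups[group] = groups.get(group, True) and passed
    passed_count = sum(1 for ok in groups.values() if ok)
    return passed_count, len(groups), groups

# Invented sample data: two JIRA test cases sharing a "login" item.
results = [
    ("JIRA-101", "login", True),
    ("JIRA-101", "create_order", True),
    ("JIRA-102", "login", True),
    ("JIRA-102", "delete_order", False),  # one failure fails the whole group
]
passed, total, per_group = summarize_groups(results)
print("%d out of %d groups passed" % (passed, total))  # → 1 out of 2 groups passed
```

The dictionary keyed by JIRA key is what gives you the "X out of Y groups" statistic instead of the per-item counts the log reports by default.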
Since I'm new to TestComplete as well as automated testing in general, feel free to correct anything that you feel needs correction about my approach - I will appreciate it, I promise ;-)
Florian
- Hi Florian -
We also have some tests that are repeated many times with different data, but we don't exclude them from the log because sometimes the data itself is a clue to finding the root cause of a test failure.
What we did do in our logs is start using the Group feature and the Append Log and Pop Log features inside all of our tests. We use an Append/Pop for the beginning and end of each test, at the beginning and end of each Group, and at the beginning and end of any loops.
Take a look at my screenshot. It looks like a lot of extra work in the code but each of those indents in the log is an Append and that really helps the readability when a test step fails.
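The indenting effect of those Append/Pop pairs can be illustrated with a small standalone sketch. The class and names below are invented for illustration only; in TestComplete itself you would call `Log.AppendFolder(...)` and `Log.PopLogFolder()` at the same points instead.

```python
class LogFolders:
    """Toy model of nested log folders, mimicking Append/Pop behavior."""

    def __init__(self):
        self.depth = 0
        self.lines = []

    def append(self, title):
        # Like Log.AppendFolder: open a folder, subsequent messages indent.
        self.lines.append("  " * self.depth + title)
        self.depth += 1

    def pop(self):
        # Like Log.PopLogFolder: close the current folder.
        self.depth -= 1

    def message(self, text):
        self.lines.append("  " * self.depth + text)

log = LogFolders()
log.append("JIRA-102: delete an order")   # start of the test
log.message("precondition: log in")
log.append("loop over test data")          # start of a loop
log.message("row 1 ok")
log.pop()                                  # end of the loop
log.pop()                                  # end of the test
print("\n".join(log.lines))
```

Each Append at the start of a test, group, or loop pushes one level of nesting, and the matching Pop at the end restores it, which is exactly what produces the readable indented tree in the real log.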