Forum Discussion

Florian_Hofmann
Contributor
10 years ago

Log analysis by groups rather than items

Hi everybody,

Maybe I'm lacking imagination and just can't find the right words that would lead me to an answer in existing sources (or make for a more descriptive subject) - sorry if so.



I understand that the TestComplete log contains information on how many project items were run, and how many of them passed/failed.

I'm wondering if there's a way to make the log count the number of (sub)groups as set in the "Organize Tests" view (Ctrl-Alt-T). To make this information useful, it might make sense to include only some of the groups in the count.



What I want to achieve is the following: 

Recently our company started using TestComplete to automate JIRA test cases for GUI testing, which means I "translate" workflows intended for human readers with a high level of knowledge about the tested application into something TestComplete is able to execute.

Not surprisingly, there is no 1:1 mapping from JIRA keys to test items. Many items are called in multiple test cases, but only to create the setting needed to run the item(s) that make up the test case as such.



I can name groups and subgroups after JIRA keys and put into them the sequence of items that form the associated JIRA test case. In the log tree, I can then easily see which JIRA test cases passed and which ones failed, but I can't seem to find a way to use that information beyond...well, seeing it in the log tree.

The most basic thing I might want is a statistic that says "X out of Y (relevant) groups passed the test" rather than "X out of Y items passed the test".

A more sophisticated goal might be that JIRA test cases are reported as "passed" if all the items included in the group passed, and "failed" if at least one of them fails (of course I have to exclude the possibility of reporting something as failed purely due to a previous failure).
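In pseudo-code, the roll-up rule I have in mind is simply this (an illustration only - the per-item result array is made up, it's not anything TestComplete provides):

```vbscript
' Illustration of the intended roll-up rule:
' a JIRA test case (= group) passes only if every item in it passed.
Function GroupPassed(itemResults)
    Dim i
    GroupPassed = True
    For i = 0 To UBound(itemResults)
        If Not itemResults(i) Then
            GroupPassed = False   ' one failing item fails the whole group
            Exit Function
        End If
    Next
End Function
```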





Since I'm new to TestComplete as well as to automated testing in general, feel free to correct anything you feel needs correction about my approach - I will appreciate it, I promise ;-)



Florian
  • Hi Florian -



    We also have some tests that are repeated many times with different data, but we don't exclude them from the log because sometimes the data itself is a clue to finding the root cause of a test failure.  



    What we did in our logs is start using the Group feature and the Append Log and Pop Log features inside all of our tests.  We use an Append/Pop at the beginning and end of each test, of each Group, and of any loops.



    Take a look at my screenshot.  It looks like a lot of extra work in the code, but each of those indents in the log is an Append, and that really helps readability when a test step fails.
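    In script form the pattern looks roughly like this (a sketch with made-up test and group names, using the Log.AppendFolder/Log.PopLogFolder methods):

    ```vbscript
    ' Open a log folder at each scope and pop it when the scope ends.
    Sub RunMyTest
      Log.AppendFolder "My Test"              ' whole-test folder
      Log.AppendFolder "Group: Login"         ' group folder
      Dim i
      For i = 1 To 3
        Log.AppendFolder "Iteration " & i     ' loop folder
        Log.Message "step for iteration " & i
        Log.PopLogFolder                      ' close the iteration folder
      Next
      Log.PopLogFolder                        ' close the group folder
      Log.PopLogFolder                        ' close the test folder
    End Sub
    ```

    Each AppendFolder produces one of the indents you see in the screenshot.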
  • Thanks a lot Marsha,

    looks like whenever I start guessing at what kind of solution I might need, it turns out TestComplete already includes a better one, and I just failed to find it.

    Florian
  • Starting to wonder if either my issue or the description thereof might be a bit....far from other users' workflows.

    If it's about the description, I'll try to give another version - let's assume I want to automate the following steps:



    0) Start application, enter the GUI on first tab

    1) enter Mastermode by pressing/clicking a key/button

    2) perform some mastermode actions

    3) leave Mastermode

    4) switch to next tab

    5) repeat steps 1-4 for all tabs of the GUI

     

    Rather than creating one single item for testing each tab, each of them including steps 1, 3 and 4, it seemed natural to me to create one item for each of steps 1, 3 and 4 respectively, and one item for each possible step 2.

    The result is that more than half of the items test the same basic steps over and over again...is there a way to exclude them from being counted in the log analysis?

    Or rather have them included in some bigger unit than items...



    I have to admit, the question is more about aesthetics than about functionality, but I'm trying to produce logs that at first glance tell even non-TC users where problems might lie...
  • However, one question remains: 



    The "details" section of the log counts the total number of items that were run and how many of them passed/failed.

    Is there a way of "telling" some items that they are associated with JIRA test cases, and including in the log how many of those exist/passed/failed?

    The current log might be helpful for developers and QA, but we also want log files with easy access to information restricted to predefined test cases, rather than to the GUI and the workflow in as general a form as possible...

    Thanks in advance,

    Florian





    Edit: I tried the append/pop log approach, but it didn't quite work the way I hoped...the way I understand it, they have to be within the same item to get the log structure from the picture? If yes - what would I do if they are divided among the group/item tree? In that case, I still don't see how I can group the items...
  • Since I can't seem to find the "edit" button, I'll write here what I tried so far:



    For all items associated with test cases, I added a Description starting with "Testcase" followed by the JIRA key.



    In a "Parameter" item, I introduced a variable: dim folder_main



    At the very beginning of the project, there is an item that uses 'USEUNIT Parameter to refer to that variable:

    folder_main = Log.CreateFolder("main")

    Log.PushLogFolder(folder_main)





    Then I use an OnStartTest event handler (including 'USEUNIT Parameter) that checks via a regular expression whether Project.TestItems.Current.Description matches "Testcase.*"

    If yes, I use

    dim Folder

    Folder = Log.CreateFolder(itemDescription, , , , folder_main)

    Log.PushLogFolder(Folder)







    I also have an OnStopTest event handler, quite similar, except that when items with descriptions starting with "Testcase" stop, it invokes

    Log.PopLogFolder(itemDescription)
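    Put together, the two handlers look roughly like this (a sketch from memory - the handler signatures and the optional parameters of Log.CreateFolder are as I understand them, not verified):

    ```vbscript
    'USEUNIT Parameter

    ' Push a log folder when a "Testcase ..." item starts.
    Sub GeneralEvents_OnStartTest(Sender, StartTestParams)
      Dim desc, re, Folder
      desc = Project.TestItems.Current.Description
      Set re = New RegExp          ' VBScript regular expression object
      re.Pattern = "^Testcase.*"
      If re.Test(desc) Then
        Folder = Log.CreateFolder(desc, , , , folder_main)
        Log.PushLogFolder Folder
      End If
    End Sub

    ' Pop that folder again when the item stops.
    Sub GeneralEvents_OnStopTest(Sender, StopTestParams)
      Dim desc
      desc = Project.TestItems.Current.Description
      If Left(desc, 8) = "Testcase" Then
        Log.PopLogFolder
      End If
    End Sub
    ```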





    In between, there's an OnLogError event handler that also checks for a description starting with "Testcase" and invokes

    Log.Message("Error")





    Now, after a run, I would expect my log to contain a folder "main" with subfolders named according to the descriptions of the relevant items.

    This does not seem to be the case. Where did I go wrong?