jreutter's avatar
jreutter
New Contributor
9 years ago
Status:
New Idea

Diff tool for Project Suite test run result Logs

We use a nightly build and test environment with TestComplete.

The ProjectSuite test run takes 12h and has a huge test result tree.

It is pretty complex to compare the current ProjectSuite test result with the previous one in order to spot new divergences.

 

It would be helpful to see the new errors, fixed errors, new tests, and not-executed tests of a selected test result relative to another selected test result.

16 Comments

  • maximojo's avatar
    maximojo
    Frequent Contributor

    mgroen2 it is still quite coarse. It does not allow you to drill down to individual test items and their reliability over time.

     

    It might be that each test or grouping of tests could be given a GUID so they are uniquely comparable; it would certainly have to be up to the test runner to determine and keep track of these groupings.


    There are certainly requirements to define with this feature, but it would be invaluable in helping determine, e.g.:

     

    - reliability of an area of the application over time, with more granular control over defining the metrics for this

    - reliability of the tests themselves, i.e. whether tests fail because of the script logic or other errors, e.g. object not found, etc.

    - I'm sure there are more

     

    Right now you'd have to create your own scheme and keep track of this yourself, e.g. in a spreadsheet. Very tedious and time-consuming.

     

    And once you have a lot of tests, the most time-consuming part can be going through the logs after the test runs. This feature would make that process faster.

  • mgroen2's avatar
    mgroen2
    Super Contributor

    maximojo I think maybe it's better if TestComplete provided an export function including details of all test items, to .xlsx or .csv files for example.

     

    That way the comparison (and calculation) could be done more easily in tools better suited for it (Excel-like programs).

     

    Don't forget TestComplete is a testing tool, not a spreadsheet/analysis tool.

  • maximojo's avatar
    maximojo
    Frequent Contributor
    Because we all love spreadsheets :)
    I will let SmartBear figure out the rest. I think they have enough input from us. The rest is market research and how they want to position their products.
  • jreutter's avatar
    jreutter
    New Contributor

    I found a solution for us... (just an option for other stakeholders)

     

    Yesterday I created a TestComplete script method that adds a test item's name and result to a csv file.

    This method is called by the OnStopTest event for each test item, and the current result is flagged by the OnLogError event.

    New test items are appended to the csv file (their result in the correct test run column); disabled test items that are already known in the csv file register a space.

    The csv file handling is executed only when an entire test suite run is in progress (not when only a single test item is executed).

     

    csv file content example:

    Testrun ;TR1 ;TR2 ;TR3 ;TR4 ;TR5

    TestItem1 ;passed ;passed ;failed ;passed ;passed

    TestItem2 ; ;failed ;passed ;passed ;passed

    TestItem3 ;passed ; ; ;passed ;failed

    TestItem4 ; ; ; ; ;passed

    ...

    At the beginning of a test suite run (or at the end), TC edits the csv file and adds " ;" to each line, so existing but not-executed test items are registered and the new results are separated into a new csv column.

     

    After a test suite run is finished, I manually apply conditional formatting rules to the csv file; it then looks like the "NetSparker" screenshot.
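The CSV bookkeeping described above can be sketched in plain Python (TestComplete also supports Python scripting). This is a minimal, TestComplete-agnostic sketch: the event wiring (OnStopTest/OnLogError) is omitted, and the function names `start_new_run` and `record_result` are hypothetical, not part of any TestComplete API.

```python
from pathlib import Path

SEP = ";"

def start_new_run(path, run_name):
    """At suite start, append an empty column for the new run.
    Existing-but-not-executed items keep a blank cell, as described above."""
    p = Path(path)
    if p.exists():
        rows = [line.split(SEP) for line in p.read_text().splitlines()]
    else:
        rows = [["Testrun"]]  # fresh file: header row only
    rows[0].append(run_name)
    for row in rows[1:]:
        row.append("")  # blank = not executed in this run
    p.write_text("\n".join(SEP.join(r) for r in rows) + "\n")

def record_result(path, item_name, result):
    """Called once per test item (e.g. from an OnStopTest handler);
    writes 'passed'/'failed' into the current (last) run column."""
    p = Path(path)
    rows = [line.split(SEP) for line in p.read_text().splitlines()]
    width = len(rows[0])
    for row in rows[1:]:
        if row[0] == item_name:
            row[-1] = result  # known item: fill current run's cell
            break
    else:
        # new test item: pad the earlier run columns with blanks
        rows.append([item_name] + [""] * (width - 2) + [result])
    p.write_text("\n".join(SEP.join(r) for r in rows) + "\n")
```

Calling `start_new_run` once per suite run and `record_result` per test item yields a file shaped like the csv example above, with one column per run.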

  • mgroen2's avatar
    mgroen2
    Super Contributor

    jreutter, very creative solution. Nice job, I would say.

     

    Maybe it's still possible for SmartBear to implement a similar feature for non-script testers?

     

     

  • Manfred_F's avatar
    Manfred_F
    Regular Contributor

    Having a Diff Tool for TC is a great idea.

     

    The result of the diff should be as hierarchical as the log is.

    The diff result should be accessible from scripting, so that I can open/close nodes for the user.
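Until such a diff tool exists in the product, the comparison the original idea asks for (new errors, fixed errors, new tests, not-executed tests) can be computed over a results file shaped like jreutter's csv example. A minimal sketch, assuming that semicolon-separated layout; the function name `diff_runs` and the report keys are hypothetical:

```python
def diff_runs(csv_text, old_run, new_run, sep=";"):
    """Compare two test-run columns of the results csv and bucket each
    test item: new errors, fixed errors, new tests, not-executed tests."""
    rows = [line.split(sep) for line in csv_text.strip().splitlines()]
    header = [c.strip() for c in rows[0]]
    i_old, i_new = header.index(old_run), header.index(new_run)
    report = {"new_errors": [], "fixed_errors": [],
              "new_tests": [], "not_executed": []}
    for row in rows[1:]:
        name = row[0].strip()
        old = row[i_old].strip() if i_old < len(row) else ""
        new = row[i_new].strip() if i_new < len(row) else ""
        if not new:
            report["not_executed"].append(name)   # blank cell = not run
        elif not old:
            report["new_tests"].append(name)      # first appearance
        elif old != "failed" and new == "failed":
            report["new_errors"].append(name)
        elif old == "failed" and new != "failed":
            report["fixed_errors"].append(name)
    return report
```

On the example data in the thread, comparing TR4 against TR5 would report TestItem3 as a new error and TestItem4 as a new test.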