jreutter
New Contributor
9 years ago
Status:
New Idea

Diff tool for Project Suite test run result Logs

We use a nightly build and test environment with TestComplete.

The ProjectSuite test run takes 12 hours and produces a huge test result tree.

It is quite cumbersome to compare the current ProjectSuite test result with the previous one to spot new deviations.

It would be helpful to see the new errors, fixed errors, new tests, and not-executed tests of a selected test result relative to another selected test result.
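A rough sketch of that comparison, assuming each run's log has already been exported to a simple name-to-status mapping (the export step and all names here, such as `diff_runs`, are illustrative, not existing TestComplete features):

```python
# Illustrative sketch only: classify the differences between two test runs.
# Each run is assumed to be a dict of test-item name -> status string
# ("passed", "failed", or "skipped"); exporting a TestComplete log into
# this shape is not shown here.

def diff_runs(old, new):
    """Return the four categories of change between two runs."""
    diff = {"new_errors": [], "fixed_errors": [],
            "new_tests": [], "not_executed": []}
    for name, status in new.items():
        if name not in old:
            diff["new_tests"].append(name)
        elif status == "failed" and old[name] != "failed":
            diff["new_errors"].append(name)
        elif status != "failed" and old[name] == "failed":
            diff["fixed_errors"].append(name)
    for name in old:
        if name not in new or new[name] == "skipped":
            diff["not_executed"].append(name)
    return diff

old_run = {"Login": "passed", "Checkout": "failed", "Search": "passed"}
new_run = {"Login": "failed", "Checkout": "passed", "Report": "passed"}
print(diff_runs(old_run, new_run))
# {'new_errors': ['Login'], 'fixed_errors': ['Checkout'],
#  'new_tests': ['Report'], 'not_executed': ['Search']}
```

A real implementation would read the statuses out of the two selected test logs instead of hard-coded dicts.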

  • jreutter (New Contributor)

    Yes, you're right.

    This is another reason why the TestComplete users should do the DIFF. They know when it makes sense.

    To get a meaningful test report, e.g. after a regression test in Continuous Integration, the test suite run should be reproducible in its test coverage.

    I think the danger is manageable if you compare only the result of each single "Test Item" (new test, new error, fixed, skipped) without its content. The reason for a changed Test Item result must then be checked in the new/old TestComplete test log (linked in the diff tool :smileywink:).
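That danger could even be checked mechanically: before trusting a diff, verify that the two runs cover roughly the same set of Test Items. A rough sketch, where the 0.9 threshold and the helper name `comparable_coverage` are invented examples, not anything TestComplete provides:

```python
# Rough sketch: only treat a diff of two runs as meaningful when they
# cover roughly the same Test Items. The 0.9 threshold is an invented
# example value, not a TestComplete setting.

def comparable_coverage(old_items, new_items, threshold=0.9):
    """True if enough Test Items are shared between the two runs."""
    if not old_items or not new_items:
        return False
    shared = len(old_items & new_items)
    return shared / max(len(old_items), len(new_items)) >= threshold

print(comparable_coverage({"A", "B", "C"}, {"A", "B", "C"}))    # True
print(comparable_coverage({"A", "B", "C", "D", "E"}, {"A"}))    # False
```

Such a guard would flag the "80% of the scripts disabled" scenario discussed in this thread before the diff numbers mislead anyone.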

  • mgroen2 (Super Contributor)

    I think one danger of working with such "diff" reports: what if the same test is run, but with, for example, 80% of the script code disabled in the second run? Then the numbers on the diff report don't make any sense!

    I think this is only workable if you have a solid, stable test framework in place, in which you make no (or only very, very limited) changes to the test logic/scripts themselves.

  • jreutter (New Contributor)

    In our case, we don't use QAComplete.

    We also don't have a 'real test manager'; we work agile in small Scrum teams.

    In my opinion, the people responsible for TestComplete should be able to do the diff of the test logs, because they create, change, execute, and know the test cases, and they can rank the new deviations in this 'list'

    (and then commit this to their test manager or QAComplete).

  • maximojo (Frequent Contributor)

    Certainly, QAC integration could be included at some point, or yes, it could indeed be solely a feature of QAC. However, it would definitely be great to have it in TC, where you could simply select two test runs from the log and say "Show me the differences".

    Perhaps QAC would allow for more detailed reports or trending reports.

    FYI, here's an example of the Netsparker reports I'm talking about. Scroll down midway through the page to "Comparison Reporting", or here's a screenshot. Instead of security-vulnerability-related entries, each row might be a TC test or a group of tests. Each column would be a test run.

    https://www.netsparker.com/blog/releases/announcing-netsparker-21/

  • mgroen2 (Super Contributor)

    Although I like the idea as well, I think this feature request belongs more to the domain of the test manager than to test execution. Maybe QAComplete (the test manager's tool) has a feature to compare test logs? Otherwise, perhaps this is a feature request for QAComplete?

  • maximojo (Frequent Contributor)
    Great idea! Security scanning tools, e.g. Netsparker, have this feature, where you can compare scan results between releases to be sure no vulnerabilities crept in.

    It would be great to be able to filter on what went from pass to fail and vice versa. Great idea!
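That pass/fail filter could be as small as the sketch below, assuming the two runs are available as name-to-boolean mappings; `flipped` is an invented helper, not an existing TestComplete feature:

```python
# Minimal sketch of the suggested filter: list only the tests whose
# outcome flipped between two runs (True = passed, False = failed).
# This is an invented helper, not an existing TestComplete feature.

def flipped(old, new):
    return {name: (old[name], new[name])
            for name in old.keys() & new.keys()
            if old[name] != new[name]}

old_run = {"Login": True, "Search": False, "Export": True}
new_run = {"Login": False, "Search": True, "Export": True}
print(flipped(old_run, new_run))
```

Here "Login" went from pass to fail and "Search" from fail to pass, while the unchanged "Export" is filtered out.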