Forum Discussion

mike_lyons
Occasional Contributor
11 years ago
Solved

How to best monitor testing progress

Hello,

  My company is new to QAComplete.  We have an enterprise version running locally and are using QAComplete to handle our manual and automated tests.  We are not using it for requirements or defects; for those, we have integrated it with Jira, which handles both.

  My question is how to best use the reports and dashboards to see how testing is going.  We have written a bunch of test scripts in the library.  We have then created test sets that group these tests by requirement (based just on the name of the test set).  We have created releases and sprints in QAComplete, with the test sets assigned to the correct sprint.



  What I would like to see is a dashboard that shows me how we are doing with the current sprint.  There is an out-of-the-box dashboard called "Test Runs by Test Set", but there are two problems with that report.  First, if you run a test set more than once, the numbers are added together.  For example, if I have a test set with two tests and run it once, resulting in one passed test and one failed test, then run it again with both tests passing, the dashboard shows the test set with 1 failed test and 3 passed tests.

The second problem is that if I have not run a test set yet, it does not appear on the report at all.



What I expect to see is a report that shows me all the test sets and the result of the last run of each test set.  It would be ideal to also see a report on tests for a given sprint that shows me that X tests passed, Y tests failed, and Z tests have not been executed.
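
To make the logic concrete, here is a rough sketch in Python of the report I am after, run against made-up exported data (nothing QAComplete-specific; all test set names, fields, and values below are hypothetical): count only each test set's most recent run, and still list sets that have never been run.

    from collections import Counter, defaultdict
    from datetime import datetime

    # Every test set planned for the sprint, including ones never run.
    all_test_sets = {"Login", "Checkout"}

    # Hypothetical exported run results: (test_set, run_started, test, status).
    run_records = [
        ("Login", datetime(2014, 6, 1), "valid user", "Passed"),
        ("Login", datetime(2014, 6, 1), "bad password", "Failed"),
        ("Login", datetime(2014, 6, 2), "valid user", "Passed"),
        ("Login", datetime(2014, 6, 2), "bad password", "Passed"),
    ]

    # Find the most recent run of each test set; only that run should count.
    latest_run = {}
    for test_set, started, test, status in run_records:
        if test_set not in latest_run or started > latest_run[test_set]:
            latest_run[test_set] = started

    # Tally statuses from the latest run only, instead of summing every run.
    summary = defaultdict(Counter)
    for test_set, started, test, status in run_records:
        if started == latest_run[test_set]:
            summary[test_set][status] += 1

    for test_set in sorted(all_test_sets):
        # Test sets with no runs at all still show up in the report.
        print(test_set, summary[test_set] if test_set in summary else "not yet executed")

With this sample data, "Login" reports 2 passed and 0 failed (only the second run counts, rather than 3 passed and 1 failed summed across runs), and "Checkout" still appears as not yet executed.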





Am I using the tool incorrectly or organizing my tests wrong?  This seems like something that should be available out of the box.  I don't want to have to purchase Crystal Reports just to build this one report.





Thanks,

Mike


4 Replies

  • kwiegandt
    Occasional Contributor
    I have struggled with the same problem, and I think it is the design of the product that is incorrect, not your use of it.

     

    We need to answer the most basic questions for a sprint.  How many tests need to be run for the sprint?  Of those tests, how many have passed, how many have failed and how many still need to be run?  Every day these statistics have to be sent to management.  I was not able to get these statistics from the system as it is. 

     

    My solution is this.  At the very beginning of the sprint, before any testing begins, I make very large test sets (for example, 500 tests) of all the tests that we have to run for the sprint.  We have a custom field that identifies the tests that need to be run for a release, so using a filter I can easily identify these tests.  I link each of these test sets to the release iteration, run each of them, and immediately end the run as incomplete.  This takes me between 5 and 10 minutes.

    What this does is update two fields for each test that are critical for our stats: Last Run Release and Last Run Status (which is set to “Skipped”).  I can then use the dashboard chart called “Tests by Last Run Status”, select the release iteration, and see the number of tests that I have to run for this iteration shown as “Skipped”.  As people run the tests, “Skipped” gets replaced by “Failed” or “Passed”.

    We have built two additional dashboard charts based on this one that show us tests that have not been run (broken down by assignee) and tests that have failed (broken down by assignee).  The issue with these charts is that if you run the same test for multiple releases or release iterations, your prior data gets wiped out.  But that is not an issue for us.

     

    It is not a perfect solution, but it works and it gives us what we need.  We have found the vast majority of canned reports to be absolutely useless and we need an easier solution than Crystal Reports.
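
    For anyone who wants to sanity-check the numbers outside the dashboard, here is a rough sketch of the counting behind this workaround; the field names and data below are made up for illustration and are not the real QAComplete schema.  Seeding every planned test with a Last Run Status of “Skipped” is what makes the “still to be run” figure fall out of a simple count:

        from collections import Counter

        # One record per test: (last_run_release, last_run_status, assignee).
        # Field names and values are hypothetical.
        tests = [
            ("Sprint 12", "Skipped", "kathy"),
            ("Sprint 12", "Passed",  "kathy"),
            ("Sprint 12", "Failed",  "mike"),
            ("Sprint 12", "Skipped", "mike"),
            ("Sprint 11", "Passed",  "mike"),  # earlier iteration, filtered out below
        ]

        iteration = "Sprint 12"

        # The "Tests by Last Run Status" chart: one count per status for the iteration.
        by_status = Counter(s for rel, s, who in tests if rel == iteration)
        print(by_status)  # Counter({'Skipped': 2, 'Passed': 1, 'Failed': 1})

        # The two extra charts: unrun and failed tests broken down by assignee.
        unrun = Counter(who for rel, s, who in tests if rel == iteration and s == "Skipped")
        failed = Counter(who for rel, s, who in tests if rel == iteration and s == "Failed")
        print("not yet run:", dict(unrun), "failed:", dict(failed))

    The sketch also shows the caveat mentioned above: each test carries exactly one Last Run Release and Last Run Status, so running the same test again for a later release or iteration overwrites the earlier values.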

  • mike_lyons
    Occasional Contributor
    Kathy,

      Thank you very much for your detailed reply.  I was worried that I was missing something since what I wanted seemed so basic.  I can't believe we are the only ones who need this ability.  



      I will try what you have suggested.



    Thanks,

    Mike



  • mike_lyons
    Occasional Contributor
    I contacted support to make sure I was not missing anything, and here is their response.  I will try to install the upgrade tomorrow and see if it gives me what I need.



    Dear Mike,



    Thank you for being patient.  In May we sent out an update.  Once the mailed-out update is applied, you will have the features that you requested.  This update includes a test run ad-hoc report.  The update also gives you a spreadsheet of output data that lets you create filters.  You may apply the upgrade if you are on either 9.7.5 (build 9.7.226) or 9.7.0 (build 9.7.139).  The update instructions are available on our website:



    www.softwareplanner.com/alm981updaterguide.pdf



    www.softwareplanner.com/qac981updaterguide.pdf 



    The PDF includes links to the updater utility and the updater package.  Please give the upgrade a try and let me know if you are still having any issues.





    Thank you,

    Support Team
  • mike_lyons
    Occasional Contributor
    After a lot of work, I created new dashboards that seem to work well for me.  I put them in a new thread in this forum titled "A better way to see test status".