Forum Discussion

rbhaskaran
Occasional Contributor
12 months ago

ZephyrScale: How to re-execute the same test cases in a different sprint?


If I clone, the count of test cases increases, which is not right.

If I use "Start new execution", the report count is still incorrect.

For example, in Sprint 1 a test failed, so the failed count increased. When I started a new execution, I expected that failure to become a pass in the counts.

The fail percentage is wrong because that one test case is now in an In Progress state.

If I run the same test in the same sprint, will the previous failure results be overwritten?

 

8 Replies

  • Hi, it all depends on how you set up your test organization (test cycles and plans in particular). Let me try to understand your situation better.

     

    Assume the project has gone through several sprints and is currently in Sprint 10, and that your requirement is to run some test cases from previous sprints again in Sprint 11.

    In that situation, I would create a separate test cycle for each sprint and associate my test cases with that cycle, including all the test cases I want to re-run in Sprint 11. When I commence test execution for the sprint, I simply run all the test cases.

     

    This helps me in multiple ways:

    1. I do not have to re-run anything; I just run the test case for that particular sprint.

    2. Since I am running a (previously run) test case independently in each test cycle, I do not have to select "Start a new execution".

    3. I am not distorting the history of that test case.

    4. If a re-run test case fails, I can trace back how many sprints it has passed or failed in previously, which helps the developers with root cause analysis based on the release notes.
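    If you end up setting this up every sprint, the same workflow can be scripted. Here is a rough sketch in Python that only builds the request payloads for the Zephyr Scale Cloud REST API's `/testcycles` and `/testexecutions` endpoints; the endpoint paths, field names, and key formats shown are assumptions to verify against SmartBear's v2 API documentation before use.

```python
# Sketch: one test cycle per sprint, plus fresh "Not Executed" executions
# for the test cases carried over from earlier sprints.
# Payload shapes are assumed from the Zephyr Scale Cloud v2 API docs.

BASE_URL = "https://api.zephyrscale.smartbear.com/v2"  # Cloud API base (assumed)

def cycle_payload(project_key: str, sprint_name: str) -> dict:
    """Payload for POST /testcycles -- a dedicated cycle for one sprint."""
    return {"projectKey": project_key, "name": f"{sprint_name} regression"}

def execution_payloads(project_key: str, cycle_key: str,
                       case_keys: list[str]) -> list[dict]:
    """Payloads for POST /testexecutions -- one fresh, Not Executed
    execution per carried-over test case, so each sprint's run is
    recorded independently and the test case history is preserved."""
    return [
        {
            "projectKey": project_key,
            "testCycleKey": cycle_key,
            "testCaseKey": case_key,
            "statusName": "Not Executed",
        }
        for case_key in case_keys
    ]

# Example: carry three test cases forward into a Sprint 11 cycle
# ("PROJ", "PROJ-R11", "PROJ-T1" etc. are made-up keys).
cycle = cycle_payload("PROJ", "Sprint 11")
executions = execution_payloads("PROJ", "PROJ-R11",
                                ["PROJ-T1", "PROJ-T2", "PROJ-T3"])
```

    Sending these payloads (with an authenticated POST per payload) is left out deliberately; the point is that each sprint gets its own cycle and its own executions, which is exactly what keeps the counts per sprint correct.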

     

    Hope that helps.

    Happy to take any more questions.

  • MisterB
    Champion Level 3

    Hi hamiltonl,

     

    In your scenario (and probably in most scenarios) I would manage execution and tracking separately from reporting.  It's useful to see the list of test execution statuses within the test player, but fast forward a year and that list may not be as appealing as a report that shows the entire history in an easy-to-digest format.

     

    I would create a dashboard for your current regression testing that reports the status of current testing, adding dashboard parts per function (cycle) if possible (i.e. there's enough space), or rolling those up where necessary into groups of related functions (or creating more dashboards if that better solves the 'not enough space' issue).  I would then probably create a separate dashboard to summarise the overall position of regression testing (release-to-date), again per function or grouped into whatever makes sense to you and your stakeholders (dashboards can be easily shared with stakeholders).  Here's an example of what I mean.  You can imagine that each of those charts would report the progress of testing per function.

     

     

    I would then create reports using the same criteria to present the detail of the above testing, so you get your historic test results list, but in a report instead of the test player.  Reports are dynamic, so you can create them, save them to your favourites (and share them if you like), and re-run them as needed.

     

    I think the combination of dashboards and reports opens up the possibility of using that "Start a new test execution" link in test cases, or of cloning test cycles.

     

    Hope that helps.

     

    Andy

  • MisterB
    Champion Level 3

    I can't really tell exactly what you want help with; there seem to be several problems you are referring to, so I will try to address some of them.  I haven't checked or run a test to be sure, but from memory, when you clone a test cycle that includes executed test cases, the clone does not copy the execution statuses - they are all set to Not Executed.  If you click the "Start new execution" link for a test case, that also changes the status to Not Executed.

     

    If you need more help can you please provide more detail on each one, e.g.

     

    1. How do I re-execute the same test cases in a different sprint?

    2. Cloning a test cycle increases the test case count?

    3. Start new execution causes the ??? report count to be wrong?

    4. Something about the 'same sprint' and previous failure results being overwritten?

     

    • hamiltonl
      New Contributor

      Hey MisterB, I'm not sure if I should ask a separate question for this or if I can just follow up here. I would like help re-executing the same test cases in a different sprint. I know there is a button to "Start a new test execution", but I don't want to have to do that for each individual test case. I would like to start a new test execution for each test case in a cycle. Is there a way to do that? Thanks for your help!

  • MisterB
    Champion Level 3

    Hi.

     

    I agree with the structured approach mohana suggests - it's how I also work, except that instead of creating a new sprint test cycle, I clone the previous sprint's test cycle (which copies the test cycle and all its test cases, and sets each execution status to Not Executed).  From there it's a simple case of removing the test cases that you don't want to re-execute in that cycle.
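    On the follow-up about not wanting to click "Start a new test execution" per case: that step can also be scripted against the Zephyr Scale Cloud REST API.  A hypothetical sketch (the `/testexecutions` endpoint, field names, and auth header are assumptions from SmartBear's v2 API docs - check them for your deployment; the `post` hook below is a made-up helper for dry runs):

```python
import json
import urllib.request

BASE_URL = "https://api.zephyrscale.smartbear.com/v2"  # assumed Cloud API base

def start_new_executions(case_keys, cycle_key, project_key, token, post=None):
    """Create one fresh 'Not Executed' execution per test case in the given
    cycle -- the scripted equivalent of clicking 'Start a new test execution'
    for each case.  `post` can be injected (e.g. for a dry run); by default
    each payload is POSTed to the /testexecutions endpoint."""
    if post is None:
        def post(payload):
            req = urllib.request.Request(
                f"{BASE_URL}/testexecutions",
                data=json.dumps(payload).encode("utf-8"),
                headers={
                    "Authorization": f"Bearer {token}",
                    "Content-Type": "application/json",
                },
                method="POST",
            )
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
    results = []
    for key in case_keys:
        results.append(post({
            "projectKey": project_key,
            "testCycleKey": cycle_key,
            "testCaseKey": key,
            "statusName": "Not Executed",
        }))
    return results

# Dry run: collect the payloads instead of sending them
# ("PROJ", "PROJ-R12", "PROJ-T1" etc. are made-up keys).
sent = []
start_new_executions(["PROJ-T1", "PROJ-T2"], "PROJ-R12", "PROJ",
                     token="unused", post=lambda p: sent.append(p) or p)
```

    Because each new execution starts as Not Executed in its own cycle, the earlier results stay in the test case history instead of being overwritten.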

     

    Cheers, Andy

    • hamiltonl
      New Contributor

      Hi MisterB and mohana

      Thank you for your responses. The problem with cloning or creating a new test cycle is that I would lose the test execution history; I like being able to easily see the previous test runs at the bottom (screenshot attached).

      Let me explain a little better how we're using Zephyr on my team (maybe there's a better way). We use Zephyr for our regression testing. We have test cycles for each large feature, and test cases within those cycles related to that feature. Whenever we have a big release, we like to run full regression and execute every test case we possibly can. This means we are executing multiple test cycles, so it would be nice to easily start a new test execution for all of our test cycles. This would preserve the history of the previous regression testing so we can be better informed (is this test case failing because of a new issue, or has it been broken for a while?).

  • MisterB
    Champion Level 3

    Were you able to resolve this issue - can we close it?