Forum Discussion
Hi hamiltonl,
In your scenario (and probably all scenarios, for that matter) I would manage the execution and tracking separately from the reporting. It's useful to see that list of test execution statuses within the test player, but if you fast-forward a year, that list of results may not be so appealing compared with a report that shows the entire history in an easy-to-digest format.
I would create a dashboard for your current regression testing that reports on the status of current testing, adding dashboard parts per function (cycle) if possible (i.e. there's enough space), or rolling those up where necessary into a group of related functions (or creating more dashboards if that's a better solution to the 'not enough space' issue). I would then probably create a separate dashboard to summarise the overall position of regression testing (release-to-date), again per function or grouped into whatever makes sense to you and your stakeholders (dashboards can be easily shared with stakeholders). Here's an example of what I mean. You can imagine that each of those charts would report progress of testing per function.
I would then create reports using the same criteria to present the detail of the above testing, so you get your historic test results list, but in a report instead of the test player. Reports are dynamic, so you can create them, save them to your favourites (and share them if you like), and re-run them as needed.
I think the combination of dashboards and reports opens up the possibility of using that "Start a new test execution" link in test cases, or of cloning test cycles.
Hope that helps.
Andy