Forum Discussion
Hi Bob,
I'm not aware of any best practice guide beyond what's in the help files, but my approach is to think about what structure works best for reporting on progress. I build my Test Case folders and Test Cycle folders to mirror each other. Those folders tend to be organised by Team or Function, and each has sub-folders to group the tests logically. I put no limit on how many tests go in a cycle, but mine tend to have fewer than 100 tests - perhaps because of the approach I mentioned above.
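For illustration only (the team and feature names here are invented), the mirrored layout looks something like this:

```
Test Cases/                     Test Cycles/
├── Team Alpha/                 ├── Team Alpha/
│   ├── Login/                  │   ├── Login/
│   └── Payments/               │   └── Payments/
└── Team Beta/                  └── Team Beta/
    └── Reporting/                  └── Reporting/
```

Because the two trees match, a progress report on any Test Cycle folder maps straight onto the corresponding Test Case folder.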
As for re-running a Test Plan in its entirety, I've found there's always something that needs changing in subsequent runs, so I tend to clone the test plan/test cycles and amend the clones as needed (adding/removing test cases, etc.). There might be better ways to do that using other features...
Cheers, Andy
Andy,
This was a tremendous help; it explains the workflow you use and how it works in the real world.
Do you have any general words of advice on how detailed to go in test cases / test steps?
Imagine a button press is supposed to do something, and we need to verify it does X in most cases and Y in some odd corner cases. The cases involve 10 settings for option A, 3 settings for option B, and 3 settings for option C. That is 10*3*3 = 90 test cases (steps) to evaluate.
Those 90 test steps then get compounded by the type of button event, the operating state the device is in, and other factors. This is our 'simple' device, and I can already see thousands upon thousands of test steps/cases that seem like overkill to specify - especially for a manual test. The value of testing each of these cases (steps) diminishes quickly after checking a few of the X outcomes, but I would plan on making sure all of the Y outcomes are correct.
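To make the arithmetic concrete, here's a rough Python sketch (the option values and the expects_y predicate are invented; in reality the corner-case classification comes from the spec, not from code). It enumerates all 90 combinations, keeps every Y corner case, and samples only a few of the X cases:

```python
import random
from itertools import product

# Hypothetical settings for the scenario above; the real values
# would come from the device's options A, B, and C.
option_a = [f"A{i}" for i in range(1, 11)]  # 10 settings for option A
option_b = ["B1", "B2", "B3"]               # 3 settings for option B
option_c = ["C1", "C2", "C3"]               # 3 settings for option C

# Full cartesian product: 10 * 3 * 3 = 90 combinations.
all_cases = list(product(option_a, option_b, option_c))
assert len(all_cases) == 90

def expects_y(case):
    """Made-up predicate marking the odd corner cases that should do Y."""
    a, b, c = case
    return b == "B3" and c == "C3"

# Keep every Y corner case, but only sample the X cases,
# since their value diminishes quickly after the first few.
y_cases = [c for c in all_cases if expects_y(c)]
x_sample = random.sample([c for c in all_cases if not expects_y(c)], 5)
reduced_suite = y_cases + x_sample

print(len(all_cases), len(y_cases), len(reduced_suite))  # 90 10 15
```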
Any words of advice on how you handle this?