mgroen2: it is still quite coarse. It does not allow you to drill down to individual test items and their reliability over time.
Each test, or grouping of tests, could be given a GUID so they are uniquely comparable; it would have to be up to the test runner to determine and keep track of these groupings.
There are certainly requirements to define for this feature, but it would be invaluable in helping determine e.g.:
- reliability of an area of the application over time, with more granular control over how the metrics for this are defined
- reliability of the tests themselves, i.e. do the tests fail as a result of the script logic, or of other errors (e.g. object not found)?
- I'm sure there are more
Right now you'd have to create your own scheme and keep track of this yourself, e.g. in a spreadsheet. Very tedious and time consuming.
And once you have a lot of tests, the most time-consuming part can be going through the logs after the test runs; this feature would make that process much faster.
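To make the idea concrete, here is a minimal sketch of what such a GUID-keyed tracking scheme could look like, roughly what you'd otherwise maintain by hand in a spreadsheet. This is not part of any existing tool; the names (`ReliabilityTracker`, `record`, `pass_rate`) are hypothetical, and it assumes the test runner can report a pass/fail result plus an optional failure reason per run.

```python
import uuid
from collections import defaultdict

class ReliabilityTracker:
    """Hypothetical tracker: pass/fail history per test (or test grouping),
    keyed by GUID, so results stay comparable across runs."""

    def __init__(self):
        # GUID -> list of (passed, reason) tuples, one per run
        self.history = defaultdict(list)

    def record(self, test_guid, passed, reason=None):
        # 'reason' lets you separate script-logic failures from
        # environment errors such as "object not found"
        self.history[test_guid].append((passed, reason))

    def pass_rate(self, test_guid):
        runs = self.history[test_guid]
        if not runs:
            return None
        return sum(1 for passed, _ in runs if passed) / len(runs)

    def failure_reasons(self, test_guid):
        return [r for passed, r in self.history[test_guid] if not passed]

# Usage: one GUID per grouping, recorded across two runs
login_tests = str(uuid.uuid4())
tracker = ReliabilityTracker()
tracker.record(login_tests, True)
tracker.record(login_tests, False, reason="object not found")
print(tracker.pass_rate(login_tests))        # 0.5
print(tracker.failure_reasons(login_tests))  # ['object not found']
```

The point of the GUID key is stability: test names get renamed and regrouped, but a GUID assigned once by the runner keeps the history comparable over time.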