Automation Execution Report
Apart from TestComplete logs, has anybody worked with different kinds of reports that can be generated for each test case? Something like HTML, Excel, or another format that would contain the expected result and actual result for each step, etc.
Yes, I have worked with both Excel and HTML reporting formats. Can you please let me know exactly what you are looking for, and which scripting language you are using? You don't say specifically in your post. As a heads up, I am attaching a screenshot of a sample Excel result file, which might be of some help in designing a reporting structure to suit your project's needs.
I seldom use the built-in reporting. I wrote all my own custom reporting, which is now embedded in my framework. The built-in stuff is only really used by me while building and debugging test fixtures. Live runs almost exclusively use my own results and logs.
Input data is all provided on Excel sheets (plus small text, SQL, or whatever other files are required; these are referenced from the main input sheet). These Excel sheets then become the results files. The framework writes one result sheet per test pack. At the end it compiles a summary sheet with headline numbers for all packs in the run, plus overall totals and percentages. The summary sheet contains clickable links to the results sheet and the log file for each pack. (I write my own log files too.) Input data and expected results are all present, since the input sheet is used for results as well.
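To illustrate the summary-compilation step described above, here is a minimal sketch in Python. The function name, the pack names, and the count keys are all my own assumptions for illustration; the real framework reads these figures from the per-pack Excel result sheets.

```python
# Hypothetical sketch: compiling per-pack headline numbers into
# overall totals and a pass percentage, as the summary sheet does.

def compile_summary(pack_results):
    """pack_results: {pack_name: {"passed": n, "failed": n, "errors": n}}
    Returns (per-pack rows, overall totals, overall pass percentage)."""
    rows = []
    totals = {"passed": 0, "failed": 0, "errors": 0}
    for pack, counts in sorted(pack_results.items()):
        total = sum(counts.values())
        pct = 100.0 * counts["passed"] / total if total else 0.0
        rows.append((pack, counts["passed"], counts["failed"],
                     counts["errors"], round(pct, 1)))
        for key in totals:
            totals[key] += counts[key]
    grand_total = sum(totals.values())
    overall_pct = 100.0 * totals["passed"] / grand_total if grand_total else 0.0
    return rows, totals, round(overall_pct, 1)
```

In a real run each row would also carry the hyperlinks to the pack's results sheet and log file.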
It keeps counts of total test steps: passes, fails, errors, warnings, etc. Everything is timestamped and colour coded. Counts are kept at both the test-step and test-case level (as per TFS, which I link into and write to during runs). One test "pack" contains many TFS test "cases", each of which is made up of multiple TestComplete test "steps".
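A rough sketch of that two-level (step and case) counting, with timestamps per step. The class and method names are assumptions for illustration, not Colin's actual framework code:

```python
import datetime

# Hypothetical sketch of step/case result counting with timestamps.
class ResultCounter:
    LEVELS = ("pass", "fail", "error", "warning")

    def __init__(self):
        self.steps = []        # one (timestamp, case, status) entry per step
        self.case_counts = {}  # per-case tallies, roughly the TFS-level view

    def record(self, case, status):
        assert status in self.LEVELS
        self.steps.append((datetime.datetime.now().isoformat(), case, status))
        counts = self.case_counts.setdefault(case, dict.fromkeys(self.LEVELS, 0))
        counts[status] += 1

    def step_totals(self):
        totals = dict.fromkeys(self.LEVELS, 0)
        for _, _, status in self.steps:
            totals[status] += 1
        return totals
```

The colour coding would then be applied when these counts are written out to the Excel sheet.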
All of this is stored locally or on the network (which is a setting I can adjust).
And at the end, the summary is mailed to a set of people specified in the teardown pack (again, only if required; it's configurable).
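The configurable mail-out step could look something like this sketch using Python's standard library. The server name, sender address, and flag name are placeholders, not anything from the actual framework:

```python
import smtplib
from email.message import EmailMessage

# Hypothetical teardown step: mail the run summary if enabled and
# recipients are configured. Server/sender values are placeholders.
def mail_summary(summary_text, recipients, enabled=True,
                 server="mailhost.example.com",
                 sender="testruns@example.com"):
    if not enabled or not recipients:
        return False  # mailing switched off or nobody to send to
    msg = EmailMessage()
    msg["Subject"] = "Automation run summary"
    msg["From"] = sender
    msg["To"] = ", ".join(recipients)
    msg.set_content(summary_text)
    with smtplib.SMTP(server) as smtp:
        smtp.send_message(msg)
    return True
```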
I also write out a small HTML file with basic headline numbers and file links. If the run is done as part of the build process (some are, some aren't), the build process picks up this HTML file and presents it as a tab on the web page with the build results.
That's how it all works. I can't really post any code examples for this as it's an entire framework so it's a fairly large and complex thing.
@Colin_McCrae Thanks for sharing your experience with reporting. I got some ideas on how to build it.
I will give your approach a try in my script. 🙂
Honestly, I'd like to see the code for this. It appears to be VERY well done and I wonder if I could adapt it for our use.
The sample report looks good. Based on the sample shown, I got a few more ideas to implement for my own purposes:
1. Segregating the AUT, environment, and build details into a separate table.
2. Adding a summary of total test cases executed, showing how many passed and how many failed.
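The two ideas above could be sketched as follows. The field names and table layout are assumptions for illustration:

```python
# Hypothetical sketch of the two report sections: a separate table of
# AUT/environment/build details, and an execution summary of pass/fail counts.
def build_report_sections(env_details, results):
    """env_details: e.g. {"AUT": ..., "Environment": ..., "Build": ...}
    results: list of (test_case_name, "Passed" | "Failed") tuples."""
    env_table = [("Field", "Value")] + sorted(env_details.items())
    passed = sum(1 for _, status in results if status == "Passed")
    summary = {
        "Total executed": len(results),
        "Passed": passed,
        "Failed": len(results) - passed,
    }
    return env_table, summary
```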
@shankar_r Perfect 🙂 Out of curiosity, where does the View hyperlink under the 'Report' column take you? Does it point to the TestComplete logs or to user-defined (I mean developer-defined) custom log files?
Also, as a heads up for my own understanding: is there a way to differentiate between a functional failure (an actual test case failure) and a failure due to object property changes (object-not-found errors)?
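One common approach to that distinction is to classify failures by their error message text. A minimal sketch, assuming some typical "object not found" phrasings (the patterns below are invented examples, not TestComplete's exact messages):

```python
import re

# Assumed message patterns for object-recognition failures; tune these
# to the actual error text your framework or tool produces.
OBJECT_ERROR_PATTERNS = (
    r"object .* not found",
    r"unable to find (the )?object",
    r"window .* was not found",
)

def classify_failure(message):
    """Tag a failure as an object-recognition error or a functional one."""
    lowered = message.lower()
    for pattern in OBJECT_ERROR_PATTERNS:
        if re.search(pattern, lowered):
            return "object_error"       # likely an object map / property change
    return "functional_failure"         # treat everything else as a real failure
```

The report can then count the two categories separately, so a broken object map doesn't inflate the "real" failure numbers.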