Yep.
I seldom use the built-in reporting. I wrote my own custom reporting, which is now embedded in my framework. The built-in stuff only really gets used by me while building and debugging test fixtures; live runs almost exclusively use my own results and logs.
Input data is all provided in Excel sheets (plus small text, SQL, and whatever other files are required, which are referenced from the main input sheet). Those Excel sheets then become the results files: the framework writes a results sheet per test pack, and at the end compiles a summary sheet with headline numbers for every pack in the run plus overall totals and percentages. The summary sheet contains clickable links to each pack's results sheet and to its log file (I write my own log files too). Because the input sheet doubles as the results sheet, the input data and expected results are all right there alongside the results.
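To give a rough idea of the shape of that output (this isn't the framework code; it's a minimal sketch using openpyxl, and the dict layout, file names and statuses are all assumptions):

```python
# Minimal sketch, not the real framework: one results sheet per pack plus a
# summary sheet with headline counts and a clickable link to each pack's log.
# openpyxl, the input layout and the file names are all assumptions.
from openpyxl import Workbook

def write_run_results(pack_results, out_path):
    """pack_results: {pack_name: [(step_name, status, timestamp), ...]}"""
    wb = Workbook()
    summary = wb.active
    summary.title = "Summary"
    summary.append(["Pack", "Steps", "Passed", "Failed", "Log"])

    for pack_name, steps in pack_results.items():
        ws = wb.create_sheet(title=pack_name[:31])  # Excel caps sheet names at 31 chars
        ws.append(["Step", "Status", "Timestamp"])
        for step_name, status, timestamp in steps:
            ws.append([step_name, status, timestamp])

        passed = sum(1 for _, s, _ in steps if s == "pass")
        failed = sum(1 for _, s, _ in steps if s == "fail")
        summary.append([pack_name, len(steps), passed, failed, "log"])
        # Clickable link from the summary row out to the pack's log file
        summary.cell(row=summary.max_row, column=5).hyperlink = f"{pack_name}.log"

    wb.save(out_path)
```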
It keeps counts of total test steps: passes, fails, errors, warnings, etc. Everything is timestamped and colour coded. Counts are kept at both the test step level and the test case level (as per TFS, which I link to and write back to during runs). One test "pack" contains many TFS test "cases", each of which is made up of multiple TestComplete test "steps".
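The counting itself is nothing exotic: conceptually it's just a roll-up from step statuses to case statuses to pack totals, along these lines (the class, method names and pass/fail rule here are purely illustrative):

```python
# Illustrative roll-up only: step results are tallied per status, aggregated
# per TFS test case, and again per pack, so both levels can be reported.
from collections import Counter
from datetime import datetime

class PackTally:
    def __init__(self, pack_name):
        self.pack_name = pack_name
        self.case_counts = {}          # case id -> Counter of step statuses
        self.started = datetime.now()  # when the pack started (the real thing timestamps every entry)

    def record_step(self, case_id, status):
        """status is one of 'pass', 'fail', 'error', 'warning'."""
        self.case_counts.setdefault(case_id, Counter())[status] += 1

    def case_status(self, case_id):
        """A case passes only if none of its steps failed or errored."""
        c = self.case_counts[case_id]
        return "fail" if (c["fail"] or c["error"]) else "pass"

    def totals(self):
        """Pack-level totals across all cases, plus a pass percentage."""
        total = sum(self.case_counts.values(), Counter())
        steps = sum(total.values())
        pct = 100.0 * total["pass"] / steps if steps else 0.0
        return {"steps": steps, "pass": total["pass"], "fail": total["fail"],
                "error": total["error"], "warning": total["warning"],
                "pass_pct": round(pct, 1)}
```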
All of this is stored either locally or on the network (a setting I can adjust).
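The settings could be as simple as this (keys, paths and the recipient address are made up for illustration; the real framework keeps its own configuration):

```python
# Hypothetical settings block: where results land, and whether/where the
# summary gets mailed. Every key and value here is an assumption.
import os

SETTINGS = {
    "results_root": r"\\fileserver\qa\results",  # or e.g. r"C:\TestResults" for local
    "mail_summary": True,
    "mail_recipients": ["qa-team@example.com"],
}

def results_dir(run_id):
    """Build (and create if needed) the folder for this run's output."""
    path = os.path.join(SETTINGS["results_root"], run_id)
    os.makedirs(path, exist_ok=True)
    return path
```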
The summary is then mailed to a set of people specified in the teardown pack at the end of the run (if required; again, it's configurable).
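I can't show the framework's mailer, but the idea is no more than the following sketch (server name, sender and recipients are placeholders):

```python
# Generic mail-the-summary sketch, not the framework's actual mailer.
import os
import smtplib
from email.message import EmailMessage

def mail_summary(summary_text, recipients, attachment_path=None):
    msg = EmailMessage()
    msg["Subject"] = "Test run summary"
    msg["From"] = "test-runner@example.com"   # placeholder sender
    msg["To"] = ", ".join(recipients)
    msg.set_content(summary_text)

    if attachment_path:
        # Optionally attach the summary workbook or log file
        with open(attachment_path, "rb") as f:
            msg.add_attachment(f.read(), maintype="application",
                               subtype="octet-stream",
                               filename=os.path.basename(attachment_path))

    with smtplib.SMTP("smtp.example.com") as server:  # placeholder server
        server.send_message(msg)
```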
I also write out a small HTML file with basic headline numbers and file links. If the run is done as part of the build process (some are, some aren't), the build picks up this HTML file and presents it as a tab on the web page with the build results.
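The HTML file is deliberately tiny; something along these lines (layout, file names and the totals/links parameters are illustrative):

```python
# Illustrative only: a tiny HTML page with headline numbers and links to the
# per-pack results and logs, for the build results page to pick up as a tab.
def write_html_summary(totals, links, out_path="run_summary.html"):
    rows = "".join(f"<li><a href='{href}'>{label}</a></li>" for label, href in links)
    html = (
        "<html><body>"
        "<h1>Test run summary</h1>"
        f"<p>Steps: {totals['steps']} | Passed: {totals['pass']} | "
        f"Failed: {totals['fail']} | {totals['pass_pct']}% passed</p>"
        f"<ul>{rows}</ul>"
        "</body></html>"
    )
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(html)
```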
That's how it all works. I can't really post the actual framework code for any of this, as it's an entire framework and a fairly large and complex thing.