My tests produce a small HTML summary file when the run completes. TeamCity picks this up and includes it as a clickable link on the build results page once the build has finished and the tests have run.
TeamCity looks at a couple of tags within the file to determine whether the run overall was a pass or a fail. Clicking the link gets you the headline figures (times, tests attempted/passed/failed ... that sort of thing), and there are links on there that take you directly to the more detailed results and log files, so you can drill down to as much detail as you need. The starting point for all of this is the project page on TeamCity.
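To make that concrete, here is a rough sketch of what a summary file of this kind might look like. The tag names, figures and link targets are all invented for illustration (the real file will differ); the idea is just a machine-readable pass/fail marker plus headline numbers and drill-down links:

```html
<!-- Hypothetical layout; the actual tags and values will differ. -->
<html>
<body>
  <!-- The tag the build parses to decide overall pass/fail. -->
  <div id="overall-result">FAIL</div>

  <!-- Headline figures: run time plus attempted/passed/failed counts. -->
  <table>
    <tr><td>Duration</td><td>00:42:10</td></tr>
    <tr><td>Attempted</td><td>120</td></tr>
    <tr><td>Passed</td><td>117</td></tr>
    <tr><td>Failed</td><td>3</td></tr>
  </table>

  <!-- Drill-down links into the detailed results and log files. -->
  <a href="results/detail.html">Detailed results</a>
  <a href="logs/run.log">Run log</a>
</body>
</html>
```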
I should add, I run my own framework which produces its own results and log files. So everything is proprietary and built to allow multiple layers of drill-down, as not everyone wants the same thing from a set of test results: managers want headline figures, while test and dev analysts want the detail of what a test did and EXACTLY where it went wrong.
I'll also add that I didn't do the TeamCity setup; we have a build engineer who dealt with that. I simply provided him with the project repo location in version control, ditto where all the test packs are, and set up a BAT file to start the test. (Which is simple - every project or test I produce runs off the same "Driver" script unit. That reads in all the user-populated keywords and data to determine which tests are actually run, so the BAT file is always the same.) The run finishes by outputting the HTML file containing the headline figures and links to more detail.

So the build engine pulls down the latest TC script code and the latest test packs, runs, checks the exit code from TE, looks for the HTML file, parses it (if present), and writes appropriate output to TeamCity (a sketch of the wrapper is below). Having everything data-driven, and being able to (initially) report at a VERY high level, makes this WAY easier.
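For illustration only, a minimal sketch of what such a wrapper BAT file could look like. The file names here (Driver.bat, RunSummary.html) are assumptions, not the actual setup, and the parsing of the summary itself is left to whatever the build side does with it; the `##teamcity[...]` lines are standard TeamCity service messages:

```bat
@echo off
REM Hypothetical wrapper - real paths, pack and file names will differ.
REM The same driver runs every project; keywords/data decide what executes.

REM Run the suite via the common driver script (name assumed).
call Driver.bat

REM The runner's exit code tells the build whether the run itself completed.
if errorlevel 1 (
    echo ##teamcity[buildProblem description='Test run failed to complete']
    exit /b 1
)

REM The driver is expected to have written the HTML summary (name assumed).
if not exist RunSummary.html (
    echo ##teamcity[buildProblem description='No HTML summary produced']
    exit /b 1
)

REM Publish the summary so it appears as a clickable link on the build page.
echo ##teamcity[publishArtifacts 'RunSummary.html']
exit /b 0
```

In practice the clickable link on the build page would be wired up either by publishing the file as an artifact (as above) or as a custom report tab in TeamCity; the post doesn't say which approach the build engineer used here.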