Forum Discussion
The problem is that SaveLogAs will save out the ENTIRE log each time. Some of my automation session logs take over 5 minutes to export after TE is done, so towards the end of my test run each SaveLogAs call ends up taking minutes (if called by the timer).
Thanks, I will look at the test item option, though I suspect that, unless SmartBear adds it, I will have to come up with my own custom solution. Shouldn't be too hard. I will report back here when I'm done.
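For what it's worth, one way such a custom solution could avoid re-exporting the whole log is TestComplete's OnLogError event. Below is a minimal sketch in Python, assuming an Events project item is wired to OnLogError; the feed path is hypothetical, and it only appends the newly posted error, so an external watcher reads just the delta rather than a full export:

```python
# Hypothetical TestComplete event handler (Python script unit).
# Assumes an Events project item routes OnLogError here; TestComplete
# passes a LogParams object whose MessageText/AdditionalText fields
# describe the error being posted to the log.
import datetime

ERROR_FEED = r"C:\TestLogs\error_feed.txt"  # hypothetical path

def GeneralEvents_OnLogError(Sender, LogParams):
    # Append only the new error, instead of exporting the whole log,
    # so an external monitor can tail this file in near real time.
    with open(ERROR_FEED, "a") as feed:
        stamp = datetime.datetime.now().isoformat()
        feed.write("%s\t%s\t%s\n" %
                   (stamp, LogParams.MessageText, LogParams.AdditionalText))
```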
maximojo wrote:
The problem is SaveLogAs will save out the ENTIRE log each time.
There is an option under your project properties: go to Tools -> Current Project Properties -> Playback, and at the bottom there is a setting to "Save log every ____ minutes". Note that this doesn't do the export; it just does a native save of the log file every few minutes. The idea is specifically aimed at what you are looking for: if TestComplete crashes, the environment crashes, etc., you don't lose your log. It's not a delta export to a central location, but it does create a log backup for you in the event of an unforeseen circumstance.
- maximojo, 9 years ago (Frequent Contributor)
Thanks, tristaanogre.
However, if you look at the original question, I am looking to monitor the logs at runtime so I can know if error conditions occur. I do know about the "save log every X minutes" option, as I was stung by that years ago when TC crashed towards the end of a long-running test, leaving me with no logs.
Live and learn though :)
- tristaanogre, 9 years ago (Esteemed Contributor)
One thing I've been working on is a data-driven framework built around a set of SQL tables. One requirement of this framework is that it provide semi-real-time indications of what is going on in the test execution. When a test run kicks off, we update a status to say it is In Progress. Each test scenario within that run has its own status setting: In Progress, Passed, or Failed. At the end of the run, we roll up the results of all those individual test cases into an overall Passed or Failed status for the run.
Obviously, this is pretty high level and doesn't go into the detail that the TestComplete native log does, but we do get real-time monitoring of test runs. If I wanted to, I could even write details out to an SQL table as a log, linked back to those test case and test run records, so we could get real-time information at that level too. We're not interested in that much detail, but it's something you could achieve with relatively minor effort.
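A rough sketch of that status-table idea, using Python with sqlite3 as a stand-in for whatever SQL server the framework actually targets; the table and column names (test_runs, test_cases, status) are assumptions for illustration, not the real schema:

```python
# Minimal sketch of run/case status tables with an end-of-run roll-up.
# sqlite3 stands in for the real SQL server; schema is hypothetical.
import sqlite3

conn = sqlite3.connect("test_status.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS test_runs (
    run_id  INTEGER PRIMARY KEY,
    status  TEXT NOT NULL DEFAULT 'In Progress'
);
CREATE TABLE IF NOT EXISTS test_cases (
    case_id INTEGER PRIMARY KEY,
    run_id  INTEGER NOT NULL REFERENCES test_runs(run_id),
    name    TEXT NOT NULL,
    status  TEXT NOT NULL DEFAULT 'In Progress'
);
""")

def start_run():
    # New runs begin as In Progress, visible immediately to any monitor.
    cur = conn.execute("INSERT INTO test_runs DEFAULT VALUES")
    conn.commit()
    return cur.lastrowid

def set_case_status(case_id, status):
    # Called as each scenario passes or fails during the run.
    conn.execute("UPDATE test_cases SET status = ? WHERE case_id = ?",
                 (status, case_id))
    conn.commit()

def roll_up(run_id):
    # At the end of the run, the run fails if any case failed.
    (failed,) = conn.execute(
        "SELECT COUNT(*) FROM test_cases WHERE run_id = ? AND status = 'Failed'",
        (run_id,)).fetchone()
    conn.execute("UPDATE test_runs SET status = ? WHERE run_id = ?",
                 ("Failed" if failed else "Passed", run_id))
    conn.commit()
```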
Just a thought...
- maximojo, 9 years ago (Frequent Contributor)
tristaanogre, funny you should say that. I did EXACTLY that at my last employer, and it worked well!
We then combined it with a Confluence SQL plugin that let you run SQL queries from a page and display the results in Confluence. Worked like a charm! I also coupled it with a Confluence graph plugin so it showed a little execution bar indicating how far along the run was in a certain test item.
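A hypothetical version of the progress calculation such a Confluence SQL macro might run to drive that execution bar, assuming the test_cases schema sketched above:

```python
# Percent-complete for one run: cases that have left 'In Progress'
# divided by all cases in the run. Schema names are assumptions.
import sqlite3

def progress(conn: sqlite3.Connection, run_id: int) -> float:
    done, total = conn.execute(
        """SELECT SUM(CASE WHEN status <> 'In Progress' THEN 1 ELSE 0 END),
                  COUNT(*)
           FROM test_cases WHERE run_id = ?""", (run_id,)).fetchone()
    return 100.0 * (done or 0) / total if total else 0.0
```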
If you make that public, that would be awesome for sure and would go a long way toward an atomic-level, real-time solution.