Examples of how you've implemented automatic test rerun on failures
I found some old threads on rerunning tests automatically on failure, and I know this is an open feature request. We are implementing our own solution for automatically rerunning failed tests, but I wanted to see if others could share what they have implemented to inform our approach.
What we have started implementing uses the OnStopTest event to check whether the test failed, and then reruns it using the information in Project.TestItems.Current. That seems like it will work well enough, though as-is it limits us to one rerun per test; that's what we'd like to start with, but it would be nice to easily allow more than one rerun in the future. The other tricky part is that we want to suppress errors using the Error events, so that if a test passes on the second attempt it shows up as passing in TestComplete. I think that will work, but it's messy, with edge-case issues. I don't expect all the little issues to be resolved here; I'm just wondering what other people have done and what worked well or didn't. Code examples would be great when possible.
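To make the policy we're aiming for concrete, here is a minimal, TestComplete-agnostic sketch of the control flow. The function and parameter names (`run_with_retries`, `max_retries`) are hypothetical, and the actual wiring into the OnStopTest handler and the Error events is not shown; this is just the retry logic in plain Python:

```python
# Hypothetical sketch: generic "rerun on failure" wrapper.
# In TestComplete this logic would live in an OnStopTest handler;
# here it is plain Python so the control flow is easy to see.

def run_with_retries(test_func, max_retries=1):
    """Run test_func; rerun it up to max_retries times on failure.

    Returns (passed, attempts). Failures on earlier attempts are
    swallowed (the equivalent of suppressing Error events) so the
    test reports as passing if any attempt succeeds.
    """
    attempts = 0
    last_error = None
    while attempts <= max_retries:
        attempts += 1
        try:
            test_func()
            return True, attempts
        except AssertionError as exc:
            last_error = exc
    # All attempts failed: surface the last error.
    raise last_error

# Example: a flaky test that fails once, then passes.
state = {"calls": 0}
def flaky_test():
    state["calls"] += 1
    if state["calls"] < 2:
        raise AssertionError("intermittent failure")

passed, attempts = run_with_retries(flaky_test, max_retries=1)
# passed is True, attempts is 2: the first failure was suppressed.
```

Because `max_retries` is a parameter rather than hard-coded, extending from one rerun to several later is a one-line change at the call site.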
We are using python scripting in TestComplete 14.3 automating a Windows desktop application.
For my part, I mainly use Excel as the data source, and in that spreadsheet I have columns with maxRetryAllowed and groupRetry values.
In the script I use code to apply these values; groupRetry is a boolean telling the script whether all tests in the same group need to be rerun on error, or only the current one.
I also add some intelligence to the script to adjust the retry action. For example, on a web test, if the error is HTTP 401 there is no need to retry; if the error is HTTP 500, add a delay and force exactly one retry; on a desktop test, if the error is a low-level Windows alert, don't retry; and so on.
I never use Stop on Error; I manage everything in code.
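The "adjust the retry action by error type" idea could be sketched like this. The error-kind strings, the function name `retry_policy`, and the return shape are all my own illustrative choices; only the rules themselves (401 means no retry, 500 means delay plus one retry, a low-level Windows alert means no retry, otherwise use the data-driven maxRetryAllowed) come from the post above:

```python
# Hypothetical sketch of error-aware retry policy selection,
# mirroring the rules described in the post. maxRetryAllowed is
# the per-test value read from the Excel data source.

def retry_policy(error_kind, max_retry_allowed):
    """Return (retries, delay_seconds) for a given kind of error."""
    if error_kind == "http_401":
        return 0, 0   # auth failure: retrying won't help
    if error_kind == "http_500":
        return 1, 5   # server hiccup: wait a bit, then exactly one retry
    if error_kind == "windows_alert":
        return 0, 0   # low-level OS alert: investigate, don't retry
    return max_retry_allowed, 0  # default from the data source

retries, delay = retry_policy("http_500", max_retry_allowed=3)
# retries is 1, delay is 5
```

Keeping the policy in one small function like this makes it easy to add new error kinds without touching the rerun loop itself.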
I haven't implemented any "retry" logic in any test frameworks I've created. The reason is that if an error occurs, it means one of two things: 1) either the automation code I wrote is faulty and I need to investigate and correct the fault before a rerun, or 2) there is an actual bug in the code that needs to be investigated and reported. All a rerun will do is confirm one of those two things. Even if a rerun passes... to me, as a tester, that's a problem, because it means something failed the FIRST time but didn't fail the second, which means that either my faulty code is intermittent or the bug is intermittent. Either way, I'm back where I started, which means investigating why the error happened in the first place.
IF you want to do so... I'd go with @BenoitB's suggestion. Somewhere in your framework, when you construct the list of tests being executed, you build in the retry logic. This is a little harder to do if you're using only what TestComplete provides out of the box with regard to Test Items and their linear execution. Using something like a table-based approach allows you to build your own "list" of tests to execute. Likewise, I think some of the other CI tools like Jenkins, QAComplete, and Jira allow you to mark items for a retest automatically.
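A table-driven runner along those lines might look like the following. The table columns (name, callable, max retries) and the `run_all` function are illustrative names of mine, not TestComplete APIs; the point is only that owning the test list lets you attach per-test retry counts:

```python
# Hypothetical sketch: a table-based test list with per-test retry
# counts, instead of TestComplete's linear Test Items execution.

def always_fails():
    raise AssertionError("bug that reproduces every time")

tests = [
    # (name, callable, max_retries)
    ("login",        lambda: None, 1),
    ("flaky_search", always_fails, 2),
]

def run_all(tests):
    """Run each test, retrying failures up to its max_retries value."""
    results = {}
    for name, func, max_retries in tests:
        for attempt in range(max_retries + 1):
            try:
                func()
                results[name] = ("passed", attempt + 1)
                break
            except AssertionError:
                results[name] = ("failed", attempt + 1)
    return results

results = run_all(tests)
# results == {"login": ("passed", 1), "flaky_search": ("failed", 3)}
```

The same table could carry a group column to support BenoitB's groupRetry behaviour, or be loaded from Excel rather than hard-coded.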