Consistency running test suites in TestComplete vs. TestExecute
Greetings,
I am curious whether anyone has noticed differences in test consistency between TestComplete and TestExecute. Our organization's workflow consists of creating groups of keyword tests and scripts in TestComplete on a dedicated QA machine; those tests are then run via TestExecute on a second dedicated QA machine. We currently run a core group of tests in staging and development environments, on three browsers (Chrome, Edge, Firefox). These suites run twice a day, but we are having trouble establishing a baseline of 100% success. When run in TestComplete, the suites pass at 100%. However, when the same suites run in TestExecute, approximately 5-15% of the tests fail. Has anyone else noticed anything similar? The types of failures and the particular failing tests are not consistent from browser to browser or from environment to environment.
Not on this end... at least not in major ways.
One thing to keep in mind has to do with execution speed. I have noticed that automations run in TestComplete run a little slower... there's the overhead of the debugger, the IDE, etc. TestExecute is lighter weight, so tests run faster. We have noticed inconsistency ONLY with regard to timing issues... automation runs faster on TestExecute, so we need to be more diligent and consistent about coding those timings properly... using "Wait", "WaitChild", etc. That's the advice I'd give you.
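To illustrate the point: TestComplete's `WaitChild`/`WaitProperty` methods are bounded, explicit waits rather than fixed delays. A minimal sketch of that pattern in plain Python (the `wait_for` helper and the usage below are illustrative, not TestComplete's actual API):

```python
import time

def wait_for(condition, timeout=10.0, interval=0.25):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    Returns the truthy result, or None on timeout. This mirrors the idea
    behind explicit waits like WaitChild: wait only as long as needed,
    up to a bound, instead of sleeping for a fixed interval.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    return None

# Hypothetical usage: poll until a simulated element becomes available.
ready_at = time.monotonic() + 0.5
element = wait_for(lambda: time.monotonic() >= ready_at and "button", timeout=5)
print(element)
```

Because the wait resolves as soon as the condition holds, the same test stays correct whether the run is slow (TestComplete, debugger attached) or fast (TestExecute), whereas a hard-coded `Delay()` tuned on one tool can fail on the other.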