Forum Discussion

bmarcantel
New Contributor
5 years ago
Solved

Consistency running test suites in Test Complete vs. Test Execute

Greetings, 

I am curious if anyone has noticed differences in test consistency between TestComplete and TestExecute. Currently our organization's workflow consists of creating groups of keyword tests and scripts in TestComplete on a dedicated QA machine. These tests are then run via TestExecute on a second dedicated QA machine. We are currently running a core group of tests in stage and development environments and on three browsers (Chrome, Edge, Firefox). These suites are run twice a day. However, we are having some issues achieving a baseline of 100% success. When run in TestComplete, the suites pass with 100%. However, when running the same suite in TestExecute, approximately 5-15% of tests fail. Has anyone else noticed anything similar? The types of failures and particular failing tests are not always consistent from browser to browser or between environments.


4 Replies

  • tristaanogre
    Esteemed Contributor

    Not on this end....at least not in major ways.

    One thing to keep in mind has to do with execution speed.  I have noticed that automations run on TestComplete run a little slower... there's the overhead of the debugger, the IDE, etc.  TestExecute is lighter-weight and so tests run faster.  We have noticed inconsistency ONLY with regards to timing issues... automation runs faster on TestExecute, so we need to be more diligent and consistent with proper coding for these timings... use of "Wait", "WaitChild", etc.  That's the advice I give you.
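    The timing advice above boils down to polling for readiness rather than assuming it. Here is a minimal sketch of that pattern in plain Python — this is a generic illustration, not TestComplete's actual API (TestComplete provides its own "Wait", "WaitChild", and "WaitProperty" methods); `find_button` is a hypothetical stand-in for whatever object lookup your framework provides:

```python
import time

def wait_until(condition, timeout=10.0, poll_interval=0.25):
    """Poll `condition` until it returns a truthy value or `timeout` expires.

    Instead of a fixed sleep, keep re-checking readiness, so the same test
    adapts to whatever speed the runner (TestComplete or TestExecute)
    executes at.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll_interval)
    raise TimeoutError("condition not met within %.1f s" % timeout)

# Hypothetical usage: wait for a page element to exist before clicking it.
ready_at = time.monotonic() + 0.5  # simulate a control appearing after 0.5 s
def find_button():
    return "button" if time.monotonic() >= ready_at else None

button = wait_until(find_button, timeout=5.0)
```

    Because the loop returns as soon as the condition holds, a fast runner like TestExecute loses no time, while a slow environment still gets the full timeout budget.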

    • bmarcantel
      New Contributor

      This is consistent with what I had noticed watching the tests execute, and was my main hypothesis. I'll experiment with how and where adding "Wait" calls in the scripts affects things.


      Thanks!

    • bmarcantel
      New Contributor

      Two quick follow up questions:


      1. Why is adding delays necessary for the automation to run properly? i.e., does TestComplete/TestExecute have the equivalent of a document onload or onready listener that it uses to check that page resources, DOM elements, etc. have finished loading?


      2. Assuming no changes to a site and a static group of well-written, fully passing tests, is the expectation of a recurring 100% success rate reasonable?

      • tristaanogre
        Esteemed Contributor

        1) Not delays necessarily in the way of hard-coded "sleeps" or calls to aqUtils.Delay... but "smart" wait times.  And yes, it does... look up the documentation on the "Page" object and specifically the "Wait" method.  Why is this necessary?  Because TestComplete attempts to execute as quickly as possible.  There are some global timeout values for waiting when a component appears, but they aren't always sufficient.  So, to allow the application to "think" until it's ready before interacting with the UI, you need these waits.  Consider a human being: before clicking on a button, that human "waits" until the button is ready to be clicked.  When writing test automation, you are acting like a robotic human, so you need to program such "wait" calls into the code.  Common practice... regardless of tool.
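        The distinction drawn here — hard-coded sleeps versus "smart" waits — can be shown with a toy simulation. This is a generic Python sketch, not TestComplete code; `ready_at` simulates a UI control that becomes ready at an unpredictable moment:

```python
import time

# Fragile approach: a fixed delay (akin to a hard-coded aqUtils.Delay call).
# It must be tuned for the slowest environment, yet still fails whenever
# the application happens to be slower than the chosen delay.
def click_with_fixed_delay(ready_at, delay=0.2):
    time.sleep(delay)
    return time.monotonic() >= ready_at  # True only if we waited long enough

# Robust approach: poll readiness up to a timeout (the idea behind
# methods like Page.Wait or WaitChild). It succeeds as soon as the
# control is ready and only fails past a generous deadline.
def click_with_smart_wait(ready_at, timeout=1.0, poll=0.02):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if time.monotonic() >= ready_at:
            return True
        time.sleep(poll)
    return False
```

        If the control takes 0.3 s to appear, a fixed 0.2 s delay fails every time, while the smart wait with a 1 s timeout succeeds at roughly 0.3 s — which is exactly why the faster TestExecute runs expose fixed-delay assumptions that TestComplete's slower execution masked.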


        2) No, it's not unreasonable.  But it is not something that just happens... it takes good, smart, robust programming and proper practices to get to that point... bullet-proofing your code, adding in proper wait times, accounting for dynamically changing object identification factors, etc.