Forum Discussion

Vault76er
Occasional Contributor
2 months ago

Tuning TestComplete

Hi,

We have a vendor creating automation of our .NET-based desktop application for us. I am having trouble getting the same results when I run their automation here as when they run it there. I'm hoping someone can help me fine-tune this thing, since TestComplete is kind of useless if the tests aren't reliable. I get about a 75% failure rate when I attempt to run the automation here.

Environment:

  • We are both using the latest version of TestComplete with the Intelligent Quality Add-on.
  • We are both using the same revision of the application to be tested, with the same database.
  • We both have the screen scaled to 125%.
  • I have a second monitor; I'm not sure what they have, but turning my second monitor off does not seem to change the results.

Some typical errors I see when I try to run the automation are:

  • "AttributeError: 'NoneType' object has no attribute 'entertext'" (I find a lot of variations of this with attributes like 'right_click', 'OCR_full_text', 'FindAllChildren', etc.)
  • Unable to find the WinFormsObject("RescheduleDialog"). See details for additional information. (I can't find any details.) I find lots of variations of this one too.
  • The object WinFormsObject("RescheduleDialog") does not exist. (Also lots of variations of this error.)
  • Timeout: object did not become visible within 120 seconds.
  • Exception: object did not become visible.
  • Element exists but it is not visible.

I would assume these are bugs in their automation code, except that they can run it without triggering the errors and I cannot.

Does anyone have any idea what I should be chasing here to help them make these tests robust enough to work regardless of the machine they're installed on?

  • Hello,

    This documentation page https://support.smartbear.com/testcomplete/docs/working-with/best-practices/index.html touches on the subject. TestComplete is well tuned by default and works beautifully as is, but there is no easy answer to your dilemma. I hope the following helps you formulate a course of action.

    You have a lot of variables, and nothing you mentioned is specific. Computer specs, OS image build, and network are also worth considering when isolating the problem, even if they turn out not to be the cause.

    That being said, a large part of automation, as in your situation, is for the developer to run the code in the pipeline on a CI/CD machine. It is the developer's responsibility to stabilize the run on that machine by debugging each failure one at a time. The fact that the code runs on a developer's machine is never an indication that it will run equally well on another machine, and that is rarely achieved on the first try.

    TestComplete does a beautiful job of waiting and of logging information to help the debugging process: every step has log details as well as a screenshot. What matters most is tracing the steps leading up to the error itself, to analyze and understand what prompted that condition. For example: a button click did not actually take effect and the application is no longer on the correct screen to continue, or the button is hidden behind a popup.

    Stabilizing a project is sometimes the hardest thing to do, especially when you run an entire test suite that goes for long hours. Back-end servers and network traffic may get bogged down, machine CPU and memory may get overwhelmed, an antivirus scan or a scheduled Windows optimization may kick in, and all of that can cause, for example, a) a race condition where the automation runs faster than the application responds, or b) attempts to click on a moving target.
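    The usual defense against the race condition in (a) is to poll with a timeout instead of sleeping for fixed intervals. A rough sketch of that pattern in plain Python (this is not TestComplete's actual API; `find_dialog` in the usage comment is a hypothetical lookup function):

```python
import time

def wait_until(condition, timeout=120.0, interval=0.5):
    """Poll `condition` until it returns a truthy value or the timeout expires.

    Returns the condition's result on success, or None on timeout. This is
    the shape of logic that explicit Wait* calls implement for you.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    return None

# Hypothetical usage: wait for the dialog instead of assuming it is there.
# dialog = wait_until(lambda: find_dialog("RescheduleDialog"), timeout=30)
# if dialog is None:
#     raise TimeoutError("RescheduleDialog did not appear")
```

    This "go as fast as possible, wait as long as needed" approach keeps fast machines fast while still tolerating a slow one.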

    Debugging is an art, and a lot of effort goes into debugging and stabilizing automation; it must go as fast as it can and wait as long as needed. Proper access to objects means checking, in sequence:

    1. The object exists
    2. The object is enabled
    3. The object is visible on screen (VisibleOnScreen), not just Visible
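    A minimal sketch of that check order, using a made-up `FakeControl` stand-in (in TestComplete the analogous object properties are `Exists`, `Enabled`, and `VisibleOnScreen`):

```python
def is_ready(obj):
    """Return True only when the object can safely be interacted with."""
    if obj is None or not obj.exists:   # 1. the object exists
        return False
    if not obj.enabled:                 # 2. the object is enabled
        return False
    return obj.visible_on_screen        # 3. on screen, not merely Visible

class FakeControl:
    """Stand-in for a mapped UI object, for illustration only."""
    def __init__(self, exists=True, enabled=True, visible_on_screen=True):
        self.exists = exists
        self.enabled = enabled
        self.visible_on_screen = visible_on_screen
```

    Checking in this order also explains the "Element exists but it is not visible" message: the first check passes while the last one fails, for example when the control is scrolled off screen or covered by another window.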


  • rraghvani
    Champion Level 3

    If I remember correctly, TC doesn't work well with desktop scaling other than 100%. The screen resolution also needs to be a certain size. You should also use the Object Spy tool to ensure the name mappings are correctly defined - a different OS may give different results.

    Ideally, you should go back to the vendor and ask them why it doesn't work for you.

  • JDR2500
    Frequent Contributor

    Debugging these issues without seeing them is challenging.  Problems running on different machines could come from:
    - Performance differences with the environments
    - Different display settings.  You said scaling is the same, but what about screen resolution?  Objects that exist but are not visible can be an indication they are off screen.  Does the automation set the window size for the application under test?  We set it at the beginning of the run so we know it's always the same.
    - Different settings in Tools > Options
    - Name Mapping properties for objects that are sensitive to the environment.  If they are using XY coordinates, you're set up for failure.
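    To illustrate why hard-coded XY coordinates break across machines: a pixel position recorded on a 100% display lands somewhere else entirely under a different DPI scale. A toy calculation (plain Python, not TestComplete code):

```python
def scale_point(x, y, dpi_scale):
    """Where a pixel coordinate recorded at 100% scaling lands under DPI scaling."""
    return (round(x * dpi_scale), round(y * dpi_scale))

# A button recorded at (400, 300) on a 100% machine sits at (500, 375)
# under 125% scaling, so a replayed click at the recorded point misses it.
```

    Object-based Name Mapping avoids this whole class of failure, which is why XY-coordinate clicks are the first thing to rule out.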

    For the issues finding WinFormsObject("RescheduleDialog"): does the dialog appear when you run the test on your machine?  I would use breakpoints and step through so you can see what's happening.