Forum Discussion

lucieneven's avatar
lucieneven
Contributor
13 years ago

How to stop TestComplete from recording HoverMouse() automatically?

Hello,



This is a question about HoverMouse().



I played back the script and an exception popped up. From what I noticed, it was blocked by this method:



Call form.HoverMouse(1276, 22) 



I checked the Help, which says the following:

Description


Use the HoverMouse action to place the mouse pointer at the
specified position within an object. You can use this action to simulate a
hovering (hot-tracking) event over an onscreen object or window.


During my test, I don't know what it refers to, so I commented it out, and on another playback the test went through.



So my concern is: if I don't want TestComplete to be so sensitive, what can I do? Is there any configuration that lets the user lower its sensitivity to the HoverMouse() method?



Thanks so much!



Lucien

3 Replies

  • tristaanogre's avatar
    tristaanogre
    Esteemed Contributor
    I'm not sure if there is a configuration for sensitivity.  However, you've demonstrated yet another reason why simple record/playback is a means of automating tests, not an end.  You can use the record feature of any tool to get started with automating a test, but the best tests, and the ones that are the most robust, meaningful, flexible, reusable, and transportable, are those that have had some manual work done to them: editing, removing "extra" stuff, inserting logging events, etc.



    In short, unless someone comes around and says that, yes, there's a way of turning that off, it is a "best practice" to always do some judicious editing of your recorded tests in order to help them run best.
  • Hello Robert,



    Thanks for your time here. I can't agree with you more when it comes to best practices. :)



    We just bought the license two days ago, so everything is starting from scratch. That's why I have so many stupid questions, hehe.



    By the way, one thing I can't figure out is the stability of the scripts. Sometimes I run a script and see an error like "The object or one of its parent objects does not exist". It's very weird: not always, but a couple of times, I reran the script and it passed through that step. In those cases, I swear there was no change to the script, TestComplete, or the test machine. It was absolutely the same.



    Do you have any suggestions for this case? It is frustrating to see the same script fail when it worked just a while ago!



    Please shed some light here,



    Thanks,



    Lucien
  • tristaanogre's avatar
    tristaanogre
    Esteemed Contributor
    A general policy here on these forums is to try to keep questions as one per thread... just keep that in mind for next time... it makes it easier for folks to search for answers and such without having to sort through long threads...



    In any case, what you've described is a common "problem".  It has to do with what I call "timing issues".  When you record something, there are certain points in the recording process where you wait for a screen to come up before proceeding.  However, the recording does not take that into consideration.  So, when the scripts play back, there is a default "auto-wait" timeout of 10 seconds; if the particular object does not come up within that time, playback fails with your specific error.  In other words, the scripts are playing back faster than the application under test can keep up with.



    There are a number of ways to get around this.  The first is to increase that "auto-wait" timeout (Tools | Current Project Properties | Playback).  However, while more stable, that is still not a complete solution.  Say your playback is on a slower machine than the one you are on now: the auto-wait timeout may be fine for your machine but not for that other one.



    You can hard-code delays in your scripts by adding code like "Delay(nnnn)", where nnnn is the number of milliseconds to delay (10000 = 10 seconds).  But this is also not very reliable, for the same reason as above.
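    For illustration, a hard-coded pause in a VBScript test is a minimal sketch like this (the wait length is arbitrary):

        ' Pause playback for 10 seconds before touching the next object
        Call Delay(10000)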



    So, the ultimate solution is to use the "WaitNNN" methods.  These are special methods built into TestComplete that wait for an object for up to a specified timeout and return that object if it appears within the time limit, or return an empty stub object otherwise.
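    As a rough sketch in VBScript (the alias names here are made up; substitute the ones from your own Name Mapping), a Wait-style check could look like:

        Dim form
        ' Wait up to 15 seconds for the mapped child object to appear
        Set form = Aliases.MyApp.WaitAliasChild("MainForm", 15000)
        If form.Exists Then
          Call form.HoverMouse(1276, 22)
        Else
          Call Log.Error("MainForm did not appear within 15 seconds")
        End If

    Checking the Exists property of the returned object tells you whether the wait succeeded, so you can log a meaningful error instead of failing on a missing object.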



    To start reading on this, check out the following articles:



    http://smartbear.com/support/viewarticle/14496/

    http://smartbear.com/support/viewarticle/12168/