Forum Discussion

dshriver
Occasional Contributor
11 years ago

Problems recording tests

A coworker was showing me how she records tests in TestComplete. I was shocked when it came time to play back the recorded tests, because she had to do a lot of patching work on the recording (work I would have thought would be unnecessary).



For instance, the recording contained steps that she routinely disabled because they seemed spurious and caused problems: bogus clicks, drag operations, and mouse-hover operations (the hovers always seemed to be bogus). Some of the extra steps made a bit of sense to me; for instance, when entering a text value, TestComplete would sometimes capture partial results before capturing the entire value. But many of these patching steps seemed like things we really shouldn't need to do.



Worse yet, some rather inscrutable errors crop up. For instance, a user clicks a "NEXT" button on a form, but for some reason the object clicked in the recorded test is not the "NEXT" button; it is a bogus object reference. Or the form is submitted and, for unknown reasons, the test times out before the reply arrives. The timeouts happen a lot inside the recorded tests, even if we increase the wait times to 30-40 seconds, which seems very odd because when we run the forms outside TC the reply times are typically under 2 seconds.
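
To make the timeout issue concrete, here is a rough sketch of what an explicit wait on the reply would look like if it were scripted (Python-style TestComplete script syntax purely for illustration; the object names pageForm, buttonNext, and panelResults are placeholders, not our real mapping):

form = Aliases.browser.pageForm                  # placeholder mapped page
form.buttonNext.Click()                          # click the "NEXT" button

# Instead of a fixed 30-40 second delay, wait until the reply panel
# reports itself as visible on screen, for at most 40 seconds.
if not form.panelResults.WaitProperty("VisibleOnScreen", True, 40000):
    Log.Error("Reply did not appear within 40 seconds")

Even with the step given a full 40 seconds like this, a reply that arrives in under 2 seconds outside TC never shows up inside it.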



What I'm wondering is whether there is something about our setup/configuration of TestComplete that is generating bogus steps during test recording, and if so, how I can fix it.



The patching of recorded tests currently represents the bulk of the workload (it takes quite a while, because sometimes patching a broken step doesn't work properly the first, second, third, ... tenth time). A form can be recorded in about 5-10 minutes or less, but the patching work can take in excess of one hour.



Some system details:

TestComplete: v10.10 build 752

Running on Windows 7 Enterprise 64-bit

Machine has 8 GB RAM and 2 Xeon CPUs



Not sure that it matters, but I don't work directly on the box where TestComplete is installed; I use a Remote Desktop connection from my machine to the one where TC runs. This can't really be avoided, since the machine running TC sits in a back room full of servers and the people working on it (like me and my coworker) are physically located in non-adjacent states. (If there are better ways of connecting, or better ways of setting up Remote Desktop, let me know.)

2 Replies

Marsha_R
Champion Level 3

I've seen TestComplete enter partial text in a text box before, but that was on version 8 and only at the end of the day when we'd been working on it all day, so I'm guessing that was a memory leak that has since been fixed (at least for us).



    Regarding "bogus" entries, are you saying that TestComplete is adding extra entries in your tests or that you expected it to be self-editing and take out the keystrokes or mouse movements that aren't essential to the test steps?

dshriver
Occasional Contributor

What I call "bogus entries" are any actions that the recording brews up that:



1) aren't necessary to the test (the test can run just fine without them)

2) typically cause errors themselves (so removing them isn't just optional, it is necessary)



I am completely confused about why these steps appear. Two examples I saw while my coworker was recording her test and showing me were:

1) HoverMouse: nothing in our test required the user to hover over anything. Moreover, the HoverMouse actions always seemed to cause an error. My coworker disables any step called "HoverMouse"; she said they frequently crop up in recordings and she ALWAYS disables them.

2) There was another bogus step with "drag" as part of its name. It gave coordinates. Again, nothing she did while recording the test involved dragging anything, and she disabled the step.



-----



Most of the other errors seemed to be the software getting confused about its object identifiers. The obvious one was a button labeled "NEXT" that was clicked in the test, but in the recording that action attached itself to an object that didn't exist. My colleague used a GUI tool to point that step at the "NEXT" button, and it was easily repaired. Unfortunately, other wrong-object-identification errors weren't necessarily so easy to fix. Sometimes an object of type X would be identified as type Y, and naturally the action involved did not apply. In other cases the object would have a bogus reference (as with the "NEXT" button), but pointing it out to TC didn't work smoothly, because after doing so it would generate a new reference for that object (and if old references were already around, TC would get confused).
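
To give a sense of what that repair amounts to, here is a rough sketch (the same illustrative Python-style script syntax as in my original post, with placeholder alias names rather than our real mapping) of acting on an object resolved through Name Mapping instead of whatever raw reference the recorder captured:

page = Aliases.browser.pageOrderForm             # placeholder mapped page

# Resolve the mapped "NEXT" button, waiting up to 10 seconds for it to appear,
# rather than trusting the raw reference the recorder wrote into the step.
nextButton = page.WaitAliasChild("buttonNext", 10000)
if nextButton.Exists:
    nextButton.Click()
else:
    Log.Error('The "NEXT" button was not found on the form')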



Another serious issue was inscrutable timeouts. When I run the web forms outside of TC, I get results in under 2 seconds. For no reason I could figure out, when the same form runs inside TC, the step retrieving the information would time out with no results even if we gave it 40 seconds.



Partial interim results in text fields were another problem, but that one seemed obvious and easy to fix. My colleague simply deleted such steps, and (unlike the HoverMouse, drag, ...) they jump out as being obvious to remove.
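
In script terms that particular fix is trivial: the recorder produces several partial text-entry steps, and the patched test keeps a single step with the complete value (placeholder names again):

# The recorder captured something like SetText("Jo") followed by SetText("John Smith");
# the patched step keeps only the one with the complete value.
Aliases.browser.pageOrderForm.editCustomerName.SetText("John Smith")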



From what I saw, the workload to patch a recorded test is completely crazy. My colleague recorded the test in 5-10 minutes, but then spent over an hour patching problems (and still didn't fix all of them!). I am hoping there is something I can do in the setup of TC that will greatly reduce the number of problems generated. Doing so would make TC much easier to use.