What I call "bogus entries" are any actions that the recorder brews up that:
1) aren't necessary to the test (the test can run just fine without them)
2) typically cause errors outright (so removing them isn't just optional, it's necessary)
I am completely confused about why these steps appear. Two examples I saw while a coworker was recording her test and walking me through it:
1) HoverMouse: nothing in our test required the user to hover over anything. Moreover, the HoverMouse actions always seemed to cause an error. My coworker would disable any step called "HoverMouse"; she said they frequently crop up in recordings and she ALWAYS disables them.
2) there was another bogus step with "drag" in its name, and it gave raw screen coordinates. Again, nothing she did while recording the test involved dragging anything, and she disabled that step too. (A sketch of what these spurious steps look like in script form follows this list.)
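To make this concrete, here is a minimal sketch of what such a recording can look like in TC's Python script mode. The alias names (browser, formPage, textboxUser, buttonNext) are placeholders I made up, but HoverMouse and Drag are the actual method names behind steps like these:

```python
# Sketch of a recorded TestComplete-style script (Python), running in
# the product's script runtime where Aliases is a global object.
# All alias names below are hypothetical placeholders.
def RecordedFormTest():
    page = Aliases.browser.formPage

    # Spurious steps the recorder can insert. The test runs fine
    # without them, and they tend to fail at playback, so they get
    # disabled or deleted:
    # page.buttonNext.HoverMouse()   # bogus hover step
    # page.Drag(512, 304, 60, 5)     # bogus coordinate drag

    # The steps the test actually needs:
    page.textboxUser.SetText("jdoe")
    page.buttonNext.Click()
```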
-----
Most of the other errors seemed to come from the software getting confused about its object identifiers. The obvious case was a button labeled "NEXT" that was clicked during the test, but in the recording that action attached itself to an object that didn't exist. My colleague used a GUI tool to point the step at the actual "NEXT" button, and it was easily repaired. Unfortunately, the other misidentification errors weren't always so easy to fix. Sometimes an object of type X would be identified as type Y, and naturally the recorded action did not apply to it. In other cases the object would have a bogus reference (as with the "NEXT" button), but pointing it out to TC didn't work smoothly, because TC would then generate a new reference for that object (and if old references were already around, TC would get confused).
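To show what the repaired lookup could amount to in script form, here is a sketch that finds the button by its own properties at runtime rather than through a stored reference. The page alias is made up, and the exact property values ("Button", "NEXT") would depend on the real control:

```python
# Sketch: locate the "NEXT" button by its properties instead of a
# stale mapped reference. FindChild returns a stub object whose
# Exists property is False when nothing matches. The page alias is
# a hypothetical placeholder.
def ClickNextButton():
    page = Aliases.browser.formPage
    # Search up to 20 levels deep for a button whose text is "NEXT".
    nextButton = page.FindChild(["ObjectType", "contentText"],
                                ["Button", "NEXT"], 20)
    if nextButton.Exists:
        nextButton.Click()
    else:
        Log.Error('Could not find the "NEXT" button')
```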
Another serious issue was inscrutable timeouts. When I run the web forms outside of TC, I get results in under 2 seconds. For no reason I could figure out, when the same form ran inside TC, the step that retrieves the information would time out with no results even when we gave it 40 seconds.
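For reference, this is the kind of explicit polling wait I'm picturing as a diagnostic (a sketch only, with made-up alias names); even this shouldn't be necessary for a form that responds in under 2 seconds:

```python
# Sketch: poll the result field for up to 40 s instead of relying on
# the recorded step's built-in timeout. Alias names are hypothetical;
# contentText is the text property exposed on web elements.
def ReadFormResult():
    resultField = Aliases.browser.formPage.textResult
    for _ in range(80):                 # 80 checks * 500 ms = 40 s
        if resultField.contentText != "":
            Log.Message("Result: " + resultField.contentText)
            return
        aqUtils.Delay(500)              # pause 500 ms between checks
    Log.Error("Result field still empty after 40 s")
```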
Partial interim entries in text fields were another problem, but that one seemed obvious and easy to fix. My colleague simply deleted such steps, and (unlike the HoverMouse, drag, etc. steps) they jump out as obviously safe to remove.
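For anyone who hasn't seen these, a sketch of what a partial interim entry looks like in script form, again with made-up names:

```python
# Sketch: a partial interim entry as it can appear in a recording.
# Only the final SetText step is needed; the partial Keys step is the
# kind my colleague deletes on sight. Alias names are hypothetical.
userField = Aliases.browser.formPage.textboxUser

userField.Keys("Jo")            # partial interim entry -- safe to delete
userField.SetText("John Doe")   # complete value the test actually needs
```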
From what I saw, the workload to patch a recorded test is completely crazy. My colleague recorded the test in 5-10 minutes but then spent over an hour patching problems (and still didn't fix all of them!). I am hoping there is something I can do in the TC setup that will greatly reduce the number of problems generated. That would make TC much easier to use.