Conceptually, I kind of like the sound of a "scan" feature.
I think it would be best called "scan for changes" (or something similar), but that's a minor detail.
I would NEVER want an automation tool to start updating object maps etc without me checking the changes it wants to make first.
That's fine, and I think it should be configurable how TC updates the model when changes are detected. The best setting depends on how test automation is implemented in the organization and how fast new software is delivered, which determines how much time you have to create "your own" model (name mapping). In the past I worked in non-agile environments and could spend lots of time investigating the name-mapping model, improving it, "playing" with it, etc. It was nice to do but very time consuming. And when you (as a tester) received a new version, you would do all the work again because the name mapping failed. Lots of time was eaten up by "debugging" the name mapping and/or test scripts.
Now I work in an Agile environment, and one of the biggest challenges for testers is keeping up with the speed. With a new delivery every day, you can't keep reworking the name mapping by hand. That's when I started to see that this could be solved by improving the tool. Software is (in theory) extremely good at comparing states. If you run the tool every day against each released version, it should be able to "see" the differences and help the tester with them in some way.
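The comparison idea above can be sketched as a simple diff between two name-mapping snapshots. This is purely illustrative: TestComplete does not expose its NameMapping storage as plain dictionaries, so `diff_name_maps` and the snapshot shape are assumptions for the sake of the example.

```python
# Hypothetical sketch: diff two name-mapping "snapshots" taken from two
# application versions. Each snapshot is assumed to be a dict of
# mapped name -> identification properties (not TestComplete's real format).

def diff_name_maps(old: dict, new: dict) -> dict:
    """Return entries added, removed, or changed between two snapshots."""
    added = {k: new[k] for k in new.keys() - old.keys()}
    removed = {k: old[k] for k in old.keys() - new.keys()}
    changed = {k: (old[k], new[k])
               for k in old.keys() & new.keys()
               if old[k] != new[k]}
    return {"added": added, "removed": removed, "changed": changed}

# Example: yesterday's build vs. today's build.
old_map = {"btnSave": {"WndCaption": "Save"},
           "btnClose": {"WndCaption": "Close"}}
new_map = {"btnSave": {"WndCaption": "Save As"},  # caption changed
           "btnHelp": {"WndCaption": "Help"}}      # new control

report = diff_name_maps(old_map, new_map)
```

The resulting report is exactly what the tester would want to review before any mapping is updated: what appeared, what disappeared, and what changed.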
Letting TestComplete do more "under the hood" frees up time for the tester to work on validation at a higher level of abstraction: functional end-to-end testing, data-driven test validation, or other specific technical areas such as API testing.
Of course I don't expect this to work flawlessly, so it should be built with an "undo" for each change TC commits, whether automatically or after confirmation by the tester.
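The per-change undo could work like a small change log: every update the tool applies to the mapping is recorded, so the tester can roll any of them back. A minimal sketch, with all names (`MappingChangeLog`, `apply`, `undo_last`) invented for illustration; nothing here comes from TestComplete's actual API.

```python
# Hypothetical sketch of "undo per change": each automatic mapping update
# is pushed onto a history stack so it can be reverted individually.

class MappingChangeLog:
    def __init__(self, mapping: dict):
        self.mapping = mapping
        self._history = []  # stack of (name, previous_value) pairs

    def apply(self, name, new_value):
        """Record the old value, then apply the change."""
        self._history.append((name, self.mapping.get(name)))
        self.mapping[name] = new_value

    def undo_last(self):
        """Revert the most recent change (restore or delete the entry)."""
        name, previous = self._history.pop()
        if previous is None:
            del self.mapping[name]  # entry did not exist before
        else:
            self.mapping[name] = previous

mapping = {"btnSave": {"WndCaption": "Save"}}
log = MappingChangeLog(mapping)
log.apply("btnSave", {"WndCaption": "Save As"})  # tool proposes an update
log.undo_last()                                  # tester rejects it
```

After `undo_last()`, the mapping is back to its original state, which is the safety net the discussion asks for.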
How would it do a "scan" without doing a run? Many objects are only loaded and/or triggered as a result of other actions (which happen during a test). So you pretty much have to do a test run in order to do the "scan" anyway?
You are right; it should do a "validation" or a "pre-run scan", whatever you want to call it. At first I thought it would be implemented in the current Run command, but my thinking has evolved and I now believe it is better to implement a separate "scan" function (which in fact works somewhat like a "run", if you look at it that way).
I just don't see how you do a scan without a run?
I agree with you that it can't be scanned without running the test scripts.