Implement scan option (or update namemapping) during test run


As stated in this feature request, we work agile and the tester gets new application versions (almost) daily.

The tester does not know which objects have been changed or deleted, nor which controls (objects) are new.

 

This feature request is to enhance TestComplete with a 'scan' (or 'update namemapping') feature that scans the updated application so that the namemapping is brought in line with it:

new objects are placed in the namemapping;

changes done in existing objects get updated in namemapping;

deleted objects are removed from namemapping.
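The three cases above amount to diffing two snapshots of the mapped objects. A minimal sketch of that classification step, in plain Python; the aliases and identification properties are purely illustrative, and this does not use TestComplete's actual NameMapping API:

```python
# Hypothetical sketch: classify differences between two name-mapping
# snapshots. Each snapshot maps an object alias to its identification
# properties (all names and values here are made up for illustration).

def diff_mapping(old, new):
    """Return (added, changed, removed) aliases between two snapshots."""
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    changed = sorted(a for a in set(old) & set(new) if old[a] != new[a])
    return added, changed, removed

old = {
    "btnSubmit": {"WndClass": "Button", "WndCaption": "Submit"},
    "edtBedrag": {"WndClass": "Edit", "WndCaption": ""},
}
new = {
    "btnSubmit": {"WndClass": "Button", "WndCaption": "Send"},  # caption changed
    "edtName":   {"WndClass": "Edit", "WndCaption": ""},        # new field
}

added, changed, removed = diff_mapping(old, new)
print(added)    # ['edtName']
print(changed)  # ['btnSubmit']
print(removed)  # ['edtBedrag']
```

A real implementation would of course have to match objects by their identification properties rather than by alias alone, but the added/changed/removed split is the core of the requested scan.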

 

After that, the tester runs the script and it passes with no errors (because all changes are reflected in the namemapping prior to the test run). Also (and this would really be a dream come true), TestComplete could mark the new but not-yet-tested controls (in the Visualizer images, for example), so that the tester can update the test scripts and new objects get tested as well. This is similar to the current "Map object from screen" function, with the difference that it scans the whole application (or a single screen) and updates the Name Mapping repository automatically as well.

 

I realize this is a rather big feature request, but I think it would allow SmartBear to become a real market leader in test automation!

 

19 Comments
Staff
Status changed to: Accepted for Discussion
 
Staff

Hi Mathijs,

 

I really like this idea, and we have been pondering how to implement this mechanism correctly.

 

How would TC know if an object is deleted only from one application screen and not from the entire application, for example?

Super Contributor

@MKozinets

 

Good question, Masha, thank you!

I think the tester has to have the final responsibility; TC would only suggest updating the namemapping, based on the differences it finds during the test run.

 

One way to display the differences would be to mark the places where TC expects to find a control that has been deleted. See the example here: version 1 contains an input field "Bedrag"; version 2 (which, when using continuous integration, could be released as soon as the next day) no longer does.

 

[Screenshot attachment: 2016-06-13_16-57-33.png]

 

 

During test playback, TC could compare both screens and link the information to the namemapping prior to the test run (sort of an internal pre-run validation). When it sees differences, it should make them visible (with yellow marking, for example), and TC would suggest deleting (or disabling) the specific input field from both the Namemapping AND the test scripts (both keyword tests and scripts).

 

Of course, this is the ideal situation for me; it should be discussed and evaluated by other TC users as well.

 

 

Occasional Contributor

Hello, I think this is not something to do during test runs: what would you do with results that differ from one build to another? Does a missing object raise an error or not? You can't trust your test results this way... and if a missing object doesn't raise an error during the run, you may have a lot of false positives. However, this could be useful as an analysis tool to run before the tests.

Super Contributor

@Montikore As discussed here, it all depends on the chosen approach to software development and releasing updates.

When using CI/CD (Continuous Integration / Continuous Delivery), the focus of test automation is on getting the bugs out of the software, driven by fast feedback from the test runs. Comparison to previous builds is not done; there is no time for that.

 

You can still trust your test results! But you have to keep in mind that all testing is done on the latest release and your findings have a limited lifespan. The next day new software is delivered and you have to re-execute all the tests.

That's why you need to automate as much as possible (to keep up the speed), but use extensive scripting as little as possible.

 

Occasional Contributor
So TC should know when a missing element is a bug and when it's not? It should adapt to new developments, when the tester himself doesn't know what's new? And then you have to trust the result? From my point of view, what you want is impossible... it seems utopian.
Super Contributor

@Montikore you're way too deep in the technical implementation.

Never ever is a tool going to decide whether something is a bug. That's always up to a human being to interpret.

TC should only recognize differences between the latest deployment and the latest deployment -1 (the second-to-last deployment).

Specifically it should be able to:

- recognize deleted objects in the application: objects used in the second-to-last test run but removed from the latest version. It does not make sense to run tests on these, since they would fail (but that's logical, because the application has evolved);

- recognize differences on the UI by marking them, so that the tester can do his/her work on the test automation framework. TC should be able to do that, since similar logic is already built into it (comparing screenshots). That would be ideal for assisting the tester in recognizing the differences between releases and keeping the test framework up to date with the developers' latest releases!
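The first capability is essentially a pre-run check: before playback, compare the aliases the tests reference against the latest application snapshot and flag the steps whose objects have disappeared, instead of letting them fail mid-run. A small sketch of that idea in plain Python; the step structure and alias names are hypothetical, not TestComplete's API:

```python
# Hypothetical pre-run validation: split test steps into ones whose target
# object still exists in the latest build and ones whose target is gone.
# All names here are made up for illustration.

def prevalidate(steps, current_aliases):
    """Return (runnable, flagged) lists of steps for the latest snapshot."""
    runnable, flagged = [], []
    for step in steps:
        (runnable if step["alias"] in current_aliases else flagged).append(step)
    return runnable, flagged

steps = [
    {"name": "enter amount", "alias": "edtBedrag"},
    {"name": "submit form",  "alias": "btnSubmit"},
]
current = {"btnSubmit"}  # 'edtBedrag' was removed in the latest build

runnable, flagged = prevalidate(steps, current)
print([s["name"] for s in runnable])  # ['submit form']
print([s["name"] for s in flagged])   # ['enter amount']
```

The flagged steps would then be reported to the tester as "object removed since the previous build" rather than as plain test failures, leaving the decision to a human.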

 

See this feature request also.

 

Occasional Contributor
I would agree with you if it were something to do prior to the test campaign. I still don't understand what you can do with such test results: what happens when objects are missing? You say it does not make sense to run a test on a deleted object, as it would fail. So I assume, in that case, you want the test to pass. What if the object was deleted involuntarily? Then you have a false positive, which is the worst thing...
Super Contributor

@Montikore If the object is missing, then I do NOT want the test to pass. But I want TC to give me a signal that the application has changed / the specific object has been deleted (or moved somewhere else). In the latter case, I do not expect TC to give me the new location (since the object is placed in a new position, it is unknown territory), although if that intelligence were there, the tool would definitely become the market leader. But I don't think we can expect tooling to recognize that within the first couple of years.

 

TestComplete (and other tools as well) should become more intelligent in handling application changes. I'm not saying they should report changes as bugs, but changes should be recognized (and interpreted to some extent) and intelligently managed by the tool. In CI environments you (as the tester) don't have time to inspect whole namemapping models for changes manually! By creating a snapshot of every new delivery, TC should be able to "tell" the differences, just in an informative way. Then the tester decides (per change) whether it requires updating the scripts and/or namemapping, or needs further (manual) investigation (which could eventually lead to a bug report).

 

As you can read in the feature requests I have created, I want TC to become more intelligent in handling changes. Of course, it is not only the tool itself that handles application changes; the framework as designed by the tester should also be capable of handling changes, but that is a task for the intelligence of the tester (his/her skills). Those two together would prevent reporting false positives as much as possible.

 

Basically, on a high level, I am saying this: TC (and other tools as well) simply does not have the capabilities to recognize and interpret application changes. These changes are becoming much more relevant in CI/CD environments. I am looking for intelligence in the tooling; the interpretation can be done by comparing screenshots and models (the namemapping, in the case of TC). Since TC already holds all of this data, it is potentially very usable if intelligence is built into it!

 

Occasional Contributor
I'm pretty OK with what you say, but still not with the "on run" analysis... If I understand your process correctly, you have a first run which highlights the changes (with all corresponding tests failed, as they didn't run), then you update your tests and run them again. For me, that first run should not happen (its results are not trustworthy); there should be an analysis step instead! From a very logical point of view, the two approaches are quite the same, except that you don't have a useless run.