Forum Discussion
scottb
12 years ago · Contributor
I don't know if I can answer your questions directly. I'm fairly new with TestComplete, but I can give you a quick user story. I would appreciate any feedback on flaws with this approach as I am developing this now.
I sort of inherited 2,000 KDT scripts recently. It turns out that the scripts did not do much verification. They just walked through the application without really checking anything. I was tasked with turning these lemons into lemonade.
Rewriting the KDT scripts as textual scripts was out of the question. There was way too much to translate. Plus the testers who wrote and maintain the scripts are not very technical, so they would have problems maintaining textual scripts.
One of the first things I learned about TC's KDT language is that it is too bulky and awkward for heavy coding. It does not play well with complex things like database access. Creating individual form field verifications is painfully slow.
The approach I am taking is to wrap each KDT script in a common textual script function that does a standard test setup, runs the KDT script using KeywordTests.ScriptName.Run(), and does a standard teardown.
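A minimal sketch of what that wrapper might look like, assuming TestComplete's Python scripting is in use. KeywordTests is the TestComplete runtime object mentioned above; the connection arguments and the helper functions (note_last_transaction_key, diff_transactions, update_expected) are hypothetical names for the setup/teardown work described in the next paragraphs:

```python
# Hypothetical wrapper: standard setup, run the recorded KDT script, standard teardown.
def run_kdt_with_verification(kdt_test, sut_conn, master_conn, test_name,
                              do_diff=True, update_expected_data=False):
    start_key = note_last_transaction_key(sut_conn)      # setup: note the last transaction key
    kdt_test.Run()                                        # e.g. KeywordTests.AddCustomer.Run()
    if update_expected_data:
        update_expected(sut_conn, master_conn, test_name, start_key)    # replace expected with actual
    elif do_diff:
        diff_transactions(sut_conn, master_conn, test_name, start_key)  # compare actual vs expected
```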
Fortunately, my SUT records all of its important database updates in a transaction table. The transaction table has an autonumber key. After every user operation that does an add, change or delete to the DB, records are added to the transaction table, and they are added in a consistent order.
For the standard test setup I note the key number of the last record. I run the KDT script. Then I diff the "actual" records in the transaction table with a set of "expected" records from a master DB. I give the wrapper script a flag parameter so that diffing can be disabled. I also give it an "update" flag parameter that causes the "expected" values to be replaced with the generated "actual" transactions in the master DB.
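For illustration, the setup/teardown helpers might look roughly like this. The table and column names (QA_TRANSACTIONS, TRANS_ID, TRANS_DATA, EXPECTED_TRANS) are made up, pyodbc stands in for whatever database access the project actually uses, and Log is TestComplete's logging object:

```python
import pyodbc  # stand-in for the project's real DB access layer

TRANS_TABLE = "QA_TRANSACTIONS"   # hypothetical names throughout

def note_last_transaction_key(sut_conn):
    # Setup: remember the highest autonumber key before the KDT script runs.
    row = sut_conn.execute("SELECT MAX(TRANS_ID) FROM " + TRANS_TABLE).fetchone()
    return row[0] or 0

def fetch_actuals(sut_conn, start_key):
    # Everything the KDT script just wrote, in the order it was written.
    rows = sut_conn.execute(
        "SELECT TRANS_DATA FROM " + TRANS_TABLE +
        " WHERE TRANS_ID > ? ORDER BY TRANS_ID", start_key).fetchall()
    return [r[0] for r in rows]

def diff_transactions(sut_conn, master_conn, test_name, start_key):
    # Teardown: compare the generated "actual" rows against the "expected" set
    # stored for this test in the master DB.
    actual = fetch_actuals(sut_conn, start_key)
    expected = [r[0] for r in master_conn.execute(
        "SELECT TRANS_DATA FROM EXPECTED_TRANS WHERE TEST_NAME = ? ORDER BY SEQ",
        test_name).fetchall()]
    if actual != expected:
        Log.Error("Transaction diff failed for " + test_name)   # TestComplete Log object

def update_expected(sut_conn, master_conn, test_name, start_key):
    # "Update" mode: replace the expected rows with the newly generated actuals.
    master_conn.execute("DELETE FROM EXPECTED_TRANS WHERE TEST_NAME = ?", test_name)
    for seq, data in enumerate(fetch_actuals(sut_conn, start_key)):
        master_conn.execute(
            "INSERT INTO EXPECTED_TRANS (TEST_NAME, SEQ, TRANS_DATA) VALUES (?, ?, ?)",
            test_name, seq, data)
    master_conn.commit()
```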
I will be running the scripts on a set of static DBs running with the SUT operating at a given point in time. When running with the static DBs I will be applying the diff to do my data verification. After a run is completed I can run it a second time with the "update" flag turned on which will cause it to update the "expected" data with some or all of the "actual" data.
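In workflow terms, a static-DB run is then just two calls to the wrapper sketched above (KeywordTests.AddCustomer and the connection variables are hypothetical):

```python
# First pass against the static DBs: run the KDT script and diff the results.
run_kdt_with_verification(KeywordTests.AddCustomer, sut_conn, master_conn, "AddCustomer")

# If the differences turn out to be intentional, a second pass with the update
# flag replaces the expected transactions with the newly generated actuals.
run_kdt_with_verification(KeywordTests.AddCustomer, sut_conn, master_conn, "AddCustomer",
                          do_diff=False, update_expected_data=True)
```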
Because I can diff like this, I can generate a lot of data verification quickly and easily. I will be maintaining the KDT scripts as they are, with no verification steps in them. Most KDT script maintenance will be updates to the name mappings. When the scripts get too out of date the testers will do a quick re-record of some or all of a failing script. KDT scripts are designed to be quick and easy to generate by less technical testers. Where heavy lifting is needed in a KDT script, I am writing textual scripts to handle it. By taking this approach I can do data verification fast and cheap using textual scripts, and I can do KDT script maintenance fast and cheap because I keep as much complex stuff out of the KDT scripts as possible.
I will also be running the scripts on random DBs with different data. When running with the random DB I will not apply the diff and I will not replace the "expected" with "actuals." The KDT scripts will just do a walkthrough of the SUT.
To make the KDT scripts cheap and easy to maintain, the only hand coding I expect to see in the KDT scripts is operations to keep the script from crashing. I'll let the diff determine if the data is correct. I'll let the successful completion of the KDT script determine if the GUI is operating as expected. I could add more detailed testing to the process, but how cost effective is it to add more testing than that to automated regression testing? How likely is it that a label will be dropped as a regression? Regression testing should not be turning up a lot of bugs. When regression testing does find a bug, it tends to be blatant. Checking the data and walking through the GUI should suffice.
I plan on having the testers do periodic hand testing as a supplement to look for regressions not found in the automatic GUI walkthrough. I think the cost of doing that will be less than the cost of creating and maintaining automated checks of everything in the GUI.
If I were working with a DB that did not have some kind of built-in transaction recording, I would consider writing a SQL script that generated post-add-update-delete triggers for all of the important tables. I would have it append all of the fields, probably pipe or tilde delimited, into a single string, and I would have it save a key containing the table name and operation (add, update, or delete). I would have it store that info to a QA transaction table with an autonumber key. I would use that to diff with. Some databases have built-in functionality for recording transactions for things like master-slave syncing. Those would be candidates for quick and easy diffing too.
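As a sketch of that idea, a generator could emit one trigger per table along these lines; the SQLite-style trigger syntax, the column flattening, and the QA_TRANSACTIONS layout are all assumptions that would need adjusting for the actual DBMS:

```python
def make_insert_trigger_sql(table, columns):
    # Build DDL for a hypothetical post-insert trigger (SQLite-style syntax) that
    # flattens the new row into one pipe-delimited string and appends it, with a
    # "table|operation" key, to a QA transaction table with an autonumber key.
    flattened = " || '|' || ".join(
        "COALESCE(CAST(NEW.%s AS TEXT), '')" % c for c in columns)
    return (
        "CREATE TRIGGER qa_trg_{t}_ins AFTER INSERT ON {t}\n"
        "FOR EACH ROW BEGIN\n"
        "  INSERT INTO QA_TRANSACTIONS (TRANS_KEY, TRANS_DATA)\n"
        "  VALUES ('{t}|INSERT', {cols});\n"
        "END;"
    ).format(t=table, cols=flattened)

# Example: print(make_insert_trigger_sql("CUSTOMER", ["CUST_ID", "NAME", "BALANCE"]))
# Update and delete triggers would follow the same pattern using the NEW/OLD row values.
```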
I am also working on ways to get my KDT scripts to work using alt and control keys instead of actual objects. A lot of my application's forms use alt keys to navigate to fields on the forms, and alt and control keys to navigate through the GUI. If I can take advantage of that, I can get rid of a lot of the frequent script failures caused by changes in the form objects. For this I am trying out GUI mapping to see what might work. I don't want to use full-on GUI mapping if I can help it because of the amount of coding and code maintenance involved.
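For what it's worth, this is the kind of keyboard-driven step I have in mind. The process and window names are made up, and the Keys strings follow TestComplete's conventions, where "~" means Alt and "^" means Ctrl:

```python
def open_customer_form():
    # Hypothetical navigation helper: drive the form by keystrokes rather than by
    # clicking mapped controls, so changes to the form objects break fewer scripts.
    main = Sys.Process("MySUT").Window("MainFrame")   # made-up process/window names
    main.Keys("~f")        # Alt+F opens the File menu
    main.Keys("c")         # "C" picks the Customers item
    main.Keys("^s")        # Ctrl+S saves inside the form
    main.Keys("[Esc]")     # Esc closes the form
```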
HTH
Any feedback would be appreciated.