Forum Discussion

sbeach's avatar
sbeach
Contributor
14 years ago

Quick, quick, quick, then move on: Your strategy for fast automation

Hey there.

So here's the deal. 3 Developers, 1 QA manager/geek (me) and no time to automate.

Of course I can simply use the record-and-playback method, but I find myself spending a lot of time afterwards 'fixing' the code.



Our UI changes frequently and that makes automation a challenge.

I'd be interested in hearing your 'horror' stories and the ways you use TestComplete to automate your quickly changing software.



Thanks

Steph
  • tristaanogre's avatar
    tristaanogre
    Esteemed Contributor
    The horror story:  A web application with web pages and components that are frequently changing in hierarchy, location, and class/idstr names.



    The solution: NameMapping with Extended Find (to solve hierarchy issues), conditional mode (to trap situations where the same component needs to be used on two different pages) and wild-cards, for those places where names are dynamic.



    For you, with a rapidly changing UI, DEFINITELY get intimate with NameMapping, Aliasing, etc.  That way, if you use Aliases, you won't have to change any of your test code/keyword tests; you'll only need to update the underlying NameMapping scheme.  And if you're using those three features I named, you may not even have to do that very often.



    And by get intimate with it, I mean don't simply record and playback.  You can record to generate your name mapping, but I've found it a lot more reliable to spend some time up front manually building a name-mapping scheme for all the components the tests will interact with, and THEN build my tests.  Of course, I can't do ALL of it up front, but if I spend time with it, I can get the majority of it in.
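To show the aliasing idea in a language-neutral way, here is a minimal Python sketch (this is NOT TestComplete code; the mapping table, alias names, and object IDs are all invented for illustration). Tests refer to stable aliases, and one mapping layer resolves each alias to a wildcard pattern that tolerates dynamic names:

```python
# Sketch of the alias + wildcard idea (plain Python, not TestComplete).
from fnmatch import fnmatch

# Hypothetical mapping: stable alias -> wildcard pattern for dynamic IDs.
NAME_MAPPING = {
    "loginButton": "btnLogin*",   # tolerates suffixes like "_ctl042"
    "userNameBox": "txtUser*",
}

def find_by_alias(alias, objects_on_page):
    """Return the first object ID matching the alias's wildcard pattern."""
    pattern = NAME_MAPPING[alias]
    for obj_id in objects_on_page:
        if fnmatch(obj_id, pattern):
            return obj_id
    return None

# When the UI renames "btnLogin_ctl042" to "btnLogin_ctl099", the test
# code keeps using the alias; at most the mapping pattern changes.
page = ["txtUser_1", "btnLogin_ctl042", "lnkHelp"]
print(find_by_alias("loginButton", page))  # -> btnLogin_ctl042
```

The point is the indirection: test code never touches raw, volatile IDs, so a UI change is absorbed by the mapping layer alone.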



    As for the "move on": NameMapping schemes are re-usable in that you can share them between testing projects, so you'd only need to do the mapping once and then use that scheme in all your projects.



    Now, if you REALLY want to get technical, get away from procedural scripts/tests, where each script function or keyword test is one scenario, and start building a framework for data-driven testing.  The end result would be a framework of code units, objects, and case/switch statements that relies on a feed of data from CSV, Excel, or similar data tables to actually run your tests.  Then, if you need to add a new test case, all you do is insert records into your data and BAM, new test case.  That takes away most of the pain of writing new test-case code and puts the ability to write a lot of test scenarios into the hands of folks who only need to know how to update a text file in Notepad.



    Just my $0.02. :-)
  • WOW! That's worth more than 2 cents. I owe you a beer.

    Thank you!
  • tristaanogre's avatar
    tristaanogre
    Esteemed Contributor
    Glad to be of help, Steph!



    As for the beer, make mine "root"... I'm driving. :-)
  • Just wondering:

    With DDT you can easily add more test cases, but they'd all be variations of the same "base" test. It doesn't really help when you need to test a different area, etc., right?
  • Hi Ory,



    Yes, that's right. DDT is needed to supply different data to the same test, not to create tests for different areas of your application.
  • tristaanogre's avatar
    tristaanogre
    Esteemed Contributor
    Actually, I've constructed a framework that uses a combination of DDT and ODT to create different tests for different areas of the software using a minimum of 3 CSV files.  I'm testing web pages, SQL stored procedures, and other areas of our system with a simple set of CSVs.  If I want to add a new test, I add new records to a CSV with values that indicate what area is being tested and what supporting data is needed.  Each test case consists of several rows in a CSV describing the steps.  Each row represents a step and generates a different class of ODT object.  I then use the ODT method of executing tests (calling a "run" method on each object) to walk through those objects and execute the specific step each object represents.
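The row-to-object-to-run() pattern described above can be sketched in plain Python (this stands in for TestComplete's ODT feature; the step classes and column names are invented for illustration, not taken from the actual framework):

```python
# Sketch: each CSV row becomes a step object; a runner calls run() on each.
import csv
import io

class NavigateStep:
    def __init__(self, target):
        self.target = target
    def run(self):
        return f"navigate to {self.target}"

class VerifyStep:
    def __init__(self, target):
        self.target = target
    def run(self):
        return f"verify {self.target} is visible"

# One class per step keyword -- the row's "step" column picks the class.
STEP_CLASSES = {"navigate": NavigateStep, "verify": VerifyStep}

def build_steps(csv_text):
    """Turn each CSV row into a step object of the matching class."""
    return [STEP_CLASSES[row["step"]](row["target"])
            for row in csv.DictReader(io.StringIO(csv_text))]

def execute(steps):
    """Walk the step objects and call run() on each, like ODT's runner."""
    return [step.run() for step in steps]

data = "step,target\nnavigate,LoginPage\nverify,WelcomeBanner\n"
print(execute(build_steps(data)))
```

A new kind of step means one new class and one new dispatch entry; a new test case is just more rows in the CSV.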