Hi fellow Testers, QA Engineers, Test Automation Engineers, Technical Test Specialists, ....
In this article, a how-to on designing modern, sophisticated, high-quality, mature, robust, and flexible test automation frameworks is explained.
It gives the reader insight, how-to's, and tips on avoiding pitfalls when designing, building, implementing, and maintaining your TAF.
Please share your thoughts, your feedback, your ideas, your (current) experiences, your expectations.... Let's start discussing! 🙂
I love these two-page articles that make it all sound so simple!
In reality, it seldom is.
The first thing I don't like is their POM. A class per page feels like it would get VERY big, VERY quickly.
Personally, I use locators ("where" is a control) and handlers ("do stuff" to a control - I have one or more functions per control. One would be ideal, but some controls are complex and require additional functions). These form the core of most of the input required for my tests. And they are completely application agnostic and portable.
So you use a locator to "point" to a control in the driver data. Then tell it the control type it should find there, and what it should do with it. (Along with any required input data and expected results, of course.) If I had to create a new class for every page/panel, I'd be here until doomsday! My locator and handler approach seems to be holding up well and avoids duplication very well.
Of course, I do have other functions. For common operations, validations, some application specific, some "shortcut" ones which are there simply to make the input a little less long winded in certain cases. But the core is definitely locators and handlers. And a good, well prepared, robust object map!
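The locator/handler split described above can be sketched roughly like this. This is a minimal, hypothetical illustration (all names here are invented, not from the poster's actual framework): locators say *where* a control lives, handlers say *what* to do with a given control type, and neither knows anything about specific pages, so there is no class-per-page.

```python
# Hypothetical sketch of a locator/handler core (names invented).
# Locators are pure data (the "object map"); handlers are per
# control TYPE, not per page.

LOCATORS = {
    # logical name -> (strategy, value)
    "login.username": ("id", "txtUserName"),
    "login.submit": ("css", "button[type=submit]"),
}

def handle_textbox(element, action, data=None):
    """Handler for plain text boxes."""
    if action == "enter":
        element.clear()
        element.send_keys(data)
    elif action == "read":
        return element.get_attribute("value")

def handle_button(element, action, data=None):
    """Handler for buttons."""
    if action == "click":
        element.click()

HANDLERS = {"textbox": handle_textbox, "button": handle_button}

def do(driver, name, control_type, action, data=None):
    """Resolve a logical name to an element via the object map,
    then dispatch to the handler registered for its control type."""
    strategy, value = LOCATORS[name]
    element = driver.find_element(strategy, value)
    return HANDLERS[control_type](element, action, data)
```

A test step then becomes a single data-driven call such as `do(driver, "login.username", "textbox", "enter", "myUserName")`, with no page class anywhere.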
If the AUT is relatively small, yes, this works well. But the more complicated the AUT, as @Colin_McCrae mentions, the more burdensome creating and maintaining a POM becomes. I prefer the classes in my framework to be based not on the page, but on the specific action I want to execute. The framework I started some time ago works with this structure, but it's flexible enough to be based either off a POM or off something more low-level like Colin has designed.
One thing that I do agree with is that any good, agile-friendly framework should move away from each test being its own code routine/keyword test and toward something a lot more modularized in the code structure. Break a test case down into component steps (as much breakdown as you want), write reusable code for each of those component steps that can be re-purposed in multiple test cases, and drive that code from some sort of external data source. This abstracts the tests from the code: the tests live in the data, and the code simply uses the data to determine how the test is run.
Now... this seems simple, but it isn't, because it requires a completely different way of looking at test automation than what is normally deemed "acceptable". Test automation IS development in this paradigm because the majority of the test automation work is done via code, leaving the creation of the test cases themselves open to anyone, regardless of technical background. A non-tester, given sufficient documentation, could add a test case simply by adding more rows to the data source. And, if the framework is written well enough, adding those rows could even be pretty plain-language. "Enter myUserName in textUserName on pageLogin" can be parsed out into the necessary parts to do exactly what it says.
Yep. My automation framework pretty much fits in with @tristaanogre's final paragraph.
Once I develop all the handlers, get the name map in place (and robust) and add on any extra "stuff" that falls outwith the big two, any tester should be able to add new tests (well, test steps really ..... many steps comprise a test) to the input sheet. And it accepts as many input sheets as you want. Which can be added and removed at will. And steps within each sheet can also be switched on and off at will.
Takes a lot more setup work to abstract the code almost completely from tests. But worth it in the long run. And I would put it down more as a development task than a testing one.
Hence my job title of "Non-Functional Test Engineer". (Although automated tests do, of course, carry out functional tests, the work to actually build all this is outwith the test job remit. And I also do performance and load testing. And I'm hoping to add penetration testing, at least to some level, to my armoury at some point as well ...)
Excellent discussion here gentlemen, thanks for posting on this @mgroen2.
@tristaanogre we have pseudo-talked (email) about this a bit in the past and what you shared with me has been most helpful.
So thanks again for building the test community.
What existing open source framework options did you guys review/trial before deciding to write your own?
Did anything out there show promise for you?
The baseline question, I guess, is: why would you ultimately write your own framework?
As Colin said, writing one is a development effort, and no small one either.
My team is evaluating how to approach this right now, so I am interested in how you guys approached this and how it has worked for you.
Hey, @dbattaglia... didn't make the connection before. 🙂
With regards to open-source framework... there really doesn't exist such a thing for TestComplete. So, for me, it was a matter of needing to write something for the tool I was using. I modeled it after a couple of ideas that were fluttering around on the internet... I think @Colin_McCrae might have read some of the same stuff. 😉
I mean, Selenium models exist that mirror somewhat what I've done. I've seen similar constructions written in Java (robot was one that was used at a job I interviewed at). So... I kind of borrowed a bit from here and there... the latest iteration that uses "classFromName" of my framework came out of a question in a job interview about utilizing data reflection and so on...
My current position is using a different framework structure than what I wrote. It's still VERY much sourced in data but with a different model of operation based upon corporate requirements. So, my best advice is to spend the time figuring out:
1) Skill sets of the people involved
2) Resources available (database servers, VM's etc)
3) General architecture of the product under test (some of this might dictate exactly how you build the framework)
And then build from there. My best hint is what was described above: abstract the information needed to execute the test from the code that does the actual execution, so that adding new tests consists mostly of adding new data and only minimally adding new code. This will make for a more sustainable test framework, and one that will grow more easily.
As @tristaanogre mentioned, nothing really out there for TestComplete in terms of off the shelf stuff.
Personally, I wrote mine based on past experience with QTPro. Wanted something that is easy for a tester to populate without having to go near any scripts, so Excel is ideal. And I prefer to have full control of exactly how my reporting is done, hence using my own results and log files. The built-in ones are not bad. But mine are more flexible.
And yeah, it is a development hit to get it up and running. But if you get it in place and solid early, it shouldn't require much maintenance or extension going forward. I had to add in TFS functionality, but that's totally separate from everything else in the framework, so it was easy to add without disrupting anything. I wrote mine about 3 years ago and haven't changed much since I got it how I wanted it.
@Colin_McCrae can you tell me a bit more about your reporting approach?
What does your custom reporting/logging provide that the TestComplete logging doesn't?
What format are you writing to?