Forum Discussion
I find the same... automation's best ROI is twofold, IMO
1) Regression. You've already manually tested it once when it was developed. If you have to re-do, manually, every test you've ever done, you'll NEVER get anything new tested because each sprint/iteration you'll have to test more and more and more. Adding functional regression automation to the development process is a HUGE help as it frees up the testers from having to do the manual regression work. Have the regression tests running constantly... at least nightly... so that, as SOON as something breaks, you'll have a notification and it can be addressed immediately.

Be smart about your regression tests, also. Sometimes something done manually never needs to be repeated. Or you are testing a module that is reused multiple times in the application: if you've fully tested it in one place, you don't need to fully test it in another. A simple "does it work" will suffice for each subsequent instance of that module. And so on.

Also, there are a LOT of tests done in manual testing that are "edge" cases... things that a customer MIGHT run into. Important test cases, for sure (I use the rule, "If I can come up with the scenario, odds are the customer will do the same thing"), but regression is about risk. If it's a relatively low-risk scenario and you have other high-risk things still to do, put it in a backlog and get to it "when you have the time."
2) Need for high accuracy and repeatability. While this plays well into regression, one of the things that I ran into in the past was an intensely complicated workflow/state engine built into the application. There was a need for high accuracy and rapid repeatability of the test. If I found a bug, I would need to be able to rerun the test as many times as I needed with the same level of accuracy. In my example, I wrote an automated test that did this for me that I ran on demand. While it took some time investment up front to do so, it saved me a LOT of time each time I needed to run the test. The automation was later integrated into the regression suite but, as an assistance to my manual testing, it was invaluable.
There was an article I read a while back that addressed the concept of "artifacts" created during the testing process. This includes documented test plans, configuration scripts, and automated test code and scenarios. Creating an automated test creates an artifact that needs to be stored, maintained, and referenced. Does the overall process benefit from having such an artifact created and is there an ROI for the continued maintenance of the artifact? If the answer to both those is yes, then the artifact should be created. If the answer to either one is no, then don't bother.
tristaanogre, thanks for your contribution.
I agree with the context: regression testing is a logical potential candidate for test automation.
I also agree that whether an automated regression test suite can be called an artifact depends on the context you are working in.
One extra thing I'd like to mention: enhance your automated regression test suite into a data-driven test suite. Keep the test data in external sources (Excel, SQL). Why? This way you can execute regression testing with unique and manageable test data, and easily extend your framework with new test data. The big win here is that focusing on your test data on a frequent basis keeps you focused and feeds your creativity to come up with "odd test data". A great tool for this, by the way, is the TestComplete TestData Generator.
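To illustrate the idea, here's a minimal sketch of such a data-driven loop in plain Python. Everything in it is made up for the example: the spreadsheet name, its columns, and check_login stand in for your real data source and system under test; openpyxl stands in for however your framework reads Excel.

```python
# Minimal data-driven test loop: the test logic is generic, the cases
# live in an Excel sheet. Assumes login_cases.xlsx has a header row
# followed by columns: username, password, expected.
from openpyxl import load_workbook

def check_login(username, password):
    # Placeholder for the real action against the application under test.
    return "ok" if username and password else "error"

def run_suite(path="login_cases.xlsx"):
    sheet = load_workbook(path, read_only=True).active
    failures = []
    for username, password, expected in sheet.iter_rows(min_row=2, values_only=True):
        actual = check_login(username, password)
        if actual != expected:
            failures.append((username, password, expected, actual))
    for case in failures:
        print("FAIL: user=%r pwd=%r expected=%r got=%r" % case)
    return not failures

if __name__ == "__main__":
    run_suite()
```

Adding a new test case is then just adding a row to the sheet; no code changes needed.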
- Colin_McCrae · 8 years ago · Community Hero
mgroen2 wrote: One extra thing I'd like to mention: enhance your automated regression test suite into a data-driven test suite. Keep the test data in external sources (Excel, SQL). Why? This way you can execute regression testing with unique and manageable test data, and easily extend your framework with new test data. The big win here is that focusing on your test data on a frequent basis keeps you focused and feeds your creativity to come up with "odd test data".
This.
Exactly this.
It's how mine is built. The code is pretty much entirely test agnostic. It is just a series of functions. There would be no tests without the input data. Which is on Excel sheets. Which anyone can populate. Flexibility is key in my opinion. (Along with un-crash-ability...)
We use "agile", although where I work, up to now that's been an excuse not to document anything. We're getting better at it now. But test automation is almost exclusively for regression testing. As most will know, it takes longer to do the initial prep and build of an automated test than it does a manual one (although not so much once you have a good keyword and data driven framework in place). So we tend to manual test new functionality, and then translate the suitable parts into automated tests which get added to the ever growing automated regression pack.
- Ryan_Moran · 8 years ago · Valued Contributor
It's also how I've built the framework for my current employer. The question I am asking is where the value is in additional cases.
Your quote just reiterates more cases and assumes there is value in more cases.
Effectively what I am getting at is that more is not always better.
Especially when each case effectively tests the same thing OR tests the same thing that unit testing has already tested.
There is then value in varying the steps used to test each data set, but if we simply automate those steps and always perform them one way, we ignore potential variation in how the user interacts with the application. We test something with X steps, the user tests with Y steps; the user's steps produce an error, and the automation fails to catch it because it only uses X steps. It's the same concept as throwing variations at the field values, but my point here is that DDT only focuses on one half of the equation.
Let's use the example of testing a vending machine. How would you test a vending machine?
The typical tester will say something like, "Use different types of coins, currency values, and selections of items."
They then build a test with steps to:
1. insert money
2. enter their selection on the vending console
3. retrieve the item from the machine
They would then add variations to step 1 through data sets and test.
The user then does something like:
1. enter their selection on the vending console
2. insert money
3. enter their selection on the vending console again
The machine then produces an error and they have lost their money.
So here again it's great that we may have covered so much of the first half of testing, but I do not see how automating anything can replace actual use and manual testing.
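To make the point concrete, here's a toy sketch: a vending machine model with a deliberately planted ordering bug. The fixed-order script passes right over it, while trying other step orders (as a real user might) catches it. The model and its bug are invented for illustration.

```python
from itertools import permutations

class VendingMachine:
    """Toy model with a deliberate bug: selecting again after money
    has been inserted errors out and swallows the coins."""
    def __init__(self):
        self.credit = 0
        self.selections = 0

    def insert_money(self, amount):
        self.credit += amount

    def select(self, item):
        self.selections += 1
        if self.selections > 1 and self.credit > 0:
            self.credit = 0           # money lost
            raise RuntimeError("machine error, coins swallowed")

steps = {
    "insert": lambda m: m.insert_money(100),
    "select": lambda m: m.select("A1"),
}

# The scripted order (insert, then select) passes...
m = VendingMachine()
steps["insert"](m); steps["select"](m)
print("scripted order: OK")

# ...but exploring other orders finds the bug.
for order in permutations(["select", "insert", "select"]):
    m = VendingMachine()
    try:
        for name in order:
            steps[name](m)
    except RuntimeError as e:
        print("order %s -> %s" % (order, e))
        break
```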
With that said what is the real value of automating test cases in a sprint?
Does it really take that long to identify critical cases and perform the steps a handful of times?
Are there really that many critical cases in your application that make it likely to fail based on variations in the cases?
- mgroen2 · 8 years ago · Super Contributor
Ryan_Moran wrote: [...] With that said, what is the real value of automating test cases in a sprint? Does it really take that long to identify critical cases and perform the steps a handful of times? Are there really that many critical cases in your application that make it likely to fail based on variations in the cases?
Automation cannot 100% replace actual use/manual testing. It just lets the tester run (some of) the tests unattended, and faster. Keep in mind that you can split your tests into a regression part and an "obscure" part. As far as test automation is concerned, focus on the positive (happy-flow) part of the regression testing. In your example: you can perfectly automate the first described test case and manually execute the exceptional cases (exploratory based)... until you have the time to extend your framework with those exceptional cases as well.
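A trivial sketch of that split, sticking with the vending machine example: tag each case, run only the happy-flow tags in the automated regression run, and leave "edge" tags for exploratory work until they get automated too. All tests here are empty placeholders.

```python
def buy_item_normally():        # happy flow: insert money, select, take item
    pass

def select_twice_then_pay():    # exceptional flow, not yet automated
    pass

SUITE = [
    ("happy", buy_item_normally),
    ("edge", select_twice_then_pay),
]

def run(kind):
    for tag, test in SUITE:
        if tag == kind:
            test()
            print("ran", test.__name__)

run("happy")   # e.g. the nightly automated regression run
```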
- mgroen2 · 8 years ago · Super Contributor
Colin_McCrae wrote: [...] So we tend to manual test new functionality, and then translate the suitable parts into automated tests which get added to the ever growing automated regression pack.
With regards to implementing test automation, that's a very good approach to take.
- tristaanogre · 8 years ago · Esteemed Contributor
Preaching to the choir on the whole DDT thing.