Forum Discussion
tristaanogre, thanks for your contribution.
I agree with the context: regression testing is a logical candidate for the automation of testing.
I also agree that whether an automated regression test suite can be called an artifact depends on the context you are working in.
One extra thing I'd like to mention: enhance your automated regression test suite into a data-driven test suite. Keep the test data in external sources (Excel, SQL). Why? This way you can execute regression testing with unique and manageable test data, and easily extend your framework with new test data. The big win is that revisiting your test data on a frequent basis keeps you focused and feeds your creativity to come up with "odd" test data. A great tool for this, by the way, is the TestComplete Test Data Generator.
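To make that concrete, here's a minimal sketch of such a data-driven loop, assuming a workbook named testdata.xlsx with a header row and two columns (the file name, column layout, and run_case stub are illustrative assumptions, not anything from this thread):

```python
# A minimal data-driven regression loop; the workbook name, column layout,
# and run_case stub are assumptions for illustration only.
import openpyxl

def run_case(input_value, expected):
    """Stub for the real steps driven against the application under test."""
    actual = input_value.strip().lower()  # stand-in for the real behaviour
    return actual == expected

def run_suite(path="testdata.xlsx"):
    sheet = openpyxl.load_workbook(path).active
    failures = []
    # Row 1 is headers; each following row is one test case anyone can add.
    for row_num, (input_value, expected) in enumerate(
            sheet.iter_rows(min_row=2, values_only=True), start=2):
        if not run_case(input_value, expected):
            failures.append(row_num)
    assert not failures, f"rows failed: {failures}"

if __name__ == "__main__":
    run_suite()
```

Adding a new regression case is then a matter of adding a row to the sheet, not touching the code.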
mgroen2 wrote:
One extra thing I'd like to mention: enhance your automated regression test suite into a data-driven test suite. Keep the test data in external sources (Excel, SQL). Why? This way you can execute regression testing with unique and manageable test data, and easily extend your framework with new test data. The big win is that revisiting your test data on a frequent basis keeps you focused and feeds your creativity to come up with "odd" test data.
This.
Exactly this.
It's how mine is built. The code is pretty much entirely test agnostic. It is just a series of functions. There would be no tests without the input data. Which is on Excel sheets. Which anyone can populate. Flexibility is key in my opinion. (Along with un-crash-ability...)
We use "agile", although where I work, up to now that's been an excuse not to document anything. We're getting better at it now. But test automation is almost exclusively for regression testing. As most will know, it takes longer to do the initial prep and build of an automated test than it does a manual one (although not so much once you have a good keyword and data driven framework in place). So we tend to manual test new functionality, and then translate the suitable parts into automated tests which get added to the ever growing automated regression pack.
- Ryan_Moran · 9 years ago · Valued Contributor
It's also how I've built the framework for my current employer. The question I am asking is where the value is in additional cases.
Your quote just reiterates more cases and assumes there is value in more cases.
Effectively what I am getting at is that more is not always better.
Especially when each case effectively tests the same thing OR tests the same thing that unit testing has already tested.
There is value in varying the steps used to test each data set, but if we simply automate those steps and always perform them one way, we ignore potential variation in how the user interacts with the application. We test something with X steps, the user tests with Y steps; the user's steps produce an error, while the automation fails to catch it because it only uses X steps. It's the same concept as throwing variations at the field values, but my point is that DDT only covers one half of the equation.
Let's use the example of testing a vending machine. How would you test a vending machine?
The typical tester will say something like "use different types of coins, currency values, and selections of items."
They then build a test with steps to:
1. insert money
2. enter their selection on the vending console
3. retrieve the item from the machine
They would then add variations to step 1 through data sets and test.
The user then does something like:
1. enter their selection on the vending console
2. insert money
3. enter their selection on the vending console again
The machine then produces an error and they have lost their money.
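To make that concrete, here's a toy model of the scenario (the VendingMachine class and its seeded defect are my own illustration, not real code from anyone's framework):

```python
# A toy vending machine with a seeded step-order defect, used to show why
# data variation alone never exercises the user's sequence of steps.
class VendingMachine:
    def __init__(self):
        self.credit = 0
        self.selection = None

    def insert_money(self, amount):
        self.credit += amount

    def select(self, item):
        # Seeded defect: re-selecting the same item after paying
        # swallows the credit and errors out.
        if self.selection == item and self.credit > 0:
            self.credit = 0
            raise RuntimeError("machine error: money lost")
        self.selection = item

    def retrieve(self):
        assert self.selection is not None and self.credit > 0
        self.credit = 0
        return self.selection

# Scripted order (X steps): insert, select, retrieve -> passes, so the
# automated pack stays green no matter how many data rows we feed it.
m = VendingMachine()
m.insert_money(100)
m.select("A1")
assert m.retrieve() == "A1"

# User's order (Y steps): select, pay, select again -> the defect fires.
m = VendingMachine()
m.select("A1")
m.insert_money(100)
try:
    m.select("A1")  # the re-selection no scripted X-step run ever tries
except RuntimeError as e:
    print(f"user-order sequence failed: {e}")
```

No amount of coin/item data rows fed through the first sequence would ever reach that second code path.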
So here again it's great that we may have covered so much of the first half of testing, but I do not see how automating anything can replace actual use and manual testing.
With that said, what is the real value of automating test cases in a sprint?
Does it really take that long to identify critical cases and perform the steps a handful of times?
Are there really that many critical cases in your application that make it likely to fail based on variations in the cases?
- mgroen2 · 9 years ago · Super Contributor
Ryan_Moran wrote:
[...] So here again it's great that we may have covered so much of the first half of testing, but I do not see how automating anything can replace actual use and manual testing.
With that said, what is the real value of automating test cases in a sprint?
Does it really take that long to identify critical cases and perform the steps a handful of times?
Are there really that many critical cases in your application that make it likely to fail based on variations in the cases?

Automation cannot 100% replace actual use/manual testing. Automation just enables the tester to run some of the tests unattended, and faster. Keep in mind that you should split your tests into a regression part and an "obscure" part. As far as test automation is concerned, focus on the positive testing (happy flow) part of the regression testing. In your example: you can perfectly automate the first described test case, and manually execute the exceptional cases (exploratory based)... until you have the time to extend your framework with these exceptional cases as well.
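One lightweight way to keep that split visible, assuming a pytest-style runner (the marker names below are my own, not from this thread):

```python
# A sketch of tagging the regression split with pytest markers; the names
# "happy_flow" and "exceptional" are illustrative assumptions.
import pytest

@pytest.mark.happy_flow
def test_insert_select_retrieve():
    """Positive path: automated first, runs unattended in regression."""
    # ... drive the application here ...

@pytest.mark.exceptional
def test_reselect_after_payment():
    """Odd path: executed manually/exploratory until automated later."""
    # ... placeholder until the framework covers this case ...
```

The unattended regression run is then pytest -m happy_flow, while the exceptional cases stay on the manual list until they are automated; registering the markers in pytest.ini keeps pytest from warning about unknown marks.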
- tristaanogre · 9 years ago · Esteemed Contributor
Value of automating test cases in a sprint? Twofold, depending...
1) Developing the test case in the sprint allows you to work with the application while the developers are working on the same stuff, keeping the tasks related in the sprint. Everything is fresh, everyone is working on the same thing, everyone is moving forward together.
2) It gets the test case developed to benefit the NEXT sprint, because the true ROI on an automated test comes when it is re-run in current sprint + 1 and following. For example, I create a test case or cases in sprint 1 that test feature A on widget X. It's automated in sprint 1 for the reason described above. It's RUN, however, in sprint 2 and following, because in sprint 2 they may want to add another feature to widget X that requires modifications to the implementation of feature A. By having the automation you wrote in sprint 1 running in sprint 2, you don't have to retest anything you did in sprint 1. You can focus, in sprint 2, only on the new things. In other words... regression.
There are instances, as I've described before, where automation actually benefits the CURRENT sprint by creating a tool with which the manual testers can complete tedious, detailed tasks with high reliability on repeated runs. But ultimately, your ROI is on the next sprint. So you want to have the automation developed NOW so you can actually reap the benefit in the next sprint. Waiting until the next sprint to develop is too late.
In your example, however, you do have more test cases... but all I see changing is data. The order in which the tasks are performed changes, and that is simply a re-ordering of the data. If my framework is built correctly, I can toss in another test case simply by adding data rows.
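As a sketch of that point (the action names and rows below are my own illustration), the step order itself can live in the data, so the user's select-pay-select sequence is just one more row:

```python
# Each "test case" is just a data row naming its steps in order, so a new
# step sequence is a new row, not new code.
def insert_money(ctx, amount): ctx["credit"] = ctx.get("credit", 0) + amount
def select_item(ctx, code):    ctx["selection"] = code
def retrieve_item(ctx, _):     assert ctx.get("credit", 0) > 0 and ctx.get("selection")

ACTIONS = {"insert": insert_money, "select": select_item, "retrieve": retrieve_item}

CASES = [  # rows as they might come out of an Excel sheet
    ("pay-then-select",   [("insert", 100), ("select", "A1"), ("retrieve", None)]),
    ("select-pay-select", [("select", "A1"), ("insert", 100), ("select", "A1")]),
]

for name, steps in CASES:
    ctx = {}
    for action, arg in steps:
        ACTIONS[action](ctx, arg)  # the data row names the action to run
    print(f"{name}: ran {len(steps)} steps")
```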
And yes, there are that many critical cases. If your application is VERY small, then you know what, you can probably run the regression manually all the time and never sweat. Kudos to you for finding such a cushy job.
However, applications grow in size and complexity. Each time through the SDLC you're adding more test cases, not all critical, but you WILL add more critical ones that should be regressed each time you release. The more regression test cases you need, the longer your regression takes. And especially with manual regression, more time means more man-hours, which equals money. If you can reduce the man-hours by automating your critical regression test cases early on, it will save you money in the long game.
THAT is why, even in your vending machine example, I'd automate those test cases... because while tomorrow it's no big deal to run two test cases again, next year those may be 2 cases in a suite of 200... and the time crunch is on. Easier to push a button with confidence that "Yup, already coded that" than to have to scramble to get everything done.
- mgroen2 · 9 years ago · Super Contributor
Colin_McCrae wrote:
[...] So we tend to manually test new functionality, and then translate the suitable parts into automated tests which get added to the ever-growing automated regression pack.
With regard to implementing test automation, that's a very good approach to take.