Forum Discussion

mgroen2
Super Contributor
8 years ago

Agile / Scrum / DevOps / Waterfall...: (Dis)likes, experiences, pitfalls, best practices!?

I would like to start an open discussion here about experiences you have with Agile / Scrum and its practices, like the standup meetings, the sprint reviews, the retrospectives, etc. And of course also in relation to test automation tools, including (but not limited to) TestComplete.

 

What are your experiences? What would you like to share? Positive things you have experienced, negative things, advantages/disadvantages, pitfalls, etc.

 

To keep it a little uniform, I suggest we stick to the following structure when sharing:

 

1. Years of experience in modern development methods like Agile/Scrum/DevOps:

2. Years of experience in classic development methods like Waterfall:

3. Role(s) in those methods:

4. What you want to share, containing:

a. Likes;

b. Dislikes;

c. Pitfalls;

d. Best practices (workarounds for the pitfalls);

e. Conclusion.

 

Thanks everybody!

 

24 Replies

  • Ryan_Moran
    Valued Contributor

    I am facing a similar issue in my current position: how to work test automation into an agile environment.

    The primary concerns I have are reusability and where to focus our efforts.

     

    For some time I have been automating our test cases just for the sake of getting them done and appeasing my coworkers, who are desperate to justify their necessity - but where does the meat go? I have yet to see any value in this type of automation. In fact, I find writing test cases to be a complete waste of time and added fluff to the QA process. Writing out scenarios and breaking down requirements into meaningful action items to test I have found to be pretty useful, but writing out all my steps for everything I test... waste of time.

     

    Just as I drive to work every day, I do not need to write down the steps and document every turn I take to arrive at my destination. I do it quite well on my own - naturally - because I have done it for some time now and I am familiar with the steps involved in starting my vehicle and arriving at work. Sure, I occasionally look for alternate routes - or maybe even drive somewhere besides work - but the maps do not apply to subsequent trips, and therefore I do not see a purpose in keeping them around.

    I know how to drive my car - I know how to use google maps - I'm prepared for whatever lies ahead.

     

    So automation can jump through a bunch of steps with different data sets, but so can I - and I learn more - catch more - do it faster without having it automated or wasting time on the fluffy redundant documentation of that process.

    So I have typically been leaning towards automation being valuable only in regression testing.

     

    What do you all think about test cases and can you elaborate on the greatest value that automation has in the agile environment?

    • tristaanogre
      Esteemed Contributor

      I find the same... automation's best ROI is twofold, IMO

       

      1) Regression. You've already manually tested it once when it was developed. If you have to redo, manually, every test you've ever done, you'll NEVER get anything new tested, because each sprint/iteration you'll have to test more and more and more. Adding functional regression automation to the development process is a HUGE help, as it frees the testers from having to do the manual regression work. Have the regression tests running constantly... at least nightly... so that as SOON as something breaks, you'll have a notification and it can be addressed immediately (a minimal sketch of such a nightly run follows at the end of this reply).

      Be smart about your regression tests, also. Sometimes something done manually never needs to be repeated. Or you are testing a module that is reused multiple times in the application: if you've fully tested it in one place, you don't need to fully test it in another; a simple "does it work" will suffice for each subsequent instance of that module. And so on.

      Also, there are a LOT of tests done in manual testing that are "edge" cases... things that a customer MIGHT run into. Important test cases, for sure (I use the rule, "If I can come up with the scenario, odds are the customer will do the same thing"), but regression is about risk. If it's a relatively low-risk scenario and you have other high-risk things still to do, put it in a backlog and get to it "when you have the time."

       

      2) Need for high accuracy and repeatability. While this plays well into regression, one of the things I ran into in the past was an intensely complicated workflow/state engine built into the application. There was a need for high accuracy and rapid repeatability of the test: if I found a bug, I needed to be able to rerun the test as many times as necessary with the same level of accuracy. In my case, I wrote an automated test that did this for me, which I ran on demand. While it took some time investment up front, it saved me a LOT of time each time I needed to run the test. The automation was later integrated into the regression suite, but as an assist to my manual testing it was invaluable.

      There was an article I read a while back that addressed the concept of "artifacts" created during the testing process.  This includes documented test plans, configuration scripts, and automated test code and scenarios.  Creating an automated test creates an artifact that needs to be stored, maintained, and referenced.  Does the overall process benefit from having such an artifact created and is there an ROI for the continued maintenance of the artifact?  If the answer to both those is yes, then the artifact should be created.  If the answer to either one is no, then don't bother.  
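
      On the "at least nightly" point above: since this thread is about TestComplete, here is a minimal sketch of what such a scheduled trigger could look like, using TestComplete's documented command-line switches (/run, /exit, /SilentMode, /ExportLog). The executable path, suite path, log path, and the notification hook are all hypothetical placeholders, and the exit-code handling is deliberately coarse; treat it as a starting point, not the implementation.

      # nightly_regression.py -- sketch of an unattended nightly regression run.
      # All paths below are hypothetical placeholders.
      import subprocess
      import sys

      TESTCOMPLETE = r"C:\Program Files (x86)\SmartBear\TestComplete 14\Bin\TestComplete.exe"
      SUITE = r"C:\Tests\RegressionSuite.pjs"    # hypothetical project suite
      LOG = r"C:\Tests\Logs\nightly.mht"         # keep one log artifact per run

      def run_nightly_regression():
          # Run the whole suite unattended and close TestComplete when finished.
          result = subprocess.run([
              TESTCOMPLETE, SUITE,
              "/run", "/exit", "/SilentMode",
              "/ExportLog:" + LOG,
          ])
          # TestComplete reports the run outcome through its exit code;
          # anything non-zero means the run needs a human to look at it.
          if result.returncode != 0:
              notify_team(result.returncode)

      def notify_team(code):
          # Placeholder: wire this to email, chat, or your CI server's alerting.
          print("Nightly regression needs attention, exit code %d" % code,
                file=sys.stderr)

      if __name__ == "__main__":
          run_nightly_regression()

      Scheduled via Windows Task Scheduler (or any CI server's nightly job), this gives you exactly the "as SOON as something breaks, you'll have a notification" loop described above.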

      • mgroen2
        Super Contributor

        tristaanogre, thanks for your contribution.

        I agree with the context: regression testing is a logical candidate for test automation.

        I also agree that whether an automated regression test suite should be treated as an artifact depends on the context you are working in.

        One extra thing I'd like to mention: enhance your automated regression test suite into a data-driven test suite. Keep the test data in external sources (Excel, SQL). Why? This way you can execute regression testing with unique and manageable test data, and easily extend your framework with new test data. The big win here is that working with your test data on a frequent basis keeps you focused and feeds your creativity to come up with "odd test data". One great tool for this, by the way, is the TestComplete Test Data Generator; see the TestComplete documentation for more on it.
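
        For the curious, here is a minimal sketch of the external-data idea in TestComplete's Python flavor, using the documented DDT.ExcelDriver object. The workbook path, sheet name, and column names are invented for illustration; DDT and Log are the globals TestComplete provides inside a script unit.

        # TestComplete script unit (Python): drive one form from an external
        # workbook. The workbook path, sheet, and column names are hypothetical.
        def RegressionWithExternalData():
            driver = DDT.ExcelDriver(r"C:\TestData\Customers.xlsx", "Sheet1", True)
            try:
                while not driver.EOF():
                    name = driver.Value["CustomerName"]   # one row = one iteration
                    email = driver.Value["Email"]
                    Log.AppendFolder("Iteration: " + str(name))
                    # ... fill the form under test with name/email here ...
                    Log.PopLogFolder()
                    driver.Next()
            finally:
                DDT.CloseDriver(driver.Name)              # always release the file

        Swapping the Excel driver for DDT.ADODriver would let the same loop read from a SQL source instead; either way, the test data stays outside the test code.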

    • mgroen2
      Super Contributor

      Ryan_Moran, thanks for your reaction.

      I think the credo "context is everything" also applies here. It all depends on the potential risks involved in the system you are testing (whether the testing is manual or automated doesn't matter in this case).

      Example: the (sub)systems which run (fly) an airplane will need the full set of documentation, and the 'complete package' of test scripts, worked out in detail. No compromises. Why? Because human lives are potentially at stake.

       

      When your testing efforts are focused on a "regular" IT system in a "typical" business, most of the time human lives are not at stake, and therefore the risks can be lower (as an example). Mix these context aspects with the (lack of) available time, resources, etc., and you have to decide whether 100% complete and accurate test scripts need to be worked out in full detail... And this again relates to the agile way of working: a lot of companies choose to work 'agile' because of the benefits of this software development method: it results in fast deliverables, with a high degree of interaction between end users and developers...

      But, at the same time, Agile has its cons as well... one of them being a lack of robustness, which is caused by lacking the time to work out all the documentation and all the test scripts you are referring to...

      Putting it back together in the example: I am 100% sure there is no airplane manufacturer in the world (whether for the plane itself, or any software systems running in/on it) that even starts to think of working Agile for their development/manufacturing processes... the risks are just too high! They rely on the 'old, classical' approach: documentation and decent testing, in each and every (sub)step of their development/manufacturing processes...

       

      If the company you work for has chosen to work "Agile", you can conclude for yourself that quality assurance is not the major point of focus, whatever their IT systems need to do. Being a tester in an Agile culture has its challenges; you can figure it out yourself.....

      • tristaanogre
        Esteemed Contributor

        mgroen2 wrote:

         

        If the company you work for has chosen to work "Agile", you can conclude for yourself that quality assurance is not the major point of focus, whatever their IT systems need to do. Being a tester in an Agile culture has its challenges; you can figure it out yourself.....


        I disagree with your statement that utilizing an Agile methodology (Scrum, Kanban, Lean, etc.) means that quality assurance is not the major point of focus. At the company where we implemented a blend of Scrum and Kanban, QA was DEFINITELY in the focus of our development. The purpose of the Agile methodologies we employed was not simply to get deliverables out the door at a high velocity, but to make sure that what we DID get out the door was good quality. This meant that we needed to adjust our expectations as to how many deliverables we could get out within a particular time frame... and that's where companies frequently fall down. They get too caught up in "get it out, get it out fast, get paid fast" and forget the exponentially increasing cost of failures the further down the development line you get.

        If you build good QA practices into your Agile methodology, even if it, contextually, doesn't make sense to do the robust documentation, you'll still have a good quality product. In this I agree with Ryan_Moran... sometimes documentation is done simply to say that the documentation is done... which goes back to that rule I mentioned about artifacts. If you write a test plan with the full robust test scripts, execute it once... and then never look at the documentation EVER again... why the heck did you spend time creating the artifact in the first place?

  • Ryan_Moran
    Valued Contributor

    I never bought into the idea of DDT, because it uses circular logic. That is to say, in every case I have found of people saying it's useful, the consistent message is "it's useful because more cases mean it will catch more bugs". In this type of reasoning it is claimed that A ("it's useful") because B ("more cases mean it will catch more bugs"), even though B has not been proven and varies greatly depending on the area of the application that you are testing. In the way DDT is typically "sold", it is assumed that B is true, therefore A is true. This is circular reasoning and is a logical fallacy.

    • tristaanogre
      Esteemed Contributor

      Actually, I've never heard the argument for DDT as "do more cases, catch more bugs". Instead, a good DDT-structured framework is not necessarily about generating a lot of test cases, but more about separating the code that executes against the AUT from the data that the code uses. For example, I want to enter data into a form on screen. The code for that form is pretty static and basic, no matter what the data is. So, I abstract the code for the form to make all data entry fields and selections parameterized variables that are sourced from data.

       

      Now, if I want to run a test case against that form, I simply create a data record for that form and boom.  One test case is done.  Want another test case?  Don't need to touch the code at all, just add another data record.

      Now... if done well, this isn't just looping through the form and populating data.  That code that "drives" that form is called from other framework structures. So, I'm testing a use case, for example, that enters data in that form and then does some sort of operation based upon what was entered.  So, my data includes not just the data on the form but other steps and processes, button clicks, validations, etc., that drive the whole use case.

       

      Build your framework properly, and when it comes to adding another test case, the amount of actual code you need to write is reduced and most of your test case construction is simply adding data records. I had a framework at that company that did POS applications where I had code that applied payments, selected tickets, verified transaction journals, etc.... all modularized code. Every time I wanted to add a new test case, I didn't need to touch a line of code; I simply went into my database, added a few data records, and now my project had expanded by a test case.

       

      Sure, the end result is more test cases... but the true aim of DDT is to speed up the ability to add a test case and even make it so that a non-programmer tester can add a new test case by the simple task of data entry.
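
      To make that concrete, here is a small plain-Python sketch of the "code drives the form, data drives the code" structure described above. The action functions and test records are invented for illustration; in a real framework the actions would wrap the actual UI automation calls, and the records would live in Excel or SQL rather than inline.

      # Plain-Python sketch: actions are code, test cases are data records.

      def fill_form(fields):
          print("filling form with", fields)    # stand-in for real UI automation

      def click(button):
          print("clicking", button)

      def verify(expected):
          print("verifying", expected)

      # Keyword -> implementation. New *behavior* means one new function here.
      ACTIONS = {"fill_form": fill_form, "click": click, "verify": verify}

      # Each test case is pure data: a name plus ordered (action, argument) steps.
      # Adding a test case means adding a record, not touching the code above.
      TEST_CASES = [
          ("new_customer_happy_path", [
              ("fill_form", {"name": "Ada", "email": "ada@example.com"}),
              ("click", "Save"),
              ("verify", "Customer saved"),
          ]),
          ("missing_email_rejected", [
              ("fill_form", {"name": "Ada", "email": ""}),
              ("click", "Save"),
              ("verify", "Email is required"),
          ]),
      ]

      def run_all():
          for name, steps in TEST_CASES:
              print("--- %s ---" % name)
              for action, argument in steps:
                  ACTIONS[action](argument)

      if __name__ == "__main__":
          run_all()

      Notice that the second test case required no new code at all, just another record; that is the speed-up being described, with more coverage only as a side effect.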

      • Ryan_Moran
        Valued Contributor

        I'm a little confused by that last response. You say you haven't heard an argument for more cases catching more bugs, but then all of this is to simplify the addition of more cases. I have to ask: if more cases don't mean more bugs, then why is it so important to add more cases?