Forum Discussion

william_roe
Super Contributor
9 years ago

Test Dependencies

I've come to the conclusion that our tests are too fragile. Many of the tests depend on the test(s) that ran before them, so if test 3 of 50 fails, tests 4 through 50 don't get run. I recall reading about ways around this but can't seem to find them. Here's the scenario:

test one (no dependencies)

test two depends on one

test three depends on two

etc.

My thought (from what I remember reading) is that the dependencies for each test (other than test one) already exist, so that if a test fails, the subsequent test(s) can still be completed through the entire test run.

The error handler would move on to the next test upon failure, using Runner.Stop(true) in the 'OnLogError' event.
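Something like this rough sketch is what I have in mind, assuming a TestComplete project with Python scripting and an event control whose OnLogError event is bound to the handler (GeneralEvents is just the default control name; adjust to your project):

```python
# Rough sketch: on a logged error, stop only the current test item so the
# runner proceeds to the next item instead of aborting the whole run.
def GeneralEvents_OnLogError(Sender, LogParams):
    Runner.Stop(True)  # True = stop the current test only, not the run
```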


I'm sure this has been discussed before, but I've been unable to find the posts.

7 Replies

  • Marsha_R
    Champion Level 3

    We make our tests as independent as we can. Each one will create its own data or read it from Excel, but it should not rely on a previous test. This works for data entry and calculations, so most unit-test-type things and some functional tests.
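    As a minimal sketch of the idea in plain Python's unittest (not our actual framework), each test builds its own data in setUp, so no test feeds off another:

    ```python
    import unittest

    class CustomerTests(unittest.TestCase):
        def setUp(self):
            # Runs before *each* test: fresh, private data every time,
            # never data left behind by an earlier test.
            self.customer = {"name": "AutoTest", "phone": "555-0100"}

        def test_rename(self):
            self.customer["name"] = "Renamed"
            self.assertEqual(self.customer["name"], "Renamed")

        def test_phone(self):
            # Passes even if test_rename failed; nothing is shared.
            self.assertEqual(self.customer["phone"], "555-0100")

    if __name__ == "__main__":
        unittest.main()
    ```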


    This can change when we get into workflow tests. In this case, the dependency is sometimes part of the test. The test might enter a bunch of things in one screen and then expect certain results in another screen. In that case, if the first one is inaccurate, I want the second one to fail.

    For your example, if 3 is dependent on 2, but 2 fails, then would you still have the correct data for 3 to go on?

    p.s.  Can I suggest another title for your post?  Something like "test dependencies" might be more helpful when others are searching the forum.  :)

  • NisHera
    Valued Contributor

    I'm also facing the same problem. As Marsha stated, I'm trying very hard to make each test independent.

    The other strategy I'm currently using is the concept of a smoke test: a tiny subset of the main test suite. Just after a build, I run a small sample of the most critical tests, for example part of the initial data-inserting scripts and parts of the most critical flows. So before a night/weekend run I know that the critical tests aren't broken and the environment is stable.
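    A rough sketch of that gate in plain Python (the two checks here are hypothetical stand-ins for real critical tests):

    ```python
    def check_login():
        return True      # stand-in: drive the real login flow here

    def check_base_data():
        return True      # stand-in: run part of the data-inserting scripts

    SMOKE_CHECKS = [check_login, check_base_data]

    def smoke_passes():
        for check in SMOKE_CHECKS:
            if not check():
                print("Smoke check failed:", check.__name__)
                return False     # environment unstable: skip the long run
        return True

    if __name__ == "__main__":
        if smoke_passes():
            print("Critical tests OK; safe to start the night/weekend run.")
    ```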


    I learnt the hard way that test no. 10 should not depend on the result data of test no. 05. Never reuse data. Now I'm trying to hold separate data sets for each test, and you can do that with minimal changes to the scripts, particularly when using DDT.
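    For example, here is a sketch assuming TestComplete's DDT Excel driver (the workbook path, sheet name, and column name are made up); each test reads only its own worksheet:

    ```python
    def run_with_own_data(sheet_name, row_action):
        # Open this test's private worksheet in the shared workbook.
        driver = DDT.ExcelDriver("C:\\TestData\\suite.xlsx", sheet_name, True)
        try:
            while not driver.EOF():
                row_action(driver)   # each row is one self-contained case
                driver.Next()
        finally:
            DDT.CloseDriver(driver.Name)

    def test10_row(driver):
        # Test 10 reads only its own sheet; it never touches test 05's data.
        Log.Message("Input for this case: " + str(driver.Value["Input"]))

    # run_with_own_data("Test10", test10_row)  # e.g. called from a test item
    ```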


    I think the hardest thing is to deal with system state. If test 04 changes the state needed to run test 05, then you have a lot of work ahead.
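    One way to cut that kind of dependency (just a sketch in plain Python with hypothetical file paths) is to restore a known-good baseline before each test instead of inheriting state:

    ```python
    import shutil

    BASELINE_DB = r"C:\TestData\baseline.sqlite"   # known-good snapshot
    WORKING_DB = r"C:\App\app.sqlite"              # database the app uses

    def restore_baseline():
        # Overwrite the working database with the snapshot so every test
        # starts from the same state, whatever ran (or failed) before it.
        shutil.copyfile(BASELINE_DB, WORKING_DB)

    def test05():
        restore_baseline()   # a precondition, not a dependency on test 04
        # ... drive the application and verify from a known state ...
    ```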

    • william_roe
      Super Contributor

      NisHera wrote:

      ...

      If test 04 changes the state needed to run test 05, then you have a lot of work ahead.


      That's what I was afraid of. Thanks for the reply.

  • Hi,

    Regardless of what test framework / tools / scripts you use, having dependencies between tests is a fundamental no-no! All tests must be able to run independently and in any order. You have broken this rule, and now you are paying, and will continue to pay, until you refactor your tests.

    John

    • Marsha_R
      Champion Level 3

       Hi John -

      Please remember that everyone works under different constraints at different employers. Criticizing is not really helpful.

      • TheGhost
        Contributor

        Hi Marsha


        It's not a criticism, it's a statement of fact. Specifically, he needs to refactor the dependencies out from between his tests. Those dependencies are the root cause of the fragility he is trying to eliminate.

        John