Forum Discussion

william_roe
Super Contributor
9 years ago

Strategy for Disabling Stop on Error

We have made the cardinal sin of creating dependencies between tests. Because of this, if test 2 fails, the remaining 5XX tests aren't run. Because our tests are data driven, I believe we can recover from this. However, there is a question which looms large, and I want to hear how others are handling this.

 

Given a test can fail at any point (any number of pages deep, with popups and all), how are you able to return to a known good place? We are aware the application can be terminated by calling Close on the root Alias, but because the user's last location is stored in a cookie, this could be anywhere, and the application will resume there when it is re-launched.

 

The approach currently under consideration is to write a test which returns to a known location and call it from the OnLogError event, at which point "the test will proceed to the next sibling test item" (whatever that is) and the run will resume.
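
For illustration, here is a minimal sketch of what such an OnLogError handler might look like in a TestComplete Python script unit. This assumes a web application; the recovery routine name and the start URL are hypothetical placeholders, not anything from our actual project.

    # Sketch only: drive the app back to a known-good page when an error is logged,
    # so the run can continue with the next sibling test item.
    _recovering = False

    def GeneralEvents_OnLogError(Sender, LogParams):
        global _recovering
        if _recovering:          # guard against recursion if recovery itself logs an error
            return
        _recovering = True
        try:
            recover_to_known_state()
        finally:
            _recovering = False

    def recover_to_known_state():
        # Close every open browser so stray popups and dialogs are discarded.
        browser = Sys.WaitBrowser("*", 2000)
        while browser.Exists:
            browser.Close()
            browser = Sys.WaitBrowser("*", 2000)
        # Relaunch at a fixed entry point instead of whatever the cookie remembers.
        Browsers.Item[btChrome].Run("https://example.test/home")  # hypothetical URL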

 

Is this a valid approach?

 

Thanks all in advance.

 

 

  • soccerjo
    Occasional Contributor

    We have a web application, and our error handler closes the browser (to make sure all dialogs in the browser are dealt with) and reopens it. Then each of our tests goes to the correct part of the web app (if it was already there from the last test, that is not a problem).
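
    Something along these lines is what each of our tests calls before doing anything else (a rough sketch in TestComplete-style Python; the URL, page path and browser choice are made up, not our actual setup):

        START_URL = "https://example.test/app"   # hypothetical application URL

        def ensure_on_page(path):
            # Reopen the browser if the error handler (or a crash) closed it.
            browser = Sys.WaitBrowser("*", 2000)
            if not browser.Exists:
                Browsers.Item[btChrome].Run(START_URL)
                browser = Sys.Browser("*")
            # Navigate to the part of the web app this test needs;
            # already being there from the last test is harmless.
            page = browser.Page("*")
            if page.URL != START_URL + path:
                page.ToUrl(START_URL + path)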

    • Colin_McCrae
      Community Hero

      I adopt the "kill everything and start fresh" approach for big/fatal errors.

       

      Some of my tests have dependencies on previous tests. In some cases, it's almost unavoidable. It kind of depends on what you're doing. In most of these cases they could be made entirely standalone, but the amount of setup work required for each (which the previous tests are at least partly responsible for) would become huge and the runtime would stretch out to something silly.

       

      So we accept that an error sometimes can, and should, cause an abandonment. But without losing the entire run, if possible.

       

      So I deal with this in two ways.

       

      My framework splits tests into "packs". That's a key part of it. Packs should not be huge, and should deal with small, functionally linked tests.

       

      A "pack" will NEVER have a dependency on a test in another pack. Any dependencies MUST be within the pack. And even the, should be kept to a minimum.

       

      All error handling is done by custom routines. For a simple test failing, I allow tests within a "pack" to have a dependency on previous step(s) within the pack. That is to say, I can put logic in which allows a test to say "if this passes, do this" / "if this fails, do this". This allows me to work round small, known bugs: the tests will auto-correct themselves once the bug is fixed, but still report the failure while it's not.
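
      As a rough, framework-agnostic sketch of that branching (the step names and helpers here are invented, not my actual routines):

          step_results = {}

          def run_step(name, func):
              # Record pass/fail so later steps can branch on earlier outcomes.
              try:
                  func()
                  step_results[name] = True
              except Exception as e:
                  Log.Error("Step '%s' failed: %s" % (name, e))
                  step_results[name] = False
              return step_results[name]

          # "If this passes, do this / if this fails, do this":
          # if run_step("create_order", create_order):
          #     run_step("edit_new_order", edit_new_order)
          # else:
          #     run_step("edit_existing_order", edit_existing_order)  # known-bug workaround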

       

      But if a larger error occurs (the application crashing being the obvious one), it will kill the application (or website) and all services related to it, log as much error info as possible, and simply move on to the next "pack". Every pack is designed to start the application/site from a clean baseline, and most will have some sort of restore (be it a DB or whatever) or tear-down routine, which should always be the first step in any pack. Tests can also be coded to pass back a flag to the main handler which indicates that something terrible has happened and the "pack" should be abandoned.
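
      In pseudo-Python, the pack handling looks roughly like this (the pack structure and the restore/kill helpers are placeholders for whatever your framework provides, not my real code):

          ABANDON_PACK = "abandon"   # flag a test can return to abandon its pack

          def restore_baseline():
              pass        # placeholder: DB restore / environment reset

          def kill_application_and_services():
              pass        # placeholder: close the app and related services

          def run_pack(steps):
              restore_baseline()               # always the first step in any pack
              for step in steps:
                  try:
                      flag = step()
                  except Exception as e:
                      Log.Error("Fatal error, abandoning pack: %s" % e)
                      flag = ABANDON_PACK
                  if flag == ABANDON_PACK:
                      kill_application_and_services()
                      break                    # move on to the next pack, keep the run alive

          def run_all(packs):
              for pack in packs:
                  run_pack(pack)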

       

      Seems to work for me. I can work round small errors and defects, and get out of jail with a clean start for bigger ones without losing everything.

       

      Of course, if the application has crashed/broken so badly that it no longer works, even after tear-downs and restores, then every subsequent "pack" will simply fail out as well. There isn't much anyone can do about that (bar re-running installers etc. - way too complex), but at least we get a log telling us this. :)

      • AlexKaras
        Champion Level 3

        Hi William,

         

        It looks like the approach I used to use is pretty much the same as what the two previous posters described. :)

        Basically, every test (or test 'pack', if tests are organized as described by Colin) is responsible for checking that the tested application exists and for putting it into the required initial state. If some unexpected windows appear while navigating to the required initial state (usually this happens if something was not saved because the previous test failed), these windows are closed, discarding any changes, just so we don't run into the same problem that caused the failure of the previous test.
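
        A simplified sketch of such a routine for a desktop application (the process name, TestedApps entry and dialog handling are illustrative assumptions, not my real code):

            def ensure_initial_state():
                app = Sys.WaitProcess("MyApp", 2000)       # hypothetical process name
                if not app.Exists:
                    TestedApps.MyApp.Run()                 # hypothetical TestedApps entry
                    app = Sys.WaitProcess("MyApp", 10000)
                # Close unexpected dialogs (e.g. "unsaved changes" prompts),
                # discarding changes so we don't hit the previous failure again.
                dlg = app.WaitWindow("#32770", "*", -1, 1000)
                while dlg.Exists:
                    dlg.Close()
                    dlg = app.WaitWindow("#32770", "*", -1, 1000)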

        One point worth mentioning here is that the key results of previously successful tests are preserved by my test code, and if some test fails, the subsequent test tries to use the result of the previous successful test run. For example: if an order was successfully created once, the id of this order is preserved. If during the next test run the order fails to be created, then the next test will use the previous order with the preserved id. (Yes, orders cannot be created, but why lose the ability to check that existing orders can still be viewed and/or printed?)
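
        For example, preserving and reusing a key result like an order id between runs might look something like this (the file location and key names are assumptions):

            import json, os

            STATE_FILE = os.path.join(Project.Path, "last_known_good.json")

            def load_state():
                if os.path.exists(STATE_FILE):
                    with open(STATE_FILE) as f:
                        return json.load(f)
                return {}

            def save_result(key, value):
                # Persist a key result (e.g. a created order id) for later runs.
                state = load_state()
                state[key] = value
                with open(STATE_FILE, "w") as f:
                    json.dump(state, f)

            # After a successful "create order" test: save_result("order_id", new_id)
            # If creating an order fails on the next run, fall back to:
            #   order_id = load_state().get("order_id")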