Forum Discussion

Logiv
Contributor
5 years ago

Best practice on handling test failures that will be corrected soon

Hello,

 

I have a question on how people manage errors in TestComplete, specifically errors that are expected to occur for a few weeks and then be corrected.

As an example:
- Let's say I have 1,000 tests on retail store transactions.

- I run the tests, but I get 348 failures because there is a problem in a tax calculation, so most of my transactions are off by one cent.

- The team decides to go ahead and release this version (along with the bug).

- On my next pass of test automation, I get 355 errors: the 348 "old" errors plus a few new ones.

Can I easily tell which ones are new? 

 

I see 3 ways to possibly resolve this:
1 - I copy all 348 failed checkpoints, rewrite them to accept that one-cent difference so the tests pass, and later point each test back to either the "new/bugged amount" or the "regular, correct amount". This seems lengthy, and I'm scared I'll forget to toggle something back when the problem is corrected.

2 - I manually write down all the failed tests and compare the lists each time I run the routine (also lengthy).

3 - I set a flag that turns the errors orange, meaning "yes, I know it failed the checkpoint, but that's OK". New issues would then show the red error flag, while the transactions I flagged as "known to be bugged" would show in orange.

I am not sure I'm explaining this properly.
Basically, I need to be able to distinguish between errors I saw on a first run of tests and errors I find on a second run (which would include the previously found errors plus any new ones).

Or, put another way: I'd like to temporarily disable the checkpoints on the 348 tests that failed, and re-activate all of those checkpoints a few weeks later.

Thanks.

  • I wouldn't change the calculations in the test. A couple of options would be to change those checkpoints to warnings rather than errors, or to change the message on the error to IGNORE THIS so you can spot them easily in the log.
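That tip can be sketched as a small decision helper. This is a hypothetical Python sketch: `KNOWN_ISSUE_IDS` and `report_level` are made-up names, and in an actual TestComplete script the "warning" and "error" branches would call the real `Log.Warning(...)` and `Log.Error(...)` methods.

```python
# Hypothetical registry of bug-tracker IDs for failures we already know about.
KNOWN_ISSUE_IDS = {"TAX-123"}

def report_level(expected, actual, issue_id=None):
    """Decide how a failed comparison should be reported.

    Returns 'pass', 'warning' (known issue), or 'error' (new problem).
    In TestComplete, 'warning' would map to Log.Warning("IGNORE THIS ...")
    and 'error' to Log.Error(...).
    """
    if abs(expected - actual) < 0.005:  # amounts match to the cent
        return "pass"
    if issue_id in KNOWN_ISSUE_IDS:
        return "warning"  # known bug: visible in the log, but not red
    return "error"        # anything unexpected stays a hard failure
```

Because warnings and errors are counted separately in the log, new failures stand out immediately instead of being buried among the 348 known ones.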

    • sonya_m
      SmartBear Alumni (Retired)

      Thank you Marsha for the tip!

       

      Logiv would this approach work for you?

      • Logiv
        Contributor

        Hello,

        Thanks for the idea, I did not know we could change those. Yes, I think it'll work - just haven't found how to do it yet 😉

        The best for me would be to trigger an error if the difference is > 0.02 and a warning if the difference is <= 0.02. That way I'd be able to catch a bigger difference that would indicate another issue on top of the existing one!

        I will look into it and report back. Thanks.
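The tolerance-band idea described above could look roughly like this. It's a hypothetical Python sketch, not TestComplete code: `classify_difference` is a made-up helper, and in an actual script the three outcomes would map to the real `Log.Message`, `Log.Warning`, and `Log.Error` methods.

```python
def classify_difference(expected, actual, tolerance=0.02):
    """Classify a checkpoint comparison as 'pass', 'warning', or 'error'.

    Known one-cent bug: differences up to `tolerance` are downgraded to a
    warning; anything larger stays a hard error, since it would point to
    a new problem on top of the known one.
    """
    diff = abs(expected - actual)
    if diff < 0.005:                # equal to the cent
        return "pass"
    if diff <= tolerance + 1e-9:   # small epsilon guards float rounding
        return "warning"           # known-bug territory
    return "error"                 # larger gap: a genuinely new issue
```

Note the small epsilon on the tolerance comparison: with binary floats, `abs(5.00 - 5.02)` is slightly more than `0.02`, so a strict `<=` would wrongly escalate a borderline known-bug difference to an error.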