
On error, next test case iteration should not stop


In our project we have multiple test items, and each test item has several iterations defined. While running the project suite, if an error occurs during any iteration of a test item, execution stops for the whole test item: TestComplete does not execute the next iteration in line but instead proceeds to the next test item. This behavior of TestComplete is problematic, because the previous iteration could have failed due to an application error or many other possible reasons. Ideally, if one iteration fails, TestComplete should not stop execution of the remaining iterations. I have also discussed this issue with the SmartBear support team, and they suggested I submit a feature request. It would be really nice to have this feature, because it is very common in automation for a test case to have multiple sets of test data, and iterations are the only good approach to handle that.

6 Comments
Community Hero

It sounds like a nice feature.  However, you can achieve what you're looking for by adding appropriate exception-handling code to the keyword tests and scripts that are executing.  So, within your test item, inside each iteration, add nested exception handling using try/catch or something similar in your scripting language.
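As a minimal sketch of that idea in plain JavaScript (the `testData` array, `runIteration` callback, and the returned failure summary are illustrative placeholders, not TestComplete APIs; inside TestComplete you would log through its Log object instead):

```javascript
// Run every iteration of a test item, catching failures per iteration
// so one bad data set does not stop the remaining iterations.
function runAllIterations(testData, runIteration) {
  var failures = [];
  for (var i = 0; i < testData.length; i++) {
    try {
      runIteration(testData[i]); // one iteration of the test item
    } catch (e) {
      // Record the failure and keep going with the next data set.
      failures.push({ index: i, message: e.message });
    }
  }
  return failures; // summary of which iterations failed
}
```

Because the try/catch sits inside the loop, a thrown error only aborts the current iteration; the loop then moves on to the next set of test data.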

Established Member

@

 I am already using exception handling in my code and but it does not cover all the error related scenario for example if,  for one of the  iteration there is some test data related issue then logically my test case for that particular iteration should fail and even in my exception handling code I have to log the error but once the error is logged the execution for test item stops and it does not proceed with next iteration for another set of test data.

Community Hero

Exactly... there's a setting for "On error", meaning that as SOON as something of type "Error" shows up in the log, it fires... it is controlled either at the project level OR at the test item level (there's a hierarchy).  So it sounds like you have it set at the test item level, meaning that as soon as an error gets logged, that test item is halted and execution moves on to the next one, even if there are more iterations.

Two things:
1) Keep the setting as is but change your exception handling to log as "Warnings" instead of "Errors"
2) Change the setting and control halting of a test item internally in code within the try/catch/finally blocks.
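A rough sketch of option 2 in plain JavaScript (the `steps.selection` and `steps.dependent` names are illustrative placeholders for your own test steps, not TestComplete APIs): with "On error" set to continue, code inside the iteration decides which later steps to skip when an earlier step fails.

```javascript
// Run one iteration; if the selection step fails, skip the steps
// that depend on it, but report the result instead of halting
// the whole test item.
function runIteration(data, steps) {
  var ok = true;
  try {
    steps.selection(data);     // step the later steps depend on
  } catch (e) {
    ok = false;                // selection failed: skip dependents
  } finally {
    if (ok) {
      steps.dependent(data);   // only runs if the selection succeeded
    }
  }
  return ok;                   // caller logs pass/fail per iteration
}
```

This way a failed selection does not leave later steps blindly poking at the application; the iteration ends early, and the caller can continue with the next data set.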

Established Member

@tristaanogre 

I have also tried this solution, but it's not a good approach. Suppose I have to make a selection at some point, and the application's next steps depend on the kind of selection I made; if I was not able to make the selection due to some error, and I have set the "On error" property to continue execution, then the test case will keep running but will not perform any operation on the application. It could be a workaround if your test cases are small in terms of execution time, but in my case test cases take around 10 to 15 minutes to complete, so with "On error" set to continue, a test case will unnecessarily run for another 10 to 15 minutes (here I am including the timeout TestComplete waits for when it is not able to locate an element). And from my point of view, the purpose of automation should not only be to perform the execution but also to catch genuine bugs if there are any. So rather than increasing the complexity of the code, there should be a simple solution: if one iteration fails, TestComplete should not halt execution but continue with the next iteration.

Community Hero

Gotcha.  I was just trying to present alternatives, since a feature request is good... but turnaround on development is not quite so quick.  So I was looking for ways for you, in the short term while you wait for the development (which could take a year or so), to achieve the same results.

 

I still think it can be handled in code, and your concerns about the timeouts on object recognition can also be mitigated, but I get your point and agree with you (hence the Kudo I gave the request 🙂).

Established Member

 

Thanks 🙂