Forum Discussion
AlexKaras
Community Hero · 12 years ago
This time I am with Marsha :)
I have also never met a situation where the test log could be taken as-is and used as a final report to management. We always had to do some manual processing: go through the test log and decide whether a reported error is an actual problem or something caused by a flaw in the test code, the system environment, or whatever else that does not reproduce when the failed test is re-executed.
Also, don't forget that real-world automated tests differ from manual testing. A manual tester is usually aware of the latest changes in the tested application and 'adjusts' the written test to its actual behaviour while testing. For example, they usually will not report a problem if the current GUI does not correspond to the one described in the manual test, as long as they know the GUI was modified in the latest build.
An automated test does not have this knowledge, and being regular deterministic software, it will fail and report a problem. Obviously, you can consider two border types of automated tests: very simple, quickly created ones (e.g. recorded tests) and pretty robust ones that do a lot of work to do what they are expected to do without failing (e.g. coded tests that use a lot of classes, custom libraries, intelligent search within the tested application, etc.). I think it is evident that the former approach takes very little time to implement a test but, in the long run, requires a lot of effort to support it. The latter approach requires significant time to implement the library code and much more time to implement the first tests. Such tests are more stable, but the effort required to support them depends on the complexity and architecture of the library code, much like when developing a regular application. In real life we usually balance between these border approaches, which means we are always at risk that our test code will fail and require correction.
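To make the contrast concrete, here is a minimal Python sketch of the two border approaches. It is not tied to any particular tool: the lookup calls (try_find, window, button) are hypothetical stand-ins for whatever your automation framework actually exposes.

    import time


    class ControlNotFound(Exception):
        pass


    def find_control(app, name, timeout=10.0, interval=0.5):
        # "Library" style lookup: search by a stable property and retry for a
        # while, so slow rendering or a cosmetic change does not fail the test
        # outright.
        deadline = time.monotonic() + timeout
        while True:
            control = app.try_find(name)  # hypothetical tool-specific call
            if control is not None:
                return control
            if time.monotonic() >= deadline:
                raise ControlNotFound(f"Control {name!r} not found within {timeout}s")
            time.sleep(interval)


    def recorded_style_test(app):
        # "Recorded" style: hard-coded window title and control index,
        # quick to create but broken by the first cosmetic change.
        app.window("Orders - MainForm 1.2.3").button("btnSubmit_1").click()


    def library_style_test(app):
        # "Coded" style: more code up front (find_control and friends live in
        # a shared library), but the test survives minor changes.
        find_control(app, name="Submit").click()

The second test is only as stable as the shared library behind it, which is exactly why supporting it still takes effort once the application changes in a non-cosmetic way.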
I don't think that the result should be reported as passed when an automated test fails because of a problem in the test code: the test actually failed. The fact that it failed because of a test code problem may be a reason to report the test as 'blocked and executed manually', but not as passed, because that would create the wrong impression that the given automated test can be executed unattended, while in fact it requires manual execution.
So, for me, the test must be reported in the test log as failed regardless of whether it failed because of a real problem in the tested application or because of some problem with the test code. Then the test log must be reviewed and a decision made on whether the failure should be reported as an issue in the issue tracking system, or the test should be marked as blocked because it requires corrections to the test code.
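As a rough illustration of that review step, here is a small Python sketch (a hypothetical structure of my own, not any specific tool's API): the raw log status is never rewritten to passed; the manual review only decides what happens to each failed entry next.

    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional


    class Verdict(Enum):
        REPORT_DEFECT = "create an issue in the issue tracking system"
        BLOCKED = "mark as blocked: test code needs correction"


    @dataclass
    class LogEntry:
        test_name: str
        status: str          # "passed"/"failed" as written by the unattended run
        failure_source: str  # filled in during manual review: "product" or "test code"


    def triage(entry: LogEntry) -> Optional[Verdict]:
        # The original status stays failed either way; we only decide the follow-up.
        if entry.status != "failed":
            return None
        if entry.failure_source == "product":
            return Verdict.REPORT_DEFECT
        return Verdict.BLOCKED


    if __name__ == "__main__":
        entry = LogEntry("Login_smoke", "failed", "test code")
        print(entry.test_name, "->", triage(entry).value)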