Forum Discussion
tristaanogre wrote:
When you make your automation code TOO complicated, then you spend all your time debugging the automation code and not enough time actually validating/verifying your AUT.
And I would like to add even more:
-- Unfortunately, it is quite rare for the tested application to be designed and documented well enough for test automation to really be done in parallel with development;
-- In most cases, test automation (or its correction to match the application's changed behaviour) is done only after the development task is completed;
-- This means that the longer it takes to put an automated test into production, the more time manual testers spend verifying things that can and should be verified automatically. That increases the load on manual testing and decreases its efficiency at exploratory verification of complex, corner and non-standard cases.
With the above in mind, I think it is sometimes better to have test code that is less perfect (from the classical development point of view) in favor of code that is more easily understood and modified.
The less time it takes the person who maintains the test code to put it into production, the more time manual testing has for extended application verification.
Hi all,
Back from leave. Some good pointers there, thanks. Responses/updates/queries as follows: :smileyhappy:
Just keep in mind that the throw("1") and throw("2") in my code is me literally throwing an exception, not pseudo-code put in to illustrate the point.
> a) TestComplete provides the call stack in the Call Stack log pane.
Yup, I'm accepting of that point. The call stack in the log should cover what is required for users. It's just me as a developer that would find it useful and even that I can handle with debugging if required.
> Also, most errors during execution relate to object identification/timing issues, which try...catch can't catch.
I think we are all with you on that one. I think object recognition issues fill up more lines of code in my scripts than the actual testing does.
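(As an aside, the usual way to keep that recognition code out of try...catch is explicit wait/check logic. A minimal sketch in TestComplete JScript; the alias names and the 10-second timeout are made up for illustration:)

function ClickSubmitIfPresent()
{
  // Wait up to 10 s for the mapped object instead of letting a failed
  // lookup raise an error deep inside the test. WaitAliasChild returns
  // a stub object with Exists == false if the object never appears.
  var btn = Aliases.myApp.mainForm.WaitAliasChild("btnSubmit", 10000);
  if (btn.Exists)
    btn.Click();
  else
    Log.Error("btnSubmit was not found within 10 s");
}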
> In case test code fails because of an unexpected situation (e.g. when attempting to write to a read-only file), it is perfectly fine if the code fails. This reveals, rather than hides, the code problem, which can be addressed immediately. If it is possible for the given file to be read-only, then the test code must be improved to make the file writable before writing to it. If the file must not be read-only, then the test code must be improved with a verification block that clearly reports the problem to the test log.
Hmm, my example is a bit rough and not a perfect representation of what my code does, but it's the best I can think of without trying to show you what my code doesn't do yet because I'm still trying to get input on the best way to do it :smileyfrustrated: Wow, what a messy sentence...
What I'm thinking of at the moment is negative confirmation rather than positive confirmation. Perhaps if the file can't be written to because it's read-only, the test is a PASS rather than a fail. (No, I can't think of a case where this would be a test either, but bear with me on this one.) If I write my script to always ensure the file is not read-only, I lose that ability using this script function.
Now, if I wrote a script that's very simple and can easily be used by standard users...
function WriteLineToTextFile(filename, text)
{
  // get file
  // open file
  // write line in
  // save file
}
Can't get much simpler than that. A user can call the function from a script or visualiser and it will work. Unless the file is read-only, in which case it will fail.
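(For concreteness, a sketch of what that body might look like in TestComplete JScript. The aqFile routine is from TestComplete's object model, but the UTF-8 encoding, line ending and failure handling here are assumptions, not anything specified above:)

function WriteLineToTextFile(filename, text)
{
  // Sketch only: aqFile.WriteToTextFile is assumed to return false on
  // failure (e.g. a read-only file), so we surface that as an exception.
  if (!aqFile.WriteToTextFile(filename, text + "\r\n", aqFile.ctUTF8, true))
    throw new Error("Could not write to " + filename);
}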
I could add code to make the file not read-only by finding the file with fileinfo or the like and then changing the read-only flag in the same function. This means I have one function that is entirely dedicated to doing one job and can only be called in one context. If the code was modified slightly, I could use it in more than one context:
function WriteLineToFileInternal(filename, text)
{
  try
  {
    // get file
    // open file
    // write to file
    // save file
  }
  catch (e)
  {
    throw e;
  }
}
I can now use it in two contexts.
function PositiveFeedback(filename, text)
{
  try
  {
    MakeFileWriteable(filename);
    WriteLineToFileInternal(filename, text);
    return true;
  }
  catch (e)
  {
    Log.Error("Unable to write to file");
    return false;
  }
}

function NegativeFeedback(filename)
{
  try
  {
    WriteLineToFileInternal(filename, "Test Text");
    // I managed to write to the file, which I shouldn't have been able to
    Log.Error("You shouldn't be able to do this");
    return false;
  }
  catch (e)
  {
    if ([e somehow gives away that the file is readonly])
    {
      Log.Message("The file was readonly");
      return true;
    }
    else
    {
      throw e; // or: return false;
    }
  }
}
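(One possible way to fill in that bracketed check, as a sketch: rather than parsing the exception text, which varies by engine and locale, you could query the file's attribute directly. aqFileSystem.CheckAttributes and the faReadOnly constant are assumed from TestComplete's object model; IsFileReadOnly itself is a hypothetical helper:)

function IsFileReadOnly(filename)
{
  // Ask the file system directly instead of guessing from e.message.
  return aqFileSystem.CheckAttributes(filename, aqFileSystem.faReadOnly);
}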
A user no longer calls WriteLineToFileInternal as before; it's now an internal utility function. Instead they have the PositiveFeedback function, which does exactly the same thing but with better error handling. If any better handling is required, PositiveFeedback can be tweaked internally without users needing to know about it. Does this look like a reasonable implementation? None of these scripts individually is particularly complex. As I said, it's a ramble looking for feedback, coming from an OO rather than a scripting paradigm. Any feedback is welcome.
- AlexKaras, 6 years ago (Champion Level 3)
Hi,
Below are just my comments. Obviously, test code can be implemented with or without exceptions; this is a matter of code design and of what requirements you have for the test code.
> The call stack in the log should cover what is required for users.
I would say that users do not care about the call stack. What they do care about is whether or not the tested function works as required. And this must be clear from the test log, with a clear and understandable description of what exactly did not work (in case of failure). If a call stack is required to figure out what did not work or why, to me this means poor logging. Thus I consider TestComplete's Call Stack feature to be a development tool.
> I can now use it in two contexts.
-- Nothing prevents you from having three functions: one to check if the file is writable, one to make the file writable, and one to write to the file. And to call them as your test dictates;
-- Each exception type must be explicitly handled in the calling code, so this does not differ much from a return code. Generic exception handling is not considered good programming practice and is not recommended, AFAIK;
-- Exceptions are a good means either to handle unexpected situations (like loss of connectivity) or to report a problem without crashing the application. But a test, by definition, checks a certain defined behaviour, and all deviations must be clearly reported so they can be triaged as to whether or not they are real problems. If they are not, then the test code must be enhanced so that these deviations become expected, handled behaviour;
-- Exceptions are usually either unrecoverable events (handled at the end of the calling code, where they do proper cleanup) or must be handled right after the function call so that it is possible to retry and/or continue. Considering our example with the read-only file: if the exception is thrown and the calling function handles it at the end, the only thing it can do is clean up and report a problem. In almost all cases the current test must be stopped after that (because of the unstable environment state), leaving you without any information about whether or not the verified task can be done by the end user. If the exception (or return code) is handled immediately after the function call, then the test code has a chance to make the file writable and retry (see the sketch after this list). As a result, you get a notification that for some reason the file turned out to be read-only, but the whole functionality remained valid once the file was made writable. I think this is a much more useful result if you need to make a decision about the tested application's state, rather than just reporting the first problem and stopping. (Basically, this is the primary difference between integration and end-to-end testing, and TestComplete really excels at the latter, though it is perfectly usable for the former.)
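(A sketch of that "handle immediately and retry" pattern, reusing the hypothetical IsFileReadOnly helper and WriteLineToFileInternal from earlier in the thread; the aqFileSystem.ChangeAttributes call and its constants are assumed from TestComplete's object model:)

function WriteLineWithRetry(filename, text)
{
  try
  {
    WriteLineToFileInternal(filename, text);
    return true;
  }
  catch (e)
  {
    if (IsFileReadOnly(filename))
    {
      // Report the deviation but keep the test going: clear the
      // read-only flag and retry once.
      Log.Warning("File was unexpectedly read-only: " + filename);
      aqFileSystem.ChangeAttributes(filename, aqFileSystem.faReadOnly, aqFileSystem.fattrFree);
      WriteLineToFileInternal(filename, text); // a second failure propagates
      return true;
    }
    throw e; // unrelated problem: let the caller triage it
  }
}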
- RUDOLF_BOTHMA, 6 years ago (Community Hero)
Hi AlexKaras
I've opted in the end to go more down the route of building the KWTs themselves with try...catch blocks in them and having the KWTs return a true/false in the catch/finally, depending on how things went. Much less OO, much more of a scripting approach:
Return false from a Keyword Test
I'm using some work in the OnLogError event to help me handle errors better and report back better info, but the approach now leaves only application exceptions to actually throw errors. Other errors will be handled with intensive standard wait/find logic, but scripts should always return something: either a true/false or an object/null, which can be tested and acted upon by the calling function/KWT. The next KWT in a tree will check the last result for a true/false and log/stop as required.
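(For anyone following along, a minimal sketch of what such an OnLogError handler can look like. The event signature and LogParams fields come from TestComplete's GeneralEvents, but the specific suppression rule and message text matched here are hypothetical:)

function GeneralEvents_OnLogError(Sender, LogParams)
{
  // Hypothetical rule: demote object-recognition errors to warnings so
  // the KWT can act on the false return value instead of failing outright.
  if (LogParams.MessageText.indexOf("Unable to find the object") !== -1)
  {
    Log.Warning(LogParams.MessageText);
    LogParams.Locked = true; // suppress the original error entry
  }
}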
- RUDOLF_BOTHMA, 6 years ago (Community Hero)
Thanks AlexKaras
I haven't forgotten you, I'm just mulling things over at the moment. You make some very good points in your post and your input is very welcome.