Not sure how much it helps, but I pretty much don't use any of the built-in error handling. (I switch it all off in the project settings and also switch off the error stopping in my code through script.)
I build a LOT of error handling into my scripts. And I mean... A LOT. An average script function for me can easily be 70% validation & error handling and only 30% actual test.
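To give a rough idea of what that split looks like, here's a purely illustrative Python sketch (none of these names or numbers come from my real scripts):

```python
import logging

log = logging.getLogger("tests")

def check_discount_calculation(test_data):
    """Hypothetical test step: mostly validation, with a small slice of actual test."""
    # --- validation & error handling (the bulk of the function) ---
    if not isinstance(test_data, dict):
        log.error("check_discount_calculation: test data is not a dict: %r", test_data)
        return False
    for key in ("price", "discount_pct", "expected"):
        if key not in test_data:
            log.error("check_discount_calculation: missing key %r in test data", key)
            return False
    price, pct, expected = test_data["price"], test_data["discount_pct"], test_data["expected"]
    if not all(isinstance(v, (int, float)) for v in (price, pct, expected)):
        log.error("check_discount_calculation: non-numeric value in %r", test_data)
        return False
    if not 0 <= pct <= 100:
        log.error("check_discount_calculation: discount %r is out of range", pct)
        return False

    # --- the actual test (the small remainder) ---
    actual = price * (1 - pct / 100)
    if abs(actual - expected) > 0.005:
        log.error("check_discount_calculation: expected %.2f, got %.2f", expected, actual)
        return False
    return True
```

The bad data never reaches the "real" test step; it gets logged and reported as a failure instead of blowing up.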
This covers validation of the user-supplied test data as well as validation and error handling around the controls used in the test. My test units are all user-defined and modular, and they're run and controlled through a central framework, so if one test (or test pack) fails, the next one can still run. Assuming the test flow is designed and split up in a sensible way, of course. If you try to run one giant test for everything, that's more of a problem.
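The runner side of that is basically a loop which treats an unexpected exception in one unit as a failure rather than a reason to stop. Something along these lines (again just an illustrative Python sketch, not my actual framework code):

```python
import logging
import traceback

log = logging.getLogger("framework")

def run_test_pack(pack_name, test_units):
    """Run each test unit in a pack; a failure (or crash) in one never stops the rest."""
    results = {}
    for unit in test_units:
        try:
            passed = bool(unit())
        except Exception:
            # A bug in one test unit is logged as a failure, not a run-ending crash.
            log.error("%s/%s raised an unexpected exception:\n%s",
                      pack_name, unit.__name__, traceback.format_exc())
            passed = False
        results[unit.__name__] = passed
        log.info("%s/%s: %s", pack_name, unit.__name__, "PASS" if passed else "FAIL")
    return results
```

The key point is that the catch-all sits in the framework, once, rather than being copied into every individual test.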
The reason I put so much effort into this is that, for me, one of the key requirements for any automated test is that it should never crash. (Which is probably impossible, but I try to get it to the point where it's at least very hard to crash.)
An overnight run against our application (kicked off by the nightly build process) can take 6-8 hours, so an automated test run is useless if it crashes 20 minutes in. I think I must have it about right, as it hardly ever crashes. It may spew out a lot of errors if someone introduces a change that breaks things, but it won't crash, and you'll get meaningful error messages in the log files. (My log files are also custom-built.)
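The custom logging doesn't have to be anything fancy either. In Python terms it can be as simple as opening a timestamped file per run and writing everything through it (illustrative sketch using the standard logging module):

```python
import logging
from datetime import datetime

def open_run_log(log_dir="."):
    """Create a timestamped log file for one overnight run and return a logger for it."""
    run_stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    handler = logging.FileHandler(f"{log_dir}/test_run_{run_stamp}.log", encoding="utf-8")
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)-7s %(name)s: %(message)s"))
    logger = logging.getLogger("nightly_run")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    return logger
```

Every validation failure from the earlier sketches then ends up in that one file with a timestamp, which is what makes the morning-after triage possible.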
But then, I've built my entire framework from the ground up on this basis. I suspect retrofitting this approach onto a framework that wasn't built that way would be trickier.