My framework does this. I find it very useful.
My framework is like an expanded and more flexible version of DDT.
It allows me to switch parts of a test on and off easily. It also allows me to put a "marker" against any step in the test, which I can then refer to further down the test. When a later step refers back to a marker, you can tell it to take action on a pass or a fail (it allows for both conditions), and that step will only run if the marker(s) match the condition. You can have multiple marker checks on a single test step (e.g. only run step 10 if steps 3, 6 and 9 all fail, or run step 8 if step 2 passes but step 3 fails, etc.; you can mix and match). Obviously, a step can only refer to markers on earlier steps, not future ones, as they haven't been run yet!
So, if there is no point running step 6 unless step 3 works, I put "marker 1" on step 3. When execution reaches step 6, it checks marker 1, and step 6 only runs if step 3 did not fail. In my logs, if step 3 passes, you get the pass for that and the status of step 6 when it ran. If step 3 fails, you get the fail for that, plus a notification that step 6 was not run because the condition(s) required to run it were not met. That's not a fail; it's a "not run".
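My framework isn't public, but the marker idea above can be sketched in a few lines of Python. Everything here (the step dict shape, the `run_if` key, the status strings) is my own invention for illustration, not the framework's actual API:

```python
# Hypothetical sketch of marker-based conditional test steps.
# All names and data shapes are illustrative only.

def run_steps(steps):
    """Run test steps in order, honouring marker-based run conditions.

    Each step is a dict with:
      name    -- step label
      action  -- callable returning True (pass) or False (fail)
      marker  -- optional marker id to record this step's status under
      run_if  -- optional list of (marker_id, required_status) pairs;
                 the step runs only if every referenced marker matches.
    """
    marker_status = {}  # marker id -> "pass" / "fail"
    log = []
    for step in steps:
        conditions = step.get("run_if", [])
        if all(marker_status.get(m) == want for m, want in conditions):
            status = "pass" if step["action"]() else "fail"
        else:
            # Condition not met: the step is skipped, not failed.
            status = "not run"
        if step.get("marker") is not None:
            marker_status[step["marker"]] = status
        log.append((step["name"], status))
    return log

# Example matching the text: step 6 only runs if step 3 (marker 1) passed.
log = run_steps([
    {"name": "step 3", "action": lambda: False, "marker": 1},
    {"name": "step 6", "action": lambda: True, "run_if": [(1, "pass")]},
])
# Step 3 fails here, so step 6 is logged as "not run".
print(log)
```

Because "not run" is a distinct status from "fail", the log can report skipped steps separately, exactly as described above.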
It's handy as it also allows people to add in remedial/corrective steps where there are known bugs in a test pack.
When this happens, I put a marker against the failing step (which will also contain a link to the bug in our bug tracking). It then checks if the step is failing in the lines just after it. If it is (still failing) it will run the steps to get the test back on track. The beauty is though, when the bug is fixed, the step which was failing starts to pass, and the corrective steps are no longer run.
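The "self-retiring workaround" pattern is the same marker trick in miniature. Here's a hypothetical Python sketch of just that pattern (the function name, the `bug_link` parameter and the log format are all invented for illustration):

```python
# Hypothetical sketch of a self-retiring corrective step.
# The workaround runs only while the known-buggy step is failing; once the
# bug is fixed and the step passes, the workaround is skipped automatically.

def run_with_workaround(buggy_step, workaround, bug_link):
    """Run a known-buggy step; apply the workaround only if it fails."""
    log = []
    passed = buggy_step()  # the step with a known bug against it
    log.append(("buggy step", "pass" if passed else f"fail ({bug_link})"))
    if not passed:
        workaround()  # corrective steps to get the test back on track
        log.append(("workaround", "run"))
    else:
        # Bug fixed: the corrective steps retire themselves.
        log.append(("workaround", "not run"))
    return log
```

The bug-tracker link travels with the failing step's log entry, so anyone reading the results knows why the extra steps exist.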
That gives you some simple steps you can add in that:
a) keep your tests running despite known bugs (depending on the bug, of course; in some cases you simply need to abandon the test, it depends what's going wrong), and
b) let tests "auto-fix" themselves when the bugs are fixed.
My reporting also flags any steps that rely on checks against prior steps, and reports any that were not run. They're not failures, but you at least know there are (possibly) redundant test steps in there. It usually does no harm to leave them in, though: if the bug comes back, you'll still get a failure for it, and your corrective actions will kick in automatically.
Works well. :smileyhappy: