Forum Discussion

NisHera
Valued Contributor
9 years ago

managing test flow

TestComplete is based on the basic assumption that there is a static set of tests, say 100 tests, that you run on each and every release to get results. This is true to a certain extent. In some releases, some tests may not be relevant (e.g. 25 of them), but it's still OK to run all the tests, especially for regression testing.

 

But at times, depending on the results of Test A, either Test X or Test Y is relevant. You can simply code this with an if condition (or a switch where there are multiple paths). As your test framework adds more and more test scenarios, this gets complex, and the test flow lives inside the scripts. E.g. depending on Test X's results you have to run Test P or Q, then come back to Test Y depending on Test Q's result... and so on.
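
For illustration, here's a minimal sketch (hypothetical stub tests, not real code from my suite) of the kind of nested flow I mean:

```python
# Stub tests (hypothetical): each returns True on pass, False on fail.
def test_A(): return True
def test_X(): return False
def test_P(): return True
def test_Q(): return True
def test_Y(): return True

def run_suite():
    if test_A():              # Test A decides whether X or Y is relevant
        if test_X():
            test_P()          # X passed -> run P
        elif test_Q():
            test_Y()          # X failed -> run Q, back to Y if Q passed
    else:
        test_Y()
    # Every new scenario adds another nested branch; the flow logic is
    # buried in the script, so changing it means re-reading the code.

run_suite()
```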

 

To me, this closely resembles data residing in code instead of in DDT. To change the flow, you have to re-read the code and understand it.

Does QAComplete have a solution for this? Or would using QAComplete for this purpose be overkill?

5 Replies

  • My framework does this. I find it very useful.

     

    My framework is like an expanded and more flexible version of DDT.

     

    It allows me to switch parts of the test(s) on and off easily. It also allows me to put a "marker" against any step in the test. I can then refer to that "marker" further down the test. When a later step refers back to it, you can tell it to take action on whether the marked step passed or failed (it allows for both conditions), and the step the check is on will only be run if the marker(s) match the condition. You can have multiple marker checks for a single test step (e.g. only run step 10 if steps 3, 6 and 9 all fail ... or run step 8 if step 2 passes but step 3 fails ... etc etc ... you can mix and match). Obviously, it can only refer to steps in the past, not future ones, as they haven't been run yet!

     

    So - if there is no point running step 6 unless step 3 works, I put "marker 1" on step 3. When it gets to step 6, it checks step 3, and step 6 will only run if step 3 did not fail. In my logs, if step 3 passes, you get the pass in the log for that, plus the status of step 6 when it ran. If step 3 fails, you get the fail for that and a notification that step 6 was not run as the required condition(s) to run it were not met. It's not a fail. It's a "not run".
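
    To give a rough idea in Python (the names and step numbers are illustrative - this isn't my actual framework code), the marker mechanism boils down to something like:

    ```python
    # Hypothetical application actions standing in for real test steps.
    def do_login(): return True
    def do_checkout(): return True

    markers = {}  # marker name -> True (passed) / False (failed)

    def run_step(number, action, marker=None, run_if=None):
        # run_if is a dict of {marker: required_result}; skip if unmet.
        if run_if and any(markers.get(m) != want for m, want in run_if.items()):
            print("Step %d: NOT RUN (marker conditions not met)" % number)
            return
        passed = action()
        print("Step %d: %s" % (number, "PASS" if passed else "FAIL"))
        if marker:
            markers[marker] = passed

    # Step 3 sets "marker 1"; step 6 only runs if step 3 did not fail.
    run_step(3, do_login, marker="marker 1")
    run_step(6, do_checkout, run_if={"marker 1": True})
    ```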

     

    It's handy as it also allows people to add in remedial/corrective steps where there are known bugs in a test pack.

     

    When this happens, I put a marker against the failing step (which will also contain a link to the bug in our bug tracker). The steps just after it then check whether that step is (still) failing. If it is, they run the steps needed to get the test back on track. The beauty is, though, that when the bug is fixed, the step which was failing starts to pass, and the corrective steps are no longer run.
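
    Continuing the sketch above (reusing run_step and markers from it - the bug ID and step numbers here are made up), the corrective steps look roughly like:

    ```python
    def open_report(): return False          # fails while the bug is open
    def dismiss_error_dialog(): return True  # hypothetical recovery actions
    def reopen_report(): return True

    # Step 12 hits the known bug; the marker doubles as the tracker link.
    run_step(12, open_report, marker="BUG-1234")
    # Corrective steps: only run while step 12 (BUG-1234) is still failing.
    run_step(13, dismiss_error_dialog, run_if={"BUG-1234": False})
    run_step(14, reopen_report, run_if={"BUG-1234": False})
    # Once the bug is fixed, open_report starts passing and steps 13-14
    # are skipped automatically: "not run", not failed.
    ```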

     

    This gives you some simple steps you can add in, so that:

     

    a) your tests keep running despite known bugs (depending on the bug, of course - in some cases you simply need to abandon the test; it depends what's going wrong)

    b) tests "auto-fix" themselves when bugs are fixed.

     

    My reporting also flags any steps which rely on checks against prior steps, and reports any that were not run. They're not failures, but you at least know there are (possibly) redundant test steps there. It usually doesn't do any harm to leave them there, though. If the bug comes back, you'll still get a failure for it, and your corrective actions will kick in automatically.

     

    Works well. :smileyhappy:

  • You can manage this type of organization of tests within TestComplete's Test Items Editor. This article details the flow: https://support.smartbear.com/viewarticle/74716

     

    Essentially, you can set tests to be dependent on the success of parent test items. Once the test set is set up, QAComplete can call the test set and execute it on an endpoint.

     

    - Gregg -

    • NisHera
      Valued Contributor

       

      Thanks Colin, for your feedback. So essentially there is no tool support; you have developed that from scratch. I gather you may be a sole or lead tester. How would others, especially a newbie, find working with your framework? Is it fairly obvious for them to understand, or is it only an expert's job to mend the framework? I mean, how do you manage the visibility of the framework?

       

      Gregg, I'm wondering how you could dynamically arrange tests with the Test Items Editor. Take a scenario like: "depending on the results of Test A, Test X or Y is relevant, and say in this instance it's X; depending on Test X's results you have to run Test P or Q, then come back to Test Y depending on Test Q's result..."

      Could you please show me how... without setting up all the permutations of test scenarios?

       

      • Colin_McCrae
        Community Hero

        Working with the framework, as in actually using it, is fine.

         

        It's pretty much bullet-proof and idiot-proof. It has to be. It's all keyword and data driven by user-populated input on Excel sheets ... and it's no use if it crashes halfway through a run (which can be several hours and run unattended overnight or in an empty test lab).

         

        I wrote it myself, from scratch, as script extensions. These are stored in their own project in our version control system so they can be applied to new installs of TestExecute or whatever. As you say, the framework itself is fairly complex (although not hugely so). I've developed several frameworks in previous jobs, so these days I have a pretty good idea of what I want from one, how I want it to work, what I want it to do, etc etc.

         

        Anyone can update it of course, but it tends to be me as it's "my baby" so to speak.

         

        I always write my own because, at some point, I always hit a point with built-in flows and controls where I can't quite get them to do what I want. The DDT offering in TestComplete is a perfect example. It's good, but not quite as flexible as my own Excel handler. Plus, if I get asked for a specific modification to the way things are handled, I can easily modify and update things myself.
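
        As a rough illustration only (a generic sketch using openpyxl with an assumed three-column sheet layout - not my actual Excel handler), a keyword-driven loop fed from a sheet looks something like:

        ```python
        from openpyxl import load_workbook  # third-party: pip install openpyxl

        # Hypothetical keyword implementations; a real framework would
        # drive the application under test here instead of printing.
        KEYWORDS = {
            "click":  lambda target, data: print("click", target),
            "type":   lambda target, data: print("type", repr(data), "into", target),
            "verify": lambda target, data: print("verify", target, "equals", data),
        }

        def run_sheet(path):
            sheet = load_workbook(path).active
            # Row 1 is assumed to be a header: keyword | target | data.
            for keyword, target, data in sheet.iter_rows(min_row=2, max_col=3,
                                                         values_only=True):
                if keyword:  # skip blank rows
                    KEYWORDS[keyword.lower()](target, data)

        run_sheet("tests.xlsx")  # placeholder file name
        ```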

         

        The framework itself seldom needs maintenance. (It's completely project/application agnostic - as a framework should be!) I tend to do the additions and updates, but as I say, any half-decent dev would also be able to.

         

        We only have one dev license for TC, so I do tend to develop all the actual test functions (which are project suites completely separate from the framework). But it's the manual testers who then use these (and the framework) to actually build and run tests.

         

        My framework handles:

         

        • Test control and flow (including the above mentioned markers and dependencies)
        • Logging
        • Results (dropped to a shared folder and also e-mailed to recipients of your choosing) - both for individual suites and full summary
        • DB backups & restores (at user request)
        • Service and process checks, stops and starts (at user request)
        • Test updates to TFS via its API (so automated runs can update test suites in TFS)
        • All error handling - although recovery is usually handed back to the project suite as this tends to be application specific