Forum Discussion

stebi
Occasional Contributor
13 years ago

Enable/Disable Test Items via Code

Hi,



I don't want all tests of my project to be run every time. I have structured the tests like this (in the "Test Items" panel):



EnableTests

[-] SmokeTests

  *  Test 1

  *  ...

  *  Test N

[-] DetailedTests

  *  Test 1

  *  ...

  *  Test N



The first test item, "EnableTests", should enable the SmokeTests group and, only for nightly test runs, the DetailedTests group as well.



I'm able to scan through all test items and see if they are "enabled", but trying to set the property results in an error.



For example (DelphiScript):



Project.TestItems[1].Enabled := True;



Results in:

---------------------------
Wrong number of parameters or invalid property assignment.

Enabled
---------------------------

How do I enable/disable a specific test item via code? And is there a better place to enable tests than doing it from a test itself?



If there is no way to do it like that, maybe I can prevent a test from starting via the "Test Engine Events"->OnStartTest event?



Thanks in advance for any help.

14 Replies

  • tristaanogre
    Esteemed Contributor
    Test Items cannot be enabled/disabled via code. The reason you get the error when trying to set the "Enabled" property is that it is read-only.


  • stebi
    Occasional Contributor
    Although the error message is not as clear as it could be, I already suspected it's a read-only property. Is there any other way to make the collection of tests to be executed a bit more dynamic?


  • tristaanogre
    Esteemed Contributor
    This is why I use a data-driven structure to drive my tests. I have a CSV file that gives the list of tests to be executed, and I use a DDT.CSVDriver object to iterate through that list and execute them.



    In doing so, I'm not actually using the "Test Items" part of the project; I have one test item that runs the CSVDriver iteration and then use code to read from the driver what needs to be run and execute those things (scripts or keyword tests) accordingly.
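
    In outline, the loop looks something like this (a simplified sketch, not the actual framework code; the file name and column names are just placeholders):

    function RunTestsFromCsv()
    {
      // Placeholder path and columns -- adjust to your own CSV layout
      var driver = DDT.CSVDriver("C:\\Tests\\TestList.csv");
      while (!driver.EOF())
      {
        var testName = driver.Value("TestName");    // e.g. "Unit1.Test1"
        var shouldRun = driver.Value("Run");        // e.g. "Y" or "N"
        if (aqString.ToUpper(shouldRun) == "Y")
        {
          Log.AppendFolder(testName);               // group the log entries per test
          Runner.CallMethod(testName);              // runs "UnitName.RoutineName"
          Log.PopLogFolder();
        }
        driver.Next();
      }
      DDT.CloseDriver(driver.Name);
    }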
  • stebi
    Occasional Contributor
    How do you actually run a test referenced by a line in the CSV file?



    I found a similar solution: I don't iterate over a CSV file but over the test structure. I disabled all top-level items except the one which iterates over all the others. I can now apply as much logic as I want for each test item and then decide whether to run it or not. To run a test I call "eval(scriptName)" or "Runner.CallMethod(scriptName)".

    Using Log.AppendFolder I can structure the log so I can distinguish which test is running. It's nearly what I want, but the execution summary is not as clean as I'd like. TC shows a nice summary of the test run, but now that all tests are merged into one it's nearly useless. It also slows down Internet Explorer when viewing the exported (mht) log file, because one entry gets huge. I would really like to create "Log Items" myself to structure the log. Is there any possibility to do that?
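
    For reference, the dispatcher is roughly shaped like this (a simplified JScript sketch, not my real code; the "Unit_" naming convention and the nightly check are just placeholders):

    function DispatchTests()
    {
      var items = Project.TestItems;
      for (var i = 0; i < items.ItemCount; i++)
      {
        var item = items.TestItem(i);
        // Skip the dispatcher itself; the other top-level items are disabled
        // in the Test Items panel, so they only ever run from here.
        if (item.Name == "DispatchTests")
          continue;
        if (ShouldRun(item.Name))                            // any per-item logic
        {
          Log.AppendFolder(item.Name);                       // structure the merged log
          Runner.CallMethod("Unit_" + item.Name + ".Main");  // assumed naming scheme
          Log.PopLogFolder();
        }
      }
    }

    function ShouldRun(name)
    {
      // Example condition: run DetailedTests only during nightly runs (after 22:00)
      if (name == "DetailedTests")
        return aqDateTime.GetHours(aqDateTime.Time()) >= 22;
      return true;
    }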
  • stebi
    Occasional Contributor
    Is the code for your framework available somewhere? Maybe I'm blind, but I didn't find any download link. I'm especially interested in how you actually execute a test and what the log looks like.


  • tristaanogre
    Esteemed Contributor
    I can easily supply it.  Drop me an e-mail and I'll send you the relevant files.  My e-mail should be linked from the article.
  • stebi
    Occasional Contributor
    Thanks. Unfortunately, I didn't find any magic tricks in your code ;-) You execute the tests via Runner.CallMethod and organize your log via Log.AppendFolder, with the same disadvantages my own "framework" has.

    I think TC needs improvement here. 


  • tristaanogre
    Esteemed Contributor
    The "magic" is that the Runner.CallMethod doesn't actually call the specific code methods for executing tests but simply calls an "Execute" method that is named as <AUTName>TestStep.Execute.  That method contains a case statement that parses keywords for the execution.  There is no "code" referenced in the CSV files, simply a set of keywords that feed into the loop for creating the array.  Code routine calls are constructed on the fly.



    If the first character of a line in the test-list CSV file is a semicolon, the framework skips that entire test.



    So, where in your framework you have (at least as I understand it) a CSV file containing specific code routines, I have two CSV files. The first is a list of test cases with simply the test case ID and a descriptor. The second is a list of steps organized by test case ID, containing a step counter, the AUT name against which the step will be executed, a keyword designator for the step, and a delimited string for any specific detail data needed while executing that step.
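
    Purely as a made-up illustration of that layout (not the real files), the pair could look like:

    TestCases.csv
      TestCaseID,Description
      TC001,Log in and verify the home screen
      TC002,Create and submit an order

    TestSteps.csv
      TestCaseID,Step,AUT,Keyword,DetailData
      TC001,1,MyApp,LOGIN,user1|secret
      TC001,2,MyApp,VERIFY_FIELD,HomeTitle|Welcome
      TC002,1,MyApp,LOGIN,user1|secret
      TC002,2,MyApp,OPEN_ORDER,42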



    As for large logs, the framework is designed so that you can drop in different pairings of the two CSV files for different sets of tests. Rather than having to execute ALL your tests in one project run, you simply point the project at one set of CSVs, copy in another set, run that, and repeat. You'll get a lot of smaller, more easily handled logs that are specific to certain desired testing outcomes.



    (shrug) YMMV but I find this an efficient way of constructing tests where, if I need to add a new test case, at minimum I simply need to add more records to the CSV.  If the new test case involves new features of the AUT, then I need to add a bit more code but, again, modularized based upon AUT and controlled by the case statements.
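
    To make the dispatch concrete, the "Execute" routine is shaped roughly like this (sketched in JScript; the keywords, the Log.Message stand-ins and the CSV columns are made-up placeholders, not the framework's real names):

    // In a script unit assumed to be named "MyAppTestStep" (<AUTName>TestStep for an
    // application called "MyApp").  The driver would invoke it with something like:
    //   Runner.CallMethod(autName + "TestStep.Execute", keyword, detailData);
    function Execute(keyword, detailData)
    {
      var fields = detailData.split("|");   // delimited step detail data from the CSV
      switch (keyword)
      {
        case "LOGIN":
          // real framework: call the login helper; Log.Message stands in here
          Log.Message("LOGIN as " + fields[0]);
          break;
        case "OPEN_ORDER":
          Log.Message("OPEN_ORDER " + fields[0]);
          break;
        case "VERIFY_FIELD":
          Log.Message("VERIFY_FIELD " + fields[0] + " = " + fields[1]);
          break;
        default:
          Log.Warning("Unknown keyword: " + keyword);
      }
    }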
    • sdahiya
      Occasional Contributor

      Hi,

      Is there any progress on, or any approach to, enabling/disabling test items via code?

      I am just wondering: this topic has been open since 2012 and TestComplete still has no answer for it?

  • christiank
    Occasional Contributor

    Hi,

    how about using

    Runner["Stop"](true)

    to work around this.

    Instead of activating/deactivating test items according to certain conditions, have each test item check its own run condition. If the condition does not match, ask the Runner to stop execution of this test item but continue with the next one.

    function runTestItem2 ()
    {
      // Runner.Stop(true) stops only the current test item (and its child test
      // items) and lets the run continue with the next test item.
      if (Sys["UserName"] == "LazyTester")
      {
        Runner["Stop"](true);
      }
      else
      {
        doTheWork();
      }
    }

    Regards,

    Christian.

     

    *Edit: Using this method will also skip over all ChildTestItems of the TestItem in which you execute this command.

    • jsc
      Regular Contributor

      We ran into the same problems some time ago.

      Our test suite (~12 projects + setup + teardown) grew too big (~8 h), so we wanted to split it into 4 parts and run each part on a different machine. We found no way to do this in TestComplete itself, so we did it outside of TestComplete.

      After checking out the tests (git), we manipulate the *.pjs file to enable/disable whole projects for the run.

      The same can be done with the *.mds file of each project to enable/disable the test items in that project.

      Done in VBScript.
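
      Sketched here in JScript for illustration (our real script is VBScript, and the node and attribute names below are only guesses -- open your own *.mds file first to see its actual structure):

      // Standalone Windows Script Host script, run before TestComplete is started.
      var doc = new ActiveXObject("Msxml2.DOMDocument.6.0");
      doc.async = false;
      doc.load("C:\\Tests\\MyProject\\MyProject.mds");

      // Hypothetical: find the element whose name attribute matches the test item
      // and flip an enabled attribute on it.
      var node = doc.selectSingleNode("//*[@name='DetailedTests']");
      if (node != null)
        node.setAttribute("enabled", "False");

      doc.save("C:\\Tests\\MyProject\\MyProject.mds");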

      • sdahiya
        Occasional Contributor

        Hi,

        Thanks for all the solutions; I would like to try those options (workarounds).

        But I still expect TestComplete itself to provide such a possibility, so that at run time it is easy to enable/disable test items and all their child elements (if required).