Forum Discussion

Damon_Wang
Occasional Contributor
12 years ago

Run multiple tests via cmd

Hi,



I want to run multiple tests via cmd. I know how to run one test, like below,

TestComplete.exe xxx.pjs /r /p:Demo1 /t:"Script|Unit1|Test1"

But when I tried to add Test2 to the command, the run failed and a cmd error popped up. Could anyone give me some suggestions?

TestComplete.exe xxx.pjs /r /p:Demo1 /t:"Script|Unit1|Test1", /t:"Script|Unit1|Test2"
  • simon_glet
    Regular Contributor
    Hi Damon



    The XML test definition file looks like this:



    <?xml version="1.0" encoding="utf-8"?>

    <Tests>

     <ProjetFile>fullPathToProjectSuiteFile.pjs</ProjetFile>  

     <Test>

      <Name>Name</Name>

      <ProjectName>TestCompleteProjectName</ProjectName>  

      <UnitName>TestCompleteScriptNameWithoutTheExtension</UnitName>

      <RoutineName>functionName</RoutineName>

      <Timeout>

       <Unit>Minute</Unit>

       <Value>60</Value>

      </Timeout>   

     </Test> 

     <Test>

      <Name>Name2</Name>  

      <ProjectName>TestCompleteProjectName</ProjectName>

      <UnitName>TestCompleteScriptNameWithoutTheExtension</UnitName>

      <RoutineName>AnotherFunctionName</RoutineName>

      <Args> <!-- Args are optional -->

       <Arg name="Username" position="1">    

        <Value>demo</Value>     

       </Arg>

       <Arg name="Password" position="2">

        <Value>demo</Value>

       </Arg>

       </Args>  

     </Test>

    </Tests>





    The Powershell script is:



    <# ****************************************************************************

    This script is intended to run a suite of TestComplete or TestExecute tests.

    At this time the operator should check that there are enough licenses before

    starting the script. The testRunner will run all tests before closing.


    Configuration

    There are two optional environment variables:

    - $Env:LOGPATH -> To set the log path to something else than Z:\

    - $Env:DEBUGMODE -> To print logs and not close TC at the end of a test.



    Has to

    - run on Powershell 2.0

    - x86 and X64

    - Takes the following inputs

     - Release Version

     - Product acronym

     - Product Version

     - TestComplete or TestExecute (expected value is TC or TE)

     - Configuration file full path name

      

     eg: 3.0.0.xx ProdName 3.0.0.yy TC fullPath\AutomatedTests_Definition.xml



     Test:3.0.0.x ProdName 3.0.0.y TC fullPath\Tests_Definition.xml

    **************************************************************************** #>

    Set-PSDebug -Strict

    cls

    $Error.Clear()



    function displayArgsIfMissing($scriptArgs)

    {

     if ($scriptArgs.Count -ne 5)

     {

      Write-Host "The arguments of this script are:"

      Write-Host "1 - Build Version like 3.0.0.xx"

      Write-Host "2 - Product acronym can be Blah, Blah1 or Blah2"

      Write-Host "3 - Product version like 3.0.0.yy"

      Write-Host "4 - Test runner can be TC (TestComplete) or TE (TestExecute)"

      Write-Host "5 - The full path to the test definition xml file."

      Write-Host "Command line example for MatchPoint+ in a powershell session:"

      Write-Host "TestRunner.ps1 3.0.0.xx ProdName 3.0.0.yy TC fullPath\AutomatedTests_Definition.xml"

      

      return $true

     }

     

     return $false

    }



    <# ============================================================================

    CONSTANTS



    ============================================================================ #>

    Set-Variable LOG_NORMAL       -Value "NORMAL"                    -Description "Log normal level"        -Option ReadOnly -Force

    Set-Variable LOG_WARNING       -Value "WARNING"                    -Description "Log warning level"        -Option ReadOnly -Force

    Set-Variable LOG_ERROR        -Value "ERROR"                     -Description "Log Error level"        -Option ReadOnly -Force



    Set-Variable PROCESS_OP_TIMEOUT     -Value 20                      -Description "Value in seconds"       -Option ReadOnly -Force



    Set-Variable MAX_TRY         -Value 10                     -Description ""           -Option ReadOnly -Force

    Set-Variable DELAY_TRY        -Value 1                      -Description "Delay in seconds between tries"      -Option ReadOnly -Force



    Set-Variable TEST_COMPLETE_APPLICATION_NAME   -Value "TestComplete"                  -Description "TestComplete Process Name"      -Option ReadOnly -Force

    Set-Variable TEST_COMPLETE_COM_NAME     -Value "TestComplete.TestCompleteApplication"            -Description "TestComplete COM Name"       -Option ReadOnly -Force



    Set-Variable TEST_EXECUTE_APPLICATION_NAME   -Value "TestExecute"                  -Description "TestExecute Process Name"      -Option ReadOnly -Force

    Set-Variable TEST_EXECUTE_COM_NAME     -Value "TestExecute.TestExecuteApplication.9"            -Description "Test Execute COM Name"       -Option ReadOnly -Force



    Set-Variable TEST_RUNNER_APPLICATION_NAMES   -Value $TEST_COMPLETE_APPLICATION_NAME,$TEST_EXECUTE_APPLICATION_NAME        -Description "Table of application Names"      -Option ReadOnly -Force

    Set-Variable TEST_RUNNER_NAME_HASH_TABLE  -Value @{TC = $TEST_COMPLETE_APPLICATION_NAME; TE = $TEST_EXECUTE_APPLICATION_NAME}   -Description "HashTable of application Names"     -Option ReadOnly -Force

    Set-Variable TEST_RUNNER_COM_NAME_HASH_TABLE -Value @{TC = $TEST_COMPLETE_COM_NAME; TE = $TEST_EXECUTE_COM_NAME}        -Description "HashTable of application COM Names"   -Option ReadOnly -Force



    Set-Variable LOG_FILE_NAME      -Value "TestRunnerExecution.log"               -Description "Execution log file name"      -Option ReadOnly -Force

    Set-Variable DISPLAY_INTERVAL     -Value 10000                    -Description "Display interval while running test"  -Option ReadOnly -Force



    Set-Variable HOUR           -Value "Hour"                     -Description "Timeout in Hour unit value"      -Option ReadOnly -Force

    Set-Variable MINUTE           -Value "Minute"                    -Description "Timeout in Minute unit value"     -Option ReadOnly -Force

    Set-Variable SECOND           -Value "Second"                     -Description "Timeout in Second unit value"     -Option ReadOnly -Force

    Set-Variable MILLI_SECOND         -Value "MilliSecond"                  -Description "Timeout in MilliSecond unit value"    -Option ReadOnly -Force



    if (displayArgsIfMissing $args)

    {

     Write-Host "Script will exit now"

     return

    }



    Set-Variable RELEASE_VERSION     -Value $args[0]                    -Description "Release Version Number"       -Option ReadOnly -Force

    Set-Variable PRODUCT_ACRONYM     -Value $args[1]                    -Description "MPX or BRX or BUX"        -Option ReadOnly -Force

    Set-Variable PRODUCT_VERSION     -Value $args[2]                    -Description "Product Version Number"       -Option ReadOnly -Force

    Set-Variable TEST_RUNNER      -Value $args[3]                    -Description "Test Runner TC or TE"      -Option ReadOnly -Force

    Set-Variable TEST_DATA_FILE_NAME       -Value $args[4]                    -Description "Full Path data configuration file name"   -Option ReadOnly -Force



    <# ============================================================================

    Variables



    ============================================================================ #>

    $dateFormat = "dd-MM-yy HH:mm:ss "

    $OS = ((Get-WmiObject Win32_OperatingSystem).Caption) -like "*Windows 7*" | % {if ( $_ -eq $true) {"Win7"} else {"XP"}}



    Set-Variable PARTIAL_PATH_TO_LOG_FILE -Value "$PRODUCT_ACRONYM\$OS\$RELEASE_VERSION" -Description "Partial path to log file" -Option ReadOnly -Force



    if ($Env:LOGPATH -ne $null)

    {

     $destinationFolder = "$Env:LOGPATH\$PARTIAL_PATH_TO_LOG_FILE"

    } else {

     $destinationFolder = "Z:\TestResults\$PARTIAL_PATH_TO_LOG_FILE"

    }



    if (-not (Test-Path $destinationFolder))

    {

     New-Item $destinationFolder -ItemType Directory

    }



    $logFileName = "$destinationFolder\$LOG_FILE_NAME"



    $try = 1



    $testRunnerApplication = $null

    $testRunnerInterface = $null



    <# ============================================================================

    Functions



    ============================================================================ #>



    <# ----------------------------------------------------------------------------

    log Functions

    The function logTypeMessage is not to be invoked directly.



    ---------------------------------------------------------------------------- #>



    <#

    Generic log function.

    It is only invoked by log, logWarning and logError



    #>

    function logTypeMessage($type, $message)

    {

     $fullMessage = $(Get-Date -Format $dateFormat) + ": "
     $backgroundColor = "Black"
     switch($type)
     {
      {$_ -eq $LOG_NORMAL} {
       $fullMessage += $message
       $backgroundColor = "Green"
      }
      {$_ -eq $LOG_WARNING} {
       $fullMessage += "WARNING $message"
       $backgroundColor = "DarkYellow"
      }
      {$_ -eq $LOG_ERROR} {
       $fullMessage += "ERROR $message"
       $backgroundColor = "Red"
      }
      Default {
       Throw "Logging function logs three message types: $LOG_NORMAL, $LOG_WARNING and $LOG_ERROR"
      }
     }

     if ($Env:DEBUGMODE -eq 1)
     {
      Write-Host $fullMessage -BackgroundColor $backgroundColor
     }

     

     Add-Content $logFileName -Value $fullMessage 

    }



    <#

    Simple log function.

    Invokes logTypeMessage



    #>

    function log($message)

    {

     logTypeMessage $LOG_NORMAL $message



    }



    <#

    Warning log function

    Invokes logTypeMessage



    #>

    function logWarning($message)

    {

     logTypeMessage $LOG_WARNING $message

    }



    <#

    Error log function

    Invokes logTypeMessage



    #>

    function logError($message)

    {

     logTypeMessage $LOG_ERROR $message

    }



    <# ----------------------------------------------------------------------------

    Process functions



    ---------------------------------------------------------------------------- #>

    <#

     Returns all process objects that are test runners.

     Should only return one.



    #>

    function getAllTestRunners

    {

     return Get-Process | Where-Object {$_.ProcessName -eq $TEST_COMPLETE_APPLICATION_NAME -or

              $_.ProcessName -eq $TEST_EXECUTE_APPLICATION_NAME}

    }



    <#

     Returns true if a test runner is running.



    #>

    function isAnyTestRunnerApplicationRunning

    {

     return ((getAllTestRunners) -ne $null)

    }



    <#

     Returns true if the test runner with name $application is running



    #>

    function isTestRunnerApplicationNameRunning($applicationName)

    {

     return ((Get-Process | Where-Object {$_.ProcessName -eq $applicationName}) -ne $null)

    }



    <#

     Stops all test runners (there can only be one) and returns true if the action was successful, false otherwise.

    #>

    function stopAllTestRunners

    {

     getAllTestRunners | ForEach {Stop-Process -Id $_.Id}

              

     getAllTestRunners | ForEach {Wait-Process -Timeout $PROCESS_OP_TIMEOUT -Id $_.Id}

     

     return -not (isAnyTestRunnerApplicationRunning)

    }



    <#

     Stops the other test runner. Returns true if no test runner remains running.



    #>

    function stopOtherTestRunner($applicationName)

    {

     foreach($testRunnerApplicationName in $TEST_RUNNER_APPLICATION_NAMES)

     {

      if ($testRunnerApplicationName -ne $applicationName)

      {

       Stop-Process -Name $testRunnerApplicationName -Force

       Wait-Process -Timeout $PROCESS_OP_TIMEOUT -Name $testRunnerApplicationName

      }

     }

     

     return -not (isAnyTestRunnerApplicationRunning)

    }



    <#

     Checks if other testRunner is already running.

     Returns true if it is the case, false otherwise.



    #>

    function isOtherTestRunnerRunning($applicationName)

    {

     foreach($testRunnerApplicationName in $TEST_RUNNER_APPLICATION_NAMES)

     {

      if ($testRunnerApplicationName -ne $applicationName)

      {

       return isTestRunnerApplicationNameRunning $testRunnerApplicationName;

      }

     }

     

     return $false;

    }



    <#

    Makes a COM connection to the specified test runner. If it is not already running, it will be started.

    Returns the COM object or raises an exception.



    WARNING: if all licenses are used, the test runner will display the no licenses available dialog

       and the script will never exit.



    #>

    function getTestRunner($applicationName, $testRunnerCOMName)

    {

     #Check TestComplete

     

     if (isOtherTestRunnerRunning $applicationName)

     {

      # Could check if a test is already running and offer options

      # 1 - Kill

      # 2 - stop the current test, export the results and close

      if (-not (stopOtherTestRunner $applicationName))

      {

       LogError "Could not stop other testRunner of $applicationName"

      }

     }

     

     if (isAnyTestRunnerApplicationRunning)

     {  

      logWarning "A testRunner is still running. Will try to connect to it."

      try {

       return [System.Runtime.InteropServices.Marshal]::GetActiveObject($testRunnerCOMName)

      } catch {

      

       logError "Could not connect to running testRunner. Will kill it and try again."

       

       if (stopAllTestRunners)

       {

        log "The testRunner has stopped"

       } else {

        throw "The testRunner failed to stop the processes:`n$(getAllTestRunners)"

       }  

      }

     }

     # if the script was started with no licenses left, this function will never return.   

     return New-Object -ComObject $testRunnerCOMName

    }



    function makeTimeoutInMilliSeconds($xmlTimeout)

    {

     Switch ($xmlTimeout.Unit)

     {

      $HOUR {return ([int]$xmlTimeout.Value * 3600 * 1000)}

      $MINUTE {return ([int]$xmlTimeout.Value * 60 * 1000)}

      $SECOND {return ([int]$xmlTimeout.Value * 1000)}

      $MILLI_SECOND {return [int]$xmlTimeout.Value}

     }

     

     return 0

    }



    <# ============================================================================

    Script Start



    ============================================================================ #>

    log "START script"

    [System.Diagnostics.Stopwatch] $allTestsStopWatch = [System.Diagnostics.Stopwatch]::StartNew()



    # The Do-While is intended to wait for an available license. It is not needed if licenses are available.

    Do {  

     log "Will get the $($TEST_RUNNER_NAME_HASH_TABLE[$TEST_RUNNER]) COM application"

     try {   

       $testRunnerApplication = getTestRunner $TEST_RUNNER_NAME_HASH_TABLE[$TEST_RUNNER] $TEST_RUNNER_COM_NAME_HASH_TABLE[$TEST_RUNNER]

      }

     catch {

       LogError "$($TEST_RUNNER_NAME_HASH_TABLE[$TEST_RUNNER]) could not start as expected with Exception:`n$($Error)"

       $Error.Clear()

       Log "Will try $($MAX_TRY - $try) more times"

       log "Will wait for $DELAY_TRY seconds"

       Start-Sleep $DELAY_TRY

      }

     finally {

      $try++    

     } 



    } while(($try -le $MAX_TRY) -and ($testRunnerApplication -eq $null))



    if ($testRunnerApplication -eq $null)

    {

     logError "The testRunner failed to start after $MAX_TRY tries"

     logError "The script cannot continue and will exit immediately."

    } else {

     log "The testRunner $($TEST_RUNNER_NAME_HASH_TABLE[$TEST_RUNNER]) started successfully"

     

     $testRunnerInterface = $testRunnerApplication.Integration  

     

     Try {

      $testData = [xml](Get-Content -Path $TEST_DATA_FILE_NAME -ErrorAction Stop)

     } Catch {

      logError "$($_.Exception.Message)"

      logError "As there is no data to define the tests to run, the script will exit now."

      log "END script"

      log "==============================================================================="

      return

     }

     

     $testProjectFileName = $testData.Tests.ProjetFile

     Log "Opening Project file $testProjectFileName"

     if ($testRunnerInterface.OpenProjectSuite($testProjectFileName))

     { 

      $functionArgs = $null  

      foreach($test in $testData.Tests.Test)

      {   

       Log "START test $($test.Name)"

       

       # Timeout

       $timeOut = $test.Timeout

       $timeOutCheck = $false

       $timeOutInMilleseconds = [System.Int64]::MaxValue

       if ($timeOut -ne $null)

       {

        $timeOutInMilleseconds = makeTimeoutInMilliSeconds($timeout)

        Log "Timeout is $timeOutInMilleseconds milli-Seconds"

       }

       

       $allArgs = $test.Args

       if ($allArgs -ne $null)

       {

        [array]$functionArgs = $null

        foreach($testArg in $allArgs.Arg)

        {     

         $functionArgs += $testArg.Value     

        }

        Log "Will start test with the following arguments:`n`r`t`t`tProjectName=$($test.ProjectName)`n`r`t`t`tUnitName:$($test.UnitName)`n`r`t`t`tRoutineName:$($test.RoutineName)`n`r`t`t`tRoutineArgs:$($functionArgs)"

        $testRunnerInterface.RunRoutineEx($test.ProjectName, $test.UnitName, $test.RoutineName, $functionArgs);

       } else {

        Log "Will start test with the following arguments:`n`r`t`t`tProjectName=$($test.ProjectName)`n`r`t`t`tUnitName:$($test.UnitName)`n`r`t`t`tRoutineName:$($test.RoutineName)"

        $testRunnerInterface.RunRoutine($test.ProjectName, $test.UnitName, $test.RoutineName)

       }

       # Will display something every $DISPLAY_INTERVAL

       # The test max runtime could be implemented here.

       [System.Diagnostics.Stopwatch] $displayStopWatch = [System.Diagnostics.Stopwatch]::StartNew()

       while($testRunnerInterface.IsRunning())

       {

        $elapsedTimeInMilliseconds = $displayStopWatch.ElapsedMilliseconds

        if (-not ($elapsedTimeInMilliseconds % $DISPLAY_INTERVAL))

        {

         Log "Test $($test.Name) has been running for $($elapsedTimeInMilliseconds / 1000) seconds"

        }

        

        # By killing the product the TestRunner will stop in an orderly fashion

        # Killing the TestRunner would not allow the log file to be written.

        if ($timeOutInMilleseconds -lt $elapsedTimeInMilliseconds)

        {

         Log "Timeout of $timeOutInMilleseconds was reached at $elapsedTimeInMilliseconds. Test will be stopped."

         Get-Process "yourApplicationName" -ErrorAction SilentlyContinue | Stop-Process -Force

         $testRunnerInterface.Halt("The test timed out at configured $timeOutInMilleseconds milli-seconds")     

        }

        

        Start-Sleep -Seconds 2

       }

       # The test completed.

       $displayStopWatch.Stop()

       $testRunnerInterface.ExportResults("$destinationFolder\$($PRODUCT_ACRONYM)_$($PRODUCT_VERSION)_$($test.Name)_$($OS).mht", $true)   

       

       $testResultDescription = $testRunnerInterface.GetLastResultDescription()   

       if ($testResultDescription.IsTestCompleted)

       {

        switch ($testResultDescription.Status)

        {

         0 {Log "Test $($test.Name) was successful"}

         1 {LogWarning "Test $($test.Name) Completed with $($testResultDescription.WarningCount) warning(s) "}

         2 {LogError "Test $($test.Name) FAILED with $($testResultDescription.ErrorCount) error(s)"}

        

        }    

       } else {

        LogError "Test $($test.Name) Did not complete"

       }

       

       Log "Test $($test.Name) ran for $($($([System.DateTime]::FromOADate($testResultDescription.EndTime)) - $([System.DateTime]::FromOADate($testResultDescription.StartTime))).TotalSeconds) seconds"

       

       Log "END test $($test.Name)"

      }

      

     } else {

      logError "Project $testProjectFileName could not be opened"

     }

      

     if ($Env:DEBUGMODE -ne 1)

     {

      $testRunnerApplication.Quit();

     }

     [System.Runtime.Interopservices.Marshal]::ReleaseComObject($testRunnerInterface)   

    }



    $allTestsStopWatch.Stop()

    Log "Duration to run all tests is $(($allTestsStopWatch.Elapsed).toString()) (hh:mm:ss.ms)"



    log "END script"

    log "==============================================================================="



    Sincerely


  • karkadil
    Valued Contributor
    It is impossible to do this by standard TestComplete means. The simplest solution is to use several launches (using the /exit parameter when calling TestComplete). This way you can run as many functions as you wish.



    However, you might implement a workaround.



    The idea of it is to pass the list of tests via additional command line parameter, and then run the tests one by one from a single function.



    For example, your single entry-point function could look like this




    function testall()
    {
      var tests = ParamStr(ParamCount());
      runListOfTests(tests);
    }



    Now the function which runs the tests will look like this


    function runListOfTests(lst)
    {
      var tests = lst.split(";");

      for(var i = 0; i < tests.length; i++)
      {
        eval(tests[i] + "();");
      }
    }



    And finally your command line will look like this

    TestComplete.exe xxx.pjs /r /p:Demo1 /t:"Script|Unit1|testall" "Test1;Test2"



    The last parameter here is a list of tests separated by semicolons.



    The example is very simple and has several limitations:

    1. It processes only functions from the same unit (but you can use Runner.CallMethod method to solve the problem)


    2. It is impossible to pass parameters to functions (this can also be solved by parsing the last parameter)

    3. The test log will contain only one test with all messages from all the functions
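
    The first limitation can be lifted with Runner.CallMethod, which accepts a "UnitName.RoutineName" string. A minimal sketch of that variant follows; the mock Runner object is only an assumption so the snippet is self-contained outside TestComplete, where the real Runner is provided by the runtime:

    ```javascript
    // Inside TestComplete, Runner is provided by the runtime; this mock
    // exists only so the sketch can run outside it, for illustration.
    if (typeof Runner === "undefined") {
      var Runner = {
        calls: [],
        CallMethod: function (name) { this.calls.push(name); }
      };
    }

    // Each list entry is "UnitName.RoutineName", so routines from any unit can be run
    function runListOfTestsAcrossUnits(lst)
    {
      var tests = lst.split(";");
      for (var i = 0; i < tests.length; i++)
      {
        Runner.CallMethod(tests[i]);
      }
    }
    ```

    The last command-line parameter would then look like "Unit1.Test1;Unit2.Test2".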
  • Damon_Wang
    Occasional Contributor
    Hi Gena,



    Thank you for your reply. It's a workaround, but it's not the answer I'm looking for. Too many limitations in it. :( Are you working at the TestComplete company? Do we have a plan to add this feature in a future release? CMD is the key to keeping automation running overnight. You can imagine how important it is. :)
  • simon_glet
    Regular Contributor
    Hi Damon,



    We also thought that cmd was not flexible enough and had a look at Working With TestComplete via COM.

    The possibilities with a Powershell script are endless! :-) Powershell is so much better than DOS, really.



    We ended up with a Powershell script that reads an XML test definition file that allows us to run any function from any script.



    Sincerely
  • Damon_Wang
    Occasional Contributor
    Hey Simon, Powershell is acceptable!!! Could you please share some examples with me?



    Thanks,

    Damon.
  • Damon_Wang
    Occasional Contributor
    Hi Simon,



    It seems there are some preparation steps before we can run test cases. Thank you, I will try it.
  • Damon_Wang
    Occasional Contributor
    Hi Tanya,



    Thanks for your solution, I think that is the best one so far! Does this approach have the same log file behavior as running test cases from the UI?

  • Hi Damon,


     


    Yep, the test execution results will be saved to the log report, so you can review them later or send them via email.