Knowledge Base Article

TestComplete and Zephyr Scale - Working Together

Hi Everyone! Today I'm bringing news about Zephyr Scale (previously known as TM4J), a Jira test management plugin that SmartBear acquired a few months ago.

 

This is exciting because Zephyr Scale differs considerably from Zephyr for Jira in a number of ways. The biggest difference is that Zephyr Scale creates its own tables in your Jira database to house all of your test case management data and analytics, rather than storing everything as issues of type "test". This means you won't experience the performance degradation you might otherwise expect from housing hundreds, if not thousands, of test cases within your Jira instance. Additionally, there are many more reports available: upwards of 70 (and it's almost "the more the merrier" season, so that's nice).

 

Now how does this relate to TestComplete at all? Well, since we don't have a native integration between the two tools as of yet (watch out in the coming months?), we have to rely on Zephyr Scale's REST API to map our automated tests to the corresponding test cases kept in Zephyr Scale's test library. I wanted to explore having our TestComplete tests mapped and updated (per execution) back to the corresponding test cases housed within Zephyr Scale by making use of event handlers and the REST API endpoints.
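
For reference, only two Zephyr Scale Server (TM4J) endpoints are used throughout this article: one to create a test run (test cycle) and one to post a test result into it. Below is a minimal sketch using Python's requests library outside of TestComplete, just to verify your credentials, base URL, and project key before wiring anything into your project. The project key "KIM" and test case key "KIM-T279" are placeholders from my setup; swap in your own.

import requests

BASE = "https://YOUR-JIRA-INSTANCE/rest/atm/1.0"  # Zephyr Scale Server/DC REST API base
AUTH = ("JIRA-username", "JIRA-password")         # same basic auth the TestComplete scripts below use

# 1) create a test run (test cycle) containing one or more test cases
run = requests.post(BASE + "/testrun", auth=AUTH,
                    json={"name": "Smoke check", "projectKey": "KIM",
                          "items": [{"testCaseKey": "KIM-T279"}]})
runKey = run.json()["key"]  # e.g. "KIM-C123"

# 2) post an execution result for one test case inside that run
result = requests.post(BASE + "/testrun/" + runKey + "/testcase/KIM-T279/testresult",
                       auth=AUTH, json={"status": "Pass", "comment": "posted via REST"})
print(run.status_code, result.status_code)  # both should be 201 on success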

 

To start, I created a set of test cases in TestComplete that mirror the test cases within Zephyr Scale:

You will notice that I have a "createTestRun" KDT test within my TestComplete Project Explorer. This test makes the initial POST request to Zephyr Scale in order to create a test run (otherwise known as a test cycle). This is done using TestComplete's aqHttp object. Within that createTestRun KDT, I'm just running a script routine (via the Run Code Snippet operation), because that's the easiest way to use the aqHttp object. The code snippet below shows the calls:

 

 

import json
import base64

def createTestRun():
    projName = "Automated_TestComplete_Run " + str(aqDateTime.Now()) #timestamped test cycle name
    address = "https://YOUR-JIRA-INSTANCE/rest/atm/1.0/testrun" #TM4J endpoint to create a test run
    username = "JIRA-username" #TM4J username
    password = "JIRA-password!" #TM4J password
    # Convert the user credentials to base64 for preemptive authentication
    credentials = base64.b64encode((username + ":" + password).encode("ascii")).decode("ascii")

    request = aqHttp.CreatePostRequest(address)
    request.SetHeader("Authorization", "Basic " + credentials)
    request.SetHeader("Content-Type", "application/json")

    #initialize empty item list
    items = []
    for i in range(Project.TestItems.ItemCount): #for all test items listed at the project level
        entry = {"testCaseKey": getTestCaseID(Project.TestItems.TestItem[i].Name)} #grab each test case key by name from id_dict
        items.append(entry) #append as a test item

    #build the request body
    requestBody = {
        "name": projName,
        "projectKey": "KIM", #Jira project key for the TM4J project
        "items": items #the items list holds the test case keys to be added to this test cycle
    }
    response = request.Send(json.dumps(requestBody))
    responseData = json.loads(response.Text)
    key = str(responseData["key"])
    #store the new test cycle key as a project-level variable for later use
    Project.Variables.testRunKey = key
    Log.Message(key) #output the new test cycle key

 

 

Within that snippet, you may have noticed a function called getTestCaseID(), and this is how I mapped my TestComplete tests back to the Zephyr Scale test cases (by name, to their corresponding test case keys), as shown below:

 

 

def getTestCaseID(argument):
    #list the test case IDs in dict format - this is where you map your internal test cases (by name) to their corresponding TM4J test cases
    id_dict = {
        "login": "KIM-T279",
        "logout": "KIM-T280",
        "createTestRun": "KIM-T281",
        "UI_Error": "KIM-T282",
        "UI_Warning": "KIM-T283"
        }

    tc_ID = id_dict.get(argument, "Invalid testCase") #look up the test case key by name; fall back to "Invalid testCase" if unmapped
    return tc_ID #output the TM4J test case key

 

 

Referring to the screenshots above, you will notice that the names of my KDT tests are the dictionary keys, whereas the corresponding Zephyr Scale test case keys are the paired values within the id_dict variable.
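
As a quick sanity check from any script unit (assuming the routine lives in a unit named utilities, which is how the event handler below imports it), the lookup behaves like this:

import utilities

Log.Message(utilities.getTestCaseID("login"))      # logs "KIM-T279"
Log.Message(utilities.getTestCaseID("not_mapped")) # logs "Invalid testCase" - the fallback value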

 

Now that we have a script that creates a new test cycle per execution and assigns all of the project-level test cases to that newly created cycle, we just need to update the test run with the corresponding execution status for each test case as it runs. We can do this by leveraging TestComplete's OnStopTestCase event handler in conjunction with Zephyr Scale's REST API.

 

 

The event handler:

 

 

import json
import base64

def EventControl1_OnStopTestCase(Sender, StopTestCaseParams):
    import utilities #to use the utility functions
    testrunKey = Project.Variables.testRunKey #grab the test run key set by the createTestRun script
    tc_name = aqTestCase.CurrentTestCase.Name #grab current testcase name to provide to getTestCaseID function below
    tcKey = utilities.getTestCaseID(tc_name) #return testcase Key for required resource path below
    address = "https://YOUR-JIRA-INSTANCE/rest/atm/1.0/testrun/" + str(testrunKey) + "/testcase/" + str(tcKey) + "/testresult" #endpoint to create testrun w test cases
    username = "JIRA-USERNAME" #TM4J username
    password = "JIRA-PASSWORD!" #TM4J passowrd
    # Convert the user credentials to base64 for preemptive authentication
    credentials = base64.b64encode((username + ":" + password).encode("ascii")).decode("ascii")

    request = aqHttp.CreatePostRequest(address)
    request.SetHeader("Authorization", "Basic " + credentials)
    request.SetHeader("Content-Type", "application/json")
    
    #build the request body; results are limited to pass, warning, and fail from within TestComplete, mapped to their corresponding execution statuses in TM4J

    comment = "posting from TestComplete" #default comment for test executions
    if StopTestCaseParams.Status == 0: # lsOk
        statusId = "Pass" # Passed
    elif StopTestCaseParams.Status == 1: # lsWarning
        statusId = "Warning" # Passed with a warning
        comment = StopTestCaseParams.FirstWarningMessage
    else: # lsError
        statusId = "Fail" # Failed
        comment = StopTestCaseParams.FirstErrorMessage

    #request body with the mapped status and comment
    requestBody = {"status": statusId, "comment": comment}
    response = request.Send(json.dumps(requestBody))

    #in case the POST request fails, let us know via the test log
    if response.StatusCode != 201:
        Log.Warning("Failed to send results to TM4J. Response: " + response.Text)

 

 

We're now all set to run our TestComplete tests and have the corresponding execution statuses automatically updated within Zephyr Scale.

Now a few things to remember:

  • Always include the "createTestRun" KDT as the topmost test item within your project run. This is needed to create the test cycle within Zephyr Scale (there is no "OnProjectStart" event handler, so this step has to be run explicitly).
  • Make sure that within the script routine getTestCaseID() you have mapped the key/value pairs correctly, with the matching test names and test case keys.
  • Create a test set within the Project Explorer for the test items you'd like to run (each of which must be mapped per the previous bullet, otherwise the event handler will throw an error; one way to guard against that is sketched right after this list).
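
If you want a run to keep going even when a test item has no Zephyr Scale mapping yet, one option (not part of the original handler above) is to bail out of the event handler early whenever getTestCaseID returns its fallback value:

    # at the top of EventControl1_OnStopTestCase, right after resolving the test case key
    tcKey = utilities.getTestCaseID(aqTestCase.CurrentTestCase.Name)
    if tcKey == "Invalid testCase":
        Log.Warning("No Zephyr Scale mapping for '" + aqTestCase.CurrentTestCase.Name + "' - skipping result upload.")
        return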

Now every time you run your TestComplete tests, you should see a corresponding test run within Zephyr Scale, with the proper execution statuses and pass/warning/error messages. You can go on to customize/parameterize the POST body we build so that it carries even more information (e.g. environment, tester, attachments, etc.), as sketched below.
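
For example, a richer request body for the testresult POST could look like the sketch below. The field names (environment, userKey, executionTime) come from the Zephyr Scale Server (TM4J) REST API documentation for this endpoint, so double-check them against your version before relying on them; the values shown are placeholders.

    # richer request body for the testresult POST (replaces the simple one in the event handler)
    requestBody = {
        "status": statusId,              # Pass / Warning / Fail, as mapped above
        "comment": comment,
        "environment": "Chrome",         # must match an environment defined for the Jira project
        "userKey": "JIRA-username",      # Jira user key of the tester
        "executionTime": 120000          # execution time in milliseconds
    }
    response = request.Send(json.dumps(requestBody))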

 

You can now go ahead and leverage those Zephyr Scale test management reports, looking at execution efforts, defects raised, and traceability for all of the TestComplete tests you've set up to report back to Zephyr Scale.

Happy Testing!
