TestComplete with Zephyr
In this post, we are going to talk about SmartBear's UI testing tool, TestComplete, and writing the results to Zephyr in an automated fashion. When we think about using TestComplete with any test management tool, it can be accomplished in two ways: natively inside TestComplete, or by integrating with a CI/CD system. When we are using Zephyr, both ways utilize the Zephyr REST API. Linking to Zephyr natively from TestComplete is a script-heavy approach; an example of that can be found here. Today we are going to go into detail about using TestComplete with a CI/CD system and sending the results to Zephyr.

Now let's talk about automating this process. The most common way to automate triggering TestComplete tests is through one of its many integrations. TestComplete can integrate with any CI/CD system, as it has command-line options, REST API options, and many native integrations such as Azure DevOps and Jenkins. Using a CI/CD system makes managing executions at scale much easier. The general approach to this workflow is a two-stage pipeline, something like:

node {
    stage('Run UI Tests') {
        // Run the UI tests using TestComplete
    }
    stage('Pass Results') {
        // Pass results to Zephyr
    }
}

First, we trigger TestComplete to execute our tests somewhere (a local machine, a cloud machine, anywhere) and store the test results in a known relative location. Next, we use a batch file (or similar) to take the results from that location and send them to Zephyr.

When executing TestComplete tests, there are easy ways to write the results to a specific location in an automated fashion. We will look at the command line as well as what some of the native integrations offer. Starting with the TestComplete CLI, the /ExportSummary:File_Name flag generates a summary report for the test run and saves it, in JUnit-style XML, to a fully qualified or relative path. At a basic level we need this:

TestComplete.exe <ProjectSuite Location> [optional-arguments]

So something like this:

TestComplete.exe "C:\Work\My Projects\MySuite.pjs" /r /ExportSummary:C:\Reports\Results.xml /e

The /ExportSummary path can be relative or fully qualified. We could also use one of TestComplete's many native integrations, such as the Jenkins plugin, and specify in its settings where to output the results.

Now that our TestComplete tests are executing and the results are being written to a known location, we are ready for stage two of the pipeline: sending the results to Zephyr. I think the easiest option is to use the Zephyr API with the autoCreateTestCases option set to true. The command below is a replica of what you would use in a batch file script in the pipeline:

curl -H "Authorization: Bearer Zephyr-Token-Here" -F "file=@Relative-Location-of-Report-Here\report.xml;type=application/xml" "https://api.zephyrscale.smartbear.com/v2/automations/executions/junit?projectKey=Project-Key-Here&autoCreateTestCases=true"

After you modify the API token, report location, and project key, you are ready to run the pipeline. Once the pipeline has run, we can jump into Jira to confirm that the results are populating, complete with execution data and transactional data for analyzing any failed test steps.
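Putting the two stages together, a minimal scripted-pipeline sketch might look like the following. This is only an illustration: it assumes a Windows Jenkins agent with TestComplete.exe and curl on the PATH, and the suite path, report path, token, and project key are placeholders you would replace with your own values (ideally pulling the token from a Jenkins credential rather than hard-coding it).

node {
    stage('Run UI Tests') {
        // Run the project suite and export a JUnit-style summary (paths are placeholders).
        bat '"TestComplete.exe" "C:\\Work\\My Projects\\MySuite.pjs" /r /ExportSummary:Reports\\Results.xml /e'
    }
    stage('Pass Results') {
        // Post the exported summary to Zephyr; the token and project key below are placeholders.
        bat 'curl -H "Authorization: Bearer Zephyr-Token-Here" -F "file=@Reports\\Results.xml;type=application/xml" "https://api.zephyrscale.smartbear.com/v2/automations/executions/junit?projectKey=Project-Key-Here&autoCreateTestCases=true"'
    }
}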
On-premises Azure build agents running TestExecute with ID-based licensing

We recently migrated our self-hosted Azure DevOps build agents from HASP-based licensing to ID-based licensing. We use a test adapter via the COM integration to execute tests and pull test data back into Azure Pipelines. Through the sales and renewal teams, we were instructed that exporting the licenses to the On-Premises License Server was the appropriate mechanism. Now we are seeing that the access token expires regularly, and we have to manually access every build agent in the pool, open either TestComplete or TestExecute, log out, and then generate the auth code again. This article, however, says that an access key should be generated and provided via CLI execution:
https://support.smartbear.com/testcomplete/docs/licensing/id-based/automated-builds.html
This doesn't work with the COM integration, or in any way that I can find in the interface. It also doesn't seem to be possible to create an access key with the On-Premises server, as it always states "Failed to generate an Access Key". Years after releasing ID-based licensing, there still seem to be major disconnects in the implementation. The HASP system was simpler and more reliable. What is the proper solution to using ID-based licensing with Azure DevOps build agents?
Checking API Status in TestComplete

Introduction
I first saw the need to verify the state of an API several years ago with an application that used an address validation API in several of its operations. If this API was down or did not respond in a timely manner, many of the automated test cases would run and fail. In this case, and in others, I have found that making a simple call to the API to check the returned status code allowed me to skip, fail, or log a warning with a logical message instead of allowing the application to fail with another, less direct error message caused by the API issue.

The aqHttp Object
The TestComplete aqHttp object and its methods are very useful for performing simple checks like this. They are also useful for more complex tasks, such as leveraging an API to return a test data set, or verifying that certain data is returned before running UI tests that depend on that data.
Sending and Receiving HTTP Requests From Script Tests

More Complete API Testing
Most proper API testing should be done using tools like ReadyAPI or SoapUI. Both of these tools integrate with TestComplete or can be used alone, and they provide many more capabilities and automation options.
Integration With ReadyAPI and SoapUI

Code Example
Here I have provided a working example of how to code a GET request using aqHttp.CreateRequest to confirm that an API returns a status code of 200; it also logs the returned records.

function sendGetRequest() {
  let resourcePath = "https://restcountries.com/v3.1/all";
  let resourceQuery = "?fields=name,capital";
  let url = resourcePath + resourceQuery;
  try {
    // Send GET request
    let response = aqHttp.CreateRequest("GET", url, false).Send();
    // Check for successful response
    if (response.StatusCode === 200) {
      // Parse JSON response
      let allData = JSON.parse(response.Text);
      Log.Message("Total records received: " + allData.length);
      // Process each record
      allData.forEach((record, index) => {
        Log.Message("Record " + (index + 1) + ": " + JSON.stringify(record));
      });
      return true; // Send a bool back to the calling function.
    } else {
      throw new Error("Failed to fetch data. Status code: " + response.StatusCode);
    }
  } catch (error) {
    Log.Error("Error during request or data processing: " + error.message);
  }
}

Enhancements
You could accept parameters for the resourcePath and resourceQuery. Parameterize the logging loop or remove it. Return the JSON to the calling function for use. Perform other tasks based on the return code. (A sketch of the first few items appears after the conclusion below.)

Conclusion
With the growing use of API calls in desktop applications, and the fact that APIs are practically the foundation of any web site, checking an API before a test case run is almost a requirement for consistent test runs and good error logging. This small script can bring big rewards to your test runs and reports. Cheers! I hope you find it as useful as I have! If you find my posts helpful drop me a like! 👍 Leave a comment if you want to contribute or have a better solution or an improvement. 😎 Have a great day!
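As a sketch of the enhancements listed above, the same request could be wrapped in a parameterized helper that returns the parsed JSON to the caller. The function name, parameters, and defaults here are illustrative, not part of the original example:

// Minimal parameterized sketch: path, query, and whether to log each record are passed in.
function checkApiStatus(resourcePath, resourceQuery, logRecords) {
  let url = resourcePath + (resourceQuery || "");
  try {
    let response = aqHttp.CreateRequest("GET", url, false).Send();
    if (response.StatusCode === 200) {
      let allData = JSON.parse(response.Text);
      Log.Message("Total records received: " + allData.length);
      if (logRecords) {
        allData.forEach((record, index) => {
          Log.Message("Record " + (index + 1) + ": " + JSON.stringify(record));
        });
      }
      return allData; // Return the parsed JSON to the caller for further use.
    }
    Log.Warning("API returned status code " + response.StatusCode + " for " + url);
    return null;
  } catch (error) {
    Log.Error("Error during request or data processing: " + error.message);
    return null;
  }
}

// Example usage:
// let countries = checkApiStatus("https://restcountries.com/v3.1/all", "?fields=name,capital", false);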
TestComplete Allure Jenkins Integration

Has anyone tried the TestComplete Allure Jenkins integration? I am unable to see test steps when using /exportsummary. It works, but only shows basic detail. When using /exportlog, the XML file cannot be parsed. I want to see the test log in the Allure HTML file as well.
Use Variables in the sql connectionstring

Hi there, I'm using two environments and two databases: one to create the test cases and one to execute them. I want to check the results with DBTables custom queries (SSMS), and I have a connection string for this. How can I make this string variable? When I log in to the ALPHA environment, I want the string to contain ALPHA and server01; when I log in to the BETA environment, I want it to contain BETA and server02. Greetings, Sjef van Irsel
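For illustration only, here is a minimal script sketch of one way an environment-dependent connection string could be assembled, assuming a hypothetical project variable named Environment; the provider, server names, and database name are placeholders rather than anything from the original post:

// Sketch: pick the connection string based on a (hypothetical) project variable "Environment".
function getConnectionString() {
  let env = Project.Variables.Environment; // e.g. "ALPHA" or "BETA" (hypothetical variable)
  let server = (env == "ALPHA") ? "server01" : "server02";
  let connStr = "Provider=SQLOLEDB.1;Integrated Security=SSPI;" +
                "Initial Catalog=MyDatabase_" + env + ";Data Source=" + server;
  Log.Message("Using connection string: " + connStr);
  return connStr;
}

The returned string could then be used wherever the query connection is configured; the variable name and connection-string layout here are assumptions for illustration.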
VS 2019 - Failed to initialize the TFS API Library

When we try to access the Server Path and add a server location to access the online repo, we get a "Failed to initialize the TFS API Library" error.
Steps to reproduce:
1) Navigate to Tools -> Options -> Source Control and select 'Current Source Control Plugin' as 'Microsoft Visual Studio 2019' [already installed locally, and the source path / workspace is set]
2) Navigate to File -> Source Control -> Open From Team Foundation Server...
3) Click on the ... button in the 'Server Path:' section.
Unable to execute Desktop GUI tests on Azure Virtual Machine connected to GitHub Actions runner

Hi everyone, Currently we are migrating to GitHub, and as part of that I am trying to execute a simple desktop GUI test using TestComplete with GitHub Actions.
The issue: I have created a batch file to launch TestComplete and run the simple GUI test, as mentioned in the documentation on the TestComplete website (refer to the Syntax screenshot).
https://support.smartbear.com/testcomplete/docs/working-with/integration/github-actions.html?_ga=2.246307148.915657396.1707729558-339793958.1707729558
https://support.smartbear.com/testexecute/docs/running/automating/command-line/command-line.html#examples
https://community.smartbear.com/discussions/testcomplete-questions/testexecute-command-line/119642
The batch file, which was created using the support documents, works fine when executed manually on the virtual machine: it launches TestComplete, executes the GUI test, and generates a report. But when it is executed with GitHub Actions, it hangs indefinitely (refer to the Admin-mode and GitHub-execution screenshots) and has to be cancelled. So I am stuck at this point, because debugging is difficult: there is no specific error message during or after the execution.
1. I tried to run the batch file using admin privileges.
2. I checked firewall settings, etc.
3. I have gone through the documentation and the questions on the community related to this topic but could not find any solution to this issue.
Has anyone had a similar issue? Thank you.
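For reference, based on the TestExecute command-line documentation linked above (and the same flags used in the Zephyr post earlier on this page), a batch file of the kind described might look roughly like this; the paths are placeholders and TestExecute.exe is assumed to be on the PATH:

rem Minimal sketch: run the project suite with TestExecute and export a JUnit-style summary.
rem Paths are placeholders; adjust to your own suite and report locations.
"TestExecute.exe" "C:\Tests\MySuite.pjs" /run /exit /SilentMode /ExportSummary:C:\Tests\Reports\Results.xml
rem Propagate TestExecute's exit code to the GitHub Actions step.
exit /b %ERRORLEVEL%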
ReadyAPI - Jira Integration

I am attempting to use the Jira plugin and have it set up. I have the connections configured and they are working. However, when trying to create a new Jira item, the screen displayed does not have all the needed information. Looking at the wiki for the plugin, the screenshots for creating a new Jira item show a lot more information than I am seeing. The connection to Jira is fine, as I am able to click the Jira icon from a test case and select my project. It is also retrieving the issue types correctly from Jira. However, when filling out the Summary, which is the only required field displayed to me, the creation of the item fails because it is missing more required information.
Jira version = 9.12.15
ReadyAPI version = 3.58.0
Jira plugin version = 1.6.7
There are no errors in the error log. The only "error" I see is in the HTTP log after the creation of the Jira item fails:
RestClientException{statusCode=Optional.of(400), errorCollections=[ErrorCollection{status=400, errors={customfield_10010=Acceptance Criteria is required., customfield_11200=Category is required.}, errorMessages=[]}]}
In the ReadyAPI log, I see records like the ones below. It seems to be getting all the fields from Jira, but even after it checks for them, it still just displays the issue type; for example, there are lines where it appears to find the "description" field. I have attached a screenshot of the "Create Story" screen. Any help or guidance would be appreciated.
Tue Feb 25 12:47:33 CST 2025: INFO: getFieldInfo.bugTrackerProvider : com.smartbear.ready.plugin.jira.impl.JiraProvider@4a9b76c, selectedProject : EWS, fieldInfoKey: description
Tue Feb 25 12:47:33 CST 2025: INFO: [JiraProvider].[getProjectFieldsInternal] we reach here
Tue Feb 25 12:47:33 CST 2025: INFO: getFieldInfo.bugTrackerProvider : com.smartbear.ready.plugin.jira.impl.JiraProvider@4a9b76c, selectedProject : EWS, fieldInfoKey: description
Tue Feb 25 12:47:33 CST 2025: INFO: [JiraProvider].[getProjectFieldsInternal] we reach here
Tue Feb 25 12:47:48 CST 2025: INFO: [CreateIssueMetadataJsonParserExt].[parse] json: {"maxResults":50,"startAt":0,"total":16,"isLast":true,"values":[{"self":"https:\/\/jira\/rest\/api\/2\/issuetype\/1","id":"1","description":"A problem which impairs or prevents the functions of the product.","iconUrl":"https:\/\/jira\/secure\/viewavatar?size=xsmall&avatarId=13803&avatarType=issuetype","name":"Bug","subtask":false},{"self":"https:\/\/jira\/rest\/api\/2\/issuetype\/10100","id":"10100","description":"Spikes are a type of story used for activities such as research, design, exploration, or prototyping. 
They are used to gain knowledge in order to reduce risk.","iconUrl":"https:\/\/jira\/images\/icons\/issuetypes\/exclamation.png","name":"Spike","subtask":false},{"self":"https:\/\/jira\/rest\/api\/2\/issuetype\/10300","id":"10300","description":"This subtask represents a defect found during testing or review of a Story","iconUrl":"https:\/\/jira\/images\/icons\/subtask-defect.png","name":"Defect","subtask":true},{"self":"https:\/\/jira\/rest\/api\/2\/issuetype\/10400","id":"10400","description":"An impediment is anything that is slowing or stopping the team(s) or effort down.","iconUrl":"https:\/\/jira\/images\/icons\/issuetypes\/delete.png","name":"Impediment","subtask":false},{"self":"https:\/\/jira\/rest\/api\/2\/issuetype\/10700","id":"10700","description":"An announcement of information that is not an impediment or risk.","iconUrl":"https:\/\/jira\/images\/icons\/issuetypes\/blank.png","name":"Notification","subtask":false},{"self":"https:\/\/jira\/rest\/api\/2\/issuetype\/10701","id":"10701","description":"Any uncertain event that can have an impact on the success.","iconUrl":"https:\/\/jira\/secure\/viewavatar?size=xsmall&avatarId=13803&avatarType=issuetype","name":"Risk","subtask":false},{"self":"https:\/\/jira\/rest\/api\/2\/issuetype\/10702","id":"10702","description":"Use for technical analysis, research work done by team member for detail design type work. ","iconUrl":"https:\/\/jira\/images\/icons\/issuetypes\/defect.png","name":"Technical Analysis","subtask":false},{"self":"https:\/\/jira\/rest\/api\/2\/issuetype\/10703","id":"10703","description":"Use for solution analysis, work done by team member for defining and clarifying discovery efforts, requirements, MBI\/MTI and features. Mainly used by solution analysts.","iconUrl":"https:\/\/jira\/images\/icons\/issuetypes\/defect.png","name":"Solution Analysis","subtask":false},{"self":"https:\/\/jira\/rest\/api\/2\/issuetype\/10804","id":"10804","description":"This is an \"initial placeholder\" issue and optional to use. It generally only includes the summary and possibly a supporting sentence or two","iconUrl":"https:\/\/jira\/secure\/viewavatar?size=xsmall&avatarId=14404&avatarType=issuetype","name":"Raw","subtask":false},{"self":"https:\/\/jira\/rest\/api\/2\/issuetype\/3","id":"3","description":"A task that needs to be done.","iconUrl":"https:\/\/jira\/secure\/viewavatar?size=xsmall&avatarId=13818&avatarType=issuetype","name":"Task","subtask":false},{"self":"https:\/\/jira\/rest\/api\/2\/issuetype\/4","id":"4","description":"An improvement or enhancement to an existing feature or task.","iconUrl":"https:\/\/jira\/secure\/viewavatar?size=xsmall&avatarId=13810&avatarType=issuetype","name":"Improvement","subtask":false},{"self":"https:\/\/jira\/rest\/api\/2\/issuetype\/5","id":"5","description":"The sub-task of the issue","iconUrl":"https:\/\/jira\/secure\/viewavatar?size=xsmall&avatarId=13816&avatarType=issuetype","name":"Sub-task","subtask":true},{"self":"https:\/\/jira\/rest\/api\/2\/issuetype\/6","id":"6","description":"Created by Jira Software - do not edit or delete. Issue type for a big user story that needs to be broken down.","iconUrl":"https:\/\/jira\/secure\/viewavatar?size=xsmall&avatarId=15719&avatarType=issuetype","name":"Feature","subtask":false},{"self":"https:\/\/jira\/rest\/api\/2\/issuetype\/7","id":"7","description":"Created by Jira Software - do not edit or delete. 
Issue type for a user story.","iconUrl":"https:\/\/jira\/secure\/viewavatar?size=xsmall&avatarId=13815&avatarType=issuetype","name":"Story","subtask":false},{"self":"https:\/\/jira\/rest\/api\/2\/issuetype\/8","id":"8","description":"Created by GreenHopper - do not edit or delete. Issue type for a technical task.","iconUrl":"https:\/\/jira\/images\/icons\/ico_task.png","name":"Technical task","subtask":true},{"self":"https:\/\/jira\/rest\/api\/2\/issuetype\/9","id":"9","description":"Unit, Behavior, or Test Case","iconUrl":"https:\/\/jira\/images\/icons\/chem.png","name":"Test task","subtask":true}]}
Tue Feb 25 12:47:48 CST 2025: INFO: [CreateIssueMetadataJsonParserExt].[parse] json: (the same JSON payload is logged a second time; it is identical to the entry above)
TestComplete Reporting using Azure DevOps

We are using TestComplete/TestExecute with Azure DevOps and running tests as part of a pipeline. By default, it attaches the .mht file to the specific test results. Recently I started facing an issue opening the .mht file. Is there any way to change the results file format, keeping in mind that I want to stick to the same approach of attaching results to each failed test rather than publishing them as an artifact? Thanks!