Recent Content
Accessibility Testing Made Easy: How TestComplete Ensures Web Compliance
Most test automation focuses on functionality, but in regulated industries like healthcare, finance, and education, teams must also prove accessibility and compliance. TestComplete's Web Audit Checkpoints make this simple by integrating accessibility scans directly into automated tests, identifying errors like missing alt text, deprecated tags, and invalid HTML. Teams can set practical thresholds (e.g., zero critical errors, a limited number of warnings) to balance enforcement and flexibility. This ensures every regression run checks not only whether features work, but whether they meet legal and usability standards. The result is faster compliance, reduced risk, and higher-quality user experiences for everyone. Check out our demo video to see how accessibility testing in TestComplete fits seamlessly into your automation pipeline and helps you build more inclusive, compliant web applications: Accessibility Testing in TestComplete Demo
TestComplete with Zephyr

In this post, we are going to talk about SmartBear's UI testing tool, TestComplete, and writing the results to Zephyr in an automated fashion. When we think about using TestComplete with any test management tool, it can really be accomplished in two ways: natively inside TestComplete, or by integrating with a CI/CD system. When we are using Zephyr, both ways will utilize the Zephyr REST API. When we link to Zephyr natively from TestComplete, it is a script-heavy approach. An example of that can be found here. Today we are going to go into detail about using TestComplete with a CI/CD system and sending the results to Zephyr.

Now let's talk about automating this process. The most common way to automate triggering TestComplete tests is through one of its many integrations. TestComplete can integrate with any CI/CD system, as it has command-line options, REST API options, and many native integrations like Azure DevOps or Jenkins. The use of a CI/CD system makes managing the executions at scale much easier. The general approach to this workflow would be a two-stage pipeline, something like:

node {
    stage('Run UI Tests') {
        // Run the UI tests using TestComplete
    }
    stage('Pass Results') {
        // Pass results to Zephyr
    }
}

First, we trigger TestComplete to execute our tests somewhere. This could be a local machine or a cloud computer, anywhere, and we store the test results in a relative location. Next, we use a batch file (or similar) to take the results from that relative location and send them to Zephyr.

When executing TestComplete tests, there are easy ways to write the results to a specific location in an automated fashion. We will look at options through the CLI as well as what some of the native integrations offer. Starting with the TestComplete CLI, the /ExportSummary:File_Name flag will generate a summary report for the test runs and save it to a fully qualified or relative path in JUnit-style XML structure. At a basic level we need this:

TestComplete.exe <ProjectSuite Location> [optional-arguments]

So something like this:

TestComplete.exe "C:\Work\My Projects\MySuite.pjs" /r /ExportSummary:C:\Reports\Results.xml /e

The report specified by the /ExportSummary flag can be stored in a relative or fully qualified directory. We could also use one of TestComplete's many native integrations, like Jenkins, and specify in the settings where to output results. Now that our TestComplete tests are executing and the results are being written to a relative location, we are ready for stage 2 of the pipeline: sending the results to Zephyr.

So now let's send our results to Zephyr. I think the easiest option is to use the Zephyr API and set the Auto-Create Test Cases option to true. The command below is a replica of what you would use in a batch file script in the pipeline:

curl -H "Authorization: Bearer Zephyr-Token-Here" -F "file=@Relative-Location-of-Report-Here\report.xml;type=application/xml" "https://api.zephyrscale.smartbear.com/v2/automations/executions/junit?projectKey=Project-Key-Here&autoCreateTestCases=true"

After you modify the API token, relative location, and project key, you are good to run the pipeline (a fleshed-out sketch of the full pipeline is shown at the end of this post). After we run the pipeline, let's jump into Jira to confirm the results are populating, complete with execution data and transactional data to analyze the failed test steps.
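To make the two stages concrete, here is a minimal sketch of the scripted pipeline described above, assuming a Jenkins agent running on a Windows machine with TestComplete.exe available on the PATH; the suite path, report location, ZEPHYR_TOKEN variable, and project key are placeholders, not values from the original post:

node {
    stage('Run UI Tests') {
        // Run the project suite and export a JUnit-style summary report (paths are placeholders)
        bat 'TestComplete.exe "C:\\Work\\My Projects\\MySuite.pjs" /r /ExportSummary:C:\\Reports\\Results.xml /e'
    }
    stage('Pass Results') {
        // Post the summary report to Zephyr; ZEPHYR_TOKEN is assumed to be set as a pipeline credential or environment variable
        bat 'curl -H "Authorization: Bearer %ZEPHYR_TOKEN%" -F "file=@C:\\Reports\\Results.xml;type=application/xml" "https://api.zephyrscale.smartbear.com/v2/automations/executions/junit?projectKey=Project-Key-Here&autoCreateTestCases=true"'
    }
}

Note that the Jenkins bat step fails a stage on a non-zero exit code, so a TestComplete run that reports errors can stop the pipeline before the upload stage; you may want to guard the first stage (for example, with try/finally) if results should be uploaded even when tests fail.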
Stop Skimming PDFs, Start Automating PDF Testing

On the surface, PDFs look simple, but testing them is a whole different story. Invoices, contracts, statements, compliance reports… they're often the last thing that lands in a customer's hands. That also means even the smallest issue, like a missing field or a misplaced decimal, can turn into something big.

The challenge is that PDFs aren't like web pages or apps where you can easily inspect elements. They're containers packed with content, layout, images, and data from different systems. When you add in dynamic content that changes for every customer, formatting that has to stay perfect, and the regulatory risks in industries like finance or healthcare, you start to see why manual testing just doesn't cut it. It's slow, inconsistent, and doesn't scale.

This is where automation becomes essential. With automation, you can make sure data is always accurate, layouts stay consistent, and testing scales across thousands of documents without slowing down your team. Instead of spending hours opening PDFs by hand, QA can focus on higher-value work while still knowing that every report or statement going out the door is right.

That's exactly where TestComplete comes in. It's built to handle the tough parts of PDF testing so you don't have to. You can validate content down to the last character, run visual checks to keep layouts consistent, and plug it all straight into your CI/CD pipeline. The result is faster releases, fewer headaches, and a lot more confidence that the documents your customers see are exactly as they should be. Click on this link and check out a quick demo to see how TestComplete makes PDF testing easier and more reliable in action.
Accelerating Quality: How TestComplete Leads in Test Creation, Execution, and Object Recognition

Temil Sanchez, the new Product Manager for TestComplete, shares insights from a recent evaluation comparing TestComplete and Ranorex. TestComplete stood out for its faster test creation, intuitive interface, and superior object recognition, which reduce maintenance and ensure robust automation. Looking ahead, the focus is on integrating AI to further accelerate test creation, enhance resilience, and help teams release quality software faster.
Checking API Status in TestComplete

Introduction

I first saw the need to verify the state of an API several years ago with an application that used an address validation API in several of its operations. If this API was down or did not respond in a timely manner, many of the automated test cases would run and fail. In this case, and in others, I have found that doing a simple call to the API to check the returned status code allowed me to skip, fail, or log a warning with a logical message instead of allowing the application to fail with another, less direct error message due to the API issue.

The aqHttp Object

The TestComplete aqHttp object and its methods are very useful for performing simple checks like this and are also useful for other, more complex tasks like leveraging an API to return a test data set or even verifying that certain data is returned prior to running tests against the UI that depend on that data.

Sending and Receiving HTTP Requests From Script Tests

More Complete API Testing

Most proper API testing should be done using tools like ReadyAPI or SoapUI. Both of these tools will integrate with TestComplete or can be used alone and will provide many more capabilities and automation options.

Integration With ReadyAPI and SoapUI

Code Example

Here I have provided a working example of how to code a GET request using aqHttp.CreateRequest to confirm an API returns a status code of 200; it also logs the returned records.

function sendGetRequest() {
  let resourcePath = "https://restcountries.com/v3.1/all";
  let resourceQuery = "?fields=name,capital";
  let url = resourcePath + resourceQuery;

  try {
    // Send GET request
    let response = aqHttp.CreateRequest("GET", url, false).Send();

    // Check for successful response
    if (response.StatusCode === 200) {
      // Parse JSON response
      let allData = JSON.parse(response.Text);
      Log.Message("Total records received: " + allData.length);

      // Process each record
      allData.forEach((record, index) => {
        Log.Message("Record " + (index + 1) + ": " + JSON.stringify(record));
      });

      return true; // Send a bool back to the calling function.
    } else {
      throw new Error("Failed to fetch data. Status code: " + response.StatusCode);
    }
  } catch (error) {
    Log.Error("Error during request or data processing: " + error.message);
  }
}

Enhancements

You could accept parameters for the resourcePath and resourceQuery. Parameterize the logging loop or remove it. Return the JSON to the calling function for use. Perform other tasks based on the return code.

Conclusion

With the growing use of API calls in desktop applications, and the fact that APIs are the foundation of almost any web site, checking an API before a test case run is almost a requirement for consistent test runs and good error logging. This small script can bring big rewards to your test runs and reports. Cheers! I hope you find it as useful as I have! If you find my posts helpful, drop me a like! 👍 Leave a comment if you want to contribute or have a better solution or an improvement. 😎 Have a great day
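Building on the Enhancements list above, here is a minimal, hedged sketch of a parameterized version that simply returns the status code so the calling test can decide whether to skip, warn, or fail. The function and parameter names are illustrative additions, not part of the original example:

// Sketch only: returns the HTTP status code (or -1 on error) so callers decide what to do.
function checkApiStatus(resourcePath, resourceQuery)
{
  let url = resourcePath + (resourceQuery || "");
  try
  {
    let response = aqHttp.CreateRequest("GET", url, false).Send();
    Log.Message("Status for " + url + ": " + response.StatusCode);
    return response.StatusCode;
  }
  catch (error)
  {
    Log.Warning("API check failed for " + url + ": " + error.message);
    return -1;
  }
}

// Example use before UI test cases that depend on the API (names are illustrative):
function runDependentUiTests()
{
  if (checkApiStatus("https://restcountries.com/v3.1/all", "?fields=name,capital") !== 200)
  {
    Log.Warning("API unavailable - skipping dependent test cases.");
    return;
  }
  // ... run the dependent UI test cases here
}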
Azure DevOps Pipelines - Running “Headless” Tests

One of the most common requests I get from TestComplete customers who run their tests from Azure DevOps pipelines is: “How can I execute my tests on remote Virtual Machines (VMs) with self-hosted agents without needing to maintain an active terminal session?”

Note: This document does not detail setting up the Azure DevOps pipeline or the self-hosted Azure agents. That process can be found in the TestComplete documentation here: Integration With Azure DevOps and Team Foundation Server via TestComplete Test Adapter | TestComplete Documentation

There are some details in the TestComplete documentation on how to accomplish this type of configuration, but I’ll go through the full process and how to configure this solution with two options. That documentation can be found online: Disconnecting From Remote Desktop While Running Automated Tests | TestComplete Documentation

The first option is to set up multiple virtual machines (VMs). You start by logging into one VM via Remote Desktop Protocol (RDP). From there, you connect to other Tester VMs running Microsoft agents. After issuing a command to release the RDP session back to the Admin user, the Tester VMs remain active, allowing TestExecute to run tests triggered by the pipeline.

The second option is to use a single, high-performance VM to host all remote sessions and agents. Here, multiple Tester sessions run on the same VM, each with its own agent handling pipeline requests. Each session runs its own TestExecute instance as long as enough floating licenses are available. The Admin session connects to the VM and then accesses each Tester session via RDP using the VM’s loopback address (e.g., 127.0.0.2). However, this method requires the VM to be part of a Microsoft Domain to support more than two simultaneous RDP sessions.

Here are the commands we’ll be taking advantage of in this operation. To gracefully disconnect from an active RDP terminal session, converting it into a console session, open a Command Prompt (CMD) in the Admin user session and use the following:

%windir%\System32\tscon.exe RDP-Tcp#NNN /dest:console

where RDP-Tcp#NNN is the ID of your current Remote Desktop session, for example, RDP-Tcp#5. You can see it in the Windows Task Manager on the Users tab, in the Session column. The Session column is hidden by default. To show it, right-click somewhere within the row that displays CPU, Memory, and so on, then choose Session in the opened context menu.

Our documentation also includes an easy-to-use batch file option that can be run as an Administrator to more easily disconnect from the remote Admin session. Create a batch file with this code:

for /f "skip=1 tokens=3" %%s in ('query user %USERNAME%') do (
  %windir%\System32\tscon.exe %%s /dest:console
)

Create a desktop shortcut to this file. To do this, right-click the batch file and select Send to > Desktop (create shortcut). In the shortcut properties, click Advanced and select Run as administrator. Double-click this shortcut on the remote computer (in the Remote Desktop Connection window) or call this batch file at the beginning of your tests (provided that the tests are run as an Administrator).

To gracefully reconnect to a remote console session, use the following command in an open CMD prompt:

mstsc -v:servername /F -console

where servername is the address of the remote VM hosting the Admin session.

Option 1: Multiple VMs in the testing network
1. RDP connect to and set up the remote VMs to be used for the Tester Agents, have their self-hosted Microsoft agents active and listening for jobs, then disconnect the RDP sessions.
2. RDP connect to the VM to be used as the “Admin” session.
3. From the “Admin” VM, RDP connect to each Tester VM. (Connecting to more than 2 remote RDP sessions will require the VMs to be on a Domain with the appropriate settings configured to allow for more than 2 remote sessions.) Your Admin system should look something like this image, where I am connected to 2 of my Tester VMs with their agents listening for jobs:
4. Ensure that your Agents are active in the Azure DevOps Agent pool.
5. Disconnect from the Admin user session using the prepared batch file or the direct command with the RDP session ID.
6. Run the pipeline to validate that tests execute as expected.
7. Reconnect to the Admin VM to see test results from the running agents using the command: mstsc -v:servername /F -console

Option 2: Using a Single Virtual Machine

1. RDP connect to the VM to be used for testing using the “Admin” account. I’m using my Tester01 account in this demonstration.
2. RDP connect to the other users on the same VM using the 127.0.0.2 loopback address. Run the Microsoft self-hosted agents on these connected sessions.
3. Disconnect from the Admin user session using the prepared batch file or the direct command with the RDP session ID.
4. Prepare the pipeline. In my case, as I am not on a Domain, I am limited to ONE RDP session from my “Admin” user. I’m disabling the agent on Tester03 since we are using Tester02 for our pipeline.
5. Execute the pipeline and validate successful test executions.
6. Reconnect to the “Admin” session to make changes to the configuration as needed: mstsc -v:servername /F -console

Conclusion

In conclusion, these two methods provide testing options from Azure DevOps using VMs without requiring a dedicated terminal session from your development system. They still require that session, but the solutions provide options to work around that “directly monitored” connection. These solutions also give you the security of running them on VMs in your environment: easily secured, controlled, and isolated by your firewall and security requirements.

Full demonstration video link: Azure DevOps Pipelines - Remote "Headless" Testing
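The note in the commands section above about calling the disconnect batch file at the beginning of your tests can be scripted directly in TestComplete. Here is a minimal, hedged sketch in JavaScript; the batch file path and name are assumptions for illustration, and the test run is assumed to have Administrator rights:

// Sketch only: runs the disconnect batch file shown earlier, stored at an assumed path.
function disconnectRdpBeforeTests()
{
  var batchFile = "C:\\Scripts\\DisconnectRDP.bat"; // hypothetical location of the batch file
  // Run the batch file hidden (window style 0) and wait for it to finish (third argument true)
  var exitCode = WshShell.Run("cmd /c \"" + batchFile + "\"", 0, true);
  if (exitCode !== 0)
    Log.Warning("The RDP disconnect batch file returned exit code " + exitCode);
}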
Data Structures for Application Object Definitions

Library files containing data structure objects like JavaScript objects, Map objects, and JSON objects work well across multiple projects and suites. These files are also nice to work with in source control: changes in any library file can be easily seen and tracked. Each of these object types will integrate seamlessly with methods like FindChild and work equally well with desktop and web applications.

JavaScript objects are directly usable in TestComplete scripts. Map objects have some advantages over JavaScript objects, such as built-in looping methods and properties. This adds some complexity in implementation, as these methods must be used to access the data structure. JSON is a 'portable' object definition; many different languages support JSON. JSON requires a parse operation to return a JavaScript object prior to use in JavaScript. JSON would be preferred if the object definitions were created directly from the application code base by development.

Object Definition Examples

This code defines an object named customOptionObjDefs. It contains two UI object definitions: btnOK and btnCancel (it could contain many definitions). The code below shows examples of a JavaScript object, a JavaScript Map, and a JSON object, along with the code used with the TestComplete method FindChild.

JavaScript Object

const customOptionObjDefs = {
  btnOK: { propertyNames: ["WPFControlName", "WPFControlText"], propertyValues: ["btnOK", "OK"], depth: 16 },
  btnCancel: { propertyNames: ["WPFControlName", "WPFControlText"], propertyValues: ["btnCancel", "Cancel"], depth: 16 }
};

const btnOKDef = customOptionObjDefs.btnOK; // directly accessible
const btnOKObject = parentObject.FindChild(btnOKDef.propertyNames, btnOKDef.propertyValues, btnOKDef.depth);

JavaScript Map Object

const customOptionObjDefs = new Map([
  ["btnOK", { propertyNames: ["WPFControlName", "WPFControlText"], propertyValues: ["btnOK", "OK"], depth: 16 }],
  ["btnCancel", { propertyNames: ["WPFControlName", "WPFControlText"], propertyValues: ["btnCancel", "Cancel"], depth: 16 }]
]);

const btnOKDef = customOptionObjDefs.get("btnOK"); // .get method
const btnOKObject = parentObject.FindChild(btnOKDef.propertyNames, btnOKDef.propertyValues, btnOKDef.depth);

JSON Object

// JSON is text (for example, read from a file), so it must be parsed before use.
const customOptionObjDefsJson = `{
  "btnOK": { "propertyNames": ["WPFControlName", "WPFControlText"], "propertyValues": ["btnOK", "OK"], "depth": 16 },
  "btnCancel": { "propertyNames": ["WPFControlName", "WPFControlText"], "propertyValues": ["btnCancel", "Cancel"], "depth": 16 }
}`;

const btnOKDef = JSON.parse(customOptionObjDefsJson).btnOK; // .parse method
const btnOKObject = parentObject.FindChild(btnOKDef.propertyNames, btnOKDef.propertyValues, btnOKDef.depth);

I prefer to organize object definition libraries in files by application and form and to import them in scripts, or in the class structure of a project, using 'require'. These files can be stored externally to the project, shared, and organized as desired.

const orderEntryDefs = require("orderEntryDefinitions");

Sometimes application objects are not well named or require a variable to be calculated dynamically in order to be defined. These objects are a challenge to define for automation and usually lead to brittle code. In such cases, code is written to determine the values needed for the definition, and those values are passed directly to the FindChild method or to a 'helper' class method or function. In most cases a helper class method or a function is used to create and return the objects defined in the definition libraries.
The use of a 'helper' class method or function will provide a layer of abstraction, centralized error handling, and more modular code (a minimal sketch of such a helper appears at the end of this post). The file(s) containing these functions or methods would also be imported for each script using 'require'.

Conclusion

The use of data structures provides a modular way to define and create instances of objects for automation scripts. These objects can be easily looped over to find and create container objects for entire forms or for very complex objects like grids or trees. I have found that creating all available objects for a form (or in groups, if the form changes dynamically) and storing them in an object makes the script easier to write and to read.

WW Wood Products Inc.
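To illustrate the helper approach described above, here is a minimal, hedged sketch. It assumes the plain JavaScript object form of the definition library; getMappedObject is a hypothetical name, and the definition shape matches the customOptionObjDefs examples earlier in the post:

// Sketch only: centralizes FindChild calls and error handling for definition libraries.
function getMappedObject(parentObject, objDefs, name)
{
  let def = objDefs[name];
  if (def == null)
  {
    Log.Error("No object definition found for '" + name + "'.");
    return null;
  }
  // One place to adjust search depth, logging, or waits for every mapped object
  let obj = parentObject.FindChild(def.propertyNames, def.propertyValues, def.depth);
  if (!obj.Exists)
  {
    Log.Error("Object '" + name + "' was not found using its definition.");
    return null;
  }
  return obj;
}

// Example use with a definitions library imported via require (names are illustrative):
// const orderEntryDefs = require("orderEntryDefinitions");
// let btnOK = getMappedObject(ordersForm, orderEntryDefs, "btnOK");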
How To: Read data from the Windows Registry

Hello all, I have recently learned how to retrieve data from the Windows registry in JavaScript test units. I am using this to return the OS information and application path information. This is very useful when added to the EventControl_OnStartTest event code: it allows you to return OS information and other needed data at each test run. Some test management systems may provide this information for you, or it may be logged in the data produced in a pipeline run. This will embed the information directly into your test log.

SmartBear KB Links:
Storages Object
Storages Object Methods
Storages.Registry Method
Section Object
GetSubSection Method

This bit of code will return the Product Name and Current Build from the registry. This location may vary between OS versions, so you will want to check this with RegEdit.

let Section = Storages.Registry("SOFTWARE\\Microsoft\\Windows NT", HKEY_LOCAL_MACHINE);
let regKeyString = Section.GetSubSection("CurrentVersion").Name;
let productIdString = Storages.Registry(regKeyString, HKEY_LOCAL_MACHINE, 1, true).GetOption("ProductName", "");
let currentBuildString = Storages.Registry(regKeyString, HKEY_LOCAL_MACHINE, 1, true).GetOption("CurrentBuild", "");
Log.Message("Windows Version: " + productIdString + " Build: " + currentBuildString);

I have also found the need to find and set an application path and work folder in the project TestedApp for running through a pipeline, because the pipeline deploys the application to a non-standard path.

let Section = Storages.Registry("SOFTWARE\\WOW6432Node\\<_yourSectionName>\\", HKEY_LOCAL_MACHINE);
let regKey = Section.GetSubSection(<_yourSubSectionName>).Name;
let Path = Storages.Registry(regKey, HKEY_LOCAL_MACHINE, 0, true).GetOption("", "");
let WorkFolder = Storages.Registry(regKey, HKEY_LOCAL_MACHINE, 0, true).GetOption("Path", "");
let appIndex = TestedApps.Find(<_yourAppName>);
if (appIndex >= 0) {
  if (TestedApps.Items(<_yourAppName>).Path != Path) {
    TestedApps.Items(<_yourAppName>).Path = Path;
  }
  if (TestedApps.Items(<_yourAppName>).WorkFolder != WorkFolder) {
    TestedApps.Items(<_yourAppName>).Params.ActiveParams.WorkFolder = WorkFolder;
  }
}
else {
  Log.Error("TestedApp " + <_yourAppName> + " does not Exist.");
  Runner.Stop(true);
}

I hope you find these links and code examples as useful as I have! Have a great day!
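As a hedged illustration of hooking the first snippet into the OnStartTest event mentioned above, here is a minimal sketch. It assumes an Events control with the default name GeneralEvents; adjust the handler name to match your project:

// Sketch only: logs the OS details from the registry at the start of every test run.
function GeneralEvents_OnStartTest(Sender)
{
  let Section = Storages.Registry("SOFTWARE\\Microsoft\\Windows NT", HKEY_LOCAL_MACHINE);
  let regKeyString = Section.GetSubSection("CurrentVersion").Name;
  let productName = Storages.Registry(regKeyString, HKEY_LOCAL_MACHINE, 1, true).GetOption("ProductName", "");
  let currentBuild = Storages.Registry(regKeyString, HKEY_LOCAL_MACHINE, 1, true).GetOption("CurrentBuild", "");
  Log.Message("Windows Version: " + productName + " Build: " + currentBuild);
}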
Drag-and-drop object to another object

TestComplete's built-in Drag action is designed to drag a specific Alias object from a given point by a pixel offset (i.e., drag Alias....Button by X/Y pixels). While useful as a "jumping off point", this approach can be problematic for obvious reasons (dynamic UIs, changing screen resolutions, inconsistent offsets), leading to brittle tests.

Fortunately, TestComplete method parameters offer a high degree of customisation. By evaluating and utilising exposed properties like ScreenTop/ScreenLeft, we can create more robust and adaptable drag-and-drop actions. This allows us to instead dynamically reference the coordinates of a target object, or better still use exposed values in simple calculations, like figuring out the offset value for the Drag action.

This Python script example calculates the offset using the common ScreenTop and ScreenLeft positions of both objects, then passes the difference to the Drag action, allowing us to drag one given object to another given object with much more flexibility:

def dragToObject(clientObj, targetObj):
    # Using ScreenLeft property to drag horizontally; ScreenTop for vertical
    fromObjectTop = aqObject.GetPropertyValue(clientObj, "ScreenTop")
    fromObjectLeft = aqObject.GetPropertyValue(clientObj, "ScreenLeft")
    toObjectTop = aqObject.GetPropertyValue(targetObj, "ScreenTop")
    toObjectLeft = aqObject.GetPropertyValue(targetObj, "ScreenLeft")
    dragY = toObjectTop - fromObjectTop
    dragX = toObjectLeft - fromObjectLeft
    Log.Message("Dragging " + aqConvert.IntToStr(dragX) + "px horizontally and " + aqConvert.IntToStr(dragY) + "px vertically")
    clientObj.Drag(-1, -1, dragX, dragY)

You can then even utilise this in your KeywordTests by changing the input parameter Mode to Onscreen Object, which enables the Object Picker. Now you have a way to drag one object to another - for example, a value into a table. Hope this gets the creative juices going - can you think of other ways you might handle dynamic values in other Action methods?

Regards,
Mike
TestComplete Solutions Engineer
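A quick usage sketch, assuming both objects are already in the Name Mapping; the alias names below are hypothetical:

# Hypothetical aliases: drag a grid row onto a table cell
order_row = Aliases.MyApp.MainForm.OrdersGrid.Row0
target_cell = Aliases.MyApp.MainForm.SummaryTable.Cell0
dragToObject(order_row, target_cell)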