Recent Content
Stop Skimming PDFs, Start Automating PDF Testing
On the surface, PDFs look simple, but testing them is a whole different story. Invoices, contracts, statements, compliance reports… they’re often the last thing that lands in a customer’s hands. That also means even the smallest issue, like a missing field or a misplaced decimal, can turn into something big.

The challenge is that PDFs aren’t like web pages or apps, where you can easily inspect elements. They’re containers packed with content, layout, images, and data from different systems. When you add in dynamic content that changes for every customer, formatting that has to stay perfect, and the regulatory risks in industries like finance or healthcare, you start to see why manual testing just doesn’t cut it. It’s slow, inconsistent, and doesn’t scale.

This is where automation becomes essential. With automation, you can make sure data is always accurate, layouts stay consistent, and testing scales across thousands of documents without slowing down your team. Instead of spending hours opening PDFs by hand, QA can focus on higher-value work while still knowing that every report or statement going out the door is right.

That’s exactly where TestComplete comes in. It’s built to handle the tough parts of PDF testing so you don’t have to. You can validate content down to the last character, run visual checks to keep layouts consistent, and plug it all straight into your CI/CD pipeline. The result is faster releases, fewer headaches, and a lot more confidence that the documents your customers see are exactly as they should be.

Click this link to check out a quick demo and see how TestComplete makes PDF testing easier and more reliable in action.
Accelerating Quality: How TestComplete Leads in Test Creation, Execution, and Object Recognition

Temil Sanchez, the new Product Manager for TestComplete, shares insights from a recent evaluation comparing TestComplete and Ranorex. TestComplete stood out for its faster test creation, intuitive interface, and superior object recognition, which reduce maintenance and ensure robust automation. Looking ahead, the focus is on integrating AI to further accelerate test creation, enhance resilience, and help teams release quality software faster.
Checking API Status in TestComplete

Introduction

I first saw the need to verify the state of an API several years ago with an application that used an address validation API in several of its operations. If this API was down or did not respond in a timely manner, many of the automated test cases would run and fail. In this case, and in others, I have found that making a simple call to the API to check the returned status code allowed me to skip, fail, or log a warning with a logical message instead of allowing the application to fail with another, less direct error message due to the API issue.

The aqHttp Object

The TestComplete aqHttp object and its methods are very useful for performing simple checks like this. They are also useful for other, more complex tasks, like leveraging an API to return a test data set, or verifying that certain data is returned prior to running tests against the UI that depend on that data.

Sending and Receiving HTTP Requests From Script Tests

More Complete API Testing

Most full-scale API testing should be done using a tool like ReadyAPI or SoapUI. Both of these tools integrate with TestComplete or can be used alone, and they provide many more capabilities and automation options.

Integration With ReadyAPI and SoapUI

Code Example

Here I have provided a working example of how to code a GET request using aqHttp.CreateRequest to confirm that an API returns a status code of 200; it also logs the returned records.

    function sendGetRequest() {
        let resourcePath = "https://restcountries.com/v3.1/all";
        let resourceQuery = "?fields=name,capital";
        let url = resourcePath + resourceQuery;
        try {
            // Send GET request
            let response = aqHttp.CreateRequest("GET", url, false).Send();
            // Check for successful response
            if (response.StatusCode === 200) {
                // Parse JSON response
                let allData = JSON.parse(response.Text);
                Log.Message("Total records received: " + allData.length);
                // Process each record
                allData.forEach((record, index) => {
                    Log.Message("Record " + (index + 1) + ": " + JSON.stringify(record));
                });
                return true; // Send a bool back to the calling function.
            } else {
                throw new Error("Failed to fetch data. Status code: " + response.StatusCode);
            }
        } catch (error) {
            Log.Error("Error during request or data processing: " + error.message);
        }
    }

Enhancements

You could accept parameters for the resourcePath and resourceQuery (see the sketch following this post).
Parameterize the logging loop or remove it.
Return the JSON to the calling function for use.
Perform other tasks based on the return code.

Conclusion

With the growing use of API calls in desktop applications, and the fact that APIs are the foundation of almost any web site, checking an API before a test case runs is nearly a requirement for consistent test runs and good error logging. This small script can bring big rewards to your test runs and reports. Cheers! I hope you find it as useful as I have! If you find my posts helpful drop me a like! 👍 Leave a comment if you want to contribute or have a better solution or an improvement. 😎 Have a great day!
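As a follow-up to the Enhancements list above, here is a minimal sketch of the parameterized variant, reusing the same aqHttp call as the example; the helper name checkApiStatus and its parameters are illustrative assumptions, not part of the original post.

    // A sketch of the parameterized enhancement: accept the resource path and
    // query as arguments and return a simple pass/fail for the status check.
    function checkApiStatus(resourcePath, resourceQuery) {
        let url = resourcePath + (resourceQuery || "");
        try {
            let response = aqHttp.CreateRequest("GET", url, false).Send();
            if (response.StatusCode === 200)
                return true;
            Log.Warning("API check failed for " + url + ". Status code: " + response.StatusCode);
            return false;
        } catch (error) {
            Log.Error("Error while checking API status: " + error.message);
            return false;
        }
    }

    // Usage: stop the run early with a clear message when the API is down.
    if (!checkApiStatus("https://restcountries.com/v3.1/all", "?fields=name,capital"))
        Runner.Stop(true);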
Azure DevOps Pipelines - Running “Headless” Tests

One of the most common requests I get from TestComplete customers who run their tests from Azure DevOps pipelines is: “How can I execute my tests on remote virtual machines (VMs) with self-hosted agents without needing to maintain an active terminal session?”

Note: This document does not detail setting up the Azure DevOps pipeline or the self-hosted Azure agents. That process can be found in the TestComplete documentation here: Integration With Azure DevOps and Team Foundation Server via TestComplete Test Adapter | TestComplete Documentation

There are some details in the TestComplete documentation on how to accomplish this type of configuration, but I’ll go through the full process and how to configure this solution with two options. That documentation can be found online: Disconnecting From Remote Desktop While Running Automated Tests | TestComplete Documentation

The first option is to set up multiple virtual machines (VMs). You start by logging into one VM via Remote Desktop Protocol (RDP). From there, you connect to other Tester VMs running Microsoft agents. After issuing a command to release the RDP session back to the Admin user, the Tester VMs remain active, allowing TestExecute to run tests triggered by the pipeline.

The second option is to use a single, high-performance VM to host all remote sessions and agents. Here, multiple Tester sessions run on the same VM, each with its own agent handling pipeline requests. Each session runs its own TestExecute instance, as long as enough floating licenses are available. The Admin session connects to the VM and then accesses each Tester session via RDP using the VM’s loopback address (e.g., 127.0.0.2). However, this method requires the VM to be part of a Microsoft Domain to support more than two simultaneous RDP sessions.

Here are the commands we’ll be taking advantage of in this operation.

To gracefully disconnect from an active RDP terminal session, converting it into a console session, open a Command Prompt (CMD) in the Admin user session and use the following:

    %windir%\System32\tscon.exe RDP-Tcp#NNN /dest:console

where RDP-Tcp#NNN is the ID of your current Remote Desktop session, for example, RDP-Tcp#5. You can see it in the Windows Task Manager on the Users tab, in the Session column. The Session column is hidden by default; to show it, right-click somewhere within the row that displays CPU, Memory, and so on, then choose Session in the context menu that opens.

Our documentation also includes an easy-to-use batch file option that can be run as an Administrator to more easily disconnect from the remote Admin session. Create a batch file with this code:

    for /f "skip=1 tokens=3" %%s in ('query user %USERNAME%') do (
        %windir%\System32\tscon.exe %%s /dest:console
    )

Create a desktop shortcut to this file. To do this, right-click the batch file and select Send to > Desktop (create shortcut). In the shortcut properties, click Advanced and select Run as administrator. Double-click this shortcut on the remote computer (in the Remote Desktop Connection window) or call this batch file at the beginning of your tests (provided that the tests are run as an Administrator).

To gracefully reconnect to a remote console session, use the following command in an open CMD prompt:

    mstsc -v:servername /F -console

where servername is the address of the remote VM hosting the Admin session.
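If you prefer the “call this batch file at the beginning of your tests” route, a minimal sketch in JavaScript might look like the following; the batch file path is a hypothetical location used for illustration, and the tests must be running as an Administrator for tscon to succeed.

    // A sketch of invoking the disconnect batch file from a script at the
    // start of a test run. The path below is hypothetical; point it at the
    // batch file created above. Requires the tests to run as Administrator.
    function disconnectRdpSession() {
        let batchPath = "C:\\Scripts\\disconnect_rdp.bat"; // hypothetical location
        if (aqFile.Exists(batchPath))
            // 1 = normal window, true = wait for the command to complete
            WshShell.Run("\"" + batchPath + "\"", 1, true);
        else
            Log.Warning("Disconnect batch file not found: " + batchPath);
    }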
Option 1: Multiple VMs in the Testing Network

1. RDP connect to and set up the remote VMs to be used for the Tester agents, have their self-hosted Microsoft agents active and listening for jobs, then disconnect the RDP sessions.
2. RDP connect to the VM to be used as the “Admin” session.
3. From the “Admin” VM, RDP connect to each Tester VM. (Connecting to more than 2 remote RDP sessions will require the VMs to be on a Domain with the appropriate settings configured to allow for more than 2 remote sessions.) Your Admin system should look something like this image, where I am connected to 2 of my Tester VMs with their agents listening for jobs:
4. Ensure that your agents are active in the Azure DevOps agent pool:
5. Disconnect from the Admin user session using the prepared batch file or the direct command with the RDP session ID:
6. Run the pipeline to validate that tests execute as expected:
7. Reconnect to the Admin VM to see test results from the running agents using the command: mstsc -v:servername /F -console

Option 2: Using a Single Virtual Machine

1. RDP connect to the VM to be used for testing using the “Admin” account. I’m using my Tester01 account in this demonstration.
2. RDP connect to the other users on the same VM using the 127.0.0.2 loopback address.
3. Run the Microsoft self-hosted agents on these connected sessions.
4. Disconnect from the Admin user session using the prepared batch file or the direct command with the RDP session ID.
5. Prepare the pipeline. In my case, I am not on a Domain, so I am limited to ONE RDP session from my “Admin” user. I’m disabling the agent on Tester03, since we are using Tester02 for our pipeline.
6. Execute the pipeline and validate successful test executions:
7. Reconnect to the “Admin” session to make changes to the configuration as needed: mstsc -v:servername /F -console

Conclusion

In conclusion, these two methods provide testing options from Azure DevOps using VMs without requiring a dedicated terminal session from your development system. They still require a session, but these solutions provide ways to work around that “directly monitored” connection. These solutions also give you the security of running tests on VMs in your own environment; easily secured, controlled, and isolated by your firewall and security requirements.

Full demonstration video link: Azure DevOps Pipelines - Remote "Headless" Testing
Data Structures for Application Object Definitions

Library files containing data structure objects like JavaScript objects, Map objects, and JSON objects work well across multiple projects and suites. These files are also nice to work with in source control: changes in any library file can be easily seen and tracked. Each of these object types integrates seamlessly with methods like FindChild and works equally well with desktop and web applications.

JavaScript objects are directly usable in TestComplete scripts. Map objects have some advantages over JavaScript objects, such as built-in looping methods and properties. This adds some complexity in implementation, as these methods must be used to access the data structure. JSON is a 'portable' object definition; many different languages support JSON. JSON requires a parse operation to return a JavaScript object prior to use in JavaScript. JSON would be preferred if the object definitions were created directly from the application code base by development.

Object Definition Examples

This code defines an object named customOptionObjDefs. It contains two UI object definitions: btnOK and btnCancel (it could contain many definitions). The code below shows examples of a JavaScript object, a JavaScript Map, and a JSON object, along with the code used with the TestComplete method FindChild.

JavaScript Object

    const customOptionObjDefs = {
        btnOK: { propertyNames: ["WPFControlName", "WPFControlText"], propertyValues: ["btnOK", "OK"], depth: 16 },
        btnCancel: { propertyNames: ["WPFControlName", "WPFControlText"], propertyValues: ["btnCancel", "Cancel"], depth: 16 }
    };

    const btnOKDef = customOptionObjDefs.btnOK; // directly accessible
    const btnOKObject = parentObject.FindChild(btnOKDef.propertyNames, btnOKDef.propertyValues, btnOKDef.depth);

JavaScript Map Object

    const customOptionObjDefs = new Map([
        ["btnOK", { propertyNames: ["WPFControlName", "WPFControlText"], propertyValues: ["btnOK", "OK"], depth: 16 }],
        ["btnCancel", { propertyNames: ["WPFControlName", "WPFControlText"], propertyValues: ["btnCancel", "Cancel"], depth: 16 }]
    ]);

    const btnOKDef = customOptionObjDefs.get("btnOK"); // .get method
    const btnOKObject = parentObject.FindChild(btnOKDef.propertyNames, btnOKDef.propertyValues, btnOKDef.depth);

JSON Object

    // JSON arrives as a string (for example, read from a file or produced by development).
    const customOptionObjDefsJson = `{
        "btnOK": { "propertyNames": ["WPFControlName", "WPFControlText"], "propertyValues": ["btnOK", "OK"], "depth": 16 },
        "btnCancel": { "propertyNames": ["WPFControlName", "WPFControlText"], "propertyValues": ["btnCancel", "Cancel"], "depth": 16 }
    }`;

    const btnOKDef = JSON.parse(customOptionObjDefsJson).btnOK; // .parse method returns a JavaScript object
    const btnOKObject = parentObject.FindChild(btnOKDef.propertyNames, btnOKDef.propertyValues, btnOKDef.depth);

I prefer to organize object definition libraries in files by application and form, importing them in scripts or in the class structure of a project using 'require'. These files can be stored externally to the project, shared, and organized as desired.

    const orderEntryDefs = require("orderEntryDefinitions");

Sometimes application objects are not well named, or require a variable to be calculated dynamically in order to be defined. These objects are a challenge to define for automation and usually lead to brittle code. In such cases, code is written to determine the values needed for definition, and these are passed directly to the FindChild method or to a 'helper' class method or function. In most cases, a helper class method or function is used to create and return the objects defined in the definition libraries.
The use of a 'helper' class method or function will provide a layer of abstraction, centralized error handling, and more modular code. The file(s) containing these functions or methods would also be imported into each script using 'require'.

Conclusion

The use of data structures provides a modular way to define and create instances of objects for automation scripts. These objects can easily be looped over to find and create container objects for entire forms, or for very complex objects like grids or trees. I have found that creating all available objects for a form (or in groups, if the form changes dynamically) and storing them in an object makes the script easier to write and to read. A sketch of such a looping helper follows this post.

WW Wood Products Inc.
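Building on the post above, here is a minimal sketch of such a looping helper, assuming definitions shaped like the customOptionObjDefs JavaScript object from the examples; the function name findObjectsFromDefs and the parentObject parameter are illustrative, not from the original post.

    // A sketch of a helper that loops over a definition library and returns
    // all found objects keyed by definition name. Provides the centralized
    // error handling described above.
    function findObjectsFromDefs(parentObject, objDefs) {
        let found = {};
        for (let name in objDefs) {
            let def = objDefs[name];
            let obj = parentObject.FindChild(def.propertyNames, def.propertyValues, def.depth);
            if (obj.Exists)
                found[name] = obj;
            else
                Log.Warning("Object not found for definition: " + name);
        }
        return found;
    }

    // Usage: build a container of all dialog objects in one call.
    // let dlg = findObjectsFromDefs(parentObject, customOptionObjDefs);
    // dlg.btnOK.Click();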
How To: Read data from the Windows Registry

Hello all, I have recently learned how to retrieve data from the Windows registry in JavaScript test units. I am using this to return the OS information and application path information. This is very useful when added to the EventControl_OnStartTest event code, as it allows you to log OS information and other needed data at each test run (a sketch of such a handler follows this post). Some test management systems may provide this information for you, or it may be logged in the data produced in a pipeline run. This will embed the information directly into your test log.

SmartBear KB Links:
Storages Object
Storages Object Methods
Storages.Registry Method
Section Object
Get SubSection Method

This bit of code will return the Product Name and Current Build from the registry. This location may vary between OSs, so you will want to check this with RegEdit.

    let Section = Storages.Registry("SOFTWARE\\Microsoft\\Windows NT", HKEY_LOCAL_MACHINE);
    let regKeyString = Section.GetSubSection("CurrentVersion").Name;
    let productIdString = Storages.Registry(regKeyString, HKEY_LOCAL_MACHINE, 1, true).GetOption("ProductName", "");
    let currentBuildString = Storages.Registry(regKeyString, HKEY_LOCAL_MACHINE, 1, true).GetOption("CurrentBuild", "");
    Log.Message("Windows Version: " + productIdString + " Build: " + currentBuildString);

I have also found the need to find and set an application path and work folder in the project TestedApp for running through a pipeline, because the pipeline deploys the application to a non-standard path.

    let Section = Storages.Registry("SOFTWARE\\WOW6432Node\\<_yourSectionName>\\", HKEY_LOCAL_MACHINE);
    let regKey = Section.GetSubSection(<_yourSubSectionName>).Name;
    let Path = Storages.Registry(regKey, HKEY_LOCAL_MACHINE, 0, true).GetOption("", "");
    let WorkFolder = Storages.Registry(regKey, HKEY_LOCAL_MACHINE, 0, true).GetOption("Path", "");
    let appIndex = TestedApps.Find(<_yourAppName>);
    if (appIndex >= 0) {
        if (TestedApps.Items(<_yourAppName>).Path != Path) {
            TestedApps.Items(<_yourAppName>).Path = Path;
        }
        if (TestedApps.Items(<_yourAppName>).WorkFolder != WorkFolder) {
            TestedApps.Items(<_yourAppName>).Params.ActiveParams.WorkFolder = WorkFolder;
        }
    }
    else {
        Log.Error("TestedApp " + <_yourAppName> + " does not Exist.");
        Runner.Stop(true);
    }

I hope you find these links and code examples as useful as I have! Have a great day!
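Tying this to the OnStartTest suggestion above, a minimal sketch of an event handler wrapping the first snippet might look like this; it assumes TestComplete's default GeneralEvents event control, with the handler named per the <EventControlName>_OnStartTest convention.

    // A sketch of logging the Windows version at the start of every test run.
    // Assumes the default GeneralEvents event control with the OnStartTest
    // event assigned to this handler.
    function GeneralEvents_OnStartTest(Sender) {
        let section = Storages.Registry("SOFTWARE\\Microsoft\\Windows NT", HKEY_LOCAL_MACHINE);
        let regKeyString = section.GetSubSection("CurrentVersion").Name;
        let reg = Storages.Registry(regKeyString, HKEY_LOCAL_MACHINE, 1, true);
        Log.Message("Windows Version: " + reg.GetOption("ProductName", "") +
                    " Build: " + reg.GetOption("CurrentBuild", ""));
    }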
Drag-and-drop object to another object

TestComplete's built-in Drag action is designed to drag a specific Alias object from a given point to another point at a pixel offset (i.e. drag Alias....Button by X/Y pixels). While useful as a "jumping off point", this approach can be problematic for obvious reasons (dynamic UIs, changing screen resolutions, inconsistent offsets), leading to brittle tests.

Fortunately, TestComplete method parameters offer a high degree of customisation. By evaluating and utilising exposed properties like ScreenTop/ScreenLeft, we can create more robust and adaptable drag-and-drop actions. This allows us to instead dynamically reference the coordinates of a target object, or better still, use exposed values in simple calculations, like figuring out the offset value for the Drag action.

This Python script example calculates the offset using the common ScreenTop and ScreenLeft positions of both objects, then passes the difference to the Drag action, allowing us to drag one given object to another given object with much more flexibility:

    def dragToObject(clientObj, targetObj):
        # Using ScreenLeft property to drag horizontally; ScreenTop for vertical
        fromObjectTop = aqObject.GetPropertyValue(clientObj, "ScreenTop")
        fromObjectLeft = aqObject.GetPropertyValue(clientObj, "ScreenLeft")
        toObjectTop = aqObject.GetPropertyValue(targetObj, "ScreenTop")
        toObjectLeft = aqObject.GetPropertyValue(targetObj, "ScreenLeft")
        dragY = toObjectTop - fromObjectTop
        dragX = toObjectLeft - fromObjectLeft
        Log.Message("Dragging " + aqConvert.IntToStr(dragX) + "px horizontally and " + aqConvert.IntToStr(dragY) + "px vertically")
        clientObj.Drag(-1, -1, dragX, dragY)

You can then even utilise this in your KeywordTests, by changing the input parameter Mode to Onscreen Object, which enables the Object Picker.

Now you have a way to drag one object to another - for example, a value into a table. Hope this gets the creative juices going - can you think of other ways you might handle dynamic values in other Action methods?

Regards,
Mike
TestComplete Solutions Engineer
Click Object not on screen

I have an application in .NET that looks like the old Windows 8 Metro interface (lots of "tiles" with a horizontal scroll bar). Using the object spy and aliases, I was able to find my "tile" and create a script to click it. As long as the tile is on the screen (scrolling left or right to move my tiles), it works. However, as soon as that tile is not on the screen, it fails to find it, even though it is mapped. I am not a developer, but I am trying to learn. Any ideas on how to make the code click the object?

    Aliases.MYCustomAPP.Tile_WO.Click();

The full path of the object is:

    Sys.Process("MYCustomAPP").WPFObject("HwndSource: DynamicWindow", "MYCustomAPP : test").WPFObject("DynamicWindow", "MYCustomAPP : test", 1).WPFObject("LayoutRoot").WPFObject("radTileList").WPFObject("TileGroupContainer", "Production", 7).WPFObject("Tile", "", 1).WPFObject("Grid", "", 1).WPFObject("TextBlock", "Work Order", 1)
TestComplete: Headless Browser Testing in Parallel

In addition to standard web test execution, with the Intelligent Quality Add-on TestComplete supports running tests from an Execution Plan in a headless browser environment. Browser options in this environment can be configured for Chrome, Edge, and Firefox, but require the installation of the supported web drivers for each of these browser types. More detailed information can be found in our support documentation on how to set up these drivers if TestComplete does not install them automatically when the environments are configured.

Headless Browser Testing in TestComplete

Here are some quick links to sites to download these web drivers as well:
Chrome
Edge
Firefox

The First Step: Running Parallel Tests in Sequence

To configure the desired testing environments in the Execution Plan, one simply needs to add the desired keyword tests or scripts to the Execution Plan, choose the Local Headless Browser option, enter the desired endpoint URL, and configure the desired test environments. It should be noted that each test in the Execution Plan needs to be configured for the URL and the test environments individually, and that each environment can only be configured once for each test, for each resolution option available (there are up to 5 resolution settings for each browser). The image below details these configuration options.

In this setup, each test in the Execution Plan will execute on the configured environments in parallel when the Execution Plan is started. The tests fire sequentially, as shown here: "BearStore_Contact_Us" runs on its three environments simultaneously, then "Google_Panda_search" on its configured environments, and finally "Google_Starfish_Search" on the environments configured for that test.

The following figure shows the successful completion of this execution plan. Note the start times for each test showing them running in parallel, then executing the next keyword test in the Execution Plan sequence, also in parallel.

We can even up the count for each keyword test. This will result in running each test case the set number of times, also in sequence, for the given count, before moving on to the next test case. The resulting test log shows us those parallel test runs in sequential order:

The Next Step: Running Parallel Tests in Parallel

Now that we have run our three keyword tests in parallel environments, but in sequential order, let's run them in parallel environments, in parallel! To accomplish this task, we need to set up a parallel Group in our Execution Plan and add our desired tests into this Group. Then configure the Local Headless Browser URL and environments, and run the Execution Plan.

Since we are now launching all of our test cases (3 test cases x 3 configurations x 3 execution counts), TestComplete will be running 27 headless browser sessions. As you can imagine, this is extremely processor intensive. My laptop, with a few applications running, including TestComplete, hovers between 25-50% CPU utilization. Starting this test run will easily stress my CPU at 100% for most of the duration of the test.

Our log shows all 27 tests starting at the same time. The test results also show several failures for very simple tests, most of which are associated with failures to reach the designated sites, likely caused by lack of system resources or over-taxed network connections.
In conclusion, Local Headless Browser testing can be a very useful tool for running tests in "Sequential-Parallel" or "Parallel-Parallel" modes, but system resources are a factor to consider to ensure your tests run cleanly and successfully without generating false failures.