Recent Content
TestComplete: Headless Browser Testing in Parallel
In addition to standard web test execution, TestComplete with the Intelligent Quality Add-on supports running tests from an Execution Plan in a headless browser environment. Browser options in this environment can be configured for Chrome, Edge, and Firefox, but each browser type requires its supported web driver to be installed. If TestComplete does not install these drivers automatically when the environments are configured, our support documentation explains how to set them up: see Headless Browser Testing in TestComplete. Here are some quick links to sites where you can download these web drivers as well: Chrome, Edge, Firefox.

The First Step: Running Parallel Tests in Sequence

To configure the desired testing environments in the Execution Plan, simply add the desired keyword tests or scripts to the Execution Plan, choose the Local Headless Browser option, enter the desired endpoint URL, and configure the desired test environments. Note that each test in the Execution Plan needs to be configured for the URL and the test environments individually, and that each environment can only be configured once per test for each available resolution option (there are up to 5 resolution settings for each browser). The image below details these configuration options.

In this setup, each test in the Execution Plan will execute on its configured environments in parallel when the Execution Plan is started. The tests fire sequentially, as shown here: "BearStore_Contact_Us" runs on its three environments simultaneously, then "Google_Panda_search" on its configured environments, and finally "Google_Starfish_Search" on the environments configured for that test. The following figure shows the successful completion of this Execution Plan. Note the start times for each test: the tests run in parallel, then the next keyword test in the Execution Plan sequence executes, also in parallel.

We can even raise the count for each keyword test. This runs each test case the set number of times, still in sequence, before moving on to the next test case, with the resulting test log showing those parallel test runs in sequential order.

The Next Step: Running Parallel Tests in Parallel

Now that we have run our three keyword tests in parallel environments, but in sequential order, let's run them in parallel environments, in parallel! To accomplish this, we need to set up a parallel Group in our Execution Plan and add our desired tests into this Group, then configure the Local Headless Browser URL and environments and run the Execution Plan.

Since we are now launching all of our test cases at once (3 test cases x 3 configurations x 3 execution counts), TestComplete will be running 27 headless browser sessions. As you can imagine, this is extremely processor intensive. My laptop, with a few applications running (including TestComplete), hovers between 25-50% CPU utilization; starting this test run easily pins the CPU at 100% for most of the duration of the run. Our log shows all 27 tests starting at the same time. The test results also show several failures for very simple tests, most of which are failures to reach the designated sites, likely caused by a lack of system resources or an over-taxed network connection.
In conclusion, Local Headless Browser testing can be a very useful tool for running tests in "Sequential-Parallel" or "Parallel-Parallel" modes, but system resources are a factor to consider to ensure your tests are running cleanly and successfully without generating false failures.
TestComplete with Zephyr Scale

In this post, we are going to talk about SmartBear's UI testing tool, TestComplete, and writing its results to Zephyr Scale in an automated fashion. When we think about using TestComplete with any test management tool, it can be accomplished in two ways: natively inside TestComplete, or by integrating with a CI-CD system. When we are using Zephyr Scale, both ways utilize the Zephyr Scale REST API. Linking to Zephyr Scale natively from TestComplete is a script-heavy approach; an example of that can be found here. Today we are going to go into detail about using TestComplete with a CI-CD system and sending the results to Zephyr Scale.

Now let's talk about automating this process. The most common way to automate triggering TestComplete tests is through one of its many integrations. TestComplete can integrate with any CI-CD system, as it has command-line options, REST API options, and many native integrations such as Azure DevOps or Jenkins. Using a CI-CD system makes managing the executions at scale much easier. The general approach to this workflow is a two-stage pipeline, something like:

node {
    stage('Run UI Tests') {
        // Run the UI tests using TestComplete
    }
    stage('Pass Results') {
        // Pass results to Zephyr Scale
    }
}

First, we trigger TestComplete to execute our tests somewhere. This could be a local machine or a cloud computer, anywhere, and we store the test results in a relative location. Next, we use a batch file (or the like) to take the results from that relative location and send them to Zephyr Scale.

When executing TestComplete tests, there are easy ways to write the results to a specific location in an automated fashion. We will look at options through the CLI as well as what some of the native integrations offer. Starting with the TestComplete CLI, the /ExportSummary:File_Name flag generates a summary report for the test run and saves it to a fully qualified or relative path in JUnit XML structure. At a basic level we need this:

TestComplete.exe <ProjectSuite Location> [optional-arguments]

So something like this:

TestComplete.exe "C:\Work\My Projects\MySuite.pjs" /r /ExportSummary:C:\Reports\Results.xml /e

The /ExportSummary flag can point to a relative or fully qualified directory. We could also use one of TestComplete's many native integrations, like Jenkins, and specify in the settings where to output results:

Now that our TestComplete tests are executing and the results are writing to a relative location, we are ready for stage 2 of the pipeline: sending the results to Zephyr Scale. I think the easiest option is to use the Zephyr Scale API with the autoCreateTestCases option set to true. The command below is a replica of what you would use in a batch file script in the pipeline:

curl -H "Authorization: Bearer Zephyr-Scale-Token-Here" -F "file=@Relative-Location-of-Report-Here\report.xml;type=application/xml" "https://api.zephyrscale.smartbear.com/v2/automations/executions/junit?projectKey=Project-Key-Here&autoCreateTestCases=true"

After you modify the API token, relative location, and project key, you are good to run the pipeline. The pipeline should look something like this:

After we run the pipeline, let's jump into Jira to confirm the results are populating, complete with execution data and with transactional data to analyze the failed test steps.
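The batch-plus-curl route above is what the pipeline uses, but if you prefer the native, script-heavy approach mentioned at the start of this post, the same upload can also be sketched directly in a TestComplete JavaScript unit using the built-in aqHttp and aqFile objects. The snippet below is a minimal, hypothetical sketch only: the token, project key, and report path are placeholders, and the multipart body is built by hand, so treat it as a starting point rather than the documented integration.

// Hypothetical sketch: push a JUnit XML report to Zephyr Scale from a TestComplete script.
// The token, project key, and report path below are placeholder values.
function PostResultsToZephyrScale()
{
  var token = "Zephyr-Scale-Token-Here";          // placeholder API token
  var reportPath = "C:\\Reports\\Results.xml";     // placeholder report location
  var url = "https://api.zephyrscale.smartbear.com/v2/automations/executions/junit" +
            "?projectKey=Project-Key-Here&autoCreateTestCases=true";

  // Build a multipart/form-data body by hand around the JUnit XML text
  var boundary = "----TestCompleteBoundary";
  var xml = aqFile.ReadWholeTextFile(reportPath, aqFile.ctUTF8);
  var body = "--" + boundary + "\r\n" +
             "Content-Disposition: form-data; name=\"file\"; filename=\"Results.xml\"\r\n" +
             "Content-Type: application/xml\r\n\r\n" +
             xml + "\r\n" +
             "--" + boundary + "--\r\n";

  // Send the request and log what the API says back
  var request = aqHttp.CreatePostRequest(url);
  request.SetHeader("Authorization", "Bearer " + token);
  request.SetHeader("Content-Type", "multipart/form-data; boundary=" + boundary);
  var response = request.Send(body);
  Log.Message("Zephyr Scale responded with status " + response.StatusCode + ": " + response.Text);
}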
CrossBrowserTesting to BitBar Selenium Script Migration - QuickStart Guide

On June 21, 2022, SmartBear launched web application testing on our unified cloud testing solution that includes both browser and device testing on the BitBar platform! We have listened to our customers, and having one product for both web and device testing will better meet your needs. BitBar is a scalable, highly reliable, and performant platform with multiple datacenters. On it, you will have access to the latest browsers and devices, with additional deployment options to meet your needs, including private cloud and dedicated devices. For more frequently asked questions about the launch of web app testing on BitBar, visit our FAQ.

This QuickStart Guide is intended to walk through the conversion of your existing CrossBrowserTesting Selenium tests to use BitBar! We have updated Selenium hubs and API calls that will require conversion, though little else will be required. As with CrossBrowserTesting, we have sample scripts and a Selenium Capabilities Configurator you may use to build out the specific capabilities for the desired tested device. This tool can be found here.

To start the conversion, you will need your BitBar API key in place of the CrossBrowserTesting Authkey. This is the new method to authenticate the user and to make API calls. You may find your BitBar API key in account settings as described here. Most of the code examples and talking points for conversion refer to the CrossBrowserTesting Selenium sample script that is available here. All code snippets in this article are in Python.

Now that you have your BitBar API key, let's replace the original Authkey variable, located at line 18 in the CrossBrowserTesting sample script, with our new BitBar API key. This variable is used to connect to the BitBar API for processes such as taking screenshots and setting the status of your tests.

# Old CrossBrowserTesting sample Authkey variable
self.authkey = "<CrossBrowserTesting Authkey>"

# New BitBar API key variable
self.apiKey = "<insert your BitBar API Key here>"

Regarding the capabilities used in BitBar, there are a couple of things to note. First, we do not need to specify a 'record_video' capability as we do in CrossBrowserTesting. Videos are generated automatically for every test, so we no longer need to provide this capability; doing so will result in WebDriver errors. Second, we now also pass the BitBar API key along with the capabilities:

capabilities = {
    'platform': 'Windows',
    'osVersion': '11',
    'browserName': 'chrome',
    'version': '102',
    'resolution': '1920x1080',
    'bitbar_apiKey': '<insert your BitBar API key here>',
}

With BitBar we now have four Selenium hub options to choose from. Both US and EU Selenium hubs are available to aid in performance for your location. Separate hubs are also provided depending on the type of device (desktop vs. mobile) you wish to test against.
You may pick the applicable desktop or mobile hub closest to your location and replace your existing hub with the updated URL.

BitBar Desktop Selenium hubs:
US_WEST:DESKTOP: https://us-west-desktop-hub.bitbar.com/wd/hub
EU:DESKTOP: https://eu-desktop-hub.bitbar.com/wd/hub

BitBar Mobile Selenium hubs:
US_WEST:MOBILE: https://us-west-mobile-hub.bitbar.com/wd/hub
EU:MOBILE: https://eu-mobile-hub.bitbar.com/wd/hub

# start the remote browser on our server
self.driver = webdriver.Remote(
    desired_capabilities=capabilities,
    command_executor="https://us-west-desktop-hub.bitbar.com/wd/hub"
)

Now that we have our BitBar API key, capabilities, and Selenium hub set up, we can move on to altering our requests for screenshots and test result status. In the CrossBrowserTesting sample script, we use standalone API requests to create screenshots. For the BitBar sample scripts, we use the Selenium driver itself to create the screenshot and store it locally; afterwards, we use the BitBar API to push the locally saved image back to our project. The Swagger spec for our BitBar Cloud API can be found here.

On line 30 of the BitBar Selenium sample script, we set a location to store screenshots on the local machine. Note that this is set up to store files in a directory called 'screenshots' in the root folder of your project.

self.screenshot_dir = os.getcwd() + '/screenshots'

To take a screenshot and store it, we perform a 'get_screenshot_as_file' call, as seen on line 45 of the BitBar Selenium example script.

self.driver.get_screenshot_as_file(self.screenshot_dir + '/' + '1_home_page.png')

Now we want to take our screenshot and push it back to our project in BitBar. Note that in this case, for Python, we are using the 'httpx' module for the API calls back to BitBar: the 'requests' module only supports HTTP/1.1, and we need a module capable of handling HTTP/2 requests.

# Let's take our locally saved screenshot and push it back to BitBar!
# First we declare the 'params' and 'files' variables to hold our screenshot name and location.
params = {
    'name': self.screenshotName1,
}
files = {
    'file': open(self.screenshot_dir + '/' + self.screenshotName1, 'rb'),
}

# Now we build our API call to push the locally saved screenshot back to our BitBar project
print("Uploading our Screenshot")
response = httpx.post('https://cloud.bitbar.com/api/v2/me/projects/' + self.projectID +
                      '/runs/' + self.RunId + '/device-sessions/' + self.deviceRunID +
                      '/output-file-set/files',
                      params=params, files=files, auth=(self.apiKey, ''))

# Here we check that our upload was successful
if response.status_code == 201:
    print("Screenshot Uploaded Successfully")
else:
    print("Whoops, something went wrong uploading the screenshot.")

The final piece of the puzzle is to set our test result status. BitBar uses different naming conventions for test results: 'SUCCEEDED' and 'FAILED' for BitBar vs. 'pass' and 'fail' for CrossBrowserTesting.

# CrossBrowserTesting successful test syntax
self.test_result = 'pass'
# CrossBrowserTesting failed test syntax
self.test_result = 'fail'

# BitBar successful test syntax
self.test_result = 'SUCCEEDED'
# BitBar failed test syntax
self.test_result = 'FAILED'

Note that in the snippet provided below, we start by performing GET requests for session information. These requests are sent to the same Selenium hub we are using for the WebDriver, so make sure the hub address is set to the same hub used for the WebDriver.
We would recommend turning this into a variable to avoid having to switch it manually for alternate hubs. These processes are found in the 'tearDown' function of the updated CrossBrowserTesting sample script found here.

# get all necessary IDs of the current session
response = requests.get('https://us-west-desktop-hub.bitbar.com/sessions/' + self.driver.session_id,
                        auth=(self.apiKey, '')).json()
deviceRunID = str(response["deviceRunId"])
projectID = str(response["projectId"])
RunId = str(response["testRunId"])

Finally, we set the test result with the POST method below, using the session information retrieved with the GET request above. Note that the URL for the POST request will NOT need to be updated to reflect the specific Selenium hub in use.

# Here we make the API call to set the test's score
requests.post('https://cloud.bitbar.com/api/v2/me/projects/' + projectID +
              '/runs/' + RunId + '/device-sessions/' + deviceRunID,
              params={'state': self.test_result}, auth=(self.apiKey, ''))

Now that we have made these changes, you are ready to run your test through BitBar! As a summary: we replace our CrossBrowserTesting Authkey with the BitBar API key, set the new Selenium hub address, build new screenshot calls, and update the test result function.

Quick reference documentation:
BitBar Web FAQ.
Complete documentation with code samples in various languages can be found here.
Retrieve your BitBar API key in account settings as described here.
BitBar Selenium Capability Configurator and sample scripts are found here.
CrossBrowserTesting Capability Configurator and sample scripts are found here.
The Swagger spec for our BitBar Cloud API can be found here.

Here is our complete Python CBT-to-BitBar conversion script:

# Please visit http://selenium-python.readthedocs.io/ for detailed installation and instructions
# Getting started: http://docs.seleniumhq.org/docs/03_webdriver.jsp
# API details: https://github.com/SeleniumHQ/selenium#selenium
# Requests is the easiest way to make RESTful API calls in Python.
# You can install it by following the instructions here:
# http://docs.python-requests.org/en/master/user/install/

import unittest
from selenium import webdriver
import requests
import os
import httpx

class BasicTest(unittest.TestCase):

    def setUp(self):
        # get rid of the old way of doing auth; use just an API key
        self.apiKey = ''
        self.api_session = requests.Session()
        self.test_result = None
        self.screenshot_dir = os.getcwd() + '/screenshots'
        self.screenshotName1 = 'SS1.png'
        self.deviceRunID = ""
        self.projectID = ""
        self.RunId = ""

        # old platformName has been split into platform and osVersion
        capabilities = {
            'bitbar_apiKey': '',
            'platform': 'Linux',
            'osVersion': '18.04',
            'browserName': 'firefox',
            'version': '101',
            'resolution': '2560x1920',
        }

        # start the remote browser on our server
        self.driver = webdriver.Remote(
            desired_capabilities=capabilities,
            # the hub has changed; we are also no longer sending the user and pass through the hub
            # US hub url: https://us-west-desktop-hub.bitbar.com/wd/hub
            # EU hub url: https://eu-desktop-hub.bitbar.com/wd/hub
            command_executor="https://us-west-desktop-hub.bitbar.com/wd/hub"
        )
        self.driver.implicitly_wait(20)

    def test_CBT(self):
        # We wrap this all in a try/except so we can set pass/fail at the end
        try:
            # load the page url
            print('Loading Url')
            self.driver.get('http://crossbrowsertesting.github.io/selenium_example_page.html')

            # maximize the window - DESKTOPS ONLY
            # print('Maximizing window')
            # self.driver.maximize_window()

            # check the title
            print('Checking title')
            self.assertEqual("Selenium Test Example Page", self.driver.title)

            # take a screenshot and save it locally
            print("Taking a Screenshot")
            self.driver.get_screenshot_as_file(self.screenshot_dir + '/' + self.screenshotName1)

            # change pass to SUCCEEDED
            self.test_result = 'SUCCEEDED'
        except AssertionError as e:
            # delete cbt api calls
            # change fail to FAILED
            self.test_result = 'FAILED'
            raise

    def tearDown(self):
        print("Done with session %s" % self.driver.session_id)
        if self.test_result is not None:
            # get all necessary IDs of the current session
            response = requests.get('https://us-west-desktop-hub.bitbar.com/sessions/' + self.driver.session_id,
                                    auth=(self.apiKey, '')).json()
            self.deviceRunID = str(response["deviceRunId"])
            self.projectID = str(response["projectId"])
            self.RunId = str(response["testRunId"])

            # Here we make the API call to set the test's score
            requests.post('https://cloud.bitbar.com/api/v2/me/projects/' + self.projectID +
                          '/runs/' + self.RunId + '/device-sessions/' + self.deviceRunID,
                          params={'state': self.test_result}, auth=(self.apiKey, ''))

            # Let's take our locally saved screenshot and push it back to BitBar!
            # First we declare the 'params' and 'files' variables to hold our screenshot name and location.
            params = {
                'name': self.screenshotName1,
            }
            files = {
                'file': open(self.screenshot_dir + '/' + self.screenshotName1, 'rb'),
            }

            # Now we build our API call to push the locally saved screenshot back to our BitBar project
            print("Uploading our Screenshot")
            response = httpx.post('https://cloud.bitbar.com/api/v2/me/projects/' + self.projectID +
                                  '/runs/' + self.RunId + '/device-sessions/' + self.deviceRunID +
                                  '/output-file-set/files',
                                  params=params, files=files, auth=(self.apiKey, ''))

            # Here we check that our upload was successful
            if response.status_code == 201:
                print("Screenshot Uploaded Successfully")
            else:
                print("Whoops, something went wrong uploading the screenshot.")

        self.driver.quit()

if __name__ == '__main__':
    unittest.main(warnings='ignore')

Thanks for reading along, I hope this helps your conversion to BitBar!
Happy Testing!

Function to Read the text file and compare with the Baseline
This function will help to read a text file and compare a line in it with a baseline text string. A typical use case is reading a log (or any text file) and then comparing a line against the baseline or expected text output in the file.

// This function finds text in a text file and compares it with the baseline
function CompareTextLine(filePath, stringid, baseline)
{
  var read, line, result, nores;

  // open and read the text file (set the ANSI, UTF or other text file format here)
  read = aqFile.OpenTextFile(filePath, aqFile.faRead, aqFile.ctANSI);
  read.Cursor = 0;
  nores = 0;

  // search each line for the identifying key string
  while (! read.IsEndOfFile())
  {
    line = read.ReadLine();
    result = aqString.Find(line, stringid);
    if (result != -1) // -1 indicates the occurrence was not found
    {
      Log.Message("The key string identifier was found in this line = " + line +
                  "; the baseline to compare = " + baseline);
      if (baseline == line)
      {
        Log.Checkpoint("The string found matches the baseline");
        read.Close(); // close the file before the early return
        return;
      }
      else
      {
        Log.Warning("The value is not correct. Actual = " + line + "; Expected = " + baseline);
      }
      nores = 1;
    }
  }
  read.Close();

  // log an error if the string was not found
  if (nores == 0)
    Log.Error("String was not found in any of the text lines.");
}
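A call to this function might look like the following; the file path, key string, and expected line are hypothetical examples only:

// Hypothetical usage: check that the status line in a log file reads as expected
CompareTextLine("C:\\Logs\\app.log", "Status:", "Status: Completed");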
Function to wait for processing to complete

In many cases we have seen that processing times vary, so this function waits for the process to go idle, and you can edit the max timeout in it. This helps in cases of rendering and other process-intensive work where we can't use a wait command or watch any other progress indicator. I tried this with most CAD software, including AutoCAD, SolidWorks, Navisworks, Revit, Aveva PDMS, Bentley MicroStation, and BricsCAD, and it worked well.

// This function waits for CPU processing to go idle, with a timeout
function WaitForProcessing()
{
  Log.Message("Memory Usage : " + Sys.Process("PROCESS").MemUsage);
  var time1 = aqDateTime.Time();
  while (Sys.Process("PROCESS").CPUUsage != 0)
  {
    Log.Checkpoint("While loop: CPU usage " + Sys.Process("PROCESS").CPUUsage);
    aqUtils.Delay(2000, "Wait for Processing"); // waiting for 2 sec

    // timeout after a maximum of 10 sec
    var time2 = aqDateTime.Time();
    var diff = time2 - time1;
    if (diff >= 10000) // please edit the time accordingly
    {
      Log.Checkpoint("Max timeout for while loop with time : " + diff);
      break;
    }
  }
  Log.Message("While loop completed with CPU usage : " + Sys.Process("PROCESS").CPUUsage);
}
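The function above hardcodes the "PROCESS" name and the 10-second limit. A parameterized variant of the same idea might look like the sketch below; the process name "acad" and the 30-second timeout in the usage line are hypothetical examples, so adjust them for the application under test:

// Parameterized variant: wait for the named process to go CPU-idle, up to timeoutMs
function WaitForProcessIdle(processName, timeoutMs)
{
  var start = aqDateTime.Time();
  while (Sys.Process(processName).CPUUsage != 0)
  {
    aqUtils.Delay(2000, "Waiting for " + processName + " to go idle");
    if (aqDateTime.Time() - start >= timeoutMs)
    {
      Log.Warning("Timed out after " + timeoutMs + " ms; CPU usage is still " +
                  Sys.Process(processName).CPUUsage);
      break;
    }
  }
}

// Hypothetical call: wait up to 30 seconds for an AutoCAD process to go idle
WaitForProcessIdle("acad", 30000);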
TestComplete URL for the Jenkins TestComplete report page

I want to offer a way to get the URL of the TestComplete test results of a Jenkins build without accessing the Jenkins controller. Outside of those with an admin role for the Jenkins instance, granting teams access to the Jenkins controller is a security risk. TestComplete uses an epoch timestamp as part of the report page URL. This makes it a challenge to use the URL unless you know how to get the timestamp. Fortunately, there is an easy way to get it. Once the test plan completes, the results become available through the Jenkins TestComplete API. Any Jenkins ID that has read access to view the job will have access to use the API. In your job, once the test is completed, use the API to parse the XML for the URL. This allows you to keep your Jenkins instance safe and obtain the report URL for use in your job (such as a variable passed to an outgoing email).

Here is an example using a Windows batch build step in a freestyle job, after the TestComplete build step.

Echo off
curl -X GET "http://jenkins:8080/job/folder/job/name/%BUILD_NUMBER%/TestComplete/api/xml" --user %ID%:%pwd%>Test_Results.txt
powershell -c "((((gc Test_Results.txt) -replace '<url>','@') -replace '</report>','@') | ForEach-Object { $_.split('@') } | Select-String -Pattern '</url>' -SimpleMatch ) -replace '</url>','' | set-content TestComplete_URL.txt"
if exist TestComplete_URL.txt set /p TestComplete_URL=<TestComplete_URL.txt

Once you have the URL, you can use the inject plugin to use it in other steps of your job. For example, you can create a variable to be used in an outgoing email:

echo Line_1=$JOB_BASE_NAME ^<a href^=^"%TestComplete_URL%^"^>test results^</a^>: >>email.properties

Hope this information is of use.
Code for Tracking Tested App Info On Start Test.

Question

I like to know the information about the tested app each test ran on, so I wrote up a little code and put it in the OnStartTest test engine event.

Answer

This will run every time I run a test, telling me the tested app info. This is wonderful for tracking one-off test runs and which app version a test passed on and which it failed on.

https://support.smartbear.com/testcomplete/docs/testing-with/advanced/handling-events/index.html
https://support.smartbear.com/testcomplete/docs/reference/events/onstarttest.html?sbsearch=OnStartTe...

function EventControl_OnStartTest(Sender)
{
  try
  {
    Log.AppendFolder("< EventControl_OnStartTest >");
    Log.AppendFolder("Version Information");

    var FileName = "C:\\Program Files (x86)\\Some Folder\\TestedApp.exe";
    var VerInfo = aqFileSystem.GetFileInfo(FileName).VersionInfo;
    var FileInf = aqFileSystem.GetFileInfo(FileName);
    var HostName = Sys.HostName;

    Log.Message("File Name: " + FileInf.Name);
    Log.Message("File Version: " + VerInfo.FileMajorVersion + "." + VerInfo.FileMinorVersion + "." +
                VerInfo.FileBuildVersion + "." + VerInfo.FileRevisionVersion);
    Log.Message("File Date: " + FileInf.DateLastModified);
    Log.Message("Host Name: " + HostName);

    Log.PopLogFolder(); // close "Version Information"
    Log.PopLogFolder(); // close "< EventControl_OnStartTest >"
  }
  catch (err)
  {
    Runner.Halt("Exception: EventControl_OnStartTest - " + err.message); // stop the test run
  }
}
Converting UTC TimeDate in an Excel file

Task

Read the UTC DateTime values in an Excel file (attached), convert each value to the PST (Pacific Standard Time) time zone, and log each date in the following format: <month name> <day of month>, <full weekday name>. For example: September 8, Tuesday.

Steps

Read the dates from the Excel file using one of the approaches described in Working with Microsoft Excel Files.
Convert the dates using the aqDateTime object methods.
Log the date using the aqConvert.DateTimeToFormatStr method.

Solution

Solutions were given within the TechCorner Challenge event by different community members.

By SiwarSayahi:

// JavaScript
function DateFormat()
{
  // Creates a driver
  DDT.ExcelDriver("C:\\Challenge11\\DateTime.xls", "Sheet1");
  // Iterates through records
  while (! DDT.CurrentDriver.EOF())
  {
    // Display the date in the format <month name> <day of month>, <full weekday name>
    DisplayDate();
    DDT.CurrentDriver.Next();
  }
  // Closes the driver
  DDT.CloseDriver(DDT.CurrentDriver.Name);
}

function DisplayDate()
{
  for (var i = 0; i < DDT.CurrentDriver.ColumnCount; i++)
  {
    var dateA = aqConvert.VarToStr(DDT.CurrentDriver.Value(i));
    // Convert the date from UTC to PST
    var dateB = aqDateTime.AddHours(dateA, -8);
    var date = aqConvert.DateTimeToFormatStr(dateB, "%B %d, %A");
    Log.Message("The date of " + dateA + " is : " + date);
  }
}

By elanto:

// DelphiScript
procedure Challenge_11();
var fileExcel, exSheet, Valx: OleVariant;
    i: Integer;
begin
  fileExcel := Excel.Open('C:\Temp\DateTime.xlsx');
  exSheet := fileExcel.SheetByTitle['Sheet1'];
  for i := 1 to exSheet.RowCount do
  begin
    Valx := aqDateTime.AddHours(exSheet.Cell('A', i).Value, -8);
    Log.Message(aqConvert.DateTimeToFormatStr(Valx, '%B %d, %A'));
  end;
end;
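As a quick sanity check of the same conversion on a single value, the snippet below walks one hypothetical date through the -8 hour shift. Note that the fixed offset assumes standard (non-daylight-saving) Pacific time, and that string-to-date parsing depends on the system's locale settings:

// A single-value check of the UTC-to-PST conversion (the sample date is hypothetical)
var utc = aqConvert.StrToDateTime("9/8/2020 3:15:00 PM"); // parsed per the system locale
var pst = aqDateTime.AddHours(utc, -8);                   // fixed UTC-8 offset; DST is not handled
Log.Message(aqConvert.DateTimeToFormatStr(pst, "%B %d, %A")); // logs "September 08, Tuesday"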