TestComplete: Headless Browser Testing in Parallel
In addition to standard web test execution, the Intelligent Quality add-on lets TestComplete run tests from an Execution Plan in a headless browser environment. Browser options in this environment can be configured for Chrome, Edge, and Firefox, but each browser type requires its supported web driver to be installed. If TestComplete does not install the drivers automatically when the environments are configured, see the support topic "Headless Browser Testing in TestComplete" for details on setting them up. Web drivers can also be downloaded directly for Chrome, Edge, and Firefox.

The First Step: Running Parallel Tests in Sequence

To configure the desired testing environments in the Execution Plan, simply add the desired keyword tests or scripts to the Execution Plan, choose the Local Headless Browser option, enter the desired endpoint URL, and configure the desired test environments. Note that each test in the Execution Plan must be configured for the URL and the test environments individually, and that each environment can be configured only once per test for each available resolution option (there are up to 5 resolution settings for each browser). The image below details these configuration options.

In this setup, each test in the Execution Plan executes on its configured environments in parallel when the Execution Plan is started. The tests fire sequentially, as shown here: "BearStore_Contact_Us" runs on its three environments simultaneously, then "Google_Panda_search" on its configured environments, and finally "Google_Starfish_Search" on the environments configured for that test. The following figure shows the successful completion of this Execution Plan. Note the start times for each test, showing them running in parallel, then the next keyword test in the Execution Plan sequence executing, also in parallel.
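Before scaling a configuration up, it helps to estimate how many concurrent browser sessions it will launch. As a rough planning aid (this is plain arithmetic, not a TestComplete API), the total is simply tests multiplied by environments multiplied by execution count:

```python
def total_sessions(tests: int, environments: int, count: int) -> int:
    """Rough estimate of how many headless browser sessions an
    Execution Plan will launch when every test runs on every
    configured environment for the given execution count."""
    return tests * environments * count

# Three keyword tests, each on three environments, each run three times:
print(total_sessions(tests=3, environments=3, count=3))  # -> 27
```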
We can even raise the count for each keyword test. This runs each test case the set number of times, still in sequence, before moving on to the next test case, with the resulting test log showing those parallel test runs in sequential order.

The Next Step: Running Parallel Tests in Parallel

Now that we have run our three keyword tests in parallel environments, but in sequential order, let's run them in parallel environments, in parallel! To accomplish this, set up a parallel Group in the Execution Plan, add the desired tests into this Group, then configure the Local Headless Browser URL and environments and run the Execution Plan. Since we are now launching all of our test cases at once (3 test cases x 3 configurations x 3 execution counts), TestComplete will be running 27 headless browser sessions. As you can imagine, this is extremely processor intensive. My laptop, with a few applications running, including TestComplete, hovers between 25-50% CPU utilization; starting this test run easily pins the CPU at 100% for most of the duration. Our log shows all 27 tests starting at the same time. The test results also show several failures for very simple tests, most of them failures to reach the designated sites, likely caused by a lack of system resources or an over-taxed network connection.

In conclusion, Local Headless Browser testing can be a very useful tool for running tests in "Sequential-Parallel" or "Parallel-Parallel" modes, but system resources are a factor to consider to ensure your tests run cleanly and successfully without generating false failures.

TestComplete with Zephyr Scale
In this post, we are going to talk about SmartBear's UI testing tool, TestComplete, and writing the results to Zephyr Scale in an automated fashion. When we think about using TestComplete with any test management tool, it can be accomplished in two ways: natively inside TestComplete, or by integrating with a CI/CD system. When we are using Zephyr Scale, both ways utilize the Zephyr Scale REST API. Linking to Zephyr Scale natively from TestComplete is a script-heavy approach; an example of that can be found here. Today we are going to go into detail about using TestComplete with a CI/CD system and sending the results to Zephyr Scale.

Now let's talk about automating this process. The most common way to automate triggering TestComplete tests is through one of its many integrations. TestComplete can integrate with any CI/CD system, as it has command-line options, REST API options, and many native integrations such as Azure DevOps or Jenkins. Using a CI/CD system makes managing executions at scale much easier. The general approach to this workflow is a two-stage pipeline, something like:

node {
    stage('Run UI Tests') {
        // Run the UI tests using TestComplete
    }
    stage('Pass Results') {
        // Pass results to Zephyr Scale
    }
}

First, we trigger TestComplete to execute our tests somewhere (a local machine, a cloud computer, anywhere) and store the test results in a relative location. Next, we use a batch file (or similar) to take the results from that relative location and send them to Zephyr Scale. When executing TestComplete tests, there are easy ways to write the results to a specific location in an automated fashion. We will look at options through the CLI as well as what some of the native integrations offer. Starting with the TestComplete CLI, the /ExportSummary:File_Name flag generates a summary report for the test run and saves it to a fully qualified or relative path in JUnit-XML structure.
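The first stage of the pipeline boils down to a single CLI call. As an illustrative sketch (the helper function and example paths below are not part of TestComplete, just an illustration), the command can be assembled programmatically before handing it to the CI system:

```python
def build_testcomplete_command(exe, suite, report):
    # /r runs the project suite, /ExportSummary writes a JUnit-style XML
    # summary to the given path, /e closes TestComplete when the run ends.
    return [exe, suite, "/r", "/ExportSummary:" + report, "/e"]

cmd = build_testcomplete_command(
    "TestComplete.exe",
    r"C:\Work\My Projects\MySuite.pjs",
    r"C:\Reports\Results.xml",
)
# On a machine with TestComplete installed, subprocess.run(cmd) would start the run.
print(" ".join(cmd))
```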
At a basic level we need this:

TestComplete.exe <ProjectSuite Location> [optional-arguments]

So something like this:

TestComplete.exe "C:\Work\My Projects\MySuite.pjs" /r /ExportSummary:C:\Reports\Results.xml /e

The /ExportSummary report can be stored in a relative or fully qualified directory. We could also use one of TestComplete's many native integrations, like Jenkins, and specify in the settings where to output results.

Now that our TestComplete tests are executing and the results are writing to a relative location, we are ready for stage 2 of the pipeline: sending the results to Zephyr Scale. I think the easiest option is to use the Zephyr Scale API with the Auto-Create Test Case option set to true. The command below is a replica of what you would use in a batch file script in the pipeline:

curl -H "Authorization: Bearer Zephyr-Scale-Token-Here" -F "file=@Relative-Location-of-Report-Here\report.xml;type=application/xml" "https://api.zephyrscale.smartbear.com/v2/automations/executions/junit?projectKey=Project-Key-Here&autoCreateTestCases=true"

After you modify the API token, relative location, and project key, you are good to run the pipeline. The pipeline should look something like this:

After we run the pipeline, let's jump into Jira to confirm the results are populating. Even with execution data: Also, with transactional data to analyze the failed test steps:

CrossBrowserTesting to BitBar Selenium Script Migration - QuickStart Guide
On June 21, 2022, SmartBear launched web application testing on our unified cloud testing solution, which includes both browser and device testing on the BitBar platform! We have listened to our customers, and having one product for both web and device testing will better meet your needs. BitBar is a scalable, highly reliable, and performant platform with multiple datacenters. On it, you will have access to the latest browsers and devices, with additional deployment options to meet your needs, including private cloud and dedicated devices. For more frequently asked questions about the launch of web app testing on BitBar, visit our FAQ.

This QuickStart Guide walks through the conversion of your existing CrossBrowserTesting Selenium tests to use BitBar. We have updated Selenium hubs and API calls that will require conversion, though little else is required. As with CrossBrowserTesting, we have sample scripts and a Selenium Capabilities Configurator you may use to build out the specific capabilities for the desired tested device. This tool can be found here.

To start the conversion, you will need your BitBar API key in place of the CrossBrowserTesting Authkey. This is the new method for authenticating the user and making API calls. You can find your BitBar API key in account settings, as described here. Most of the code examples and talking points for conversion reference the CrossBrowserTesting Selenium sample script that is available here. All code snippets in this article are in Python.

Now that you have your BitBar API key, let's replace the original Authkey variable with our new BitBar API key, located at line 18 in the CrossBrowserTesting sample script. This step is for connecting to the BitBar API for processes such as taking screenshots and setting the status of your tests.
# Old CrossBrowserTesting sample Authkey variable
self.authkey = "<CrossBrowserTesting Authkey>"

# New BitBar API key variable
self.apiKey = "<insert your BitBar API Key here>"

Regarding the capabilities used in BitBar, there are a couple of things to note. First, we do not need to specify a 'record_video' capability as we do in CrossBrowserTesting. Videos are generated automatically for every test, so we no longer need to provide this capability; doing so will result in WebDriver errors. The second thing to note is that we now also pass the BitBar API key along with the capabilities:

capabilities = {
    'platform': 'Windows',
    'osVersion': '11',
    'browserName': 'chrome',
    'version': '102',
    'resolution': '1920x1080',
    'bitbar_apiKey': '<insert your BitBar API key here>',
}

With BitBar we now have four Selenium hub options to choose from. Both US and EU Selenium hubs are available to aid in performance for your location. Separate hubs are also provided depending on the type of device (desktop vs. mobile) you wish to test against. Pick the applicable desktop or mobile hub closest to your location and replace your existing hub with the updated URL.

BitBar Desktop Selenium hubs:
US_WEST:DESKTOP: https://us-west-desktop-hub.bitbar.com/wd/hub
EU:DESKTOP: https://eu-desktop-hub.bitbar.com/wd/hub

BitBar Mobile Selenium hubs:
US_WEST:MOBILE: https://us-west-mobile-hub.bitbar.com/wd/hub
EU:MOBILE: https://eu-mobile-hub.bitbar.com/wd/hub

# start the remote browser on our server
self.driver = webdriver.Remote(
    desired_capabilities=capabilities,
    command_executor="https://us-west-desktop-hub.bitbar.com/wd/hub"
)

Now that we have our BitBar API key, capabilities, and Selenium hub set up, we can move on to altering our requests for screenshots and test result status. In the CrossBrowserTesting sample script, we use standalone API requests to create screenshots.
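As an aside before we change the screenshot logic: with four hubs in play, a small lookup table can avoid hard-coding the hub URL in several places. The dictionary and helper below are illustrative only (not part of any BitBar client library); the URLs are the four hubs listed above:

```python
# The four BitBar Selenium hub URLs, keyed by (region, device type).
BITBAR_HUBS = {
    ("us_west", "desktop"): "https://us-west-desktop-hub.bitbar.com/wd/hub",
    ("eu", "desktop"): "https://eu-desktop-hub.bitbar.com/wd/hub",
    ("us_west", "mobile"): "https://us-west-mobile-hub.bitbar.com/wd/hub",
    ("eu", "mobile"): "https://eu-mobile-hub.bitbar.com/wd/hub",
}

def hub_url(region: str, device: str) -> str:
    """Return the hub URL for a region ('us_west' or 'eu')
    and device type ('desktop' or 'mobile')."""
    return BITBAR_HUBS[(region.lower(), device.lower())]

print(hub_url("EU", "desktop"))  # -> https://eu-desktop-hub.bitbar.com/wd/hub
```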
For the BitBar sample scripts, we instead use the Selenium driver itself to create the screenshot and store it locally, and afterwards use the BitBar API to push the locally saved image back to our project. The Swagger spec for our BitBar Cloud API can be found here.

In line 30 of the BitBar Selenium sample script, we set a location to store screenshots on the local machine. Note that this is set up to store files in a directory called 'screenshots' in the root folder of your project.

self.screenshot_dir = os.getcwd() + '/screenshots'

To take a screenshot and store it, we call 'get_screenshot_as_file', as seen on line 45 of the BitBar Selenium example script.

self.driver.get_screenshot_as_file(self.screenshot_dir + '/' + '1_home_page.png')

Now we want to take our screenshot and push it back to our project in BitBar. Note that in this case, for Python, we are using the 'httpx' module for the API calls back to BitBar. The 'requests' module only supports HTTP/1.1, and we need a module capable of handling HTTP/2 and HTTP/3 requests.

# Let's take our locally saved screenshot and push it back to BitBar!
# First we declare the 'params' and 'files' variables to hold our screenshot name and location.
params = {
    'name': self.screenshotName1,
}
files = {
    'file': open(self.screenshot_dir + '/' + self.screenshotName1, 'rb'),
}

# Now we build our API call to push our locally saved screenshot back to our BitBar project
print("Uploading our Screenshot")
response = httpx.post('https://cloud.bitbar.com/api/v2/me/projects/' + self.projectID + '/runs/' + self.RunId + '/device-sessions/' + self.deviceRunID + '/output-file-set/files', params=params, files=files, auth=(self.apiKey, ''))

# Here we check that our upload was successful
if response.status_code == 201:
    print("Screenshot Uploaded Successfully")
else:
    print("Whoops, something went wrong uploading the screenshot.")

The final piece of the puzzle is to set our test result status.
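Concatenating the project, run, and device-session IDs into the endpoint path by hand is easy to get wrong. A tiny helper (illustrative only, not part of any BitBar client library) keeps the path in one place; it builds the same /output-file-set/files endpoint used in the upload call above:

```python
def output_files_url(project_id, run_id, device_run_id):
    """Build the BitBar Cloud API v2 endpoint used to upload
    output files (such as screenshots) for a device session."""
    return (
        "https://cloud.bitbar.com/api/v2/me/projects/{0}"
        "/runs/{1}/device-sessions/{2}/output-file-set/files"
    ).format(project_id, run_id, device_run_id)

print(output_files_url("11", "22", "33"))
```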
We have alternate naming conventions for test results: 'SUCCEEDED' and 'FAILED' for BitBar vs. 'pass' and 'fail' for CrossBrowserTesting.

# CrossBrowserTesting successful test syntax
self.test_result = 'pass'
# CrossBrowserTesting failed test syntax
self.test_result = 'fail'

# BitBar successful test syntax
self.test_result = 'SUCCEEDED'
# BitBar failed test syntax
self.test_result = 'FAILED'

Note that in the snippet provided below, we start by performing GET requests for session information. These requests are sent to the same Selenium hub we are using for the WebDriver, so make sure the hub address is set to the same hub used for the WebDriver. We recommend turning this into a variable to avoid having to switch it manually for alternate hubs. These processes are found in the 'tearDown' function of the updated CrossBrowserTesting sample script found here.

# get all necessary IDs of the current session
response = requests.get('https://us-west-desktop-hub.bitbar.com/sessions/' + self.driver.session_id, auth=(self.apiKey, '')).json()
deviceRunID = str(response["deviceRunId"])
projectID = str(response["projectId"])
RunId = str(response["testRunId"])

Finally, we set the test result with the POST method below, using the session information retrieved with the GET request above. Note that the URL for the POST request will NOT need to be updated to reflect the specific Selenium hub in use.

# Here we make the API call to set the test's score
requests.post('https://cloud.bitbar.com/api/v2/me/projects/' + projectID + '/runs/' + RunId + '/device-sessions/' + deviceRunID, params={'state': self.test_result}, auth=(self.apiKey, ''))

Now that we have made these changes, you are ready to run your test through BitBar! As a summary: we replace our CrossBrowserTesting Authkey with the BitBar API key, set the new Selenium hub address, build new screenshot calls, and update the test result function.

Quick Reference Documentation:
BitBar Web FAQ.
Complete documentation with code samples in various languages can be found here. Retrieve your BitBar API key in account settings as described here. The BitBar Selenium Capability Configurator and sample scripts are found here. The CrossBrowserTesting Capability Configurator and sample scripts are found here. The Swagger spec for our BitBar Cloud API can be found here.

Here is our complete Python CBT-to-BitBar conversion script:

# Please visit http://selenium-python.readthedocs.io/ for detailed installation and instructions
# Getting started: http://docs.seleniumhq.org/docs/03_webdriver.jsp
# API details: https://github.com/SeleniumHQ/selenium#selenium
# Requests is the easiest way to make RESTful API calls in Python. You can install it by following the instructions here:
# http://docs.python-requests.org/en/master/user/install/

import unittest
from selenium import webdriver
import requests
import os
import httpx

class BasicTest(unittest.TestCase):

    def setUp(self):
        # get rid of the old way of doing auth, with just an API key
        self.apiKey = ''
        self.api_session = requests.Session()
        self.test_result = None
        self.screenshot_dir = os.getcwd() + '/screenshots'
        self.screenshotName1 = 'SS1.png'
        self.deviceRunID = ""
        self.projectID = ""
        self.RunId = ""

        # old platformName has been split into platformName and osVersion
        capabilities = {
            'bitbar_apiKey': '',
            'platform': 'Linux',
            'osVersion': '18.04',
            'browserName': 'firefox',
            'version': '101',
            'resolution': '2560x1920',
        }

        # start the remote browser on our server
        self.driver = webdriver.Remote(
            desired_capabilities=capabilities,
            # the hub has changed; we are also no longer sending the user and pass through the hub
            # US hub url: https://appium-us.bitbar.com/wd/hub
            command_executor="https://us-west-desktop-hub.bitbar.com/wd/hub"
            # EU hub url
        )
        self.driver.implicitly_wait(20)

    def test_CBT(self):
        # We wrap this all in a try/except so we can set pass/fail at the end
        try:
            # load the page url
            print('Loading Url')
            self.driver.get('http://crossbrowsertesting.github.io/selenium_example_page.html')

            # maximize the window - DESKTOPS ONLY
            # print('Maximizing window')
            # self.driver.maximize_window()

            # check the title
            print('Checking title')
            self.assertEqual("Selenium Test Example Page", self.driver.title)

            # take a screenshot and save it locally
            print("Taking a Screenshot")
            self.driver.get_screenshot_as_file(self.screenshot_dir + '/' + self.screenshotName1)

            # change pass to SUCCEEDED
            self.test_result = 'SUCCEEDED'
        except AssertionError as e:
            # delete cbt api calls
            # change fail to FAILED
            self.test_result = 'FAILED'
            raise

    def tearDown(self):
        print("Done with session %s" % self.driver.session_id)
        if self.test_result is not None:
            # get all necessary IDs of the current session
            response = requests.get('https://us-west-desktop-hub.bitbar.com/sessions/' + self.driver.session_id, auth=(self.apiKey, '')).json()
            self.deviceRunID = str(response["deviceRunId"])
            self.projectID = str(response["projectId"])
            self.RunId = str(response["testRunId"])

            # Here we make the API call to set the test's score
            requests.post('https://cloud.bitbar.com/api/v2/me/projects/' + self.projectID + '/runs/' + self.RunId + '/device-sessions/' + self.deviceRunID, params={'state': self.test_result}, auth=(self.apiKey, ''))

            # let's take our locally saved screenshot and push it back to BitBar!
            # First we declare the 'params' and 'files' variables to hold our screenshot name and location.
            params = {
                'name': self.screenshotName1,
            }
            files = {
                'file': open(self.screenshot_dir + '/' + self.screenshotName1, 'rb'),
            }

            # Now we build our API call to push our locally saved screenshot back to our BitBar project
            print("Uploading our Screenshot")
            response = httpx.post('https://cloud.bitbar.com/api/v2/me/projects/' + self.projectID + '/runs/' + self.RunId + '/device-sessions/' + self.deviceRunID + '/output-file-set/files', params=params, files=files, auth=(self.apiKey, ''))

            # Here we check that our upload was successful
            if response.status_code == 201:
                print("Screenshot Uploaded Successfully")
            else:
                print("Whoops, something went wrong uploading the screenshot.")

        self.driver.quit()

if __name__ == '__main__':
    unittest.main(warnings='ignore')

Thanks for reading along, I hope this helps your conversion to BitBar! Happy Testing!

Function to Read the text file and compare with the Baseline
This function helps read a text file and compare its contents with a baseline text string. A use case would be reading logs or any text file and comparing a line against a baseline or expected text output.

// Finds text in a text file and compares it with the baseline
function CompareTextLine(filePath, stringid, baseline)
{
  var read, line, result, nores;
  // open and read the text file
  read = aqFile.OpenTextFile(filePath, aqFile.faRead, aqFile.ctANSI); // set ANSI, UTF or any other text file format
  read.Cursor = 0;
  nores = 0;
  // search each line for the identifying key string
  while (! read.IsEndOfFile())
  {
    line = read.ReadLine();
    result = aqString.Find(line, stringid);
    if (result != -1) // -1 indicates occurrence not found
    {
      Log.Message("The key string identifier was found in this line = " + line + "; the baseline to compare = " + baseline);
      if (baseline == line)
      {
        Log.Checkpoint("The string found matches the baseline");
        read.Close(); // close the file before the early return
        return;
      }
      else
      {
        Log.Warning("The value is not correct. Actual = " + line + "; Expected = " + baseline);
      }
      nores = 1;
    }
  }
  read.Close();
  // Log an error if the string was not found
  if (nores == 0)
    Log.Error("String was not found in any of the text lines.");
}

Function to wait for processing to complete
In many cases we have seen that processing times vary, so this function waits for the process to go idle, with an editable maximum timeout. This helps in cases of rendering and other process-intensive work where we can't rely on a wait command or any other progress indicator. I tried this with most CAD software, including AutoCAD, SolidWorks, Navisworks, Revit, Aveva PDMS, Bentley MicroStation, and BricsCAD, and it worked well.

// Waits for CPU processing to go idle, with a timeout
function WaitForProcessing()
{
  Log.Message("Memory Usage : " + Sys.Process("PROCESS").MemUsage);
  var time1 = aqDateTime.Time();
  while (Sys.Process("PROCESS").CPUUsage != 0)
  {
    Log.CheckPoint("While Loop : cpu usage " + Sys.Process("PROCESS").CPUUsage);
    aqUtils.Delay(2000, "Wait for Processing"); // waiting for 2 sec
    // Timeout after a maximum of 10 sec
    var time2 = aqDateTime.Time();
    var diff = time2 - time1;
    if (diff >= 10000) // please edit the time accordingly
    {
      Log.CheckPoint("Max Timeout for While loop with time : " + diff);
      break;
    }
  }
  Log.Message("While Loop Completed with Cpu Usage : " + Sys.Process("PROCESS").CPUUsage);
}
TestComplete URL for the Jenkins TestComplete report page
I want to offer a way to get the URL of the TestComplete test results of a Jenkins build without accessing the Jenkins controller. Outside of those with an admin role for the Jenkins instance, granting teams access to the Jenkins controller is a security risk. TestComplete uses an epoch timestamp as part of the report page URL, which makes it a challenge to use the URL unless you know how to get the timestamp. Fortunately, there is an easy way to get it. Once the test plan completes, the results become available through the Jenkins TestComplete API. Any Jenkins ID that has read access to view the job will have access to use the API. In your job, once the test is completed, use the API to parse the XML for the URL. This allows you to keep your Jenkins instance safe and obtain the report URL for use in your job (such as a variable passed to an outgoing email).

Here is an example using a Windows batch build step in a freestyle job, after the TestComplete build step:

Echo off
curl -X GET "http://jenkins:8080/job/folder/job/name/%BUILD_NUMBER%/TestComplete/api/xml" --user %ID%:%pwd%>Test_Results.txt
powershell -c "((((gc Test_Results.txt) -replace '<url>','@') -replace '</report>','@') | ForEach-Object { $_.split('@') } | Select-String -Pattern '</url>' -SimpleMatch ) -replace '</url>','' | set-content TestComplete_URL.txt"
if exist TestComplete_URL.txt set /p TestComplete_URL=<TestComplete_URL.txt

Once you have the URL, you can use the Inject plugin to use it in other steps of your job. For example, you can create a variable to be used in an outgoing email:

echo Line_1=$JOB_BASE_NAME ^<a href^=^"%TestComplete_URL%^"^>test results^</a^>: >>email.properties

Hope this information is of use.

Code for Tracking Tested App Info On Start Test.
Question

I like to know the information about the tested app each test ran on, so I wrote up a little code and put it in the OnStartTest test engine event.

Answer

This will run every time I run a test, telling me the tested app info. This is wonderful for tracking one-off test runs and which app version a test passed on and which it failed on.

https://support.smartbear.com/testcomplete/docs/testing-with/advanced/handling-events/index.html
https://support.smartbear.com/testcomplete/docs/reference/events/onstarttest.html?sbsearch=OnStartTe...

function EventControl_OnStartTest(Sender)
{
  try
  {
    Log.AppendFolder("< EventControl_OnStartTest >");
    Log.AppendFolder("Version Information");
    var FileName = "C:\\Program Files (x86)\\Some Folder\\TestedApp.exe";
    var VerInfo = aqFileSystem.GetFileInfo(FileName).VersionInfo;
    var FileInf = aqFileSystem.GetFileInfo(FileName);
    var HostName = Sys.HostName;
    var dtObj;
    Log.Message("File Name: " + FileInf.Name);
    Log.Message("File Version: " + VerInfo.FileMajorVersion + "." + VerInfo.FileMinorVersion + "." + VerInfo.FileBuildVersion + "." + VerInfo.FileRevisionVersion);
    dtObj = new Date(FileInf.DateLastModified);
    Log.Message("File Date: " + FileInf.DateLastModified);
    Log.Message("Host Name: " + HostName);
    Log.PopLogFolder();
  }
  catch(err)
  {
    Process.Halt("Exception: EventControl_OnStartTest - " + err.message); // Stop test run.
  }
}

Converting UTC TimeDate in an Excel file
Task

Read the UTC DateTime in an Excel file (attached), convert the value to the PST (Pacific Standard Time) time zone, and log each date in the following format: <month name> <day of month>, <full weekday name>. For example: September 8, Tuesday.

Steps

Read the dates from the Excel file using one of the approaches described in Working with Microsoft Excel Files. Convert the dates using the aqDateTime object methods. Log the date using the aqConvert.DateTimeToFormatStr method.

Solution

Solutions were given within the TechCorner Challenge event by different community members.

By SiwarSayahi:

//JavaScript
function DateFormat()
{
  // Creates a driver
  DDT.ExcelDriver("C:\\Challenge11\\DateTime.xls", "Sheet1");
  // Iterates through records
  while (! DDT.CurrentDriver.EOF())
  {
    // Display the date in the format <month name> <day of month>, <full weekday name>
    DisplayDate();
    DDT.CurrentDriver.Next();
  }
  // Closes the driver
  DDT.CloseDriver(DDT.CurrentDriver.Name);
}

function DisplayDate()
{
  for (i = 0; i < DDT.CurrentDriver.ColumnCount; i++)
    var dateA = aqConvert.VarToStr(DDT.CurrentDriver.Value(i));
  // Convert the date from UTC to PST
  var dateB = aqDateTime.AddHours(dateA, -8);
  var date = aqConvert.DateTimeToFormatStr(dateB, "%B %d, %A");
  Log.Message("The date of " + dateA + " is : " + date);
}

By elanto:

# DelphiScript
procedure Challenge_11();
var fileExcel, exSheet, Valx: OleVariant;
    i: Integer;
begin
  fileExcel := Excel.Open('C:\Temp\DateTime.xlsx');
  exSheet := fileExcel.SheetByTitle['Sheet1'];
  for i := 1 to exSheet.RowCount do
  begin
    Valx := aqDateTime.AddHours(exSheet.Cell('A', i).Value, -8);
    Log.Message(aqConvert.DateTimeToFormatStr(Valx, '%B %d, %A'));
  end;
end;

Sending HTTP requests and parsing JSON in TestComplete
Tasks

Sending HTTP requests and parsing JSON in TestComplete. Here are the steps to resolve it:

1. Send a GET request to https://dog.ceo/api/breeds/image/random.
2. Check the status of the request - if it is successful, the response will return a JSON that contains a link to a random picture of a dog.
3. Parse the returned JSON to extract the link to the image. JavaScript and Python provide support for JSON out of the box; for other languages, you might want to parse JSON as a string or use regular expressions.
4. Send a GET request to the URL obtained from the previous response - this will return an image. Save the response as an image to a JPG file by calling the SaveToFile method like this: response.SaveToFile("C:\\image.jpg")

Solution

//JavaScript
function getThisDog(https)
{
  var aqHttpRequest = aqHttp.CreateGetRequest(https);
  aqHttpRequest.SetHeader("Accept", "application/vnd.api+json; version=1");
  aqHttpRequest.SetHeader("Content-Type", "application/x-www-form-urlencoded; charset=utf-8");
  aqHttpRequest.SetHeader("Accept-Language", "pl");
  aqHttpRequest.SetHeader("Accept-Charset", "utf-8, iso-8859-13;q=0.8");
  aqHttpRequest.SetHeader("Content-Language", "pl");
  var aqHttpRes = aqHttpRequest.Send();
  Log.Message(aqHttpRes.Text);
  return aqHttpRes;
}

function parseThisDog()
{
  let jsonResponse = getThisDog("https://dog.ceo/api/breeds/image/random");
  if (jsonResponse.StatusCode === 200)
  {
    let doggyJson = JSON.parse(jsonResponse.Text);
    let dogImage = getThisDog(doggyJson.message);
    let randomString = Math.random().toString(36).substring(2, 15) + Math.random().toString(36).substring(2, 15);
    dogImage.SaveToFile("C:\\TEST-TRASH\\" + randomString + "dog.jpg");
  }
  else
  {
    Log.Error("Something went wrong while trying to connect");
  }
}

Generate a random number within a range
Question

How to generate a random number within the range 30-75 in TestComplete?

The article was created based on the TechCorner Challenge event, with answers given by different community members.

Answer

Solution given by anupamchampati:

//JScript
function test()
{
  Log.Message(getRandomNumber(30, 75));
}

function getRandomNumber(min, max)
{
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

Solution given by mkambham:

'VBScript
Function RandomNumber_In_givenRange()
  Dim Minimum
  Dim Maximum
  Minimum = 30
  Maximum = 75
  Randomize
  Log.Message Int((Maximum - Minimum + 1) * Rnd + Minimum)
End Function

Solution given by Wrongtown:

# Python
from random import randrange
Log.Message(randrange(30, 76))  # upper bound is exclusive, so use 76 to include 75

Launch Browser in Incognito/Private Mode
Question

Thought of sharing the code in the community for launching browsers in their incognito modes. The function is parameterized in such a way that it runs for the browsers Internet Explorer, Edge, Chrome, and Firefox. Hope it will be useful for more people.

Answer

//JScript
function runIncognitoMode(browserName)
{
  // browserName: "iexplore", "edge", "chrome" or "firefox"
  if (Sys.WaitBrowser(browserName).Exists)
  {
    var browser = Sys.Browser(browserName);
    Log.Enabled = false; // to disable the warning that might occur during closing of the browser
    browser.Close();
    Log.Enabled = true;  // enabling the logs back
  }
  if (browserName == "edge")
  {
    Browsers.Item(btEdge).RunOptions = "-inprivate";
    Delay(3000);
    Browsers.Item(btEdge).Run();
  }
  else if (browserName == "iexplore")
  {
    Browsers.Item(btIExplorer).RunOptions = "-private";
    Delay(3000);
    Browsers.Item(btIExplorer).Run();
  }
  else if (browserName == "chrome")
  {
    Browsers.Item(btChrome).RunOptions = "-incognito";
    Delay(3000);
    Browsers.Item(btChrome).Run();
  }
  else if (browserName == "firefox")
  {
    Browsers.Item(btFirefox).RunOptions = "-private";
    Delay(3000);
    Browsers.Item(btFirefox).Run();
  }
  Sys.Browser(browserName).BrowserWindow(0).Maximize();
}

Advanced search object for complex model
Hello all, please find here my implementation of an advanced in-memory object search for complex tree models. Sometimes, when the tested app is really complex (especially a heavy client application), searching for an object can be too slow because of the depth involved. One solution is to find intermediate objects to narrow and guide the search, which means chaining multiple FindChildEx calls. To ease that, I made the following method:

```javascript
/**
 * <a id="system.findContainer"></a>
 * Searches for a TestComplete in-memory object in a containerized way.<br>
 * If <i>system.debug</i> is <b>true</b>, additional logs about the search are written.
 * @memberof system
 * @function
 * @param {Object} ParentObject - Parent object holding the object to search for; the starting root of the search. The higher-level the parent object, the longer the search may take.
 * @param {array} Props - Array of object properties to search for, per container. If several properties are searched for a given container, the separator is |, e.g. ["Visible|Name", "Name"] searches the two properties Visible and Name for the first container and Name only for the second.
 * @param {array} Values - Array of values of the properties to search for.
 * @param {number|array} [Depth=4] - Limits the search to <i>Depth</i> levels below the parent object. The greater the depth, the longer the search may take. Can be an array to define a depth per container.
 * @returns {object} Returns the object if it exists, or <b>null</b> otherwise or on error.
 */
system.findContainer = function(ParentObject = null, Props = null, Values = null, Depth = 4) {
  // Check the mandatory parameters
  if ((ParentObject == null) || (Props == null) || (Values == null)) {
    if (system.debug) Log.Message('findContainer() - A mandatory parameter is missing (ParentObject, Props or Values)', "", pmHigher, system.logWarning);
    return null;
  }
  if (ParentObject == "current") {
    if ((typeof system.container != 'undefined') && (system.container != null) && (system.container.Exists))
      ParentObject = system.container;
    else
      return null;
  }
  if (system.debug) {
    let propsKey = typeof Props == "string" ? Props : Props.join("|");
    let valuesKey;
    switch (typeof Values) {
      case "string": valuesKey = Values; break;
      case "number":
      case "boolean": valuesKey = Values.toString(); break;
      case "null": valuesKey = "null"; break;
      default: valuesKey = Values.join("|"); break;
    }
    Log.Message("findContainer(" + ParentObject.FullName + ", " + propsKey + ", " + valuesKey + ", " + Depth.toString() + ") - Object search", "", pmLowest, system.logDebug);
  }
  var objectfind = ParentObject;
  let currentProps;
  let currentValues;
  let currentDepth;
  try {
    for (let i = 0; i < Props.length; i++) {
      currentProps = Props[i].split('|');
      currentValues = new Array();
      currentDepth = typeof Depth == 'number' ? Depth : Depth[i];
      for (let j = 0; j < currentProps.length; j++) {
        currentValues.push(Values.shift());
      }
      objectfind = objectfind.FindChildEx(currentProps, currentValues, currentDepth, true, system.time.medium);
      if ((typeof objectfind == 'undefined') || ((objectfind != null) && (!objectfind.Exists))) break;
    }
  } catch (e) {
    objectfind = null;
    if (system.debug) Log.Message("findContainer() - An exception occurred during the search", e.message, pmHighest, system.logError);
  } finally {
    // Only return "null" when not found or on error
    if ((typeof objectfind == 'undefined') || ((objectfind != null) && (!objectfind.Exists))) objectfind = null;
    if (system.debug) {
      if (objectfind == null)
        Log.Message("findContainer() returned a null or undefined object", "", pmLowest, system.logDebug);
      else
        Log.Message('findContainer() found an object', objectfind.FullName, pmLowest, system.logDebug);
    }
    return objectfind;
  }
}
```

It comes from my testing framework, so everything starting with system. is specific to it, but you should understand the usage:
- system.debug -> true to activate an additional level of logging
- system.logDebug -> specific debug log attributes
- system.container -> a global variable that can hold a frequently used object in a portion of a test
- system.time.medium -> this is 5 seconds (5000 ms)

Sample usage:

```javascript
let objectToTest = system.findContainer(SourceObject, ["Visible|Name", "Visible|Name"], [true, 'dlmAccueil', true, 'editionEnfants']);
```

This will search in SourceObject for a visible object named 'editionEnfants' which is located inside another visible object named 'dlmAccueil'. The search will use the default maximum depth of 4 levels for both objects.
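To make the Props/Values pairing concrete: each entry in Props consumes as many entries from Values as it has |-separated properties, in order. A quick illustrative sketch of that grouping logic (written in Python for brevity, even though the method above is JavaScript; the helper name is made up):

```python
def split_values_per_container(props, values):
    # Each "A|B" entry in props consumes one value per property, in order,
    # mirroring how findContainer shifts entries off the Values array.
    groups = []
    remaining = list(values)
    for prop in props:
        names = prop.split("|")
        groups.append((names, [remaining.pop(0) for _ in names]))
    return groups

# ["Visible|Name", "Name"] with [True, "dlmAccueil", "editionEnfants"] pairs up as:
# (["Visible", "Name"], [True, "dlmAccueil"]) then (["Name"], ["editionEnfants"])
```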
```javascript
let objectToTest = system.findContainer(SourceObject, ["Visible|Name", "Name", "Header|Index|Visible"], [true, "Saisie", "DockingPanel", "ILPaneGroup", 1, true], [2, 3, 2]);
```

This will search in SourceObject for a visible object with property Index set to 1 and property Header set to 'ILPaneGroup', located inside another object named 'DockingPanel', which is in turn located inside another visible object named 'Saisie'. The search will use a depth of 2 levels for the first object, 3 levels for the second object, and 2 levels for the final object.

TestComplete integration with Gitlab (or in theory any CICD)
TLDR:
- Create or link to a repository containing your TestComplete scripts
- Configure a CI agent/test runner (self-hosted, w/ auto-logon)
- Figure out the syntax/keywords of the pipeline (usually a YAML file)
- Tinker with the command line arguments for the TestComplete executable

Occasionally, I'll get asked, "Do you integrate with X CI/CD framework?" There is a cookie-cutter answer: "Yes, because TestComplete is command line accessible." While this is a generic "template" answer, I figured it would be worthwhile to outline some of the generalized steps that I took to create a simple, sample pipeline using GitLab CI to launch TestComplete tests.

1 - Repository

First, I created a new repository within GitLab to contain the relevant TestComplete project suite that I planned on running. This is exactly the same as creating a GitHub repo; I just pointed to the GitLab location instead: "git remote add origin my_repo_location"

2 - Configure CI Agent

Now, to create a shared understanding, I think it's important to note that TestComplete is a Windows thick-client application. This means that whichever CI framework we choose to work with will need to launch our TestComplete.exe (or TestExecute, TestExecuteLite, or SessionCreator) via a self-hosted agent that has access to the executable itself (which points back to the whole "command line accessible" comment above). This is most apparent when integrating with Azure DevOps pipelines (where within our documentation you will find the requirements for that self-hosted agent bolded multiple times). Extending this line of thought, I first "investigated" (googled) GitLab's CI/CD runners ("agents", as I've been calling them). While they can run inside a Docker container or be deployed into a Kubernetes cluster, they can also be downloaded/installed/registered/started (in that order) on any machine with a supported operating system. These guys will run our CI/CD jobs.
Additionally, at the time of the install, you can define the executor that you'd like these runners to use; conveniently for us, these runners have shell capabilities that can consume command line arguments. You can also tag these runners, so that you can define the agent/executor you'd like to use to run your CI jobs (this is done later within your YAML file; explanation in part 3).

3 - Figure out the syntax/keywords of the pipeline

Turns out, GitLab does use a YAML file to describe the stages/jobs/steps of the pipeline to run. This is standard in the industry (i.e. Jenkinsfile, travis-ci.yml, etc.), and those who are familiar with any of the other CI frameworks should be able to pick it up quickly. I will say that I found the syntax and the keywords of the gitlab-ci.yml file even more intuitive than the rest, so kudos to whoever designed that! I probably cannot do GitLab CI justice with my current depth of understanding of its full feature set, so I will give the simplest example/explanation of how we can use it (with TestComplete). Essentially, pipelines are composed of jobs and stages: jobs define what to do, whereas stages define when to do it. From the documentation linked above, we can see a neat behavior of basic pipelines: if there are multiple jobs defined within the same stage (i.e. "test"), then all those jobs are executed in parallel. There might be some objection to this (say certain "test" jobs have dependencies on certain "build" jobs that finish quicker; would we really want to run everything in parallel based on stages? GitLab also provides DAG pipelines to target this very issue, but that's a whole other topic in itself), but I thought that a brilliant use of this built-in/default parallelism feature would be to leverage our Device Cloud tests (where we would want to launch all tests in parallel at the same time).
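To illustrate the stage/job behavior just described with a minimal sketch (job names here are hypothetical, not from the real pipeline): the two jobs sharing the test stage run in parallel, after the build stage completes.

```yaml
stages:
  - build
  - test

build-app:          # the "build" stage runs first
  stage: build
  script:
    - echo "building"

test-a:             # test-a and test-b share the "test" stage,
  stage: test       # so GitLab runs them in parallel
  script:
    - echo "testing A"

test-b:
  stage: test
  script:
    - echo "testing B"
```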
(There is a nice diagram in the same docs linked above illustrating these pipeline behaviors.)

4 - Tinker with the command line arguments

The only thing I had trouble with while creating the YAML file was a shell behavior: I couldn't provide the full path of TestExecuteLite.exe because it was contained within my "Program Files (x86)" directory, and apparently (didn't know this before) Windows PowerShell really doesn't like having spaces within full paths, even with quotes surrounding the executable to trigger. I googled around for a bit for a workaround, but figured it would just be easier to add the bin directory of that executable to my environment variables, so that I could trigger the executable just by invoking "TestExecuteLite.exe" within PowerShell (the executor that our runner will use). Finally, an example of a gitlab-ci.yml file that I created (at the root of the project directory containing my TestComplete test scripts) looks like this:

```yaml
build-job:
  stage: build
  tags:
    - gitlabTC
  script:
    - echo "Hello, $GITLAB_USER_LOGIN! Running some Device Cloud jobs in parallel"

test-job1:
  stage: test
  tags:
    - gitlabTC
  script:
    - echo "This job runs test set 3"
    - TestExecuteLite.exe $CI_PROJECT_DIR\pella_project1.pjs /r /p:pella_project1 /t:"KeywordTests|Test3" /ExportLog:$CI_PROJECT_DIR\logs\1_log.mht /e
  artifacts:
    when: always
    expire_in: 1 week
    paths:
      - logs\*.mht
  allow_failure: true

test-job2:
  stage: test
  tags:
    - gitlabTC
  script:
    - echo "This job runs test set 4"
    - TestExecuteLite.exe $CI_PROJECT_DIR\pella_project1.pjs /r /p:pella_project1 /t:"KeywordTests|Test4" /ExportLog:$CI_PROJECT_DIR\logs\2_log.mht /e
  artifacts:
    when: always
    expire_in: 1 week
    paths:
      - logs\*.mht
  allow_failure: true

test-job3:
  stage: test
  tags:
    - gitlabTC
  script:
    - echo "This job runs test set 5"
    - TestExecuteLite.exe $CI_PROJECT_DIR\pella_project1.pjs /r /p:pella_project1 /t:"KeywordTests|Test5" /ExportLog:$CI_PROJECT_DIR\logs\3_log.mht /e
  artifacts:
    when: always
    expire_in: 1 week
    paths:
      - logs\*.mht
  allow_failure: true

clean-up-test:
  stage: .post
  tags:
    - gitlabTC
  script:
    - echo "This job deploys something from the $CI_COMMIT_BRANCH branch.. This cleans up workspace"
```

Concretely, the gitlab-ci.yml file above does the following:
- "build-job" runs during the build stage, where it prints out the string "Hello, $GITLAB_USER_LOGIN! Running some Device Cloud jobs in parallel". Typically you would be compiling or building things that are needed for the test stage.
- Since there are no other jobs in the "build" stage, we move on to the "test" stage, where test-job1, test-job2, and test-job3 are all executed at the same time. Each test job triggers a Device Cloud test using TestExecuteLite.exe (which in our case specifies just a single keyword test to run, but this can be changed to fit your needs).
- For each of the jobs run, we will also receive back an artifact (regardless of execution status), which is the respective .mht results file generated by the test run.
- We also specify that jobs in any following stages will run regardless of failures (as defined by the "allow_failure: true" keyword).
- We go to the .post stage, which runs at the very end of our pipelines; typically one might expect to see some sort of deploy-stage job here (to either staging or production, or both!). In my case, we just print out some string values using the echo command.

The pipeline runs whenever there is a commit to the master branch of this repository. We can see some nice visual confirmation of this in the "Pipelines" UI.

In sum, follow those four "generalized" steps at the very top of this post to integrate with your CI framework of choice! What are some CI/CD frameworks currently being used within your organization? What kind of pre-test (build) or post-test (deploy) stages/steps are involved within your TestComplete pipelines? Other than the mht/junit logs for artifacts, what other information are you currently collecting? What do you encounter more: the need for parallelism, the need for acyclic/asynchronous build stages/steps, or parent-child pipelines?

TestComplete and Zephyr Scale - work together
Hi Everyone! Today I'm bringing news about Zephyr Scale (previously called TM4J, a Jira test management plugin that SmartBear acquired a few months ago). This is exciting because Zephyr Scale is considerably different from Zephyr for Jira in a number of ways. The biggest differentiation is that Zephyr Scale creates its own table in your Jira database to house all of your test case management data and analytics, instead of using issues of type "test". This means that you will not experience the typical performance degradation that you might expect from housing hundreds, if not thousands, of test cases within your Jira instance. Additionally, there are many more reports available: upwards of 70 (and it's almost "more the merrier" season, so that's nice).

Now how does this relate to TestComplete at all? Well, seeing as we don't have a native integration between the two tools as of yet (watch out in the coming months?), we have to rely on using Zephyr Scale's REST API in order to map to corresponding test cases kept in Zephyr Scale's test library. I wanted to explore the option of having our TestComplete tests be mapped and updated (per execution) back to the corresponding test cases housed within Zephyr Scale by making use of event handlers and the REST API endpoints. To start, I created a set of test cases to mirror each other within Zephyr Scale and TestComplete. You will notice that I have a "createTestRun" KDT test within my TestComplete project explorer. This test case will make the initial POST request to Zephyr Scale in order to create a test run (otherwise known as a test cycle). This is done by using TestComplete's aqHttp object. Within that createTestRun KDT, I'm just running a script routine, a.k.a. "run code snippet" (because that's the easiest way to use the aqHttp object).
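The snippets that follow build a preemptive Basic authentication header by hand; that little piece can be checked in plain Python, outside TestComplete (the helper name here is mine):

```python
import base64

def basic_auth_header(username, password):
    # Encode "username:password" in base64, as Jira/Zephyr Scale expects
    # for preemptive Basic authentication
    credentials = base64.b64encode((username + ":" + password).encode("ascii")).decode("ascii")
    return "Basic " + credentials

# basic_auth_header("user", "secret") → "Basic dXNlcjpzZWNyZXQ="
```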
The code snippet below shows the calls:

```python
import json
import base64

# init empty id dictionary
id_dict = {}

def createTestRun():
    projName = "Automated_TestComplete_Run " + str(aqDateTime.Now())  # timestamped test cycle name
    address = "https://YOUR-JIRA-INSTANCE/rest/atm/1.0/testrun"  # TM4J endpoint to create a test run
    username = "JIRA-username"  # TM4J username
    password = "JIRA-password!"  # TM4J password
    # Convert the user credentials to base64 for preemptive authentication
    credentials = base64.b64encode((username + ":" + password).encode("ascii")).decode("ascii")
    request = aqHttp.CreatePostRequest(address)
    request.SetHeader("Authorization", "Basic " + credentials)
    request.SetHeader("Content-Type", "application/json")
    # initialize empty item list
    items = []
    for i in range(Project.TestItems.ItemCount):  # for all test items listed at the project level
        # grab each test case key as a value pair according to the name found in id_dict
        entry = {"testCaseKey": getTestCaseID(Project.TestItems.TestItem[i].Name)}
        items.append(entry)  # append as a test item
    # building the request body
    requestBody = {
        "name": projName,
        "projectKey": "KIM",  # Jira project key for the TM4J project
        "items": items  # holds the key/value pairs of the test case keys to be added to this test cycle
    }
    response = request.Send(json.dumps(requestBody))
    df = json.loads(response.Text)
    key = str(df["key"])
    # set the new test cycle key as a project-level variable for later use
    Project.Variables.testRunKey = key
    Log.Message(key)  # output the new test cycle key
```

Within that snippet, you may have noticed an operation called getTestCaseID(), and this is how I mapped my TestComplete tests back to the Zephyr Scale test cases (by name to their corresponding test ID key), as shown below:

```python
def getTestCaseID(argument):
    # list out test case IDs in dict format - this is where you map your internal
    # test cases (by name) to their corresponding TM4J test cases
    id_dict = {
        "login": "KIM-T279",
        "logout": "KIM-T280",
        "createTestRun": "KIM-T281",
        "UI_Error": "KIM-T282",
        "UI_Warning": "KIM-T283"
    }
    tc_ID = id_dict.get(argument, "Invalid testCase")  # get test case keys by name from the dictionary above
    return tc_ID  # output the TM4J test case ID
```

Referring to the screenshots above, you will notice that the names of my KDTs are the keys, whereas the corresponding Zephyr Scale test ID keys are the paired values within the id_dict variable. Now that we have a script to create a new test cycle per execution, which also assigns all test cases (at the project level) to the newly created test cycle, we just need to update that test run with the corresponding execution status for each of the test cases to be run. We can do this by leveraging TestComplete's OnStopTestCase event handler in conjunction with Zephyr Scale's REST API.

The event handler:

```python
import json
import base64

def EventControl1_OnStopTestCase(Sender, StopTestCaseParams):
    import utilities  # to use the utility functions
    testrunKey = Project.Variables.testRunKey  # grab the test run key set by the createTestRun script
    tc_name = aqTestCase.CurrentTestCase.Name  # current test case name, for the getTestCaseID function below
    tcKey = utilities.getTestCaseID(tc_name)  # return the test case key for the resource path below
    # endpoint to post a test result for this test case within this test run
    address = "https://YOUR-JIRA-INSTANCE/rest/atm/1.0/testrun/" + str(testrunKey) + "/testcase/" + str(tcKey) + "/testresult"
    username = "JIRA-USERNAME"  # TM4J username
    password = "JIRA-PASSWORD!"  # TM4J password
    # Convert the user credentials to base64 for preemptive authentication
    credentials = base64.b64encode((username + ":" + password).encode("ascii")).decode("ascii")
    request = aqHttp.CreatePostRequest(address)
    request.SetHeader("Authorization", "Basic " + credentials)
    request.SetHeader("Content-Type", "application/json")
    # building the request body; limited to pass, warning, and fail from within
    # TestComplete, mapping to their corresponding execution statuses in TM4J
    comment = "posting from TestComplete"  # default comment for test executions
    if StopTestCaseParams.Status == 0:    # lsOk
        statusId = "Pass"                 # Passed
    elif StopTestCaseParams.Status == 1:  # lsWarning
        statusId = "Warning"              # Passed with a warning
        comment = StopTestCaseParams.FirstWarningMessage
    elif StopTestCaseParams.Status == 2:  # lsError
        statusId = "Fail"                 # Failed
        comment = StopTestCaseParams.FirstErrorMessage
    # request body for each pertinent status
    requestBody = {"status": statusId, "comment": comment}
    response = request.Send(json.dumps(requestBody))
    # in case the POST request fails, let us know via the logs
    if response.StatusCode != 201:
        Log.Warning("Failed to send results to TM4J. See the details in the previous message.")
```

We're all set to run our TestComplete tests while having the corresponding execution statuses get automatically updated within Zephyr Scale. Now, a few things to remember:
- Always include the "createTestRun" KDT as the topmost test item within your project run; this is needed to create the test cycle within Zephyr Scale (and there was no "OnProjectStart" event handler, so I needed to do this manually).
- Make sure that within the script routine getTestCaseID() you have mapped the key/value pairs correctly, with the matching names and test case keys.
- Create a test set within the project explorer for the test items you'd like to run (mapped per the above bullet point; otherwise the event handler will throw an error).

Now every time you run your TestComplete tests, you should see a corresponding test run within Zephyr Scale, with the proper execution statuses and pass/warning/error messages. You can go on to customize/parameterize the POST body that we create to contain even more information (i.e. environment, tester, attachments, etc.)
and you can go ahead and leverage those Zephyr Scale test management reports now, looking at execution efforts, defects raised, and traceability for all of the TestComplete tests you've designed to report back to Zephyr Scale. Happy Testing!

TestComplete and Device Cloud Add On
Hi All, today I wanted to discuss the Device Cloud add-on and its practical usage just a bit further. Most of what I'm talking about is pretty well covered in the documentation here: https://support.smartbear.com/testexecute/docs/running/cross-platform-tests/run/command-line.html

In short, once we have a stable set of test cases that we'd like to run in parallel across multiple OS and browser configurations, this should be your go-to move! This will make use of a newly released application called "TestExecuteLite.exe", which supports launching multiple instances of the application, hence the "parallels" the documentation above refers to. Things to note:
- TestExecuteLite (henceforth to be called TELite) can launch multiple instances, and is currently supported in the Jenkins plugin/pipeline, the Azure DevOps pipeline, and the command line.
- We need to use the "new name mapping" with XPath and CSS selectors.
- We should try to make our entire .pjs contain only web-app-based projects/tests (no low-level procedures, desktop-based tests/TestedApps, etc.).

To start, we need to create a function that will consume our custom command line parameters. The docs above have the same function, but here is mine (which is annotated just a little bit more):

```python
config = ""

def processCommandLineArguments():
    # ParamCount returns the number of parameters used in the command line args;
    # add 1 to the range for list indexing so we also process the last cmd arg
    for i in range(0, BuiltIn.ParamCount() + 1):
        # print each of the cmd args (for some reason TC can't print the custom arg entry at the very end)
        Log.Message(BuiltIn.ParamStr(i))
        # ParamStr returns each of the command line entries as a string, e.g. /config=Safari
        processCommandLineArgument(BuiltIn.ParamStr(i))

def processCommandLineArgument(arg):
    # split on '=' to create a list with the newly split strings as the index items
    items = arg.split("=")
    if (len(items) != 2):
        # if the cmd arg didn't use an '=' (hence not a custom arg), return
        # (len(items) would be 1 if there is no '=' in the cmd line arg)
        return
    # get rid of break characters; replace the slash with nothing for the items[0] entry
    item = aqString.ToLower(aqString.Replace(aqString.Trim(items[0]), "/", ""))
    if (item == "config"):  # if we find the custom arg that we are looking for ("config")
        Log.Message(items[1])  # this should read as the configuration name that we want to run
        global config  # set the config variable for this unit using the "global" keyword
        config = items[1]  # index 1 of items should be the config name, e.g. Safari
```

Now that we can receive the command line parameters dictating which config to use, we need to create another function which will tell TestComplete/TestExecuteLite what those configs are, in the JSON format that the Run Remote Browser operation expects as part of its parameters.
We do it with the code below (once again documented, but I changed it slightly to make it a bit more readable):

```python
config_dict = {}  # empty dict

# implementing a switch-case style dictionary for the getCapabilities function
# (dict support for JSON notation makes it easier to manipulate later)
def getCapabilities(argument):
    # list out some capabilities that we want to use
    config_dict = {
        "Safari": {"platform": "Mac OSX 10.15", "browserName": "Safari", "version": "13", "screenResolution": "1366x768", "record_video": "true"},
        "Edge": {"platform": "Windows 10", "browserName": "MicrosoftEdge", "version": "79", "screenResolution": "1366x768", "record_video": "true"},
        "Chrome": {"platform": "Windows 10", "browserName": "Chrome", "version": "80x64", "screenResolution": "1366x768", "record_video": "true"},
    }
    # get the caps in dictionary format (JSON-ish standard); otherwise output "Invalid Config"
    capabilities = config_dict.get(argument, "Invalid Config")
    # assign the current test case name to capabilities['name']
    capabilities['name'] = str(aqTestCase.CurrentTestCase.Name) + " " + argument + " Test"
    return capabilities  # output the cmd line config in capabilities format, to be consumed in our actual test
```

Now that we have these helper functions, we want to create one more function so that we can generalize our tests to receive the command line arguments from either the command line or from our CI/CD frameworks, such that it kicks off our tests in the correct configurations in our remote browser.
```python
def startUp_withConfigs(URL):
    # Get capabilities from the command line
    processCommandLineArguments()
    aCap = getCapabilities(config)  # get the corresponding capabilities from the getCapabilities() function
    if (aCap != None):
        server = "http://hub.crossbrowsertesting.com:80/wd/hub"  # our CBT server; no need to ever change this
        Browsers.RemoteItem[server, aCap].Run(URL)  # launching the remote browser
```

Now, if you disable/comment out the first line of your web app functional tests that references hard-coded local/remote browsers, this startUp_withConfigs(URL) function will take the command line arguments, parse them, supply the correct capabilities, and start up the remote browser session at the specified URL (of course, we don't need to make this URL mandatory). If you've built some modular test cases, you can go ahead and start changing/disabling/commenting out and replacing the other "blocks" of code with similarly refactored, generalized code like:

```python
def NavigateCurrentBrowser(URL):
    Browsers.CurrentBrowser.Navigate(URL)

def MaximizeCurrentBrowser():
    Aliases.browser.BrowserWindow(0).Maximize()
```

to make sure that we continue to use the active remote browser session, stay on the designated web page, and keep our remote browsers maximized (sometimes elements will hit an object-not-found error if this isn't true, at least on non-mobile browsers).
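As a sanity check, the /config=... parsing logic above can be exercised outside TestComplete. Here is a pure-Python sketch (the helper name is mine, and plain str methods stand in for the aqString calls):

```python
def parse_config_arg(args):
    # Mirror processCommandLineArgument: find "/config=<name>" among the
    # command line entries and return the configuration name
    for arg in args:
        items = arg.split("=")
        if len(items) != 2:  # not a custom /key=value argument
            continue
        if items[0].strip().replace("/", "").lower() == "config":
            return items[1]
    return ""  # no config argument found

# parse_config_arg(["/r", "/config=Safari"]) → "Safari"
```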
Now your test scripts may look something like this, where the previous lines of code expecting a local browser or a hard-coded remote browser config have been commented out; or, for your keyword tests, something like the following screenshot, such that when run through a CI/CD framework or the command line, we achieve as many parallels as we described within the configs section alongside the tests we selected. (I won't go into deployment configurations for CI/CD and the command line, since that is also well documented, and without code, so no further annotations are needed from me.)

I'd love to hear if anyone in the community has more practical usage of the TELite application, or any questions or concerns surrounding the example shown above. Being able to make certain of your web app's functionality across multiple configs of the major browsers can certainly help with traditional browser coverage in testing, but now we are able to do it in parallel using a combination of 3-4 functions. What are some other ways that the community has been increasing testing velocity when it comes to web app testing?

Get properties of a web page element
Question

In this example, we will demonstrate how to get some more information about a web element by using TestComplete: find the "Start a topic" button on the community page, get the following info about it (color, font family, and font size), and post the script and the log info below.

Answer

```javascript
// JavaScript
function test2()
{
  var url = "https://community.smartbear.com/t5/TestComplete-General-Discussions/TechCorner-Challenge-13-Get-properties-of-a-web-page-element/m-p/207539";
  Browsers.Item(btChrome).Run(url);
  var page = Sys.Browser().Page(url);
  var element = page.FindChildByXPath("//*[@class='NewTopic-link']");
  var style = page.contentDocument.defaultView.getComputedStyle(element, "");
  Log.Message("The properties of the web page element are as follows:");
  Log.Message("Background Color : " + style.backgroundColor);
  Log.Message("Font Family : " + style.fontFamily);
  Log.Message("Font Size : " + style.fontSize);
}
```