CrossBrowserTesting to BitBar Selenium Script Migration - QuickStart Guide
On June 21, 2022, SmartBear launched web application testing on our unified cloud testing solution, bringing both browser and device testing together on the BitBar platform! We have listened to our customers: having one product for both web and device testing will better meet your needs. BitBar is a scalable, highly reliable, and performant platform with multiple data centers. On it, you will have access to the latest browsers and devices, with additional deployment options to meet your needs, including private cloud and dedicated devices. For more frequently asked questions about the launch of web app testing on BitBar, visit our FAQ.

This QuickStart Guide walks through converting your existing CrossBrowserTesting Selenium tests to run on BitBar. The Selenium hubs and API calls have changed and will require conversion, though little else needs to change. As with CrossBrowserTesting, we provide sample scripts and a Selenium Capabilities Configurator you can use to build out the specific capabilities for the device you want to test. This tool can be found here.

To start the conversion, you will need your BitBar API key in place of the CrossBrowserTesting Authkey. This is the new method for authenticating the user and making API calls. You can find your BitBar API key in account settings as described here.

Most of the code examples and talking points for the conversion reference the CrossBrowserTesting Selenium sample script available here. All code snippets in this article are in Python.

Now that you have your BitBar API key, let's replace the original Authkey variable, found on line 18 of the CrossBrowserTesting sample script, with a new BitBar API key variable. This connection to the BitBar API is used for processes such as taking screenshots and setting the status of your tests.

# Old CrossBrowserTesting sample Authkey variable
self.authkey = "<CrossBrowserTesting Authkey>"

# New BitBar API key variable
self.apiKey = "<insert your BitBar API Key here>"

Regarding the capabilities used in BitBar, there are a couple of things to note. First, we do not need to specify a 'record_video' capability as we do in CrossBrowserTesting. Videos are generated automatically for every test, so this capability is no longer needed, and providing it will result in WebDriver errors. Second, we now also pass the BitBar API key along with the capabilities:

capabilities = {
    'platform': 'Windows',
    'osVersion': '11',
    'browserName': 'chrome',
    'version': '102',
    'resolution': '1920x1080',
    'bitbar_apiKey': '<insert your BitBar API key here>',
}
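Hardcoding the API key is fine for a quick trial, but you may prefer to keep it out of source control. Below is a minimal sketch of one way to do that, assuming a hypothetical BITBAR_API_KEY environment variable (not part of the original sample):

import os

# Read the BitBar API key from the environment instead of hardcoding it.
# BITBAR_API_KEY is a hypothetical variable name; use whatever your CI system provides.
api_key = os.environ.get('BITBAR_API_KEY')
if not api_key:
    raise RuntimeError('BITBAR_API_KEY is not set')

capabilities = {
    'platform': 'Windows',
    'osVersion': '11',
    'browserName': 'chrome',
    'version': '102',
    'resolution': '1920x1080',
    'bitbar_apiKey': api_key,
}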
With BitBar, we now have four Selenium hub options to choose from. Both US and EU Selenium hubs are available to aid performance for your location, and separate hubs are provided depending on the type of device (desktop vs. mobile) you wish to test against. Pick the applicable desktop or mobile hub closest to your location and replace your existing hub with the updated URL.

BitBar Desktop Selenium Hubs:
US_WEST: DESKTOP: https://us-west-desktop-hub.bitbar.com/wd/hub
EU: DESKTOP: https://eu-desktop-hub.bitbar.com/wd/hub

BitBar Mobile Selenium Hubs:
US_WEST: MOBILE: https://us-west-mobile-hub.bitbar.com/wd/hub
EU: MOBILE: https://eu-mobile-hub.bitbar.com/wd/hub

# start the remote browser on our server
self.driver = webdriver.Remote(
    desired_capabilities=capabilities,
    command_executor="https://us-west-desktop-hub.bitbar.com/wd/hub"
)

Now that we have our BitBar API key, capabilities, and Selenium hub set up, we can move on to altering our requests for screenshots and test result status. The CrossBrowserTesting sample script uses standalone API requests to create screenshots. In the BitBar sample scripts, we instead use the Selenium driver itself to take the screenshot and store it locally, then use the BitBar API to push the locally saved image back to our project. The Swagger spec for the BitBar Cloud API can be found here.

On line 30 of the BitBar Selenium sample script, we set a location to store screenshots on the local machine. Note that this is set up to store files in a directory called 'screenshots' in the root folder of your project.

self.screenshot_dir = os.getcwd() + '/screenshots'

To take a screenshot and store it, we call 'get_screenshot_as_file', as seen on line 45 of the BitBar Selenium example script.

self.driver.get_screenshot_as_file(self.screenshot_dir + '/' + '1_home_page.png')
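One small practical note: get_screenshot_as_file expects the target folder to exist; in the Python bindings it returns False rather than raising when it cannot write the file, so a missing 'screenshots' directory can fail quietly. A minimal sketch of guarding against that, not part of the original sample:

import os

# Create the local screenshots directory up front; harmless if it already exists.
screenshot_dir = os.path.join(os.getcwd(), 'screenshots')
os.makedirs(screenshot_dir, exist_ok=True)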
Now we want to take our screenshot and push it back to our project in BitBar. Note that, in this Python example, we use the 'httpx' module for the API calls back to BitBar: the 'requests' module only supports HTTP/1.1, and we need a module capable of handling HTTP/2 requests.

# Let's take our locally saved screenshot and push it back to BitBar!
# First we declare the 'params' and 'files' variables to hold our screenshot name and location.
params = {
    'name': self.screenshotName1,
}
files = {
    'file': open(self.screenshot_dir + '/' + self.screenshotName1, 'rb'),
}

# Now we build out our API call to push our locally saved screenshot back to our BitBar project
print("Uploading our Screenshot")
response = httpx.post('https://cloud.bitbar.com/api/v2/me/projects/' + self.projectID +
                      '/runs/' + self.RunId +
                      '/device-sessions/' + self.deviceRunID +
                      '/output-file-set/files',
                      params=params, files=files, auth=(self.apiKey, ''))

# Here we check that our upload was successful
if response.status_code == 201:
    print("Screenshot Uploaded Successfully")
else:
    print("Whoops, something went wrong uploading the screenshot.")

The final piece of the puzzle is to set our test result status. The naming conventions differ here: BitBar uses 'SUCCEEDED' and 'FAILED', versus 'pass' and 'fail' for CrossBrowserTesting.

# CrossBrowserTesting successful test syntax
self.test_result = 'pass'
# CrossBrowserTesting failed test syntax
self.test_result = 'fail'

# BitBar successful test syntax
self.test_result = 'SUCCEEDED'
# BitBar failed test syntax
self.test_result = 'FAILED'

In the snippet below, we start by performing a GET request for session information. This request is sent to the same Selenium hub we are using for the WebDriver, so make sure the hub address matches the one used for the WebDriver. We recommend turning the hub address into a variable to avoid having to switch it manually for alternate hubs. These steps are found in the 'tearDown' function of the updated CrossBrowserTesting sample script found here.

# get all necessary IDs of current session
response = requests.get('https://us-west-desktop-hub.bitbar.com/sessions/' + self.driver.session_id,
                        auth=(self.apiKey, '')).json()
deviceRunID = str(response["deviceRunId"])
projectID = str(response["projectId"])
RunId = str(response["testRunId"])

Finally, we set the test result with the POST request below, using the session information retrieved with the GET request above. Note that the URL for the POST request does NOT need to be updated to reflect the specific Selenium hub in use.

# Here we make the API call to set the test's score
requests.post('https://cloud.bitbar.com/api/v2/me/projects/' + projectID +
              '/runs/' + RunId +
              '/device-sessions/' + deviceRunID,
              params={'state': self.test_result}, auth=(self.apiKey, ''))
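Following the recommendation above, one way to avoid editing the hub address in several places is to keep it in a single variable (optionally overridden through an environment variable) and derive the session-information URL from it, so the WebDriver connection and the session lookup always point at the same hub. A minimal sketch under that assumption; the SELENIUM_HUB and BITBAR_API_KEY names are hypothetical:

import os
import requests
from selenium import webdriver

# Single source of truth for the hub; override it per environment instead of editing code.
SELENIUM_HUB = os.environ.get('SELENIUM_HUB', 'https://us-west-desktop-hub.bitbar.com/wd/hub')
SESSIONS_URL = SELENIUM_HUB.replace('/wd/hub', '/sessions/')

api_key = os.environ.get('BITBAR_API_KEY', '<insert your BitBar API key here>')
capabilities = {
    'platform': 'Windows',
    'osVersion': '11',
    'browserName': 'chrome',
    'version': '102',
    'resolution': '1920x1080',
    'bitbar_apiKey': api_key,
}

driver = webdriver.Remote(
    desired_capabilities=capabilities,
    command_executor=SELENIUM_HUB,
)

# ... test steps would go here ...

# The session lookup is sent to the same hub the driver is connected to,
# so changing SELENIUM_HUB keeps both calls in sync.
session_info = requests.get(SESSIONS_URL + driver.session_id, auth=(api_key, '')).json()
print(session_info["projectId"], session_info["testRunId"], session_info["deviceRunId"])

driver.quit()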
Now that we have made these changes, you are ready to run your test through BitBar! To summarize, we replaced the CrossBrowserTesting Authkey with the BitBar API key, set the new Selenium hub address, built new screenshot calls, and updated the test result function.

Quick Reference Documentation:
BitBar Web FAQ.
Complete documentation with code samples in various languages is found here.
Retrieve your BitBar API Key in account settings as described here.
BitBar Selenium Capability Configurator and sample scripts are found here.
CrossBrowserTesting Capability Configurator and sample scripts are found here.
The Swagger spec for our BitBar Cloud API can be found here.

Here is our complete Python CBT-to-BitBar conversion script:

# Please visit http://selenium-python.readthedocs.io/ for detailed installation and instructions
# Getting started: http://docs.seleniumhq.org/docs/03_webdriver.jsp
# API details: https://github.com/SeleniumHQ/selenium#selenium
# Requests is the easiest way to make RESTful API calls in Python.
# You can install it by following the instructions here:
# http://docs.python-requests.org/en/master/user/install/

import unittest
from selenium import webdriver
import requests
import os
import httpx


class BasicTest(unittest.TestCase):

    def setUp(self):
        # get rid of the old way of doing auth with just an API key
        self.apiKey = ''
        self.api_session = requests.Session()
        self.test_result = None
        self.screenshot_dir = os.getcwd() + '/screenshots'
        self.screenshotName1 = 'SS1.png'
        self.deviceRunID = ""
        self.projectID = ""
        self.RunId = ""

        # old platformName has been split into platformName and osVersion
        capabilities = {
            'bitbar_apiKey': '',
            'platform': 'Linux',
            'osVersion': '18.04',
            'browserName': 'firefox',
            'version': '101',
            'resolution': '2560x1920',
        }

        # start the remote browser on our server
        self.driver = webdriver.Remote(
            desired_capabilities=capabilities,
            # the hub is changed, also not sending the user and pass through the hub anymore
            # US hub url: https://appium-us.bitbar.com/wd/hub
            command_executor="https://us-west-desktop-hub.bitbar.com/wd/hub"
            # EU hub url
        )

        self.driver.implicitly_wait(20)

    def test_CBT(self):
        # We wrap this all in a try/except so we can set pass/fail at the end
        try:
            # load the page url
            print('Loading Url')
            self.driver.get('http://crossbrowsertesting.github.io/selenium_example_page.html')

            # maximize the window - DESKTOPS ONLY
            # print('Maximizing window')
            # self.driver.maximize_window()

            # check the title
            print('Checking title')
            self.assertEqual("Selenium Test Example Page", self.driver.title)

            # take a screenshot and save it locally
            print("Taking a Screenshot")
            self.driver.get_screenshot_as_file(self.screenshot_dir + '/' + self.screenshotName1)

            # change pass to SUCCEEDED
            self.test_result = 'SUCCEEDED'

        except AssertionError as e:
            # delete cbt api calls
            # change fail to FAILED
            self.test_result = 'FAILED'
            raise

    def tearDown(self):
        print("Done with session %s" % self.driver.session_id)

        if self.test_result is not None:
            # get all necessary IDs of current session
            response = requests.get('https://us-west-desktop-hub.bitbar.com/sessions/' + self.driver.session_id,
                                    auth=(self.apiKey, '')).json()
            self.deviceRunID = str(response["deviceRunId"])
            self.projectID = str(response["projectId"])
            self.RunId = str(response["testRunId"])

            # Here we make the api call to set the test's score
            requests.post('https://cloud.bitbar.com/api/v2/me/projects/' + self.projectID +
                          '/runs/' + self.RunId +
                          '/device-sessions/' + self.deviceRunID,
                          params={'state': self.test_result},
                          auth=(self.apiKey, ''))

            # let's take our locally saved screenshot and push it back to BitBar!
            # First we start by declaring the 'params' and 'files' variables to hold our screenshot name and location.
            params = {
                'name': self.screenshotName1,
            }
            files = {
                'file': open(self.screenshot_dir + '/' + self.screenshotName1, 'rb'),
            }

            # Now we build out our API call to push our locally saved screenshot back to our BitBar project
            print("Uploading our Screenshot")
            response = httpx.post('https://cloud.bitbar.com/api/v2/me/projects/' + self.projectID +
                                  '/runs/' + self.RunId +
                                  '/device-sessions/' + self.deviceRunID +
                                  '/output-file-set/files',
                                  params=params, files=files, auth=(self.apiKey, ''))

            # Here we check that our upload was successful
            if response.status_code == 201:
                print("Screenshot Uploaded Successfully")
            else:
                print("Whoops, something went wrong uploading the screenshot.")

        self.driver.quit()


if __name__ == '__main__':
    unittest.main(warnings='ignore')
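One caveat on the sample above: it passes desired_capabilities directly to webdriver.Remote, which is the Selenium 3 style; that argument was deprecated in Selenium 4 and removed in later 4.x Python releases. If your project is already on Selenium 4, a rough sketch of the equivalent connection might look like the following (the capability names simply mirror the earlier Chrome example; verify them against the Capabilities Configurator for your setup):

from selenium import webdriver

# Selenium 4 style: set the BitBar capabilities on an Options object
# instead of passing a desired_capabilities dictionary.
options = webdriver.ChromeOptions()
options.set_capability('platform', 'Windows')
options.set_capability('osVersion', '11')
options.set_capability('browserName', 'chrome')
options.set_capability('version', '102')
options.set_capability('resolution', '1920x1080')
options.set_capability('bitbar_apiKey', '<insert your BitBar API key here>')

driver = webdriver.Remote(
    command_executor="https://us-west-desktop-hub.bitbar.com/wd/hub",
    options=options,
)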
Thanks for reading along, I hope this helps your conversion to BitBar! Happy Testing!

Code for Tracking Tested App Info On Start Test
Question

I'd like to know the information about the tested app that each test ran on, so I wrote up a little code and put it in the OnStartTest test engine event.

Answer

This runs every time I run a test and logs the tested app's info. It is wonderful for tracking one-off test runs and which app version a test passed on and which it failed on.

https://support.smartbear.com/testcomplete/docs/testing-with/advanced/handling-events/index.html
https://support.smartbear.com/testcomplete/docs/reference/events/onstarttest.html?sbsearch=OnStartTe...

function EventControl_OnStartTest(Sender)
{
  try
  {
    Log.AppendFolder("< EventControl_OnStartTest >");
    Log.AppendFolder("Version Information");

    var FileName = "C:\\Program Files (x86)\\Some Folder\\TestedApp.exe";
    var VerInfo = aqFileSystem.GetFileInfo(FileName).VersionInfo;
    var FileInf = aqFileSystem.GetFileInfo(FileName);
    var HostName = Sys.HostName;
    var dtObj;

    Log.Message("File Name: " + FileInf.Name);
    Log.Message("File Version: " + VerInfo.FileMajorVersion + "." + VerInfo.FileMinorVersion + "." +
                VerInfo.FileBuildVersion + "." + VerInfo.FileRevisionVersion);
    dtObj = new Date(FileInf.DateLastModified);
    Log.Message("File Date: " + FileInf.DateLastModified);
    Log.Message("Host Name: " + HostName);

    Log.PopLogFolder();
  }
  catch(err)
  {
    Process.Halt("Exception: EventControl_OnStartTest - " + err.message); // Stop Test Run.
  }
}

Advanced search object for complex model
Hello all,

Please find here my implementation of an advanced in-memory object search for a complex tree model. Sometimes, when the tested app is really complex (especially a heavy client application), the search for an object can be too slow because of the depth involved. One solution is to find intermediate objects that narrow and guide the search, which means chaining multiple FindChildEx calls. To make that easier, I wrote the following method:

/**
 * <a id="system.findContainer"></a>
 * Searches for a TestComplete in-memory object in a containerized way.<br>
 * If <i>system.debug</i> is <b>true</b>, additional logs about the search are written.
 * @memberof system
 * @function
 * @param {Object} ParentObject - Parent object holding the object to search for; it is the starting root of the search. The higher-level the parent object, the longer the search may take.
 * @param {array} Props - Array of the properties to search for, one entry per container. If several properties are searched for a given container, the separator is |. Example: ["Visible|Name", "Name"] -> searches both the Visible and Name properties for the first container and only Name for the second container.
 * @param {array} Values - Array of the property values of the objects to search for.
 * @param {number|array} [Depth=4] - Limits the search to <i>Depth</i> levels below the parent object. The greater the depth, the longer the search may take. Can be an array to define a depth per container.
 * @returns {object} Returns the object if it exists, or <b>null</b> otherwise or in case of error.
 */
system.findContainer = function(ParentObject = null, Props = null, Values = null, Depth = 4)
{
  // Check the mandatory parameters
  if ((ParentObject == null) || (Props == null) || (Values == null)) {
    if (system.debug) Log.Message('findContainer() - A mandatory parameter is missing (ParentObject, Props or Values)', "", pmHigher, system.logWarning);
    return null;
  }

  if (ParentObject == "current") {
    if ((typeof system.container != 'undefined') && (system.container != null) && (system.container.Exists))
      ParentObject = system.container;
    else
      return null;
  }

  if (system.debug) {
    let propsKey = typeof Props == "string" ? Props : Props.join("|");
    let valuesKey;
    switch (typeof Values) {
      case "string":
        valuesKey = Values;
        break;
      case "number":
      case "boolean":
        valuesKey = Values.toString();
        break;
      case "null":
        valuesKey = "null";
        break;
      default:
        valuesKey = Values.join("|");
        break;
    }
    Log.Message("findContainer(" + ParentObject.FullName + ", " + propsKey + ", " + valuesKey + ", " + Depth.toString() + ") - Searching for object", "", pmLowest, system.logDebug);
  }

  var objectfind = ParentObject;
  let currentProps;
  let currentValues;
  let currentDepth;

  try {
    for (let i = 0; i < Props.length; i++) {
      currentProps = Props[i].split('|');
      currentValues = new Array();
      currentDepth = typeof Depth == 'number' ? Depth : Depth[i];
      for (let j = 0; j < currentProps.length; j++) {
        currentValues.push(Values.shift());
      }
      objectfind = objectfind.FindChildEx(currentProps, currentValues, currentDepth, true, system.time.medium);
      if ((typeof objectfind == 'undefined') || ((objectfind != null) && (!objectfind.Exists)))
        break;
    }
  }
  catch (e) {
    objectfind = null;
    if (system.debug) Log.Message("findContainer() - An exception occurred during the search", e.message, pmHighest, system.logError);
  }
  finally {
    // Return only "null" when the object is not found or on error
    if ((typeof objectfind == 'undefined') || ((objectfind != null) && (!objectfind.Exists)))
      objectfind = null;
    if (system.debug) {
      if (objectfind == null)
        Log.Message("findContainer() returned a null or undefined object", "", pmLowest, system.logDebug);
      else
        Log.Message('findContainer() found an object', objectfind.FullName, pmLowest, system.logDebug);
    }
    return objectfind;
  }
}

The method comes from my testing framework, so everything starting with system. is specific to it, but the usage should be clear:

system.debug -> true activates an additional level of logging.
system.logDebug -> specific debug log attributes.
system.container -> a global variable that can hold a frequently used object during a portion of a test.
system.time.medium -> 5 seconds (5000 ms).

Sample usage:

let objectToTest = system.findContainer(SourceObject, ["Visible|Name", "Visible|Name"], [true, 'dlmAccueil', true, 'editionEnfants']);

This searches SourceObject for a visible object named 'editionEnfants' located inside another visible object named 'dlmAccueil'. The search uses the default maximum depth of 4 levels for both objects.

let objectToTest = system.findContainer(SourceObject, ["Visible|Name", "Name", "Header|Index|Visible"], [true, "Saisie", "DockingPanel", "ILPaneGroup", 1, true], [2, 3, 2]);

This searches SourceObject for a visible object with its Index property set to 1 and its Header property set to 'ILPaneGroup', located inside another object named 'DockingPanel', which is itself located inside a visible object named 'Saisie'. The search uses a depth of 2 levels for the first object, 3 levels for the second object, and 2 levels for the final object.