Video Recording of Test Execution is Easy
Hi folks! My name is Alexander Gubarev, and I'm one of the QA engineers responsible for TestComplete quality. Today, I want to share the VideoRecorder extension for TestComplete with you, which I hope will be helpful in your job. This extension enables you to record videos of your automated tests running in SmartBear TestComplete or TestExecute. It records a video of your test runs, helping you check the test execution and understand what happened in your system and in the tested application during the run. As we all know, it is sometimes really difficult to find the cause of issues that occur during nightly test runs. Videos can help you with this.

The extension adds the VideoRecorder script object for starting and stopping video recording from your script tests, and the Start Video Recording and Stop Video Recording keyword-test operations for doing the same from keyword tests. Getting started with the VideoRecorder extension is easy - you simply install the extension on your computer and call it from TestComplete.

INSTALL VIDEORECORDER
1. Download the VLC installer from https://www.videolan.org/.
2. Install the VLC media player on your computer. The installation is straightforward - just follow the instructions of the installation wizard.
3. Download VideoRecorder.tcx (it's attached to this article).
4. Close TestComplete or TestExecute.
5. Double-click the extension file to install it in TestComplete or TestExecute.

USE VIDEORECORDER
1. In Keyword Tests
Add the Start Video Recording and Stop Video Recording operations at the beginning and at the end of your test. You can find these operations in the Logging operation category.
2. In Scripts
Use the VideoRecorder.Start() method to start recording and VideoRecorder.Stop() to stop it. Code example:

```javascript
//JScript
function foo()
{
  // Start recording with High quality
  VideoRecorder.Start("High");

  // Do some test actions
  // ...

  // Stop recording
  VideoRecorder.Stop();
}
```

Find the recorded video in your project folder - the link to it is located in the Test Log panel.

FULL DOCUMENTATION
https://github.com/SmartBear/testcomplete-videorecorder-extension/blob/master/README.md

WANT TO IMPROVE THE VIDEORECORDER?
We put this script extension on GitHub, so you can take part in its development. Feel free to make pull requests that can make this extension better:
https://github.com/SmartBear/testcomplete-videorecorder-extension

LATEST VIDEORECORDER VERSION
To make sure you have the latest version of the script extension, you can download VideoRecorder.tcx from the GitHub repository:
https://github.com/SmartBear/testcomplete-videorecorder-extension/releases/latest
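TIP: STOPPING THE RECORDER ON FAILURES
If a test fails partway through, the recorder can be left running. A minimal JScript sketch of one way to guard against that - wrapping the test body in try/finally so Stop() always runs (the test actions here are just a placeholder):

```javascript
//JScript
function RecordedTest()
{
  // Start recording before any test actions ("High" is one of the quality presets)
  VideoRecorder.Start("High");
  try
  {
    // Placeholder for your actual test steps
    Log.Message("Running test steps...");
  }
  finally
  {
    // Runs even if a step above throws, so the video file is finalized and logged
    VideoRecorder.Stop();
  }
}
```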
Javascript for TestComplete
Dear community, I just released a course teaching Javascript to TestComplete testers who are interested in learning how Javascript works, so that they can take advantage of this powerful scripting language inside of TestComplete. The course is purely Javascript, to explain all the ins and outs, and everything you learn in it is 100% usable in TestComplete. The course is free!
https://github.com/TheTrainingBoss/Javascript-for-TestComplete
Enjoy and good luck!
-Lino
TestComplete and (newly released) Zephyr Scale!!!
Update (February 4, 2022): Since this post was created, some core changes have happened:
1. Atlassian's move to the cloud
2. Zephyr Scale cloud instances have a different set of APIs and use Bearer token authentication (unlike the example code snippets provided below)

As such, I am adding a link (https://community.smartbear.com/html/assets/ZScale.zip) to download the zipped .PJS file of the example integration. A couple of things to change in the zipped file to make it work for you:
1. Go to Project variables, find the one called cloud_token, and replace it with yours. You can get it by clicking on your profile icon in Jira and then clicking Zephyr Scale API token. (It should say "REPLACE_ME" when you open the Project variables page, just to make it extra clear.)
2. Go to the script routine called "utilities":
   - Change the createTestRun_cloud function's project key (line 22) to the Jira project key where your Zephyr Scale tests are.
   - Change the createTestRun_cloud function's "folderID" value (line 23) to your Zephyr Scale test folder ID (this is optional; you can also just delete/comment out this line).
   - Change the getTestCaseID function (lines 85-88) so that your Zephyr Scale test case keys (they look like JIRAKEY-T<123>) match the names of the keyword tests you have in TestComplete (in my case "login", "logout", "UI_Error", and "UI_Warning", which mapped to KIM-T<123> etc.).
3. Go to the script routine called "test11":
   - Change lines 36-37 (optional) to match your Jira user ID (you can google how to find this).
   - Change line 104 to your Jira project key.
   - Change line 105 (optional) to the Zephyr Scale test case folder ID (you can google how to get this, too).

That should be enough to get started on this integration. The rest of the content below is a bit outdated (the APIs referenced are for server deployments; we no longer need the createTestRun item to create cycles, since I created an additional OnStart event handler that checks whether the current test is the "first" test item in your project run; etc.), but the general idea remains the same.

------------------------------------

Hi Everyone! Today I'm bringing news about Zephyr Scale (previously called TM4J, a Jira test management plugin that SmartBear acquired a few months ago). This is exciting because Zephyr Scale is considerably different from Zephyr for Jira in a number of ways, the biggest differentiation being that Zephyr Scale creates its own table in your Jira database to house all of your test case management data and analytics, instead of using issues of type "test". This means that you will not experience the typical performance degradation that you might expect from housing hundreds, if not thousands, of test cases within your Jira instance. Additionally, there are many more reports available: upwards of 70 (and it's almost "more the merrier" season, so that's nice).

Now, how does this relate to TestComplete at all? Well, seeing as we don't have a native integration between the two tools as of yet (watch out in the coming months?), we have to rely on Zephyr Scale's REST API in order to map to corresponding test cases kept in Zephyr Scale's test library. I wanted to explore the option of having our TestComplete tests be mapped and updated (per execution) back to the corresponding test cases housed within Zephyr Scale, by making use of event handlers and the REST API endpoints.
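Side note on the cloud update above: for cloud instances, the Authorization header carries a Bearer token instead of Basic credentials. A minimal sketch of what that request could look like, written in TestComplete's JavaScript - the endpoint and body field names here are my assumptions, so verify them against the current Zephyr Scale cloud API docs before using:

```javascript
// Sketch only: posting one execution result to a Zephyr Scale CLOUD instance.
// Endpoint and field names are assumptions to verify against the cloud API docs.
function postCloudResult()
{
  var address = "https://api.zephyrscale.smartbear.com/v2/testexecutions"; // assumed endpoint
  var request = aqHttp.CreatePostRequest(address);
  // Cloud auth: a Bearer token (stored in the cloud_token project variable) replaces Basic credentials
  request.SetHeader("Authorization", "Bearer " + Project.Variables.cloud_token);
  request.SetHeader("Content-Type", "application/json");

  var body = {
    projectKey: "KIM",                           // your Jira project key
    testCaseKey: "KIM-T279",                     // the Zephyr Scale test case
    testCycleKey: Project.Variables.testRunKey,  // cycle created earlier in the run
    statusName: "Pass"                           // assumed field name for the execution status
  };
  var response = request.Send(JSON.stringify(body));
  Log.Message(response.StatusCode + ": " + response.Text);
}
```

With that caveat noted, the original server-based walkthrough follows.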
To start, I created a set of test cases to mirror the test cases within Zephyr Scale and TestComplete. You will notice that I have a "createTestRun" KDT test within my TestComplete project explorer. This test case makes the initial POST request to Zephyr Scale in order to create a test run (otherwise known as a test cycle). This is done by using TestComplete's aqHttp object. Within that createTestRun KDT, I'm just running a script routine, a.k.a. a Run Code Snippet operation (because that's the easiest way to use the aqHttp object). (Note: the snippet directly below is no longer needed for the integration. Instead, the OnStartTestCase event handler - test11 / def EventControl1_OnStartTestCase(Sender, StartTestCaseParams) - will check whether the currently started test case is the first test case in the project run, a.k.a. your test cycle, in which case it will create a new cycle for you.)

```python
import json
import base64

# init empty id dictionary
id_dict = {}

def createTestRun():
    projName = "Automated_TestComplete_Run " + str(aqDateTime.Now())  # timestamped test cycle name
    address = "https://YOUR-JIRA-INSTANCE/rest/atm/1.0/testrun"  # TM4J endpoint to create a test run
    username = "JIRA-username"  # TM4J username
    password = "JIRA-password!"  # TM4J password

    # Convert the user credentials to base64 for preemptive authentication
    credentials = base64.b64encode((username + ":" + password).encode("ascii")).decode("ascii")

    request = aqHttp.CreatePostRequest(address)
    request.SetHeader("Authorization", "Basic " + credentials)
    request.SetHeader("Content-Type", "application/json")

    # initialize empty item list
    items = []
    for i in range(Project.TestItems.ItemCount):  # for all test items listed at the project level
        # grab each tc key as a value pair according to the name found in id_dict
        entry = {"testCaseKey": getTestCaseID(Project.TestItems.TestItem[i].Name)}
        items.append(entry)  # append as a test item

    # building the request body
    requestBody = {
        "name": projName,
        "projectKey": "KIM",  # jira project key for the tm4j project
        "items": items  # the items list holds the test case keys to be added to this test cycle
    }

    response = request.Send(json.dumps(requestBody))
    df = json.loads(response.Text)
    key = str(df["key"])

    # set the new test cycle key as a project-level variable for later use
    Project.Variables.testRunKey = key
    Log.Message(key)  # output the new test cycle key
```

Within that snippet, you may have noticed an operation called getTestCaseID(), and this is how I mapped my TestComplete tests back to the Zephyr Scale test cases (by name, to their corresponding test ID keys), as shown below:

```python
def getTestCaseID(argument):
    # list out test case IDs in dict format - this is where you map your internal
    # test cases (by name) to their corresponding tm4j test cases
    id_dict = {
        "login": "KIM-T279",
        "logout": "KIM-T280",
        "createTestRun": "KIM-T281",
        "UI_Error": "KIM-T282",
        "UI_Warning": "KIM-T283"
    }
    tc_ID = id_dict.get(argument, "Invalid testCase")  # get the test case key by name from the dictionary above
    return tc_ID  # output the tm4j test case ID
```

Referring to the screenshots above, you will notice that the names of my KDTs are the keys, whereas the corresponding Zephyr Scale test ID keys are the paired values within the id_dict variable. Now that we have a script that creates a new test cycle per execution and assigns all test cases (at the project level) to the newly created cycle, we just need to update that test run with the corresponding execution status for each test case to be run.
We can do this by leveraging TestComplete's OnStopTestCase event handler in conjunction with Zephyr Scale's REST API. The event handler:

```python
import json
import base64

def EventControl1_OnStopTestCase(Sender, StopTestCaseParams):
    import utilities  # to use the utility functions

    testrunKey = Project.Variables.testRunKey  # grab the test run key set by the createTestRun script
    tc_name = aqTestCase.CurrentTestCase.Name  # grab the current test case name for the getTestCaseID function below
    tcKey = utilities.getTestCaseID(tc_name)  # return the test case key for the resource path below

    # endpoint to post a test result for this test case in this test run
    address = "https://YOUR-JIRA-INSTANCE/rest/atm/1.0/testrun/" + str(testrunKey) + "/testcase/" + str(tcKey) + "/testresult"
    username = "JIRA-USERNAME"  # TM4J username
    password = "JIRA-PASSWORD!"  # TM4J password

    # Convert the user credentials to base64 for preemptive authentication
    credentials = base64.b64encode((username + ":" + password).encode("ascii")).decode("ascii")

    request = aqHttp.CreatePostRequest(address)
    request.SetHeader("Authorization", "Basic " + credentials)
    request.SetHeader("Content-Type", "application/json")

    # building the request body; limited to pass, warning, and fail from within TestComplete,
    # mapping to their corresponding execution statuses in tm4j
    comment = "posting from TestComplete"  # default comment for test executions
    if StopTestCaseParams.Status == 0:    # lsOk
        statusId = "Pass"                 # Passed
    elif StopTestCaseParams.Status == 1:  # lsWarning
        statusId = "Warning"              # Passed with a warning
        comment = StopTestCaseParams.FirstWarningMessage
    elif StopTestCaseParams.Status == 2:  # lsError
        statusId = "Fail"                 # Failed
        comment = StopTestCaseParams.FirstErrorMessage

    # request body for each pertinent status
    requestBody = {"status": statusId, "comment": comment}
    response = request.Send(json.dumps(requestBody))

    # in case the post request fails, let us know via the logs
    if response.StatusCode != 201:
        Log.Warning("Failed to send results to TM4J. See the details in the previous message.")
```

We're all set to run our TestComplete tests while having the corresponding execution statuses automatically updated within Zephyr Scale. Now, a few things to remember:
- Always include the "createTestRun" KDT as the topmost test item within your project run. This is needed to create the test cycle within Zephyr Scale (there was no "onProjectStart" event handler, so I needed to do this manually).
- Make sure that within the getTestCaseID() script routine you have mapped the key-value pairs correctly, with matching names and test case keys.
- Create a test set within the project explorer for the test items you'd like to run (mapped per the above bullet point; otherwise the event handler will throw an error).

Now, every time you run your TestComplete tests, you should see a corresponding test run within Zephyr Scale with the proper execution statuses and pass/warning/error messages. You can go on to customize/parameterize the POST body that we create to contain even more information (i.e., environment, tester, attachments, etc.), and you can leverage those Zephyr Scale test management reports now, looking at execution efforts, defects raised, and traceability for all of the TestComplete tests you've designed to report back to Zephyr Scale. Happy Testing!
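P.S. As a sketch of that parameterization idea, here is what an extended result body could look like, shown in TestComplete's JavaScript (the same aqHttp calls work from Python). The extra field names ("environment", "executedBy", "executionTime") are assumptions to verify against your Zephyr Scale server API documentation:

```javascript
// Sketch only - verify the extra field names against the Zephyr Scale server API docs
function buildExtendedResultBody(statusId, comment)
{
  return JSON.stringify({
    status: statusId,        // "Pass" / "Warning" / "Fail", as in the handler above
    comment: comment,        // first warning/error message, as in the handler above
    environment: "Chrome",   // assumed field: the execution environment
    executedBy: "vlad",      // assumed field: the executing user's key
    executionTime: 120000    // assumed field: duration in milliseconds
  });
}

// Usage inside the OnStopTestCase handler sketched above:
//   response = request.Send(buildExtendedResultBody(statusId, comment))
```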
Release of two TestComplete Workshops on Github
It is my pleasure to release today two workshops on TestComplete, one for keyword testing and one for script testing, free of charge to the TestComplete community. I have been lucky enough to use the product for 18 years with great success and wanted to give something back to the community, hoping that more testers will get the resources they need to successfully adopt this awesome product.
https://github.com/TheTrainingBoss/TestComplete-Keyword-Workshop
https://github.com/TheTrainingBoss/TestComplete-Scripting-Workshop
I want to thank tristaanogre, Marsha_R, and AlexKaras for all their help and support to the community over the years.
Go TestComplete! Happy Testing!
-Lino
Automation Execution Report - Ready to Go!
Hi, I hope you guys are aware of this thread: Automation Execution Report. I have now completed the scripting for creating the customized HTML report using JavaScript. PFA the TC project.

Steps to use it: ReportingFunctions.js has all the functions used to create the HTML reports, both per test case and as a high-level report. Basically, after importing ReportingFunctions.js into your project:

1) At the starting point of your execution, put the code below:

```javascript
ReportingFunctions.setLogsPath("C:\\AutomationLogs\\")
ReportingFunctions.setExecutionStartTime(aqDateTime.Time())
```

2) See the steps below for a test case; I guess the function names are self-explanatory:

```javascript
//Test Case #1
ReportingFunctions.setTestCaseExeStartTime(aqDateTime.Time())
ReportingFunctions.fn_createtestcasedescription("Module1","TestCaseID","TestCaseDescription","SYS");
ReportingFunctions.fn_createteststep(1,"Expe Result","Actual Result","Test Data",false);
ReportingFunctions.fn_createteststep(2,"Expe Result","Actual Result","Test Data",false);
ReportingFunctions.fn_createteststep(3,"Expe Result","Actual Result","Test Data",false);
ReportingFunctions.fn_createteststep(0,"Expe Result","Actual Result","Test Data",false);
ReportingFunctions.fn_createteststep(1,"Expe Result","Actual Result","Test Data",false);
ReportingFunctions.setTestCaseExeEndTime(aqDateTime.Time());
ReportingFunctions.fn_createtestcaseduration()
ReportingFunctions.fn_completetestcase()
```

3) At the end of the execution, put the code below:

```javascript
ReportingFunctions.setExecutionEndTime(aqDateTime.Time())
ReportingFunctions.fn_generatehighlevelreport()
```

A workable function with the same flow is available in SampleCallMethod.js in the attached project. Please try this and let me know your feedback. sanjay0288, Colin_McCrae, tristaanogre
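One more idea: instead of sprinkling the start/end calls through every test, you could drive them from TestComplete event handlers. A sketch under the assumption that your project has an Event control named EventControl1 and the ReportingFunctions unit above; note that in a project run these events can fire per test item, so adjust to your run structure:

```javascript
// Sketch: let event handlers open and close the reporting session automatically
function EventControl1_OnStartTest(Sender)
{
  ReportingFunctions.setLogsPath("C:\\AutomationLogs\\");
  ReportingFunctions.setExecutionStartTime(aqDateTime.Time());
}

function EventControl1_OnStopTest(Sender)
{
  ReportingFunctions.setExecutionEndTime(aqDateTime.Time());
  ReportingFunctions.fn_generatehighlevelreport();
}
```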
Launch Browser in Incognito/Private Mode
Thought of sharing this code with the community for launching browsers in their incognito/private modes. The function is parameterized so that it can run for Internet Explorer, Edge, Chrome, and Firefox. Hope it will be useful to more people.

```javascript
function runIncognitoMode(browserName){
  //var browserName = "firefox" //iexplore, edge, chrome, firefox
  if (Sys.WaitBrowser(browserName).Exists){
    var browser = Sys.Browser(browserName);
    Log.Enabled = false // disable the warning that might occur while closing the browser
    browser.Close();
    Log.Enabled = true // enable the logs again
  }
  if (browserName=="edge"){
    Browsers.Item(btEdge).RunOptions = "-inprivate"
    Delay(3000)
    Browsers.Item(btEdge).Run();
  } else if (browserName=="iexplore"){
    Browsers.Item(btIExplorer).RunOptions = "-private"
    Delay(3000)
    Browsers.Item(btIExplorer).Run();
  } else if (browserName=="chrome"){
    Browsers.Item(btChrome).RunOptions = "-incognito"
    Delay(3000)
    Browsers.Item(btChrome).Run();
  } else if (browserName=="firefox"){
    Browsers.Item(btFirefox).RunOptions = "-private"
    Delay(3000)
    Browsers.Item(btFirefox).Run();
  }
  Sys.Browser(browserName).BrowserWindow(0).Maximize()
}
```
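A quick usage sketch (just an illustration; the strings are the same process names the function above expects):

```javascript
// Illustration only: open each supported browser once in private mode
function runAllBrowsersIncognito()
{
  var browsers = ["iexplore", "edge", "chrome", "firefox"];
  for (var i = 0; i < browsers.length; i++)
  {
    runIncognitoMode(browsers[i]);
    // ... drive your test against Sys.Browser(browsers[i]) here ...
    Sys.Browser(browsers[i]).Close();
  }
}
```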
TestComplete and Zephyr Enterprise
Hi everyone, this post is mainly for those of you already using both TestComplete and Zephyr Enterprise, or looking into integrating one with the other via the zbot. I'll try to leave as few gaps in thought as possible, so I'm not working off the assumption that everyone is familiar with how the integration works.

To begin with, Zephyr Enterprise (ZE) can trigger TestComplete (TC) tests by running command line arguments through the zbot (the secure job agent and results parser). The zbot sits on the target machine and, when commanded by ZE, finds the configured invocation script path and uses the specified batch file or shell script to launch the automated tests (in our case, the TC tests). It then waits for the execution to complete and looks in another predefined directory for the results file generated by these TC tests in junit xml format. Attached is what the configuration looks like from the ZE side.

Within the dropdown panel that describes the automation framework (where we have selected TestComplete), what this really selects is the junit xml parser template that the zbot will use to go through our results file and upload our automated TC test results back to ZE (https://zephyrdocs.atlassian.net/wiki/spaces/ZE/pages/1558445554/Parser+Templates). This means that the zbot looks for certain tags and elements within the junit xml file generated by TestComplete to upload the results back to our test management solution. Typically, a junit xml file generated by TestComplete looks like the attached screenshot.

Given those tags, this means that we cannot upload additional information, such as attachment files, since there are no tags that describe the location where the supporting docs may reside (i.e., the mht files). Now, if we wanted to, we could go back in, edit these junit xml files, and rerun so that the zbot parses our results (https://zephyrdocs.atlassian.net/wiki/spaces/ZE/pages/1558445158/Vortex+Job+Execution+Events), but that would defeat the purpose of being able to launch our automated TC tests, no? So what we can do is write an additional script where our favorite language of choice runs through our junit xml files and appends that information for us automatically. This is by no means the right way to do this, just one of the ways! A couple of ways to improve this would be to refactor the script so it takes additional inputs, such as the file location (hard-coded for now), and to make the attachment subelement recursive so that we can add multiple attachments. But for now this will do...

Now that we have our script that modifies the junit xml results file for us, all we need to do is append a line to our original batch file that triggers our TC job, such that it modifies the generated xml file for us! So the batch file would look something like this (the highlighted portion is just an additional call to the script we wrote above). As you can see, the format is familiar for those who have been using command line arguments for TC:
<TC executable location> <TC .pjs file you want to run> <any additional arguments, i.e. /r /p /ExportSummary to generate the xml file> <call to our python exe to run our script> <our python script to append to our xml file> <xml file location ^^>

Now our original xml file has been modified to look like the second screenshot. So now, whenever we trigger our TC jobs from within ZE, they will not only carry over the traditional default info, such as test case name, run time, failure messages, and execution status, but also the supporting attachment files that we want! More curious users may want to browse our ZE docs to learn how to map results to existing requirements, or to add further information (https://zephyrdocs.atlassian.net/wiki/spaces/ZE/pages/1558445158/Vortex+Job+Execution+Events).

On an additional note, we really only went over one portion of the automation integration when it comes to TC and ZE, but we can adapt the steps in this post to the jobs we are running within our CI frameworks as well, by adding a build step that performs the same kind of junit xml results file modification we see here.

--------------------

How is everyone else utilizing TC with ZE? What additional information do you think could be useful in terms of test management when it comes to your automation jobs? Supporting attachments, and maybe even automatic code/test coverage, can be added with this method, but I'm curious to see what you think could be helpful. Anyhow, I hope everyone is staying safe and sane!
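P.S. Since the xml-editing script itself only appears as a screenshot above, here is a rough stand-in sketch, written in JavaScript (Node.js flavor) rather than python. The <attachments>/<attachment> tag names are assumptions - check the ZE parser template docs linked above for the exact elements your template expects:

```javascript
// Rough stand-in for the screenshotted script: add an attachment element to
// every <testcase> in a TestComplete junit xml results file.
// Tag names are assumptions - verify against your ZE parser template.
const fs = require("fs");

const xmlPath = process.argv[2];                // results file path, passed by the batch file
const attachmentPath = "C:\\Results\\Log.mht";  // hypothetical supporting file

let xml = fs.readFileSync(xmlPath, "utf8");

// Normalize self-closing test cases: <testcase .../> -> <testcase ...></testcase>
xml = xml.replace(/<testcase([^>]*)\/>/g, "<testcase$1></testcase>");

// Insert the attachment element before every closing </testcase> tag
xml = xml.replace(/<\/testcase>/g,
  "<attachments><attachment>" + attachmentPath + "</attachment></attachments></testcase>");

fs.writeFileSync(xmlPath, xml);
```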
[TechCorner] Incorrect object location in Chrome
Hello Community, TestComplete may identify the location of elements on a web page opened in Google Chrome incorrectly. Since version 12.40, TestComplete can work with non-standard system DPI and Chrome zoom settings, so if you are using TestComplete 12.40 or newer, most probably this is not the cause of the issue. There are known situations where this can still happen; they are described in this KB article:
HOW TO FIX WRONG POSITION OF ELEMENTS IN CHROME
Did the article help you? If it did, please give a Kudo to the post - this will motivate me to prepare more articles for you.
[TechCorner] SetText, Keys, keyboard input, and other disasters
Hi Everyone, we often get reports from you that keyboard input in applications behaves, let's say, a bit weird. Have you ever seen your test enter a login and password on your web page and then click that Login button, just for the app to report a login error? Of course, your login and password are perfectly fine, and every time you enter them manually, they work. Why do things like this happen? The short answer is: events. So, what's the solution? The short answer is: the Keys method. If you're looking for a longer version, check this KB article, where I tried to describe things related to keyboard input from tests in plain English:
Why doesn't keyboard input made by a test work in the same way as manual input
I hope this helps.
Yuri
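P.S. To make the difference concrete, here is a small JavaScript sketch with a hypothetical login form (the selectors and values are placeholders): SetText pushes the text into the control directly, while Keys simulates real keystrokes and therefore fires the same events that manual typing does.

```javascript
// Sketch with a hypothetical login page - selectors are placeholders
function LoginExample()
{
  var page = Sys.Browser("chrome").Page("*");
  var loginBox = page.FindElement("#login");        // hypothetical selector
  var passwordBox = page.FindElement("#password");  // hypothetical selector

  // SetText assigns the value directly - fast, but the page may never see the
  // keyboard/input events its validation script listens for
  loginBox.SetText("myUser");

  // Keys simulates real keystrokes, so key/input events fire as they would
  // during manual typing - use this when the app reacts to typing
  passwordBox.Keys("myPassword");
}
```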
Script Extension for Data driven framework!
Hi All, I have prepared a Script Extension for an automation framework (DataDrivenFramework), and I guess today is the right day to present it here. This script extension controls the execution and generates an awesome HTML report, and it is simple to implement. To implement it, you need to set up the things below. I have attached a sample project for this.

1. Download the DataDrivenFramework.tcx file from the attachments.
2. Follow the instructions in this link to install the script extension:
https://support.smartbear.com/testcomplete/docs/working-with/extending/script/installing-and-uninstalling.html
3. Create a function anywhere in your project, as below.

VB:

```vb
Function InitExecution
  'This is to initiate the execution
  adFramework.startExecution()
End Function
```

JavaScript/JScript:

```javascript
function InitExecution()
{
  // This is to initiate the execution
  adFramework.startExecution();
}
```

Usually in TC you would use the Log.Checkpoint or Log.Error functions in order to build reports, but here, use the functions below instead to do the same. They will create TC logs as well as the HTML reports.

```javascript
//For adding a passed step
adFramework.CreateTestStep(1,"Expected result","Actual result","Testdata used");

//For adding a failed step [by default you will see the screenshot in the HTML report]
adFramework.CreateTestStep(0,"Expected result","Actual result","Testdata used");

//For adding a warning step
adFramework.CreateTestStep(2,"Expected result","Actual result","Testdata used");

//For adding a warning step with a screenshot
adFramework.CreateTestStep(2,"Expected result","Actual result","Testdata used",true);

//For adding a done step [like clicking buttons, and some information messages]
adFramework.CreateTestStep(3,"Expected result","Actual result","Testdata used");

//For adding a title
adFramework.CreateTestStep("","Title","","",false,true);
```

In your scripts, you need to get the data from Excel for each test case. For that, you can use the functions below:

```javascript
//To get the test data from the excel sheet
adFramework.GetTestData("ColumnName");

//To get the environment from the environment sheet
adFramework.GetEnvironmentData("ColumnName");
```

That's it - you've got your data-driven framework. Happy automating! Feel free to ask your clarifications on this!
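To tie the pieces together, here is a hedged end-to-end sketch of one test case using the functions above (JavaScript; the Excel column names and the login logic are hypothetical):

```javascript
// Hypothetical end-to-end flow built from the adFramework calls above
function SampleLoginTest()
{
  adFramework.startExecution();

  // Pull test data by column name from the framework's Excel sheets
  var user = adFramework.GetTestData("UserName");    // hypothetical column
  var url  = adFramework.GetEnvironmentData("URL");  // hypothetical column

  adFramework.CreateTestStep("", "Login test for " + user, "", "", false, true); // title row
  Browsers.Item(btChrome).Run(url);
  adFramework.CreateTestStep(3, "Open application", "Navigated to " + url, url); // done step

  // ... perform the login here and verify the result ...
  var loggedIn = true; // placeholder for the real verification
  if (loggedIn)
    adFramework.CreateTestStep(1, "User is logged in", "User is logged in", user); // passed
  else
    adFramework.CreateTestStep(0, "User is logged in", "Login failed", user); // failed + screenshot
}
```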