Re: Visualtest AI model

Hi Swiftester398, thanks for your patience; this is a new community, and answers will surely come much quicker going forward. VisualTest uses a variety of algorithms to find visual regression bugs. As of today, for web app comparisons we break the problem down into three parts:
1. Object detection (what are the visible elements on screen?)
2. Object quantification (how does each element look? what is it like?)
3. Object matching (what happened to each element across the two versions?)
We use various techniques for each part of the problem, but the end result is that you get a content-aware comparison across the two versions of your app's screenshots. As we have an in-house data science team, we are also fine-tuning our deep learning models for release sometime next year, in preparation for native mobile app support.

Re: Cypress installation before I run npx visualtest-setup

Hi Vinayak03, apologies for the late response, as this is a new community. From the info you've provided, the SSL issue seems local to your environment. I've found some related documentation (https://stackoverflow.com/questions/24372942/ssl-error-unable-to-get-local-issuer-certificate) which may help debug the issue. We also recently released playwright-js support (although if you are facing such issues with Cypress, it is likely you will run into a similar SSL issue when trying to npm install Playwright as well). Finally, we also recently released our "manual" screenshots: users can input any public-facing URL and get visual regressions by "running test" from various OS/browser configurations.

Re: From CrossBrowser to BitBar... How do you generate multiple desktop or iOS screenshots

Hey mgrimley, this is Justin, product manager for VisualTest here at SmartBear. BitBar does not have the same "URL" screenshotting capability in the product.
SmartBear did just release a new product called VisualTest, which is slightly similar in this regard: it is an automated, SDK-based visual regression tool. That said, VisualTest also doesn't have URL screenshots yet, but we are currently working on the first iteration of that feature (about 1-2 months to go until production, and it will likely ship with preset browser configurations in the first iteration). Users will be able to input a URL and get full-page screenshots from various OS/browser configurations. I'm very interested in learning more about your use case around URL screenshots; I'll list my questions below, but if you'd rather get on a short Zoom call, that would be even better! (You can DM me here via the community messages.)
- Do you use any additional features listed under the URL bar (such as adding short Selenium scripts, for example to accept cookies)?
- How do you determine the browsers to test on? Given that CBT does not always have the latest browsers, are you specifically looking to test older browsers?
- What is the biggest value you get out of the screenshotting feature? How does this help you make decisions?
- Do you find value in the "DOM layout" difference results that CBT provides? Or do you usually disregard the X number of differences listed across those full-page screenshots (and proceed to glance over the screens)?
- What kinds of differences are you looking for? Given that there should be differences across different OSes, browser types, resolutions, etc., how do you determine what is right or not?
- Do you use any of the other features, such as scheduling, sending emails, etc.? Which one do you find most valuable, and why?

Re: Error when opening or running script

Please file a support ticket.

Re: Recruiting users for Feedback on our new Deep learning/Computer vision capability

Thank you everyone!
I'll be sending out an email shortly to everyone who commented.

Recruiting users for Feedback on our new Deep learning/Computer vision capability

Hello everyone! My name is Justin and I'm on the product team for TestComplete here at SmartBear. We've been working on some exciting new AI/ML capabilities, and we're looking for feedback in the following areas:
- Visual object recognition: test any type of app (built with any framework) without depending on selectors
- Natural Language Command: the most simple, legible test scripts (again, no technical imperatives here)
- New test authoring mechanism: a new test item that is even simpler than keyword-driven tests
You can be brand new to TestComplete or a seasoned veteran. Please comment in this post thread if you're interested, and I'll get in touch with you!

TestComplete and (newly released) Zephyr Scale!!!

Update (February 4, 2022): Since this post was created, some core changes have happened:
1. Atlassian's move to the cloud
2. The Zephyr Scale cloud instance has a different set of APIs and uses Bearer token authentication (compared to the example code snippets provided below)
As such, I am adding a link (https://community.smartbear.com/html/assets/ZScale.zip) to download the zipped .PJS file of the example integration. A few things to change in the zipped file to make it work for you:
- Go to Project variables, find the one called cloud_token, and replace it with yours. (You can get it by clicking on your profile icon in Jira and then clicking "Zephyr Scale API token". The variable should say "REPLACE_ME" when you open the project variables page, just to make it extra clear.)
- Go to the script routine called "utilities".
- In the createTestRun_cloud function, change the project key (line 22) to the Jira project key where your Zephyr Scale tests are.
- In the createTestRun_cloud function, change the "folderID" value (line 23) to your Zephyr Scale test folderId (this is optional; you can also delete or comment out this line).
- Change the getTestCaseID function (lines 85-88) to match your Zephyr Scale test case keys (they should look like JIRAKEY-T<123>) against the names of the keyword tests that you have in TestComplete (in my case they were "login", "logout", "UI_Error", and "UI_Warning", which mapped to KIM-T<123>, etc.).
- Go to the script routine called "test11":
- Optionally change lines 36-37 to match your Jira user ID (you can google how to find this).
- Change line 104 to your Jira project key.
- Optionally change line 105 to the Zephyr Scale test case folder ID (you can google how to get this too).
That should be enough to get started on this integration. The rest of the content below is a bit outdated (the APIs referenced use server deployments, and we no longer need the create_test_run item to create cycles; I created an additional OnStart event handler that checks whether the current test is the "first" test item in your project run), but the general idea remains the same.
------------------------------------
Hi Everyone! Today I'm bringing news about Zephyr Scale (previously called TM4J, a Jira test management plugin that SmartBear acquired a few months ago). This is exciting because Zephyr Scale is considerably different from Zephyr for Jira in a number of ways. The biggest differentiator is that Zephyr Scale creates its own table in your Jira database to house all of your test case management data and analytics, instead of using issues of type "test". This means that you will not experience the typical performance degradation that you might expect from housing hundreds, if not thousands, of test cases within your Jira instance. Additionally, there are many more reports available: upwards of 70 (and it's almost "the more the merrier" season, so that's nice). Now how does this relate to TestComplete at all?
Well, seeing as we don't have a native integration between the two tools as of yet (watch out in the coming months?), we have to rely on Zephyr Scale's REST API in order to map to corresponding test cases kept in Zephyr Scale's test library. I wanted to explore having our TestComplete tests be mapped and updated (per execution) back to the corresponding test cases housed within Zephyr Scale, by making use of event handlers and the REST API endpoints. To start, I created a set of test cases that mirror each other within Zephyr Scale and TestComplete. You will notice that I have a "createTestRun" KDT test within my TestComplete project explorer. This test case makes the initial POST request to Zephyr Scale in order to create a test run (otherwise known as a test cycle). This is done using TestComplete's aqHttp object. Within that createTestRun KDT, I'm just running a script routine, a.k.a. a "run code snippet" operation (because that's the easiest way to use the aqHttp object). The code snippet below shows the calls. (Note: this snippet is no longer needed for the integration. Instead, the OnStartTestCase event handler (test11/def EventControl1_OnStartTestCase(Sender, StartTestCaseParams)) checks whether the currently started test case is the first test case in the project run, a.k.a. your test cycle, and creates a new cycle for you in that case.)

```python
import json
import base64

# init empty id dictionary
id_dict = {}

def createTestRun():
    projName = "Automated_TestComplete_Run " + str(aqDateTime.Now())  # timestamped test cycle name
    address = "https://YOUR-JIRA-INSTANCE/rest/atm/1.0/testrun"  # TM4J endpoint to create a test run
    username = "JIRA-username"  # TM4J username
    password = "JIRA-password!"  # TM4J password

    # Convert the user credentials to base64 for preemptive authentication
    credentials = base64.b64encode((username + ":" + password).encode("ascii")).decode("ascii")
    request = aqHttp.CreatePostRequest(address)
    request.SetHeader("Authorization", "Basic " + credentials)
    request.SetHeader("Content-Type", "application/json")

    # initialize empty item list
    items = []
    for i in range(Project.TestItems.ItemCount):  # for all test items listed at the project level
        # grab each test case key according to the name found in id_dict
        entry = {"testCaseKey": getTestCaseID(Project.TestItems.TestItem[i].Name)}
        items.append(entry)  # append as a test item

    # build the request body
    requestBody = {
        "name": projName,
        "projectKey": "KIM",  # Jira project key for the TM4J project
        "items": items  # the test case keys to be added to this test cycle
    }
    response = request.Send(json.dumps(requestBody))
    df = json.loads(response.Text)
    key = str(df["key"])

    # set the new test cycle key as a project-level variable for later use
    Project.Variables.testRunKey = key
    Log.Message(key)  # output the new test cycle key
```

Within that snippet, you may have noticed a routine called getTestCaseID(), which is how I mapped my TestComplete tests back to the Zephyr Scale test cases (by name, to their corresponding test ID keys), as shown below:

```python
def getTestCaseID(argument):
    # testcase IDs in dict format - this is where you map your internal
    # test cases (by name) to their corresponding TM4J test cases
    id_dict = {
        "login": "KIM-T279",
        "logout": "KIM-T280",
        "createTestRun": "KIM-T281",
        "UI_Error": "KIM-T282",
        "UI_Warning": "KIM-T283"
    }
    tc_ID = id_dict.get(argument, "Invalid testCase")  # look up the testcase key by name
    return tc_ID  # return the TM4J testcase ID
```

Referring to the screenshots above, you will notice that the names of my KDTs are the keys, whereas the corresponding Zephyr Scale test ID keys are the paired values within the id_dict variable.
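As a quick illustration of that mapping, here is a minimal, standalone sketch (plain Python, no TestComplete objects; the test names and KIM-T keys mirror the examples above) of how the id_dict lookup turns a list of KDT names into the `items` payload for the test-run POST body:

```python
import json

# Name-to-key mapping mirroring the getTestCaseID routine above
id_dict = {
    "login": "KIM-T279",
    "logout": "KIM-T280",
    "createTestRun": "KIM-T281",
    "UI_Error": "KIM-T282",
    "UI_Warning": "KIM-T283",
}

def get_test_case_id(name):
    # Fall back to a sentinel for unmapped test names
    return id_dict.get(name, "Invalid testCase")

def build_items(test_item_names):
    # One {"testCaseKey": ...} entry per project-level test item
    return [{"testCaseKey": get_test_case_id(n)} for n in test_item_names]

body = {
    "name": "Automated_TestComplete_Run 2022-02-04",
    "projectKey": "KIM",
    "items": build_items(["login", "logout"]),
}
print(json.dumps(body))
```

An unmapped name (say, a KDT you forgot to add to id_dict) ends up as "Invalid testCase" in the payload, which is why the key/value pairs need to match your TestComplete test names exactly.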
Now that we have a script to create a new test cycle per execution, which also assigns all test cases (at the project level) to the newly created test cycle, we just need to update that test run with the corresponding execution status for each test case as it runs. We can do this by leveraging TestComplete's OnStopTestCase event handler in conjunction with Zephyr Scale's REST API. The event handler:

```python
import json
import base64

def EventControl1_OnStopTestCase(Sender, StopTestCaseParams):
    import utilities  # to use the utility functions
    testrunKey = Project.Variables.testRunKey  # grab the test run key set by the createTestRun script
    tc_name = aqTestCase.CurrentTestCase.Name  # current testcase name, passed to getTestCaseID below
    tcKey = utilities.getTestCaseID(tc_name)  # testcase key for the resource path below
    # endpoint to post a test result for this test case within the test run
    address = "https://YOUR-JIRA-INSTANCE/rest/atm/1.0/testrun/" + str(testrunKey) + "/testcase/" + str(tcKey) + "/testresult"
    username = "JIRA-USERNAME"  # TM4J username
    password = "JIRA-PASSWORD!"  # TM4J password

    # Convert the user credentials to base64 for preemptive authentication
    credentials = base64.b64encode((username + ":" + password).encode("ascii")).decode("ascii")
    request = aqHttp.CreatePostRequest(address)
    request.SetHeader("Authorization", "Basic " + credentials)
    request.SetHeader("Content-Type", "application/json")

    # build the request body; limited to pass, warning, and fail from within
    # TestComplete, mapped to their corresponding execution statuses in TM4J
    comment = "posting from TestComplete"  # default comment for test executions
    if StopTestCaseParams.Status == 0:    # lsOk
        statusId = "Pass"                 # Passed
    elif StopTestCaseParams.Status == 1:  # lsWarning
        statusId = "Warning"              # Passed with a warning
        comment = StopTestCaseParams.FirstWarningMessage
    elif StopTestCaseParams.Status == 2:  # lsError
        statusId = "Fail"                 # Failed
        comment = StopTestCaseParams.FirstErrorMessage

    # request body for the pertinent status
    requestBody = {"status": statusId, "comment": comment}
    response = request.Send(json.dumps(requestBody))

    # in case the POST request fails, let us know via the logs
    if response.StatusCode != 201:
        Log.Warning("Failed to send results to TM4J. See the details in the previous message.")
```

We're all set to run our TestComplete tests while having the corresponding execution statuses automatically updated within Zephyr Scale. A few things to remember:
- Always include the "createTestRun" KDT as the topmost test item within your project run. This is needed to create the test cycle within Zephyr Scale (there was no "OnProjectStart" event handler, so I needed to do this manually).
- Make sure that within the script routine getTestCaseID() you have mapped the key/value pairs correctly, with matching names and testcase keys.
- Create a test set within the project explorer for the test items you'd like to run (mapped per the bullet point above; otherwise the event handler will throw an error).
Now every time you run your TestComplete tests, you should see a corresponding test run within Zephyr Scale, with the proper execution statuses and pass/warning/error messages. You can go on to customize/parameterize the POST body to contain even more information (e.g. environment, tester, attachments, etc.),
and you can go ahead and leverage those Zephyr Scale test management reports, looking at execution efforts, defects raised, and traceability for all of the TestComplete tests you've designed to report back to Zephyr Scale. Happy Testing!

Re: Sharing name mappings between page objects when several pages have elements in common

To do conditional name mapping, you must have that page open on your machine. Then:
- Right-click on the free space in the Name Mapping editor to bring up the mapping window (for the page object you'd like to switch to conditional mode).
- Select conditional mode from the bottom-left corner of the new window.
- Click on the URL property, and then the "OR" toolbar option on the right.
- Select URL (property), Equals (operator), and add in the second URL that you'd like to accept.
Now, any of the coinciding menu/nav toolbar options that exist on both URL 1 and URL 2 (google.com and google123.com in my case) should all be mapped under this particular page object.
---------------------
A second way of doing this would be wildcarding the URL property to accept a larger range of values.

Re: Microsoft.Windows.StartMenuExperienceHost Glitch

https://community.smartbear.com/t5/TestComplete-Questions/Why-does-startup-of-TestComplete-cause-other-Windows-apps-to/m-p/118457

Re: Locating away where Copying and Pasting Data into a Web Textbox with multiple user accounts

Largent803, my second reply should solve that problem for you, since we are no longer using the hostname but Sys.UserName instead (which should give you "Largent" or whatever other username value you wanted; that is, whoever is logged on to the VM with TestComplete can run this script and expect to get their username filled into that path). As for exporting as a script file, that is not possible (nor would it be helpful, since you cannot run these scripts outside of the TC environment).
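The idea in that last reply (substituting the logged-in user's name into a hard-coded file path) can be sketched in plain Python. This is a hedged illustration, not the original script: getpass.getuser() stands in for TestComplete's Sys.UserName, and the Documents\TestData folder and filename are hypothetical examples.

```python
import getpass

def user_data_path(filename, username=None):
    # In TestComplete, Sys.UserName would supply the username; here we fall
    # back to the OS-level user via getpass for a standalone sketch.
    user = username or getpass.getuser()
    # Hypothetical per-user data folder; adjust to your actual layout.
    return "C:\\Users\\" + user + "\\Documents\\TestData\\" + filename
```

Each person who runs the test on the shared VM then gets a path under their own user profile, instead of one hard-coded hostname or account.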