Ask a SmartBear: Your Biggest Test Automation Gaps
If you joined us in our recent webinar, you learned about five common gaps we often see in teams’ UI test automation strategies.
Now, we want to hear from you! What do you struggle with on a daily basis? What are your biggest challenges when it comes to UI test automation?
Join us for Ask a SmartBear: Your Biggest Test Automation Gaps on Wednesday, February 27th, from 8 AM to 2 PM ET. Submit your questions to this topic, and our experts, Jeff Martin and Nicholas Bonofiglio, will answer all your burning questions. They'll respond to the questions in a video at 4 PM ET on February 27th.
By asking a question in this topic, you'll be automatically entered to win one of five $20 gift cards! If that's not an incentive, I don't know what is 🙂
2/27/19 - Good morning everyone! Here's a message from Nick and Jeff. They're excited to see all the questions coming in and are looking forward to answering them later today.
2/27/19 - Thanks again to everyone who submitted a question to Ask a SmartBear: Your Biggest Test Automation Gaps! Jeff and Nick had a great time answering your questions. Check out the video below to hear their responses.
One of the biggest issues we face is keeping data in good states across a multitude of databases. Reset scripts, or completely nuking everything and setting it back up, aren't really an option, and there are multiple applications hitting all of these databases.
I would love to hear others' thoughts on ways to alleviate this.
Also, I would love to know how others utilize one application's APIs to generate data for testing another application's UI.
Thanks,
Carson
Click the Accept as Solution button if my answer has helped
@cunderw wrote:
One of the biggest issues we face is keeping data in good states across a multitude of databases.
My usual preference is to generate semi-random, uniquely identifiable data and use it as test input.
An example of such data might look like this:
Invoice20190221_1500_jwofiSW9rjdf0d239r
Where:
- Invoice is a primary item-type identification prefix;
- 20190221_1500 is a date/time stamp that makes the data unique;
- jwofiSW9rjdf0d239r is a random string of random length that makes the data more varied.
In almost all cases, test data like this can be used without significant risk of unexpected failures caused by data duplication, and it makes it quick and easy to tell when and where the given data were entered into the application.
As a side benefit, if you execute such tests against the same database for a long time, you end up with a relatively large database that can be used to measure the application's performance when working with large data volumes.
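A minimal sketch of this naming scheme in Python; the prefix, timestamp format, and suffix-length range are illustrative choices, not anything prescribed above:
```python
import random
import string
from datetime import datetime

def make_test_id(prefix: str = "Invoice") -> str:
    """Build a semi-random, uniquely identifiable test value:
    <prefix><date/time stamp>_<random suffix of random length>."""
    stamp = datetime.now().strftime("%Y%m%d_%H%M")   # records when the data was created
    length = random.randint(10, 20)                  # random length makes the data more varied
    suffix = "".join(random.choices(string.ascii_letters + string.digits, k=length))
    return f"{prefix}{stamp}_{suffix}"

print(make_test_id())  # e.g. Invoice20190221_1500_jwofiSW9rjdf0d239r
```
The minute-level timestamp alone nearly guarantees uniqueness across runs; the random suffix covers values generated within the same minute.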
/Alex [Community Champion]
____
[Community Champions] are not employed by SmartBear Software but are volunteers who have some experience with SmartBear's tools and a desire to help others. Posts made by [Community Champions] may differ from the official policies of SmartBear Software and should be treated as the private opinion of their authors, and under no circumstances as an official answer from SmartBear Software.
The [Community Champion] signature is assigned on a quarterly basis and is used with permission by SmartBear Software.
https://community.smartbear.com/t5/Community-Champions/About-the-Community-Champions-Program/gpm-p/252662
================================
I decided to share my views, which may or may not fall under automation gaps.
- Identifying the actual criticality of the business functionalities in the application to automate.
- Not covering negative validations and concentrating only on positive scenarios, which pass most of the time.
- One of the biggest obstacles is the environment. With everyone moving to Agile, people start to forget one of the main prerequisites of automation: a stable environment to test against.
- When your application has a lot of fields to enter data into (as in healthcare), test data is going to be the biggest problem.
- Automation is good for regression, I understand, but the scenarios we try to automate should be end-to-end instead of module-wise (which is mostly covered in unit testing).
- Web pages keep getting beautified with new types of components (like SVG), which must be supported by automation tools (I know TestComplete does this, with limitations), and the tools have to keep up with trends in application components.
- Some people think automation should run fast and should not take a long time. I slightly disagree; sometimes it may take time to cover all possible scenarios.
As of now, these are the things that come to mind; there is a lot more involved in getting the desired ROI from automation.
Thanks
Shankar R
LinkedIn | CG-VAK Software | Bitbucket | shankarr.75@gmail.com
“You must expect great things from you, before you can do them”
How do you determine the life cycle of an automated test? At some point, the tests become stale or redundant and no longer valuable.
How do you track your automation coverage? What percentage of an application is being tested by your regression suite?
Thanks,
Quenton
@TheQinQA wrote:
How do you determine the life cycle of an automated test? At some point, the tests become stale or redundant and no longer valuable.
I map each automated test to a JIRA test and label it "Automated". Each release, we spend some time cleaning up the JIRA tests related to automation: depending on whether the tested functionality has changed or been removed entirely, we either modify the scripts or remove them from the suite. It is all about keeping track of automation tests with test management tools.
@TheQinQA wrote:
How do you track your automation coverage? What percentage of an application is being tested by your regression suite?
As I said earlier, each automated test should be mapped/created in a test management tool like JIRA, ALM, etc. On each release, once the automation run is complete, we do the failure analysis and update the data in Excel; a macro then takes care of updating the details in JIRA (the executed tests with their statuses).
You can't calculate the exact percentage of automation coverage of the application under test, but well-maintained manual and automated tests make it easy to generate an approximate figure. For example, if you have 100 manual tests and have automated 50 of them, you are at roughly 50% automation coverage.
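To make that arithmetic concrete, here is a tiny sketch of the approximate coverage calculation, assuming you can export your tests with their labels from the test management tool; the field names and the "Automated" label are hypothetical:
```python
def automation_coverage(tests):
    """Percentage of tests carrying an 'Automated' label (approximate coverage)."""
    if not tests:
        return 0.0
    automated = sum(1 for t in tests if "Automated" in t.get("labels", []))
    return 100.0 * automated / len(tests)

# Hypothetical export from a test management tool (e.g. a JIRA query result):
tests = [
    {"key": "TEST-1", "labels": ["Automated"]},
    {"key": "TEST-2", "labels": []},
]
print(f"{automation_coverage(tests):.0f}% automated")  # -> 50% automated
```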
Thanks
Shankar R
LinkedIn | CG-VAK Software | Bitbucket | shankarr.75@gmail.com
“You must expect great things from you, before you can do them”
How long does it normally take to become an expert using TestComplete for an average person?
@amyhuie wrote:
Here were some FAQs from the webinar to help get you started:
- Why is it important to test in older browser versions?
- How do we decide which browsers and devices to test?
- When should we automate a test vs. when should we stick to manual testing?
- What is the difference between record and playback vs scripting for test automation?
- What should we look for in a test automation tool?
- What is the ROI of parallel testing?
- Can we run automated tests as part of our CI/CD pipeline?
- How can we create test cases that are easy to reuse and maintain?
Per person, that will vary depending on the individual's experience and technical knowledge. From a company perspective, I can tell you that once we started using TestComplete, we went from having a single automated test to several hundred within the first year. Four years into using the tool, we now have thousands of tests; just this week we ran 45 regressions, equaling about 8,000 test runs. Making sure you have a solid framework and design plan that doesn't result in unmanageable maintenance debt is key.
We are currently using TestComplete for our enterprise desktop solution. Do you have any recommendations on how we can incorporate performance and load testing? I understand the basic concepts for applying performance and load testing to web applications, but I would like some suggestions for getting started on a desktop app. Thanks!
I think this really depends on what you want to accomplish with your performance and load testing. SmartBear offers a UI load testing tool, LoadComplete, which is great for performance and load testing on the front end. LoadUI in the ReadyAPI suite can be used to load test the API services on the back end.
My team works with our infrastructure team to run API load tests in conjunction with Dynatrace. This gives us a great picture of how performant our API services are and whether the hardware supporting them can handle the requests. You can also use TestComplete to set up tests that check the response time of the UI and either log a warning or fail depending on how long it takes to respond.
For a full picture of how performant an application is, I believe you need to do both back-end and front-end performance/load testing. This provides better overall coverage and context. If your API services and hardware are not optimized and performant, your front end will not perform well; if they are but your front end is still slow, you have UI issues.
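A tool-agnostic sketch of that kind of response-time check; the thresholds are placeholders, and perform_ui_action stands in for whatever UI step you are timing (in TestComplete you would use its own logging calls instead of print/raise):
```python
import time

WARN_SECONDS = 2.0   # soft limit: log a warning (placeholder value)
FAIL_SECONDS = 5.0   # hard limit: fail the test (placeholder value)

def check_response_time(action, name):
    """Time a UI action and warn or fail depending on how long it takes."""
    start = time.monotonic()
    action()                                   # e.g. click a control and wait for the result
    elapsed = time.monotonic() - start
    if elapsed > FAIL_SECONDS:
        raise AssertionError(f"{name} took {elapsed:.2f}s (limit {FAIL_SECONDS}s)")
    if elapsed > WARN_SECONDS:
        print(f"WARNING: {name} took {elapsed:.2f}s (soft limit {WARN_SECONDS}s)")

# check_response_time(lambda: perform_ui_action(), "Open invoice screen")
# (perform_ui_action is a hypothetical stand-in for your actual UI step)
```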
