Forum Discussion
One of the biggest issues we face is keeping data in good states across a multitude of databases. Reset scripts, or completely nuking everything and setting it back up, aren't really an option, since there are multiple applications hitting all of these databases.
Would love to hear some others' thoughts on ways to alleviate this.
Also, would love to know how others utilize one application's APIs to generate data for testing another application's UI.
- shankar_r · 6 years ago · Community Hero
I decided to share my views, which may or may not fall under automation gaps.
- Identifying the actual criticality of the business functionalities in the application before automating them.
- Not covering negative validations and concentrating only on positive scenarios, which pass most of the time.
- One of the biggest obstacles is the environment. Everyone is moving to Agile, and people are starting to forget one of the main prerequisites of automation: a stable environment to test against.
- When your application has a lot of fields to enter data into (like Healthcare), then test data is going to be the biggest problem.
- Automation is good for regression, I understand, but the scenarios that we are trying to automate should be end-to-end instead of module-wise (which is mostly covered in unit testing).
- Web pages are being beautified with new types of components (like SVG) which must be supported by the automation tools (I know TestComplete does this, with limitations), and the tools must keep up with trends in application components.
- Some people think automation should run fast and should not take a long time. I slightly disagree; sometimes it may take time to cover all possible scenarios.
As of now, these are the things that come to mind; there is a lot more to consider in order to get the desired ROI from automation.
- AlexKaras · 6 years ago · Champion Level 3
cunderw wrote:
One of the biggest issues we face is keep data in good states across a multitude of databases.
My usual preference is to generate semi-random unique identifiable data and use them as test input.
An example of such data may be like this:
Invoice20190221_1500_jwofiSW9rjdf0d239r
Where
-- Invoice - is a primary item type identification prefix;
-- 20190221_1500 - date/time stamp to make data unique;
-- jwofiSW9rjdf0d239r - random string of random length to make data more varying.
In almost all cases, test data like the above can be used without significant risk of unexpected failures caused by data duplication, and it makes it possible to quickly and easily understand when and where the given data were entered into the application.
As a side benefit, if one executes such tests against the same database for a long time, they will end up with a relatively large database that can be used to measure the application's performance when working with large data volumes.
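The naming scheme above (type prefix + date/time stamp + random suffix of random length) can be sketched in a small Python helper. This is a minimal illustration of the idea, not anything from TestComplete itself; the function name `make_test_id` and the length bounds are my own assumptions:

```python
import random
import string
from datetime import datetime

def make_test_id(prefix: str, min_len: int = 8, max_len: int = 20) -> str:
    """Build a semi-random, uniquely identifiable test value shaped like
    <prefix><YYYYMMDD_HHMM>_<random alphanumeric suffix of random length>,
    e.g. Invoice20190221_1500_jwofiSW9rjdf0d239r."""
    # Date/time stamp makes the value unique and shows when it was created.
    stamp = datetime.now().strftime("%Y%m%d_%H%M")
    # Random suffix of random length makes the data more varied.
    suffix_len = random.randint(min_len, max_len)
    suffix = "".join(
        random.choices(string.ascii_letters + string.digits, k=suffix_len)
    )
    return f"{prefix}{stamp}_{suffix}"

# Usage: generate an invoice-style test value.
print(make_test_id("Invoice"))
```

Because the stamp is embedded in the value, you can later search the database for a prefix like `Invoice20190221` to find everything a particular test run entered.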