I'd like to manage 500+ test cases in 10+ test suites and run them every day after deployment. However, the runs are not always smooth: failures can stem from data issues, environment issues, or script issues. Most failures are due to complex business logic and complicated test scenarios involving dynamic data retrieval (i.e. performance issues, timeouts). Currently, each test case is structured as initialize / test steps (assertions) / cleanup.
Most test cases are kept relatively independent to reduce dependencies between test suites and test cases. Common modules are encapsulated, and more methods/classes are refactored into a script library. All data that needs to be updated, changed, or dynamically retrieved is stored in Excel / properties files, and the tests run in sequence.
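To make the initialize / test (assertions) / cleanup structure concrete, here is a minimal sketch in plain Python (not SoapUI/Groovy). All names and data values are hypothetical; the point is that each case sets up its own context and always cleans up, so no case depends on an earlier one:

```python
# A minimal sketch of the initialize / test (assertions) / cleanup
# structure described above. Names and data are illustrative only.

def initialize(case_data):
    """Set up a fresh context so the case depends on no earlier case."""
    return {"endpoint": case_data["endpoint"], "created_ids": []}

def run_assertions(ctx):
    """The test step: exercise the system and assert on the result."""
    response = {"status": 200, "order_id": "A-1"}   # stubbed service call
    ctx["created_ids"].append(response["order_id"])
    assert response["status"] == 200

def cleanup(ctx):
    """Tear down anything this case created, even if assertions failed."""
    ctx["created_ids"].clear()

def run_case(case_data):
    ctx = initialize(case_data)
    try:
        run_assertions(ctx)
        return "passed"
    except AssertionError:
        return "failed"
    finally:
        cleanup(ctx)        # always runs, keeping cases independent

result = run_case({"endpoint": "https://example.test/api"})
```

The `finally` block is the important part: cleanup always runs, so a failed case cannot leave state behind that breaks the next one.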
What are your ideas or best practices? You're welcome to share them with the community.
While I wouldn't say that we have a reusable framework, we do have several principles that we follow. These are much like yours, @aaronpliu.
We always make sure that a test case is independently runnable on its own, with no dependencies on other test cases. This way we can always run an individual test case as required. We then group test cases into logical groups with test suites. This does mean we usually have a "Common Library" test suite, and we make extensive use of the Run TestCase test step to avoid duplication. I've spoken about this in more detail here and here.
We also try to make use of Events to avoid duplication; I've detailed a couple of my event scripts here.
We run our tests via Jenkins on a nightly basis (personally, I view the ReadyAPI GUI application like a development IDE), but it's not always smooth for us either: false failures just waste time and reduce confidence in the tests.
I previously worked at a company where we had nearly 3000 data-driven tests running in over 40 test suites. The actual test cases in SoapUI numbered around 400 or so; we re-used the same test cases to test various scenarios. For example, a test case would log in, call getProfile, verify expected against actual, fill out a report, clear the relevant properties, and then run again with a different data-set, i.e. logging in as superUser, admin, employee, etc.
You can re-use the same tests with a data-driven approach to test various combinations. The practice you have described seems pretty decent, though I would add reporting of test outcomes so you know which scenario/data-set passed or failed.
In automation, the key thing is to re-use as much as possible. We don't have the option of step definitions in SoapUI, but we can apply the same concept within our test cases.
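The data-driven pattern above can be sketched generically: one test case body, re-run once per data-set (role), with the outcome recorded per data-set as suggested. This is plain Python rather than SoapUI, and the roles and expected values are hypothetical:

```python
# One test body, driven by multiple data-sets (roles), with per-data-set
# outcome reporting. All values here are illustrative stubs.

DATA_SETS = [
    {"role": "superUser", "expected_reports": 10},
    {"role": "admin",     "expected_reports": 5},
    {"role": "employee",  "expected_reports": 1},
]

def get_profile(role):
    # Stub standing in for the real login/getProfile call
    reports_by_role = {"superUser": 10, "admin": 5, "employee": 1}
    return {"role": role, "reports": reports_by_role[role]}

def run_scenario(data):
    profile = get_profile(data["role"])
    ok = profile["reports"] == data["expected_reports"]
    # Record the outcome per data-set so you can see which one failed
    return {"role": data["role"], "passed": ok}

results = [run_scenario(d) for d in DATA_SETS]
```

In SoapUI this would typically be an Excel/DataSource loop around the same test steps, but the structure is the same: the data-set, not the test body, varies per iteration.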
Strongly agree with Radford with regard to ensuring that test cases can be run independently. They should not depend on previous steps to pass or to generate data. In the future, if your regression pack becomes very large, it can be managed by splitting the project XML into two or three files and running them in parallel, saving time while still ensuring all tests pass, since there are no dependencies.
Great Topic @aaronpliu,
What I would suggest, and have developed, is a data-driven framework where the test suite names are in Sheet 1; the test cases for the 1st suite are in Sheet 2, those for the 2nd suite are in Sheet 3, and so on.
When I put a Y next to a test suite and a Y next to particular test case(s), those test cases get executed in SoapUI. For this I have written a Groovy script and placed it in a separate suite named "Run Controller". I run only this script: it reads the Excel sheet and automatically executes the test cases where it finds a Y.
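The "Run Controller" idea can be sketched as follows. This is Python rather than the Groovy used inside SoapUI, and the sheet layout is an assumption (a dict standing in for the Excel workbook): suites and cases are flagged with "Y", and only flagged cases under flagged suites are run:

```python
# A sketch of a control-sheet runner: "Y" flags decide which suites and
# cases execute. The sheet layout and names are hypothetical.

CONTROL_SHEET = {
    # suite name -> (suite flag, {case name: case flag})
    "LoginSuite":  ("Y", {"ValidLogin": "Y", "InvalidLogin": "N"}),
    "ReportSuite": ("N", {"DailyReport": "Y"}),
}

def run_controller(sheet, run_case):
    executed = []
    for suite, (suite_flag, cases) in sheet.items():
        if suite_flag != "Y":
            continue                      # whole suite switched off
        for case, case_flag in cases.items():
            if case_flag == "Y":
                run_case(suite, case)     # delegate to the real runner
                executed.append(f"{suite}/{case}")
    return executed

ran = run_controller(CONTROL_SHEET, lambda suite, case: None)
```

In the real Groovy version the sheet would be read with a library such as Apache POI, and `run_case` would call `testRunner` to kick off the actual SoapUI test case.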
I agree with reusing as much as you can rather than duplicating.
Reuse will mean parametrizing.
I am even thinking of keeping all the test data in a database and then passing it on as parameters.
Also, you never want dependencies between test steps; as @Radford mentioned, that just makes for a brittle test.
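The database-backed parametrization mentioned above might look like this sketch, using an in-memory SQLite table as the data source. The schema, roles, and expected codes are all hypothetical:

```python
# A sketch of keeping test data in a database and feeding it to a
# parametrized check. Table schema and values are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_data (role TEXT, expected_code INTEGER)")
conn.executemany("INSERT INTO test_data VALUES (?, ?)",
                 [("admin", 200), ("guest", 403)])

def check_access(role):
    # Stub standing in for the real service call
    return 200 if role == "admin" else 403

outcomes = []
for role, expected in conn.execute(
        "SELECT role, expected_code FROM test_data ORDER BY role"):
    outcomes.append((role, check_access(role) == expected))
```

The benefit over Excel is that the data can be updated centrally (or generated by another process) without touching the test project.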
Thank you all for sharing your experience. I think that following these principles should help other users a lot in their work.
Community, feel free to share what you think here.
> ensure the test cases can be run independently.
> In automation, the key thing is to re-use as much as possible.
Well, just my $0.02...
(Disclaimer: I am from the TestComplete world, which means functional testing, and this might matter.)
I think the peculiarity here is that test cases may migrate into test steps as we proceed from the simplest (unit) tests to more and more complex (end-to-end) ones.
For example, let's consider the lifecycle of a publication in some system.
Initially, an empty publication must be created. Then one or more topics can be added to it, then one or more images, then other items like content, annotation, etc. The publication can be saved after any of the above steps. A saved publication that is assumed to be ready may need to pass one or more approval steps. An approved publication may be released to the public. A released publication may require additional registration.
All the above steps must be verified during development and initial testing, and this verification can be implemented as independent tests that either use stubs (imitating an existing empty publication) or use a minimal set of steps (e.g. create an empty publication and add an image to it).
However, as you proceed from simple verifications to more and more complex end-to-end tests, more and more of the existing independent tests may (and must) be reused, thus becoming test steps.
For example, when you test that a publication can be created, an image can be added to it, and the publication can then be saved, you are actually creating a wrapping test case that uses three existing ones as its steps. But those three steps (tests) are no longer independent, as they must act against the same publication.
Even more: in the real world, some actions may take considerable time to execute, which means it may be a good idea to reuse the results of previously executed tests in order to speed up the overall test run time.
For example, if it 'costs' a lot (in terms of time, resources, etc.) to create a new publication and fill it with content, then when verifying that a created publication can be published, it may not be a bad idea to reuse a publication already created by previously executed tests, rather than create a brand-new one, and just publish it.
With all the above in mind, one might consider a framework (not sure whether it would be considered lightweight or not :) ) in which tests store references to the created artifacts (publication name, its status, sections added, etc.) in some storage. It is then up to the subsequent tests to either reuse the existing results or repeat the already-executed actions from scratch.
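The artifact-reuse idea above can be sketched like this. The storage is a plain dict standing in for whatever a real framework would use (a file, a database, project properties), and the publication model is hypothetical:

```python
# A sketch of artifact reuse: tests record references to what they
# create, and later tests reuse them instead of recreating. The storage
# and publication fields are illustrative.

ARTIFACTS = {}

def create_publication(name):
    # Expensive step (stubbed); record the result for later tests
    pub = {"name": name, "status": "draft", "sections": []}
    ARTIFACTS[name] = pub
    return pub

def get_or_create_publication(name):
    # Subsequent tests choose: reuse the stored artifact, or fall back
    # to repeating the creation steps when nothing is stored
    return ARTIFACTS.get(name) or create_publication(name)

def publish(pub):
    pub["status"] = "published"
    return pub

create_publication("Q3-report")               # an earlier test created it
pub = get_or_create_publication("Q3-report")  # a later test reuses it
publish(pub)
```

The trade-off is the one discussed in the thread: reuse saves time, but it reintroduces ordering dependencies, so the fallback path (create when nothing is stored) is what keeps each test runnable on its own.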