Re: Performance Test to sequentially run large quantity of Test Cases
Thanks for the advice, Rao. To answer your question, the plan is to use ReadyAPI (Pro), because we have over 200 Functional Test Cases in ReadyAPI that we want to leverage.

To refine my goal: what we are trying to do is establish individual baseline performance metrics for each of our API endpoints (REST calls), that is, how each endpoint performs (response time) when the system is under minimal load. The intent is to run these tests on a lower environment (not our full-blown Load environment) earlier in our test cycle, to identify individual calls whose response times have notably increased compared to the baseline, so that work can proceed to address performance issues earlier (i.e. shift left).

The above goal is what led to the requirements to run the Test Cases in sequential order and to run each a fixed number of times (e.g. 10). We are not looking to run a standard Load test; it is more of a baseline performance test.

As for loadtestrunner, its limitation of running only one load test (one -n option) per invocation makes it unsuitable for what we are trying to do (we would need one Scenario containing all of our test cases, which would be arduous to maintain).

Thanks,
Michael
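The baseline comparison described above can be sketched outside ReadyAPI once per-endpoint response times have been exported. Below is a minimal Python sketch of the idea, assuming timings are available as plain dicts; the endpoint names, sample data, and 20% regression threshold are all illustrative assumptions, not anything ReadyAPI itself produces.

```python
from statistics import mean

# Hypothetical baseline: average response time (ms) per endpoint,
# e.g. exported from an earlier "minimal load" ReadyAPI run.
baseline_ms = {"GET /users": 120.0, "POST /orders": 340.0}

def flag_regressions(samples_ms, baseline, threshold=1.20):
    """Return endpoints whose average response time exceeds
    baseline * threshold (here: 20% over baseline)."""
    flagged = {}
    for endpoint, samples in samples_ms.items():
        avg = mean(samples)
        base = baseline.get(endpoint)
        if base is not None and avg > base * threshold:
            flagged[endpoint] = (base, avg)
    return flagged

# Illustrative samples from a current run (10 executions per endpoint).
current = {
    "GET /users":  [118, 125, 122, 119, 121, 117, 124, 120, 123, 118],
    "POST /orders": [430, 455, 441, 438, 447, 452, 444, 439, 449, 446],
}
print(flag_regressions(current, baseline_ms))
# POST /orders is flagged: its average is well above 340 ms * 1.2
```

Checking each run this way, rather than eyeballing full load-test reports, is what makes the shift-left goal practical: only the endpoints that drifted from baseline need attention.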
Performance Test to sequentially run large quantity of Test Cases

Hi,

My goal is to run a Performance Test that executes a relatively large number of Test Cases (>200) in sequential order. The Performance Test is configured to run with 10 Virtual Users. The purpose of this test is to get metrics (avg response time, min, max, etc.) over the course of 10 executions (once per VU) for each of the Test Cases and Test Steps (it is a requirement that we collect per-Test-Step metrics).

To do this, my first approach was to create a single Performance Test Scenario that includes all 200 Test Cases as Targets (we can't use multiple Scenarios, as the Test Cases need to run sequentially). This would capture the required metrics. However, this is a large project with multiple developers creating, modifying, and potentially deleting Test Cases. It is the deleting that is a problem, per the warning shown when removing a Test Case: "All load scenarios containing this test case will be removed too". It is difficult to manage and to ensure that a Test Case never gets removed, even inadvertently. Another problem is that creating and maintaining a Scenario with >200 Targets is cumbersome.

Is there a well-known/better way to do this? I have tried various Groovy scripts to run the Test Cases directly, but the reports do not capture the Test Step metrics. Is there a way for a script to add Targets (Test Cases) to a Performance Test Scenario (along the lines of cloneTestCase() for functional tests)? In fact, a Groovy script solution is preferable if it avoids the need for a Scenario with 200+ Targets.

Using ReadyAPI 3.9 (with Pro license). Any suggestions are appreciated.

Thanks,
Michael
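For illustration only, the workflow described above (run each Test Case a fixed number of times in sequential order and collect min/max/avg per Test Step) can be sketched generically. This is a hypothetical Python harness with stub steps standing in for real REST calls; it is not ReadyAPI's API, just a sketch of the shape of the data that needs to be collected.

```python
import time
from statistics import mean

def run_sequential(test_cases, runs=10):
    """test_cases: ordered list of (case_name, [(step_name, step_fn), ...]).
    Runs each case `runs` times in order, timing every step.
    Returns {(case, step): {'min': .., 'max': .., 'avg': ..}} in ms."""
    samples = {}
    for case_name, steps in test_cases:
        for _ in range(runs):
            for step_name, step_fn in steps:
                start = time.perf_counter()
                step_fn()  # in a real harness, this would issue the REST call
                elapsed_ms = (time.perf_counter() - start) * 1000
                samples.setdefault((case_name, step_name), []).append(elapsed_ms)
    return {
        key: {"min": min(v), "max": max(v), "avg": mean(v)}
        for key, v in samples.items()
    }

# Stub test cases; names are illustrative.
metrics = run_sequential(
    [("TC-Login", [("POST /login", lambda: None)]),
     ("TC-Orders", [("GET /orders", lambda: None)])],
    runs=10,
)
for key, stats in metrics.items():
    print(key, stats)
```

The key point the question raises is that ReadyAPI's load reports already aggregate these per-step statistics when Test Cases are Scenario Targets, but a plain Groovy runner loses them; the sketch shows what a script-based approach would have to reconstruct by hand.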