Forum Discussion

MichaelMcGroary
New Contributor
3 years ago

Performance Test to sequentially run a large number of Test Cases

Hi,

 

My goal is to run a Performance Test that executes a relatively large number of Test Cases (>200) in sequential order. The Performance Test is configured to run with 10 Virtual Users. The purpose of this test is to gather metrics (average, minimum, and maximum response times, etc.) over the course of 10 executions (one per VU) for each of the Test Cases and Test Steps (it is a requirement that we collect per-Test-Step metrics).

 

To do this, my first approach was to create a single Performance Test Scenario that includes all 200 Test Cases as Targets (I can’t use multiple Scenarios, as the Test Cases need to run sequentially). This would capture the required metrics. However, this is a large project with multiple developers creating, modifying, and potentially deleting Test Cases. The deleting is the problem, per the warning shown when removing a Test Case: “All load scenarios containing this test case will be removed too”. It is difficult to ensure that a Test Case never gets removed, even inadvertently. Another problem is that creating and maintaining a Scenario with >200 Targets is cumbersome.

 

Is there a well-known/better way to do this? I have tried various Groovy scripts that run the Test Cases directly, but the reports do not capture the Test Step metrics. Is there a way for a script to add Targets (Test Cases) to a Performance Test Scenario (along the lines of cloneTestCase() for functional tests)? In fact, a Groovy script solution is preferable if it avoids the need for a Scenario with 200+ Targets.
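For reference, this is the kind of Groovy script I have been experimenting with: a minimal sketch that runs every Test Case in one suite sequentially and collects the per-step timings itself via the public SoapUI scripting API. The suite name "BaselineSuite" is a placeholder (the Groovy step running this has to live outside that suite), and the 10 sequential runs only approximate the one-run-per-VU requirement; they are not concurrent.

```groovy
// Minimal sketch (Groovy test step): run every Test Case in one suite
// sequentially and collect per-Test-Step timings ourselves, since the
// standard report does not capture them when cases are launched from a
// script. "BaselineSuite" is a placeholder name; this step must not
// live inside that suite, or it would recurse.
import com.eviware.soapui.support.types.StringToObjectMap

def suite = context.testCase.testSuite.project.getTestSuiteByName("BaselineSuite")
def timings = [:].withDefault { [] }        // "case / step" -> list of ms values

(1..10).each { run ->                       // 10 sequential runs (not concurrent VUs)
    suite.testCaseList.each { tc ->
        def runner = tc.run(new StringToObjectMap(), false)   // false = run synchronously
        runner.results.each { r ->
            timings["${tc.name} / ${r.testStep.name}"] << r.timeTaken
        }
    }
}

timings.each { name, ms ->
    log.info String.format("%s: avg=%.1f min=%d max=%d (ms)",
        name, (ms.sum() / ms.size()).doubleValue(), ms.min(), ms.max())
}
```

This logs one line per Test Step with aggregated timings, which sidesteps the reporting gap, but of course it lacks the Performance Test report features.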

 

Using ReadyAPI 3.9 (With Pro License).

 

Any suggestions are appreciated.

 

Thanks,

Michael

5 Replies

  • nmrao
    Champion Level 3

    Questions:

    1. Are you going to use the ReadyAPI tool to test performance, or LoadUI?

     

    Other pointers:

    # Consider what metrics are needed in the performance report and design your tests accordingly. For example, measure the throughput of a given API GET call with varying request sizes, and do the same for PUT or DELETE calls.

    # Instead of Groovy scripts that call other test cases, use out-of-the-box test cases, which should help you get accurate performance data.

    # Run the tests in the same subnet to avoid network latency and get better throughput.

    # Running tests sequentially may not give the desired results. The right way is to design the test first, then create the load test and run it. There should be a limited number of tests, based on the APIs, to get accurate performance data.

    # Use a command-line tool (such as loadtestrunner) to invoke the performance test.

     

    • MichaelMcGroary
      New Contributor

      Thanks for the advice Rao.

       

      To answer your question, the plan is to use ReadyAPI (Pro). This is because we have over 200 Functional Test Cases in ReadyAPI we want to leverage.

       

      To refine my goal: what we are trying to do is establish individual baseline performance metrics for each of our API endpoints (REST calls). That is, how each endpoint performs (response time) when the system is under minimal load. The intent is to run these tests on a lower environment (not our full-blown Load environment) earlier in our test cycle, to identify individual calls whose response times have notably increased compared to the baseline, so work can proceed to address performance issues earlier (i.e., shift left).

       

      The above goal is what led to the requirements to run the Test Cases in sequential order and to run each one a fixed number of times (e.g., 10). We are not looking to run a standard Load test; it is more of a baseline Performance test.
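      In other words, the per-step check we ultimately want is roughly the following (plain Groovy; the step names, baseline averages, and the 20% regression threshold are made-up placeholders):

```groovy
// Sketch of the baseline comparison we want (plain Groovy; step names,
// baseline averages, and the 20% threshold are made-up placeholders).
def baselineAvgMs = ["GetOrders / GET Request": 120, "CreateOrder / POST Request": 250]
def currentAvgMs  = ["GetOrders / GET Request": 118, "CreateOrder / POST Request": 340]

currentAvgMs.each { step, avg ->
    def base = baselineAvgMs[step]
    if (base != null && avg > base * 1.2) {
        println "Possible regression: ${step} avg ${avg} ms vs baseline ${base} ms"
    }
}
```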

       

      As for loadtestrunner, the limitation that it can run only one load test (a single -n option) per invocation makes it unsuitable for what we are trying to do (we would need one Scenario with all of our test cases in it, which is arduous to maintain).

       

      Thanks,

      Michael

      • nmrao
        Champion Level 3
        It is OK to run one load test case; you can add the different APIs to it. Otherwise, have different logical tests and run them separately. Either way, you will run each test with a fixed number of users (threads) for a specified time. This way, if some performance fixes are done, it is easy to repeat the test to reproduce the issue or verify the fix.
  • aaronpliu
    Frequent Contributor

    MichaelMcGroary 

    It is possible to automatically create/clone test cases in ReadyAPI.

    Would you please state clearly what you want? I do not quite understand the requirement yet.
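    For example, here is a minimal sketch of cloning a Test Case within its suite via the scripting API (the suite and test case names "BaselineSuite" / "GetOrders" are placeholders; please confirm whether this is the direction you mean):

```groovy
// Minimal sketch: clone an existing Test Case within its suite using the
// public scripting API ("BaselineSuite" / "GetOrders" are placeholders).
def project  = context.testCase.testSuite.project
def suite    = project.getTestSuiteByName("BaselineSuite")
def original = suite.getTestCaseByName("GetOrders")

// cloneTestCase(...) creates a copy inside the same suite
def copy = suite.cloneTestCase(original, "GetOrders - copy")
log.info "Cloned '${original.name}' as '${copy.name}'"
```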

     

    Thanks,

    /Aaron