Wintertainment 2020 - Day 6: Measuring API Performance Testing Success



Let's talk about measuring API performance testing success today! This is a burning topic for anyone who wants to create reliable tests.


Answer the questions at the end of the article, share your thoughts on the subject with the SmartBear Community, and get the chance to win prizes. Your experience can be of great help to others!

See participation rules and event schedule


Let's make this an insightful discussion!


Measuring API Performance Testing Success 


It’s often argued that, even with the best testing team in the world, you’ll never find 100% of the bugs in a software system. Instead of trying to find and fix every bug, the goal should be to build as much confidence as you can that the system will work well. Of course, this is easier said than done with any software system, and APIs are no exception. So, what makes a “successful” API? 


According to the State of API Report 2020, performance is the highest measure of API success. 




So how do you put this into practice? 


When measuring API performance testing success, it’s best to take a holistic approach. One measure you can take is to link the success criteria for your API load tests to the success criteria for your functional tests.  


Load tests give you clarity 


Let’s take an example. Say you have a functional test case that is made up of a sequence of API calls. You’ve figured out what the pass/fail criteria are for the test case, and the test is currently passing for the latest version of the API.  


So far, so good. But what happens if 100 users try to go through this sequence of API calls at the same time? What about 1000 users? You should keep these criteria in mind and map them to your load tests. If a load test causes some of your functional test cases to fail where they were passing before, there’s some more work to do. 
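The idea above can be sketched in a few lines of Python. This is a minimal, self-contained illustration, not a real load testing tool: the API calls are hypothetical stand-ins (`create_order`, `get_order_status` are invented names, and the latency is simulated), but the pattern of reusing the functional test's pass/fail criteria under concurrency is the point.

```python
import concurrent.futures
import random
import time

# Hypothetical stand-ins for the real API calls in a functional sequence.
# In practice these would be HTTP requests against your API under test.
def create_order():
    time.sleep(random.uniform(0.001, 0.005))  # simulated network latency
    return 201  # simulated HTTP status

def get_order_status():
    time.sleep(random.uniform(0.001, 0.005))
    return 200

def run_functional_sequence():
    """Run the functional test's API-call sequence and apply its
    existing pass/fail criteria (here: expected status codes)."""
    return create_order() == 201 and get_order_status() == 200

def run_load_test(virtual_users):
    """Run the same sequence for many concurrent virtual users and
    report what fraction of them still pass the functional criteria."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=virtual_users) as pool:
        results = list(pool.map(lambda _: run_functional_sequence(),
                                range(virtual_users)))
    return sum(results) / len(results)

print(f"Pass rate under 100 concurrent users: {run_load_test(100):.0%}")
```

If the pass rate drops below 100% as the number of virtual users grows, that is exactly the "functional test passing before, failing under load" signal described above.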


Find the bottleneck 


So, we’ve established that linking the success criteria for our functional tests to our load tests is one way to measure successful API performance. Now let’s think a bit more technically. To say that your APIs are “working well” involves more than just the user’s experience.  


While users may experience an API as a single entity, there’s likely one or more servers involved under the hood. These could be web application servers, database servers, or other systems, depending on the type of API you’re dealing with. You need to ensure these servers aren’t causing any bottlenecks. 


If you’re dealing with an Apache server, for example, you could monitor the number of idle workers available on the server, or the number of requests the server is handling per second. Regardless of the server operating system, it’s always good practice to monitor the amount of CPU being used during test execution. For metrics like this, it’s helpful to run a baseline load test with a “normal” number of users, store the results, and then compare them with the results when the number of virtual users is increased. 
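The baseline-comparison step described above can be expressed as a small helper. This is a sketch with made-up metric names and example numbers; the thresholds (`max_cpu_increase`, `min_rps_ratio`) are arbitrary illustrations you would tune for your own system, and in practice the metrics would come from your monitoring tooling rather than hard-coded dictionaries.

```python
def compare_to_baseline(baseline, loaded,
                        max_cpu_increase=20.0, min_rps_ratio=0.8):
    """Compare a high-load run's server metrics against a baseline run
    and return a list of findings for metrics that degraded too far.
    Thresholds here are illustrative, not recommendations."""
    findings = []
    if loaded["cpu_percent"] - baseline["cpu_percent"] > max_cpu_increase:
        findings.append("CPU usage grew beyond the allowed increase")
    if loaded["requests_per_sec"] < baseline["requests_per_sec"] * min_rps_ratio:
        findings.append("throughput dropped below the allowed ratio")
    if loaded["idle_workers"] == 0:
        findings.append("no idle workers left on the server")
    return findings

# Example numbers only: a "normal" baseline run vs. a high-load run.
baseline = {"cpu_percent": 35.0, "requests_per_sec": 420.0, "idle_workers": 12}
loaded   = {"cpu_percent": 88.0, "requests_per_sec": 150.0, "idle_workers": 0}
print(compare_to_baseline(baseline, loaded))
```

An empty findings list means the server held up under the increased virtual-user count; anything else points at a likely bottleneck worth investigating.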


But will it work in real life? 


Making sure that test execution environments are set up in as realistic a way as possible can give you reliable and relevant indications as to what your APIs will experience out in the wild. Chances are, if you’re developing a web API, the users interacting with it will be sending requests from different places. You might get requests being sent from North America, Eastern Europe and South Australia all within the space of an hour.  


This means that generating load test traffic from a single machine is often not enough. If you have cloud infrastructure available for testing, then spinning up machines in different geolocations to generate the load test traffic is a good place to start. A load test report based on multiple, distributed clients interacting with your API is going to be a lot more useful than if you just run the tests from the same computer you wrote them on.  
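Once you have multiple load generators in different geolocations, their results need to be merged into one report. Here is a minimal sketch of that aggregation step, assuming each region's generator hands back a list of latency samples in milliseconds; the region names and numbers are invented for illustration.

```python
import statistics

# Hypothetical per-region latency samples (ms) collected from
# load generators running in different geolocations.
region_latencies = {
    "us-east":  [45, 52, 48, 60, 55],
    "eu-west":  [80, 95, 88, 90, 85],
    "ap-south": [120, 135, 128, 140, 125],
}

def aggregate(latencies_by_region):
    """Merge all regions' samples into one report: a median latency
    per region, plus an overall median across every sample."""
    all_samples = [s for samples in latencies_by_region.values() for s in samples]
    report = {region: statistics.median(samples)
              for region, samples in latencies_by_region.items()}
    report["overall"] = statistics.median(all_samples)
    return report

print(aggregate(region_latencies))
```

A report like this makes regional differences visible: a healthy overall median can still hide one geolocation whose users are getting a much slower experience, which a single-machine load test would never reveal.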


Give it a true load test 


There are no silver bullets in software testing, but the approaches I’ve discussed here will leave you with better load test metrics that will make measuring the success of your API performance tests that much easier. 


What do you think? Comment below! 

  1. Are you using different tools for your functional tests and your load tests? 
  2. Do you monitor server performance during your load tests? If so, how? 
  3. What does your load testing environment look like? Cloud-based? Local machines? A hybrid of both?