It’s often argued that, even with the best testing team in the world, you’ll never find 100% of the bugs in a software system. Instead of trying to find and fix every bug, the goal should be to give as much confidence as possible that the system will work well. Of course, this is easier said than done with any software system, and APIs are no exception. So, what makes a “successful” API?
When measuring API performance testing success, it’s best to take a holistic approach. One measure you can take is to link the success criteria for your API load tests to the success criteria for your functional tests.
Load tests give you clarity
Let’s take an example: Say you have a functional test case that is made up of a sequence of API calls. You’ve figured out what the pass/fail criteria are for the test case, and the test is currently passing for the latest version of the API.
So far, so good. But what happens if 100 users try to go through this sequence of API calls at the same time? What about 1,000 users? You should keep these criteria in mind and map them to your load tests. If a load test causes some of your functional test cases to fail where they were passing before, there’s some more work to do.
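The idea can be sketched with a small script: run the same functional sequence concurrently for many virtual users and check whether each run still meets its pass criteria. This is a minimal illustration only; `run_functional_sequence` is a hypothetical stand-in for your real sequence of API calls (which in practice would be HTTP requests against your API).

```python
import concurrent.futures

def run_functional_sequence(user_id):
    """Hypothetical stand-in for one user's sequence of API calls.

    In a real load test each step would be an HTTP request, and the
    pass/fail criteria would be the same ones your functional test uses.
    """
    steps = [True, True, True]  # placeholder per-step pass/fail results
    return all(steps)

def run_load_test(num_users):
    """Run the functional sequence concurrently for num_users virtual
    users and report how many sequences met the functional pass criteria."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=num_users) as pool:
        results = list(pool.map(run_functional_sequence, range(num_users)))
    passed = sum(results)
    return passed, num_users - passed

passed, failed = run_load_test(100)
print(f"{passed} sequences passed, {failed} failed under load")
```

If any sequences fail here that pass when run for a single user, that’s exactly the signal described above: the functional criteria hold, but not under load.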
Find the bottleneck
So, we’ve established that linking the success criteria for our functional tests to our load tests is one way to measure successful API performance. Now let’s think a bit more technically. To say that your APIs are “working well” involves more than just the user’s experience.
While users may experience an API as a single entity, there’s likely one or more servers involved under the hood. These could be web application servers, database servers, or other backend systems, depending on the type of API you’re dealing with. You need to ensure these servers aren’t causing any bottlenecks.
If you’re dealing with an Apache server, for example, you could monitor the number of idle workers available on the server, or the number of requests the server is handling per second. Regardless of the server operating system, it’s always good practice to monitor the amount of CPU being used during test execution. For metrics like this, it’s helpful to run a baseline load test with a “normal” number of users, store the results, and then compare them with the results when the number of virtual users is increased.
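A simple way to make that baseline comparison concrete is to store the key metrics from each run and compute the relative change. The metric names and numbers below are hypothetical; in practice they would come from your monitoring tooling (for Apache, something like mod_status for idle workers and requests per second, plus OS-level CPU stats).

```python
# Hypothetical metrics captured during a baseline run ("normal" user count)
# and a high-load run with more virtual users.
baseline = {"idle_workers": 120, "requests_per_sec": 250.0, "cpu_percent": 35.0}
high_load = {"idle_workers": 8, "requests_per_sec": 310.0, "cpu_percent": 92.0}

def percent_change(before, after):
    """Relative change from the baseline value, as a percentage."""
    return (after - before) / before * 100.0

# Compare each metric against its baseline to spot emerging bottlenecks.
for metric in baseline:
    change = percent_change(baseline[metric], high_load[metric])
    print(f"{metric}: {baseline[metric]} -> {high_load[metric]} ({change:+.1f}%)")
```

A sharp drop in idle workers alongside CPU nearing saturation, as in these sample numbers, would suggest the server itself is the bottleneck rather than your API code.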
But will it work in real life?
Making sure that test execution environments are set up in as realistic a way as possible can give you reliable and relevant indications as to what your APIs will experience out in the wild. Chances are, if you’re developing a web API, the users interacting with it will be sending requests from different places. You might get requests being sent from North America, Eastern Europe, and South Australia all within the space of an hour.
This means that generating load test traffic from a single machine is often not enough. If you have cloud infrastructure available for testing, then spinning up machines in different geolocations to generate the load test traffic is a good place to start. A load test report based on multiple, distributed clients interacting with your API is going to be a lot more useful than if you just run the tests from the same computer you wrote them on.
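Once you have load generators in several regions, their results need to be combined into one report. Here’s a minimal sketch of that aggregation step; the region names and latency samples are invented for illustration, and real numbers would be collected from your cloud-based agents.

```python
import statistics

# Hypothetical latency samples (ms) reported by load generators running
# in three geolocations.
samples_by_region = {
    "us-east": [85, 90, 88, 120, 95],
    "eu-central": [140, 150, 135, 160, 145],
    "ap-southeast": [210, 230, 205, 250, 215],
}

# Per-region medians expose geographic variation that a single-machine
# test would hide entirely.
for region, samples in samples_by_region.items():
    print(f"{region}: median {statistics.median(samples)} ms")

# Merge all samples for the overall figures in the load test report.
all_samples = [s for samples in samples_by_region.values() for s in samples]
print(f"overall: median {statistics.median(all_samples)} ms, "
      f"p95 ~ {statistics.quantiles(all_samples, n=20)[-1]:.1f} ms")
```

The per-region breakdown matters as much as the overall number: an API that looks fast from the region where you wrote the tests may be noticeably slower for users on the other side of the world.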
Give it a true load test
There are no silver bullets in software testing, but the approaches I’ve discussed here will leave you with better load test metrics that will make measuring the success of your API performance tests that much easier.