API Load Testing Mistake #3 to Avoid: Failing to Explore Multiple Load Generation Scenarios

Creating an API load test is only one piece of the much larger load testing picture. Once your test is ready, you need to integrate it into the business process, simulate different load generation scenarios, and analyze the results. Today, we continue investigating API load testing mistakes you can easily avoid. Robert Schneider, a software testing consultant from WiseClouds, shares exclusive insight into load generation scenarios. Watch the video (you'll find the transcript below) and post your comments:

 

Robert Schneider

Robert D. Schneider is a Silicon Valley–based technology consultant and author, and managing partner at WiseClouds – a provider of training and professional services for NoSQL technologies and high performance APIs. He has supplied distributed computing design/testing, database optimization, and other technical expertise to Global 500 corporations and government agencies around the world. Clients have included Amazon.com, JP Morgan Chase & Co, VISA, Pearson Education, S.W.I.F.T., and the governments of the United States, Brazil, Malaysia, Mexico, Australia, and the United Kingdom.

 

Robert has written eight books and numerous articles on database platforms and other complex topics such as cloud computing, graph databases, business intelligence, security, and microservices. He is also a frequent organizer and presenter at major international industry events. Robert blogs at rdschneider.com.

 

Robert is also the author of the 7 Common API Testing Mistakes video series.

 

I hope you liked the video! If you have any questions, please post them here. The series contains five videos, one posted each Monday of August 2020. Subscribe to the SmartBear Community Matters Blog to get the news from us first!

Watch the previous videos of the series:

  1. Mistake #1: Separating Load Tests from Business Process
  2. Mistake #2: Not Calibrating Virtual Users to Real Users

 

Transcript:

Tanya:

Hi! Welcome to SmartBear Talks, where we continue talking about API load testing mistakes with Robert Schneider from WiseClouds. Today, we will talk about the third mistake. If you missed the previous two mistakes, watch the earlier videos.

Hi Robert! How are you today?

 

Robert:

I'm fine! How about everybody else out there? Hope you're all doing really well. I'm looking forward to talking about a very common problem we see, which is failing to test multiple API load generation strategies. What I mean by that is, a lot of times when people are testing their APIs under load (if they are testing their APIs under load, I should say), they pick a single load generation strategy that may not reflect what they will actually see in production at runtime.

 

The most common scenario is that if you're using a product like ReadyAPI, and LoadUI within it, people will go with the default fixed API load generation strategy, where the test comes awake, sends however many virtual users or arriving virtual processes you've defined, and just keeps sending that for as long as you run your load test. But the thing is, in the real world we don't normally see interactions with APIs in a steady state. Perhaps that's what it is for you, but most people, most applications, most organizations are going to see very different styles of load, maybe even with the same application but at different times of the day or different times of the year.

So you can imagine scenarios where, maybe, you've got a steady ramp-up of users. Perhaps you have a system that is meant to support people during the workday, for example. Imagine when the day starts: some people come in early, some, of course, within the world we're living in now, may be working remotely, whatever it might be, but whatever the scenario is, people come in a kind of graduated roll-up of load. They start with a very light load, and then, as the day goes on, it gets higher and higher. Then it goes steady, and then it tapers off as the day comes to an end.
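To make that contrast concrete, here is a minimal, tool-agnostic Python sketch of the "workday" load shape Robert describes, next to the default fixed shape. It is not ReadyAPI syntax; the user counts and durations are illustrative assumptions only.

```python
def workday_profile(elapsed_min, peak_users=200,
                    ramp_up_min=120, steady_min=360, ramp_down_min=120):
    """Target virtual-user count for a 'workday' load shape:
    gradual morning ramp-up, long steady plateau, end-of-day taper.
    All numbers are illustrative assumptions, not ReadyAPI defaults."""
    if elapsed_min < ramp_up_min:                        # morning ramp-up
        return int(peak_users * elapsed_min / ramp_up_min)
    if elapsed_min < ramp_up_min + steady_min:           # steady plateau
        return peak_users
    total = ramp_up_min + steady_min + ramp_down_min
    if elapsed_min < total:                              # evening taper
        return int(peak_users * (total - elapsed_min) / ramp_down_min)
    return 0                                             # the day is over


def fixed_profile(elapsed_min, users=200):
    """The default 'fixed' strategy, by contrast, is just a constant."""
    return users
```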

 

There are capabilities in ReadyAPI to test that exact scenario I just described. When you use 'fixed', you're missing all that ramp-up and ramp-down, but you can use the load generation strategies in ReadyAPI to simulate it exactly. So that's one of many potential load scenarios you might have.

I'll give you another one. Let's imagine you're building an application that works with some kind of transportation system, where everybody shows up at once to go through, say, a card reader for a metro system. Maybe a game has ended, or some kind of theater show has ended, and they've all come and they're going through the turnstiles, or whatever that thing is. We would call that a burst: very low levels of activity, followed by a huge spike, followed by maybe a short steady state, followed by an immediate drop-off.
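The burst shape Robert describes can be sketched the same way. Again, this is an illustrative Python stand-in, not ReadyAPI configuration, and every number is an assumption.

```python
def burst_profile(elapsed_min, baseline=10, burst_users=500,
                  burst_start=30, burst_len=15, cooldown=10):
    """Target virtual users for a 'burst' load shape: a quiet baseline,
    a sudden spike (a stadium emptying into the metro turnstiles),
    a short sustained peak, then an immediate drop-off.
    All numbers are illustrative assumptions."""
    if elapsed_min < burst_start:
        return baseline                                  # quiet period
    if elapsed_min < burst_start + burst_len:
        return burst_users                               # everyone arrives at once
    if elapsed_min < burst_start + burst_len + cooldown:
        return baseline                                  # back to a trickle
    return 0
```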

 

A lot of times people don't test that scenario, and you find out at runtime, in production, that, as it turns out, you really weren't able to scale for the level of API load you saw from the real world. The thing is, again, with a product like ReadyAPI, you have the ability to test those kinds of scenarios.

 

Tanya:

I'm not an expert in ReadyAPI, so my question may sound a bit like a question for dummies, sorry about that. To me, it sounds like you have one API and you try to test it using different scenarios, but at the same time, like in parallel?

 

Robert:

You can do it in parallel, or you can do it sequentially. And a lot of times people don't really know what they're going to see in production. Maybe it's a new application, so why not see how your application does with a ramp-up, see how it does in a steady state, see how it does with a ramp-down, see how it does with a burst of activity? A lot of times, again, through no fault of their own, because people are busy and under a lot of time pressure, we know all that, they simply choose the default setting in ReadyAPI, which is just: give me a steady state of load. Which is fine. But, as with a lot of things in testing APIs, especially under load: experiment! It's very easy in the product to choose from the drop-down and say, let's move this from a ramp-up or a fixed load, let's put it under burst and see what happens. Oh, it turns out that if we're going to get a big spike of traffic at a particular time of day, we're not going to be able to handle it. Maybe our cloud resources are not elastic enough to grow or shrink back. There are so many things you can find out by doing this, but you will never know any of it if you don't experiment, try it out, and learn from what the tool is telling you.
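As a rough sketch of that "just experiment" advice, a tiny harness could step through each of the profiles sketched earlier in turn and fire that many concurrent requests per simulated minute. In practice ReadyAPI's strategy drop-down does this for you; here the requests library, the example URL, and the five-minute run length are all assumptions for illustration.

```python
import requests  # assumed third-party dependency: pip install requests
from concurrent.futures import ThreadPoolExecutor

API_URL = "https://api.example.com/orders"  # hypothetical endpoint under test


def hit(_):
    """Issue a single GET and report its status (None on network error)."""
    try:
        return requests.get(API_URL, timeout=10).status_code
    except requests.RequestException:
        return None


def run_scenario(profile, duration_min):
    """Step through one load profile, firing that many concurrent requests
    per simulated minute - a crude stand-in for a real load testing tool."""
    for minute in range(duration_min):
        users = profile(minute)
        with ThreadPoolExecutor(max_workers=max(users, 1)) as pool:
            codes = list(pool.map(hit, range(users)))
        print(f"{profile.__name__} @ minute {minute}: "
              f"{users} users, {codes.count(200)} OK")


# Try every shape rather than settling for the default fixed load:
for scenario in (fixed_profile, workday_profile, burst_profile):
    run_scenario(scenario, duration_min=5)
```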

 

Tanya:

Okay, so experiment and you will discover a whole new world!

 

Robert:

Absolutely, absolutely.

 

Tanya:

Okay! Thanks a lot for explaining these mistakes. And, we have two more! Stay tuned!

 

Robert:

See you soon!

 
