API Testing Mistake #5: Testing Only Positive Scenarios - Robert Schneider

Many of us create tests only for positive scenarios, completely ignoring the negative ones. Yet negative scenarios deserve the same attention: the API must also reject bad input correctly. Watch the new episode of the 7 Common API Testing Mistakes series with Robert Schneider to learn why you need to create both positive and negative scenarios.


Robert Schneider

Robert D. Schneider is a Silicon Valley–based technology consultant and author, and managing partner at WiseClouds – a provider of training and professional services for NoSQL technologies and high-performance APIs. He has supplied distributed computing design/testing, database optimization, and other technical expertise to Global 500 corporations and government agencies around the world. Clients have included Amazon.com, JP Morgan Chase & Co, VISA, Pearson Education, S.W.I.F.T., and the governments of the United States, Brazil, Malaysia, Mexico, Australia, and the United Kingdom.

 

Robert has written eight books and numerous articles on database platforms and other complex topics such as cloud computing, graph databases, business intelligence, security, and microservices. He is also a frequent organizer and presenter at major international industry events. Robert blogs at rdschneider.com.


Watch the previous videos in the 7 Common API Testing Mistakes series:

  1. Focus on the Most Typical Messages
  2. Using Hard-Coded Test Data
  3. Shielding QA Team From End Users
  4. Separating Load Testing from Functional Testing

 

Transcript:

Tanya

Rob, which mistake are we going to cover today?

 

Robert

This one is called ‘Only test positive scenarios’. Let me tell you about a mistake that people make very often. When you have a fixed set of test cases, very often what people will do is they will only test a positive outcome. In other words, let's use a hotel reservation system. In a hotel reservation, you could think there are going to be maybe 10 different bits of information in there: your name, the dates, the hotel, the number of beds, special requests, and so on. When you have a relatively small number of test cases that you're trying to use with hard-coded data, it's very tempting to fall into the trap of ‘Let's come up with a good hotel reservation scenario. We're going to give them the right information; maybe we'll do a bad one occasionally.’ But a lot of times people will just simply test the positive outcomes. They will send in the right name, the right dates, the right room type, the right beds, the right payment method and so on, and, in the case of a web service, they won't get back a SOAP fault, and, in the case of a REST API, they'll get back a 200 HTTP status code. And they'll say, ‘This positive outcome is good. I sent in a good reservation request and got back a success, we can move on. Our system is now capable of dealing with positive, with good reservations.’

 

And, when you're doing a relatively small number of test cases, you might fall into that trap much more often. We think a better way of doing it is (and this kind of ties into the notion of data-driven testing that we covered earlier in the series) - why not test not only positive outcomes but negative outcomes as well? I'm going to send in a request for a reservation that's four years in duration, or for beds that we don't have in the hotel, or for an empty last name, or for an invalid credit card. A lot of times people think, ‘Well, the API is going to send me an error, and it's going to cause my test case, in the world of SoapUI, to fail.’ But, no, you can actually configure SoapUI so that it will look for 404s or 500s or whatever HTTP status code is indicative of an error, or a SOAP fault, what have you.
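In SoapUI itself this is set up with assertions on the test step, but the idea is tool-agnostic. Here is a minimal sketch in Python with the requests library, against a hypothetical /reservations endpoint with made-up field names, showing a test that passes only when the API correctly rejects a bad request:

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical hotel-reservation API

# Negative scenario: a reservation with an empty last name.
bad_reservation = {
    "lastName": "",            # invalid on purpose
    "checkIn": "2024-06-01",
    "checkOut": "2024-06-03",
    "beds": 1,
}

response = requests.post(f"{BASE_URL}/reservations", json=bad_reservation)

# Assert on the error status (404, 500, or whatever your API uses),
# not on a 200: the test PASSES when the bad request is rejected.
assert response.status_code in (400, 404, 500), (
    f"Expected an error status for a bad reservation, got {response.status_code}"
)
```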

 

So what we recommend is this: in the same place where you drive your APIs with data, you drive your tests with data. Imagine, let's say, it's a ten-row spreadsheet. In that spreadsheet, you have the name, and the date, and the beds and all that, one row after the other, each row representing a different test scenario for one test case. Row one could be a valid row where, alongside your input data, you have a status code expectation of a 200, and you can assert on that using a testing tool such as SoapUI to say, ‘I called the API with good data. I expect to see a 200 in the response.’ You evaluate that in the response, it's there, great, you move on to the next row.

 

The next row in your data is a negative test. This row we expect to cause an error in the API. So, in this case, maybe it'll be an empty last name or something like that. In the column where you keep your expected HTTP code, you don't put a 200, which is a positive; you put a 404 or whatever your API uses to indicate an error. Then, you can simply go through row by row: positive, negative, positive, negative. Whatever you decide to do, these two things are now interspersed in the same data source, and you are covering your API in a much more comprehensive way: not just looking for things that should throw a good response, but also looking for things that should throw a failure, negative testing, all together in one data source.
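As a rough sketch of those interleaved rows (not SoapUI itself; Python with pytest and requests, hypothetical endpoint and field names, and 404 standing in for whatever error code your API returns), each row carries the payload plus the status code you expect back:

```python
import pytest
import requests

BASE_URL = "https://api.example.com"  # hypothetical

# Each row: the reservation payload plus the HTTP status we expect back.
ROWS = [
    ({"lastName": "Lovelace", "checkIn": "2024-06-01", "checkOut": "2024-06-03",
      "beds": 1}, 200),   # positive: valid reservation
    ({"lastName": "",         "checkIn": "2024-06-01", "checkOut": "2024-06-03",
      "beds": 1}, 404),   # negative: empty last name
    ({"lastName": "Hopper",   "checkIn": "2024-06-01", "checkOut": "2028-06-01",
      "beds": 1}, 404),   # negative: four-year stay
]

@pytest.mark.parametrize("payload,expected_status", ROWS)
def test_reservation(payload, expected_status):
    response = requests.post(f"{BASE_URL}/reservations", json=payload)
    # One test case, many rows: the assertion compares against whatever
    # status this particular row says the API should return.
    assert response.status_code == expected_status
```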

 

If you think about it, if you're using hard-coded test cases, you may even be doing a little bit of negative testing, but I can guarantee you, a lot of times the trap that we see people fall into is that they will have hard-coded test cases for these various scenarios, and then it becomes kind of time-consuming and tedious for them to create a new test case with yet another new set of scenarios. But when you use data, especially dynamically generated data, and you have it in some kind of a source of information that you can feed to your APIs along with what you expect the API to return, you are able to plow through these in a much more efficient way. And then, when people come up with new scenarios, which they always do, you don't write a new test case; you simply take the existing data source and add a row into it that represents, let's say, a new negative scenario.
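A minimal sketch of that external data source, assuming a hypothetical reservations.csv file and the same made-up endpoint as above: a new scenario is just a new line in the file, and no test code changes.

```python
import csv
import requests

BASE_URL = "https://api.example.com"  # hypothetical

# reservations.csv (hypothetical) might look like:
# lastName,checkIn,checkOut,beds,expected_status
# Lovelace,2024-06-01,2024-06-03,1,200
# ,2024-06-01,2024-06-03,1,404
# Hopper,2024-06-01,2028-06-01,1,404

def run_data_driven_tests(path="reservations.csv"):
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            expected = int(row.pop("expected_status"))
            # Fields are sent as strings here to keep the sketch short.
            response = requests.post(f"{BASE_URL}/reservations", json=row)
            status = "PASS" if response.status_code == expected else "FAIL"
            print(f"{status}: sent {row}, expected {expected}, "
                  f"got {response.status_code}")
```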

 

So, as you go through this, this is, by the way, a great way of uncovering really hard-to-find bugs, because you could have a scenario where you're sending in, for example, a bad reservation. You expect this bad reservation to throw a 404. Maybe the dates are null, or the payment method is some weird thing, and yet it takes the reservation. You expected an error, and you got back a success. That's a problem, and you need to alert your developers to it.

 

Likewise, you send in what looks to be a good reservation, and it turns out there's some hidden bug in your API that causes that reservation to throw a 404 when you expected a 200 - once again, go notify your developers.

But this level of comprehensive coverage is only possible if you look at the API from all perspectives and try to break it at all times along with trying to make sure that it returns the right positive results to you.

 

Tanya

Does it make sense to do load testing for negative tests?

 

Robert

It's even more so in a lot of ways, because if you think about it, if someone is placing your API under attack, a lot of times things come in that are malformed, or they're not properly structured, or there's missing data. So, yeah, if you're going to do proper security and performance testing, you need to be subjecting your API to the kinds of things it's going to see in the real world. And the real world is not perfect. We're going to see bad data coming in, we're going to see negative scenarios. Why not bring that all the way back into the functional testing perspective? And then, when you take those functional tests and hand them over to your performance and your security colleagues, they've got all aspects of an API test, both positive and negative, in the same source of information, in the same functional test.
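In SoapUI/LoadUI the functional tests themselves are reused under load, but as a generic sketch of the idea (Python with concurrent.futures, reusing the same hypothetical positive/negative rows and endpoint from the earlier examples), a load test can interleave well-formed and malformed requests the way real traffic does:

```python
import concurrent.futures
import itertools
import requests

BASE_URL = "https://api.example.com"  # hypothetical

# Reuse the same positive/negative rows from the functional tests.
ROWS = [
    ({"lastName": "Lovelace", "beds": 1}, 200),   # well-formed
    ({"lastName": "", "beds": 1}, 404),           # malformed: empty name
    ({"lastName": "Hopper", "beds": 99}, 404),    # malformed: impossible beds
]

def send(row):
    payload, expected = row
    response = requests.post(f"{BASE_URL}/reservations", json=payload, timeout=5)
    return response.status_code == expected

def load_test(total_requests=300, workers=20):
    # Cycle through good and bad requests and fire them concurrently.
    rows = itertools.islice(itertools.cycle(ROWS), total_requests)
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(send, rows))
    print(f"{sum(results)}/{total_requests} responses matched expectations")
```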

 

Tanya

Thanks for sharing such a great example with us. Community, if you have any questions, we are always happy to continue our conversation in the SmartBear Community. Post your comments, and we will be happy to reply to you. Next time, we will cover several more mistakes. Thanks a lot, Rob, for this video!

 

Robert

Thank you guys. See you next time, bye!

Next time, we'll review the next mistake. If you have any questions or suggestions about the mistakes, please leave your comments under this video, and we will be happy to continue our conversation online.
