API Testing Mistake #7: Running Tests Manually - Robert Schneider
Running tests manually is more than just an old habit; it is a waste of your time! In this video, Robert Schneider explains why API tests require automation and lists the major requirements. Watch the new video for details:
Robert D. Schneider is a Silicon Valley–based technology consultant and author, and managing partner at WiseClouds – a provider of training and professional services for NoSQL technologies and high performance APIs. He has supplied distributed computing design/testing, database optimization, and other technical expertise to Global 500 corporations and government agencies around the world. Clients have included Amazon.com, JP Morgan Chase & Co, VISA, Pearson Education, S.W.I.F.T., and the governments of the United States, Brazil, Malaysia, Mexico, Australia, and the United Kingdom.
Robert has written eight books and numerous articles on database platforms and other complex topics such as cloud computing, graph databases, business intelligence, security, and microservices. He is also a frequent organizer and presenter at major international industry events. Robert blogs at rdschneider.com.
Watch all videos of the 7 Common API Testing Mistakes series:
Today we're going to talk about executing tests. What is the issue here?
Well, the issue here is another really common one, where people will run their tests manually. They will bring up whatever testing tool they might be using, could be ReadyAPI or SoapUI, could be something else. And they'll sit there at their desk, and they will launch a test, they will look at the results and say, ‘Okay, this looks like it's good. Let's launch the next test.’ And so on. They'll maybe do this for four or five tests, and if you've been watching this long-running series, what you've probably noticed is that a lot of these problems tie into each other.
In other words, you can get away with running your tests manually if you only have five tests with hard-coded data. But as we've seen, there are some real drawbacks to using hard-coded data and only covering a small part of your API. So, by definition, once you get to a place where you are feeding your API tests with proper amounts of realistic data, and lots of it, and running lots of different scenarios, it becomes almost impossible to test properly using purely manual procedures.
Now, clearly, when you start your testing efforts and you're building your tests, you have to run them manually to see how they all work, and whether they're logical and working the way you think they will. But once you get past that, and you start thinking about running them in a typical build environment, you really want to be thinking about automation.
What issues can we face here?
Sure. If you don't use automation, first of all you're not going to be part of the actual build cycle, which is where you want to be if you're doing proper API testing. Most organizations these days have, in many cases, moved from waterfall software development to more of an agile approach.
And, not everybody is pure agile. But, in general, most organizations are doing builds much more frequently than they used to in the past. So, you, as a tester, or a developer who's building tests, no longer have the luxury of saying, ‘Well, when we do this month's build, I'll run my tests.’ It's more like this minute's build, this hour's build. So, you have to use automation. If you don't, you're not truly part of the regression testing cycle, you're not part of any new functionality testing, and you frankly are a bottleneck in the overall software development workflow. You don't want to be that bottleneck.
The good news is that modern API testing tools, and, again, I keep referring to SmartBear's really excellent products, are very nicely integrated with popular build environments like Jenkins, Maven, or even Ant. These can all launch the API tests that you have built in the tool of your choice. This is called shift left: you're taking the testing cycle and pulling it back closer to the development cycle.
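The contract between a test suite and a build server is simple: the build step runs the tests and fails the build on a non-zero exit code. Here is a minimal Python sketch of that idea; the `call_api` stub, the endpoints, and the canned responses are all hypothetical placeholders for a real test runner, not part of any SmartBear product.

```python
import sys

def call_api(endpoint):
    """Hypothetical stand-in for a real HTTP call made by your test tool;
    returns a canned status code so the sketch runs offline."""
    canned = {"/users": 200, "/orders": 200, "/missing": 404}
    return canned.get(endpoint, 500)

def run_suite():
    """Run each check and return the number of failures."""
    checks = [("/users", 200), ("/orders", 200), ("/missing", 404)]
    failures = 0
    for endpoint, expected in checks:
        actual = call_api(endpoint)
        if actual != expected:
            failures += 1
        print(f"{'PASS' if actual == expected else 'FAIL'} "
              f"{endpoint}: expected {expected}, got {actual}")
    return failures

if __name__ == "__main__":
    # The non-zero exit code is what tells Jenkins (or Maven, or Ant)
    # that the build should be marked as failed.
    sys.exit(1 if run_suite() else 0)
```

A Jenkins freestyle job or pipeline step would simply invoke this script (or, with ReadyAPI, the tool's own command-line runner) and react to the exit status.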
So, as we wrap up this series, what should hopefully come across is this: if you follow the best practices we've been talking about over the past few weeks, you'll see that they make automation much easier, because nothing is hard-coded and you're not stuck with a handful of very fixed tests. Your tests are a lot more dynamic, and at the same time a lot easier to pull into an automated build environment.
When I talk about automation, I mean it in two dimensions. The first dimension is data-driven testing: instead of five fixed test scenarios, you have thousands of scenarios encapsulated in five or ten test cases, with different data feeding them. The second dimension is that instead of you launching the tests on your machine and looking at the results, they are tied into the build cycle: when people kick off a Jenkins build, it runs all the way through your API tests, and the results flow back to some kind of centralized repository with automated reports that tell you whether you're going in the right direction or the wrong direction.
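The first dimension, a small number of test cases fed by many rows of data, can be sketched like this. The embedded CSV and the `check_status` helper are hypothetical; in practice the data would come from an external file or database, and the helper would be a real API call.

```python
import csv
import io

# Hypothetical test data: in a real suite this would be an external CSV
# or database feeding thousands of rows into a handful of test cases.
TEST_DATA = """user_id,country,expected_status
101,US,200
102,DE,200
999,XX,404
"""

def check_status(user_id, country):
    """Stand-in for a real API call; returns a canned status code."""
    return 200 if country in {"US", "DE"} else 404

def run_data_driven():
    """One test case, many data rows: returns (passed, failed) counts."""
    passed = failed = 0
    for row in csv.DictReader(io.StringIO(TEST_DATA)):
        actual = check_status(row["user_id"], row["country"])
        if actual == int(row["expected_status"]):
            passed += 1
        else:
            failed += 1
    return passed, failed

print(run_data_driven())  # → (3, 0)
```

The point is that adding a thousand more scenarios means adding a thousand data rows, not a thousand test cases.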
It totally makes sense. Okay, thanks a lot. That was very interesting. Those were seven important mistakes that you will want to avoid.
If you are interested in watching the other videos in this series, you can find them in the current playlist. So, just watch the videos and leave us your comments: what do you think about the mistakes, and how are you trying to avoid them?
Thank you, Robert, for this series. That was very useful. I hope we will invite you to our SmartBear talks!
Looking forward to it, thanks! Thanks everybody!
Thank you! Bye!
If you have any questions or suggestions about the mistakes, please leave your comments below this video, and we will be happy to continue our conversation online.