API Testing Mistake #6: Analyzing Test Reports Separately - Robert Schneider

Analyzing reports is important. Analyzing all of your reports together, as one picture, is even more important: it shows you the real state of your testing process and helps you build high-quality software. Watch the new video in the 7 Common API Testing Mistakes series with Robert Schneider to learn more about this topic:

 

Robert Schneider

Robert D. Schneider is a Silicon Valley–based technology consultant and author, and managing partner at WiseClouds – a provider of training and professional services for NoSQL technologies and high performance APIs. He has supplied distributed computing design/testing, database optimization, and other technical expertise to Global 500 corporations and government agencies around the world. Clients have included Amazon.com, JP Morgan Chase & Co, VISA, Pearson Education, S.W.I.F.T., and the governments of the United States, Brazil, Malaysia, Mexico, Australia, and the United Kingdom.

 

Robert has written eight books and numerous articles on database platforms and other complex topics such as cloud computing, graph databases, business intelligence, security, and microservices. He is also a frequent organizer and presenter at major international industry events. Robert blogs at rdschneider.com.

 

Watch the previous videos in the 7 Common API Testing Mistakes series:

  1. Focus on the Most Typical Messages
  2. Using Hard-Coded Test Data
  3. Shielding QA Team From End Users
  4. Separating Load Testing from Functional Testing
  5. Testing Only Positive Scenarios

Transcript:

Tanya

Hi Rob! What mistake are we going to talk about today?

 

Rob

This one is basically always testing in isolation: never having any view of the history of your testing, no historical viewpoint. And this is another very common thing, actually one of the most common ones. You go to a customer site and say, ‘So, tell me how your API tests have been going over the past six months or a year?’, and they say, ‘I don't know, they seem to be getting better. We don't really know.’ And it's unfortunate, because when you don't know your history, you don't know if you're getting any better or any worse. A lot of the time it's not that people don't want to know these things. But, again, as with a lot of the other things we've talked about, people are overworked, and underpaid of course, and they don't keep track of these things because, truthfully, they don't see an immediate benefit in it. And that's a very unfortunate thing. I've got some ideas about what you can do to make that better.

 

So, first of all, if you're using a tool like ReadyAPI or SoapUI, these tools have some very nice reporting engines built into them. In SoapUI, you have Jasper, you can use JUnit, and there are some other ones as well. When you do that, you can produce very nice-looking reports that you can give to your colleagues, to the developers, to management and say, ‘Look, here's last week's report: we had this many test cases fail. This week we had half that number fail, so we must be getting better.’ That's certainly something you can do whether or not you're even using SmartBear’s technology; most modern tools have some sort of integration with a reporting engine. That's the first thing I would say could be done.
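As a concrete illustration of this reporting idea, here is a minimal Python sketch that summarizes a JUnit-style XML report exported by a test runner. The reports/week-42 folder is hypothetical, and the script assumes the common JUnit XML schema rather than any particular product's format.

```python
# Minimal sketch: summarize a JUnit-style XML report exported from an API test run.
# Assumes the common JUnit schema (<testsuite tests=".." failures=".." errors="..">).
import xml.etree.ElementTree as ET
from pathlib import Path

def summarize_junit_report(report_path) -> dict:
    """Return total and failed test counts from a single JUnit XML report."""
    root = ET.parse(report_path).getroot()
    # Some runners wrap suites in <testsuites>; handle both layouts.
    suites = root.iter("testsuite") if root.tag == "testsuites" else [root]
    total = failed = 0
    for suite in suites:
        total += int(suite.get("tests", 0))
        failed += int(suite.get("failures", 0)) + int(suite.get("errors", 0))
    return {"total": total, "failed": failed}

if __name__ == "__main__":
    # Hypothetical location of last week's exported reports.
    for report in Path("reports/week-42").glob("*.xml"):
        stats = summarize_junit_report(report)
        print(f"{report.name}: {stats['failed']} of {stats['total']} test cases failed")
```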

 

I’ve got some other ideas, too. Another one would be: if you don't want to use the reporting engine of the product in question, a lot of times you can export these results into a centralized database and then use business intelligence tools, open-source or otherwise, to produce some really interesting analytics off that. Again, that gives you a much better idea about whether you're making progress, holding pace, or falling behind.
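To illustrate the export-to-a-database idea, here is a minimal sketch that records per-run summaries in a SQLite table that a BI tool (or a plain SQL query) could chart over time. The table and column names are invented for the example and are not part of any product's schema.

```python
# Minimal sketch: push per-run summaries into a central SQLite database
# so pass rates can be charted over time. Schema is illustrative only.
import sqlite3
from datetime import date

def record_run(db_path: str, suite: str, total: int, failed: int, run_date=None) -> None:
    """Append one test run's summary to the shared history table."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS api_test_runs (
               run_date TEXT, suite TEXT, total INTEGER, failed INTEGER)"""
    )
    conn.execute(
        "INSERT INTO api_test_runs VALUES (?, ?, ?, ?)",
        (run_date or date.today().isoformat(), suite, total, failed),
    )
    conn.commit()
    conn.close()

# Example: store last night's results for a hypothetical "orders-api" suite.
record_run("test_history.db", "orders-api", total=150, failed=7)
```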

 

Tanya

And I suppose this is something where some kind of AI can help: analyze the results, compare the results we had a month ago with the ones we have at the moment, and suggest something.

 

Rob

Well, that's actually a great point, Tanya. A lot of modern AI is great at pattern recognition. People are not so good at pattern recognition, but AI and analytics are really good at it, so they could look into your results. You could train it to look at your results and make some suggestions, and it's definitely something worth doing. Especially if you think about some of the other suggestions we've been making in this series, about using lots of generated data or realistic data, whatever it might be: this is a great way of taking that massive volume of data you're feeding into your API tests and using automated tools to look at it and say, ‘Oh, we seem to have a trend here. When we're sending in this kind of information inbound, we're getting errors on the outbound.’ That might be very hard to spot with your own eyes, but using automation for that could be really helpful.
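As a rough sketch of the kind of pattern recognition Rob describes, the snippet below flags input categories whose error rate sits well above the overall rate. A real setup might use a trained model or an analytics platform; the categories and log data here are invented purely for illustration.

```python
# Minimal sketch: flag request categories whose error rate is far above average,
# a very simple stand-in for the automated pattern-spotting discussed above.
from collections import defaultdict

def flag_suspicious_inputs(results, threshold: float = 2.0):
    """results: (input_category, passed) pairs; return categories erring far above average."""
    totals, errors = defaultdict(int), defaultdict(int)
    for category, passed in results:
        totals[category] += 1
        if not passed:
            errors[category] += 1
    overall = sum(errors.values()) / max(sum(totals.values()), 1)
    return [
        c for c in totals
        if totals[c] and (errors[c] / totals[c]) > threshold * max(overall, 1e-9)
    ]

# Example: payloads with a missing customer ID fail far more often than average.
log = (
    [("missing_customer_id", False)] * 8
    + [("missing_customer_id", True)] * 2
    + [("valid_payload", True)] * 90
)
print(flag_suspicious_inputs(log))  # ['missing_customer_id']
```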

 

Tanya

What I wanted to ask about are the levels of the reports I get. For example, I can get a report about one API. It can be quite technical, and in that case one person should review it. But we can also get a report covering many APIs, and that will show different data. Does it make sense to look at them together as one report, or should we keep them separate?

 

Rob

Great question. It really depends on what you're trying to achieve. What I was really talking about was reviewing these tests in isolation: you're not looking at your history, never mind looking at what the tests look like across your organization. So, to your point, there are different levels of granularity for these reports. You could say, ‘I want an organization-wide view of things: last week, we ran a hundred tests that exercised a hundred and fifty thousand different combinations of data for our API, whatever it might be, across the entire environment.’ This is really where you start pulling data in from multiple tests and feeding it into a repository, with AI and analytics looking at it. Once you've got all that information gathered in one repository, you can see things like, ‘We're not doing as much testing as we should: we only tested a hundred different scenarios, and we should be testing tens of thousands.’ If you look at these things just as individual one-off time slices, with no history of what things looked like in the past, you're not going to see that degradation of performance or that increase in the number of bugs. You only see it when you look at it over the course of time.
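To show the history-over-time view Rob describes, here is a small sketch that reads the illustrative api_test_runs table from the earlier example and reports the weeks where the organization-wide pass rate dropped. The schema and the week-by-week comparison are assumptions made for the example, not a prescribed approach.

```python
# Minimal sketch: compare organization-wide pass rates week over week using the
# illustrative api_test_runs table, to surface degradation that one-off report
# snapshots would hide.
import sqlite3

def weekly_pass_rates(db_path: str):
    """Return (week, pass_rate) rows aggregated across all suites."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        """SELECT strftime('%Y-%W', run_date) AS week,
                  1.0 - SUM(failed) * 1.0 / SUM(total) AS pass_rate
           FROM api_test_runs GROUP BY week ORDER BY week"""
    ).fetchall()
    conn.close()
    return rows

rates = weekly_pass_rates("test_history.db")
for (week, rate), (_, prev) in zip(rates[1:], rates):
    if rate < prev:  # pass rate dropped versus the previous week
        print(f"Week {week}: pass rate fell to {rate:.1%} (was {prev:.1%})")
```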

 

Tanya

Great! Thanks for explaining this mistake to us. Community, if you're interested in watching videos on the other mistakes, you can find them in the current playlist. Watch the videos and leave us your comments; we will be happy to talk with you online!

Thank you very much!

 

Rob

Thank you! See you next time!

Next time, we'll review the next mistake. If you have any questions or suggestions about the mistakes, please leave your comments under this video, and we will be happy to continue our conversation online.

 

 
