
Generating Test Results for a Data Driven Test

SuperSingh
Contributor


I have my project in SoapUI Pro in the following format.

Project
   Suite
      DataSource
      TestCase
      DataSink
      DataLoop

I want to generate reports for each run driven by the DataSource (e.g. if the DataSource has 50 rows, the test case runs 50 times), but I don't see those results in what gets generated.

Can someone please help me out with this?

 

Thanks!

11 Replies
richie
Community Hero

Hi @SuperSingh

 

Do you need to use the Reporting function, or do you just need to archive the requests and responses for your tests?

 

I feel like I keep answering posts with the same answer here, but there's an event handler with a bit of Groovy that records the requests, responses, and results for each test step, in each test case, in each test suite of a specific project. It does record looped tests (I use it myself to record test evidence). Would that satisfy what you need, or do you want to rely on the Reporting option?
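For reference, the kind of event handler described here is typically a `TestRunListener.afterStep` handler in ReadyAPI. The sketch below is a hypothetical example, not the poster's actual script; the output directory, the step name "DataSource", and the use of its `Row` property are assumptions:

```groovy
// Hypothetical TestRunListener.afterStep event handler. ReadyAPI injects
// testStepResult, testRunner, context and log into this script's scope.
def dir = new File(System.getProperty('user.home'), 'test-evidence')
dir.mkdirs()

// Current DataSource row, so each loop iteration gets distinct files
// ("DataSource" is the step name assumed from the project layout above).
def row  = context.expand('${DataSource#Row}')
def base = "${testRunner.testCase.name}_${testStepResult.testStep.name}_row${row}"

// Record the step's outcome for every step in every iteration.
new File(dir, "${base}_status.txt").text = testStepResult.status.toString()

// For request-type steps, archive the raw traffic as well.
def step = testStepResult.testStep
if (step.hasProperty('testRequest')) {
    new File(dir, "${base}_request.txt").text  = step.testRequest.requestContent ?: ''
    new File(dir, "${base}_response.txt").text = step.testRequest.response?.contentAsString ?: ''
}
```

Because the row number is baked into the file names, a 50-row DataSource produces 50 sets of evidence files rather than the last iteration overwriting the earlier ones.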

 

I can't help with the Reporting function - but I can with the event handler option.

 

Cheers,

 

richie

If this helped answer the post, could you please mark it as 'solved'? Also, consider whether the title of your post is still relevant; if the post is solved, it might make sense to update the Subject header field to something more descriptive. This will help people searching for similar problems. Ta
HimanshuTayal
Community Hero

Hi @SuperSingh,

 

If you are using ReadyAPI, you can click "Transaction Log" to see a detailed view of how many times your request was executed, and how many times it passed and failed.

 


 

Click "Accept as Solution" if my answer has helped, and remember to give "kudos" 🙂

 

Thanks and Regards,

Himanshu Tayal



Thanks @richie and @HimanshuTayal for your inputs.

@richie - I am specifically looking for reports, not archiving the request/response.

@HimanshuTayal - I need the output of the test runs in an independent file, probably in PDF format, returning the status of all 50 tests that were executed from the Excel data.

 

Thanks!

Hi @SuperSingh,

 

In that case you can refer to the link below; it describes launching the TestRunner GUI, which lets you save the execution report in your desired format.

 

TestRunner GUI
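The same runner can also be scripted from the command line, which is handy for unattended runs. A hedged sketch, assuming a ReadyAPI installation; the paths, suite/case names, and report name are placeholders, and the flag set should be verified against the TestRunner documentation for your version:

```shell
# Run one suite/case and emit a PDF report into ./reports
#   -s / -c  select the test suite and test case (placeholders)
#   -f       output folder for results
#   -R       report to generate, -F its format (PDF, XLS, HTML, ...)
#   -r       print a summary report to the console
"/path/to/ReadyAPI/bin/testrunner.sh" \
    -s "Suite" -c "TestCase" \
    -f ./reports -R "TestCase Report" -F PDF -r \
    /path/to/project.xml
```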

 

Click "Accept as Solution" if my answer has helped, and remember to give "kudos" 🙂

 

Thanks and Regards,

Himanshu Tayal


sas
Occasional Contributor

I have a similar use case. Wondering what the resolution was?

@sas 

I moved to a different project and am working in a different arena now. I wasn't able to get my output into a PDF exactly as I wanted, but I did have the output in an Excel file. Generating the file was part of my test case's Setup Script.

Unfortunately, I don't have code snippets to share with you.
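Since no snippet survived in the thread, here is a hypothetical sketch of the approach described: a test case Setup Script that creates a results file and publishes its path through a property, so a DataSink step or TearDown script can append a row per iteration. All names are assumptions, and this writes CSV rather than a native Excel workbook:

```groovy
// Hypothetical Setup Script (ReadyAPI injects testCase, context and log).
// Creates a timestamped CSV results file and stores its path in a
// test-case property so downstream steps can find and append to it.
def stamp = new Date().format('yyyyMMdd_HHmmss')
def out = new File(System.getProperty('user.home'), "run-results_${stamp}.csv")
out.text = 'Row,TestCase,Status\n'   // header line for the per-iteration rows
testCase.setPropertyValue('resultsFile', out.absolutePath)
log.info "Results will be written to ${out.absolutePath}"
```

A DataSink inside the loop could then target `${#TestCase#resultsFile}` to collect one status line per DataSource row.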

 

Thanks,

SuperSingh

nmrao
Champion Level 3

A data-driven test is considered a single test case, hence a single test in the report. That is by design.

Just think: if the test failed because of one row of the data, would you rerun the test for only that particular row, or for the entire data source?


Regards,
Rao.
sas
Occasional Contributor

I get that, but I should be able to customize the report to show which data is being tested in each iteration from the data source.

In that case, the question is how can I customize the report to show the data element on the reports?
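One low-tech workaround (not a report customization as such) is to log the current row's values from a Groovy Script step inside the loop, so each iteration's data at least shows up alongside the results in the log. A sketch, where the step name "DataSource" and the column name "Username" are assumptions taken from a typical project layout:

```groovy
// Hypothetical Groovy Script step placed inside the data loop.
// ReadyAPI provides `context` and `log`; the step and column names
// below are assumptions, not part of the original thread.
def row   = context.expand('${DataSource#Row}')
def value = context.expand('${DataSource#Username}')
log.info "Iteration ${row}: Username=${value}"
```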

sas
Occasional Contributor

@nmrao I want to add to my earlier response... 

I think you can write a single test case in SoapUI that is highly parameterized and templatized, and that can use data from a data source to run different types of tests. Essentially, you are writing a test engine and fueling it with data. That's the power of data-driven testing.

Now, one can say that if one row fails, then the test case has to be reported as failed for the whole data source. I think end users should be able to decide how to deal with the failed rows in the data source. Maybe there is flexibility in generating the test data for the data source... a few thoughts...
