Generating Test Results for a Data Driven Test
I have my project in SoapUI Pro in the following format.
Project
Suite
DataSource
TestCase
DataSink
DataLoop
I want to generate reports for each run that is carried out from the DataSource (e.g., if the DataSource has 50 rows, the test case will run 50 times), but I don't see those results being generated.
Can someone please help me out with this?
Thanks !
Hi @SuperSingh
do you need to use the Reporting function or do you just need to archive the requests and responses for your tests?
I feel like I keep answering people's posts with the same answer here - but there's an event handler with a bit of Groovy that records the requests, responses, and results for each test step in each test case in each test suite of a specific project. This does record looped tests (cos I use it myself to record the test evidence). Would this satisfy what you need, or do you want to rely on the Reporting option?
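For anyone looking for a starting point, a minimal sketch of such a handler might look like the following. This is not richie's actual script; the output folder and file naming are assumptions, and in ReadyAPI this would be pasted into a SubmitListener.afterSubmit event handler (Project > Events), where the `submit` and `context` objects are injected by the product:

```groovy
// SubmitListener.afterSubmit event handler (sketch, assumptions noted above).
// Appends each request/response pair to a timestamped evidence file.
def response = submit?.response
if (response != null) {
    // ${projectDir} expands to the folder containing the project file
    def dir = new File(context.expand('${projectDir}') + "/evidence")
    dir.mkdirs()
    def stamp = new Date().format("yyyyMMdd_HHmmss_SSS")
    new File(dir, "step_${stamp}.txt").text =
        "REQUEST:\n"    + (response.requestContent  ?: "") +
        "\n\nRESPONSE:\n" + (response.contentAsString ?: "")
}
```

Because the handler fires on every submit, looped (data-driven) iterations each produce their own file, which is why this approach captures all 50 runs.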
I can't help with the Reporting function - but I can with the event handler option.
Cheers,
richie
Hi @SuperSingh,
If you are using ReadyAPI, you can open the "Transaction Log" to see a detailed view of how many times your request was executed and how many times it passed or failed.
Click "Accept as Solution" if my answer has helped, and remember to give "kudos" 🙂
Thanks and Regards,
Himanshu Tayal
Thanks @richie and @HimanshuTayal for your inputs.
@richie - I am specifically looking for reports, not archiving the request/response.
@HimanshuTayal - I need the output of the test runs in an independent file, probably in PDF format, that returns the status of all 50 tests that were executed from Excel.
Thanks !
Hi @SuperSingh,
In that case you can refer to the link below; it is for the Launch TestRunner GUI, with which you can save the execution report in your desired format.
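For reference, the Launch TestRunner dialog ultimately builds a command line for the ReadyAPI runner, so the same report can be produced headlessly. A sketch of such an invocation follows; the install path, project path, and suite/case/report names are placeholders, not values from this thread:

```shell
# ReadyAPI command-line runner (same options the Launch TestRunner GUI builds).
# -s / -c  select the test suite and test case to run
# -R       names the report to generate, -F sets its format (e.g. PDF)
# -f       sets the folder where reports and results are written
"/path/to/ReadyAPI/bin/testrunner.sh" \
    -s "Suite" -c "TestCase" \
    -R "TestCase Report" -F PDF \
    -f ./reports \
    "/path/to/your-project.xml"
```

Run from a scheduler or CI job, this produces the PDF per execution without opening the GUI.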
Click "Accept as Solution" if my answer has helped, and remember to give "kudos" 🙂
Thanks and Regards,
Himanshu Tayal
I have a similar use case. Wondering what was the resolution?
I moved to a different project and am working in a different area now. I was never able to get my output precisely as a PDF the way I wanted, but I did get the output into an Excel file; generating the file was part of my test Setup Script.
Unfortunately, I don't have code snippets to share with you.
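For anyone landing here later: since the original snippets are gone, here is a rough, untested sketch of one way to do this - a Groovy test step placed as the last step inside the looped test case, appending each iteration's data and status to a CSV file (which Excel opens directly). The DataSource step name and column names are assumptions:

```groovy
// Groovy test step at the end of the looped test case (sketch).
// Appends one CSV row per DataSource iteration: input data + pass/fail.
def resultsFile = new File(context.expand('${projectDir}') + "/results.csv")
if (!resultsFile.exists()) {
    resultsFile << "row,input,status\n"          // header on first iteration
}
// Pull the current row's values from the DataSource step (assumed names)
def row   = context.expand('${DataSource#Row}')
def input = context.expand('${DataSource#Username}')
// Any step result in this iteration with status FAILED marks the row failed
def failed = testRunner.results.any { it.status.toString() == "FAILED" }
resultsFile << "${row},${input},${failed ? 'FAIL' : 'PASS'}\n"
```

With 50 rows in the DataSource, the file ends up with 50 status lines, one per iteration.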
Thanks,
SuperSingh
Just consider: if the test fails because of one row in the data, will you re-run the test for only that particular row or for the entire data source?
Regards,
Rao.
I get that, but I should be able to customize the report to show what data is being tested in each iteration from the data source.
In that case, the question is: how can I customize the report to show the data element?
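One hedged way to surface the current row on a per-iteration basis is a small Groovy step inside the loop that expands the DataSource values into the log and into a test case property, which a custom report template can then read. The DataSource step name and column names below are assumptions:

```groovy
// Groovy test step inside the loop (sketch): make the current
// DataSource row visible per iteration.
def user = context.expand('${DataSource#Username}')
def id   = context.expand('${DataSource#OrderId}')
log.info "Iteration data: user=${user}, orderId=${id}"
// Stash the row where a customized report template can pick it up
testRunner.testCase.setPropertyValue("lastRowData", "${user}/${id}")
```

The log entries appear alongside each iteration's results, so even without a customized template you can trace which row produced which outcome.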
@nmrao I want to add to my earlier response...
I think you can write a single test case in SoapUI that is highly parameterized and templatized and that can use data from a data source to run different types of tests. Essentially, you are writing a test engine and fueling it with data. That's the power of data-driven testing.
Now, one could say that if one test fails, then we have to report the whole test case as failed for the entire data source. I think end users should be able to decide how to deal with the failed tests in the data source. Maybe there is flexibility in generating the test data for the data source... a few thoughts.
