The test runner currently supports the -T option, which lets users specify which tagged test cases to run. However, there is no flag that lets users specify which tags to exclude from a test run.


For example, if I tagged test cases with "prototype", I could run only the tests with that tag using -T"TestCase prototype". However, if I wanted to run all tests except those tagged "prototype", there is currently no way to do that. Perhaps a new flag, or support for a ! prefix in the existing one, would work:



-T"TestCase !prototype" <----- the ! prefix specifies not to run tests tagged "prototype"



This could even be expanded further so that the -T option supports both inclusive and exclusive tags.

Ex: -T"TestCase prototype !ignorethese" <---- run all tests tagged "prototype", but skip those tagged "ignorethese"
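The proposed include/exclude semantics could be sketched as follows. This is a hypothetical illustration in Python; the function name and the ! syntax are just this proposal's suggestion, not anything ReadyAPI actually implements:

```python
# Sketch of the proposed tag filter: plain tags are inclusive,
# "!"-prefixed tags are exclusive. Purely illustrative.

def matches_tag_filter(test_tags, filter_expr):
    """Return True if a test carrying test_tags should run under filter_expr."""
    parts = filter_expr.split()
    include = {p for p in parts if not p.startswith("!")}
    exclude = {p[1:] for p in parts if p.startswith("!")}
    tags = set(test_tags)
    if tags & exclude:          # any excluded tag disqualifies the test
        return False
    return not include or bool(tags & include)  # no include list = run all

# Run everything except tests tagged "prototype":
print(matches_tag_filter(["smoke"], "!prototype"))      # True
print(matches_tag_filter(["prototype"], "!prototype"))  # False
```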

Allow usage of "legacy" Get Data dialog or improve current one

Status: New Idea
by krenevla 09-25-2017 07:48 AM - edited 09-25-2017 09:26 AM



In SoapUI 5.1.2 (the latest legacy version), the Get Data dialog was very quick to use because it opened with focus on the location from which it was called, so navigating to properties in the same test case (for example) was very fast. In the current Ready API, the Get Data dialog starts at the Project level, and the user must navigate through the project structure down to the level from which the dialog was called (usually a test case). This is many times slower: when the project is big, the user must scroll through the content or remember the path in order to use the filter. In the legacy dialog I could reach a property in three clicks; now it takes around ten clicks plus a lot of scrolling and searching for my current path.


You can compare how it looks in legacy SoapUI and in the new Ready API (see the attached NEW.PNG and OLD.PNG). In fact, this is the only reason I am still on the legacy system: working in it is much faster.


Please add an option in Preferences to use the legacy Get Data dialog, OR add a button that navigates to the current area and pre-fills the Project, Test Suite, and Test Case columns.


Thank You,


br, Vladan



Enhancement Request:

It would be useful to add an extra "Suite Group" or "Suite Categories" level to the SoapUI test case hierarchy for organizing test suites. This would be a parent of Test Suite, letting you collapse/expand categories of test suites. It should be an optional level, so that those who just want test suites wouldn't need the extra layer of categories.


Reasoning for its usefulness:

Currently, in each project, we organize tests by test suite and test case. There is no broader category than test suite. In a situation where you have multiple APIs that are mostly independent but do interact with each other, it would be handy to have them all in a single project.


However, if each API needs several test suites of its own, then having multiple APIs in a single project isn't great for organizing your test suites, since they all sit in a flat structure at the test suite level. That's one reason for keeping the APIs in different projects: you can expand only the test suites for a given API without seeing a big list of every other API's test suites.


The most common use for Suite Groups/Categories would likely be organizing test suites by API within the project, but there would certainly be other groupings that people would come up with.

ReadyApi 2.0 - GetData resizeable

Status: New Idea
by kber on 05-10-2017 01:59 AM

With the release of ReadyApi 2.0, the GetData behavior was changed, and we are NOT impressed:


  1. Font size is fixed and cannot be changed
  2. Column width is fixed and cannot be changed


This means that ANY TestSuite, TestCase, etc. with a name longer than 16 characters looks alike, making it impossible to be efficient, as the new behavior forces us to mouse over each entry just to find the correct TestCase name.


Please at least make the GetData columns resizable, so we can see the names of our TestSuites and TestCases again!


See attached file.

When I close ReadyAPI, I'm prompted with a dialog box asking if I really want to exit. This box appears regardless of whether I've made any changes, and even with the auto-save project option enabled (which I have).


I request that this prompt be removed, or at least made optional via a user preference. It's silly to have to confirm an action I am deliberately taking, and an annoyance when using ReadyAPI constantly. Few applications work this way, and certainly not ones that can automatically save user changes, which is the main reason you would want a confirmation on exit. If users close it by accident, there is an easy solution: just start ReadyAPI back up.

When several requests have come in to a ServiceV mock listener and you're looking at them in the transaction log, it would be nice if the tab you selected when viewing a message (e.g. JSON or Raw) remained selected when you clicked on the next message, instead of always switching back to XML each time you click the next message.

Please make ReadyAPI stop keeping the selection in each of the views (Projects, SoapUI, Secure, LoadUI, and ServiceV) in sync when switching between them. It's more of a hindrance than a help.


Reasoning why disabling this syncing is desired:


When switching views, what we most often want is for the last thing selected in a particular view to REMAIN selected the next time we come back to that view after switching to another view.


Unfortunately, whenever we switch views and open an item, the other views all select the same project behind the scenes. Then, when we switch back to the previous view, we have to go and re-select the item we had selected and open when we left. This gets frustrating when we're going back and forth a lot between views and ReadyAPI keeps changing the focus in each view.

Here is an example:
1. Open a workspace with multiple projects (call them Projects A and B)
2. Go to SoapUI and open Project A and open some test case/steps inside it
3. Go to ServiceV and give focus to a mock listener in Project B (perhaps run it and look at transaction log)
4. Go back to SoapUI and notice that Project B is now selected in the Navigator, instead of the item you had previously selected in step 2. Any opened items are still open, so it's not too bad... yet
5. Give focus to the opened test case/step in Project A
6. Now go back to ServiceV. Notice that Project A is now selected instead of the mock service you were working in, and since ServiceV doesn't work with tabs, the service isn't even open anymore. You have to go and re-select it again.


If you have to go back and forth a lot like above, it can get annoying.


Even if you undock your SoapUI tests into a different window and then go to ServiceV so that you can see both at the same time, ServiceV changes its focus whenever you go to the SoapUI items. So there is no workaround.


I've never had any reason to want each of the views to stay synced in what project they have selected.

One feature that would be very useful is altering the "look" of a test case/test suite in the Navigator to show that it has unsaved changes.


Presently there does not appear to be any discernible difference to let you know there are unsaved changes in a test case or test suite. Unfortunately I have fallen foul of this: my machine lost power before I saved, and I didn't appreciate how many unsaved changes I had in flight... and by default the autosave feature isn't enabled.


Thank you in advance.


We have a SoapUI project (which I will call project "X" here) responsible for setting up the test environment prior to running regression tests. Project X takes some time to complete. If a test fails in X, we want to terminate the project immediately with failure. We do NOT want to continue executing tests in the project, as the goal is not to collect information from the tests but rather to set up infrastructure.


Currently, Project X continues executing tests (setting up infrastructure) even after a failure has occurred. This wastes valuable lab time.


We request that an option be added to configure the project to terminate immediately on the first failure. The default should be false, which preserves the current behavior.


Immediate termination does not mean messy termination: SoapUI should update logs appropriately, update any generated JUnit-compatible report.xml to indicate that tests were skipped, release internal resources, and terminate gracefully.


In the meantime we will try to approximate this behaviour through other means.
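The requested runner behavior could be sketched like this. This is a minimal Python illustration of the proposed option; the function and flag names are made up, not part of SoapUI:

```python
# Hypothetical sketch of the requested "abort on first failure" option.
# run_test stands in for executing one SoapUI test case.

def run_project(test_cases, abort_on_first_failure=False):
    results = {}
    aborted = False
    for name, run_test in test_cases:
        if aborted:
            results[name] = "skipped"   # would be reported as skipped in report.xml
            continue
        passed = run_test()
        results[name] = "passed" if passed else "failed"
        if not passed and abort_on_first_failure:
            aborted = True              # stop executing the remaining tests
    return results

cases = [("setup-db", lambda: True), ("deploy", lambda: False), ("smoke", lambda: True)]
print(run_project(cases, abort_on_first_failure=True))
# {'setup-db': 'passed', 'deploy': 'failed', 'smoke': 'skipped'}
```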



Intellisense on custom groovy library

Status: New Idea
by runzTH on 01-29-2016 03:24 PM

Can we, and if so how, use something like intellisense on a custom groovy library?


So if we have created some utility functions and keep them in a script folder or library, can we then reference those scripts and their methods intelligently from a Groovy test step?

Hi, it would be nice if Data Sources could be used for the whole project. Right now they are limited to the test case level.


Status: New Idea
by goliontus on 03-06-2017 12:56 AM



I would like to suggest the development of an FTP/SFTP TestStep that lets me FTP a generated DataSink file.


That would assist me greatly in automating an entire end-to-end process without the need for Groovy.


Thank you.





Feature for comparing datasources

Status: New Idea
by kbourdel on 08-23-2016 11:39 AM

It would be helpful and efficient to have a simpler way to compare data sources, especially when working with large data sets where multiple calls to the DB/JSON service wouldn't be practical.


Scenario: take the results of a JDBC query and put them into a data source file. Then take the results of a JSON query and put them into another data source file. Now compare the two data source files to make sure the values match.
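For anyone approximating this today, the comparison itself is straightforward once both result sets are exported to CSV. A rough Python sketch, where the file layout and the choice of key column are assumptions:

```python
import csv

def compare_datasources(path_a, path_b, key_column):
    """Report differences between two CSV data source files, keyed by
    key_column. Illustrative only; real data source formats may differ."""
    def load(path):
        with open(path, newline="") as f:
            return {row[key_column]: row for row in csv.DictReader(f)}
    a, b = load(path_a), load(path_b)
    return {
        "only_in_a": a.keys() - b.keys(),                      # rows missing from b
        "only_in_b": b.keys() - a.keys(),                      # rows missing from a
        "mismatched": {k for k in a.keys() & b.keys() if a[k] != b[k]},
    }
```

A report of empty sets in all three buckets means the two data sources match.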

Generic assertion.

Status: New Idea
by mishka on 03-22-2016 05:35 AM


It would be nice to have the option to set a generic assertion for all the API requests in a project, or to select which APIs should share the same assertion.



Meaning of "generic assertion": the user creates the assertion once and can then apply it to other API requests.

Motivation: I have hundreds of API requests, and more than half of them have the same assertion, so I need to copy and paste it one by one.

AFAIK for SoapUI versions up to 1.8.0, the calltestcase step only supports calling test cases within the same project.


We have hundreds of utility test cases in a large project with ~1000 application-level test cases that use them. We also have another project containing tests that need to call the same utility test cases. Currently we have to maintain two copies of the utility test cases, because the second project cannot call into the first.


Instead, we would like to have one SoapUI project containing the utility test cases, and multiple other projects containing the application-level test cases that call into them. Then we would not need multiple copies of the utility test cases, which are a maintenance headache.


So we'd like to request a SoapUI enhancement to support calling test cases in other projects.  That would reduce our maintenance efforts considerably.

Requesting an enhancement to repeat an HTTP/REST request up to n times, with a specified delay between attempts, until it passes its assertions.


We are facing a problem in our testing involving two web interfaces. The first REST service sends a request to a second HTTP/REST service, which can take anywhere from 60 to 300 seconds to respond. With a hard-coded delay, testing was very difficult and wasted a lot of time.


This feature would save a lot of time and effort. I know this can be done through a conditional step, but that involves a lot of work for each step.


BTW, the basic behavior of an HTTP/REST request should not change by default; it should behave as it does now. One more thing: this should be a test-step-level feature.
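The proposed retry-until-assertions-pass behavior amounts to a loop like the following. This is an illustrative Python sketch; `send_request`, `assert_response`, and the parameter names are hypothetical, not a ReadyAPI API:

```python
import time

def run_with_retry(send_request, assert_response, max_attempts=5, delay_seconds=2.0):
    """Send a request and re-check its assertions until they pass,
    waiting delay_seconds between attempts. Sketch of the proposed option."""
    for attempt in range(1, max_attempts + 1):
        response = send_request()
        try:
            assert_response(response)
            return response            # assertions passed: the step succeeds
        except AssertionError:
            if attempt == max_attempts:
                raise                  # out of attempts: the step fails
            time.sleep(delay_seconds)

# Example: a service that only becomes ready on the third call
calls = iter([503, 503, 200])
def expect_ok(status):
    assert status == 200
resp = run_with_retry(lambda: next(calls), expect_ok, max_attempts=5, delay_seconds=0.0)
print(resp)  # 200
```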

Currently, test case run steps don't allow you to add assertions on the values returned from those steps, resulting in having to create separate, extremely clunky "Assertion" steps to check that the returned data is what you expect when running flexible test cases.


E.g. say you want to run a test case that determines the status of something and returns it. That status may differ depending on the test case. You should be able to assert on the returned parameters INSIDE that test case run step.

Hi There,


It would be great if we could apply the following assertions to requests:

  • JSON Schema Compliance
  • Swagger Compliance Assertion

The documentation around these assertions is a little conflicting: it currently states "Asserts that the request and response messages are compliant with a swagger definition", but the implementation seems to be limited to responses only.


It's quite a normal expectation that the request body for operations like PUT, PATCH, and POST could be validated for compliance. The same goes for other operations, query parameters, headers, etc.


We are using ServiceV for API sandboxing, and ideally we should be able to act like the API provider: easily validating requests and giving appropriate responses as defined in the Swagger definitions, without having to script all the validations by hand.
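The kind of request validation being asked for can be illustrated with a minimal hand-rolled check against a schema fragment. The schema and request below are made-up examples, and this is not how ReadyAPI implements its assertions:

```python
# Minimal sketch of validating a request body against (a fragment of) a
# JSON-schema-like definition, as the proposal asks ServiceV to do for
# PUT/PATCH/POST requests. Purely illustrative.

def validate_request(body, schema):
    """Check required fields and primitive types; return a list of errors."""
    errors = []
    for field in schema.get("required", []):
        if field not in body:
            errors.append(f"missing required field: {field}")
    types = {"string": str, "integer": int, "boolean": bool}
    for field, spec in schema.get("properties", {}).items():
        if field in body and not isinstance(body[field], types[spec["type"]]):
            errors.append(f"{field}: expected {spec['type']}")
    return errors

schema = {"required": ["name"],
          "properties": {"name": {"type": "string"}, "age": {"type": "integer"}}}
print(validate_request({"age": "forty"}, schema))
# ['missing required field: name', 'age: expected integer']
```

A real implementation would of course delegate to the full Swagger/OpenAPI definition rather than a hand-written fragment like this.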


Any thoughts on this? Could you consider this enhancement? I am convinced that any customer offering REST APIs leveraging Swagger / Open API etc. would assume such capabilities are possible for requests as well as responses.


Kind regards,


Show test steps also in JUnit report

Status: New Idea
by moginmo on 03-30-2017 11:29 AM

Provide an option to also list the test steps within each test case in the JUnit report, or provide a way to drill down into each test case if needed; by default it can still show only test case results. This would greatly benefit reporting under a Continuous Integration system.

When running DataSource loops, provide an option to display each iteration's data point in the transaction logs, and provide this information for failed test steps within the loop as well. As of now, if one of the iterations in a DataSource loop fails, the JUnit result just shows the failed test case (the one containing the loop). It would be good to know which data point in the loop failed and caused the test case to fail. This would be a huge help in debugging test failures quickly.
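The requested logging amounts to recording the data row alongside each iteration's result, roughly like this. This is an illustrative Python sketch; the names are hypothetical, not part of ReadyAPI:

```python
# Sketch of the requested behavior: record the data row for each DataSource
# loop iteration, so a failed iteration can be identified in the report.

def run_datasource_loop(rows, run_iteration):
    log = []
    for i, row in enumerate(rows):
        passed = run_iteration(row)
        log.append({"iteration": i, "data": row,
                    "result": "passed" if passed else "failed"})
    return log

rows = [{"id": 1}, {"id": 2}, {"id": 3}]
log = run_datasource_loop(rows, lambda row: row["id"] != 2)
print([e for e in log if e["result"] == "failed"])
# [{'iteration': 1, 'data': {'id': 2}, 'result': 'failed'}]
```

With the failing data point in the log, the report could show exactly which row broke the test case rather than only the test case name.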