It would be useful to add an extra "Suite Group" or "Suite Category" level to the SoapUI test case hierarchy for organizing test suites. It would be a parent of Test Suite, so you could collapse/expand categories of test suites. This should be an optional level, so that those who just want test suites wouldn't need the extra layer of categories.
Reasoning for its usefulness:
Currently, in each project, we organize tests by test suite and test case. There is no broader category than test suite. When you have multiple APIs that are mostly independent but also interact with each other, it would be handy to have them all in a single project.
However, if each API needs several test suites, then having multiple APIs in a single project isn't great for organizing your test suites, since they all sit in a flat structure at the test suite level. That's one reason for keeping the APIs in different projects: you can expand only the test suites for a given API without seeing a big list of all the other APIs' test suites.
The most common usage for Suite Groups/Categories would likely be to organize test suites by API within the project, but there would certainly be other categories that people would come up with.
When several requests have come in to a ServiceV mock listener and you're looking at them in the transaction log, it would be nice if the tab you selected when viewing a message (e.g. JSON or Raw) remained selected when you click on the next message, instead of always switching back to XML each time.
Please make ReadyAPI stop keeping what is selected in each of the views in sync with each other when switching between views (Projects, SoapUI, Secure, LoadUI, and ServiceV). It's more of a hindrance than a help.
Reasoning why disabling this syncing is desired:
When switching views, what we most often want is for the last thing selected in a particular view to REMAIN selected the next time we come back to that view after switching to another view.
Unfortunately, whenever we switch views and open an item in one, the other views all select the same project behind the scenes. Then when we switch back to the previous view, we have to re-select the item we had left selected and open when we left that view. This gets frustrating when we're going back and forth a lot between views and ReadyAPI keeps changing the focus in each view.
Here is an example:
1. Open a workspace with multiple projects (call them Projects A and B)
2. Go to SoapUI and open Project A and open some test case/steps inside it
3. Go to ServiceV and give focus to a mock listener in Project B (perhaps run it and look at transaction log)
4. Go back to SoapUI and notice that Project B is now selected in the SoapUI Navigator, instead of the item you had previously selected in step 2 above. Any opened items are still open, so it's not too bad... yet
5. Give focus to the opened test case/step in Project A
6. Now go back to ServiceV. Notice that Project A is now selected instead of the mock service you were working in, and since ServiceV doesn't work with tabs, the service isn't even open anymore. You have to go and re-select it again.
If you have to go back and forth a lot like above, it can get annoying.
Even if you undock your SoapUI tests into a different window and then go to ServiceV so that you can see both at the same time, whenever you go to the SoapUI items, ServiceV changes its focus. So there is no workaround.
I've never had any reason to want each of the views to stay synced in what project they have selected.
We have a SoapUI project (which I will call project "X" here) responsible for setting up the test environment prior to running regression tests. Project X takes some time to complete. If a test fails in X, we want to immediately terminate the project with failure. We do NOT want to continue executing tests in the project, as the goal is not to collect information from the tests but rather to set up infrastructure.
Currently Project X continues executing tests (setting up infrastructure) even when a failure has occurred. This wastes valuable lab time.
Request that an option be added to configure the project to immediately terminate on first failure. Default should be false, which is the current behavior.
Immediate termination does not mean messy termination -- SoapUI should update logs appropriately, update any generated JUnit-compatible report.xml to indicate that tests were skipped, release internal resources and terminate gracefully.
In the meantime we will try to approximate this behaviour through other means.
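Until an option exists, the requested behaviour can be sketched as a simple fail-fast runner. This is Python for illustration only — the step names and runner function are hypothetical, not the SoapUI API — but it shows the semantics we're asking for: stop at the first failure and report the remaining steps as skipped rather than running them.

```python
# Hypothetical fail-fast runner sketch (not the SoapUI API).
# Runs setup steps in order, stops at the first failure, and marks the
# remaining steps as SKIPPED instead of executing them.

def run_fail_fast(steps):
    """steps: list of (name, callable) pairs; each callable returns True on success.
    Returns a dict mapping step name to "PASS", "FAIL", or "SKIPPED"."""
    results = {}
    failed = False
    for name, step in steps:
        if failed:
            results[name] = "SKIPPED"   # report skipped steps, don't run them
            continue
        ok = step()
        results[name] = "PASS" if ok else "FAIL"
        if not ok:
            failed = True               # first failure aborts everything after it
    return results
```

The important part is the reporting: a step after a failure is recorded as skipped, so a JUnit-style report can still be generated cleanly, matching the "immediate but not messy termination" point above.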
With the release of ReadyAPI 2.0, the GetData behavior was changed and we are NOT impressed.
It means that any TestSuite, TestCase, etc. with a name longer than 16 characters now looks alike, making it impossible to work efficiently, since the new behavior forces us to mouse over each entry to find the correct TestCase name.
Please at least make the GetData columns resizable, so we can see the names of our TestSuites and TestCases again!
See attached file.
One feature that would be very useful would be to alter the "look" of a test case/test suite in the Navigator to show that it has unsaved changes.
Presently there does not appear to be any discernible difference to let you know there are unsaved changes in a test case or test suite. Unfortunately I have fallen foul of this issue: my machine lost power before I saved, and I didn't appreciate how many unsaved changes I had in flight... and by default the autosave feature isn't enabled.
Thank you in advance.
Hi, it would be nice if Data Sources could be used for the whole project. Right now they are limited to the test case level.
I would like to suggest the development of an FTP/SFTP TestStep that can FTP a generated DataSink file.
That will assist me greatly in automating an entire end to end process without the need for Groovy.
Can we, and if so how, use something like intellisense on a custom groovy library?
So if we have created some utility functions and keep them in a script folder, or library, can we then reference those scripts and their methods intelligently from a groovy test step?
It would be helpful and efficient if there were a simpler way to compare data sources, especially when working with large data sets where multiple calls to the DB/JSON wouldn't be practical.
Scenario: take the results of a JDBC query and put them into a datasource file. Then take the results of a JSON query and put those into a second datasource file. Now compare the two datasource files to make sure the values match.
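The comparison logic in that scenario can be sketched like this (Python for illustration; the file layout — CSV datasource files keyed by a shared column — is an assumption, and this is not an existing ReadyAPI feature):

```python
# Sketch: compare two CSV datasource files row by row, keyed on a shared
# column, and report any values that differ between them.
import csv

def compare_datasources(path_a, path_b, key_column):
    """Return a list of (key, column, value_a, value_b) mismatches.
    A row present in only one file is reported with column "<row>"."""
    def load(path):
        with open(path, newline="") as f:
            return {row[key_column]: row for row in csv.DictReader(f)}
    a, b = load(path_a), load(path_b)
    mismatches = []
    for key in sorted(a.keys() | b.keys()):
        row_a, row_b = a.get(key), b.get(key)
        if row_a is None or row_b is None:
            mismatches.append((key, "<row>", row_a, row_b))  # missing on one side
            continue
        for col in row_a:
            if row_a[col] != row_b.get(col):
                mismatches.append((key, col, row_a[col], row_b.get(col)))
    return mismatches
```

A built-in "Compare DataSources" step with this kind of keyed, per-column diff output would cover the JDBC-vs-JSON scenario above without any scripting.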
It would be nice to have the option to set a generic assertion for all the API requests in the project, or to select which APIs should share the same assertion.
Meaning of generic assertion: the user creates the assertion once and has the option to apply it to other API requests.
Motivation: I have hundreds of API requests, and more than half of them have the same assertion, so I need to copy and paste it one by one.
AFAIK for SoapUI versions up to 1.8.0, the calltestcase step only supports calling test cases within the same project.
We have hundreds of utility test cases in a large project with ~1000 application level test cases that use them. We also have another project containing tests that need to call the same utility test cases. Currently we have to maintain two copies of the utility test cases because the second project cannot call into the first.
Instead, we would like to have one SoapUI project containing the utility test cases, and multiple other projects containing the application level test cases that call into the utility tests. Then we would not need multiple copies of the utility test cases, which is a maintenance headache.
So we'd like to request a SoapUI enhancement to support calling test cases in other projects. That would reduce our maintenance efforts considerably.
Currently, test case run steps don't allow you to add assertions on the values they return, forcing you to create separate, extremely clunky 'Assertion' steps to verify that the returned data is what you expect when running flexible test cases.
e.g. say you want to run a test case to determine the status of something, and the status is returned. That status may differ depending on the test case. You should be able to assert on the returned parameters INSIDE that test case run step.
It would be great if we could apply the following assertions to requests:
The documentation around these assertions is a little contradictory: it currently states "Asserts that the request and response messages are compliant with a swagger definition", however the implementation seems to be limited to responses only.
It's quite a normal expectation that the request body for operations like PUT, PATCH, and POST could be validated for compliance. The same goes for other operations, query parameters, headers, etc.
We are using ServiceV for API sandboxing, and ideally we could act as the API provider and easily validate requests and give appropriate responses as defined in the swagger definitions, without having to script all the validations by hand.
Any thoughts on this? Can you consider this enhancement? I am convinced that any customer offering REST APIs leveraging Swagger / OpenAPI would assume such capabilities are possible for requests as well as responses.
When running DataSource Loops, provide an option to display each iteration's data point in the transaction logs, including for failed test steps within the data source loop. As of now, if one of the iterations in a Data Source Loop fails, the JUnit result just shows the failed TestCase (the one containing the Data Source Loop). It would be good to know which data point in the loop failed and caused the Test Case to fail. This would be huge for debugging test failures quickly.
I would like to suggest the functionality to add a generated datasink as an attachment to a SendMail TestStep.
Currently it doesn't seem to be supported.
This will assist me in automating an entire end to end process without using Groovy.
Thank you very much.
Requesting an enhancement to repeat any HTTP/REST request n times, with a specified delay between trials, until it passes its assertions.
We are facing a problem in our testing while using two web interfaces. The first REST service sends a request to a second HTTP/REST service, which can take some time to respond (anywhere from 60 to 300 seconds — we can't be sure). With a hard-coded delay, testing was very tough and wasted a lot of time.
This feature would save a lot of time and effort. I know this can be done through a conditional step, but that involves a lot of work for each step.
BTW, the basic behavior of any HTTP/REST request should not change by default; it should behave as it does now. One more thing: this should be a test-step-level feature.
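The requested retry semantics can be sketched as follows (Python for illustration; `send_request` and `assertion` are placeholders for the real request call and assertion check, not SoapUI APIs):

```python
# Sketch: re-send a request up to max_tries times, with a fixed delay
# between trials, until its assertion passes.
import time

def retry_until_pass(send_request, assertion, max_tries, delay_seconds):
    """Return (passed, tries_used). Re-sends the request until the assertion
    passes or max_tries is exhausted; waits delay_seconds between trials."""
    for attempt in range(1, max_tries + 1):
        response = send_request()
        if assertion(response):
            return True, attempt        # stop as soon as the assertion passes
        if attempt < max_tries:
            time.sleep(delay_seconds)   # wait before the next trial
    return False, max_tries
```

With n and the delay exposed as test-step settings, the slow-second-service scenario above becomes a single step instead of a hand-built conditional loop.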
In SoapUI 4.x, the navigation pane used to remember its last saved width between sessions. However, Ready! API does not remember this sizing information--every time you restart the app, the panes are back to their default sizes.
Please enhance Ready! API so that all of the panes and view windows remember their last sizes. Everything is resizable, but it's a waste of time to have to keep manually re-sizing things because the application doesn't remember the settings.
Test cases have options assigned to them, which define if the test should abort as soon as a step is on error; if the HTTP session should be maintained; etc. Those options can be found here: http://readyapi.smartbear.com/structure/cases/options/basic
As a developer who writes a lot of tests, I'd like to be able to define the default value of those options once so I don't have to edit each newly created TC.
For example, almost all my test cases need to have the "abort test if an error occurs" option disabled. Having to edit this option manually each time is pretty cumbersome, and easy to forget.
In the ReadyAPI ServiceV right-hand viewing/editing pane, a number of script nodes have a script editing area that cannot be resized vertically. Only 5 lines of script are visible, when the area should be able to grow as desired. The Start Script, Stop Script, OnRequest Script, and AfterRequest Script areas cannot be grown beyond 5 lines, which makes scrolling through long scripts take a while.