Hi Ali,
of course! Here is a list:
1) Add an option to recalculate statistics with every teststep
-> Currently statistics are only recalculated when a testcase finishes, which can be "annoying" with data-driven testcases that don't finish until all their data has been processed.
2) Add a "Runs per Thread" limit option
-> The current "Total Runs" option is initially confusing for many users; a per-thread limit would be more in line with other tools.
3) Change the gathering of statistics to be configurable as follows:
- It will be possible to add an arbitrary number of "collectors" to a load-test, for example a "Save raw data to DB" collector, or collectors corresponding to the current diagrams and error log (which will be improved and refactored into default collectors). This will remove the load-test engine's current memory consumption (or at least make it configurable), and also provide a foundation for custom collectors and printable report content (the same collectors will be used to create reports).
-> So, for example, your previous request to improve the LoadTest assertion reports would fit within the "LoadTest Errors Collector", which would include more detailed information on which assertions resulted in which errors.
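To make the collector idea above more concrete, here is a minimal sketch (my own illustration, not actual soapUI code; the interface and class names are hypothetical): each collector receives every completed sample from the load-test engine and decides for itself what to keep, so a statistics collector can hold only aggregates while a raw-data collector could stream everything to a database.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical collector contract: the engine pushes each completed
// teststep sample to every registered collector.
interface LoadTestCollector {
    void onSample(String testStep, long timeTakenMs, boolean failed);
    String report();
}

// A minimal statistics collector, analogous to the current diagrams:
// it keeps only running aggregates, so memory use stays constant
// regardless of how many samples the load-test produces.
class StatsCollector implements LoadTestCollector {
    private long count = 0, failures = 0, totalMs = 0;

    public void onSample(String testStep, long timeTakenMs, boolean failed) {
        count++;
        totalMs += timeTakenMs;
        if (failed) failures++;
    }

    public String report() {
        long avg = count == 0 ? 0 : totalMs / count;
        return "samples=" + count + " failures=" + failures + " avgMs=" + avg;
    }
}

public class CollectorDemo {
    public static void main(String[] args) {
        List<LoadTestCollector> collectors = new ArrayList<>();
        collectors.add(new StatsCollector());
        // The engine would broadcast each finished teststep like this:
        for (LoadTestCollector c : collectors) {
            c.onSample("Login", 120, false);
            c.onSample("Search", 80, true);
        }
        for (LoadTestCollector c : collectors) {
            System.out.println(c.report());
        }
    }
}
```

The same interface could back the "LoadTest Errors Collector" (recording which assertion produced which error) or a report generator, since reports would just read each collector's accumulated state.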
4) Add the possibility to save loadtest results (and testcase results), so you can monitor and compare results/improvements over time
5) Exporting of statistics will be handled via the collectors mentioned above
6) In the long run we would like to add more "enterprise-like" loadtesting functionality, such as distributed load-testing and collection of system statistics (memory consumption, etc.) during loadtests. Currently this can be partially achieved with the PushToTest TestMaker framework, which has integrated support for soapUI tests (check out http://www.pushtotest.com/).
Please don't hesitate to suggest more ideas and/or challenge the above!
Kind regards,
/Ole
eviware.com