Forum Discussion

Sean_Paterson
Occasional Contributor
17 years ago

[Resolved] Response time differences between soapUI 2.5 beta1 and soapUI 3.0.1

Hi, I have been running a load test in soapUI 2.5 beta1 and have now upgraded to soapUI 3.0.1.  My colleague and I have both noticed a huge difference in the response times now recorded when executing a test.  We are using the same test script, the same HTTP settings in 'Preferences', and the same load test parameters against the same target software in the same test environment; in short, the only difference is the version of soapUI used.  How can this be?  I would appreciate any helpful comments.

The set of results (in milliseconds) below helps illustrate the problem when running a simple load test with 10 virtual users for a 60s period:

         soapUI 2.5 beta 1    soapUI 3.0.1
min                    147            4983
max                   6865           24222
avg                  376.1           12821
cnt                    511              46
bytes           15,704,167       1,413,864

16 Replies

  • Hi Sean,

    Great, thanks for this, it certainly looks like you have nailed something down. I'll check it out and get back to you accordingly.

    regards!

    /Ole
    eviware.com
  • Hi again,

    the upcoming nightly build will have a fix for this; hopefully you will get the desired performance. Please let us know!

    best regards,

    /Ole
    eviware.com
  • Hi Sean,

    I have one more question for you: when doing the above measurements on response times, exactly how/where did you get the displayed values? (I'm curious, since the script-evaluation time shouldn't have been included in the response times measured by soapUI.)

    thanks for your reply,

    regards!

    /Ole
    eviware.com
  • Sean_Paterson
    Occasional Contributor
    Hi Ole,

    When running my script, I get the average execution time by running a load test (simple strategy, 10 threads, test delay 1000, 60s duration).  Upon completion of the test I report the test case average from the "avg" column in the load test table, so whatever is included in that metric is what I am reporting.  When comparing the soapUI 2.5 beta 1 results against the 3.0.1 results I am using the same metric, so I am comparing like with like, which indicates an issue to me.  The evidence that leads me to suggest it is related to Groovy script execution is the large time difference between two versions of the same stock script: one with 100 hard-coded number values, the other with 100 Groovy-script-generated random values (roughly along the lines of the sketch below).
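
    For context, the Groovy script test step does something roughly like the following (an illustrative sketch only; the property name "randomValues" and the value range are made up here, not taken from my actual project):

        // Groovy script test step (sketch): generate 100 random numeric values and
        // expose them as a test case property so the following SOAP request can
        // pick them up via property expansion, e.g. ${#TestCase#randomValues}.
        def rnd = new Random()
        def values = (1..100).collect { rnd.nextInt(100000) }
        testRunner.testCase.setPropertyValue("randomValues", values.join(","))
        log.info "Generated ${values.size()} random values"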

    Appreciate your ongoing investigations here.  Is it worth my time installing the nightly build or do you think the issue requires more investigation/fixing?

    Thanks,
    Sean
  • Sean_Paterson
    Occasional Contributor
    Update:

    Curiosity got the better of me and I installed the latest nightly build (2009-09-11-[2]).  I have run and re-run my script with and without the offending Groovy script test step.  The metrics I now see are consistent between the two versions of soapUI.  So it looks like whatever you fixed last night has cured the problem.  I am going to stick with this latest build until the next formal release of soapUI 3.0 becomes available.

    Thanks for your help and efforts.

    Sean
  • Hi Sean,

    thanks for testing the build and helping out here, your feedback has been extremely valuable. Please update to the coming nightly build as well; more memory fixes are in the oven.

    cheers!

    /Ole
    eviware.com