Forum Discussion

AndyHughes
Regular Contributor
14 years ago

LoadUI Limits

Hi

I've finally got our services hosted on some beefy hardware and started load testing. Everything was going great and I was sending in some pretty hefty volumes (I'm talking in excess of 500 req/s), and everything was fine. I've got a setup where I wanted to ramp the request rate up to 1000/s (or as much as it could deal with) over 5 mins, but run the test itself for 10 mins, so the last 5 mins would be at 1000/s.

At around 600/sec I got behaviour where it became clear that the rate would not reach 1000 before the 5 mins was up. It appeared to stall and only increment really slowly. The number of running requests was in the hundreds, and this statistic became quite 'choppy' as requests were dealt with and then built up again. That said, at a rate of 600/sec I kind of expected that, and there were only 2 failures out of a total of around 50,000 requests. So I would be tempted to suggest that the system was actually coping well with the load.

So what could cause LoadUI to fail to ramp up to the desired number of requests within the allotted 5 mins? Is it just a limitation of my PC?

3 Replies

  • AndyHughes
    Regular Contributor
    Interestingly, when I look at the graph of the request rate against time, overlaid with the average response time, there are clear peaks in the average response time which coincide with troughs in the request rate.

    Also, it's clear from the graph that as the response time becomes erratic, so does the request rate: it stops following the straight line you would expect and, consequently, never reaches the desired rate by the end of the test.

    Why would the response time becoming erratic have any effect on the generated request rate?
    (Presuming it is that which is causing it?)
  • Hi,

    Since I don't know exactly what is happening on your server, this is mainly based on my own speculation. I may be wrong about what is actually happening here, but this seems likely:

    The response time increasing causes more and more requests to queue up, which may exhaust the number of available concurrent requests. This number is determined by a few different settings in loadUI, which are there for efficiency reasons. Each Runner component has a Max Concurrent Requests setting (in the Basic tab of the settings dialog), which defaults to 100. Generally, when a Runner reaches this limit it is because the requests are being queued on the server, and sending more requests won't increase the throughput. Instead, the Runner will start queuing requests internally, increasing the Queued counter (as soon as the number of active, or "Running", requests drops below the max, queued requests will be sent out) until it reaches its max (also a setting in the Basic tab). Once this happens the Runner will start discarding requests, which may or may not be counted as failures depending on yet another setting (Count Discarded Requests as Failed, false by default).

    So, what may be happening when you see greatly increased response times is: The server is queueing requests, because it is at full capacity. This causes the number of active connections to reach the maximum, causing the Runner to start internally queuing. The queue quickly fills up, and requests are discarded, leading to a lower overall throughput.

    Aside from the Runner settings, there are two global settings which limit the global thread pool used in loadUI (available in the Workspace settings dialog, under Execution): Max internal threads and Max internal thread queue size. This thread pool is shared among all Components and several other parts of the application, so these settings set a hard limit on the total number of concurrent requests across all Runners.

    It may seem a bit limiting that these limits are in place, but the reality is that increasing the request rate once the server has started queuing requests will almost certainly lower the throughput instead of increasing it, as the server is already at capacity (hence the queuing in the first place).

    Regards,
    Dain
    SmartBear Software
  • AndyHughes
    Regular Contributor
    These are my settings:

    Workspace
    Max internal threads 100000
    Max internal thread queue size 10000

    Concurrent requests setting on the runner is 10000
    Queue size 10000

    The number that are 'running' is only ever in the hundreds and nothing is ever 'queued' or 'discarded'.

    This leads me to believe that these are just loadUI-imposed limits and not at all representative of what's happening on the server. Is this right? The server is maxed out at this point, so I know it's a hardware limit. But it still doesn't explain why the requests on the runner don't increase. Even if the requests are being queued or discarded, either by LoadUI or the server, shouldn't the requests just keep coming, and the runner reflect this?
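One plausible explanation for why 'running' stays in the hundreds, sketched as a back-of-envelope check using Little's law: the number of in-flight requests is roughly the arrival rate multiplied by the average response time. The response time below is a made-up figure for illustration, not a measurement from this thread:

```python
# Little's law: in_flight ~= arrival_rate * avg_response_time.
# The figures here are hypothetical, chosen only to show the shape
# of the relationship observed in the thread.

rate = 600.0          # requests per second actually being sent
avg_response = 0.5    # seconds; assumed average server response time

in_flight = rate * avg_response   # 300 -> "running" sits in the hundreds
```

On this reading, 'running' in the hundreds with limits set to 10000 need not mean a loadUI cap is being hit at all; it is simply what a ~600/s rate and a sub-second response time imply, and any stall in the ramp-up would have to be explained elsewhere.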