Forum Discussion

Elvorin
Contributor
13 years ago

Report Doesn't Work

I ran a load test for 2 hours. Most of the response times were in the range of milliseconds. I faced multiple issues while generating the report:

1. When I generated the report after the run, all data just showed 0.0.

2. If I keep the graph window open to monitor performance, and then after the run I change the zoom level, all graphs on that component (e.g. response times) just vanish. Even resetting the zoom level doesn't bring back the graphs.

3. The graphs show aggregated data for the set zoom level instead of a proper connected line through the actual response times. So when probably only one or two response times were high, that entire time period shows as a spike. It's hard to tell whether just one or two calls had bad performance, or all calls in that period did. Even going down to the seconds zoom level doesn't help, as there are hundreds of calls being made per second.

5 Replies

  • SmartBear_Suppo
    SmartBear Alumni (Retired)
    Hi!
    Elvorin wrote:
    I ran a load test for 2 hours. Most of the response times were in the range of milliseconds. I faced multiple issues while generating the report:

    1. When I generated the report after the run, all data just showed 0.0.

    2. If I keep the graph window open to monitor performance, and then after the run I change the zoom level, all graphs on that component (e.g. response times) just vanish. Even resetting the zoom level doesn't bring back the graphs.

    Which version are you using? Are you using a nightly build?

    Elvorin wrote:
    3. The graphs show aggregated data for the set zoom level instead of a proper connected line through the actual response times. So when probably only one or two response times were high, that entire time period shows as a spike. It's hard to tell whether just one or two calls had bad performance, or all calls in that period did. Even going down to the seconds zoom level doesn't help, as there are hundreds of calls being made per second.


    You can save any extreme values, based on any criteria, by routing them from the Runner via a Condition component (available in loadUI 2.0) or an Assertion component (not as powerful) into a TableLog. The Statistics Workbench always aggregates to the seconds level for storage/performance reasons, but you can view aggregates such as Max, 90th percentile and Standard Deviation to analyze statistics.
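    For intuition, here is a rough Python sketch of the kind of per-second aggregates described above (Max, 90th percentile, Standard Deviation). The numbers and the percentile method are illustrative only, not loadUI's actual implementation:

```python
import statistics

def aggregate(samples):
    """Collapse the raw response times from one second into the kind of
    summary statistics the Statistics Workbench exposes."""
    ordered = sorted(samples)
    # Simple nearest-rank 90th percentile (illustrative choice).
    p90_index = max(0, int(round(0.9 * len(ordered))) - 1)
    return {
        "avg": statistics.mean(samples),
        "max": max(samples),
        "p90": ordered[p90_index],
        "stddev": statistics.stdev(samples),
    }

# Hypothetical raw response times (ms) completed within one second;
# one 3000 ms outlier among otherwise fast calls.
times_ms = [12, 15, 11, 14, 13, 3000, 12, 16, 14, 13]
summary = aggregate(times_ms)

# The single outlier dominates Max (and inflates the average),
# while the 90th percentile still reflects typical performance.
print(summary["max"], round(summary["avg"], 1), summary["p90"])  # → 3000 312.0 16
```

    This is why looking at Max alongside the 90th percentile in the Workbench helps tell "one bad call" apart from "everything was slow".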

    Also, in loadUI 2.0, we have completely redone assertions. Now they are integrated into the Statistics Workbench, so you can easily find the exact position of extreme values.

    Does this sound like it would meet your requirements?

    Henrik
    SmartBear Software
  • SmartBear Support wrote:
    Hi!
    Elvorin wrote:
    I ran a load test for 2 hours. Most of the response times were in the range of milliseconds. I faced multiple issues while generating the report:

    1. When I generated the report after the run, all data just showed 0.0.

    2. If I keep the graph window open to monitor performance, and then after the run I change the zoom level, all graphs on that component (e.g. response times) just vanish. Even resetting the zoom level doesn't bring back the graphs.

    Which version are you using? Are you using a nightly build?


    No, I'm using the stable version.

    SmartBear Support wrote:

    Elvorin wrote:
    3. The graphs show aggregated data for the set zoom level instead of a proper connected line through the actual response times. So when probably only one or two response times were high, that entire time period shows as a spike. It's hard to tell whether just one or two calls had bad performance, or all calls in that period did. Even going down to the seconds zoom level doesn't help, as there are hundreds of calls being made per second.


    You can save any extreme values, based on any criteria, by routing them from the Runner via a Condition component (available in loadUI 2.0) or an Assertion component (not as powerful) into a TableLog. The Statistics Workbench always aggregates to the seconds level for storage/performance reasons, but you can view aggregates such as Max, 90th percentile and Standard Deviation to analyze statistics.

    Also, in loadUI 2.0, we have completely redone assertions. Now they are integrated into the Statistics Workbench, so you can easily find the exact position of extreme values.

    Does this sound like it would meet your requirements?

    Henrik
    SmartBear Software


    I suppose we can attempt to work with that.

    The issue is that I can't send a load-testing report with those graphs. They don't give a true picture of performance. Isn't that what graphs are supposed to be for? To give a pictorial view of performance over time? Aggregated data makes it look like performance degraded over a time period and then gradually improved again, when actually probably just one response time was bad. So whoever looks at the graphs (e.g. managers) will have no idea of this unless we send every single detail of the spike and its timeline. In that case the graph is moot to look at.

    Anyway, I can understand that you aggregate data to the second level on graphs for performance, but if we could at least have the response time of each call as part of the 'raw data', I could then export it to Excel and generate my own graphs. Right now that's what I'm doing anyway, but again, it's aggregated per second instead of per call.
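    As a stopgap while only per-second data is exportable, the exported file can at least be scanned programmatically for spikes instead of eyeballing a chart. A minimal sketch, assuming a hypothetical CSV layout (loadUI's real export format may differ):

```python
import csv
import io

# Assumed export layout: one row per second with the aggregated max (ms).
export = """second,max_ms
0,14
1,16
2,2950
3,13
4,15
"""

def spike_seconds(csv_text, threshold_ms):
    """Return the seconds whose aggregated max exceeds the threshold."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [int(row["second"]) for row in reader
            if float(row["max_ms"]) > threshold_ms]

print(spike_seconds(export, 1000))  # → [2]
```

    This locates when a spike happened, though without per-call raw data it still cannot say which request caused it.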

    Load testing isn't just for getting a generic idea of how a system behaves under load over a time period. It's also used to find very specific issues with the application and/or system. Having detailed data (request/response for errors, every single response time, etc.) always comes in useful for that.
  • Well, today I'm seeing some numbers in the report. But some numbers are way off from what I'm seeing in the graph view.
  • SmartBear_Suppo
    SmartBear Alumni (Retired)
    Elvorin wrote:
    Load testing isn't just for getting a generic idea of how a system behaves under load over a time period. It's also used to find very specific issues with the application and/or system. Having detailed data (request/response for errors, every single response time, etc.) always comes in useful for that.

    I see what you mean.

    However, I don't think it's a good solution to save all request data (complete request, complete response, response time) for each request in a load test, for the following reasons:

    1. It would be technically impossible in some cases (the HDD is too slow for high-load tests or too small for long tests, and the network is too slow for distributed testing on agents).

    2. If made optional, it would be hard for the average user to determine when it can be turned on and when it cannot.

    3. It would always decrease the performance of your load testing application.

    4. 99.9% of requests would be completely useless to the user.


    In loadUI, we solve this by having four "depths" when getting results:

    1. The Summary Report -- Easy to overview, but no detailed info. Least flexibility.

    2. The Statistics Workbench -- Statistics for the whole test, down to the seconds level.

    3. Assertions (loadUI 2.0) -- Used to spot extreme values. Saves the exact value that violated a constraint, along with the time the constraint was violated.

    4. A Condition and an Output component -- Used to save anything from any request. You can, for example, save the responses of all transactions whose request contained a prime number, or tweet all sessionIDs of requests that took longer than 4711 ms to send+receive. Most flexibility.

    With that said, I would love to hear suggestions on how we could improve getting results in loadUI.

    Elvorin wrote:
    The issue is that I can't send a load-testing report with those graphs. They don't give a true picture of performance. Isn't that what graphs are supposed to be for? To give a pictorial view of performance over time? Aggregated data makes it look like performance degraded over a time period and then gradually improved again, when actually probably just one response time was bad. So whoever looks at the graphs (e.g. managers) will have no idea of this unless we send every single detail of the spike and its timeline. In that case the graph is moot to look at.

    I don't get it. loadUI doesn't show aggregated data for the whole test. This is what it shows:
    • Seconds Zoom Level: Each datapoint is an aggregation of the requests completed during one single second.

    • Minutes Zoom Level: Each datapoint is an aggregation of the requests completed during 6 seconds.

    • Hours Zoom Level: Each datapoint is an aggregation of the requests completed during 10 minutes.

    • Days Zoom Level: Each datapoint is an aggregation of the requests completed during 2 hours.

    • Weeks Zoom Level: Each datapoint is an aggregation of the requests completed during 12 hours.
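    The windowing described in that list can be sketched as follows; it shows why a single slow call makes a whole minutes-level datapoint look like a sustained spike (the 6-second window size is taken from the list above, the data is invented):

```python
# Per-second max response times (ms); second 7 contains one slow call.
per_second_max = [14, 15, 13, 16, 14, 12, 15, 2800, 13, 14, 16, 15]

def bucket_max(values, window):
    """Aggregate per-second datapoints into zoom-level windows,
    keeping the max of each window (e.g. window=6 for minutes level)."""
    return [max(values[i:i + window]) for i in range(0, len(values), window)]

# At minutes zoom, the one outlier in second 7 turns the entire
# second 6-second window into an apparent spike.
print(bucket_max(per_second_max, 6))  # → [16, 2800]
```

    This is the effect Elvorin describes: the aggregated datapoint cannot distinguish "one slow call" from "six seconds of slow calls".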


        Regards

        Henrik
        SmartBear Software
  • I understand the performance issue. Yeah, it makes sense that the tool's internal performance is of utmost importance to capture proper performance data for the application under load. How about having an option to enable/disable a debug mode? In debug mode it would log all request/response data. Yes, I understand the hard disk, network, etc. constraints, but this mode would be very helpful for setting up load tests: to make sure the load is cycling through the proper data set, that it is getting the expected responses from the app, and so on. Load tests are performed by technical people, so I'm sure they'll know how to use a debug mode (provided it's mentioned in the documentation so they know where to find the option).
    Also, I believe in a load test we expect almost all of our requests to pass. So if, with debug disabled, the tool just logs the request/response for the failed ones, would it impact the tool's performance much?
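    The failure-only logging suggestion could look roughly like this (a generic sketch, not an existing loadUI feature; all names are made up):

```python
failed_log = []

def record_if_failed(request, response, status_code):
    """Keep full request/response pairs only for failed calls, so the log
    stays small when nearly all requests pass, as expected in a load test."""
    if status_code >= 400:
        failed_log.append({
            "request": request,
            "response": response,
            "status": status_code,
        })

record_if_failed("GET /ok", "200 body", 200)
record_if_failed("GET /broken", "500 body", 500)
print(len(failed_log))  # → 1
```

    Since failures should be rare, the storage and I/O cost stays proportional to the error count rather than to the total request volume.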

    Regarding the 'four depths':
    1. It generates reports on the total response time of a test case. I wish it could be configured to generate them for a request's response time.
    2. This one I found very useful, but it has its own set of bugs (I've reported a few of them in my other posts).
    3. Haven't tried it yet, as it's not part of 1.5.0. But correct me if I misunderstood this component: it will only tell me that a response time went beyond an acceptable level (or violated some other constraint), but not for what request input the constraint was not met. For example, if my constraint says a 2-second response time, it'll say that a response was received after, let's say, 3 seconds, and it will say exactly when, for example at 1:30 PM. But it won't say what my request data was. So I cannot take that data and run it in isolation to figure out whether the poor response was due to the data or something else.
    4. This one looks promising. Haven't tried it yet, but it looks like it might serve our purpose, if we can configure it to detect failures and then capture the request.

    Regarding graphs:
    Yes, I'm aware of the detail levels. But the issue is that I can't print or save (as PDF) the entire graph when it's at anything other than the overall level and spans more than the screen width. If I do, it only shows the portion that is visible on screen. For example, if I change the zoom level to minutes and print/save it, the printed/saved copy will have only half an hour's worth of the graph.

    I'm guessing this might have something to do with paper width. But I suppose the solution would be to generate the graphs in multiple segments.

    I know I have made several threads about loadUI, so don't get me wrong: I like this tool, a lot. But I think it has room for improvement, and currently a few bugs are making things a little difficult. So, to get those fixed and to see this tool become robust, I'm putting forward my suggestions and concerns.