| SmartBear Support wrote: |
|---|
Hi!
| Elvorin wrote: |
|---|
I ran a load test for 2 hours. Most of the response times were in the range of milliseconds. I faced multiple issues while generating the report:
1. When I generated the report after the run, all data just showed 0.0;
2. If I keep the graph window open to monitor performance, and then after the run I change the zoom level, all graphs on that component (e.g. response times) just vanish. Even resetting the zoom level doesn't bring back the graphs. |
Which version are you using? Are you using a nightly build? |
No. I'm using the stable version.
| SmartBear Support wrote: |
|---|
| Elvorin wrote: |
|---|
| 3. Graphs show aggregated data for the set zoom level instead of a proper connected line through the actual response times. So when probably only 1 or 2 response times were high, that entire time period shows as a spike, and it's hard to tell whether just 1 or 2 calls had bad performance, or all calls in that period did. Even zooming down to the seconds level doesn't help, as there are hundreds of calls being made per second. |
You can save any extreme values based on any criteria by routing them from the Runner, via a Condition (available in loadUI 2.0) or Assertion (not as powerful) component, into a TableLog. The Statistics Workbench always aggregates to the seconds level for storage/performance reasons, but you can view aggregates such as Max, 90th percentile and Standard Deviation to analyze the statistics.
Also, in loadUI 2.0, we have completely redone assertions. Now they are integrated into the Statistics Workbench, so you can easily find the exact position of extreme values.
Does this sound like it would meet your requirements?
Henrik SmartBear Software |
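To make the aggregation Henrik describes concrete, here is a minimal sketch of bucketing per-call response times into one-second intervals and computing Max, 90th percentile and standard deviation per bucket. This is an illustration only, not loadUI's actual implementation: the sample data, bucket boundaries, and the nearest-rank percentile method are all assumptions.

```python
import math
import statistics
from collections import defaultdict

# Hypothetical sample data: (timestamp_ms, response_time_ms) per call.
# Second 1 has one very slow call; second 2 is uniformly fast.
samples = [
    (1000, 12), (1200, 15), (1600, 900),
    (2100, 14), (2500, 13), (2900, 16),
]

def aggregate_per_second(samples):
    """Bucket calls by whole second and compute summary statistics."""
    buckets = defaultdict(list)
    for ts_ms, rt in samples:
        buckets[ts_ms // 1000].append(rt)

    result = {}
    for second, times in sorted(buckets.items()):
        times.sort()
        result[second] = {
            "count": len(times),
            "max": max(times),
            # Nearest-rank 90th percentile; loadUI's exact method may differ.
            "p90": times[math.ceil(0.9 * len(times)) - 1],
            "stddev": statistics.pstdev(times),
        }
    return result

print(aggregate_per_second(samples))
```

Note how the Max and 90th-percentile columns expose the 900 ms outlier in second 1 even though the per-second bucketing has discarded the individual call times.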
I suppose we can attempt to work with that.
The issue is that I can't send a load testing report with those graphs. They don't give a true picture of performance. Isn't that what graphs are supposed to be for? To give a pictorial view of performance over time? Aggregated data makes it look like performance degraded over a time period and then gradually improved again, when actually probably just one response time was bad. So whoever looks at the graphs (e.g. managers) will have no idea, unless we send every single detail of each spike and its timeline. In that case the graph is completely moot.
Anyway, I can understand that you aggregate data to the second level on the graph for performance reasons, but if we could at least have the response time of each call as part of the 'raw data', I could then export it to Excel and generate my own graphs. Right now that's what I'm doing anyway, but again, it's aggregated to one second instead of per call.
Load testing isn't just for getting a generic idea of how a system behaves under load over a time period. It's also used to find very specific issues with the application and/or system. Having detailed data (request/response for each error, every single response time, etc.) is always useful for that.
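The distortion described above is easy to demonstrate: in a second containing 99 fast calls and one very slow one, the per-second mean that an aggregated graph plots makes the whole second look degraded, while the median shows the typical call was fine. The numbers below are made up for illustration.

```python
import statistics

# Hypothetical: 100 calls in one second, 99 fast and 1 very slow.
times_ms = [10] * 99 + [5000]

# What an aggregated graph effectively plots for that second:
per_second_mean = statistics.mean(times_ms)
print(per_second_mean)               # 59.9 ms: the whole second looks degraded

# What per-call raw data would reveal:
print(max(times_ms))                 # 5000 ms: a single outlier, not general slowness
print(statistics.median(times_ms))   # 10 ms: the typical call was fine
```

This is why per-call raw data (or at least a max/percentile breakdown per bucket) matters: the mean alone cannot distinguish "one bad call" from "everything slowed down".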