Hi!
| Elvorin wrote: |
|---|
| I ran a load test for 2 hours. Most of the response times were in the range of milliseconds. I faced multiple issues while generating the report: |
| 1. When I generated the report after the run, all data just showed 0.0. |
| 2. If I keep the graph window open to monitor performance, and then after the run I change the zoom level, all graphs on that component (e.g. response times) just vanish. Even resetting the zoom level doesn't bring back the graphs. |
Which version are you using? Are you using a nightly build?
| Elvorin wrote: |
|---|
| 3. Graphs show aggregated data for the set zoom level instead of a connected line through the actual response times. So when probably only 1 or 2 response times were high, that entire time period shows as a spike. It's hard to tell whether just 1 or 2 calls had bad performance, or all calls in that period did. Even going down to the seconds zoom level doesn't help, as there are hundreds of calls being made each second. |
You can save any extreme values, based on any criteria, by routing them from the Runner via a Condition component (available in loadUI 2.0) or an Assertion component (not as powerful) into a TableLog. The Statistics Workbench always aggregates to the seconds level for storage and performance reasons, but you can view aggregates such as Max, 90th percentile and Standard Deviation to analyze the statistics.
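To illustrate why a single slow call can make a whole second look like a spike, here is a minimal sketch of seconds-level aggregation in plain Java. This is not loadUI's actual implementation, and the sample data and class name are hypothetical; it just shows raw per-call response times being grouped by second and collapsed into Max, 90th percentile and Standard Deviation, the same kind of aggregates the Statistics Workbench displays.

```java
import java.util.*;

public class SecondAggregation {
    public static void main(String[] args) {
        // Hypothetical raw samples: {timestamp in ms, response time in ms}.
        // One slow call (480 ms) among many fast ones within the same second.
        long[][] samples = {
            {1000, 12}, {1150, 15}, {1300, 11}, {1450, 480},
            {1600, 14}, {1750, 13}, {1900, 12},
            {2100, 13}, {2400, 15}, {2700, 14}, {2950, 12}
        };

        // Group response times by whole second.
        Map<Long, List<Long>> bySecond = new TreeMap<>();
        for (long[] s : samples) {
            bySecond.computeIfAbsent(s[0] / 1000, k -> new ArrayList<>()).add(s[1]);
        }

        // Collapse each second into Max, 90th percentile and Standard Deviation.
        for (Map.Entry<Long, List<Long>> e : bySecond.entrySet()) {
            List<Long> times = e.getValue();
            long max = Collections.max(times);
            double p90 = percentile(times, 0.90);
            double mean = times.stream().mapToLong(Long::longValue).average().orElse(0);
            double stdDev = Math.sqrt(times.stream()
                    .mapToDouble(t -> (t - mean) * (t - mean)).average().orElse(0));
            System.out.printf("second %d: max=%d  p90=%.1f  stddev=%.1f%n",
                    e.getKey(), max, p90, stdDev);
        }
    }

    // Nearest-rank percentile on a sorted copy of the samples.
    static double percentile(List<Long> values, double fraction) {
        List<Long> sorted = new ArrayList<>(values);
        Collections.sort(sorted);
        int rank = (int) Math.ceil(fraction * sorted.size()) - 1;
        return sorted.get(Math.max(rank, 0));
    }
}
```

In the printed output, the first second's Max jumps to 480 ms even though only one of its seven calls was slow, so the aggregated graph shows a spike for the whole second; comparing Max against the Standard Deviation hints at whether it was one outlier or a generally slow second, but routing the outliers themselves into a TableLog is what lets you pinpoint the individual bad calls.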
Also, in loadUI 2.0, we have completely redone assertions. Now they are integrated into the Statistics Workbench, so you can easily find the exact position of extreme values.
Does this sound like it would meet your requirements?
Henrik
SmartBear Software