| Elvorin wrote: |
|---|
| Load testing isn't just for getting a general idea of how a system behaves under load over a period of time. It's also used to track down very specific issues with the application and/or system. Having detailed data (request/response for each error, every single response time, etc.) always comes in useful for that. |
I see what you mean.
However, I don't think saving all request data (complete request, complete response, response time) for every request in a load test is a good solution, for the following reasons:
- It would be technically impossible in some cases (the HDD is too slow for high-load tests or too small for long tests; the network is too slow for distributed testing on agents).
- If made optional, it would be hard for the average user to determine when it can safely be turned on and when it cannot.
- It would always decrease the performance of the load testing application itself.
- 99.9% of the saved requests would be completely useless to the user.
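To put the first point in numbers, here is a rough back-of-envelope calculation. The request rate, payload size, and duration below are made-up illustrative values, not measurements from any particular test:

```python
# Back-of-envelope: disk cost of persisting every request/response.
# All three inputs are illustrative assumptions.
requests_per_second = 10_000     # a moderately high load test
bytes_per_request = 2 * 1024     # request + response + metadata, ~2 KiB
test_duration_hours = 24         # a long-running soak test

total_bytes = (requests_per_second * bytes_per_request
               * test_duration_hours * 3600)
print(f"{total_bytes / 1024**3:.0f} GiB written over {test_duration_hours} h")
# → 1648 GiB written over 24 h
```

Even with these modest assumptions you end up writing on the order of 20 MB/s to disk for the whole test, which is exactly where slow or small HDDs become the bottleneck.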
In loadUI, we solve this by having four "depths" when getting results:
- The Summary Report -- Easy to overview, but no detailed info. Least flexibility.
- The Statistics Workbench -- Statistics for the whole test, down to seconds level.
- Assertions (loadUI 2.0) -- Used to spot extreme values. Saves the exact value that violated a constraint, along with the time that the constraint was violated.
- A Condition and an Output component -- Used to save anything from any requests. You can, for example, save the response of every transaction whose request contained a prime number, or tweet the sessionIDs of all requests that took longer than 4711 ms to send and receive. Most flexibility.
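As a rough sketch of that last idea: a Condition component is essentially a predicate, and an Output component is a sink that only ever sees the matching requests. This is illustrative Python, not the actual loadUI component API:

```python
# Hypothetical sketch of the Condition -> Output pipeline: persist only
# the results that match a predicate, instead of persisting everything.
# Function and field names are illustrative, not loadUI's API.
def save_matching(results, predicate, sink):
    """Forward only the results satisfying `predicate` to `sink`."""
    for result in results:
        if predicate(result):
            sink.append(result)

# Example: keep only requests slower than 4711 ms (the thread's example).
results = [
    {"session_id": "a1", "time_ms": 120},
    {"session_id": "b2", "time_ms": 5000},
    {"session_id": "c3", "time_ms": 4712},
]
slow = []
save_matching(results, lambda r: r["time_ms"] > 4711, slow)
print([r["session_id"] for r in slow])  # → ['b2', 'c3']
```

The point is that the predicate runs at test time, so only the 0.1% of interesting requests ever touch the disk.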
With that being said, I would love to hear suggestions on how we could improve getting results in loadUI. 
| The issue is I can't send a report of a load test with those graphs. They don't give a true picture of performance. Isn't that what graphs are supposed to be for? To give a pictorial view of performance over time? Aggregated data makes it look like performance degraded over a period and then gradually improved again, when in reality perhaps just one response time was bad. So whoever looks at the graphs (e.g. managers) will have no idea of that, unless we send every single detail of the spike and its timeline. In that case the graph is completely moot. |
I don't quite follow. loadUI doesn't show data aggregated over the whole test. This is what it shows:
- Seconds Zoom Level: Each datapoint is an aggregation of the requests completed during one single second.
- Minutes Zoom Level: Each datapoint is an aggregation of the requests completed during 6 seconds.
- Hours Zoom Level: Each datapoint is an aggregation of the requests completed during 10 minutes.
- Days Zoom Level: Each datapoint is an aggregation of the requests completed during 2 hours.
- Weeks Zoom Level: Each datapoint is an aggregation of the requests completed during 12 hours.
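In other words, each zoom level buckets the completed requests by time and emits one datapoint per bucket. A simplified sketch of that bucketing (illustrative Python, not the actual loadUI implementation; the min/max/avg summary is an assumption):

```python
# Sketch of zoom-level aggregation: each datapoint summarizes all
# requests completed within one time bucket (1 s, 6 s, 10 min, ...
# depending on the zoom level). Sample values are illustrative.
from collections import defaultdict

def aggregate(samples, bucket_seconds):
    """samples: (timestamp_s, response_time_ms) pairs.
    Returns {bucket_start: (min, max, avg)} for each bucket."""
    buckets = defaultdict(list)
    for ts, rt in samples:
        buckets[ts - ts % bucket_seconds].append(rt)
    return {start: (min(v), max(v), sum(v) / len(v))
            for start, v in sorted(buckets.items())}

samples = [(0, 100), (1, 110), (5, 900), (7, 105)]
print(aggregate(samples, 6))   # Minutes zoom level: 6-second buckets
# → {0: (100, 900, 370.0), 6: (105, 105, 105.0)}
```

Note that a single 900 ms outlier still shows up in its bucket's max, so a spike is visible without saving every individual request.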
Regards
Henrik
SmartBear Software