
Results Interpretation

Occasional Contributor


Hello, I am using LoadNinja to test how my application behaves with 100 concurrent users.

I set up a duration-based test with a single recorded script (for example, loading the home page), which runs for the duration of the test.

While the test was running, I tried to access my application several times with my own browser (with caching disabled). The loading time I experienced was completely different from the LoadNinja result labeled "Avg. step duration", but it was fully in line with the value reported as "Total nav timings (s)". LoadNinja calculates the average step duration as the total nav timing plus the think time. However, since I only need to test the home page load, the think time (which in LoadNinja cannot be set to 0, only minimized) is not a relevant metric in my case.
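To make the discrepancy concrete, here is a minimal sketch of the relationship described above, with made-up numbers (these are not real LoadNinja values or APIs, just an illustration of how think time inflates the reported average):

```python
# Illustration of how "Avg. step duration" relates to "Total nav timings":
# step duration = navigation time + think time. All numbers are hypothetical.

def avg_step_duration(nav_timings, think_times):
    """Average step duration as the per-step sum of nav timing and think time."""
    steps = [nav + think for nav, think in zip(nav_timings, think_times)]
    return sum(steps) / len(steps)

nav_timings = [1.2, 1.4, 1.3]   # seconds the page actually took to load
think_times = [3.0, 3.0, 3.0]   # simulated pauses between user actions

print(avg_step_duration(nav_timings, think_times))   # inflated by think time
print(sum(nav_timings) / len(nav_timings))           # what a real browser would see
```

With these numbers, the average step duration comes out around 4.3 s even though every page load finished in under 1.5 s, which matches the mismatch described above.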

I raised these doubts with LoadNinja customer care and received this feedback:

"This corresponds to the definition of the “Think time” metric - the average time virtual users spent simulating pauses between user actions during the playback. Anyway, I think that using the “Think time” metric is not the right way to measure if page load times are within the expected boundaries. My recommendation would be to use SLA validations instead – they will be throwing errors if a step takes longer than expected and you’ll have an opportunity to measure the number of such errors".

This answer seems to agree with us that the "Think time" metric is not the right way to measure page load performance, but it suggests using the SLA threshold validation instead.

From our direct experience, the SLA threshold validation takes the think time into account. Can you please confirm this hypothesis?

If this is true, do you confirm the answer given above? (They suggest not using think time as a relevant metric, yet recommend a threshold validation that takes think time into account.)
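The inconsistency can be sketched as follows. Assuming (hypothetically) that the SLA check is applied to the full step duration rather than the nav timing alone, a page that loads well within the SLA would still be flagged; the function names and numbers here are illustrative, not LoadNinja's actual API:

```python
# Hypothetical SLA threshold check. If applied to the step duration
# (nav timing + think time), fast page loads can still trip the SLA.

SLA_SECONDS = 3.0

def sla_violations(durations, threshold=SLA_SECONDS):
    """Count steps whose duration exceeds the SLA threshold."""
    return sum(1 for d in durations if d > threshold)

nav_only   = [1.2, 1.4, 1.3]                # pure page-load times
with_think = [d + 3.0 for d in nav_only]    # think time folded in

print(sla_violations(nav_only))    # 0 violations: page loads are fine
print(sla_violations(with_think))  # 3 violations: think time trips the SLA
```

This is why validating against a duration that includes think time contradicts the advice not to treat think time as a relevant metric.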

SmartBear Alumni (Retired)


We are working on a new version of the test reports that will allow excluding think times from the duration.

You are right about the SLA validations. We discussed this with our Product Manager: we should allow configuring the SLA validation to ignore think times. Right now, the SLA validation takes think times into consideration, which doesn't make much sense.

We are aware of the need, and we are already working on a solution for it.
