Do iterations running in real time take longer than "Gross Duration" + "Delay Between Iterations"?
I have a load test in which 2500 virtual users run a URL navigation script over 11 minutes (1 minute of ramp-up, then 10 minutes at peak).
The average Gross Duration was 10.32 seconds, and Delay Between Iterations is set to 4 seconds. Using the 10-minute peak duration, I'm seeing far fewer iterations completed than my math says there should have been. My goal is to find out how many times in total the URL was hit over the course of the test run.
The math suggests...
10 minutes is 600 seconds. I divide that by the sum of the Gross Duration and the Delay setting.
600 / 14.32 = 41.9 theoretical iterations per user.
41.9 × 2500 virtual users ≈ 104,748 theoretical executions of the script (104,748 URL hits).
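Here's that arithmetic as a quick sketch (variable names are mine; the figures come from the run described above):

```python
# Expected iteration count, assuming each user cycle takes exactly
# Gross Duration + Delay Between Iterations and users run back to back.
users = 2500                 # peak virtual users
test_duration_s = 600        # 10-minute peak window, in seconds
gross_duration_s = 10.32     # average Gross Duration per iteration
delay_s = 4.0                # Delay Between Iterations setting

cycle_s = gross_duration_s + delay_s          # 14.32 s per full cycle
iters_per_user = test_duration_s / cycle_s    # ~41.9 iterations per user
total_iters = iters_per_user * users          # ~104,748 expected URL hits

print(f"{iters_per_user:.1f} iterations/user, {int(total_iters):,} total")
```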
However, the Statistics tab for my LoadNinja test run says only 45,330 total script steps were run (it's a one-step script). My error rate was 1.03%, so errors shouldn't account for much of the gap.
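Working backwards from the observed count (assuming all 2500 users were iterating for the full 10-minute peak), each cycle would have had to average roughly 33 seconds, more than double my estimate:

```python
# Implied average cycle time, derived from the observed step count.
users = 2500
test_duration_s = 600
observed_steps = 45330       # completed script steps from the Statistics tab

implied_cycle_s = users * test_duration_s / observed_steps
print(f"Implied average cycle: {implied_cycle_s:.1f} s")  # ~33.1 s vs. my 14.32 s estimate
```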
What is causing this massive difference between the iterations I expected and the iterations that actually ran? Does each user's full iteration cycle generally take longer than the 14.32 seconds I expected? Does LoadNinja add per-iteration overhead for processing/setup/etc., meaning I should just trust the completed script count itself?