Forum Discussion
Anonymous
13 years ago
Eric
We recognize that there is room for improvement in the accuracy of simulation for high-content, JS-rich pages, and we are working on that.
First, as we now recommend, it makes sense to zero out think time (there is an option for that) and set think time manually for page requests (which should be relatively few). This setup is somewhat more accurate for highly parallel page loads.
In those scenarios, LoadComplete gives you an upper bound on page load time while distributing load on the server over time in a way that is, on average, representative of the user traffic captured in the scenario. That is one common interpretation.
In other cases, mostly for optimization, people use the waterfall charts and compare relative page load times after changing or reordering code to alter the sequence of internal events on a page.
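To make the "upper bound" idea concrete, here is a small sketch in Python with made-up waterfall timings (these numbers and function names are illustrative only, not LoadComplete's API or export format). It compares the wall-clock load time when requests overlap against the serialized total, which is what you get as an upper bound when parallel requests are replayed back to back:

```python
# Hypothetical waterfall entries: (start_offset_ms, duration_ms) per request.
# Illustrative numbers, not from a real LoadComplete export.
waterfall = [
    (0, 120),    # HTML document
    (130, 80),   # CSS
    (130, 200),  # JS bundle
    (140, 150),  # image
]

def wall_clock_ms(entries):
    """Elapsed page load time when requests overlap (parallel fetches)."""
    start = min(s for s, _ in entries)
    end = max(s + d for s, d in entries)
    return end - start

def serialized_upper_bound_ms(entries):
    """Upper bound if every request ran back to back with no overlap."""
    return sum(d for _, d in entries)

print(wall_clock_ms(waterfall))              # 330
print(serialized_upper_bound_ms(waterfall))  # 550
```

The gap between the two numbers (330 ms vs. 550 ms here) is exactly what shrinks as the simulation of parallel loading gets more accurate; comparing the same metric before and after a code change is still meaningful either way.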
Hope this is of help.
Regards,