Forum Discussion
AlexKaras
14 years ago · Champion Level 3
Hi Mathijs,
> Is there another, more precise approach, to time more accurately?
I can think of several approaches, but it is up to you to decide what best fits your needs and budget...
a) You can use LoadComplete (http://smartbear.com/products/qa-tools/load-testing/) or the load testing functionality of TestComplete (available in the Enterprise edition; requires an additional license). It will accurately measure the time spent getting the data from the server (as we are talking about a web application), but it will not measure the additional time spent on the client to execute client scripts, render UI controls, etc.
b) Consider AQtime (http://smartbear.com/products/development-tools/performance-profiling/). With it, you should be able to measure how much time it takes to execute the client-side scripts/code, but it will not measure the transmission time between client and server. You can get the (average) total time it takes for the end user to see the requested page by summing the (average) figures provided by LoadComplete and AQtime.
c) Options a) and b) provide the most accurate timing. If you do not need such precision, you may consider the approach that was implemented in one of the projects I know about. The idea was based on the educated guess (BTW, Support, please correct me if the below is wrong) that TestComplete spends extra time only when the sought-for object is absent from its internal cache: after the object has been found and processed once, it usually takes TestComplete much less time to find it a second time, especially if the state of the tested application did not change significantly. With this in mind, the test code measured the figures of interest not once but several times (e.g. three times). That is, the code opened some page and measured how long it took for the page to open completely (e.g. by waiting for the Visible property to become True), then closed the page, then immediately opened and closed it two more times. The result of the first measurement was either ignored or reduced appropriately, and the average time was calculated and logged (possibly with other statistics, such as the deviation). Then the test code navigated to another page of interest and the measurement cycle was repeated.
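The measure-several-times-and-discard-the-warm-up idea can be sketched as follows. This is a minimal illustration, not the original project's code: `open_page` and `close_page` are hypothetical callables standing in for your actual TestComplete actions (e.g. clicking a link and then waiting for Visible to become True).

```python
import time
import statistics

def measure_page_open(open_page, close_page, repeats=3, discard_first=True):
    """Time a page-open operation several times and summarize the results.

    open_page  -- callable that opens the page and blocks until it is
                  fully visible (hypothetical stand-in for test actions)
    close_page -- callable that closes the page again
    """
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        open_page()                       # blocks until the page is visible
        timings.append(time.perf_counter() - start)
        close_page()

    # The first run includes one-time costs (e.g. a cold object cache),
    # so drop it before averaging, as described above.
    if discard_first and len(timings) > 1:
        timings = timings[1:]

    return {
        "average": statistics.mean(timings),
        "deviation": statistics.stdev(timings) if len(timings) > 1 else 0.0,
        "samples": timings,
    }
```

With three repeats and the first measurement discarded, the reported average is taken over the two "warm" runs only; the same cycle would then be repeated for each page of interest.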
Hope this will inspire you with the ideas applicable for your project...