Forum Discussion
Gentlemen,
thank you for all the useful replies. I'll download the evaluation version of AQTime and see what it can do for us.
Thanks.
Mathijs
Any time, Mathijs... :)
P.S.
I am pretty sure that you know that:
-- Like any SmartBear product, AQTime ships with pretty good documentation that is worth consulting extensively;
-- Depending on the profiler you use, a proper setup of Areas, Triggers and Actions might be required to improve the performance, stability and quality of the collected results. Again, the documentation helps explain how they differ and how they can be used to help you with profiling;
-- Questions on AQtime can be asked at https://community.smartbear.com/t5/AQtime/bd-p/AQTime (or in this thread :));
-- https://support.smartbear.com/articles/aqtime/ and https://support.smartbear.com/screencasts/aqtime/ are worth skimming so you can read or watch whatever is relevant to your task.
P.P.S.
No, I am not an AQTime expert, but this area interests me and I try not to miss an opportunity to refresh my modest knowledge of it. :)
- mgroen2 · 10 years ago · Super Contributor
AlexKaras, Colin_McCrae, tristaanogre
I noticed the latest edition of TestComplete has some very 'simple' performance measuring capabilities.
In the sense that it can calculate how much time (in milliseconds) certain actions take.
I don't know about its accuracy, or whether the resulting values include TestComplete's own time to signal that an action is done. I mean, are these values "solid as a rock", or do they just serve as a "digital stopwatch"?
- AlexKaras · 10 years ago · Community Hero
Hi Mathijs,
I am pretty sure these are just wrappers over the aqPerformance object (https://support.smartbear.com/testcomplete/docs/reference/program-objects/aqperformance/index.html) that has existed in TestComplete for some time and is just a "digital stopwatch".
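For what it's worth, here is a minimal sketch of how that "stopwatch" is typically used from script code, assuming TestComplete's Python scripting and its built-in aqPerformance and Log objects (the counter name and the timed UI steps are just placeholders; the exact method signatures are documented at the link above):

```python
# Minimal sketch (assumption: TestComplete Python script; "LoginScreen" and the
# timed UI steps are placeholders).

def time_login_screen():
    # Start a named counter; aqPerformance tracks the elapsed time internally.
    aqPerformance.Start("LoginScreen")

    # ... drive the UI actions you want to time here, e.g. open the login form ...

    # Read the elapsed time (in milliseconds) for that counter and log it.
    elapsed_ms = aqPerformance.Value("LoginScreen")
    Log.Message("Opening the login screen took %d ms" % elapsed_ms)
```

Note that the number includes whatever overhead TestComplete itself adds while driving the UI, so it is an end-to-end "stopwatch" reading rather than the application's native timing.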
- Colin_McCrae · 10 years ago · Community Hero
^^^ I got THREE notifications of AlexKaras' reply on this one!
- tristaanogre · 10 years ago · Esteemed Contributor
mgroen2 wrote:
AlexKaras, Colin_McCrae, tristaanogre
I noticed the latest edition of TestComplete has some very 'simple' performance measuring capabilities.
In the sense that it can calculate how much time (in milliseconds) certain actions take.
I don't know about its accuracy, or whether the resulting values include TestComplete's own time to signal that an action is done. I mean, are these values "solid as a rock", or do they just serve as a "digital stopwatch"?
Nice catch!
There's also an analogous object called aqPerformance that is usable in script code.
However, I think your assessment is correct: in essence this is a digital stopwatch with the added feature of a built-in method that compares the elapsed time to an indicated upper limit and determines whether the performance check passes or fails. As mentioned above, this is not a show stopper. The TestComplete overhead is going to be, for the most part, constant (assuming constant hardware), so any performance improvements or degradations that these features detect would still be valid as an indication, even if an accurate measurement of the application's specific performance is still unavailable. I can check that opening the app and logging in via TestComplete takes less than 10 seconds... but I cannot report the specific native time the application itself needs to do this.
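To illustrate, a minimal sketch of what such a pass/fail check might look like in a TestComplete Python script (assuming the built-in aqPerformance object; the 10-second limit, counter name and login steps are placeholders, and the exact parameters of aqPerformance.Check are worth confirming in the documentation):

```python
# Minimal sketch (assumption: TestComplete Python script; the 10 000 ms limit,
# counter name and login steps are placeholders).

def check_login_under_ten_seconds():
    aqPerformance.Start("AppLogin")  # start the stopwatch for this scenario

    # ... launch the tested application and drive the login via TestComplete ...

    # Compare the elapsed time of the "AppLogin" counter against a 10 000 ms
    # upper limit; the result is posted to the test log as a pass/fail check.
    aqPerformance.Check(10000, "Open the app and log in", "AppLogin")
```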
- mgroen2 · 10 years ago · Super Contributor
tristaanogre wrote:
Nice catch!
There's also an analogous object called aqPerformance that is usable in script code.
However, I think your assessment is correct: in essence this is a digital stopwatch with the added feature of a built-in method that compares the elapsed time to an indicated upper limit and determines whether the performance check passes or fails. As mentioned above, this is not a show stopper. The TestComplete overhead is going to be, for the most part, constant (assuming constant hardware), so any performance improvements or degradations that these features detect would still be valid as an indication, even if an accurate measurement of the application's specific performance is still unavailable. I can check that opening the app and logging in via TestComplete takes less than 10 seconds... but I cannot report the specific native time the application itself needs to do this.
Thanks, tristaanogre, for the clarification.
I think I see the point you are making: these performance indicators can be useful as an indication, in the sense of serving as a 'digital stopwatch'. For example, increasing CPU/memory on the machine is expected to lead to lower values reported by the Start/Stop functions on the mentioned Performance "tab" (as seen in the picture) when the exact same test script is executed.
But what is the use of these values when you are testing an application that uses data from a database running on other hardware?
Will re-running the same test with upgraded hardware specs (CPU/memory) on the machine running the client app (which also runs TestComplete) lead to a significant improvement in the measured performance if the database server machine remains unchanged (with respect to hardware)?
Or, consider the same scenario the other way around:
Keep the hardware specs of the machine where the app and TestComplete are running intact, but improve the hardware specs of the database server. What are the values of the Performance indicator expected to show? Higher performance (lower values in milliseconds)?
There is also a third scenario that came to mind while writing this: leave the hardware specs of both machines (the client/TestComplete machine and the database server) intact, but increase the bandwidth between them. What will that lead to?
I wonder....