Forum Discussion
mgroen2 wrote:
AlexKaras, Colin_McCrae, tristaanogre
I noticed the latest edition of TestComplete has some very 'simple' performance-measuring capabilities,
in the sense that it can calculate how much time (in milliseconds) certain actions take.
I don't know about its accuracy, or whether the resulting values include TestComplete's own time to signal that an action is done. I mean, are these values "solid as a rock", or do they just have the purpose of a "digital stopwatch"?
Nice catch!
There's also an analogous object called aqPerformance that is usable in script code.
However, I think your assessment is correct: this is, in essence, a digital stopwatch with the added feature of a built-in method that compares the elapsed time to an indicated upper limit and determines whether the performance check passes or fails. As mentioned above, this is not a show-stopper. The TestComplete overhead is going to be, for the most part, constant assuming constant hardware, so any performance improvements/degradations that these features find would still be valid as an indication, even if an accurate measurement of the application's specific performance remains unavailable. I can check that opening the app and logging in via TestComplete takes less than 10 seconds... but I cannot report the specific native time it takes to do this within the application.
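For illustration, something like this in script code. This is only a rough sketch in TestComplete's Python flavour: the Start/Check/Value calls follow the documented aqPerformance methods as I understand them, and the tested-app name "Orders" is a placeholder.

```python
# TestComplete script unit (Python). Rough sketch only: aqPerformance and
# Log are TestComplete runtime objects; "Orders" is a placeholder app name.
def CheckLoginPerformance():
    aqPerformance.Start("LoginTimer")          # start a named counter

    TestedApps.Orders.Run()                    # launch the tested app
    # ... drive the login dialog here ...

    # Posts a checkpoint to the log; passes if elapsed time <= 10000 ms.
    # Note the value includes TestComplete's own automation overhead.
    if not aqPerformance.Check(10000, "Open app and log in", "LoginTimer"):
        Log.Warning("Login took %d ms" % aqPerformance.Value("LoginTimer"))
```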
tristaanogre wrote:
Nice catch! There's also an analogous object called aqPerformance that is usable in script code. [...]
Thanks, tristaanogre, for the clarification.
I think I see the point you are making: these performance indicators may be of use to get an indication, in the way of serving as a 'digital stopwatch'. For example, increasing CPU/memory on the machine can be expected to lead to lower values reported by the Start/Stop functions on the mentioned Performance "tab" (as seen in the picture) when the exact same test script is executed.
But what will be the use of these values when you are testing an application which uses data from a database running on other hardware?
Will re-running the same test with upgraded hardware specs (CPU/memory) on the machine running the client app (which also has TestComplete running) lead to a significant performance improvement if the database server machine remains unchanged (with respect to hardware)?
Or, consider the same scenario the other way around:
Keep the hardware specs of the machine where the app and TestComplete are running intact, but improve the hardware specs of the database server. What would the values of the Performance indicator be expected to show? Higher performance (lower values in milliseconds)?
And there is a third scenario I thought of while writing this: leave the hardware specs of both entities (the client/TestComplete machine and the database server) intact, but increase the bandwidth between the two machines. What will this lead to?
I wonder....
- AlexKaras · 9 years ago · Community Hero
Hi Mathijs,
If I got your reply right...
According to your scenario description, you are working with a client-server application (two- or three-tier, though this does not matter in this case). In this case any of your scenarios, or any combination of them, can occur, because time is spent on the server to serve the data request, on the network, and on the client to process and display the data.
The best approach is to measure performance on all these nodes, compare the obtained values to find the slowest one, improve its performance, and repeat the cycle until acceptable performance is obtained. A database server profiler (specific to any given DB) in combination with performance counters should be used to check whether the database has any performance problems; a network monitor/sniffer may help with the network traffic; and an application profiler (like AQTime) is the recommended tool for performance analysis on the client side of client-server applications.
Note that it is a good idea to create a dedicated (temporary) group consisting of a DB engineer (someone who understands DB server performance and how to improve it, not just someone who writes queries or backs up the database on schedule), a network engineer, and a QA/developer (to analyse/improve the code on the application's side) to work on the performance analysis/improvement task.
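To make the per-node idea concrete, here is a rough sketch (plain Python, outside TestComplete; pyodbc, the DSN and the query are placeholders for your own driver and statements). Timing the same query directly against the database gives the server + network share; subtracting it from the UI-level stopwatch value approximates the client-side share.

```python
# Plain Python, outside TestComplete. Splits out the server + network
# share of an end-to-end measurement. pyodbc, the DSN and the query are
# all placeholders; substitute your own driver and statements.
import time
import pyodbc

def timed_ms(fn):
    start = time.perf_counter()
    fn()
    return (time.perf_counter() - start) * 1000.0

def raw_query():
    conn = pyodbc.connect("DSN=OrdersDb")      # placeholder DSN
    conn.cursor().execute("SELECT * FROM Orders").fetchall()
    conn.close()

db_plus_network_ms = timed_ms(raw_query)
# The UI-level stopwatch (e.g. aqPerformance) gives the end-to-end time;
# end_to_end - db_plus_network approximates the client-side share.
print("DB + network: %.0f ms" % db_plus_network_ms)
```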
- Colin_McCrae · 9 years ago · Community Hero
You're getting into client/server performance areas now.
Not something I would use TC for. (I do performance testing as well)
Personally, I can see some value in using TC to profile an application locally, in a basic way. But I wouldn't use it to test hitting something like a DB server.
I assume the DB server will be servicing multiple clients, so you're immediately dealing with concurrency, which is where TC is not so good. Well, you could maybe use threading to introduce concurrency, but I don't know how much, if at all, TC supports this. I've used threading with Python external to TC in the past.
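For illustration, the sort of external Python I mean, as a rough sketch (the URL is a placeholder): a handful of threads acting as concurrent clients, each recording its own response time.

```python
# Plain Python, outside TestComplete: N concurrent "clients" hitting a
# server and collecting response times. The URL is a placeholder.
import time
import threading
import urllib.request

URL = "http://dbserver.example/report"   # placeholder endpoint
CLIENTS = 20
results, lock = [], threading.Lock()

def one_client():
    start = time.perf_counter()
    urllib.request.urlopen(URL, timeout=30).read()
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    with lock:
        results.append(elapsed_ms)

threads = [threading.Thread(target=one_client) for _ in range(CLIENTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("avg %.0f ms, max %.0f ms" % (sum(results) / len(results), max(results)))
```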
But, as has been mentioned a few times already, TC is OK for a few basic performance metrics on desktop applications. It will always have its own overhead, though. (Unless you use dev-inserted timestamps, as I mentioned previously.)
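As a sketch of the dev-inserted-timestamps idea: the application writes its own millisecond marks around an operation, and the test reads them back afterwards, so TC's automation overhead is excluded. The log format here is made up.

```python
# The app logs lines like "LOGIN_START 1714056000123" and
# "LOGIN_END 1714056004821"; the test computes the native duration
# from those marks, free of any automation overhead.
def native_duration_ms(log_path, start_tag, end_tag):
    marks = {}
    with open(log_path) as log:
        for line in log:
            tag, _, stamp = line.partition(" ")
            if tag in (start_tag, end_tag):
                marks[tag] = int(stamp)
    return marks[end_tag] - marks[start_tag]

print(native_duration_ms("app.log", "LOGIN_START", "LOGIN_END"))
```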
If you want a thorough, detailed, performance profile of a desktop application, use profiling software.
If you want to introduce concurrency to test a remote server with multiple clients, use dedicated performance software. (I tend to use JMeter mostly, but also Python/threading/cloud swarms sometimes.) In most cases where you're testing a remote server, it's preferable to remove the client application/site completely where possible and simply simulate and control the traffic and the rate at which it's produced. Not always possible, but it can usually be done.
You want to automate testing? Use TestComplete. :)
Yep. HKosova reckoned it was people editing posts you're tagged in that was triggering these multiple notifications. It appears it's not. You just have to be "in" the thread somewhere and you'll get multiple notifications if someone replies and then edits, regardless of who (if anyone) is tagged in the post. Personally, I think the solution is simple: don't send out notifications for edits! (You'll be getting two for this post now, I suspect... although you weren't tagged in it until the edit.)
- AlexKaras · 9 years ago · Community Hero
Hi Colin,
> You'll be getting two for this post now I suspect .... [...]
Hm-m-m... Nope, just one notification...
- Colin_McCrae · 9 years ago · Community Hero
Ha ha ha!
OK. That's mad.
I get three notifications for you editing a post I'm NOT tagged in. (But I am participating in the thread)
You only get a single notification for me editing a post you ARE tagged in. As well as participating in the thread.
I give up. I have no idea how it's deciding how many notifications to send people! :p
- mgroen2 · 9 years ago · Super Contributor
AlexKaras wrote:
Hi Mathijs,
If I got your reply right... [...]
AlexKaras, thanks for making this all clear.
- mgroen2 · 9 years ago · Super Contributor
Thanks for the clarification.
Hope you don't get overwhelmed by notification alerts now ;)