I have a question about Performance Testing an application from within TestComplete.
We currently have a suite of keyword tests that call a script written in Delphi code; the script uses an object called StopWatchObj via the HISUtils plugin. As I understand it, this object reports the time between events when the test runs.
There is quite a bit of focus on Performance internally at present and it got me wondering....
Is StopWatchObj a useful/valid method of testing performance in an application?
What is the difference between using StopWatchObj & Performance Counters?
I've read quite a bit about Performance Testing in TestComplete (https://support.smartbear.com/testcomplete/docs/testing-with/advanced/monitoring-performance/basic-c...) and it feels like we're doing something a little non-standard and calling it 'Performance Testing' - this might be purely my perception.
Interested in any thoughts/advice as always.
Yes, you posted your question to the web testing part of Community, but just to double-check:
-- Are you talking about testing and performance measurements for the web application?
OK, so we are talking about a desktop application created with Delphi, and you want to measure/evaluate/monitor its performance. Correct?
> What is the difference between using StopWatchObj & Performance Counters?
The difference is that StopWatch just measures the time interval between two calls, while Performance Counters sample relevant metrics (CPU load, memory consumption, etc.) periodically (e.g. every 0.5 sec).
You may use StopWatch (and this is a good idea and a valid approach) to measure the application's performance from the end user's point of view.
For example, a test may look like this:
-- Call StopWatch to note the time;
-- Click Refresh button, check that it becomes disabled and wait until it is enabled again (which means that data was refreshed);
-- Call StopWatch again to measure and report the time interval the user had to wait for the data refresh.
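To make the pattern concrete, here is a minimal, hypothetical sketch of those three steps in Python. Everything here (the `StopWatch` class, `wait_until`, the fake button) is a made-up stand-in for illustration only, not the HISUtils or TestComplete API:

```python
import time

class StopWatch:
    """Conceptual analogue of a stopwatch object: measures one interval."""
    def start(self):
        self._t0 = time.perf_counter()
    def stop(self):
        return time.perf_counter() - self._t0

def wait_until(condition, timeout=10.0, poll=0.05):
    """Poll a condition until it holds or the timeout expires."""
    deadline = time.perf_counter() + timeout
    while time.perf_counter() < deadline:
        if condition():
            return True
        time.sleep(poll)
    return False

class FakeRefreshButton:
    """Simulated 'Refresh' button that re-enables itself after ~0.2 s."""
    def click(self):
        self._ready_at = time.perf_counter() + 0.2
    @property
    def enabled(self):
        return time.perf_counter() >= self._ready_at

sw = StopWatch()
button = FakeRefreshButton()

sw.start()                                  # 1. note the time
button.click()                              # 2. click Refresh...
assert wait_until(lambda: button.enabled)   #    ...and wait until it is enabled again
elapsed = sw.stop()                         # 3. measure and report the interval
print(f"Refresh took {elapsed:.2f} s")
```

In a real TestComplete test the click and the enabled-state check would of course go against the actual UI objects, but the timing structure is the same: start, perform the user-visible action, wait for the visible completion signal, stop.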
Note that the above steps tell you how long the user waited for the data refresh, but nothing about what that time was spent on.
This is where Performance Counters may help. For example, assuming that your application gets data from a database, you may turn on the counters provided by your database server (talk to your database administrator (DBA) about which counters are relevant) and also turn on, say, CPU and memory counters on the client machine (where the tested application runs).
After repeating the above test, you may notice that, for example, after the Refresh button was clicked, the database counters showed a lot of sequential scans, and then the client counters showed growing CPU and memory consumption.
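To illustrate how counter sampling differs from the stopwatch approach, here is a hypothetical Python sketch: a background thread samples a "counter" at a fixed interval while the operation under test runs. The counter here (the length of a growing list) is a made-up stand-in for a real CPU, memory, or database counter:

```python
import threading
import time

def sample_counters(read_counter, interval, samples, stop_event):
    """Periodically sample a counter value until told to stop."""
    while not stop_event.is_set():
        samples.append((time.perf_counter(), read_counter()))
        stop_event.wait(interval)  # sleep, but wake early if stopped

# Stand-in for a real metric: we just watch the size of a growing list.
data = []
def read_counter():
    return len(data)

samples = []
stop = threading.Event()
sampler = threading.Thread(
    target=sample_counters, args=(read_counter, 0.05, samples, stop)
)
sampler.start()

# The "operation under test": simulates the client pulling in and
# post-processing far more rows than it actually needs to display.
for _ in range(10):
    data.extend(range(10_000))
    time.sleep(0.02)

stop.set()
sampler.join()
print(f"{len(samples)} samples taken while the operation ran")
```

Unlike the stopwatch, which yields a single number per run, this produces a time series: you can see *when* during the operation the counter climbed, which is what lets you correlate, say, a burst of database scans with a spike in client memory.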
This may indicate the problem of data being queried from the database in a non-optimal way (e.g. without using indexes) and then post-processed on the client (e.g. sorted, and data that should not be displayed filtered out).
If this suspicion turns out to be correct, you may ask the developers to query only the required data (to decrease processing time and memory use on the client) and ask the DBA to introduce the indexes needed to improve query performance.
Additionally, you may consider using a profiling tool (e.g. AQtime by SmartBear - https://smartbear.com/product/aqtime-pro/overview/) to figure out which code on the client caused the increase in CPU and memory consumption. With this information, you can discuss with Development whether the problematic code can be made faster and/or less memory-consuming, depending on your priorities.