Forum Discussion
Hi, hannecroonen
Yes, the question is valid and quite complex. Its complexity, from my point of view, stems from two factors: the "observer effect" and environment instability.
Environment - from our experience, the best setup we have found is a dedicated local physical server hosting a single virtual PC. The virtual PC is sized according to the product's Recommended System Requirements, so it is a fairly isolated environment. We apply OS updates regularly, but each time we check that the Windows updates have not changed performance. If they have, we figure out how to deal with it (sometimes we adjust the baselines, sometimes we can fix the product or the environment). The rest of the time, updates are off (because, as you know, Windows 10 can download and install them at any time). Then we measure the scenarios, trying to exclude unnecessary waits, preparations, and so on from the resulting timings. After that, we analyze the dispersion and increase the number of measurements to neutralize environment fluctuations. When we need to understand why the environment is loaded, the first place we look is the Windows Event Viewer.
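To give an idea of the "repeat and analyze dispersion" part, here is a minimal Python sketch; the scenario body, the run count, and the 10% dispersion threshold are illustrative assumptions, not anything TestComplete-specific:

```python
import statistics
import time

def run_scenario() -> float:
    """Placeholder for the scenario under test; returns elapsed seconds.

    In a real setup this would drive the application and time only the
    action itself, excluding preparations and waits.
    """
    start = time.perf_counter()
    # ... perform the measured action here ...
    return time.perf_counter() - start

# Repeat the measurement to smooth out environment fluctuations.
RUNS = 20
timings = [run_scenario() for _ in range(RUNS)]

mean = statistics.mean(timings)
stdev = statistics.stdev(timings)
print(f"mean = {mean:.3f}s, stdev = {stdev:.3f}s")

# If the dispersion is large relative to the mean, the environment is
# too noisy: add more runs or investigate the load (e.g. via the
# Windows Event Viewer) before trusting the numbers.
if stdev > 0.1 * mean:
    print("High dispersion: results are unreliable, add more runs")
```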
"Observer effect" - it's obvious that the measurement tool brings some mess into results. Unfortunately, it is not only for QA, but it relies on more comprehensive rules. TestComplete (like any other automation tool) tries to manipulate the app process flow and measure it. Of cause the app and OS consume more resources for this. First of all, we need to set the goal. It we want to get very precise results - I cannot recommend to use any automation tools. It's better to get such results manually. But in this case you can also get errors of measurement. Just because our eyes and fingers are also not so good as we want.
But I'd prefer the "alarm" approach. Instead of an "absolute scale" we can use a "relative scale": we take results from a tool like TestComplete and compare them with earlier results from the same tool. If we see that something has become worse than before, we get an alarm and can then investigate it more deeply with additional tools such as profilers: https://smartbear.com/product/aqtime-pro/features/performance-profiler/. The idea is to compare profiler results from "good" and "bad" test runs.
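As a rough illustration of that relative comparison, here is a minimal Python sketch; the baseline file name, the 15% tolerance, and the use of the median are all my assumptions, not an actual TestComplete or AQTime API:

```python
import json
import statistics
from pathlib import Path

# Hypothetical baseline file: per-scenario timings recorded from a
# known-good run of the same tool on the same environment.
BASELINE_FILE = Path("baseline_timings.json")
TOLERANCE = 1.15  # alarm if a scenario gets more than 15% slower

def check_against_baseline(current: dict[str, list[float]]) -> list[str]:
    """Return the names of scenarios that regressed past the tolerance."""
    baseline = json.loads(BASELINE_FILE.read_text())
    alarms = []
    for name, timings in current.items():
        median = statistics.median(timings)
        if name in baseline and median > baseline[name] * TOLERANCE:
            alarms.append(name)
    return alarms

# Example: alarms = check_against_baseline({"open_report": [1.21, 1.19, 1.23]})
```

Any scenario this flags then becomes a candidate for the deeper profiler investigation.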
The other steps toward more stable results are much the same as for "Environment": increase the number of measurements and exclude unnecessary steps between the actions that really matter (see the sketch below). Additionally, try turning off the TestComplete extensions that are not needed for your test (for example, MSAA or UI Automation), though this doesn't add much.
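One simple way to keep preparations and waits out of the timings is to wrap only the critical action in a timer. A minimal Python sketch, assuming a hypothetical helper and scenario names of my own (not a TestComplete API):

```python
import time
from contextlib import contextmanager

@contextmanager
def measured(label: str, results: dict[str, float]):
    """Time only the block inside the `with` statement.

    Preparations (logging in, opening dialogs, waits) stay outside the
    block so they do not pollute the measured action.
    """
    start = time.perf_counter()
    try:
        yield
    finally:
        results[label] = time.perf_counter() - start

results: dict[str, float] = {}

# prepare_document()            # setup: NOT measured (hypothetical)
with measured("open_report", results):
    pass                        # the one action we actually care about
# close_document()              # teardown: NOT measured (hypothetical)

print(results)
```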
Off-topic:
> can then investigate it more deeply with additional tools such as profilers: https://smartbear.com/product/aqtime-pro/features/performance-profiler/
An irony of life: AQTime (originally MemProof) was the initial product of SmartBear (then AutomatedQA, and TotalQA before that, if I remember right), and TestComplete was created to automate its testing (as was mentioned here at some point).
Now AQTime is used to performance-tune TestComplete... 🙂
I would like to believe that performance, resource usage, and memory (leak) control and optimization are still of significant importance for at least some companies and projects, and that AQTime is not a low-priority internal product for SmartBear.
Might it be an option to have an event like this Community Day but dedicated to AQTime sometime in the future? Maybe with some detailed technical demos of how to use AQTime with modern .NET applications?