Forum Discussion

sonya_m
SmartBear Alumni (Retired)
4 years ago

Community Day 2021 - Regression Testing of Performance

Let's move on to the next session of the day. To make sure you don't miss any sessions, subscribe to the event tag - #CommunityDay2021. We will be posting great content throughout the day!

 

Regression Testing of Performance

by Alexandr Gubarev, TestComplete Senior Test Lead

 

The session takes a really deep dive into the subject. This talk is based on the experience of the TestComplete and ReadyAPI QA teams, who decided to rapidly extend their test suites to protect users from performance regressions.

 

This video will be useful for those who want to add performance checks to their functional tests, regardless of the type of application: Desktop, Web, Mobile, or even a web service. We will define a common approach to when, and for which scenarios, you need regression performance testing, tell you about useful TestComplete features, share advanced Google Sheets practices, and give recipes for quickly implementing performance checks in your tests.
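
As a quick taste of those recipes, here is a minimal sketch (not code from the session) of attaching a timing check to an ordinary functional test using TestComplete's aqPerformance object; the alias names and the 1500 ms budget are assumptions:

def Test_OpenReport():
    aqPerformance.Start()                              # start the default performance counter
    Aliases.MyApp.MainForm.OpenReportButton.Click()    # hypothetical UI action under test
    aqPerformance.Check(1500, "Open report")           # posts an error to the log if the action took > 1500 ms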

 

Become a Community Day Winner!🏆

Watch the video sessions, post your questions, and give Kudos to earn event points!

Read more about participation rules.

 

Watch the session:

 

The VideoRecorder extension, developed by Alexandr, is mentioned in the interview.

 

Timestamps:

00:00 Speaker introduction

03:22 About regression and performance testing

04:45 An example case

06:04 Defining scenarios

09:28 Test steps

10:58 Using Google Sheets for a POC

11:53 TestComplete: the aqPerformance object

13:22 TestComplete: Time with Children in the log

15:15 TestComplete: monitoring Performance Counters

16:26 Demo: defining the scenario

17:15 Demo: high-level steps

18:12 Demo: measuring

20:13 Demo: Google Sheets as storage

21:04 Demo: the report table

22:43 Demo: creating a web service on Google Sheets

24:51 Demo: publishing your report for your manager

25:22 Demo: working with the web service from TestComplete

26:38 Demo: running the tests

29:37 Some final words
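
For the Google Sheets demos above (20:13 onward), here is a minimal, hypothetical sketch of pushing a single timing result from TestComplete (Python) to a Google Apps Script web app published from a spreadsheet; the URL placeholder and the payload shape are assumptions, not the session's exact code:

import json

# Hypothetical URL produced by "Deploy as web app" in the sheet's Apps Script editor
SHEET_WEBAPP_URL = "https://script.google.com/macros/s/XXXX/exec"

def PostTiming(scenario, elapsed_ms):
    body = json.dumps({"scenario": scenario, "elapsedMs": elapsed_ms})
    request = aqHttp.CreatePostRequest(SHEET_WEBAPP_URL)   # TestComplete's built-in HTTP client
    request.SetHeader("Content-Type", "application/json")
    response = request.Send(body)
    if response.StatusCode != 200:
        Log.Warning("Upload to the sheet failed: HTTP %d" % response.StatusCode)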

 

Any questions on regression testing of performance? Ask away in the comments below!

  • Thanks for the video! Do you have a preference for when to start performance testing to get a baseline on a new project? Would you start at the beginning and see how performance is affected by each new feature, or wait until (close to) the end and then try to fix it? I can see a case for both.

    • AlexKaras
      Champion Level 3

      Hi Marsha,

       

      Do you have a preference [...]

      If I may jump in, I would suggest an intermediate option: start when the foundation of the solution has been established and it has been decided that it is good enough and will be used from now on (i.e. the generic architecture of the solution has been tried and the core engine approved). That seems like a good moment to create an initial baseline, and then move on and monitor how each new feature affects performance.

      Does this sound reasonable?

       

      P.S. Update: It turned out that I wrote practically the same thing that AGubarev replied three minutes earlier 🙂

       

    • AGubarev
      SmartBear Alumni (Retired)

      Hi, Marsha_R 

      Thanks for the question.

      From my experience, when we start projects from scratch, first of all the devs need to "understand" and implement the basic functionality. That functionality is close to a POC, so it can change at any moment. So I'd concentrate all QA resources on functional tests. Maybe the only exception, if we talk about performance, is load/stress testing. I remember a case when we as QA performed load/stress testing of a raw version of a client-server app, and as a result the developers completely changed the infrastructure and solution vendors (the previous ones had limited scaling capabilities). So, if you are testing a project that will potentially be scaled (it has a database, or a server in the background) and it's not a typical project for your company, it's reasonable to investigate its limitations at the earlier stages.

       

      Regarding performance regression testing, I still think that QA resources are a "luxury", so product teams should spend them on the most critical scenarios. And it's hard to predict such scenarios while the app is not used by real customers. Of course, there are obvious performance issues, but from my experience even ordinary functional tests can find them. So, to sum up, I cannot see a case where it is reasonable to develop this kind of test before the core functionality is at least covered by functional tests.

  • AlexKaras
    Champion Level 3

    Just an off-topic note: a dark theme is not the best choice for a demo video watched on smaller monitors...

     

    • AGubarev
      SmartBear Alumni (Retired)

      Hi, AlexKaras 

       

      Thanks for this info - next time I'll definitely make it lighter. 

      I'm a huge fan of dark themes everywhere (maybe just because I like to code at night), so I couldn't stop myself from promoting the dark theme in TestComplete one more time 🙂

       

  • Lee_M
    Community Hero

    Alexandr,

     

    I know it is not part of this video, but one thing I wanted to ask about is visibility of TestComplete results.

     

    I have lots of regression tests whose results I would like my boss and the company to have for audit purposes ("test X ran last week").

    I need visibility on this to show the results - how can this be done?

     

    Currently, you can export the results in a format only Internet Explorer can really use (.mht).

    The log file can be limited to X entries to save space.

     

    How can I get a system to link back to and show results a week/month/year from now, without exporting ALL the logs or pulling them into Jenkins?

    - bear in mind that while tests are being built there is a large percentage of failed/incomplete tests before the final product is ready

    • AGubarev
      SmartBear Alumni (Retired)

      Hi, Lee_M 

      Thanks for the interesting question.

       

      TestComplete has two kinds of result representation: the ordinary log (which is more technical and mostly needed to understand the cause of a test failure) and the Summary (which contains high-level info about passed/failed scenarios).

      If you need to show a high-level report to your boss, I'd recommend using the Summary representation (it can be exported as JUnit and imported into a test reporting system, or just printed as a PDF).

      The next step could be to put the test set into a test orchestration system: Zephyr, Azure DevOps, or Jenkins (which is free to use).

      That gives you merged reports, scalability, different kinds of representation, and historical data.

       

      Here are some screenshots from Jenkins:

      It also stores the "technical" logs, so you can always see them, and it has options to clean the storage on a schedule.

       

      Another thing I like about the Summary is that you can add all the test cases/scenarios you discussed with your boss or the devs and automate them one by one; just put Runner.Stop() before the not-yet-automated test cases, and all the test cases that are not automated will land in the Unexecuted group, as in the sketch below.

      So your boss can see the progress in the report - how many tests are already automated and what is in the backlog. The technical details are usually not so important for management.
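
      A minimal sketch of that Runner.Stop() trick, assuming the project's Test Items list runs these routines in order (the scenario names are hypothetical):

      def Test_Login():
          Log.Checkpoint("Login works")    # already automated scenario

      def StopBeforeBacklog():
          # Ends the run here: every Test Item after this one is reported
          # in the Summary's Unexecuted group, which shows the backlog
          Runner.Stop()

      def Test_ExportInvoice():
          pass    # not automated yet; listed in Test Items so it appears in the report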

      • cauline
        Contributor

        There is also another way to present trends for your tests in Jenkins. We use an extension called 'Plots' that, in our case, we use to graph page load times daily/weekly.

  • Hello, 

     

    I want to start testing the performance of our desktop program, but it seems that TestComplete itself has some influence on that performance.

    How can I avoid the influence of TestComplete/TestExecute on our performance tests?

    (I tried this and got different timings daily, depending on how many programs were open - but even with only TC and our program under test running, the timings differ daily/hourly/...)

     

    thanks!

    • AGubarev
      SmartBear Alumni (Retired)

      Hi, hannecroonen 

      Yes, the question is valid and quite complex. Its complexity, from my point of view, comes from two factors: the "observer effect" and environment instability.

       

      Environment - from our experience, the best setup we came up with is a local physical server hosting only one virtual PC. The performance of this virtual PC follows the Recommended System Requirements of the product, so it's a fairly isolated environment. We install OS updates regularly, but every time we check that the Windows updates do not change performance. If they do, we try to figure out how to deal with it (sometimes we change the baselines, sometimes we fix the product or the environment). The rest of the time the updates are Off (because, you know, Windows 10 can download and install them at any time). When we measure scenarios, we try to exclude unnecessary waits, preparations, etc. from the resulting timings. After that we analyze the dispersion and increase the number of measurements, which neutralizes environment fluctuations. When we need to understand why the environment is loaded, first of all we look at the Windows Event Viewer.
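
      A minimal sketch of the "exclude the unnecessary waits" point: start the aqPerformance counter only right before the action under test and read it as soon as the action has really finished (all object names here are hypothetical):

      def Test_OpenLargeProject():
          app = Aliases.MyApp                              # hypothetical alias for the tested app
          app.MainForm.OpenRecentButton.Click()            # preparation: not measured
          aqPerformance.Start("OpenProject")               # the measured window starts here
          app.OpenDialog.OpenButton.Click()                # the action under test
          app.WaitAliasChild("ProjectWorkspace", 30000)    # wait until the project is really open
          Log.Message("Open took %d ms" % aqPerformance.Value("OpenProject"))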

       

      "Observer effect" - it's obvious that the measurement tool brings some mess into results. Unfortunately, it is not only for QA, but it relies on more comprehensive rules. TestComplete (like any other automation tool) tries to manipulate the app process flow and measure it. Of cause the app and OS consume more resources for this. First of all, we need to set the goal. It we want to get very precise results - I cannot recommend to use any automation tools. It's better to get such results manually. But in this case you can also get errors of measurement. Just because our eyes and fingers are also not so good as we want.
      But I'd prefer the "alarm" approach. Instead of "absolute scale" we can use "relative scale", so, we get results from tool like TestComplete and compare these results from results of the same tool. So, if we understand that something is worse than previous we get this alarm and after that can investigate this deeper with some additional tools like profilers: https://smartbear.com/product/aqtime-pro/features/performance-profiler/. The idea is to compare profiler results from "good" and "bad" test runs.
      For the other steps to get more stable results - they are quite the same as for "Environment" - increase the number of measurements, exclude the unnecessary steps between really important actions. Additionally, try to turn off the Extensions in TestComplete, which are not necessary for your test (it can be: MSAA, UIAutomation) - but this doesn't add a lot of progress.
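
      A minimal sketch of this "alarm" approach (RunScenario(), the baseline, and the tolerance are hypothetical; aqPerformance is TestComplete's built-in timer object):

      RUNS = 7               # more measurements smooth out environment fluctuations
      BASELINE_MS = 1200     # median timing from the last known-good run
      TOLERANCE = 0.15       # raise an alarm when >15% slower than the baseline

      def CheckScenarioPerformance():
          timings = []
          for _ in range(RUNS):
              aqPerformance.Start("Scenario")
              RunScenario()    # hypothetical: only the steps being measured
              timings.append(aqPerformance.Value("Scenario"))
          median = sorted(timings)[RUNS // 2]
          if median > BASELINE_MS * (1 + TOLERANCE):
              Log.Error("Alarm: %d ms vs %d ms baseline - time to profile" % (median, BASELINE_MS))
          else:
              Log.Checkpoint("OK: %d ms (median of %d runs)" % (median, RUNS))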

      • AlexKaras
        Champion Level 3

        Off-topic:

         

        after that we can investigate deeper with some additional tools like profilers: https://smartbear.com/product/aqtime-pro/features/performance-profiler/

         

        Irony of life: AQTime (MemProof) was the initial product of SmartBear (then AutomatedQA, and TotalQA before that, if I remember right), and TestComplete was created to automate its testing (as was mentioned here at some point).

        Now AQTime is used to performance-tune TestComplete... 🙂

         

        I would like to believe that performance, resource usage, and memory (leak) control and optimization are still of significant importance for at least some companies/projects, and that AQTime is not an internal product of low importance for SmartBear.

        Might it be an option to have an event like this Community Day but dedicated to AQTime sometime in the future? Maybe with some detailed and technical demos of how to use AQTime with modern .NET applications?