Forum Discussion

scarleberg_1
New Contributor
13 years ago

Timer event fires irregularly

I'm trying to create a simple performance logging function in my script tests. What I want to do is to set a timer that calls this logging function regularly during the entire program run; each time the function is called, it writes the CPU and memory usage of the process to a simple text document.



What I have now is this, in my setup function that is called in the beginning of the test:

var performanceTimer = Utils.Timers.Add(1000, "Utils_Logging.performanceLog", true); // interval (ms), routine to call, enabled



The way I see it, the timer should fire its event once every second and call the performanceLog function every time. Or have I missed something here?



In a very stripped-down version of my performance logging function, this is the only thing being done:

Log.Message("Performance log");    
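
For reference, the full version essentially does the following (a simplified sketch - the process name, file path and property choices are placeholders, not my exact code):

function performanceLog()
{
  var proc = Sys.Process("MyTestedApp"); // process under test (placeholder name)

  // One line per sample: timestamp, CPU usage (%), memory usage (KB)
  var line = aqConvert.DateTimeToStr(aqDateTime.Now()) + ";" +
             proc.CPUUsage + ";" + proc.MemUsage;

  // Append the sample to a plain text file
  var fso = new ActiveXObject("Scripting.FileSystemObject");
  var file = fso.OpenTextFile("C:\\Temp\\perf.log", 8 /* ForAppending */, true);
  file.WriteLine(line);
  file.Close();
}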



What I would expect of this is to get one such log message every second. So for a test that runs for ~20 seconds I would get ~20 performance log messages. Oddly enough, this is not what happens! 

I can use my last run as an example. The test unit's start time was 9:41:04 and its end time was 9:42:06, so it ran for about a minute. From my other log messages I can see that the timer was added at 9:41:30 - about halfway through the test. From this, I expect the callback function to be called approximately 36 times, since there are 36 seconds left of the test at that point.



However, I don't get anywhere near that many calls to this function! I only get three. At each call I write a note to the log and also a short string to a text file, so there's not much being done. The related output in the log consists of three messages posted at the following times:

- 9:41:42

- 9:42:04

- 9:42:04



As you can see, two of the three messages were sent within the very same second. I just don't understand how to use the timer objects properly when they seem to operate at random. This is the one and only timer I have, and the callback function just performs some simple string printing. Increasing the interval to, say, 5 seconds doesn't help - I still get very few calls to the function.



Am I missing something or are the timer objects just completely useless?

8 Replies

  • AlexeyK
    SmartBear Alumni (Retired)

    Sebastian,


    I agree with Alexei. I did some investigating on my computer: I created a timer, ran a simple delay loop and checked the periods at which the timer fired (in the snippet below, timePeriod was varied between 500, 1000 and 5000 ms across runs):




    function TestFunction()
    {
      var i;
      var timePeriod = 5000;  // timer interval, ms (500, 1000 and 5000 were tested)
      var periodNumber = 10;

      // Create a timer that calls Unit1.TimerFunc every timePeriod ms
      var timer = Utils.Timers.Add(timePeriod, "Unit1.TimerFunc", true);

      for (i = 0; i < periodNumber; i++)
      {
        Delay(timePeriod);
        Log.Message("i = " + i.toString());
      }
    }
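
    (Unit1.TimerFunc is not shown above. A minimal sketch of what such a routine could look like - it simply logs the measured period between consecutive calls:)

    var lastTick = null;

    // Hypothetical timer routine: logs the time elapsed since the previous
    // call, so deviations from the nominal interval can be measured.
    function TimerFunc()
    {
      var now = new Date().getTime();
      if (lastTick != null)
        Log.Message("Timer fired, measured period: " + (now - lastTick) + " ms");
      lastTick = now;
    }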


    With a 500-ms timer, the results were close to what I expected: the deviations were about 10 ms (or 2%). A 1000-ms timer gave similar results (a maximum deviation of about 20 ms, or 2%), and a 5000-ms timer showed even better results (a ~20-ms deviation, or 0.4%).


    Then I changed the test conditions: in the loop, I called a script routine that simulated simple user actions in a Notepad window. The deviations increased: while a 5000-ms timer still worked quite stably, the 500-ms and 1000-ms timers showed huge deviations (up to 40-50%).


    As far as I can see, the cause is related to the nature of timers in the Windows operating system. Timer objects are script wrappers for ordinary operating system timers. As you may know, Windows uses an event-based approach: a timer posts the WM_TIMER message to the message queue of a window, and when the application processes the queue and reaches the timer message, it executes the timer's function. However, it's quite possible that processing of earlier messages takes longer, so the WM_TIMER message will not be processed at the moment it is expected.


    TestComplete processes tons of window messages, especially during test recording and playback, and some operations don't check the message queue at all because they cannot be interrupted. This makes timers an inappropriate means of monitoring performance and memory usage in scripts.


    You wrote that you wanted to monitor the CPU and memory usage. In this case, you can try using perfmon. To run it, simply type perfmon in the command line.


    The following code snippet demonstrates how to query the same performance counters that perfmon displays directly from a script, via WMI:




    function Test()
    {
      PostProcessInfo(".", "Explorer"); // "." means the local computer
    }

    function PostProcessInfo(computer, process)
    {
      // Connect to the WMI service on the given computer
      var objWMIService = GetObject("winmgmts:\\\\" + computer + "\\root\\cimv2");
      var PerfProcess = objWMIService.Get(
          "Win32_PerfFormattedData_PerfProc_Process.Name='" + process + "'");

      // Formatted counters need two samples taken some time apart
      PerfProcess.Refresh_();
      Delay(1000);
      PerfProcess.Refresh_();

      Log.Message("Memory: " + PerfProcess.WorkingSet);
      Log.Message("Virtual: " + PerfProcess.VirtualBytes);
      Log.Message("CPU Load: " + PerfProcess.PercentProcessorTime);
      Log.Message("Page Faults / Sec: " + PerfProcess.PageFaultsPerSec);
    }

  • AlexeyK
    SmartBear Alumni (Retired)

    Sebastian,


    "How do I learn about using the WMI service in this manner? How/where do I learn how to use and what to access in the perfmon process?"


    I suggest using the MSDN Library. Here are a few links I found yesterday:


    WMI Tasks: Performance Monitoring

    Performance Libraries and WMI

    Accessing Performance Data in Script

    Win32_PerfFormattedData_PerfProc_Process class

    WMI Performance Counter Types

    WMI Reference


    One more way to explore COM interfaces is to set a breakpoint in your script; when TestComplete stops on this breakpoint, press Ctrl+F7 to invoke the Evaluation dialog. In the dialog, enter the name of the variable that stores a reference to the desired object, click Evaluate, then click Inspect. You will see a window listing the properties and methods of the selected object.


    "...if it's possible to just start perfmon from within TC in this manner, and then let perfmon do its thing and log data in a way that I tell it to, instead of me collecting data into variables inside the script."


    Sebastian, I cannot say for sure, I'm not a perfmon guru, sorry :-) I searched for information about this and failed to find anything. I think collecting data in script variables is possible if you use a rather large timeout for your timers (say, 3-5 seconds).


    I looked at the perfmon help and I think I found a simpler solution. You can set up the needed monitoring settings visually and then start/stop the monitoring from the command line. Here is the full recipe:


    * Start perfmon.

    * In the tree on the left, there is a Performance Logs and Alerts | Counter Logs item. Right-click it and select New Log Settings from the context menu.

    * In the subsequent dialog, specify the name of the new log, say, MyTestCounters. Click OK.

    * This will display a dialog where you can specify the monitoring settings:

       - On the General tab you specify:

           - The file name to which perfmon will save data.

           - The counters to be monitored.

           - The update interval.

       - On the Log Files tab you can specify the file type (comma-separated values, tab-separated or others). It's probably a good idea to select the comma-separated or tab-separated format, as then you will be able to process the files with Excel or some other program.

       - On the Schedule tab, specify how you want the monitoring to be started and stopped. Select manual start and stop.


    * After you have created the new log, you can tell perfmon to start monitoring by executing the following command line:


    logman start "MyTestCounters"


    To stop monitoring, use:

    logman stop "MyTestCounters"
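
    These commands can also be launched right from a script. A minimal sketch (the helper names are illustrative; the log name must match the one created above):

    // Hypothetical helpers that run logman from a JScript unit via WSH
    function StartCounterLog()
    {
      var shell = new ActiveXObject("WScript.Shell");
      // 0 = hidden window, true = wait for the command to finish
      shell.Run("logman start \"MyTestCounters\"", 0, true);
    }

    function StopCounterLog()
    {
      var shell = new ActiveXObject("WScript.Shell");
      shell.Run("logman stop \"MyTestCounters\"", 0, true);
    }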

  • AlexeyK
    SmartBear Alumni (Retired)

    Sebastian,


    Using events to get the performance information is a possible solution, but you need to know these events and find a way to handle them from TestComplete. TestComplete includes an Events project item that is designed to let users handle TestComplete events as well as events of ActiveX objects and COM servers. As far as I understand, you are testing a .NET application, and I'm not sure it's possible to handle that application's events from within the test project.


    A possible alternative is to create handlers for the OnLogMessage, OnLogEvent, OnLogError and other similar TestComplete events. TestComplete triggers these events when it posts a message of the corresponding type to the log. If you run a functional test, there will be a lot of log messages. Within the event handlers, you can query the performance counters and post their values as part of the message's extended text (you can see it in the Additional Information section):




    var PerfProcess; // initialized elsewhere, e.g. via the WMI query shown earlier
    ...

    // Sample OnLogEvent handler
    function GeneralEvents_OnLogEvent(Sender, LogParams)
    {
      PerfProcess.Refresh_(); // update the counter values before reading them

      var s = LogParams.StrEx + "\n" +
              "Memory: " + PerfProcess.WorkingSet + "\n" +
              "Virtual: " + PerfProcess.VirtualBytes + "\n" +
              "CPU Load: " + PerfProcess.PercentProcessorTime;
      LogParams.StrEx = s;
    }
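
    For completeness, PerfProcess could be initialized once at test start. A sketch reusing the WMI query from the earlier snippet (the process name is just an example):

    function InitPerfProcess()
    {
      // Connect to WMI on the local computer and bind to the counter instance
      var objWMIService = GetObject("winmgmts:\\\\.\\root\\cimv2");
      PerfProcess = objWMIService.Get(
          "Win32_PerfFormattedData_PerfProc_Process.Name='Explorer'");
      PerfProcess.Refresh_(); // take the first sample; later refreshes return fresh values
    }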


    You can find more information on event handling in the TestComplete help - http://smartbear.com/support/viewarticle/11118/. However, I think this approach has a number of disadvantages: you will not get performance data at regular time intervals, and it will be difficult to analyze the data.


    So, I'd suggest that you explore the approach using the logman utility. Why did you write that the setup is complex? You do everything visually in a special dialog and have to define all the settings only once, then reuse them... If you cannot do something, you can ask about it on the Microsoft forums or ask for help here.


    Anyhow, let us know how your experiments progress. :-)

  • AlexKaras
    Champion Level 3
    Hi Sebastian,



    What you are observing is the expected behavior.

    The point is that Windows is not an RTOS (real-time operating system), and thus it does not guarantee that timer events fire at exactly the specified moments. What is guaranteed (more or less?) is that an event fires as soon as its time interval has elapsed and the OS has an available time slot.

  • But how come it's so easy to get my requested functionality in any regular programming language? My regular programs use operating systems too. And it's possible to make this kind of timer in JavaScript - but apparently not in TC's JScript engine? I have built timers in both higher- and lower-level languages/environments than this one, but I've never seen this kind of thing happen before.



    My expectations are based upon the common behaviour of timers in general. I know that they aren't always 100% accurate, but this timing behaviour is just embarrassing IMO.
  • AlexKaras
    Champion Level 3
    Hi,



    I don't know TC's internal implementation, so this is my guess, but I think the point is that the Timer functionality is implemented on top of the regular script engine. (BTW, TC uses the JScript.dll provided by the OS, so there is nothing special here.) Considering that all communication between TC, the OS, the script engine and the tested application is quite possibly done via COM calls (whether local or remote, in inter-process communication terms), it is no surprise to me that generating and processing a timer event takes much more time in the scripting language than in native code produced by a regular compiled language.
  • Thank you very much for your reply :) 



    I'm running automated endurance/stress tests on a desktop application and want to log information about that process and the hardware utilization. So it seems like it won't be easy to get well-timed readings using timers, given your attempts to get correctly fired events even in an application as simple as Notepad.



    I started looking into Performance Monitor after I concluded that the timer objects won't give me the precision I need, since they also don't give me reproducible results, firing at "random". I looked for possibilities to run perfmon from within TC, but didn't really find anything. Therefore I wrote a simple automation script for starting monitoring via the GUI, which is where I'm currently at.

    However, your way of accessing the perfmon counters from within the script sure looks interesting, especially the fact that you can read the measured data so easily :)



    How do I learn about using the WMI service in this manner? How/where do I learn how to use and what to access in the perfmon process? When using the perfmon GUI, there are tonnes of performance counters to choose from, and I figure that not all of these can be active...



    The most important question might be if it's possible to just start perfmon from within TC in this manner, and then let perfmon do its thing and log data in a way that I tell it to, instead of me collecting data into variables inside the script? 'Cause I guess I'll fall back into the timer accuracy issues again if I start collecting monitoring data from the perfmon process in TC (since this would correspond to my initial approach described in the first post).

    So would it be possible to just call perfmon, set certain parameters deciding what it should monitor, and then let it run by itself without having to gather data from it through the script?



    I have started working with Excel sheets and charts that visualize my gathered data, so I don't really have a need for keeping the monitored data inside of TC. 





    An alternative approach that crossed my mind earlier, when dealing with the inaccurate timers, was to make an asynchronous call to a logging function that locks its control flow inside a never-ending loop, which only returns when the process it monitors is dead. This loop would then sleep/delay for a specific time interval and thus act as a monitor.

    The issue with this quite ugly approach was that it didn't seem to be possible to call script unit functions asynchronously - only object methods/properties.
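
    Roughly, the idea was something like this (a sketch only - the process name is a placeholder, and the routine would have to be started asynchronously, e.g. via Runner.CallObjectMethodAsync, which only accepts object methods):

    // Sketch of the polling-loop idea: sample at a fixed interval until
    // the monitored process (placeholder name) is gone.
    function monitorLoop()
    {
      while (Sys.WaitProcess("MyTestedApp", 0).Exists)
      {
        performanceLog(); // write one CPU/memory sample
        Delay(5000);      // sleep for the sampling interval
      }
    }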
  • I must say that your replies are extremely helpful :) I've read quite a lot about WMI and also confirmed that all the data I need to gather is accessible through WMI objects (and I also know the names of the ones I need).



    I have now started to look into the different ways to go:


    • Use perfmon in a GUI scripted manner

    • Use perfmon via logman from the command line

    • Use WMI to gather data inside the script, without the use of any external application.




    My conclusions regarding the above so far are the following:


    • It's easy but pretty unsafe to use the perfmon GUI, since the script is sensitive to unexpected and unfocused windows, components being disabled, etc. The settings are quite complex to set up, since you either have to create script routines for that or work on the XML templates.

    • It's both easier and safer to use logman to launch perfmon, since there's no graphical interference when performing start and stop actions. I guess the settings are just as complex to manage as in the previous alternative - it'll probably have to be done in the same way.

    • WMI objects are just as easy to use as logman - everything runs from the script. The major issue lies in the fact that the WMI objects seem to need to be updated explicitly and periodically in order to provide monitoring data. The two previous alternatives leave that task to the perfmon process.




    I've done some more testing with timer accuracy in my scripts and have come to the conclusion that they don't really become any more accurate with longer intervals either (I tested 60- and 120-second intervals on 50-minute runs). So I have excluded timers as part of the WMI implementation. The alternative solution I've found is to log on events in the tested application. This is something that has been requested for my implementation, and it seems like a nice solution.

    The only problem here is that if the application runs without any user interactions, I guess no subscribed events will be fired and caught by TC, which will create large "blanks" in the log during e.g. data processing sessions where the application may be busy for several hours.



    My conclusion at this moment is that WMI objects are the preferred way to go, if I can find a reliable way of performing occasional periodic sampling during those times that don't produce any log-triggering events. Timers don't seem to work for my testing at all - their accuracy is really terrible :)

    Maybe it's possible to subscribe/listen to some more "rare"/"unusual" events that fire very often but usually aren't of interest, and use them as a replacement for timer events? I guess .NET applications have hundreds of various kinds of events being fired every second, but only a fraction of them are of interest to me as a tester. If I took an interest in some of these, they could provide me with regular logging opportunities.



    It may be worth pointing out that I don't really need super-accurate timings for my monitoring - I just want to be able to make sure that data is monitored once in a while. But my tests with timers show that they are unreliable, and therefore I need some other way of getting periodic monitoring measurements.