Forum Discussion

MulgraveTester
Frequent Contributor
8 years ago

Log file size

I have a large set of tests that are executed using TestExecute. The test team are now stringing several test runs together, which makes the MHT log file very large. I am looking for ways to reduce its size.

 

1) Is there a way to control the resolution of the screen grabs or limit sys.desktop to only capture one monitor on a multi-monitor test machine?

 

2) What I am mainly looking at is breaking up the log files as the tests are executed, using the log.SaveResultsAs command, but the files keep growing because each save contains everything from the beginning of the run rather than just what was logged since the previous log.SaveResultsAs call. Is there a way to reset the log?

 

3) Even though I am using log.SaveResultsAs(filename, lsMHT, FALSE), visualiser images are still being logged. The FALSE should prevent this - shouldn't it?

 

 

Example code (VBS):

 

Sub TestLog
  'Experiments to reduce log size
  Dim i, j

  For j = 1 To 5
    Log.Message("Index j = " & j)
    For i = 1 To 5
      Log.Message("Index i = " & i)
      Log.Error("Basic error with no specific picture.") 'Visualiser image should not be logged, but is
    Next

    'Not sure whether explicitly specified images are logged with the Visualiser turned off
    Call Log.Message("Message with screenshot", , , , Sys.Desktop)
    Call Log.Error("Error with screenshot", , , , Sys.Desktop)
    Call Log.SaveResultsAs("C:\Temp\test" & j & ".mht", lsMHT, False)
  Next
End Sub

6 Replies

    • MulgraveTester
      Frequent Contributor

      Thanks, Manfred, but that's not what I was looking for. I always have the visualiser turned off and manually post images to the log when errors or warnings occur in my script.

       

      What I am really after is a solution to item (2). I want to divide up my log and spit it out as my test progresses, without each new log containing the contents of the previous one. Is there a way to do that?

    • MulgraveTester
      Frequent Contributor

      Thanks, essaid, but I am not recording any tests and am not using the visualiser. My test script is 100% written in code.

      • Colin_McCrae
        Community Hero

        Do your own logging.

         

        I do.

         

        I have a Script Extension for logging (it's just a TXT file, so easy to handle), a Script Extension for reading in test data and outputting results, and a Script Extension for various common, application-agnostic control functions like dealing with services, file and folder handling, etc. Another one for TFS integration. And a "driver" framework script unit which I can move from project to project.

         

        Each project/application then just needs its own name map and functions specific to it.

         

        I prefer my own log simply as it gives me full control over it and how granular it is. I use global variables as control flags to switch parts of it on and off. And I can start and stop logs as I please - which I do.

         

        Kind of similar to how logging in Python works if you are familiar with that?

         

        I tend to only use the built-in logs for debugging when I'm building stuff. Once I hand new test functions off to the main test teams, they use my logs, which give them more application-specific errors. Code errors will also be caught and put in there, but will be marked as such and handed back to me. Most tend to be application and data problems by the time the test guys are using them. There shouldn't be many code errors happening by the time I hand them off.
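The approach Colin describes (start and stop log segments on demand, with flags controlling granularity) maps closely onto Python's logging module, which he mentions as the model. A minimal sketch of that pattern, purely illustrative and not TestComplete code (the helper names and file paths are invented for the example):

```python
import logging
import os
import tempfile

def start_segment(logger, path):
    """Attach a file handler: from now on, messages go to a fresh log file."""
    handler = logging.FileHandler(path, mode="w")
    handler.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))
    logger.addHandler(handler)
    return handler

def stop_segment(logger, handler):
    """Detach and close the handler: the segment file is complete."""
    logger.removeHandler(handler)
    handler.close()

logger = logging.getLogger("mytests")
logger.setLevel(logging.DEBUG)  # acts as the "granularity" control flag

tmpdir = tempfile.mkdtemp()
for segment in (1, 2):
    path = os.path.join(tmpdir, "test%d.log" % segment)
    h = start_segment(logger, path)
    logger.info("segment %d start", segment)
    logger.error("an error in segment %d", segment)
    stop_segment(logger, h)
```

Because each file handler is attached only for the duration of its segment, every file contains just that segment's messages - exactly the "each saved log starts from zero" behaviour the OP wants from Log.SaveResultsAs.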