Forum Discussion

suthersons
Occasional Contributor
13 years ago

Image differences when running a recorded script on another system

Hi Sir,


This is Sutherson.S from Syncfusion Software Pvt Ltd. We are one of the licensed users of your TestComplete tool.


I am very happy to be using your tool for my automation.


For the past few days, I have been facing a problem with image comparison. I created a TestComplete project and recorded my automation there; after completing it, I played back the scripts on a daily basis.


A few days later, I needed to run the recorded scripts on another automation machine, so I copied the project over to the new machine and played back the scripts there.


While validating the logs, I found that every image comparison reported a difference. The same script shows success on the first machine where it was recorded, and the second machine has exactly the same configuration (1280x1024 resolution, Windows 7, Basic theme, etc.). Only when playing back on the second machine do I get image differences, even though the actions themselves run perfectly.


Can you tell me the reason for this?


Could you let me know the solution as early as possible?


Once again, thanks for reading this post.

3 Replies

  • tristaanogre
    Esteemed Contributor
    Please read the following articles.



    http://smartbear.com/support/viewarticle/11145/

    http://smartbear.com/support/viewarticle/11136/



    Essentially, any change in how the target test system renders images on screen could result in image differences.  Pixels may line up a little differently, they may be blurred, use more colors, etc.  These two articles give a pretty good rundown of the kinds of differences you may run into and what you can do to mitigate them; a small example of loosening the comparison tolerances follows this reply.



    To be truthful, unless there is no other way of doing so, I avoid region comparisons in automated tests.  The reason is that you spend a lot of time building code and mitigating differences when a screen comparison can be done in just a few minutes with a manual test.  This comes under the heading of what I call "What I CAN automate is not always what I SHOULD automate."
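    If you do need to keep a region comparison, one mitigation the articles describe is allowing a small tolerance. The sketch below uses TestComplete's Regions.Compare from the script runtime, written in TestComplete's Python syntax (which newer versions support; the same object-model calls work from VBScript or JScript). The baseline file path and the LogoPanel alias are hypothetical placeholders, and the parameter order should be double-checked against the Regions.Compare documentation for your version.

    def check_logo_region():
        # Log, Aliases, Regions, and Utils are TestComplete runtime objects;
        # no imports are needed in a script unit.
        # Load the baseline image captured on the original machine (hypothetical path).
        baseline = Utils.Picture
        baseline.LoadFromFile("C:\\Baselines\\logo.png")

        # Capture the current image of the control (hypothetical alias).
        actual = Aliases.MyApp.MainForm.LogoPanel.Picture()

        # Regions.Compare(Picture1, Picture2, Transparent, Mouse,
        # ReportDifference, PixelTolerance, ColorTolerance): allow up to
        # 20 differing pixels and a small per-channel color drift.
        if Regions.Compare(baseline, actual, False, False, True, 20, 10):
            Log.Checkpoint("Logo region matches the baseline within tolerance.")
        else:
            Log.Error("Logo region differs from the baseline beyond tolerance.")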
  • Martin mentions manual comparison of screenshots. This is one option. Depending on your application under test and how open it is, you can also write your own coded checkpoints. What I am usually checking is that the data I expect is present on the screen, or that my output file contains the data I expect, for instance. (My advice will be of no use to you if your application deals in images!)  I will use TestComplete's built-in checkpoints where I can: property, object, and file checkpoints. Where I can't, I will use the native properties of the object to examine the data and Log.Checkpoint to log the results (or Log.Error if it fails).



    For instance, we use grids that are not supported by TestComplete, so I cannot use the built-in table comparisons. Instead, I write code to examine the raw values; a rough sketch of that approach follows this reply.



    Please note that Log.Checkpoint is not available in the TestComplete 7.x version I used to use.



    Good luck.
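    Here is a rough sketch of the kind of coded checkpoint described above, again in TestComplete's Python syntax (the same calls are available from VBScript or JScript). The OrdersGrid alias and the Rows.Count native property are hypothetical; substitute whatever your grid actually exposes in the Object Browser.

    def check_grid_row_count():
        # Log and Aliases are TestComplete runtime objects; no imports needed.
        grid = Aliases.MyApp.MainForm.OrdersGrid   # hypothetical alias
        expected_rows = 12                         # hypothetical expected value

        # Read a native property of the control instead of comparing pixels.
        actual_rows = grid.Rows.Count              # hypothetical native property

        if actual_rows == expected_rows:
            Log.Checkpoint("Grid contains the expected %d rows." % expected_rows)
        else:
            Log.Error("Expected %d rows but found %d." % (expected_rows, actual_rows))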
  • tristaanogre
    Esteemed Contributor
    Lane makes excellent distinctions.  It is likely that, rather than having to compare visually rendered images, you can achieve the same results by inspecting properties of the key objects on the screen.  Is the right text showing up in a particular label?  Is a field the correct color?  These are all object properties that can be examined as such.



    If, however, your goal is to test the placement of objects on the screen, this is also possible using object checkpoints (each object has Top and Left properties that indicate its placement relative to the parent object), but that becomes tedious and problematic when, again, differences in pixel rendering could shift those items by a few pixels here and there.
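    For completeness, here is a small sketch of such a placement check, once more in TestComplete's Python syntax with a hypothetical alias and hypothetical expected coordinates. Top and Left are the standard relative-position properties mentioned above, and the tolerance absorbs the pixel-or-two shifts that can differ between machines.

    def check_button_placement():
        button = Aliases.MyApp.MainForm.OkButton   # hypothetical alias
        expected_left, expected_top = 320, 480     # hypothetical expected position
        tolerance = 3                              # pixels of allowed drift

        if (abs(button.Left - expected_left) <= tolerance and
                abs(button.Top - expected_top) <= tolerance):
            Log.Checkpoint("OkButton is within %d px of its expected position." % tolerance)
        else:
            Log.Error("OkButton is at (%d, %d); expected about (%d, %d)."
                      % (button.Left, button.Top, expected_left, expected_top))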