Forum Discussion
Thanks for all of the feedback. I am going to look into some of the other tools that were mentioned here.
Trust me, I wish we didn't have to use image comparison either. We have avoided it like the plague for some time now, but there are some cases in the app we are testing that leave us no other real option. The app generates 3D data on screen with shadows, etc., and there is inherently some slight randomness to this: a pixel can shift by a single pixel within the image. This won't be noticed by the user and isn't something we want to flag as an error in our tests.
I am not sure if my original idea will work or not; it was just something I was toying with to try to decrease the number of false positives. What I was going to try was adding an additional tolerance parameter so that, when a mismatching pixel is found, the comparison searches 1 pixel in every direction around it to see if the expected color is there. I will have to spend some time to see whether some of these other tools can do that at a reasonable speed.
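Something like this is what I had in mind, in case it helps explain it (a rough Python/NumPy sketch, nothing TestComplete-specific; the function and parameter names are just illustrative). Only the pixels that fail the plain comparison get the 1-pixel neighborhood search, which keeps the slow part small:

```python
import numpy as np

def compare_with_shift_tolerance(expected, actual, color_tol=0, shift=1):
    """Count mismatching pixels, forgiving a pixel if its expected color
    appears within `shift` pixels of its position (covers the 1-pixel
    jitter from the 3D rendering). Both inputs are (H, W, 3) arrays."""
    h, w = expected.shape[:2]
    exp = expected.astype(int)
    act = actual.astype(int)

    # Vectorized first pass: pixels that differ beyond the color tolerance.
    bad = np.argwhere(np.any(np.abs(exp - act) > color_tol, axis=-1))

    mismatches = 0
    for y, x in bad:
        # Second pass: search the surrounding neighborhood for the expected color.
        y0, y1 = max(0, y - shift), min(h, y + shift + 1)
        x0, x1 = max(0, x - shift), min(w, x + shift + 1)
        window = np.abs(act[y0:y1, x0:x1] - exp[y, x])
        if not np.any(np.all(window <= color_tol, axis=-1)):
            mismatches += 1
    return mismatches
```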
SmartBear may want to add a note about speed to this section of the help file, since that is where I started.
Thanks again for the help.
Brian
- tristaanogre · Esteemed Contributor · 9 years ago
Yeah, sometimes it's unavoidable... but it is a pain when you do need to use it.
Question: do you need to compare the WHOLE image, or could you grab smaller chunks for verification that exclude those "blurry" areas? Limiting the size and selection of the image being compared might be another solution, so that your tolerance factors are more predictable.
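Something along these lines, just to illustrate the idea (a rough Pillow sketch; the region coordinates and file names are made up, not taken from your app):

```python
from PIL import Image, ImageChops

# (left, upper, right, lower) boxes chosen to avoid the blurry/shadowed areas.
STABLE_REGIONS = [
    (0, 0, 400, 100),    # e.g. a toolbar area
    (0, 500, 400, 600),  # e.g. a status bar area
]

def regions_match(baseline_path, actual_path, regions=STABLE_REGIONS):
    """Compare only the selected stable regions instead of the whole screenshot."""
    baseline = Image.open(baseline_path).convert("RGB")
    actual = Image.open(actual_path).convert("RGB")
    for box in regions:
        diff = ImageChops.difference(baseline.crop(box), actual.crop(box))
        if diff.getbbox() is not None:  # any non-zero pixel means a mismatch
            return False
    return True
```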