Forum Discussion

tinauser
Frequent Contributor
14 years ago

Testing TestComplete scripts

Hello,

I wrote several scripts that I use for testing an application.

I have now realized that it would be nice to test these scripts as well.

Are there any tips and tricks for doing that?



I was thinking of a keyword test that calls the functions. The problem is that sometimes the expected behaviour of a function is to write an error to the log, but in the final log I would like to "see green" if the function correctly reported that error.

Any suggestions are welcome!
  • karkadil
    Valued Contributor
    In my opinion, it is too much :)



    What is next? Tests that test the tests which test the scripts? :D



    Usually it is enough to run the script with predefined errors once and see whether it works as expected.



    For instance, if somewhere in the script you are verifying that an object is ENABLED, just modify that code to verify that the object is DISABLED, etc. If your verifications are correct, then you will see errors in the modified version of the script. Or try interfering with the application under test during the script run (close windows, etc.) and see how the scripts react.
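
    As a minimal illustration of that mutation idea in JScript (btnOK and its Enabled check are hypothetical placeholders, not code from this thread):

    // Original verification: passes while the OK button is enabled
    if (!btnOK.Enabled)
      Log.Error("The OK button should be enabled");

    // Mutated self-test version: the same enabled button now fails the check,
    // which proves that the verification actually fires and reports an error
    if (btnOK.Enabled)
      Log.Error("Self-test mutation: the OK button should be disabled");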
  • tinauser
    Frequent Contributor
    I do not want to test the base TestComplete functions, but some of the scripts I wrote, which are not trivial.

    In particular, it is generally the test preparation, more than the test itself, that is not trivial.

    For instance, in many cases I have to test strings, comparing values fetched from an XLS table (the reference), from the GUI and from an XML file. I think it is important to test that these values are correctly fetched (and that errors are properly handled and reported).

    In some cases the string comparison is "weak", i.e. it does not consider double spaces; at other times values are custom-modified on the fly (for instance, numbers are handled as strings but are unformatted in the XML document and formatted in the GUI). Custom string-comparison functions are therefore another example of functions that have to be tested.
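
    As a minimal sketch, such a "weak" comparison could look like this in JScript (weakEquals is just an illustrative name):

    // Compare two strings, ignoring differences in runs of whitespace
    function weakEquals(s1, s2)
    {
      // Collapse every run of whitespace to a single space, then trim the ends
      var a = ("" + s1).replace(/\s+/g, " ").replace(/^ +| +$/g, "");
      var b = ("" + s2).replace(/\s+/g, " ").replace(/^ +| +$/g, "");
      return (a == b);
    }

    // Example: weakEquals("PC  control", "PC control") returns true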



    After all, I am asking for suggestions on how to do exactly what you told me to do (verify that the tests behave as expected, for both expected and unexpected behaviour and for both correct and incorrect input parameters); only, instead of performing these checks manually, I want to collect them in a project and possibly produce a report from them.



    Since all of the functions I need to test return either a value or false, the idea is to run the functions with parameters that can lead to an error and capture their output. If the output is as expected, I will write a short message to a text file, recording whether the test behaved as expected or not.

    I adopt this strategy because I don't see a way of separating the log output of the function itself from the log output of the test of that function.
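
    A minimal sketch of that reporting idea, assuming TestComplete's aqFile object (the file path and message format are arbitrary choices):

    // Append one line per checked function to a plain-text result file
    function reportResult(functionName, behavedAsExpected)
    {
      var line = functionName + ": " + (behavedAsExpected ? "OK" : "FAILED") + "\r\n";
      // The final argument creates the file if it does not exist yet
      aqFile.WriteToTextFile("C:\\SelfTests\\results.txt", line, aqFile.ctANSI, true);
    }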



    Is what I am doing so unusual?
  • karkadil
    Valued Contributor
    For me - yes, it is unusual.

    Let's wait for other opinions then.
  • tristaanogre
    Esteemed Contributor
    I go with what Gennadiy said... I create "stub" scripts that run my code with different sets of parameters and make sure that I cover all potential code paths. For the most part, I don't have to do a lot of this, as my automation code is limited in scope in what it does and does not do. This does not mean that the code is not complex, just that I have limited code paths. For that matter, making my functions more granular, so that each function performs a single task instead of several, reduces the amount of code testing I need to do. If my functions are all working as expected, then a combination of several of those functions should be simpler to test than if all the code were in one function.
  • tinauser
    Frequent Contributor
    Probably I was not very clear. I do not want to do anything more than what you are suggesting, i.e. writing scripts that exercise all my code paths. The question is whether there are suggestions on how to do this and, more importantly, how you would make use of the results of such a "stub" script: if the script you are testing writes to the log, how can you discriminate between the stub script exercising a "normal" code path and it proving an "error handling" path?



    Here is an example: datadriverSetUp returns true if the ADO connection was successful, and false otherwise.



    function Test_datadriverSetUp()
    {
      Log.Message("Test Excel driver connection:");

      // A valid workbook and sheet: the connection should succeed
      var hasWorked = datadriverSetUp("testdata\\Sample_Testcases.xls", "PC control");
      // A non-existent workbook: the connection should fail and return false
      var hasFailed = (datadriverSetUp("testdata\\Sample_Testxxxxx.xls", "PC control") == false);

      if (hasWorked && hasFailed)
        Log.Checkpoint("libPrepare.datadriverSetUp() is functional");
      else
        Log.Error("libPrepare.datadriverSetUp() is corrupted.", "");
    }



    If the connection fails, datadriverSetUp writes an error to the log. The script works fine and I add a checkpoint to the log, but the same log will also contain the error that my tested function correctly raised. I would like to see at a glance whether my scripts are all working properly: I was thinking of creating a txt file to which all the script-testing results are written, but maybe you have a better idea.
  • karkadil
    Valued Contributor
    I think I got the point.



    I follow a kind of convention of my own: if a function returns something, then it should contain all the necessary error handling (e.g. try...catch) and shouldn't post error messages itself. The return value of such a function is used by the caller to decide whether an error should be posted or not.



    Frankly speaking, I do not always follow this rule, but I prefer it in most cases.
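
    A minimal sketch of that convention (connectToExcel, the connection string and the file names are illustrative, not code from this thread):

    // The worker handles its own errors and never posts to the log itself
    function connectToExcel(fileName)
    {
      try
      {
        var conn = ADO.CreateADOConnection();
        conn.ConnectionString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source="
                              + fileName + ";Extended Properties=Excel 8.0";
        conn.Open();
        return conn;
      }
      catch (e)
      {
        return false;  // swallow the exception; the caller decides what to log
      }
    }

    // The caller owns the logging decision
    var conn = connectToExcel("testdata\\Sample_Testcases.xls");
    if (conn == false)
      Log.Error("Could not connect to the Excel workbook");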
  • karkadil
    Valued Contributor
    In addition.



    You can suppress posting errors to the log using the OnLogError event in TestComplete. Perhaps it can help you.



    For instance, you can have a project variable that specifies the mode in which you are running your functions; in "test scripts" mode the handler simply doesn't post errors.
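
    A minimal sketch of such a handler, assuming an event control named GeneralEvents and a boolean project variable SelfTestMode (both names are illustrative):

    // OnLogError handler: suppress error messages while in self-test mode
    function GeneralEvents_OnLogError(Sender, LogParams)
    {
      if (Project.Variables.SelfTestMode)
        LogParams.Locked = true;  // a locked message is not posted to the log
    }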