I will not test the built-in TestComplete functions, but some of the scripts I wrote myself, which are not trivial.
In particular, what is generally non-trivial is the test preparation rather than the test itself.
For instance, in many cases I have to compare string values fetched from an xls table (the reference), from the GUI and from an XML file. I think it is important to test that the values are fetched correctly (and that errors are handled and reported nicely).
In some cases the string comparison is "weak", i.e. it ignores double spaces; in other cases the values are modified on the fly (for instance, numbers are handled as strings but are un-formatted in the XML document and formatted in the GUI, so the formatting has to be normalized during the comparison); thus, custom string comparison functions are another example of functions that have to be tested.
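To illustrate, here is a minimal sketch of the kind of comparison helpers I mean (in Python, assuming a Python-scripted TestComplete project; the function names and the comma thousands separator are just placeholders for my own helpers, not TestComplete API):

    import re

    def weak_equal(a, b):
        # "Weak" comparison: collapse runs of whitespace and trim the ends,
        # so that double spaces do not cause a mismatch.
        return re.sub(r"\s+", " ", a).strip() == re.sub(r"\s+", " ", b).strip()

    def formatted_number_equal(gui_value, xml_value):
        # The GUI shows formatted numbers (e.g. "1,234.50") while the XML
        # stores the plain string (e.g. "1234.5"); strip the formatting
        # on the fly and compare the numeric values.
        try:
            return float(gui_value.replace(",", "")) == float(xml_value)
        except ValueError:
            return False

These are exactly the kind of small helpers whose behaviour I want to check with both correct and incorrect inputs.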
After all, I am asking for a suggestion on how to do exactly what you told me to do (check that the tests behave as expected, for both expected and unexpected, correct and incorrect input parameters); only, instead of performing these checks manually, I want to collect them in a project and possibly produce a report out of it.
Since all of the functions I need to test return either a value or False, the idea is to run each function with parameters that may lead to an error and capture its output: I will then compare the output with the expected result and write a short log message to a text file, stating whether the function behaved as expected or not.
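As a rough sketch of what I have in mind (again Python; run_check, fetch_xml_value and the report file name are hypothetical placeholders, not part of TestComplete):

    def run_check(name, func, args, expected):
        # Call the function under test with the given parameters and compare
        # its return value (a value or False) with the expected result.
        try:
            actual = func(*args)
        except Exception as e:
            actual = "unexpected exception: %s" % e
        passed = (actual == expected)
        # Append a short pass/fail line to a plain text report file.
        with open("selftest_report.txt", "a") as report:
            report.write("%s: %s (expected %r, got %r)\n"
                         % (name, "OK" if passed else "FAILED", expected, actual))
        return passed

    # Example: fetching a missing XML node is expected to return False.
    # run_check("fetch missing node", fetch_xml_value, ("order.xml", "//NoSuchNode"), False)

Collecting the return values of run_check would then give me the pass/fail report I am after.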
I adopt this strategy because I don't see a way to keep the log output of the function under test separate from the log output of the test that checks it.
Is what I am doing so unusual?