What would be a good approach to build a framework for data-driven testing?
I'm trying to build a framework and looking for ways to maintain test data in an effective manner, so I thought I'd ask for suggestions here and see whether TestComplete offers anything to make this easier.
Approach 1
Create an Excel sheet with all the test cases and their data, and write code using TestComplete's ExcelDriver methods to read the Excel file and call the corresponding methods. I prefer scripting my tests rather than using keyword tests, so each test case listed in the Excel sheet maps to a method/function. The problem with this approach: if I have 100 test cases with 50 different parameters in total, I end up with 50 columns, which becomes very difficult to manage. I also need to execute the same test case with different test data.
Approach 2
Use the built-in test execution framework within TestComplete, save the test data into text files, and keep them in Stores. Read the text files to retrieve the test data, one file per test case.
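For Approach 2, the per-test-case file could use a simple "key=value per line" layout. Here's a minimal parsing sketch (all names are assumptions, not TestComplete APIs); inside TestComplete you could obtain the stored file's path with Files.FileNameByName and read it with aqFile.ReadWholeTextFile, then feed the text to a parser like this:

```javascript
// Parse "key=value" lines into a plain object (hypothetical helper).
// In TestComplete, `text` would come from aqFile.ReadWholeTextFile
// on a file kept in the Stores > Files collection.
function parseTestData(text) {
  var data = {};
  text.split(/\r?\n/).forEach(function (line) {
    var idx = line.indexOf("=");
    if (idx === -1) return; // skip blank or malformed lines
    data[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  });
  return data;
}

// Sample content standing in for one test case's data file.
var sample = "username=qa_user\npassword=secret\nretries=3";
var testData = parseTestData(sample);
```

One drawback to weigh: values come back as strings, so numeric parameters (like retries above) need explicit conversion at the point of use.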
Or suggest another way that you think might be effective.
To answer your first question:
I don't map the parameters of the test case to fields on the screen. I simply parse them out into individual values and pass them in to the method/function that I'm calling. So a parameter MAY correspond to a field on the screen... or not; it all depends on how the test case is coded. There's a master loop, I'm assuming, that parses the spreadsheet into the different test cases. How are you executing each test case? Is that being done in some sort of switch statement, or are you using some kind of "eval" or Runner.CallMethod mechanism? In any case, what I would do is use the aqString list methods (GetListLength, GetListItem, etc.) to populate an array of the different parameters. JavaScript then allows you to do something like
methodCall(...parameterList), where parameterList is the array. The ... spread operator expands the array, effectively passing the parameters to the method in the order in which they appear in the array.
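As a concrete sketch of that spread-operator call (the method name and data are made up for illustration; in TestComplete you could build the array by looping aqString.GetListItem instead of split):

```javascript
// Hypothetical test method; stands in for whatever function the
// spreadsheet row names.
function createEvent(eventName, eventDate, location) {
  return eventName + " on " + eventDate + " at " + location;
}

// Pipe-delimited parameter string, as it might appear in one cell.
var supportingData = "Quarterly Review|2024-06-01|Room 12";

// split() plays the role of aqString.GetListLength/GetListItem here.
var parameterList = supportingData.split("|");

// The ... spread expands the array into positional arguments,
// in the order they occur in the array.
var result = createEvent(...parameterList);
```

The nice property is that the driver loop never needs to know each method's arity; the array length simply has to match what the target function expects.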
The framework structure I prefer, rather than having the data driven by test case, is a two-table approach. Table 1 is the driver that gives the list of test cases to execute. Table 2 contains the "steps", mapped via TestCaseID to Table 1, indicating which steps to execute and in what order for each test case. Something like this:
TABLE 1
TestCaseID, Description
1, CreateEvent Test 101
2, CreateEvent Test 102
3, CreateNotification
TABLE 2
StepID, TestCaseID, StepName, SupportingData, Description
1, 1, OpenApp, blah|blah
2, 1, LogIn, blah|blah
3, 1, GoToEventCreate, blah|blah
4, 2, OpenApp, blah|blah
and so on....
That gives me a bit more modularity and granularity to construct test cases in something closer to plain language, as well as minimizing the sheer volume of parameters you may need to deal with.
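The two-table driver described above can be sketched roughly like this. Everything here is an assumed shape, not a TestComplete API: the tables are shown as in-memory arrays (in practice you'd read the rows via DDT.ExcelDriver or similar), and the step functions are hypothetical placeholders.

```javascript
// TABLE 1: the driver list of test cases to execute.
var testCases = [
  { TestCaseID: 1, Description: "CreateEvent Test 101" },
  { TestCaseID: 2, Description: "CreateEvent Test 102" }
];

// TABLE 2: steps, joined to TABLE 1 via TestCaseID, ordered by StepID.
var steps = [
  { StepID: 1, TestCaseID: 1, StepName: "OpenApp", SupportingData: "url|chrome" },
  { StepID: 2, TestCaseID: 1, StepName: "LogIn",   SupportingData: "user|pass" },
  { StepID: 3, TestCaseID: 2, StepName: "OpenApp", SupportingData: "url|edge" }
];

// Hypothetical step library: one function per StepName keyword.
var stepLibrary = {
  OpenApp: function () {
    return "OpenApp(" + Array.prototype.slice.call(arguments).join(",") + ")";
  },
  LogIn: function () {
    return "LogIn(" + Array.prototype.slice.call(arguments).join(",") + ")";
  }
};

// Master loop: for each test case, run its steps in StepID order,
// spreading the pipe-delimited SupportingData into the step function.
var log = [];
testCases.forEach(function (tc) {
  steps
    .filter(function (s) { return s.TestCaseID === tc.TestCaseID; })
    .sort(function (a, b) { return a.StepID - b.StepID; })
    .forEach(function (s) {
      var params = s.SupportingData.split("|");
      log.push(stepLibrary[s.StepName](...params));
    });
});
```

Dispatching through an object keyed by StepName avoids both a giant switch statement and eval-style calls, and adding a new "plain language" step is just one new entry in the library.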