
TE: configure test sequence in external file

Regular Contributor

I'd like to control TE's test run from an external file.

In the file, the testing engineer selects, from the available tests, the ones he needs to execute.


Has anybody done this before?

My question: how can I tell TE where to find the config file? I don't want to hardcode the path, because I need portability.

I start TE's test run from a link located in a certain control directory. It would be great if TE could determine this directory and search there for the config file.

Is this possible?


An alternative would be to pass in a free command-line parameter, but that does not seem to be possible.

Community Hero

Project.Path will return the path to your project directory. So you can create a sub-folder such as /Config and store your configuration files there, or even put your file in the project root.


Something like:

path_to_config_files = Project.Path + '/Config'
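
To illustrate the portability idea outside TestComplete, here is a plain-JavaScript sketch. `projectPath` stands in for the value of `Project.Path`, and the file-existence check is simulated with a list (real project code would use something like `aqFile.Exists`); the function and file names are invented for the example:

```javascript
// Sketch: resolve a config file relative to the project directory.
// "projectPath" stands in for TestComplete's Project.Path value
// (which ends with a backslash); "existingFiles" simulates a
// file-system check such as aqFile.Exists.
function resolveConfigFile(projectPath, fileName, existingFiles) {
  var candidates = [
    projectPath + 'Config\\' + fileName, // preferred: /Config sub-folder
    projectPath + fileName               // fallback: project root
  ];
  for (var i = 0; i < candidates.length; i++) {
    if (existingFiles.indexOf(candidates[i]) !== -1) {
      return candidates[i];
    }
  }
  return null; // not found; the caller should log an error and stop
}
```

Because the lookup is relative to the project directory, the same project can be moved to another machine or path without touching the script.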


Super Contributor

This may help you:


Create a Function/Sub in TestComplete which reads your config data (text file or Excel file) and supplies it to the necessary test cases.


Let's say you have 100 test cases in your test suite, but you want to run only specific ones based on the execution flag (Y/N) mentioned in the configuration file. Your function should read the Y/N flag and supply it to the test case.



Run that function (.vbs, .js or .py) via the TestExecute command line:


"C:\Program Files (x86)\SmartBear\TestExecute 11\Bin\TestExecute.exe" "C:\Work\My Projects\MySuite.pjs" /r /p:MyProj /t:"Script|Unit1|Main"







Function TCConfig(strDatasheetPath)

    Set objExo = CreateObject("Excel.Application")
    Set objWbo = objExo.Workbooks.Open(strDatasheetPath)
    objExo.Visible = True
    Set objWso = objWbo.Worksheets("Sheet1")

    intRows = objWso.UsedRange.Rows.Count
    intCols = objWso.UsedRange.Columns.Count

    For intRow = 2 To intRows
        ' Read the test case execution flag; it is in the 1st column
        strExecutionFlag = objWso.Cells(intRow, 1)
        If UCase(strExecutionFlag) = "YES" Then
            ' Read the test case name; it is in the 2nd column
            strTestCaseName = objWso.Cells(intRow, 2)
            ' Take the required action for your test case here
        End If
    Next

    objWbo.Close False
    objExo.Quit

End Function



"C:\Program Files (x86)\SmartBear\TestExecute 11\Bin\TestExecute.exe" "C:\Work\My Projects\MySuite.pjs" /r /p:MyProj /u:Unit1 /rt:Main"

I do exactly this.


All my tests are keyword and data driven from Excel sheets.


But I wrote my own framework (all in Script Extensions) to do so; using the DDT Excel driver wasn't flexible enough for me. Everything can be toggled on or off by the user. It also does everything relative to the project path (as mentioned by @baxatob), so the whole project is completely portable. When run, it expects the project to contain a "working" directory containing input, result and log folders. If they are not present, it creates them, logs an appropriate message in the newly created log folder, and then either continues the run or ends, depending on which parts of the working directory were missing. (Obviously, it can't continue if there is no input folder!)


But like I say, this is an entire framework built from scratch. Any project making use of it (ie - all of them!) then only has to have a single, fairly short driver script in order to run. On top of that I have a few libraries of generic functions - ones used for application control rather than as part of the framework, so not in Script Extensions. And the final layer is the project-specific script files and the name map.


Took a bit of work to put it all together, but it's worth it now.

The framework that I presented in the October TestComplete Academy 301 does pretty much the same thing that @Colin_McCrae described, albeit in a different implementation and not nearly so robust. Generally speaking, there are probably any number of ways of doing what you want done.

For example, let's say that you have a whole bunch of keyword tests, each one representing a particular test case. Every time you run the project, you don't necessarily want to run ALL your tests, just a targeted selection. What you could do is create a different folder for each run configuration with either an Excel sheet or a CSV file in it. Each row in the file would contain two columns. Column 1 would be the name of the keyword test ('PageLogin', 'PrintFile', etc). The second column would be a string of parameters to pass in, if necessary ("'myusername', 'mypassword'"). You would then write something like this (JScript/JavaScript):


function runMyTests(testListFile) {
    var myTestList;
    try {
        myTestList = DDT.CSVDriver(testListFile);
        while (!myTestList.EOF()) {
            if (aqObject.IsSupported(KeywordTests, myTestList.Value('TestName'))) {
                eval('KeywordTests.' + myTestList.Value('TestName') + '.Run(' + myTestList.Value('ParameterString') + ')');
            }
            myTestList.Next();
        }
    }
    catch (exception) {
        Log.Warning('Exception : ' + exception.message, exception.stack);
    }
    finally {
        if (myTestList != undefined) {
            DDT.CloseDriver(myTestList.Name);
        }
    }
}
Run this as the only TestItem in Project.TestItems for your project and, effectively, it will execute all the keyword tests in your project that you have configured in the indicated CSV file.  All you would have to do is pass in the necessary file name and path and you're good to go.
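
The same drive-the-run-from-a-CSV idea can be sketched in standalone JavaScript. Here a plain object of functions stands in for the project's `KeywordTests` collection, and the CSV is parsed by hand (inside TestComplete you would use `DDT.CSVDriver` as above); all names in this sketch are illustrative:

```javascript
// Sketch: run only the tests listed in a CSV-style config.
// "tests" stands in for the KeywordTests collection; the
// typeof check plays the role of aqObject.IsSupported.
function runConfiguredTests(csvText, tests) {
  var ran = [];
  var lines = csvText.split('\n');
  for (var i = 1; i < lines.length; i++) {   // skip the header row
    var line = lines[i].trim();
    if (line === '') continue;
    var parts = line.split(',');             // TestName,ParameterString
    var name = parts[0].trim();
    var params = parts.slice(1).join(',').trim();
    if (typeof tests[name] === 'function') { // only run known tests
      tests[name](params);
      ran.push(name);
    }
  }
  return ran;                                // names of the tests executed
}
```

Unknown test names in the file are simply skipped, so an out-of-date config degrades gracefully instead of aborting the run.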


Again, this is not the ONLY way to do it but it demonstrates another methodology to do what you're asking.  This just goes to show that, depending upon your need and skill, you can implement what you're asking in a variety of different ways. It is left to you, then, to determine what meets your needs best.

Robert Martin
[Hall of Fame]


Mysterious Gremlin Master
Vegas Thrill Rider
Extensions available


Congrats, this sounds really good. I also use my own framework; it has also been a LOT of work, and it is working fine. As an example, I'm using aliases only to identify dialogs; the rest is addressed using captions. Multiple instances of a dialog are supported, too.


Using external test control will give the tester more control over what is to be tested when using TE.

I'll consider using Excel as a config file, too. It will be better structured than the simple text file I wanted to use at first.

The appropriate location for the config file will be Project.ConfigPath. This is what I was asking for in this thread.

Yep. I use locators which map to the Alias of a control in mine too. The Alias names follow the on-screen labels so the test guys can work 99% of them out (a few are badly labelled or not labelled at all on screen) without having to ask me.


I take them in with pipe separators, parse them into an Alias reference, and then do some checks on the object to make sure it's a valid reference before it gets used.


So a single-line ref in Excel (dashes being cell separators for this example), for me, would be something like:


Y - <BLANK> - <Function_Name> - <Optional Marker(s) and/or dependency(s)> - <Parameter - Locator> - <Parameter or Expected Result> - <Parameter or Expected Result> - .... etc - it can take as many markers, dependencies, parameters and expected results as required.


The "Y" column is a Yes/No indicator to run the test or not. Will be over-written with pass/fail/error when run.


The <BLANK> column will be filled with a timestamp if the row is run.


The function name will be used with all the parameter data to build a call to a function.


After that, everything is optional depending on what the function needs. But each optional part has its own cell. Putting all the parameters in one cell could get VERY cramped! Each cell has a prefix which tells the framework what it is. P: is a parameter. E: is an expected result. D: is a dependency. M: is a marker. It reads the cell, separates the prefix from the rest of the content, and uses the prefix to determine what to do with the content.


A blank cell tells it it has all the data for that row.
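
The prefix scheme for the optional cells can be sketched in plain JavaScript. This is a standalone illustration of the idea, not the actual framework code; the function name and cell values are invented:

```javascript
// Sketch: classify a row's optional cells by prefix until a blank cell.
// P: parameter, E: expected result, D: dependency, M: marker.
function parseRowCells(cells) {
  var row = { parameters: [], expected: [], dependencies: [], markers: [] };
  var routes = { 'P:': 'parameters', 'E:': 'expected',
                 'D:': 'dependencies', 'M:': 'markers' };
  for (var i = 0; i < cells.length; i++) {
    var cell = cells[i];
    if (cell === undefined || cell === '') break; // blank cell: row complete
    var bucket = routes[cell.substring(0, 2)];    // route by the prefix
    if (bucket) {
      row[bucket].push(cell.substring(2).trim()); // store content, no prefix
    }
  }
  return row;
}
```

Routing by a two-character prefix keeps the sheet self-describing: cells can appear in any order, and new cell types only need a new entry in the routing table.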


Markers are noted and stored.


Dependencies check the indicated marker results.


Parameters get added to a string, along with the function name, which is built into the function call made using an "Eval" statement. The object locator is just an input parameter, albeit a special one which I parse into an object Alias reference and validate.


Expected results are stored in a disposable array and checked as required within the called function.


Obviously, all this has a ton of checking and validation around it. It's part of the framework so it absolutely cannot crash. And, generally speaking, it doesn't. Certain conditions (the webserver with the site under test going down, for example) will cause it to output a boatload of fails, but it won't crash. 🙂



Here is how I'm doing it:

I use ODT to organize test runs. ODT is structured hierarchically, so it enables me to set up really complex tests.

The ODT test definition tree exactly parallels the log tree, giving me maximum clarity about what happened where.

I'm sad that ODT was cancelled, as only a few people were able to really make use of it!


Have a look at the screenshot: you see a test definition hierarchy. Test data and reference data can be defined at the desired level and reused anywhere below. I created two standard data types, for dialog definitions and for field definitions.

Initialization, testing and finalization can also be set up at the appropriate level. This enables me to execute single sections, in many cases even single test items.


A powerful library gives me easy access to the test data:

As an example, in the context of the test routine BDE.Item1.Item1, I just code

Td = PVA_0.ODT.TestDatenArr(This, 0)

to access the test data array of the item. To get the test data of superordinate levels, I'd have to fill in a 1 or 2 instead of the zero.


Once I have access, I can

- have TC get/set the text, or click the button or item, for test data definitions, by simply handing in the context of the control (the frame or row object where it is actually located);

- open a dialog by menu or by button, for dialog data definitions.


This works nearly perfectly.

What I'm doing now is making the test run more controllable from outside by configuration. I want to be able to just run BDE.Item5, for example, regardless of whether the ODT items are enabled or not.


Having never used ODT, I suspect parts of this are lost on me!


But it sounds good. And I like how it matches closely to the log. But with ODT in play, I have no idea how I would separate all this out into some sort of external control file .....

For me, the main advantage of the approach is the tree structure. You can also emulate this in Excel, but it is not as beautiful there.

A second advantage is the ability to use generic subtesting for standard test purposes. This would be very hard to achieve based on Excel.


About applying the control worksheet:

Of course the data is in the ODT tree (or in any store with its key in the ODT tree), not in the Excel workbook.

So what has to be done is to execute ODT items on demand.

I limit the granularity of control to level 3, meaning Test.Section.Sequence, as the items cannot in every case be executed independently of each other, whereas sections and sequences can.


For this purpose, there are two mechanisms: ODT elements can be enabled/disabled or filtered.

There is a scripting interface to the ODT tree (enabling) and the ODT classes (filtering). I tried it out: to do filtering, you have to change properties of the ODT class. Unfortunately, this does not work because of a software bug: the change is made, but the result is not written back to the ODT.xml file.

I tried writing the ODT elements' "checked" property this way to simulate enabling. This works, and I do not have to change the XML myself.
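
As a rough plain-JavaScript illustration of that enabling mechanism (the ODT scripting interface itself is not modelled; the tree shape, item names and `checked` field here are invented for the sketch), selecting a branch like BDE.Item5 amounts to checking that path and its subtree while unchecking sibling branches:

```javascript
// Sketch: simulate enabling items via a "checked" property.
// "path" is the dotted selection split into segments, e.g. ['Item5'].
// Items on the path (and everything below the last segment) are
// enabled; sibling branches are disabled.
function applySelection(node, path) {
  var head = path.length > 0 ? path[0] : null;
  for (var i = 0; i < node.children.length; i++) {
    var child = node.children[i];
    if (head === null || child.name === head) {
      child.checked = true;                 // on the selected path (or below it)
      applySelection(child, path.slice(1)); // keep walking down
    } else {
      child.checked = false;                // sibling branch: disabled
    }
  }
}
```

Limiting the selection path to three segments would mirror the Test.Section.Sequence granularity described above.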
