
TestExecute Settings Customization

Status: New Idea
by DavidE on 03-19-2018 11:06 AM

There needs to be some way to easily customize the settings within TestExecute without having to log in to every machine that has TestExecute installed and configure it manually.


This could take the form of installation parameters, additional execution parameters, or a human-readable tcsettings.xml file that can be edited to configure the installation.  My company has 10 licenses for TestExecute, and our lab machines are very dynamic, meaning they get rebuilt often.  We are automating the setup and configuration of that environment, but this has become a limiting factor.


There are three key settings that I absolutely had to modify on each installation, so for now we have opted to copy this XML file across all of our clients. Because the file is not human readable, I have no idea what else is in there, or what might cause problems because the file was copied from another machine.


Also, I think it would be a good idea to have an option to keep this configuration file at the machine level rather than the user level.  While there may be a case for user-specific settings, when those are not needed, managing the settings at the machine level would be far more efficient and less of a headache.

We need the ability to customize the template used for posting a defect to QAComplete, so that we can do this from TestComplete and include logs.


If I do this from an automated test in QAComplete, the logs and other information don't automatically populate in the defects.

Overview: We would like to have both the TestExecute runtime and the test suite compiled into a single executable.


Use Case: We would like the ability to run the test suite without having to install the software first.  This would allow for greater mobility and would require less setup and less software to be deployed.  In our case it would extend the range of machines we can test against, as we have some areas where installing additional software is not welcome.


Thank You

After recording a keyword test, the first thing I do is blank out the description for every new line.  Here's why:


  1. The concocted description gives no added value. You can look at the other three columns and figure things out easily.
  2. When I want to override the concocted description with my own comment, it blends in with the other lines that have concocted descriptions, camouflaging the descriptions that I want other testers to see easily.
  3. Having unnecessary descriptions bloats (a wee little bit at a time) files that have to be checked in to source control.

So the idea in baxatob's community post is great and would meet my needs.  A variation of this idea would be a persistent option that can be enabled or disabled.  If it is enabled, you could invoke a command to blank out the concocted descriptions in all lines of the current test after it has been recorded (excluding lines that have already been manually edited, which are already tagged in the keyword test file with DescriptionEdited="True").


(One might say I should use Comments to do what I want.  However, I use "Comments" to introduce a section of operations, and Description to explain a specific line as needed. So I use both Comments and Description.)


(Note: This idea pertains only to Keyword Testing, not converting a Keyword Test to a script, which does have a mechanism to strip out Description.)

Consider a project with 100 tests, together with a master project whose network suite runs all the tests in this project on a remote VM.


    + Project01
    + Master
        + NetworkSuite
            + Job01
                + Task01 - Project01 (100 tests - 1-100) - VM01


I would like to add another task to Job01 to run half the tests on another VM.


+ Master
    + NetworkSuite
        + Job01
            + Task01 - Project01 (50 tests - 1-50) - VM01
            + Task02 - Project01 (50 tests - 51-100) - VM02


A test item can be specified in the command line as detailed in TestComplete Command Line (thanks @tristaanogre, @cunderw).

      /project:project_name /test:test_name


test_name can be a test item or group. However, specifying a test item in the "Test" field of a network suite task does not work. I would like network suite tasks to also recognize test items/groups, so that a project with a large number of tests can be run on multiple slaves.
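
For reference, this already works when launching TestComplete itself from the command line; a minimal sketch, assuming a hypothetical suite path and a test group named Tests_51_100 defined in Project01:

      TestComplete.exe "C:\Work\MySuite.pjs" /run /project:Project01 /test:"Tests_51_100" /exit

The request is for the "Test" field of a network suite task to accept the same kind of test item or group reference, so that Task01 and Task02 above could each point at half of Project01's test items.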

Modify Keyword Test Tooltip

Status: New Idea
by Moderator Moderator on 01-26-2018 11:28 AM

Current Behavior: The tooltip for keyword test parameters is implemented as a "Full Name Tooltip", which makes the tooltip appear directly over the highlighted area.


Identified Issue: For some users, it is too difficult to click the parameter for editing because of the tooltip's placement. The tooltip is directly over the area to be typed into.




Desired Behavior: Change tooltip implementation from "Full Name Tooltip" to one of the other kinds of tooltips that offset the tooltip instead of displaying it directly on top.


Kinds of tooltips here:

Support for Google Polymer framework

Status: New Idea
by Christophe on 01-10-2018 01:52 PM



I suggest that TestComplete supports Google Polymer framework.


Thanks in advance.

It would be helpful if the master PC could recover from a lost connection.


Currently, I have a test that needs to lose the network connection as part of the test and then re-establish it.


Once the network connection is lost, the master PC is unable to receive the results from the slave PC.


Currently, we have to run this test locally on the slave PC as a workaround.


Attached is a screenshot of what happens on the master PC.

Make Test Visualizer tool window dockable

Status: New Idea
by PBY on 01-25-2018 07:04 AM

With keyword tests, the screenshots saved during recording are very useful for understanding what a test does at a later date. Therefore, I always have the Test Visualizer window open, at a size big enough to recognize most actions.


But this takes a lot of space, and as the Test Visualizer window isn't dockable, I cannot move it to another screen:




Please make it dockable, so that the keyword test editor has more space and the Test Visualizer can be made bigger on another screen.



Currently, the mouse "Drag" method drags a set X and Y distance from a starting co-ordinate.  For most "normal" drag functionality, this is sufficient.  I even wrote some code, with help from folks up here, that, given a starting object and a destination object, will drag from one to the other.


function dragCalculatedDistance(startObject, destinationObject) {
    var startPoint, targetPoint, dragX, dragY;
    // Drag from the center of object A to the center of object B

    startPoint = startObject.WindowToScreen(startObject.Width / 2, startObject.Height / 2);
    targetPoint = destinationObject.WindowToScreen(destinationObject.Width / 2, destinationObject.Height / 2);

    // Distance to travel, in screen co-ordinates
    dragX = targetPoint.X - startPoint.X;
    dragY = targetPoint.Y - startPoint.Y;

    // Drag(-1, -1, ...) starts the drag from the center of startObject
    startObject.Drag(-1, -1, dragX, dragY);
}
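
For illustration, a call would look something like this (the alias names are placeholders for whatever your own NameMapping defines, not objects from the original post):

    // Hypothetical mapped objects; substitute the names from your own project
    dragCalculatedDistance(Aliases.myApp.draggableItem, Aliases.myApp.grid.targetCell);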

The problem is this...  We have a situation where we need to drag an object that is present below a table in our application to a particular cell in the table.  In that process, when we cross the lower border of the table, the table may auto-scroll down to reveal other rows.  Because the above function just drags to a calculated set of co-ordinates, the end result is that we end up dropping the object on the wrong row in the table.


Proposed Solution: A drag method for TestComplete that, instead of dragging by a co-ordinate distance, actually drags from one object to another... something like



where the two objects are identified on-screen objects.  This would resolve the auto-scroll issue because it would actually drag to the particular cell we want and not just to a set of on-screen co-ordinates.
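
Purely as a hypothetical illustration of what such a call could look like (DragToObject is not an existing TestComplete method; the name, parameters and aliases below are assumptions, not anything from the original post):

    // Hypothetical API: drag the source object and drop it onto the target object,
    // re-resolving the target's position if the table auto-scrolls mid-drag
    Aliases.myApp.draggableItem.DragToObject(Aliases.myApp.grid.targetCell);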


A low-priority change to make... and I'm open to other code that would make this work better for us.  But I think this would be "cool", as it would mean my custom code above wouldn't be necessary any more and we would have "smart" drag-and-drop functionality.



For now, I have to count the number of automated scripts within every folder manually. That's not convenient. If the number were shown next to each folder, that would be good for users. The high-level design is like the enclosed picture.



Frequently, I run into problems where my find()/find_child()/find_child_ex()/etc. calls are finding the wrong objects (sometimes it's hard to find uniqueness within the objects you have at your disposal). It would be AMAZING to have a 'finder' tool directly in the object browser that would allow you to pick an object to perform a 'find()' call on, give you a form to fill in props and values, and show you the object (or objects) that it matches without having to run any of your own code.


It's good to know right off the bat if your props and values are mapping to the object you expect. I could see it saving me a whole lot of time when developing test scripts.
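
Until such a tool exists, the check has to be scripted by hand. A minimal sketch of what the proposed finder would replace (the browser, page and search criteria below are assumptions, not anything from the original post):

    // Hypothetical page and search criteria; adjust them to your own application
    var page = Sys.Browser("chrome").Page("*");
    var match = page.FindChild(["ObjectType", "ObjectLabel"], ["Button", "Submit*"], 15);
    if (match.Exists)
        Log.Message("Matched: " + match.FullName);
    else
        Log.Warning("No object matched the given properties and values.");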

Test Report summary export to Excel format

Status: New Idea
by abinash11 on 06-19-2017 08:22 AM

Currently, once the tests are executed in TestComplete, we can export the results and the summary into MHT format, but not into Excel. Getting the results in Excel is a basic need in most projects, where you have to report the results based on the number of tests run, including the passed and the failed tests.


If not Excel, at least a CSV format would be a good start.
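
As a stopgap, a rough sketch of writing a one-line CSV summary at the end of a run (the file path is an assumption, and Log.ErrCount/Log.WrnCount only reflect the current log):

    // Write one summary line (date, error count, warning count) to a CSV file
    // The path is hypothetical; adjust it for your environment
    function exportSummaryToCsv() {
        var line = aqDateTime.Now() + "," + Log.ErrCount + "," + Log.WrnCount + "\r\n";
        aqFile.WriteToTextFile("C:\\TestLogs\\summary.csv", line, aqFile.ctUTF8, true);
    }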

We would like to be able to capture any logs from the console or network messages so that we can provide specific network errors to our developers for any intermittent bugs that may occur which are of this nature.

Currently, while we can open the developer tools and access this via the object browser to get the elements which contain the log messages, we can't do this solely via an automated run in TestComplete. Is there a way we could get this information, and if not, could you add a feature for getting it?

I have over 700 test items grouped into groups and distributed across multiple test machines in a network suite, so the result log is long.

I know I can go directly to an error but it doesn't select the test item in the tree (another annoyance).

When I only have a few errors I have to step through the whole tree to find them.


It would be nice if there were a right-click menu item in the tree to select the top level and collapse all nodes, or expand all nodes.  Then I could collapse the entire tree, do a couple of high-level expansions, and easily see the failing group and test.

Our Windows application has this functionality courtesy of our DevEx tree controls.  Since TestComplete has a similar look and feel, I hope that it would be easy to add this to your existing right mouse button tree menu.



I don't know if this has been suggested yet or not.  But one thing the TestComplete methods indicated in the subject do is that, rather than finding child objects from the first on the page/form to the last (top-down in the Object Browser), they work in reverse.


For example, if an application form has 5 buttons with similar information and you use "FindChild" to find one of those buttons, it will find the button that appears last in the Object Browser rather than the first one.  FindAllChildren orders the resulting array in the same reverse order.


While for the most part this isn't a problem, if you want to find the "first" object on a page that matches a certain set of criteria, these functions present a challenge in that you need to account for this reversal.  I'd like to see the Find methods find objects based upon ordering on screen (top down) rather than the reverse (bottom up).
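
Until the ordering changes, a minimal workaround sketch is to reverse the result yourself (the page object and criteria below are assumptions; in JScript the returned variant array would first need to be converted with (new VBArray(result)).toArray()):

    // Hypothetical page and criteria; adjust them to your own application
    var page = Sys.Browser("chrome").Page("*");
    var buttons = page.FindAllChildren(["ObjectType", "Enabled"], ["Button", true], 20);
    // Results come back bottom-up, so reverse them to get on-screen (top-down) order
    buttons.reverse();
    Log.Message("Topmost matching button: " + buttons[0].FullName);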

I would like to have the ability to identify from a script routine which keyword test is currently being run. Currently, there is no way to do this. In JavaScript I know there is a caller mechanism I could use, but that doesn't get me the name of the keyword test; it gets me the previous method's name.

I have a project suite with multiple projects in which keyword test routines of one project are referenced in multiple other projects. If an original keyword routine is renamed, there is no way around removing and then re-adding the references, after which all usages of these references in actual tests have to be found and replaced with the renamed references as well. Needless to say, this is always quite a hideous task.


It would be very helpful if renaming a keyword test or script also renamed all references and all the places where they are used.

Personally, I hate the new feature in 12.30: "Smarter auto-formatting. When you type a bracket or a quotation mark, the Editor now automatically inserts the corresponding closing bracket or quotation mark."


Please see community post for more details:


I would love to have an option to shut this off.

The command line generated by the Jenkins plug-in (which is otherwise great!) has several default values that are not the desired ones. For example, there are three /ExportLog arguments (tclogx, htmlx and txt). My tests are quite long, and consequently the logs can be huge. I don't want to export them (which takes too much time); I just want to be able to display them directly in TestExecute if the test fails.
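
For illustration, the generated call ends up along these lines, while only the plain run is wanted (the paths and names below are placeholders, not the plug-in's actual output):

    TestExecute.exe "C:\Jenkins\workspace\MySuite.pjs" /run /exit /SilentMode /ExportLog:"results.tclogx" /ExportLog:"results.htmlx" /ExportLog:"results.txt"

Being able to switch the /ExportLog arguments off, or choose which formats are generated, in the plug-in configuration would avoid the export step entirely.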
