Forum Discussion

sameerjade's avatar
sameerjade
Frequent Contributor
5 years ago

How do you structure workflow tests

Hello,

I am just curious how others create their workflow tests in TestComplete, and whether what I am doing is the most efficient approach.

What I have been doing is -

1. Create various scripts, and in each script add the different functions I will need for my tests

2. Give these functions user parameters; I don't hardcode parameter values in these scripts and functions

3. Then, on the project page, create a new group. This group is basically my test and contains all of its test steps

4. Under this group, create various test items. Each test item calls a function from a script and passes the parameter values in the 'Parameters' column. So each group/test is a collection of test items, and each test item calls a function from one of my global scripts and passes the user parameter values (screenshot attached)
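A minimal sketch of that pattern (the function and parameter names are hypothetical, not from the post): reusable script functions take their inputs as parameters, and the project-page test items supply the values.

```javascript
// Hypothetical reusable step functions for a TestComplete script unit.
// In a real project these would drive the application under test (e.g. via
// Aliases/NameMapping); here they return a description so the sketch runs anywhere.
function login(userName, password) {
  // e.g. Aliases.app.LoginForm.UserName.SetText(userName) in TestComplete
  return "login as " + userName;
}

function createOrder(orderId, amount) {
  return "create order " + orderId + " for " + amount;
}

// A test item on the project page would call one function and pass the values
// through the 'Parameters' column, equivalent to:
var step1 = login("qa_user", "secret");
var step2 = createOrder("ORD-1", 100);
```

Because no values are hardcoded in the functions, the same function can back any number of test items with different parameter sets.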

 

I recently saw another way to create tests: a global script with functions and parameters, but no test items on the project page. Instead, they create separate test scripts, and each test script contains a single function that is itself an entire test workflow; that is, the function inside the test script calls the functions from the global scripts and hardcodes the parameter values inside itself.
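That alternative style could be sketched like this (all names invented for illustration): the test script's one function IS the workflow, with the values hardcoded inside it.

```javascript
// Stand-in for a shared function from a global script unit
// (stubbed to keep the sketch runnable outside TestComplete).
function login(userName, password) {
  return "login as " + userName;
}

// The test script contains one function that is the whole workflow,
// with the parameter values hardcoded inside it.
function checkoutWorkflowTest() {
  var log = [];
  log.push(login("qa_user", "secret"));   // hardcoded credentials
  log.push("add item SKU-42 to cart");    // hardcoded test data
  log.push("submit order");
  return log;
}

var result = checkoutWorkflowTest();
```

The trade-off is visible in the sketch: the workflow reads top-to-bottom in one place, but running it with different data means editing or duplicating the function.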

 

I always thought that creating workflows under a function with hardcoded input values was not efficient since it is not very re-usable and defeats the purpose of a function. But maybe I am wrong.

I also thought that a function should always be a re-usable building block for tests, not a test itself. Is that correct to say? Please correct me if I am wrong. Thank you!

 

What are your thoughts on both approaches? Or, if you could share, how do you create your tests? Thanks!

 

Stay safe everyone!

 

Regards,

Sam

  • BenoitB's avatar
    BenoitB
    Community Hero

Firstly, the correct approach is whichever one makes you more efficient, not someone else's.

Beyond that, the choice of structure can depend on what kind of tests you run.

     

As for us, we use TC for RPA and for functional acceptance tests and end-user scenarios, across single or multiple apps (thick client, thin client, web, mobile, mainframe, ...).

We use only scripted TC.

     

We have a common repository for all shared libraries (like system.js, services.js, database.js, cmd.js, ...), all main software engines (like sap.js, inforM3.js, sageX3.js, axelor.js, oracleEBS.js, webcheck.js, ...), and all main integration engines (like jira.js, squash.js, testlink.js, redmine.js, ...).

All engines are closure-style objects.
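A "closure-style object" can be sketched like this (the engine and function names are made up, not BenoitB's actual code): private state lives inside the closure, and only the public API is returned.

```javascript
// Minimal closure-style engine in the spirit of sap.js, sageX3.js, etc.
// Everything here is an illustrative sketch.
function createAppEngine(config) {
  var session = null;                       // private: unreachable from outside

  function connect() {
    session = "connected:" + config.host;   // would open the real app session
    return session;
  }

  function isConnected() {
    return session !== null;
  }

  // Only the public API escapes the closure.
  return { connect: connect, isConnected: isConnected };
}

var engine = createAppEngine({ host: "erp.example" });
```

The closure keeps engine internals (sessions, handles, caches) hidden from test code, so each engine exposes only the operations the tests need.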

     

Then we have a repository of projects per customer. Each project has one or several test scenarios.

     

For example, for a customer in insurance we have one project (named after the customer) and, inside it, several project suites to test the rates engine, end-user prints, claims and policy workflows, etc.

Each project suite uses libraries from the common repository plus three files for the project suite itself; for example, for the rate engine:

1. rateEngine.js, which manages the loop over test cases read from Excel; it mainly calls the customer app engine functions with the data read from the Excel file
2. rateEngine_context.js, which manages the start/end of test sessions/cases/steps and the integration with the ticketing system and test management software
3. rateEngine_globals.js, which contains some useful globals

Inside TC we have only three test items:

1. Initialization, which calls startTestSession inside rateEngine_context
2. Execute, which calls playTest inside rateEngine; this one runs only if Initialization succeeded
3. Finalization, which calls endTestSession inside rateEngine_context
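Those three test items boil down to a simple control flow. A sketch with stubs (startTestSession, playTest, and endTestSession are the names from the post; the stub bodies are invented):

```javascript
// Stubs standing in for rateEngine_context.js and rateEngine.js.
var context = {
  started: false,
  startTestSession: function () { this.started = true; return true; },
  endTestSession: function () { this.started = false; }
};
var rateEngine = {
  played: 0,
  playTest: function () { this.played++; }  // would loop over the Excel cases
};

// Initialization -> Execute (only if init succeeded) -> Finalization
function runSuite(ctx, engine) {
  var ok = ctx.startTestSession();    // test item 1: Initialization
  if (ok) {
    engine.playTest();                // test item 2: Execute, gated on success
  }
  ctx.endTestSession();               // test item 3: Finalization, always runs
  return ok;
}

var succeeded = runSuite(context, rateEngine);
```

Keeping the gating and teardown in the project's test items (rather than inside each test) means a failed initialization skips execution but still finalizes cleanly.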

Starting a new customer or a new project suite is easy because we have a console script that creates all the required paths and files after answering a few questions (name of the customer, name of the project, whether to include web apps, which engines to include, ...).

     

If the customer uses only apps we already have in our repositories, it's 75-90% direct reuse; the remaining part is following changes in the customer's business processes.

If the customer uses new apps, the console script builds the engine with all the standard functions and we only have to add the business-process functions, so at least 50% direct reuse.


    • tristaanogre's avatar
      tristaanogre
      Esteemed Contributor

      I like to use a data/table driven model where the driver of what tests to run, the contents of the tests (individual steps and parameters), and the order in which they are executed are stored externally from the automation code itself.  I like to use CSV files for that data because they are portable and easily editable by any user with any variety of tool.

       

      The script code consists of "atomic" steps, each one a class in Javascript which has properties corresponding to the data needed to execute the step and a method for the actual execution.  A test case is then constructed in code from the CSV data to be an array of instances of the steps.  A test case is executed then simply by traversing the array and executing the objects' primary methods in the designated order.  Rather than using Test Items to build the report and execution, there's a minimal number of test items:

       

      1) Reads the data and builds the test cases from the CSV files

      2) Traverses the arrays and executes the tests

3) Cleans up the automation and runs any finalization routines.
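A hedged sketch of the approach (the class names and CSV columns are invented, not tristaanogre's actual code): each atomic step is a class whose properties hold the step's data and whose method executes it, and a test case is an array of step instances built from CSV rows.

```javascript
// Two "atomic" step classes; properties hold the data a step needs,
// execute() performs it (here it appends to a log instead of driving the AUT).
class ClickStep {
  constructor(target) { this.target = target; }
  execute(log) { log.push("click " + this.target); }
}
class TypeStep {
  constructor(target, text) { this.target = target; this.text = text; }
  execute(log) { log.push("type '" + this.text + "' into " + this.target); }
}

// Build a test case (array of step objects) from parsed CSV rows.
function buildTestCase(rows) {
  return rows.map(function (r) {
    return r[0] === "click" ? new ClickStep(r[1]) : new TypeStep(r[1], r[2]);
  });
}

// Execute by traversing the array and calling each step's primary method.
function runTestCase(steps) {
  var log = [];
  steps.forEach(function (s) { s.execute(log); });
  return log;
}

var testCase = buildTestCase([
  ["type", "UserName", "qa_user"],
  ["click", "LoginButton"]
]);
```

Changing a workflow then means editing CSV rows or one small step class, not rewriting whole test cases.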

       

Reporting is built from log entries using AppendFolder/PopFolder etc., producing a report based on the log rather than on the built-in TestComplete reports.  Others have adapted this to also write reports out to CSV, XML, or other formats internally.
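The nesting that Log.AppendFolder/Log.PopFolder produce can be illustrated with a tiny stand-in (the stub below only mimics the indentation idea; it is not the TestComplete Log API):

```javascript
// Minimal stand-in for TestComplete's Log object, just to show folder nesting.
var Log = {
  depth: 0,
  lines: [],
  AppendFolder: function (name) {   // opens a folder; later entries nest inside
    this.lines.push("  ".repeat(this.depth) + "[" + name + "]");
    this.depth++;
  },
  Message: function (msg) {
    this.lines.push("  ".repeat(this.depth) + msg);
  },
  PopFolder: function () {          // closes the current folder
    this.depth--;
  }
};

Log.AppendFolder("Test case: checkout");
Log.Message("step 1: login passed");
Log.PopFolder();
```

One folder per test case, with one entry per step, yields a readable tree-shaped report built entirely from log calls.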

       

The pro that I've found for this method is that you can have a few people whose job it is to maintain the highly object-oriented, modularized code.  If a test case requires a minor change in workflow, rather than redoing the whole test case, you only need to modify the code for the individual step/class.  The more "atomic" the steps, the easier it is to maintain.  Additionally, with sufficient documentation, ANYONE can build a set of test cases for execution without having to know anything about TestComplete or the code involved.  They just populate a set of CSV data files and click a desktop icon to run the tests in TestExecute.

      • BenoitB's avatar
        BenoitB
        Community Hero

We take the same approach  ;)

A strong, robust core of functions, minimal environment management, and plenty of room for POs, BAs, and others to build their own test cases through the data-driven method.

    • sameerjade's avatar
      sameerjade
      Frequent Contributor

Thanks BenoitB for the detailed reply! That totally makes sense: whatever works best and most efficiently for you is the way to go.

Great that there is significant direct reuse (50% or more) after the initial scripting. I guess that makes the work more efficient and cleaner! :smileyhappy: