Forum Discussion

hkim5
Staff
4 years ago

TestComplete integration with Gitlab (or in theory any CICD)

TLDR:

  1. Create or link to repository containing TestComplete Scripts
  2. Configure CI agent/test runner (self-hosted w/ auto-logon)
  3. Figure out syntax/keywords of the pipeline (usually a YAML file)
  4. Tinker with the command line arguments for the TC executable

Occasionally, I’ll get asked, “Do you integrate with X CI/CD framework?”

There is a cookie-cutter answer: “Yes, because TestComplete is command line accessible.” While this is a generic “template” answer, I figured it would be worthwhile to outline some of the generalized steps I took to create a simple, sample pipeline using GitLab CI to launch TestComplete tests.

 

1- Repository

First, I created a new repository within GitLab to contain the relevant TestComplete project suite that I planned on running. This is exactly the same as creating a GitHub repo; I just pointed to the GitLab location instead: “git remote add origin my_repo_location”.
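For reference, the full sequence from an existing project folder looks something like this (the repository URL below is a placeholder — substitute your own GitLab project path):

```shell
# Run from the folder containing the TestComplete project suite (.pjs)
git init
git remote add origin https://gitlab.com/my-group/my-tc-suite.git
git add .
git commit -m "Add TestComplete project suite"
git push -u origin master
```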

 

2- Configure CI Agent

Now, to create a shared understanding, I think it’s important to note that TestComplete is a Windows thick-client application. This means that whichever CI framework we choose to work with will need to launch our TestComplete.exe (or TestExecute, TestExecuteLite, or SessionCreator) via a self-hosted agent that has access to the executable itself (which points back to the whole “command line accessible” comment above). This is most apparent when integrating with Azure DevOps pipelines (where within our documentation you will find the requirements for that self-hosted agent bolded multiple times).

Extending this line of thought, I first “investigated” (googled) GitLab’s CI/CD runners (“agents” as I’ve been calling them). While they can run inside a Docker container or be deployed into a Kubernetes cluster, they can also be downloaded, installed, registered, and started (in that order) on any machine with a supported operating system. These runners will execute our CI/CD jobs. At install time, you can define the executor that you’d like the runners to use; conveniently for us, the shell executor can consume command line arguments. You can also tag your runners, so that you can choose which agent/executor runs each of your CI jobs (this is done later within your YAML file – explanation in part 3).
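As a rough sketch of the download/install/register/start sequence on a Windows host (after downloading gitlab-runner per GitLab’s docs), run something like the following from an elevated PowerShell prompt. The URL, token, and description are placeholders — use the values shown under your project’s Settings > CI/CD > Runners page:

```shell
# Register the runner non-interactively with a shell (PowerShell) executor
# and a tag we can reference later from the YAML file.
gitlab-runner register `
  --non-interactive `
  --url "https://gitlab.com/" `
  --registration-token "YOUR_REGISTRATION_TOKEN" `
  --executor "shell" `
  --shell "powershell" `
  --tag-list "gitlabTC" `
  --description "Windows runner with TestExecuteLite"

# Install as a Windows service and start it
gitlab-runner install
gitlab-runner start
```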

 

3- Figure out syntax/keywords of the pipeline

It turns out GitLab does use a YAML file to describe the stages/jobs/steps of the pipeline to run. This is standard in the industry (i.e. Jenkinsfile, .travis.yml, etc.), and those familiar with any of the other CI frameworks should be able to pick it up quickly. I will say that I found the syntax and the keywords of the .gitlab-ci.yml file to be even more intuitive than the rest, so kudos to whoever designed that! I probably cannot do GitLab CI justice with my current depth of understanding of its full feature set, so I will give the simplest example/explanation of how we can use it (with TestComplete).

Essentially, pipelines are composed of jobs and stages, where jobs define what to do, and stages define when to do it.
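As a minimal sketch of that relationship (job and stage names here are made up):

```yaml
# Stages define *when*: they run in this listed order, one after another.
stages:
  - build
  - test

# Jobs define *what*: each job is assigned to a stage via the `stage` keyword.
build-job:            # runs first, alone in the build stage
  stage: build
  script:
    - echo "building"

test-job-a:           # test-job-a and test-job-b share a stage,
  stage: test         # so they run in parallel once the build stage completes
  script:
    - echo "testing A"

test-job-b:
  stage: test
  script:
    - echo "testing B"
```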

A neat behavior of basic pipelines is that if multiple jobs are defined within the same stage (e.g. “test”), then all of those jobs are executed in parallel. There might be some objection to this (say certain “test” jobs depend on certain “build” jobs that finish quicker – would we really want to run everything in parallel based on stages alone? GitLab also provides DAG pipelines to target this very issue, but that’s a whole other topic in itself), but I thought a brilliant use of this built-in/default parallelism would be to leverage our Device Cloud tests (where we would want to launch all tests in parallel at the same time).
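For completeness, here is a tiny (hypothetical) sketch of that DAG-style alternative, using GitLab’s `needs` keyword so a test job waits only for the specific build job it depends on, rather than for the whole build stage:

```yaml
stages:
  - build
  - test

build-windows:
  stage: build
  script:
    - echo "build for windows"

test-windows:
  stage: test
  needs: [build-windows]   # start as soon as build-windows finishes,
  script:                  # without waiting on other build-stage jobs
    - echo "test windows build"
```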

(There’s a nice diagram of these pipeline behaviors in the same GitLab pipeline docs.)

 

4- Tinker with the command line arguments

The only thing I had trouble with while creating the YAML file was a shell quirk: I couldn’t provide the full path of TestExecuteLite.exe because it was contained within my “Program Files (x86)” directory, and Windows PowerShell doesn’t invoke a quoted path containing spaces as a command (it treats the quoted string as a value, not something to execute). I googled around for a bit for a workaround, but figured it would just be easier to add the bin directory of that executable to my environment variables, so that I could trigger the executable just by invoking “TestExecuteLite.exe” within PowerShell (the executor that our runner uses).
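Two workarounds I believe work here, in PowerShell terms (the install path below is an assumption — adjust it to your own TestExecute version and location):

```shell
# Option A: use PowerShell's call operator (&) to invoke a quoted path.
# A bare quoted string is treated as a value, not a command; & executes it.
& "C:\Program Files (x86)\SmartBear\TestExecute 15\x64\bin\TestExecuteLite.exe" MySuite.pjs /r /e

# Option B: append the bin directory to the machine-level PATH once,
# then invoke the executable by name (the approach taken in this post).
$bin = "C:\Program Files (x86)\SmartBear\TestExecute 15\x64\bin"
$machinePath = [Environment]::GetEnvironmentVariable("Path", "Machine")
[Environment]::SetEnvironmentVariable("Path", "$machinePath;$bin", "Machine")
```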

Finally, here is an example of the .gitlab-ci.yml file I created (at the root of the project directory containing my TestComplete test scripts):


build-job:
  stage: build
  tags: 
    - gitlabTC
  script:
    - echo "Hello, $GITLAB_USER_LOGIN! Running some Device Cloud jobs in parallel"

test-job1:
  stage: test
  tags: 
    - gitlabTC
  script:
    - echo "This job runs test set 3"
    - TestExecuteLite.exe $CI_PROJECT_DIR\pella_project1.pjs /r /p:pella_project1 /t:"KeywordTests|Test3" /ExportLog:$CI_PROJECT_DIR\logs\1_log.mht /e 
  artifacts:
    when: always
    expire_in: 1 week
    paths:
      - logs\*.mht
  allow_failure: true

test-job2:
  stage: test
  tags: 
    - gitlabTC
  script:
    - echo "This job runs test set 4"
    - TestExecuteLite.exe $CI_PROJECT_DIR\pella_project1.pjs /r /p:pella_project1 /t:"KeywordTests|Test4" /ExportLog:$CI_PROJECT_DIR\logs\2_log.mht /e 
  artifacts:
    when: always
    expire_in: 1 week
    paths:
      - logs\*.mht
  allow_failure: true
  
test-job3:
  stage: test
  tags: 
    - gitlabTC
  script:
    - echo "This job runs test set 5"
    - TestExecuteLite.exe $CI_PROJECT_DIR\pella_project1.pjs /r /p:pella_project1 /t:"KeywordTests|Test5" /ExportLog:$CI_PROJECT_DIR\logs\3_log.mht /e 
  artifacts:
    when: always
    expire_in: 1 week
    paths:
      - logs\*.mht
  allow_failure: true
  

clean-up-test:
  stage: .post
  tags: 
    - gitlabTC
  script:
    - echo "This job would deploy something from the $CI_COMMIT_BRANCH branch. Here it just cleans up the workspace"


Concretely, the .gitlab-ci.yml file above does the following:

  • “build-job” runs during the build stage, where it prints out the string “Hello, $GITLAB_USER_LOGIN! Running some Device Cloud jobs in parallel”. Typically you would be compiling or building things needed for the test stage
  • Since there are no other jobs in the “build” stage, we move on to the “test” stage where:
    1. test-job1, test-job2, and test-job3 are all executed at the same time. Each test-job triggers a Device Cloud test using TestExecuteLite.exe (which in our case specifies just a single keyword test to run, but this can be changed to fit your needs)
    2. For each of the jobs run, we will also receive back an artifact (regardless of execution status), which is going to be the respective mht results file generated by the test run.
    3. We also set “allow_failure: true” on each test job, so that jobs in any following stages still run regardless of test failures
  • Finally, we reach the .post stage, which runs at the very end of the pipeline. Typically one might expect to see some sort of deploy-stage job here (to staging, production, or both!); in my case, we just print out some string values using the echo command.

The pipeline runs whenever there is a commit to the master branch of this repository, and we get some nice visual confirmation of this in the “Pipelines” UI.


In sum, follow those four “generalized” steps up at the very top of this post to integrate with your CI framework of choice!

  • What are some CICD frameworks currently being used within your organization?
  • What kind of pre-test (build) stages/steps OR post-test (deploy) stages/steps are involved within your TestComplete pipelines?
  • Other than the mht/junit logs for artifacts, what other information are you currently collecting?
  • What do you encounter more? The need for parallelism, or the need for acyclic/asynchronous build stages/steps? Or parent-child pipelines?