Forum Discussion

TJ33
Occasional Contributor
5 years ago
Solved

Project suite level persistent variable to control environments. Correct approach?

  • tristaanogre
    5 years ago

    TJ33 wrote:

    I'm about 2 months into TC and a new job.  Starting to find a groove with TC but wanted to ask the community some questions.  I've spent a good amount of time reading about distributed testing but wanted to get some more experienced opinions.

    1. Management would like to run the automation on different environments.  A section of my url obviously controls which env I hit.  Is my approach correct in creating a project suite level persistent variable to substitute for that portion of the url?

    2. Sooner rather than later we will be switching to network suites for distributed testing. Does the approach above work in that environment?

    3. If anyone has switched from straight keyword tests to distributed tests, did you see a performance increase in the time it took to run the tests? I ask because I know distributed is not the same as parallel.

    4. A little history: the company is new to automation and QA in general. I have a team member who is worried sick about what happens when the automation takes 24 to 48 hours to run. How does that affect the build process? How do we kick off the tests again? Is it a viable build if the automation doesn't pass? What are the thresholds? And so on. All valid questions, I admit, but they haven't even decided on a CI/CD tool yet. How do I address his fears without stepping on his toes?


    Appreciate any and all experienced feedback.


    1) You can use the persistent variable; that works pretty well. However, to have that variable configured differently on each machine, you need to actually open the project in TestComplete on that machine, make the change, and then save. We have a similar situation here with multiple environments we may need to run the tests on. To facilitate that, our SQL database structure includes an "environment" value; when an automation starts, it reads that value into a variable that is then used throughout the run. That makes the tests more portable because, rather than manually reconfiguring each machine, we can just say "run environment X on machine Q" in our SQL structure and the code takes over. You can achieve something similar with INI files, where you wouldn't need TestComplete to edit the value, just Notepad: when your automation starts, it reads the value in and executes.
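    Something along these lines is what I mean. This is only a rough sketch: the INI path, the section/key names, the environment-to-URL mapping, and the BaseUrl project suite variable are all placeholders you'd swap for your own.

    ```python
    # Rough sketch: read the target environment from a per-machine INI file at the
    # start of the run and push the matching base URL into a project suite variable.
    # The path, section/key names, and the "BaseUrl" variable are hypothetical.
    import configparser

    CONFIG_PATH = r"C:\AutomationConfig\run.ini"   # editable with Notepad on each machine

    # Hypothetical mapping of environment names to base URLs.
    ENV_URLS = {
        "dev":     "https://dev.example.com",
        "staging": "https://staging.example.com",
        "prod":    "https://www.example.com",
    }

    def read_environment():
        """Return the environment name from the INI file, defaulting to 'dev'."""
        parser = configparser.ConfigParser()
        parser.read(CONFIG_PATH)
        return parser.get("run", "environment", fallback="dev")

    def apply_environment():
        env = read_environment()
        base_url = ENV_URLS.get(env, ENV_URLS["dev"])
        # ProjectSuite and Log are TestComplete runtime objects; BaseUrl is a
        # persistent project suite variable assumed to already exist.
        ProjectSuite.Variables.BaseUrl = base_url
        Log.Message("Running against '%s' (%s)" % (env, base_url))
    ```

    Call something like apply_environment() at the very start of the run and build your test URLs from that one variable; changing environments on a machine is then just a one-line INI edit.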


    2) This is why I suggested the approach in point 1... If you're doing distributed testing, especially if you're using TestExecute on the slave machines, you need some way of altering the environment value on the slave machine without opening TestComplete. Reading the value in from an external data source (again, SQL is how we're doing it now, but I've used INI in the past) is my suggested way of doing so.
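    If you go the SQL route instead, the same idea works keyed off the machine name, so each slave just looks up its own row. Rough sketch only; sqlite3 here is a stand-in for whatever database you actually use, and the table/column names are made up:

    ```python
    # Sketch of the SQL variant: each machine looks up which environment it should
    # run against, keyed by its host name. sqlite3 is only a stand-in; the database
    # path and the machine_config table/columns are hypothetical.
    import socket
    import sqlite3

    DB_PATH = r"C:\AutomationConfig\run_config.db"

    def environment_for_this_machine():
        host = socket.gethostname()
        with sqlite3.connect(DB_PATH) as conn:
            row = conn.execute(
                "SELECT environment FROM machine_config WHERE machine_name = ?",
                (host,),
            ).fetchone()
        return row[0] if row else "dev"   # fall back to a default environment
    ```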


    3) Well... distributed IS kind of parallel. You can launch jobs on multiple machines that run independent, atomic tests that do not interact with one another. So, let's say you have 100 tests that, on a single node, take about 4 hours to run. If you split them equally across 4 machines, you now have 25 tests running on each machine and you've knocked your runtime down to roughly a quarter. It's not always quite so clean a calculation (some tests take longer than others), but you should see better throughput of test cases with them spread across more than one workstation.
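    Just to make the arithmetic concrete, here's a toy illustration of splitting a fixed list of tests round-robin across machines (the test names and machine count are invented for the example):

    ```python
    # Toy illustration of distributing tests: a round-robin partition of 100 test
    # names across 4 machines. Names and counts are made up for the example.

    def partition(tests, machine_count):
        """Assign tests to machines round-robin; returns one list per machine."""
        buckets = [[] for _ in range(machine_count)]
        for i, test in enumerate(tests):
            buckets[i % machine_count].append(test)
        return buckets

    tests = ["Test%03d" % i for i in range(1, 101)]   # 100 tests
    for n, bucket in enumerate(partition(tests, 4), start=1):
        print("Machine %d runs %d tests" % (n, len(bucket)))

    # If 100 tests take ~4 hours on one node, ~25 tests per machine brings the
    # wall-clock time down to roughly an hour, uneven test lengths aside.
    ```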


    4) Keep in mind that what you're running in TestComplete are UI-driven functional tests. While, technically, they DO run faster than a human could do it, they are going to take some time to run. My suggestion would be to pick "critical" tests that you run with every build... a small subset of "if this fails, everything dies" kind of tests. That will keep your CI pipeline running without too much of a bottleneck. Then, when you're ready to take a release candidate to production, that's when you run your full suite, not necessarily as part of the build process but as part of the release process. It's a different phase of your release process that requires a larger set of tests to run. This way, you can say that if the "critical" subset fails then, yes, the build fails. But if the critical subset does NOT fail, the build passes, with the understanding that there may be more work to do before release when the full suite of tests is executed.
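    One simple way to express that split is to tag tests as critical or not and pick the set based on a run mode the CI tool (or the same INI file) hands you. Sketch only; the test names, tags, and run-mode values are placeholders:

    ```python
    # Sketch of "critical subset per build, full suite per release": pick which
    # tests to run from a tagged catalog based on a run-mode value. Everything
    # here (names, tags, modes) is hypothetical.

    TEST_CATALOG = {
        "Login_Smoke":        {"critical": True},
        "Checkout_Smoke":     {"critical": True},
        "ReportExport_Full":  {"critical": False},
        "AdminSettings_Full": {"critical": False},
    }

    def tests_for(run_mode):
        """Return the test names to execute for the given run mode."""
        if run_mode == "build":      # every CI build: fast, must-pass subset
            return [name for name, meta in TEST_CATALOG.items() if meta["critical"]]
        return list(TEST_CATALOG)    # release candidate: run everything

    print(tests_for("build"))    # critical smoke tests only
    print(tests_for("release"))  # the full suite
    ```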


    Hope these ideas/thoughts help.