Forum Discussion

TJ33's avatar
TJ33
Occasional Contributor
5 years ago
Solved

Project suite level persistent variable to control environments. Correct approach?

I'm about two months into TC and a new job. I'm starting to find a groove with TC, but I wanted to ask the community some questions. I've spent a good amount of time reading about distributed testing but wanted to get some more experienced opinions.

1. Management would like to run the automation on different environments. A section of my URL controls which environment I hit. Is my approach correct in creating a project suite level persistent variable to substitute for that portion of the URL?

2. Sooner rather than later we will be switching to network suites for distributed testing; does the approach above work in that environment?

3. If anyone has switched from straight keyword tests to distributed tests, did you see a reduction in the time it took to run the tests? I ask because I know distributed is not the same as parallel.

4. A little history: the company is new to automation and QA in general. I have a team member who is worried sick about 'when' the automation takes 24 to 48 hours to run. How does that affect the build process? How do we kick off the tests again? Is it a viable build if the automation doesn't pass? What are the thresholds? And so on. All valid questions, I admit, but they haven't even decided on a CI/CD tool yet. How do I address his fears without stepping on his toes?

 

Appreciate any and all experienced feedback.


5 Replies

  • tristaanogre's avatar
    tristaanogre
    Esteemed Contributor

    TJ33 wrote:

    I'm about two months into TC and a new job. I'm starting to find a groove with TC, but I wanted to ask the community some questions. I've spent a good amount of time reading about distributed testing but wanted to get some more experienced opinions.

    1. Management would like to run the automation on different environments. A section of my URL controls which environment I hit. Is my approach correct in creating a project suite level persistent variable to substitute for that portion of the URL?

    2. Sooner rather than later we will be switching to network suites for distributed testing; does the approach above work in that environment?

    3. If anyone has switched from straight keyword tests to distributed tests, did you see a reduction in the time it took to run the tests? I ask because I know distributed is not the same as parallel.

    4. A little history: the company is new to automation and QA in general. I have a team member who is worried sick about 'when' the automation takes 24 to 48 hours to run. How does that affect the build process? How do we kick off the tests again? Is it a viable build if the automation doesn't pass? What are the thresholds? And so on. All valid questions, I admit, but they haven't even decided on a CI/CD tool yet. How do I address his fears without stepping on his toes?

     

    Appreciate any and all experienced feedback.


    1) You can use a persistent variable; that works pretty well. However, to have that variable configured differently on each machine, you need to actually open the project in TestComplete on that machine, make the change, and then save. We have a similar situation here, where we have multiple environments we may need to run the tests on. To facilitate that, our SQL database structure of multiple tables includes an "environment" value that, when we start an automation, is read into a variable used throughout the run. It makes the tests more portable: rather than manually reconfiguring each machine, we can just say "run environment X on machine Q" in our SQL structure and the code takes over. You can achieve something similar with INI files, where you wouldn't need TestComplete to edit the value, just Notepad to edit the contents of the INI file; when your automation starts, it reads the value in and executes.
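    A minimal sketch of the INI-file variant of this idea. The file name, section name, and URL pattern below are hypothetical placeholders, not anything TestComplete defines:

```python
import configparser

def load_environment(ini_path="run_config.ini"):
    """Read the target environment name from a per-machine INI file.

    The file can be edited with Notepad on each machine; no need to
    open the project in TestComplete. Falls back to a default when
    the file or key is missing (default value is illustrative)."""
    parser = configparser.ConfigParser()
    if not parser.read(ini_path):
        return "staging"
    return parser.get("Run", "environment", fallback="staging")

def build_url(environment):
    # Substitute the environment segment into the base URL
    # (hypothetical URL pattern).
    return f"https://{environment}.example.com/app"
```

    At automation start-up, the test driver would call `load_environment()` once and use the result everywhere the URL is built.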

     

    2) This is why I suggested my point 1... If you're doing distributed testing, especially if you're using TestExecute on the slave machines, you need some way of altering the environment value on a slave machine without opening TestComplete. Reading the value in from an external data source (again, SQL is how we're doing it now, but I've used INI files in the past) is my suggested way of doing so.
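    A sketch of the SQL lookup, using an in-memory SQLite database and a hypothetical `machine_environments` table as a stand-in for the multi-table SQL structure described above:

```python
import sqlite3

def environment_for_machine(conn, hostname):
    """Look up which environment the given machine should run against.

    In a real setup, hostname would come from the slave machine itself
    (e.g. socket.gethostname()) and conn would point at a shared server."""
    row = conn.execute(
        "SELECT environment FROM machine_environments WHERE hostname = ?",
        (hostname,),
    ).fetchone()
    return row[0] if row else None

# In-memory database standing in for the real SQL server:
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE machine_environments (hostname TEXT, environment TEXT)"
)
conn.execute("INSERT INTO machine_environments VALUES ('machine-q', 'env-x')")
```

    With this in place, "run environment X on machine Q" is a single row update in the database; no project files need to be touched on the slaves.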

     

    3) Well... distributed IS kind of parallel. You can launch jobs on multiple machines that run independent, atomic tests that do not interact with one another. So, let's say you have 100 tests that take about 4 hours to run on a single node. If you split them equally across 4 machines, you now have 25 tests running on each machine, and you've cut your runtime to roughly a quarter, about an hour. Now, it's not always quite so clean a calculation (some tests take longer than others), but you should see better throughput of test cases with them spread across more than one workstation.
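    A back-of-the-envelope sketch of that split. The test names and per-test timing are made-up numbers purely for illustration:

```python
def partition(tests, machines):
    """Split a list of tests round-robin across the given number of machines."""
    return [tests[i::machines] for i in range(machines)]

def estimated_wall_time(buckets, minutes_per_test):
    # Total wall time is bounded by the busiest machine's bucket,
    # assuming every test takes roughly the same time.
    return max(len(bucket) for bucket in buckets) * minutes_per_test

tests = [f"test_{i:03d}" for i in range(100)]
buckets = partition(tests, 4)  # 25 tests per machine
```

    The caveat in the text shows up here too: if one bucket happens to collect the slow tests, the wall time is set by that bucket, so in practice you balance by expected duration rather than by count.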

     

    4) Keep in mind that what you're running in TestComplete are UI-driven functional tests. While, technically, they DO run faster than a human could do it, they are going to take some time to run. My suggestion would be to pick "critical" tests that you run with every build... a small subset of "if this fails, everything dies" kind of tests. That will keep your CI pipeline running without too much of a bottleneck. Then, when you're ready to take a release candidate to production, that's when you run your full suite, not necessarily as part of the build process but as part of the release process. It's a different phase of your release process that requires a larger set of tests run. This way, you can say that if that "critical" subset fails then, yes, the build fails. But if the critical subset does NOT fail, then the build passes, with the understanding that there may be more work that needs to be done before release, when the full suite of tests is executed.
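    The critical-subset idea could be sketched like this. The test names, tags, and stage names are all hypothetical; the point is only the shape of the gate:

```python
# Hypothetical test registry: name -> set of tags.
TESTS = {
    "login_smoke": {"critical"},
    "checkout_smoke": {"critical"},
    "annual_report_e2e": {"full"},
    "profile_edit": {"full"},
}

def select_tests(stage):
    """Per-build runs only the critical subset; the release stage runs everything."""
    if stage == "build":
        return sorted(name for name, tags in TESTS.items() if "critical" in tags)
    return sorted(TESTS)

def build_passes(failed_tests):
    # The build gate only considers failures in the critical subset;
    # full-suite failures are handled later, in the release phase.
    critical = set(select_tests("build"))
    return not (set(failed_tests) & critical)
```

    A failing full-suite test then doesn't block the build; it becomes input to the release decision instead.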

     

    Hope these ideas/thoughts help.

    • AlexKaras's avatar
      AlexKaras
      Champion Level 3

      Hi,

       

      Actually, Robert perfectly replied to all your questions.

       

      Just a note that I would like to add for question #4:

      It is my opinion that the higher a test sits in the testing pyramid, the more its result shifts from a binary 'continue if passed / stop if failed' signal toward information for further consideration.

      The majority of TestComplete tests sit at the top of the testing pyramid, and thus their results usually should be reviewed and considered manually.

      For example, a simple basic test that verifies whether a complex tested application can be started and closed can be considered an integration-type test and can be included in the CI/CD pipeline. Obviously, if the application cannot start, there is little to no reason to move it further through the pipeline, and the build process most probably should be stopped.

      On the other hand, consider a complex high-level functional end-to-end test for bookkeeping software. After performing a lot of actions to follow the human user's business flow, this test reveals some problem with the annual report. Is this a reason to stop CI/CD, testing, and other processes and abandon delivery of the software? It may depend on a lot of factors. For example, if you are releasing at the end of the year, when all your customers will prepare this annual report for their tax authorities, then obviously this is a blocker. But if you are releasing in June, the release includes some new useful feature, and you know that one more release is planned in a month, then your management may well decide to release.

       

      Another example:

      Assume you are working in a transport company's garage.

      You are in the process of replacing a headlight in a car. The headlight to be installed turns out not to work. Is this a blocker? Yes. Should you continue with other checks? No, you may stop.

      Now you are performing the daily routine check on a car before letting it leave the garage, and you discover that a headlight is not working. Is this a blocker? Maybe. But maybe not. Should you continue with other checks? Yes, in order to get the whole picture of the state of the given car.

      When you finish all the checks, you will be able to provide the manager with the results, enabling him/her to make a decision. For example: this car cannot be sent out of the city or driven after sunset, but it is OK to use it within the city during the daytime if we have a lot of orders and badly need every car. Additionally, we can schedule a mechanic to look at the car in the evening so it is ready by the next day.
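      The underlying decision logic could be sketched as follows. The levels and rules here are illustrative only, not anything TestComplete enforces:

```python
def gate_decision(test_level, failed, context=None):
    """Decide what a failed check means at each level of the testing pyramid.

    Low-level checks gate the pipeline automatically; high-level
    end-to-end failures become information for a human decision,
    unless the context marks them release-critical."""
    context = context or {}
    if not failed:
        return "continue"
    if test_level in ("unit", "integration"):
        return "stop"  # binary: the build is broken, halt the pipeline
    # End-to-end failures feed the release decision instead of
    # stopping automatically.
    if context.get("release_critical"):
        return "stop"
    return "review"
```

      The headlight example maps onto the same shape: the same failed check is a hard stop in one context and just one more data point in another.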

       

      • TJ33's avatar
        TJ33
        Occasional Contributor

        Much appreciated Alex.

        Soaking it all in!

    • TJ33's avatar
      TJ33
      Occasional Contributor

      Really appreciate you taking the time to answer those, tristaanogre.
      I know they are rather subjective.