Champions Articles

The Pain Point (a story we’ve all lived)

It’s Friday afternoon, and your team is in the eleventh-hour push to ship a release. Everything has been working all week: the build is green, the smoke tests passed, and you’re finally feeling the relief of a job well done.

Then it happens. A test suite that has been stable for weeks suddenly fails in dozens of places, and the failures don’t make sense. You didn’t change anything. The application didn’t change. The environment didn’t change, or so you thought. But somewhere in the stack, something moved. Maybe the browser auto-updated overnight. Maybe a Windows patch was applied. Maybe the CI agent updated. Maybe the automation tool itself installed a minor version update without you realizing.

Now the release is blocked. The team is waiting. The deadline is looming. And you’re left in the middle of a nightmare: “Everything was fine… and now nothing works.”

That’s not an automation tool problem. That’s not a browser problem. That’s not even an automation script problem. It’s a dependency chain problem, and it’s the most common cause of “suddenly broken” automation.

The Real Issue: You’re Managing a Chain, Not a Single Product

Automation tools are only one link in a longer chain that includes:

- OS or security updates
- Browsers and runtimes
- Test tools (TestComplete, ReadyAPI, etc.)
- Drivers, plugins, integrations
- The application under test
- CI/CD agents and infrastructure

When one link moves ahead, the chain breaks. So a “successful upgrade” isn’t just installing a new version; it’s ensuring the rest of the chain can support it.

The Goal: Upgrade Intentionally, Not Accidentally

Upgrades should be treated as controlled change, not something that “just happens.” The upgrade decision should be based on:

- Need: Do we require a feature or fix?
- Risk: What can break?
- Environment readiness: Is the stack aligned?
- Rollback plan: Can we recover quickly?

If the answer is “no” or “not sure,” it’s perfectly valid to wait.
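The four questions above can even be encoded as a tiny decision helper. A minimal sketch in JavaScript follows; the function name, field names, and answer values are invented for illustration, not part of any tool or standard.

```javascript
// Hypothetical helper: turns the four upgrade questions into a go/wait call.
// Anything short of a clear "yes" (including "not sure") means wait.
function upgradeDecision({ need, riskUnderstood, environmentReady, rollbackPlan }) {
  const answers = [need, riskUnderstood, environmentReady, rollbackPlan];
  const allYes = answers.every((a) => a === "yes");
  return allYes ? "upgrade" : "wait";
}

// Example: the team needs a fix but has no confirmed rollback plan yet.
const decision = upgradeDecision({
  need: "yes",
  riskUnderstood: "yes",
  environmentReady: "yes",
  rollbackPlan: "not sure",
});
console.log(decision); // "wait"
```

The point of the sketch is the asymmetry: a single uncertain answer is enough to defer, which matches the article’s advice that waiting is a valid outcome.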
Early Adoption: Not a Risk, but a Strategy

Many teams avoid early adoption entirely, waiting for others to report issues. That’s passive. A stronger approach is to create internal early adopters intentionally. What that means:

- A pilot environment is designated for early upgrades
- Early adoption is expected, not accidental
- Findings are documented and shared

This turns early adoption into risk discovery, not risk exposure.

The One True Anchor: Test Environments

Across all team sizes, the same rule applies: if you don’t control your test environment, you don’t control upgrade risk. The test environment is the contract that defines what “works.”

Environment best practices:

- Lock OS, browser, and runtime versions per environment
- Disable auto-updates where possible
- Treat environments as managed assets
- Keep separate sandbox/pilot, test/QA, and CI/CD execution environments

When environments are stable, upgrades become predictable.

Scaling Upgrade Practices by Team Size

Upgrade strategy changes with organizational scope. Here’s how:

Stand-alone automation users
- You are the early adopter
- Backups are essential
- A spare VM can be your pilot environment
- Rollback must be quick and simple

Small teams
- Use a pilot machine
- Freeze browser versions used in automation
- Stagger upgrades across the team
- Keep a lightweight shared change log

Large teams / enterprises
- Environments are contracts
- CI/CD agents should upgrade last
- Forced upgrades (security, browsers) are inevitable
- Change visibility matters more than speed

Browser Updates: The Frequent Weak Link

Browsers often update faster than automation tools officially support. Mitigation strategies:

- Pin browser versions used in automation
- Validate browser upgrades separately from tool upgrades
- Expect compatibility lag
- Use community feedback to spot issues early

Browser drift is predictable, and therefore manageable.
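Pinned versions only help if drift is actually detected. Here is a minimal sketch of a drift check, assuming the team keeps a manifest of approved versions per environment; the component names and version strings below are made up for illustration.

```javascript
// Hypothetical pinned-version manifest for one environment.
const pinned = {
  chrome: "126.0.6478.127",
  testcomplete: "15.72",
  node: "20.11.1",
};

// Compare what is actually installed against the manifest and
// return the names of any components that have moved.
function findDrift(pinnedVersions, actualVersions) {
  return Object.keys(pinnedVersions).filter(
    (name) => actualVersions[name] !== pinnedVersions[name]
  );
}

// Example: the browser auto-updated overnight.
const actual = {
  chrome: "127.0.6533.72",
  testcomplete: "15.72",
  node: "20.11.1",
};
console.log(findDrift(pinned, actual)); // [ 'chrome' ]
```

Run as a pre-flight step before a test run, a check like this turns a mystery failure (“nothing changed!”) into a named suspect before the first test executes.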
A Simple Upgrade Flow That Works Everywhere

Here’s a repeatable pattern that scales from solo users to large teams:

1. Review version history and known issues
2. Back up projects and configurations
3. Upgrade in a pilot environment
4. Run smoke tests first
5. Observe and document findings
6. Roll out gradually
7. Upgrade CI/CD last

This flow keeps upgrades controlled and prevents cascading failures.

How to Start (if you have no process yet)

If your team has never formalized an upgrade strategy, start small. You don’t need heavy change management, just a lightweight, repeatable approach.

First 30 days:
- Choose a pilot environment (VM, spare machine, or sandbox)
- Create a simple change log (a spreadsheet is fine)
- Define one smoke test suite that represents core workflows
- Disable auto-updates where possible

Next 60 days:
- Add an upgrade checklist to your workflow
- Upgrade one component at a time (browser, tool, CI)
- Share findings with the team
- Decide on a cadence (monthly, quarterly, or as-needed)

By 90 days:
- Your process becomes repeatable
- You’ll have real data on what breaks first
- You’ll stop being surprised by “suddenly broken” automation

Printable Checklist: Upgrade Readiness

- Review release notes and version history
- Confirm the need for the upgrade (feature, fix, security, compatibility)
- Back up all projects and configurations
- Identify a pilot environment
- Disable auto-updates where possible
- Run smoke tests in the pilot
- Document findings and workarounds
- Stagger the rollout to the team
- Upgrade CI/CD last

Final Thought

The goal isn’t to always upgrade first, or to always upgrade last. The goal is to upgrade intentionally, with visibility, feedback, and control. When upgrades are planned around environments, team size, and deliberate early adoption, automation becomes resilient instead of fragile.
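The lightweight change log recommended for the first 30 days can start as a few plain records, small enough to live in a JSON file or a spreadsheet export. A minimal sketch in JavaScript; the field names and example entries are my own invention, not a prescribed schema.

```javascript
// A shared change log: one entry per upgrade attempt, appended in order.
const changeLog = [];

// Record a single component upgrade and its outcome, e.g.
// "ok", "rolled back", or "workaround needed".
function logUpgrade(component, fromVersion, toVersion, outcome) {
  const entry = {
    date: new Date().toISOString().slice(0, 10), // e.g. "2025-06-13"
    component,
    fromVersion,
    toVersion,
    outcome,
  };
  changeLog.push(entry);
  return entry;
}

// Hypothetical entries after a pilot round.
logUpgrade("TestComplete", "15.71", "15.72", "ok");
logUpgrade("Chrome", "126", "127", "workaround needed");
console.log(changeLog.length); // 2
```

Even this much is enough to answer the two questions that matter during a "suddenly broken" incident: what moved recently, and what happened the last time it moved.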
2 months ago
I’m a 10-year TestComplete veteran in the United States, working for a mid-sized manufacturing company. We develop and maintain our enterprise software in-house. I started out as a test automation developer working with internal order management applications. I am now a Development Analyst and Agile Project Owner for that same team. Since I did most of my automation work there, the transition was a natural one.

Test Case Automation

We started using Keyword Tests about 11 years ago, about a year before I joined the company. We then moved to scripting with VBScript and C#, and eventually settled on JavaScript. Our developers primarily used TypeScript, so JavaScript made sense from an in-house support and learning-curve perspective for both the team and the department. We dabbled in Python several years ago, but at the time it was still fairly new to our team, so we stayed with JavaScript. Now I’m learning Python again. It has become more popular with our developers and in the broader community. We may move to Python at some point, but for now all of our scripts remain in JavaScript.

Our tests run through DevOps pipelines, so I’m familiar with the process, although it was never mine to own or configure. We use Git with DevOps for source control. I love Git; it has saved me more than once!

Other SmartBear Products

I’ve also experimented with integrating SoapUI and TestComplete, as well as running trials of ReadyAPI and Zephyr. We don’t currently do a large amount of automated API testing, but I expect that may change as we continue modernizing and moving away from legacy applications. Test case management is done in Azure DevOps, mostly due to the tight integration with the DevOps pipelines and our long history with Azure DevOps. That said, I like Zephyr and ReadyAPI for their seamless integration on the QA side. We currently have a smooth operation with TestComplete and Azure DevOps, and everyone is familiar with it.
Upgrades

Over the years, I’ve seen many TestComplete upgrades, and our team has developed a workflow to mitigate issues. We incorporate many of the concepts outlined in Hassan_Ballan's excellent article, Surviving Upgrades: How to Manage Change Across Test Automation Toolchains. For the most part, we are not early adopters. Our test lead installs the latest version he considers “stable” based on version numbers, release notes, and research in the forum. He typically runs it against multiple projects for a few weeks before rolling it out to the rest of the team. These are our core practices.

AI

Yes, I use AI. In fact, I’ll probably use it to proof this post. 🔍 Our organization provides licensed AI tools and actively encourages responsible use. I use AI as a pair programmer and productivity assistant. Sometimes it’s a bit of a love-hate relationship. I don’t typically copy and paste code directly, but it gives me solid ideas and helps me quickly find relevant information.

I don’t script in TestComplete as much as I used to, but I still write the occasional utility script: things like speed-checking data entry, testing repeatability, validating order limits, or generating test data. These are tasks that wouldn’t fit neatly into a regression suite or would be too time-consuming to do manually. As an application owner and development analyst, I also use AI to help collect information from work tracking and collaboration systems to support backlog grooming and to better understand how our software is being used. I’ve worked with tools like Claude, ChatGPT, and Grok. Each tends to be better suited for different purposes.

Certifications & Learning

I don’t have a long list of fancy certifications. I’ve mostly learned on the job and through online courses. My LinkedIn profile reflects a number of course completions and certifications, including many from SmartBear.
I don’t typically connect with just anyone — my network consists of people I know personally, follow closely, or have built a long-standing professional relationship with. These days, it’s important to be intentional about who you associate yourself with online. If you’re a Champion, let’s hear your story! I may even start linking to mine as a tagline in my forum posts. Cheers!
18 days ago
