If your execution time jumped from ~8 hrs to 13–14 hrs, something in the environment, configuration, or application has likely changed. Here are a few things worth checking:
- **Application Updates**
  - Has the AUT been updated recently? Even small UI or control changes can affect how quickly TestComplete identifies objects; a rough way to measure this is sketched below.
  - If possible, run the tests against an older build to confirm whether the slowdown originates in the app.
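If slower object identification is the suspect, you can time how long TestComplete takes to resolve a mapped object. This is a minimal sketch in TestComplete's Python scripting (where `Aliases` and `Log` are built-in globals); the alias names `MyApp` and `MainForm` are placeholders for whatever is in your own NameMapping:

```python
import time

def TimeObjectResolution():
    # Placeholder alias -- substitute an object from your own NameMapping
    start = time.time()
    obj = Aliases.MyApp.WaitAliasChild("MainForm", 10000)  # wait up to 10 s
    elapsed = time.time() - start
    if obj.Exists:
        Log.Message("MainForm resolved in %.2f s" % elapsed)
    else:
        Log.Warning("MainForm not found after %.2f s" % elapsed)
```

Run this against the old and new builds: if resolution times have clearly grown on the new build, object identification is likely where your extra hours are going.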
- **Environment / VM Setup**
  - Any recent Azure VM changes (size, SKU, region, or storage type)?
  - Windows updates or patches installed around the time the issue began?
  - Any new antivirus, endpoint protection, or background processes?
  - Note: Azure Standard SSDs have IOPS caps, so heavy read/write during logging or screen capture can trigger throttling and extend test times. A quick way to estimate your IOPS during a run is sketched below.
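To get a rough idea of whether you are pushing against the disk cap, sample the I/O counters while a test is running. A minimal sketch using the third-party `psutil` package (`pip install psutil`); compare the output against the documented limits for your specific disk SKU:

```python
import time
import psutil

def measure_disk_io(interval_s=10):
    """Sample system-wide disk I/O and estimate IOPS over an interval."""
    before = psutil.disk_io_counters()
    time.sleep(interval_s)
    after = psutil.disk_io_counters()

    ops = (after.read_count - before.read_count) + \
          (after.write_count - before.write_count)
    bytes_moved = (after.read_bytes - before.read_bytes) + \
                  (after.write_bytes - before.write_bytes)

    print(f"~{ops / interval_s:.0f} IOPS, "
          f"~{bytes_moved / interval_s / 1e6:.1f} MB/s "
          f"over the last {interval_s} s")

if __name__ == "__main__":
    measure_disk_io()
```

If the numbers hover near your disk's published cap while tests run, throttling is a likely contributor to the slowdown.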
- **TestComplete/TestExecute Configuration**
  - Check your project’s Log settings; excessive screenshots or detailed event logging can add significant overhead.
  - Review SmartWait and NameMapping behavior. If objects take longer to resolve, small delays multiply across long test suites. See the sketch after this list for a quick way to test whether logging overhead is a factor.
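As a quick experiment, you can suppress log posting around the most I/O-heavy parts of a run and see whether the total time drops. A minimal sketch in TestComplete's Python scripting; `Log.Enabled` and `Options.Run.Timeout` are standard TestComplete scripting members, while `RunHeavySteps` is a hypothetical stand-in for your own routine:

```python
def RunWithReducedOverhead():
    # Record the current auto-wait timeout (in ms) for reference
    original_timeout = Options.Run.Timeout
    Log.Message("Auto-wait timeout is %d ms" % original_timeout)

    Log.Enabled = False  # suppress log posting during the heavy section
    try:
        RunHeavySteps()  # hypothetical: your screenshot- or I/O-heavy steps
    finally:
        Log.Enabled = True  # always restore logging
        Options.Run.Timeout = original_timeout

    Log.Message("Heavy section finished; logging restored")
```

If the run is noticeably faster with logging suppressed, trim screenshot capture and event verbosity in the project’s Log settings rather than leaving logging off permanently.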
- **Resource Monitoring**
  - During execution, use Task Manager, PerfMon, or Azure Monitor to track CPU, memory, and disk usage.
  - Sustained high utilization often points to throttling or contention at the VM level. A lightweight way to log utilization alongside the run is sketched below.
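If you would rather capture a timeline you can correlate with the test log afterwards, a small background sampler works too. A sketch using `psutil` again; the output path is a placeholder:

```python
import csv
import threading
import time
import psutil

def sample_utilization(path, stop_event, interval_s=30):
    """Append CPU and memory utilization rows to a CSV until stopped."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "cpu_pct", "mem_pct"])
        while not stop_event.is_set():
            writer.writerow([
                time.strftime("%Y-%m-%d %H:%M:%S"),
                psutil.cpu_percent(interval=1),  # averaged over 1 s
                psutil.virtual_memory().percent,
            ])
            f.flush()
            stop_event.wait(interval_s)  # wakes early if stop is set

# Usage: start before the test run, call stop.set() when it finishes
stop = threading.Event()
threading.Thread(target=sample_utilization,
                 args=("run_metrics.csv", stop), daemon=True).start()
```

A sustained plateau near 100% CPU or a memory climb that never recovers in the CSV is a strong hint that the VM, not the tests, is the bottleneck.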
- **Baseline Comparison**
  - Run a small subset of tests on a local workstation or on a freshly created VM with identical specs.
  - If those runs complete in normal time, the slowdown is most likely tied to the Azure infrastructure or resource configuration. A simple way to time an identical command-line run on both machines is sketched below.
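To make the comparison repeatable, drive TestExecute from the command line and record wall-clock time on each machine. The `/run`, `/project:`, `/test:`, `/SilentMode`, and `/exit` switches are documented TestComplete/TestExecute command-line options; the executable path, suite path, and test name below are placeholders you will need to adjust:

```python
import subprocess
import time

# Placeholders -- adjust to your installation, suite, and test names
TEST_EXECUTE = r"C:\Program Files (x86)\SmartBear\TestExecute 15\x64\Bin\TestExecute.exe"
SUITE = r"C:\Tests\MySuite.pjs"

start = time.time()
subprocess.run([
    TEST_EXECUTE, SUITE,
    "/run",
    "/project:MyProject",        # placeholder project name
    "/test:KeywordTests|Smoke",  # placeholder subset to run
    "/SilentMode",
    "/exit",
], check=False)
print(f"Run took {time.time() - start:.0f} s")
```

Running the same subset on the Azure VM and the baseline machine gives you a like-for-like number instead of comparing full 8-to-14-hour runs.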
If you can share the following, it will be easier to narrow down the cause:
- TestComplete/TestExecute version numbers
- Azure VM SKU (e.g., D4s_v3)
- Whether the AUT or OS was updated recently