I have a case open on this, but I am querying the TestComplete community as well.
Distributed Testing (yes, I know it is deprecated).
But there are still users out there, like myself, who rely heavily on it until they can migrate elsewhere (in my case, to Azure DevOps).
I am asking TestComplete users whether anyone has seen this, and for possible areas to look into for debugging and correcting it.
I have Distributed Testing set up with one Master and 5 Slaves.
I use a Win10 Hyper-V setup.
TC/TE 14.71 (also tried 14.80).
All of my Distributed Testing has been running fine until lately.
Microsoft patch affecting me ?
Hyper-V cluster\node affecting me ?
Networking issue ?
The issue: after I start Distributed Testing, either from a TeamCity job or
manually from the Master, something unknown occurs while the Slaves are
executing their scripts that causes the Master to tell all the Slaves to
stop all execution.
Looking at the logs from all the Slaves, the stop is done at the same timestamp.
I am NOT manually doing a stop from the Master.
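Since every Slave log shows the stop at the same timestamp, one quick sanity check is to confirm programmatically that the stops really are coordinated (one command from the Master) rather than independent failures. A minimal sketch, assuming a hypothetical log line format with a leading `YYYY-MM-DD HH:MM:SS` timestamp (adjust the parsing to whatever your Slave logs actually contain):

```python
from datetime import datetime

def stop_spread_seconds(stop_lines, fmt="%Y-%m-%d %H:%M:%S"):
    """Return the spread (max - min, in seconds) between the stop
    timestamps found at the start of each Slave's stop log line."""
    times = [datetime.strptime(line[:19], fmt) for line in stop_lines.values()]
    return (max(times) - min(times)).total_seconds()

# Hypothetical log lines; pull the real ones from each Slave's log.
stops = {
    "Slave1": "2021-03-15 14:32:07 Execution stopped by the master computer",
    "Slave2": "2021-03-15 14:32:07 Execution stopped by the master computer",
    "Slave3": "2021-03-15 14:32:08 Execution stopped by the master computer",
}

# A spread of a second or two points at a single coordinated stop command;
# widely scattered times would suggest the Slaves failed independently.
print(stop_spread_seconds(stops))  # -> 1.0
```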
This issue is frustrating because I used to receive completed script executions, but now
my Distributed Testing runs are incomplete, missing scripts at the end.
This happens with runs of 12 scripts and with runs of over 100 scripts, and the stop comes at random times.
I am trying to figure out a way to monitor the port the Master uses to talk to the Slaves, to see whether something is being sent to stop them. Not sure how to do this just yet. Suggestions?
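The low-friction option is a packet capture on the Master (e.g. Wireshark) filtered to the port your Network Suite is configured to use. As an alternative, a small logging TCP proxy can sit between the Master and one Slave and record every byte sent in each direction. This is only a stdlib sketch: the port numbers, the host name, and the idea of pointing the Master at the proxy instead of the Slave directly are all assumptions you would adapt to your own Network Suite settings:

```python
import socket
import threading

def pipe(src, dst, label, log):
    """Forward bytes src -> dst, recording each chunk under `label`."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        log.append((label, data))
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)  # propagate the half-close
    except OSError:
        pass

def run_proxy(listen_port, target_host, target_port, log, once=False):
    """Listen on listen_port and relay each connection to the target,
    logging the bytes flowing in both directions into `log`."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", listen_port))
    srv.listen(5)
    while True:
        client, _ = srv.accept()
        upstream = socket.create_connection((target_host, target_port))
        a = threading.Thread(target=pipe, args=(client, upstream, "master->slave", log))
        b = threading.Thread(target=pipe, args=(upstream, client, "slave->master", log))
        a.start(); b.start()
        if once:  # handle a single connection, then exit
            a.join(); b.join()
            client.close(); upstream.close()
            srv.close()
            return

# Hypothetical usage: point the Master at port 16095 on this machine and
# relay to the real Slave port (both numbers are placeholders):
# traffic_log = []
# run_proxy(16095, "slave1.example.local", 6095, traffic_log)
```

Dumping `traffic_log` with timestamps around the moment of the coordinated stop should show whether the Master actually sends a stop command, or whether the connection simply drops (which would point back at the Hyper-V or networking questions above).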
Hope you are well.
Are the errors the same when you kick off manually and from TeamCity?
Are you able to share the console logs?
That may be useful for community members to see so they can advise.
I am not well versed in this, but it would be interesting to hear other views on the matter.