jhohenstein (New Contributor), 14 years ago
Statistics Problems Running 20+ Agents
I have loadUI 2.1.0. We are attempting to generate a large amount of load; at the moment we are using 20 EC2 instances. These instances have been manually updated to use the 2.1.0 loadUI agent, because we were having the problem(s) described below with the 2.0.2 agent and runner.
We are running a SoapUI test with around 25 test steps. When we run it, we successfully generate load, but we hit a series of problems:
0) At first, we were getting errors logged locally (ewwww) on the agent machines, caused by an OutOfMemoryError coming out of Hibernate. I "solved" this by adding some command-line parameters to the JVM in the agent's /etc/init.d script (a sketch of the kind of change is below, after this list).
1) We get a lot of statistics being dropped due to time drift. The time drift seems to get progressively (systematically?) worse as the test continues, with more and more of the test data being dropped.
2) Despite abort being specified at the command line, the test continues to execute for a long time (sometimes as long as 10 minutes for a test that should take no longer than 2 minutes) before completing.
3) Many (most) of the statistics we get back from a test scheduled to run for 900 seconds only cover a short window of 10-30 seconds.
4) The test runner doesn't so much complete as throw an error. We have had some luck altering the local JVM settings, but that doesn't solve all the issues.
5) The command-line runner is told to generate a summary, but most of the data in it is empty.
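For reference, the change mentioned in 0) was roughly along these lines. This is only a sketch of what we set: the JAVA_OPTS variable name and the exact values are illustrative, not what ships in the stock agent init script.

    # Illustrative only: raise the heap ceiling and capture a dump if the
    # Hibernate OutOfMemoryError happens again (variable name and values assumed)
    JAVA_OPTS="$JAVA_OPTS -Xms512m -Xmx2048m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/loadui"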
What am I doing wrong? The tests definitely generate load. We can see this from our monitoring and performance systems. We just don't get meaningful results locally.
On a related note, what I would *really* like is to generate CSV files for the detailed stats, not just the summary. Then I could dig into the data up to my elbows more easily than through the Statistics Workbench.
Thanks for the help,
Jeff