Re: Difference in reported test duration and time on graphs

Mike, thank you for your response. That helps to clarify the differences we were seeing in the times.

Ron L

Difference in reported test duration and time on graphs

I am running LoadComplete 2.10, and I have noticed a difference between the test duration reported on a test report's Summary tab and the maximum time on the X axes of the graphs on the Infrastructure and Response Time tabs. In addition, the difference grows as I increase the number of virtual users in my test case. As an example, when I run a test with 40 virtual users, the test duration is reported as 8m 2.81s, but the maximum time on the X axes of the graphs is 4m 9s. Can anyone explain the reason for this difference?

TIA,
Ron L

Re: Test script cannot find Alert boxes

Robert, we believe we have found the solution. The issue appears to have been twofold: IE8 and IE9 open the alert window in a new process, so our alias to iexplore didn't find the proper process, and the caption of the window is different in IE6 than it is in IE8/9. The following code appears to resolve both problems:

    var caption;
    if (getBrowserVersion() == 6)
    {
      caption = "Microsoft Internet Explorer";
    }
    else if (getBrowserVersion() == 8 || getBrowserVersion() == 9)
    {
      // IE8 and IE9 use the same caption
      caption = "Message from webpage";
    }
    var dlg = Sys.Find(["WndClass", "WndCaption"], ["#32770", caption], 10);
    Delay(500);
    var btn = dlg.Find(["WndClass", "WndCaption"], ["Button", "OK"], 5);
    btn.Click();

where getBrowserVersion is defined as:

    // Determines the major version of the browser so that the correct
    // dialog caption can be used when looking for an alert box.
    // Valid return values are 6, 8 or 9; 0 is returned for an unidentified version.
    function getBrowserVersion()
    {
      var verstring = aqString.SubString(Aliases.iexplore.FileVersionInfo, 0, 1);
      if (verstring == "9")
      {
        return 9;
      }
      else if (verstring == "8")
      {
        return 8;
      }
      else if (verstring == "6")
      {
        return 6;
      }
      else
      {
        return 0;
      }
    }

Thanks for your help,
Ron L

Re: Test script cannot find Alert boxes

Robert, we tried the same script on a system with IE8 this morning, and IE8 exhibits the same problem. Does this suggest any other possibilities to you? I am hesitant to change registry settings, since we plan on running the tests with TestExecute at a later date and I don't want to have to keep track of configuration issues there.

Thank you,
Ron L

Re: Test script cannot find Alert boxes

Robert, I tried your suggestion, but it didn't fare any better. I have confirmed (multiple times) that there is only one instance of the dialog at the time of testing. A coworker looked at it, and when we changed it to the following code it works with IE6 on XP but still does NOT work with IE9 on Windows 7:

    Delay(500);
    //var confirmDlg = Aliases.iexplore.WaitAliasChild("dlgMessageFromWebPage", 60000);
    var confirmDlg = iexplore.Find(["WndCaption", "WndClass"], ["OK", "Button"], 10);
    if (confirmDlg.Exists)
    {
      confirmDlg.Click();
    }
    else
    {
      Log.Error("Could not find the Confirmation dialog");
    }

So it appears that this is being introduced by either IE9 or Windows 7. Do you have any similarly configured systems you can test against?

Thank you,
Ron L

Re: Test script cannot find Alert boxes

Robert, sorry, I didn't understand what you were asking for before. I have attached a screenshot of the Object Spy dialog for the dialog in question.

Thank you,
Ron L

Re: Test script cannot find Alert boxes

Robert, thank you for the response.
I changed my code to the following:

    var confirmDlg = Aliases.iexplore.Find(["WndClass", "WndCaption"], ["#32770", "Message from webpage"], 10);
    confirmDlg.Wait();

and the confirmDlg.Wait(); line still causes an error stating "The object does not exist...". Do you have any other suggestions?

Thank you,
Ron L

Test script cannot find Alert boxes

We are working on writing TestComplete scripts to test a large web application which is under ongoing development. In the app, there are a number of areas which pop up an alert box to, for example, confirm that the user really wants to delete an entry. When I record a script, TestComplete records something like:

    iexplore = Aliases.iexplore;
    iexplore.dlgMessageFromWebpage.btnOK.ClickButton();

However, when I run the recorded script, it reports an error stating "ambiguous recognition of tested object". If I use the Find method as follows:

    var confirmDlg = Aliases.iexplore.Find(["MappedName"], ["*iexplore.dlgMessageFromWebpage"], 10);
    confirmDlg.Wait();
    var okButton = confirmDlg.Find(["MappedName"], ["*dlgMessageFromWebpage.btnOK"], 10);
    okButton.Click();

the confirmDlg.Wait() line causes an "object does not exist" error. I have also tried finding the object by passing "Aliases.iexplore.dlgMessageFromWebpage" as the MappedName. Can anyone tell me the proper way to reference this dialog so that I can access the OK button on it?

TIA,
Ron L

Re: Project Suite Sizing

Robert, thank you for your response; I will take a look at the book you suggested. We are planning on using data-driven testing; our concern is simply that, with the large number of pages in our application, we will end up with a large number of tests even if we use shared libraries. Are there guidelines for how many projects may be included in a suite, and how many tests may be included in a project, without significantly impacting performance?

Thank you,
Ron L

Project Suite Sizing

We are in the process of setting up TestComplete tests for a large web application. The application consists of 60 WAR files with 700+ JSPs, and we anticipate that there will eventually be between 700 and 1000 test scripts for the application. We are looking for guidance on how to set up the TestComplete environment for this test set. Should we have a single project suite with separate projects for each WAR, or separate suites for each WAR with projects created within the suites as necessary? We anticipate that a couple of the WARs will have scripts that are used by many other tests (e.g. the tests to log in and log out will be needed by all other tests). In particular, we are concerned about the performance of running a large number of tests, since we anticipate running them as part of a nightly automated build. We are using version 8.0 of TestComplete. Any input would be appreciated.

Thanks in advance,
Ron L
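
To make the shared login/logout idea concrete, here is a minimal sketch of how a common script unit, added to each project that needs it, could be referenced through TestComplete's USEUNIT directive. All of the names in it (CommonLogin, LoginPage, txtUser, txtPassword, btnLogin, MainPage, lnkLogout, the test routine and the credentials) are hypothetical placeholders for illustration, not the application's actual name-mapped objects or code.

    // CommonLogin unit (hypothetical): a single script unit that holds the
    // login/logout steps so other tests can reuse them instead of
    // recording their own copies.

    function loginAs(userName, password)
    {
      // LoginPage, txtUser, txtPassword and btnLogin are assumed aliases;
      // substitute the real name-mapped objects for the application.
      var page = Aliases.iexplore.LoginPage;
      page.txtUser.SetText(userName);
      page.txtPassword.SetText(password);
      page.btnLogin.Click();
    }

    function logout()
    {
      // MainPage and lnkLogout are likewise assumed aliases.
      Aliases.iexplore.MainPage.lnkLogout.Click();
    }

A test in another unit could then reference the shared unit and call its routines directly:

    //USEUNIT CommonLogin

    function testDeleteEntry()
    {
      loginAs("testuser", "password");   // placeholder credentials
      // ... exercise the page under test here ...
      logout();
    }

Keeping routines like these in one place would also mean that a change to the login page only has to be reflected in a single unit, however many projects or suites end up referencing it.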