I would like LoadComplete to support integration with version control systems (such as SVN; other systems would be welcome as well), just like TestComplete supports SVN.
Please vote if you would love to see this implemented as well.
The scenario contains x pages. Each page contains y requests.
E.g. I need to add a custom header to the first request on each page.
The idea is to make an option to edit specific requests in a certain way.
Something like this:
In the image above, I want to select Requests 189, 191, and 192 and then update them, adding a specific header (Parameter / Value).
I would like to be able to select/include a company logo picture (including Browse function) in the PDF test log.
This image would be printed in the header of each page (its size could be limited - this should be checked by LoadComplete).
My load test framework contains several loops. Currently, LoadComplete does not take the progress within each loop into account when displaying the progress indicator. I would like to see some form of progress calculation within the currently active loop, so I can see how far along the current loop is.
I would like LoadComplete's Loop function to be improved so that its value can be randomized (determined at runtime) between a set min and max value.
The reason is that I want my simulated user(s) to perform recurring tasks (that's why I use the Loop function) under different loads (for example: simulated user 1 performs the tasks in the loop 4 times, simulated user 2 performs the same tasks 8 times, simulated user 3 14 times, etc.).
I envision LoadComplete being enhanced so that min and max values can be set, along with a checkbox that enables/disables calculating random values at runtime, so that the tests (the loop) are executed with these pre-test-run calculated loop values.
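To illustrate the desired behaviour, here is a minimal Python sketch (not LoadComplete code; all names are hypothetical) of drawing a per-user loop count between a min and max before the test run starts:

```python
import random

def pick_loop_count(min_loops, max_loops):
    """Draw a loop count for one simulated user, inclusive of both bounds."""
    return random.randint(min_loops, max_loops)

# Each simulated user gets its own loop count, decided before the run starts.
loop_counts = {user_id: pick_loop_count(4, 14) for user_id in range(1, 6)}
for user_id, count in loop_counts.items():
    print(f"Simulated user {user_id} runs the loop {count} times")
```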
Currently it is not possible to combine random Page times with fixed think times.
For example, I have following scenario (see attached screenshot):
I have 2 Pages: I want the first page to have a fixed think time of 6000 ms, and the second page to have a random think time between 10 and 500 ms.
Note: an option to switch between fixed/random times could also be added to the Test Editor:
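As an illustration only (hypothetical names, not LoadComplete's actual API), a per-page setting could behave like this:

```python
import random

# Hypothetical per-page think-time configuration: either a fixed value
# or a (min, max) range that is resolved at runtime.
page_think_times = {
    "Page 1": {"mode": "fixed", "ms": 6000},
    "Page 2": {"mode": "random", "min_ms": 10, "max_ms": 500},
}

def think_time_ms(page):
    """Resolve the think time for a page, drawing a random value if configured."""
    cfg = page_think_times[page]
    if cfg["mode"] == "fixed":
        return cfg["ms"]
    return random.randint(cfg["min_ms"], cfg["max_ms"])

print(think_time_ms("Page 1"))  # always 6000
print(think_time_ms("Page 2"))  # somewhere between 10 and 500
```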
Currently, in the Test Editor, you can only choose from a few pre-defined Connection Speed settings. Better simulation of connection speeds would be possible if you could instead specify explicit upload/download bandwidth settings (in Kb/s, MB/s, etc.), for example:
Download speed: 40 MB/s
Upload speed: 10 MB/s
In the Details section of the test log, you can export to CSV and XML. However, these functions do not export the Duration, Think Time, and Response Time columns and values.
Could LoadComplete please be upgraded so that these values are included in both the CSV and XML exports?
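For reference, a small Python sketch (with made-up request names and timings) of the CSV layout I would expect the export to produce, including the currently missing columns:

```python
import csv
import io

# Hypothetical rows; the point is the Duration / Think Time / Response Time columns.
rows = [
    {"Request": "GET /login", "Duration (ms)": 310,
     "Think Time (ms)": 6000, "Response Time (ms)": 250},
    {"Request": "POST /search", "Duration (ms)": 820,
     "Think Time (ms)": 120, "Response Time (ms)": 640},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```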
Having created multiple test items, I noticed that the test conditions (load profile/continuous load settings/think time settings/QoS, etc.) are not set at the level of specific test items, but at the global level (for all test items).
Is this logical? I can easily imagine situations where you would want different load profiles for different test items.
See attached screenshot.
I have created 2 test items, but I can only configure the settings at the general level...
I want LC to be improved so that I can set different load scenarios for different test items. The same goes for Think Time, as illustrated in the screenshot.
I know it's possible to create different tests with different load scenarios, but I am specifically referring to running tests in parallel.
I run the test on 4 connected stations. (using 4 instances of LC's Remote Agent).
In the report, I can't make any distinction in response speed/transfer speed/TTFB/TTLB per machine. I am not referring to server metrics like CPU, but to connection info and graphs that can distinguish between the tested machines (Remote Agents).
For example, I want to see the difference in average Page Load time between machine 1 and machine 2.
Another example: I want to compare the average TTFB of machine 1 with the average TTFB of machine 3.
Currently, when you disable the option to launch a specific browser, you cannot select the option "record traffic from this browser only" (it's only enabled if you enable the launch option).
Consider this scenario: I don't want LC to start a browser, but I do want to record traffic from Chrome only.
In the current setup, this is not possible, so I want LC to be updated to support this scenario.
I am missing a feature to easily enable/disable created lines in validation / data extractor modules.
A feature that easily enables/disables them (using checkboxes) would make LC easier to work with.
See attached screenshot for illustration.
For the application we need to test to work properly, we need to start Chrome with the startup option "--allow-running-insecure-content".
Right now, this scenario is not supported in LoadComplete, since you can only select caching mode:
Currently, there is no option to set Chrome startup flags, so if you enable the option "Launch Web browser", Chrome is started by LoadComplete and there is no way to control Chrome's (or any other browser's) startup behaviour.
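By way of illustration (this is not LoadComplete code, and the Chrome path is an assumption), launching Chrome with a custom startup flag is just a matter of appending it to the command line:

```python
import subprocess

# Typical Chrome install path on Windows; adjust for your machine (assumption).
CHROME = r"C:\Program Files\Google\Chrome\Application\chrome.exe"

def chrome_command(url, extra_flags=()):
    """Build the command line a launcher could use when starting Chrome."""
    return [CHROME, *extra_flags, url]

cmd = chrome_command("http://test-server.local/",
                     extra_flags=["--allow-running-insecure-content"])
print(cmd)
# To actually start the browser:
# subprocess.Popen(cmd)
```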
Currently, this window is fixed in size, meaning you have to scroll left and right to view/edit the found parameters.
It would be easier if this popup window could be maximized / made bigger.
Currently, you can only set QoS values (page load time, TTFB) at the test level.
However I need to be able to set QoS on specific pages (or even better) specific requests/responses.
So in my opinion LC could be improved by implementing QoS values on requests/responses. See attached screenshot.
Currently, it's not possible to quickly enable/disable specific entries you have specified in the filter sections. Please include enable/disable checkboxes (at line level), so that enabling/disabling can be done more easily.
See attached screenshot.
Currently, there is no easy way to enable/disable a test item in the Test Editor.
The only way to disable an item is to delete it, and the only way to enable it again is to add it back.
See attached screenshot. I'd suggest adding an Enabled column with a checkbox for each test item in the list, so that test items can be quickly enabled/disabled.
Currently (version 4.6), when you rename an already created scenario, the new name is not automatically inherited, which results in errors simply because you renamed a scenario.
Please support renaming scenarios the way it should work.