Status: Community Feedback Requested. Submitted by SagarShroff on 04-17-2016 08:29 AM.
Hi, this would really be useful for any quick replacement in a script. I realized the need for this feature when I had to re-record test scripts in my previous release. My product's requests contain a value in the path that changes with each release. I also made a one-time development investment and replaced all the constant values in my requests with variables. But later, when a fix arrived, the requests in the new release had changed, so I had to re-record all my requests in order to incorporate the changes. A search-and-replace functionality would be a great add-on to this very nice tool!
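A minimal sketch of what such a built-in search-and-replace would do: swap a release-specific constant for a variable placeholder across recorded requests. The request strings and the `%ReleaseVersion%` placeholder syntax below are illustrative assumptions, not LoadComplete's actual scenario format.

```python
# Illustrative recorded requests containing a release-specific path constant.
recorded_requests = [
    "GET /app/v2.3.1/login",
    "POST /app/v2.3.1/cart/add",
]

# One search-and-replace pass substitutes the constant with a variable
# placeholder (%ReleaseVersion% is a hypothetical syntax).
updated = [r.replace("v2.3.1", "%ReleaseVersion%") for r in recorded_requests]
print(updated)
```

With this in place, only the variable's value needs updating per release, instead of re-recording every request.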
Currently you can only set one Offset value for the complete test. This is not sufficient, because I need different Offset values for different Virtual User Groups to be able to simulate slow- and fast-working users (our application uses a timestamp in the user's response body to calculate how fast or slow a user is working with the application). Somehow it needs to be possible to link Offset values to Virtual User Groups, to simulate "fast" and "slow working" users.
Status: New Idea. Submitted by jose_pjoseph on 09-26-2017 08:30 AM.
I would like an option to exclude a particular page from the Top 10 > Slow Pages (Average) section.

Use case: I have a test with 10 virtual users, each signing into an app and doing different things. The problem I am facing is that the sign-in page (or the landing page after the user signs in) is one of the slow pages, so after a load test, 6-8 entries in the Slow Pages (Average) list are filled with sign-in page data from different users, and the Top 10 list looks like this:

Scenario1: Page 0003 (Action XXXX)
Scenario3: Page 0002 (Sign in)
Scenario2: Page 0002 (Sign in)
Scenario5: Page 0002 (Sign in)
Scenario4: Page 0002 (Sign in)
Scenario9: Page 0002 (Sign in)
Scenario6: Page 0002 (Sign in)
Scenario8: Page 0008 (Action XXXX)
Scenario10: Page 0002 (Sign in)
Scenario7: Page 0002 (Sign in)

Because of this, I think I am missing information about other slow pages. I would like to either show the "Sign in" page only once in the list (the instance that took the longest), or, after seeing the above, be able to exclude "Sign in" from the Top 10 list in subsequent test iterations.

Also from @AlexKaras: "I think that it must be considered how to clearly indicate that the given report lists not Top 10 Slow Pages, but Top 10 Slow *Filtered* Pages, so that it is clear to report viewer that there are even more slow pages than those reported but for some reason these pages (provide names?) were filtered out."
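The "show it only once" variant can be sketched as a small dedupe pass: group entries by page name, keep the slowest instance of each, then take the top N. The page labels and timings below are illustrative sample data, not real report output.

```python
# Sample slow-pages entries: (label, average load time in seconds).
slow_pages = [
    ("Scenario1: Page 0003 (Action XXXX)", 4.1),
    ("Scenario3: Page 0002 (Sign in)", 6.2),
    ("Scenario2: Page 0002 (Sign in)", 5.9),
    ("Scenario8: Page 0008 (Action XXXX)", 3.0),
]

def page_key(label: str) -> str:
    # Group by page name, ignoring which virtual user / scenario hit it.
    return label.split(": ", 1)[1]

# Keep only the slowest instance per distinct page.
slowest = {}
for label, seconds in slow_pages:
    key = page_key(label)
    if key not in slowest or seconds > slowest[key][1]:
        slowest[key] = (label, seconds)

# Sort the deduplicated entries, slowest first.
top = sorted(slowest.values(), key=lambda entry: -entry[1])
print(top)
```

With duplicates collapsed, the freed Top 10 slots surface pages that the repeated sign-in entries were hiding.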
Status: Accepted for Discussion. Submitted by mgroen2 on 09-04-2017 02:00 AM.
I would like LoadComplete to support integration with version control systems (SVN, for example; other systems would be welcome as well), just like TestComplete supports SVN. Please vote if you would love to see this implemented as well.
Status: Community Feedback Requested. Submitted by mgroen2 on 08-02-2017 05:58 AM.
Currently, in the Test Editor, you can only select from a few pre-defined Connection Speed settings. Connection speeds could be simulated more accurately if this were updated to allow specifying exact upload/download bandwidth settings (in Kb/s, MB/s, etc.), e.g. a download speed of 40 MB/s and an upload speed of 10 MB/s.
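Why arbitrary values matter: idealized transfer time scales directly with bandwidth, so a handful of presets can be far from the link being simulated. A minimal sketch of the arithmetic, using the example speeds from the request (the numbers are illustrative, not LoadComplete internals):

```python
MB = 1_000_000  # bytes

def transfer_time_seconds(payload_bytes: int, bandwidth_bytes_per_s: float) -> float:
    """Idealized time to move a payload over a link with the given bandwidth."""
    return payload_bytes / bandwidth_bytes_per_s

# A 5 MB page at the requested custom speeds:
download = transfer_time_seconds(5 * MB, 40 * MB)  # 40 MB/s down
upload = transfer_time_seconds(5 * MB, 10 * MB)    # 10 MB/s up
print(download, upload)
```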
Status: Accepted for Discussion. Submitted by mgroen2 on 06-14-2017 04:13 AM.
Having created multiple test items, I noticed that the test conditions (load profile, continuous load settings, think time settings, QoS, etc.) are not set at the level of specific test items but at the global level, for all test items. Is this logical? I can easily think of cases where you want different load profiles for different test items. See the attached screenshot: I created 2 test items, but I can only configure these settings at the general level. I want LC to be improved so that I can set different load scenarios for different test items. The same goes for Think Time, as illustrated in the screenshot. I know it is possible to create different tests with different load scenarios, but I am specifically pointing at running test items in parallel.
Status: Accepted for Discussion. Submitted by mgroen2 on 06-13-2017 08:21 AM.
I run the test on 4 connected stations (using 4 instances of LC's Remote Agent). In the report I can't make any distinction in response speed, transfer speed, TTFB, or TTLB per machine. I am not referring to server metrics like CPU, but to connection info and graphs that can distinguish between the test machines (Remote Agents). For example, I want to see the difference in average page load time between machine 1 and machine 2. Another example: I want to compare the average TTFB of machine 1 with the average TTFB of machine 3, etc.
Status: Accepted for Discussion. Submitted by mgroen2 on 06-13-2017 05:27 AM.
Currently, when you disable the option to launch a specific browser, you cannot select the option "Record traffic from this browser only" (it is only enabled if you enable the launch option). Consider this scenario: I don't want LC to start a browser, but I only want to record traffic from Chrome. In the current setup, this is not possible. So I want LC to be updated to support this scenario.
Status: Accepted for Discussion. Submitted by mgroen2 on 06-02-2017 05:29 AM.
I am missing a feature to easily enable/disable individual lines in validations / data extractor modules. A feature that enables/disables them easily (using checkboxes) would make LC easier to work with. See the attached screenshot for an illustration.
Currently, this window is fixed in size, meaning you have to scroll left and right to view/edit the found parameters. It would be easier if this popup window could be maximized or resized.
Status: Implemented. Submitted by RyanHeidorn on 03-13-2015 01:09 PM.
In the Compare Results interface, it would be nice if users could easily specify which report will be Report A and which will be Report B. As it stands, whichever report is selected first becomes Report A, and it's confusing to figure out how to rearrange test results.
When simulating 2 virtual user groups in parallel, LoadComplete executes them segment by segment, sequentially. It would be better to execute them simultaneously in order to analyze the performance. Thanks
Currently, when correlating data, it is only possible to replace the whole value of the parameter being correlated with the variable value, though it is quite a common case that only partial data replacement is needed.
At the moment, this can be solved with the help of the Set Variable Value operation (often with a non-trivial expression).
But this workaround:
a) cannot be used in scenarios that use the Call Scenario operation; and
b) clogs up the scenario logic (especially if more than one variable must be set).
It would be great if it were possible to use the variable itself within the data correlation expression, and I believe this would not decrease the performance of scenario simulation.
Please see the https://community.smartbear.com/t5/LoadComplete/how-to-pass-variable-directly-in-the-Value-from-the-Request-body/td-p/174481 thread for an example that illustrates this request. It would be handy if the approach the author of that thread tried initially were possible.
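A sketch of what partial replacement means in practice: substitute only the dynamic token inside a larger recorded value, leaving the static parts intact. The recorded path, the pattern, and the `%SessionId%` placeholder syntax are all hypothetical, not LoadComplete's actual correlation syntax.

```python
import re

# A recorded value where only the middle segment is session-specific.
recorded_value = "/api/v2/session/abc123/cart"

# Whole-value replacement (what is possible today) would swap the entire
# string. Partial replacement keeps the static prefix and suffix and only
# substitutes the dynamic token with a variable placeholder.
partial = re.sub(r"/session/[^/]+/", "/session/%SessionId%/", recorded_value)
print(partial)
```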
My company runs LoadComplete daily across all of our sites to check response times. Since we have over 30 websites, it's easy to see why we would like to automate this task. Via the command line, we want to run a test and export the results to a web page (HTML), but the only reporting options available from the command line are PDF, MHT, or XML. Although MHT is close to what we want, most modern browsers can't render it without some kind of plugin, so HTML would be the best option. I know LoadComplete can export reports to HTML from within the application, so I would assume it wouldn't be too hard to add this to the command line options. If you could do this for us, that would be great.
Thank you.
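Until the option exists, one interim workaround is to post-process the XML report (one of the formats the command line can already emit) into a simple HTML page. The element and attribute names below (`Report`, `Page`, `name`, `avgTime`) are hypothetical and would need to be adapted to the actual report schema.

```python
import xml.etree.ElementTree as ET

def xml_report_to_html(xml_text: str) -> str:
    """Render a (hypothetical) XML report as a minimal HTML table."""
    root = ET.fromstring(xml_text)
    rows = "".join(
        f"<tr><td>{page.get('name')}</td><td>{page.get('avgTime')}</td></tr>"
        for page in root.iter("Page")  # hypothetical element name
    )
    return f"<html><body><table>{rows}</table></body></html>"

# Illustrative sample input, not real LoadComplete output:
sample = '<Report><Page name="Home" avgTime="1.2"/></Report>'
print(xml_report_to_html(sample))
```

A script like this can run right after the command-line test in the same scheduled job.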
In the case where the response body contains a list of items, allow selection of multiple occurrences of the ID or unique item for each row in the response body. Randomly select one of the results and return it in a variable to be used in a later request.
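The requested behavior can be sketched as "extract all matches, then pick one at random". The response body and the regular expression below are illustrative assumptions only.

```python
import random
import re

# Illustrative response body containing a list of items.
body = '{"items": [{"id": 101}, {"id": 205}, {"id": 307}]}'

# Capture every occurrence of the ID, not just the first match.
ids = re.findall(r'"id":\s*(\d+)', body)

# Pick one at random; this is the value to store in a variable
# for use in a later request.
chosen = random.choice(ids)
print(ids, chosen)
```

Randomizing across the returned items spreads virtual users over different records instead of having every user hit the same one.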
This feature request was submitted on behalf of Ramyam Intelligence Lab Pvt. Ltd.
Report the overall number of requests that were processed by the application server per second.
Customer's comment: this will help us tune the application server's parameters.
Thanks,
Yimy
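A requests-per-second figure can be derived from per-request completion timestamps by bucketing them into one-second windows. The timestamps below (seconds since test start) are made-up sample data.

```python
from collections import Counter

# Illustrative per-request completion times, in seconds since test start.
timestamps = [0.1, 0.4, 0.9, 1.2, 1.3, 2.7, 2.8, 2.9]

# Bucket requests into 1-second windows to get per-second throughput.
per_second = Counter(int(t) for t in timestamps)

peak_rps = max(per_second.values())
avg_rps = len(timestamps) / (max(timestamps) - min(timestamps))
print(dict(per_second), peak_rps)
```

Both the per-second series (for a graph) and the peak/average summary would be useful for tuning server parameters.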
Use case: my load tests use lots of variables, and these variables often influence the test circumstances, so I want the variables used to be mentioned in the Load Test Report. A good way, I think, would be to implement a new tab named Variables that lists the used variables and their values. See below for an example. Note: also make this info available in the exports (PDF, HTML).
Right now you can only set Offset values in hours (or fractions of an hour). For example, for a 1-minute offset you have to enter 1/60 (since the value 1 represents 1 hour); for more fine-grained offsets you have to work out even smaller fractions. Also, you cannot set offset values for years/months/days. Please improve this by implementing offset values for years, months, days, etc. in separate input fields.
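The awkwardness is easy to see in the conversion itself: every human-friendly offset has to be turned into a single fractional-hour number by hand. A small sketch of that arithmetic (the function is illustrative, not a LoadComplete API):

```python
def offset_in_hours(minutes: int = 0, seconds: int = 0) -> float:
    """Convert a human-friendly offset into the single fractional-hour value
    that currently has to be entered by hand."""
    return minutes / 60 + seconds / 3600

print(offset_in_hours(minutes=1))   # the 1/60 value for a 1-minute offset
print(offset_in_hours(seconds=30))  # even smaller fractions for seconds
```

Separate year/month/day/hour/minute/second input fields would make this mental arithmetic unnecessary.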
On a regular basis, Amazon instances launched by LoadComplete crash, and after the test an error message is displayed. For better analysis, could the instance ID be included in the error message?