Currently, in the Test Editor, you can only choose from a few predefined Connection Speed settings. Connection speeds could be simulated more accurately if this were extended to allow specifying exact upload/download bandwidth values (in Kb/s, MB/s, etc.), for example:
Download speed: 40 MB/s
Upload speed: 10 MB/s
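Since such a setting mixes bit-based and byte-based units, here is a small sketch (Python, purely illustrative, not LoadComplete functionality) of how the units would relate:

```python
# Illustrative bandwidth unit conversions (not part of LoadComplete).
# Note the bit (b) vs. byte (B) distinction: 1 byte = 8 bits.

def to_bits_per_second(value, unit):
    """Convert a bandwidth value to bits per second."""
    factors = {
        "Kb/s": 1_000,          # kilobits per second
        "Mb/s": 1_000_000,      # megabits per second
        "KB/s": 8_000,          # kilobytes per second
        "MB/s": 8_000_000,      # megabytes per second
    }
    return value * factors[unit]

# The example settings above:
download = to_bits_per_second(40, "MB/s")  # 320,000,000 bits/s
upload = to_bits_per_second(10, "MB/s")    # 80,000,000 bits/s
```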
In the Details section of the test log, you can export to CSV and XML. However, these functions do not export the Duration, ThinkTime, and Response Time columns and their values.
Could LoadComplete please be upgraded so that these values are included in the export (for both CSV and XML exports)?
Having created multiple test items, I noticed that the test conditions (load profile, continuous load settings, think time settings, QoS, etc.) are not set at the level of specific test items, but at the global level (for all test items).
Is this logical? I can easily think of cases where you would want different load profiles for different test items.
See attached screenshot.
I have created 2 test items, but I can only configure the settings at the general level.
I would like LC to be improved so that I can set different load scenarios for different test items. The same applies to Think Time, as illustrated in the screenshot.
I know it's possible to create different tests with different load scenarios, but I am specifically referring to running test items in parallel.
I run the test on 4 connected stations (using 4 instances of LC's Remote Agent).
In the report, I can't distinguish response speed, transfer speed, TTFB, or TTLB per machine. I am not referring to server metrics like CPU, but to connection info and graphs that can distinguish between the tested machines (Remote Agents).
For example, I want to see the difference in average page load time between machine 1 and machine 2.
Another example: I want to compare the average TTFB of machine 1 with the average TTFB of machine 3.
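To illustrate the kind of per-agent breakdown I mean, here is a rough sketch (Python, with hypothetical sample data; the agent names and metric fields are assumptions, not a LoadComplete API):

```python
from collections import defaultdict

# Hypothetical per-request measurements tagged with the Remote Agent
# that produced them (illustrative data, not real LC output).
samples = [
    {"agent": "machine1", "page_load_ms": 820,  "ttfb_ms": 110},
    {"agent": "machine1", "page_load_ms": 780,  "ttfb_ms": 130},
    {"agent": "machine2", "page_load_ms": 1450, "ttfb_ms": 240},
    {"agent": "machine3", "page_load_ms": 990,  "ttfb_ms": 150},
]

def average_by_agent(samples, metric):
    """Group samples per Remote Agent and average the given metric."""
    grouped = defaultdict(list)
    for s in samples:
        grouped[s["agent"]].append(s[metric])
    return {agent: sum(v) / len(v) for agent, v in grouped.items()}

# Compare average page load time of machine 1 vs. machine 2:
avg_load = average_by_agent(samples, "page_load_ms")
# Compare average TTFB of machine 1 vs. machine 3:
avg_ttfb = average_by_agent(samples, "ttfb_ms")
```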
Currently, when you disable the option to launch a specific browser, you cannot select the option "Record traffic from this browser only" (it is only enabled if you enable the launch option).
Consider this scenario: I don't want LC to start a browser, but I only want to record data from Chrome.
With the current setup, this is not possible, so I would like LC to be updated to support this scenario.
For the application we need to test to work correctly, we need to start Chrome with the startup flag "--allow-running-insecure-content".
Right now, this scenario is not supported in LoadComplete, since you can only select the caching mode:
Currently, there is no option to set Chrome startup flags, so if you enable the option "Launch Web browser", Chrome is started by LoadComplete and there is no way to control the startup behaviour of Chrome (or any other browser).
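For reference, outside of LoadComplete the flag is simply passed on Chrome's command line. A minimal sketch of building such a launch command (the Chrome install path and target URL are assumptions for a typical Windows setup):

```python
import subprocess  # only needed if you actually launch the browser

# Assumed Chrome install path on Windows; adjust for your machine.
CHROME = r"C:\Program Files\Google\Chrome\Application\chrome.exe"

# The startup flag the application under test requires,
# plus a hypothetical target URL.
cmd = [CHROME, "--allow-running-insecure-content", "https://example.com"]

# subprocess.Popen(cmd)  # uncomment to actually launch Chrome
```

The feature request is essentially for LoadComplete to expose a field where such extra command-line switches can be appended when it launches the browser itself.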
I am missing a feature to easily enable/disable the lines created in Validations / Data Extractor modules.
A feature that enables/disables them easily (using checkboxes) would make LC easier to work with.
See attached screenshot for illustration.
Currently, you can only set QoS values (page load time, TTFB) for entire tests.
However, I need to be able to set QoS on specific pages or, even better, on specific requests/responses.
So, in my opinion, LC could be improved by implementing QoS values on requests/responses. See attached screenshot.
Currently, it's not possible to quickly enable/disable specific entries you have specified in the filter sections. Please add enable/disable checkboxes (at the line level), so that enabling/disabling can be done more easily.
See attached screenshot.
Currently, there is no easy way to enable/disable a test item in the Test Editor.
The only way to disable an item is to delete it, and to enable it again you have to re-add it.
See attached screenshot. I'd suggest adding an Enabled column with a checkbox for each test item in the list, so that test items can be enabled/disabled quickly.
Currently, this window has a fixed size, meaning you have to scroll left and right to view/edit the found parameters.
It would be easier if this popup window could be maximized or resized.
Currently (version 4.6), when you rename an already created scenario, the new name is not automatically propagated,
resulting in errors just because you renamed a scenario.
Please make scenario renaming work as expected.
This feature request is based on the following:
I need to implement a validation based on the value of ContentLength. The test should fail if ContentLength equals 0.
I have read the information about validation here, but it only works for validating strings.
I want a validation based on a numeric value in a response. Of course, the ContentLength value changes with every request (it's not a fixed value). All that matters is that the validation passes if the ContentLength value is > 0, and fails in all other cases.
There is currently no easy way to implement such a validation. I would like LoadComplete to be improved with such an easy validation. See attached screenshot for clarification.
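The desired check is simple to express in code; here is a sketch (Python, illustrative only, not something LoadComplete currently offers) of the behaviour I am asking for:

```python
def validate_content_length(response_headers):
    """Desired validation: pass only when Content-Length is numeric and > 0."""
    raw = response_headers.get("Content-Length")
    if raw is None:
        return False           # header missing -> fail
    try:
        length = int(raw)
    except ValueError:
        return False           # not a number -> fail
    return length > 0          # pass only for positive values

# The actual value differs per request; only the numeric comparison matters:
validate_content_length({"Content-Length": "5120"})  # passes
validate_content_length({"Content-Length": "0"})     # fails
```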
I am unable to record a scenario for media streaming. Please assist me.
For more information, download the video: https://www.dropbox.com/s/ca3zv5wq63n2ujv/4-18-2017%203-40-26%20PM.avi?dl=0
As suggested by Alexey Kryuchkov, I am submitting a feature request to this Community.
Consider this use case as a sample: login - get one page with an items list - order one item - purchase the ordered item - logout. The root idea is that, for reusability and maintenance reasons, it is better not to record and execute many similar complex end-to-end scenarios, but several smaller, separate, stand-alone ones, and combine them using the Call Scenario operation.
It is expected that, using this approach, it will be possible to create, for example, these additional scenarios:
-- login - logout;
-- login - get random number of pages with items list - logout;
-- login - order random number of items - purchase items - logout.
From the reusability point of view, it is expected that if, for example, the login scenario changes, only this simple scenario has to be re-recorded and correlated anew, not all the affected large and complex scenarios. It should be noted here that while it is possible to refresh the correlation rules for an already existing scenario, this does not apply to scenarios launched via the Call Scenario operation.
The above considerations are probably the major cornerstone of my feature request, so please let me know if my initial idea is wrong, not recommended, or not adopted by real practitioners for some reason.
If the above idea is acceptable, it is obvious that data correlation will be required when combining separate scenarios to work together. I believe this can be done more or less easily with the existing functionality of LoadComplete when a complete scenario is recorded initially and split later into several smaller stand-alone ones.
Now the problem:
Assume that, in the course of the project, the page that displays the items list (the Items List page) is modified and requires a different set of requests to be displayed. This means that the scenario that obtains this page must be re-recorded and correlated with the existing scenarios.
In turn, this means that some of the Data Selectors and Data Replacers that existed in the initial version of the Items List page must be recreated for the updated version of the page.
To make it easier and more convenient to find the Data Selectors and Data Replacers in the initial version of the page, create them (only the needed ones, possibly with some modifications) in the new version of the page, and apply them in the proper places, some functionality for side-by-side comparison of the requests belonging to two different pages, with the possibility to modify the requests of the target page, would be useful and helpful.
Could you please consider this?
Thank you for your consideration.
P.S. This feature request is based on the Case #00222701 and might be related to the https://community.smartbear.com/t5/LoadComplete/Transform-functional-TestComplete-test-into-Loadtest... thread.
This would really be useful for any quick replacement in the script.
I realized the need for this feature when I had to re-record test scripts for my previous release.
The requests for my product contain a value in the path that changes with each release. I made a one-time development investment by replacing every constant value in my requests with a variable, but later, when a fix came out, the requests in the new release had changed.
So I had to re-record all my requests in order to incorporate the request changes.
A search-and-replace functionality would be a great add-on to this very nice tool!
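What I have in mind could be sketched as follows (Python, with hypothetical request paths and a hypothetical `@ReleaseVersion` variable placeholder; this is not an existing LoadComplete feature):

```python
import re

# Hypothetical recorded request paths containing a release-specific value.
requests = [
    "/app/v2.3.1/login",
    "/app/v2.3.1/items/list",
    "/app/v2.3.1/purchase",
]

def replace_in_requests(requests, pattern, replacement):
    """Apply one search-and-replace across all recorded requests."""
    return [re.sub(pattern, replacement, r) for r in requests]

# Swap the hard-coded release value for a variable placeholder in one pass,
# instead of re-recording and re-editing every request by hand:
updated = replace_in_requests(requests, r"v2\.3\.1", "@ReleaseVersion")
```

A single operation like this, applied across the whole recorded scenario, is exactly what I keep wishing for at release time.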