Forum Discussion

stevenerat
Occasional Contributor
13 years ago

Thoughts about LoadComplete - Trial User

Hello,



I'm a trial user evaluating LoadComplete.  Here are some observations and thoughts about my experience so far.  Please note that I have only lightly consulted the documentation at this point and have not read it end to end.




  1. Quality of Service: Max Page Load Time (ms) does not allow values greater than 100000 (100 seconds). Why shouldn't I be able to increase it beyond that value?

  2. Expected HTTP Responses: I would like to be able to globally configure HTTP Response Status Codes to mark them as OK, rather than having to go through the Scenarios, request by request, marking 302s acceptable as 200 OK.

  3. The scale of the load test graph is not configurable. I would like to see the full, compressed graph all in one view instead of having to scroll through the graph bit by bit.

  4. When viewing a scenario with NO grouping of Requests as Pages, I would like to sort Requests by extension type. The available columns are only Request and ThinkTime. I'd like a sortable Extension column so I can group and then delete undesired extensions in one swoop.

  5. I would like to have a GLOBAL option to exclude recording or testing of a list of file extensions. In other words, I'd like an option to exclude all images, all JavaScript, and/or all style sheets from the test scenario, as well as exclusion by specific file extension or pattern.

  6. After adding new parameters, if I accidentally click "Edit one more parameter" again, there is no option to just Finish or Go Back. I must either Cancel or create a new Variable, and I'm afraid that clicking Cancel will discard all the variables I previously added.

  7. When renaming a Scenario, the Run button's drop-down list does not update to reflect the new name, and the Scenario cannot be played under its old name.

  8. I would like scenarios to be more modular, so I can record a login, then a logout, and then later record actions that occur after login and plug them into the middle, between the login and logout portions. It seems I can only append to the end of a scenario, not insert into the middle.

  9. I'd like an option to watch a load test run so I can observe the generated HTML of each response rendered in a built-in browser as the test runs. I want to visually see what's happening in real time DURING the load test. Naturally, if I have 0 think time and more than 1 VU that is impractical, but for debugging 1 VU with reasonable think time, I'd like to watch playback to make sure it's behaving the way I think it is.

  10. In many cases when editing a Test/Scenario there is no UNDO or CTRL-Z. If I screw up, I have to remember how to manually undo what I just did.

  11. Our app doesn't use favicons or even embed links to them. I believe it's default browser behavior to automatically request the default favicon URI, and this is leading to a lot of 404s in my recording every time I change pages. Please give me a way to make that stop.

  12. For a while, the Play button showed as disabled (grayed out) even though I had a scenario that passed Verify Scenario. I had to relaunch LoadComplete to get the Play button to work again.

  13. (Posted in another of my threads by itself) Some Form Parameters in Requests are not showing up. I can see in the Request's Form tab that the form variable exists with a value, but the Parameter wizard does not see it. There is no way to add a Form parameter that does not yet exist; only recorded parameters can be used.

  14. I want to be able to run several performance tests repeatedly, then have LoadComplete average the results of all trial runs as a group. Then I want to modify my application, run another group of performance tests, and average those. Finally, I want to compare the two test groups and see JUST the delta between them.

  15. I record our app over its SSL port. When I play back a scenario with very few requests and only a couple of VUs, I often get a couple of "SSL Negotiation Failed" errors in the test results. Can you give me a retry option for SSL errors? I'd like it to retry the request, say 3 times, to see if it succeeds on a subsequent attempt (see the sketch below).
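
To sketch the retry behavior I have in mind for item 15 - this is just an illustration in Python with the requests library (the URL is a placeholder), not anything LoadComplete exposes today:

```python
import requests
from requests.exceptions import SSLError

def get_with_ssl_retry(url, max_attempts=3):
    """Retry a GET up to max_attempts times if SSL negotiation fails."""
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            return requests.get(url, timeout=30)
        except SSLError as err:
            # SSL handshake failed; retry instead of failing the whole run
            last_error = err
            print(f"SSL error on attempt {attempt}/{max_attempts}: {err}")
    raise last_error

# Only report an error if all three attempts fail.
response = get_with_ssl_retry("https://myapp.example.com/login")
```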


That's all (for now).


Thanks for any comments or feedback.

5 Replies

  • AlexeyKryuchkov
    SmartBear Alumni (Retired)
    Hi Steven,



    Thank you for loads of feedback! Here are our answers:



    1. This is the expected restriction. If a page is expected to take more than 100 s to load, we recommend that you avoid using the QoS criterion in tests.



    2. What do you mean by "globally"? Do you mean the scenario or project level? In any case, if LoadComplete were configured to treat 302 responses as OK, it would fail to point out real mistakes in the log. Can you share a use case for such a setting?



    3. I have registered your request in our DB as a suggestion.



    4 and 5. Could you please give us a use case for excluding such requests from scenarios?



    6. Do you think that renaming the button to "Finish and edit one more parameter" will be OK? Or, do you have some other vision?



    7. I'm not sure how to reproduce the issue. Can you give me more details along with images that will demonstrate the steps to reproduce it?



    8. To accomplish such tasks, we recommend recording these test "modules" as individual scenarios. After that, such scenarios can be executed as one complex scenario.



    9. We do not have plans to implement a built-in browser in LoadComplete, because it would increase system requirements and reduce the number of virtual users that can be simulated on one machine.



    10. I have registered this request as another suggestion in our DB.



    11. You're right - it depends on the browser. From LoadComplete's side, you can try deleting the requests or marking 404 responses as OK in the Expected HTTP Responses table.



    12. Does this problem persist in Beta 2? If it does, can you share the steps to reproduce it along with your test project? Please note that you can [url=http://smartbear.com/support/message/?prod=LoadComplete]submit[/url] an individual support case.



    13. Let's continue the discussion in the appropriate thread.



    14. Again, could you please share a use case?



    15. Please check whether the SSL-related errors persist in LoadComplete 2.50 Beta 2 - some related issues have been fixed there.
  • stevenerat
    Occasional Contributor


    Mike, I appreciate the time you spent reviewing all my observations. Some of them were my mistake for not reading all the documentation (such as on complex scenarios). Otherwise, my replies follow.



    Q: Expected HTTP Responses: I would like to be able to globally configure HTTP Response Status Codes to mark them as OK, rather than having to go through the Scenarios, request by request, marking 302s acceptable as 200 OK.

    A: What do you mean by "globally"? Do you mean the scenario or project level? In any case, if LoadComplete were configured to treat 302 responses as OK, it would fail to point out real mistakes in the log. Can you share a use case for such a setting?




    Comment: After recording relatively brief scenarios and then running them, I typically end up with 5 or 6 warnings for 302. The playback acts as expected based on viewing the user traversal via the request/response pairs, so I'm fine with treating the 302s as OK. However, as far as I know, I have to click on each warning after the first run and manually choose to treat 302 as OK. I would like a way to dismiss all 302 warnings as OK in one step.
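
    To make the idea concrete, here is a rough Python sketch of what a project-level "expected responses" table might look like - purely hypothetical, not LoadComplete's actual data model:

    ```python
    # Hypothetical project-level allowlist: one setting applied to every
    # request's result, instead of editing each 302 warning by hand.
    PROJECT_EXPECTED_CODES = {200, 302}

    def classify(status_code):
        """Mark a simulated response as 'ok' or 'warning'."""
        return "ok" if status_code in PROJECT_EXPECTED_CODES else "warning"

    results = {"/login.cfm": 302, "/home.cfm": 200, "/missing.gif": 404}
    for url, code in results.items():
        print(url, code, classify(code))  # only the 404 stays a warning
    ```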



    Q: When viewing a scenario with NO grouping of Requests as Pages, I would like to sort Requests by extension type. The available columns are only Request and ThinkTime. I'd like a sortable Extension column so I can group and then delete undesired extensions in one swoop.

    I would like to have a GLOBAL option to exclude recording or testing of a list of file extensions. In other words, I'd like an option to exclude all images, all JavaScript, and/or all style sheets from the test scenario, as well as exclusion by specific file extension or pattern.

    A: Could you please give us a use case for excluding such requests from scenarios?



    Comment: I only care about testing the web application server (i.e., ColdFusion), not the web server (IIS). I don't want to load test requests for JavaScript, images, or CSS stylesheets. I want to easily eliminate from my recorded scenario all request types that I don't care about. To facilitate that, I would like to be able to sort requests from a recorded scenario so that I can group the extensions, then select by group, and delete those I don't care about. Presently, I go to Page view, expand the page, expand "Other Files", visually inspect the extensions listed there, then delete them. Then I go to the next Page view, page by page. In Request view I can see all requests for all pages at once, so it makes sense to sort/group extensions there for easier removal.



    To avoid having to edit a recording to remove unwanted extensions from scenarios, I'd like a global setting to exclude by extension type (.gif, .jpg, .css, .js) and another to exclude by category (images, CSS, JavaScript). That would let me record without those file types and avoid having to go back and remove them later.
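
    As a rough illustration of the filter I'm describing - hypothetical Python with made-up request paths, not anything in the product:

    ```python
    import os
    from urllib.parse import urlparse

    # Hypothetical mapping from the proposed exclusion categories to extensions.
    EXCLUDE_CATEGORIES = {
        "images": {".gif", ".jpg", ".jpeg", ".png", ".ico"},
        "javascript": {".js"},
        "css": {".css"},
    }

    def filter_recording(urls, categories):
        """Drop recorded requests whose path ends in an excluded extension."""
        excluded = set().union(*(EXCLUDE_CATEGORIES[c] for c in categories))
        return [u for u in urls
                if os.path.splitext(urlparse(u).path)[1].lower() not in excluded]

    recorded = ["/index.cfm", "/app.js", "/style.css", "/logo.gif", "/cart.cfm"]
    print(filter_recording(recorded, ["images", "javascript", "css"]))
    # -> ['/index.cfm', '/cart.cfm']  (only the ColdFusion requests remain)
    ```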



    Q: After adding new parameters, if I accidentally click "Edit one more parameter" again, there is no option to just Finish or Go Back. I must either Cancel or create a new Variable, and I'm afraid that clicking Cancel will discard all the variables I previously added.

    A: Do you think that renaming the button to "Finish and edit one more parameter" will be OK? Or, do you have some other vision?



    Comment: After clicking Edit One More Parameter, I think a Go Back button would be best. When using it, I realized I had made a mistake and wanted to go back and click Finish instead, but I couldn't.



    Q: When renaming a Scenario, the Run button's drop-down list does not update to reflect the new name, and the Scenario cannot be played under its old name.

    A: I'm not sure how to reproduce the issue. Can you give me more details along with images that will demonstrate the steps to reproduce it?



    Comment: The issue exists, but I didn't describe it accurately. When multiple "Tests" exist, the green Run button has a drop-down list showing the test names. If I rename a Scenario and then choose a Test from that Scenario in the Run button drop-down list, I get a popup alert saying "Unable to find the "XXXXX" scenario", where XXXXX is the original name of the scenario before it was renamed.



    Q: For a while, the Play button showed as disabled (grayed out) even though I had a scenario that passed Verify Scenario. I had to relaunch LoadComplete to get the Play button to work again.

    A: Does this problem persist in Beta 2? If it does, can you share the steps to reproduce it along with your test project?



    Comment: I only encountered this problem once and haven't seen it again.  If I do, I'll try to document the behavior better.



    Q: I want to be able to run several performance tests repeatedly, then have LoadComplete average the results of all trial runs as a group. Then I want to modify my application, run another group of performance tests, and average those. Finally, I want to compare the two test groups and see JUST the delta between them.

    A: Again, could you please share a use case?



    Comment: I want to know the effect on performance of one change to my web application. I would like a statistical analysis that tells me whether the change to the web app had a statistically significant impact on performance. To perform a valid statistical analysis, I need a reasonable sample size of test results before the change AND test results after the change. Then I can apply a test like Chi Squared (as explained here) to tell me definitively whether the change really impacted performance.



    To do this properly, it would be great to have a load test tool that can repeat a load test N times, then let me make a change to my web app and reset the test environment, and then run the same load test the same N times. Ideally the tool would automatically apply the chi-squared formula to the test results and show me in simple terms whether the change had an impact on performance (in either direction, better or worse). You could hide the complex math from the user and just provide an easy-to-use wizard with a simple graphical metaphor to indicate whether the change was significant.
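
    To illustrate the kind of analysis I mean, here's a minimal Python sketch with made-up timings. I mention chi-squared above, but for continuous response times a two-sample t-test is the more usual choice, so that's what this uses:

    ```python
    from scipy import stats

    # Made-up average response times (ms) from N repeated runs per group.
    before = [412, 398, 405, 421, 417, 403, 409, 415, 400, 411]
    after  = [381, 376, 389, 371, 384, 379, 377, 385, 373, 380]

    # Welch's two-sample t-test: did the application change shift the mean?
    t_stat, p_value = stats.ttest_ind(before, after, equal_var=False)

    delta = sum(after) / len(after) - sum(before) / len(before)
    print(f"mean delta: {delta:+.1f} ms, p = {p_value:.4f}")
    print("significant" if p_value < 0.05 else "not significant")
    ```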



    Thanks again for all your replies.








  • AlexeyKryuchkov
    SmartBear Alumni (Retired)
    Hi Steven,



    After recording relatively brief scenarios and then running them, I typically end up with 5 or 6 warnings for 302. The playback acts as expected based on viewing the user traversal via the request/response pairs, so I'm fine with treating the 302s as OK. However, as far as I know, I have to click on each warning after the first run and manually choose to treat 302 as OK. I would like a way to dismiss all 302 warnings as OK in one step.


    For such a situation, I'd recommend that you try handling the 302 responses as described in the "Code: 302 Found, 304 Not Modified" section of the "Typical Response Codes" help topic:

    Problem: This is the effect of conditional GET requests. The browser retrieves data from the cache instead of sending requests to and receiving responses from the server.



    Solution: Clear the browser's cache, cookies, and temporary files, and re-record the scenario. If you use Internet Explorer or Firefox, you can enable the Clear browser data option in the Record User Scenario dialog that appears when you start recording. See the Managing Cookies help topic for details.
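
    For illustration, a conditional GET looks roughly like this in Python's requests library (the URL is a placeholder):

    ```python
    import requests

    # First request: the server returns 200 plus a validator (ETag).
    first = requests.get("https://myapp.example.com/style.css")
    etag = first.headers.get("ETag")

    # Replaying with If-None-Match simulates a browser with a warm cache:
    # a server that supports validation answers 304 Not Modified and
    # sends no body, which is what shows up in the recording.
    headers = {"If-None-Match": etag} if etag else {}
    second = requests.get("https://myapp.example.com/style.css", headers=headers)
    print(second.status_code)  # 304 when the cached copy is still valid
    ```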




    Comment: I only care about testing the web application server (i.e., ColdFusion), not the web server (IIS). I don't want to load test requests for JavaScript, images, or CSS stylesheets. I want to easily eliminate from my recorded scenario all request types that I don't care about. To facilitate that, I would like to be able to sort requests from a recorded scenario so that I can group the extensions, then select by group, and delete those I don't care about. Presently, I go to Page view, expand the page, expand "Other Files", visually inspect the extensions listed there, then delete them. Then I go to the next Page view, page by page. In Request view I can see all requests for all pages at once, so it makes sense to sort/group extensions there for easier removal.



    To avoid having to edit a recording to remove unwanted extensions from scenarios, I'd like a global setting to exclude by extension type (.gif, .jpg, .css, .js) and another to exclude by category (images, CSS, JavaScript). That would let me record without those file types and avoid having to go back and remove them later.


    Thanks for the details. I've added two more suggestions to our DB - one for recording only the needed requests, and one for sorting requests by file extension in the editor.



    Comment: After clicking Edit One More Parameter, I think a Go Back button would be best. When using it, I realized I had made a mistake and wanted to go back and click Finish instead, but I couldn't.


    It is impossible to implement your request, because the wizard is reset after clicking the Finish button.



    Comment: The issue exists, but I didn't describe it accurately. When multiple "Tests" exist, the green Run button has a drop-down list showing the test names. If I rename a Scenario and then choose a Test from that Scenario in the Run button drop-down list, I get a popup alert saying "Unable to find the "XXXXX" scenario", where XXXXX is the original name of the scenario before it was renamed.


    Thank you for the clarification! I've logged this problem in our issue-tracking DB.



    Comment: I only encountered this problem once and haven't seen it again.  If I do, I'll try to document the behavior better.


    OK, please try to collect more information if the issue occurs again.



    Comment: I want to know the effect on performance of one change to my web application. I would like a statistical analysis that tells me whether the change to the web app had a statistically significant impact on performance. To perform a valid statistical analysis, I need a reasonable sample size of test results before the change AND test results after the change. Then I can apply a test like Chi Squared (as explained here) to tell me definitively whether the change really impacted performance.



    To do this properly, it would be great to have a load test tool that can repeat a load test N times, then let me make a change to my web app and reset the test environment, and then run the same load test the same N times. Ideally the tool would automatically apply the chi-squared formula to the test results and show me in simple terms whether the change had an impact on performance (in either direction, better or worse). You could hide the complex math from the user and just provide an easy-to-use wizard with a simple graphical metaphor to indicate whether the change was significant.


    Thanks for the clarification. We don't plan to implement such functionality in LoadComplete.
  • stevenerat
    Occasional Contributor
    I installed LC 2.5 Beta 2 on another client machine. There, I reproduced the issue where, after creating a simple test and saving it, the green Run button on the top menu bar was not enabled and appeared gray. This was a simple recording with about 20 requests (after I cleaned up all the images and other static files). No parameterization had been done yet. At the same time, if I right-clicked a test name, the fly-out menu did have a green Run button, and I was able to use that.



    So the workaround is to use the fly-out Run button off the test instead of the top menu bar. If I close and reopen the project in LC, the top menu bar continues to show the Run button as disabled/gray. I'm attaching a screenshot.





    ---

    Edited to read the top menu bar button was gray (not green).



    Also, I am now uninstalling LC 2.5 Beta 2 completely and will install the 2.5 Final that I just downloaded.
  • AlexeyKryuchkov
    SmartBear Alumni (Retired)
    Hi Steven,



    Thanks for the feedback. Could you please send me your LoadComplete test project so that we can try to reproduce the issue in our lab?