Forum Discussion
Hi Mathijs,
You are asking a really good question...
Unfortunately, I cannot be of much help here, as I have not participated in a real large-scale performance project; the points below are based on my own considerations and on experience obtained from proof-of-concept demos. All of them can be disputed, and disputes are welcome.
It is my understanding that the load project can be implemented in one of two ways:
a) The test consists of one long scenario that implements some end-to-end use case. This use case may use dynamic data (e.g. obtained from an external data source), but a separate scenario is recorded for every other use case, even if that use case is quite similar to an existing one. The load project consists of tests, each of which contains one end-to-end scenario. If the tested application changes in some area in a way that alters the network communication between this area and the server, all relevant scenarios must be re-recorded.
b) The project contains a set of small scenarios, one scenario per 'atomic' user action (e.g.: login, browse items list, order item(s), purchase item(s), logout). The 'actual test' consists of a set of Call Scenario operations that implement different use cases. E.g.:
-- login - logout;
-- login - get random number of pages with items list - logout;
-- login - order random number of items - purchase items - logout.
And the load project consists of a set of these 'actual tests'.
Approach a) looks as if it requires more maintenance effort: a lot of long scenarios must be re-recorded, with data correlation applied anew, and this must be done frequently if the tested application is evolving. On the other hand, if you already have automated functional tests in TestComplete that cover the required end-to-end scenarios, the effort required to re-record the scenarios might be much lower (this is where the integration you are talking about is used), and data correlation can be done using the means provided by LoadComplete, reusing the already existing variables that feed the load tests with data from external sources.
Approach b) looks as if it potentially requires less maintenance effort and provides better scenario reusability, because only small 'atomic' scenarios need to be re-recorded. However, it requires data correlation between the different scenarios executed sequentially within one test via a set of Call Scenario operations, and this is something that seems to lack integrated support in LoadComplete at the moment. There is a feature request that describes this approach (https://community.smartbear.com/t5/LoadComplete-Feature-Requests/Make-it-possible-to-compare-and-synchronize-requests-for/idi-p/138803) and comments are welcome.
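Just for illustration, here is a rough Python sketch of approach b) outside of LoadComplete (in the tool itself this composition is done with Call Scenario operations). The atomic actions become small functions, and the session cookie correlated between them stands in for the inter-scenario data correlation discussed above. All endpoints, field names, and credentials are hypothetical:

```python
# A minimal sketch of approach b), using Python's 'requests' library.
# Each function is one 'atomic' scenario; a use case is a sequence of calls.
# All endpoints, field names, and credentials below are hypothetical.
import random
import requests

BASE = "https://shop.example.com"  # hypothetical application under test

def login(session):
    # The session cookie obtained here is the correlated data that every
    # subsequent atomic scenario depends on.
    r = session.post(BASE + "/login", data={"user": "load1", "password": "secret"})
    assert r.status_code == 200

def browse_items(session, page):
    r = session.get(BASE + "/items", params={"page": page})
    assert r.status_code == 200

def order_item(session, item_id):
    r = session.post(BASE + "/orders", data={"item": item_id})
    assert r.status_code in (200, 201)

def logout(session):
    r = session.post(BASE + "/logout")
    assert r.status_code == 200

def use_case_browse():
    # 'login - get random number of pages with items list - logout'
    with requests.Session() as s:  # the Session object carries the correlated cookie
        login(s)
        for page in range(1, random.randint(2, 6) + 1):
            browse_items(s, page)
        logout(s)

def use_case_order():
    # 'login - order random number of items - logout'
    with requests.Session() as s:
        login(s)
        for item_id in random.sample(range(1, 100), random.randint(1, 5)):
            order_item(s, item_id)
        logout(s)
```

The point of the sketch is the correlation: every atomic step after login only works because the session created by login is passed along, and it is exactly this hand-off between Call Scenario operations that currently lacks tool support.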
As a bottom line: I am not sure at the moment which of the mentioned approaches is preferable in the long run, as I have not tried them in a real project.
Maybe others with serious real-world experience with LoadComplete or some similar tool (tristaanogre?) can comment more on this.
Anyway, I hope that the above may turn out to be helpful for you in some way...
- tristaanogre (8 years ago, Esteemed Contributor)
It's been a LOOOONG time since I've had a chance to play around with LoadComplete... it was version 2 last time I touched it.
As for the two approaches that AlexKaras mentions, I've used both and, honestly, it depends a bit on what you're trying to do, the application under test, etc. As with functional testing in TestComplete, the needs of the project often dictate the approach to creating the automation. As mentioned, when the traffic is broken up into multiple pieces, coordination needs to happen between the scenarios to make sure you are doing things the same way every time.
As for the feature to turn a functional test into a load test, I have not used it. I've done all my work in the past directly in LoadComplete (or, when it was still part of the tool, in TestComplete). The thing to remember is that LoadComplete is all about the HTTP requests; the actual UI, clicking on buttons, links, etc., really doesn't come into play. So the feature mentioned is simply a way of taking a functional test and feeding it through the LoadComplete transponder. From what I understand, it's more of a convenience thing: you have a scenario created in TestComplete that mirrors a scenario you want to use for load testing. In the past, what is described in that article was a more manual process: you would start the recording in LoadComplete, then load up TestComplete and run the test. The integration that is present now just removes some of that manual work. But at the core, what you are doing is having your workstation send web traffic to your web server and having LoadComplete record that traffic for playback. Whether you do it by running a TestComplete functional test or manually is immaterial to the end result.
I do wish I had that feature back when I was doing this because the convenience factor would have been VERY nice. Recording a load test demands a lot of accuracy and repeatability. In the past, if I needed to re-record a load test, I had to remember EXACTLY what steps I executed and make sure I did them the same way EVERY time if I wanted to compare one set of load tests to another. If the set of transponder traffic changes between load tests, you can't necessarily compare the results 1 to 1. By having an automated script in TestComplete to build my load test, I could guarantee that EVERY time I needed to record the load test, the same traffic would be recorded.
- mgroen2 (8 years ago, Super Contributor)
AlexKaras, tristaanogre thank you both for sharing your thoughts on this.
I am pretty new to LoadComplete and I am particularly interested in transforming data-driven TestComplete tests into LoadComplete tests. Do I need to create a link between each GET and PUT request and the data store, or is this recorded by LoadComplete itself (during TestComplete's test playback)?
- AlexKaras (8 years ago, Champion Level 3)
Mathijs,
There is a difference between (automated end-to-end) functional tests and load tests.
Quite often, functional tests are more or less sophisticated. They usually do a lot of verifications, and the flow of a given test may change depending on the test data. (For example, the flow to purchase some general-purpose medicine may differ from the flow when a drug-containing one is requested. In the test, this may be implemented via an 'if' switch, with the test code branched appropriately.)
In contrast, in order to create as significant a load as possible, load tests must be as simple as possible. This means that, even if it is technically possible, the implementation of a load test should avoid complex verifications, a large number of verifications, etc.
So usually a load test just fires a request and checks that the server responded with the expected status code. Verifying that the server responded with the correct expected data is usually skipped and left for functional testing. As an example, it is fine for a load test if the server responds with an items list that does not match the requested filter condition: the goal here is to check that the server was able to process the given number of requests for the filtered items list and respond with some data. Obviously, a functional test must fail if the returned list does not match the requested filter.
That is why it is better to create several load scenarios for the flows that depend on test data than to try to incorporate business logic into a load scenario and branch its flow based on the test data used.
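To make the contrast concrete, here is a minimal Python sketch of the two kinds of checks; the endpoint and the 'category' filter parameter are hypothetical:

```python
# A load-style check vs. a functional-style check for the same request.
# The endpoint and the 'category' filter parameter are hypothetical.
import requests

def load_check():
    # Load test: only verify that the server coped and answered at all.
    r = requests.get("https://shop.example.com/items", params={"category": "books"})
    assert r.status_code == 200          # response content is not inspected

def functional_check():
    # Functional test: additionally verify that the returned data is correct.
    r = requests.get("https://shop.example.com/items", params={"category": "books"})
    assert r.status_code == 200
    items = r.json()
    assert all(item["category"] == "books" for item in items)  # filter honored
```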
Another difference:
Functional UI tests usually depend on the UI design. This means that you may be required to correct your test if, say, an option button control is replaced with a combo box.
Because load tests just replay the traffic between the client and the server, they do not care about UI changes as long as those changes do not affect the requests (their number, their order, and the data they carry) and the expected server responses.
This means that your load test may require no corrections at all if an option button control is replaced with a combo box.
At the same time, it is quite possible that some internal change made by a front-end developer to the script code executed in the browser on the client side will change the number or the content of some request(s) sent by the browser, and this can happen without any visible impact on the UI.
The bad news here is that I have talked to several front-end developers, and they said they do not know whether or not their changes result in traffic changes, because such information is not provided by the frameworks they use (and they are not really interested in it). This means that potentially you may need to correct your load test for every new build of your application. And the worst thing is that I am not aware of any tools or means that let you know in advance whether or not some change in the UI results in a traffic change. (This was the reason for the feature request I mentioned earlier.)
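For what it is worth, a rough workaround is possible if the recorded traffic of two builds can be exported (for example, as HAR files from the browser's developer tools): the request sequences can then be diffed to detect drift. A minimal sketch, assuming two such exports exist (the file names are hypothetical):

```python
# A rough sketch of detecting traffic drift between two builds by comparing
# request sequences exported as HAR files (e.g. from browser dev tools).
# File names are hypothetical; only method + URL are compared here.
import json

def request_signature(har_path):
    with open(har_path, encoding="utf-8") as f:
        har = json.load(f)
    return [(e["request"]["method"], e["request"]["url"])
            for e in har["log"]["entries"]]

old = request_signature("build_41.har")   # hypothetical recordings
new = request_signature("build_42.har")

if old != new:
    print("Traffic changed between builds; load scenarios may need re-recording.")
    for i, (a, b) in enumerate(zip(old, new)):
        if a != b:
            print(f"  first difference at request #{i}: {a} -> {b}")
            break
else:
    print("Request sequence unchanged.")
```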
Considering the above, the approach with TestComplete looks promising because it: a) simplifies recording in LoadComplete; and b) ensures that the recorded traffic corresponds to what is actually generated by the application.
The drawback is that the complete scenario is overwritten and data correlation must be done anew, which may turn out to be a time-consuming and non-trivial task depending on the internal design and functional complexity of the tested application.