Forum Discussion
AlexKaras
15 years ago · Community Hero
Hi Johny,
You did not provide any information about the tested application, so my answers and considerations are quite generic.
> what do you mean by response time?
I think this should be specified in the requirements for the test, as the test plan / test design should define what should be measured and how. For example, it may be required that *any* request is responded to within one second. In that case you should measure the time for every request/response. Alternatively, the requirement may be that it must take no more than one second for the end user to see the page in the browser. In that case you should sum the times of all requests/responses executed for the given page(s) (also consider the time it takes for the scripts to execute on the client's computer) and check that the total does not exceed the required one second. Here it does not matter how long any single request/response takes; only the total is counted. A sketch of both measurement styles follows below.
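Just to make the two requirement styles concrete, here is a minimal sketch in plain Python (using the `requests` library, not TestComplete); the URLs and the one-second budgets are hypothetical placeholders:

```python
import time
import requests

PER_REQUEST_BUDGET = 1.0   # seconds allowed for any single request (variant 1)
PAGE_BUDGET = 1.0          # seconds allowed for the whole page (variant 2)

# Hypothetical page and the resources it loads
page_requests = [
    "https://example.com/index.html",
    "https://example.com/app.js",
    "https://example.com/logo.png",
]

total = 0.0
for url in page_requests:
    start = time.perf_counter()
    response = requests.get(url, timeout=10)
    elapsed = time.perf_counter() - start
    total += elapsed

    # Variant 1: every single request must respond within the budget
    if elapsed > PER_REQUEST_BUDGET:
        print(f"FAIL: {url} took {elapsed:.2f}s (limit {PER_REQUEST_BUDGET}s)")

# Variant 2: only the total time for the whole page matters
if total > PAGE_BUDGET:
    print(f"FAIL: page total {total:.2f}s exceeds {PAGE_BUDGET}s")
```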
> [...] number of request failed, and the "fail" word.
> In my test, not all, but when I start to increase the load, a few requests start to fail, so which percentage is dangerous?
I understand a failure as TestComplete treats it: a difference between the recorded and the actual result code for the response.
Which failures should be considered critical and what percentage is dangerous again depends on the requirements and on which exact requests failed. For example, it may be acceptable that the server fails to return some logo images under high load, but it is most probably not acceptable if a user cannot log in. In the latter case, there must be a requirement that specifies what percentage of login failures is acceptable. For example, it may be required that up to 50 users must be able to log in *simultaneously*, and it may be acceptable that no more than 5 login failures occur if the number of *simultaneous* logins is between 51 and 200. A sketch of such a check follows below.
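Here is a minimal sketch (plain Python, not TestComplete) of how a requirement like that could be checked after a run; the load bands and failure limits are the hypothetical figures from the example above:

```python
def acceptable_login_failures(simultaneous_logins: int) -> int:
    """Return the maximum number of login failures allowed for a given load."""
    if simultaneous_logins <= 50:
        return 0   # up to 50 users must all be able to log in
    if simultaneous_logins <= 200:
        return 5   # between 51 and 200 users, up to 5 failures are tolerated
    raise ValueError("No requirement defined above 200 simultaneous logins")

def check_run(simultaneous_logins: int, failed_logins: int) -> None:
    limit = acceptable_login_failures(simultaneous_logins)
    verdict = "PASS" if failed_logins <= limit else "FAIL"
    print(f"{verdict}: {failed_logins} login failures at "
          f"{simultaneous_logins} simultaneous logins (limit {limit})")

check_run(50, 0)    # PASS
check_run(120, 3)   # PASS
check_run(180, 7)   # FAIL
```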
> And for a 500 status code error, we cannot give our customer a detailed answer when they want to learn the real reason for the problem..
And this is, actually, a task for you and/or the project team. You must figure out the reason for this problem: whether TestComplete fails at some point, or the tested application code executed on the server fails, or the server itself has a problem (e.g. low memory), etc. Only after you find the reason will you be able to provide the customer with definite information like: 'there is a problem in the application code and it will be fixed in two weeks', or 'do not allow more than 500 concurrent users, otherwise they will not be able to log in', etc. In the worst case, if you fail to find the reason, you may, for example, recommend that the customer not run the application on a server with less than 2 GB of RAM and limit the number of concurrent users to 300, though I am not sure the customer will be completely happy. But anyway, some information is better than no information at all. :) A first diagnostic step for 5xx errors is sketched below.
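As a starting point for that investigation, here is a minimal sketch (plain Python with `requests`, not TestComplete) that records enough context for each 5xx response to help decide where the fault lies; the endpoint URL is a hypothetical placeholder:

```python
import datetime
import requests

def probe(url: str) -> None:
    """Request a URL and log details of any 5xx response for later analysis."""
    response = requests.get(url, timeout=30)
    if response.status_code >= 500:
        # An application stack trace in the body usually points to the
        # application code; a bare proxy/gateway error page usually points
        # to the server or infrastructure instead.
        with open("server_errors.log", "a") as log:
            log.write(f"{datetime.datetime.now().isoformat()} {url} "
                      f"-> {response.status_code}\n")
            log.write(f"headers: {dict(response.headers)}\n")
            log.write(f"body (first 500 chars): {response.text[:500]}\n\n")

probe("https://example.com/login")  # hypothetical endpoint
```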