Forum Discussion
Hi Richie,
Please see below:
I may be misunderstanding, so bear with me - but you have 4 requests with different content, and each subsequent .json request has >=1 name/value pair removed. YES
Are you saying that some of the name/value pairs being removed are optional and some are mandatory? YES
When you say the content of the response will change - what are you trying to test within your requests? If the content is changing but not the verification points (the purpose behind the test), then that shouldn't be a problem. However, if what you are asserting against changes in your responses, I don't see a straightforward way to have a single test case. If the assertions on your responses are quite generic (HTTP status = 200, etc.) then you could do this with a single test case - but if what you are testing (the purpose behind your tests and the corresponding verification/assertion points) differs based on the response, then I don't see how you can do this easily in 1 test case. Good question - the assertions will be different; 2 requests will receive a positive message while the last 2 will receive a decline.
Hey oanab
Apologies - I never got back to you on this - I got ill so I haven't really participated on the board for the last couple of weeks.
Right - now I understand your issue a lot better.
You could have just a single test case to cover all scenarios - your test would contain 5 POST steps: 1 POST for each scenario where you remove 1 of the 4 attributes, plus 1 POST where none of the attributes are removed.
Considering your testing scenarios appear to include both positive tests (when optional attributes are removed and when none of the attributes are removed) and negative tests (when mandatory attributes are removed), I think I'd split this into 2 separate test cases: 1 for positive scenarios and 1 for negative scenarios.
The trouble with lumping separate tests together into 1 test case is that it can skew your reporting. If you had 1 test case to cover all scenarios (so 5 individual POSTs included in your 1 test), and 'some' of the steps fail, and the purpose of those steps is essentially a test in itself, it's difficult from a reporting perspective to indicate what has actually failed. It's OK for the tester who is running the steps, but from a management perspective they just know a test has failed - and that can impact confidence.
Anyway - that last paragraph is just my personal experience.
For the scenario you are talking about, you have 5 POSTs in total. I'd personally split them into 2 separate test cases - 1 positive test case and 1 negative test case - but there's nothing wrong with lumping them all together in 1, or having 5 individual test cases. Personally, unless there's a good reason not to, I'd just go with a positive and a negative test case for this scenario.
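The 5-POST layout described above can be sketched in code. This is a minimal, hypothetical Python sketch - the field names, which fields are mandatory, and the expected outcomes are placeholder assumptions, not details from this thread; the idea is just generating one scenario per removed attribute and then splitting them into the positive and negative test cases richie suggests:

```python
import copy

# Placeholder payload: the attribute names and which ones are mandatory
# are assumptions for illustration only.
BASE_PAYLOAD = {
    "customerId": "C-100",   # assumed mandatory
    "amount": 25.0,          # assumed mandatory
    "promoCode": "SPRING",   # assumed optional
    "comment": "hello",      # assumed optional
}
MANDATORY = {"customerId", "amount"}


def payload_without(field):
    """Return a copy of the base payload with one name/value pair removed."""
    payload = copy.deepcopy(BASE_PAYLOAD)
    del payload[field]
    return payload


def build_scenarios():
    """One scenario per POST step: the full payload, plus one per removed field."""
    scenarios = [("all_fields", dict(BASE_PAYLOAD), "positive")]
    for field in BASE_PAYLOAD:
        outcome = "negative" if field in MANDATORY else "positive"
        scenarios.append((f"without_{field}", payload_without(field), outcome))
    return scenarios


# Split into the two test cases suggested in the thread.
positive_case = [s for s in build_scenarios() if s[2] == "positive"]
negative_case = [s for s in build_scenarios() if s[2] == "negative"]
```

With this split, a failure in `negative_case` reports separately from `positive_case`, which addresses the reporting concern above: management sees which test case failed rather than one monolithic pass/fail.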
Cheers,
richie