Hey swapnils,
Have you got code to test against, or do you intend to use Virts? I've only played with Virts before now, as they only really come into their own for automation rather than manual testing, and by the time my tests have been created I've always had code to test against.
By definition, if you have code to test against, the AUT will include validators.
What is the mechanism for consumption of the HL7 messages? REST APIs?
Some ETL file-load process like Oracle's SQL*Loader?
How are the files built in the live system? What is the ETL process used to create them?
Is there any transformation occurring AFTER the ETL extraction process?
Is the message exchange pattern synchronous or asynchronous?
When you talk about validating your messages, do you mean validating the test data files you've created BEFORE you start testing, or validating your messages during testing? Theoretically I disagree with adding schema validation assertions into SoapUI tests if you're testing code rather than a Virt, because if you're testing code your submissions are always validated by your endpoint anyway: if the message gets a successful response, the request was valid. I'm not saying don't include tests for each valid and invalid scenario (depending on risk and prioritisation I always do), it's just that if a message gets a successful response when hitting code, the request must have been valid.
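To illustrate what I mean, here's a rough sketch of that valid/invalid pair of tests. The endpoint URL and field names are made up for the example, not your actual API, and the point is that the assertion is on the endpoint's own response rather than on a schema assertion inside the test:

```python
# Sketch only: assumed REST endpoint and field names.
import requests

BASE_URL = "https://test.example.com/api/hl7/messages"  # hypothetical endpoint

valid_message = {"patientId": "12345", "messageType": "ADT^A01", "sendingFacility": "WARD1"}
invalid_message = {"patientId": None, "messageType": "ADT^A01"}  # null mandatory field

def test_valid_message_is_accepted():
    resp = requests.post(BASE_URL, json=valid_message, timeout=10)
    # If the endpoint accepts it, the request was valid by definition.
    assert resp.status_code in (200, 201)

def test_invalid_message_is_rejected():
    resp = requests.post(BASE_URL, json=invalid_message, timeout=10)
    # The endpoint's validator should reject it; no schema assertion needed in the test.
    assert resp.status_code == 400
```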
If you are testing against code rather than Virts, then depending on the answers to my questions above, the developers can create additional APIs that hook into your genuine endpoints to provide the responses you need to test against. If you're using Virts, your developers can add validators to your Virts to verify your messages.
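If it helps to picture the Virt side, here's a very rough hand-rolled equivalent of a Virt with a validator in it. In ReadyAPI you'd configure this inside the tool rather than write it yourself, and the route and mandatory fields here are just assumptions for illustration:

```python
# Sketch of a Virt-style mock with a built-in validator; fields and route are made up.
from flask import Flask, request, jsonify

app = Flask(__name__)
MANDATORY_FIELDS = ("patientId", "messageType")  # assumed mandatory fields

@app.route("/hl7/messages", methods=["POST"])
def receive_message():
    body = request.get_json(silent=True) or {}
    missing = [f for f in MANDATORY_FIELDS if not body.get(f)]
    if missing:
        # The Virt's validator rejects the message, just as real code would.
        return jsonify({"status": "rejected", "missing": missing}), 400
    return jsonify({"status": "accepted"}), 201

if __name__ == "__main__":
    app.run(port=8080)
```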
If you're testing code as I say above and it's a synchronous MEP (so you get a response to work with), and you get a successful response and your POST/PUT/PATCH (if RESTful) updates the database successfully, then by definition your request must have been valid relative to your endpoint's API schema and must also conform to your DB constraints (null allowed/not allowed, datatypes and field lengths). Yes, I appreciate a request may get through the endpoint's validator but then get rejected when attempting to update the database (say if the field length in the validator is 10 but the DB field length is 8, or there are inconsistent datatypes or null-not-allowed issues, etc.), but prior to testing I'd expect a desk check/static test of the endpoint's validator against the DB's data dictionary to get the validator and DB aligned.
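That's also why I check the write actually landed rather than only the response code; a mismatch like the 10-vs-8 field length example above is exactly what it catches. Something along these lines, where the endpoint, connection details, table and column names are all assumptions for the sketch:

```python
# Sketch only: hypothetical endpoint, connection string, table and column names.
import requests
import psycopg2  # or cx_Oracle etc., depending on the actual database

BASE_URL = "https://test.example.com/api/hl7/messages"

def test_boundary_length_message_reaches_the_database():
    # 10 characters passes a validator set to length 10, but would fail a DB column of length 8.
    message = {"patientId": "PAT0000001", "messageType": "ADT^A01"}
    resp = requests.post(BASE_URL, json=message, timeout=10)
    assert resp.status_code in (200, 201)

    conn = psycopg2.connect("dbname=aut user=test")  # assumed connection details
    with conn, conn.cursor() as cur:
        cur.execute("SELECT COUNT(*) FROM hl7_messages WHERE patient_id = %s",
                    (message["patientId"],))
        assert cur.fetchone()[0] == 1  # the write actually happened
```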
You mention what 'tool' I used to validate my messages, but I verified the validation by successful and unsuccessful writes to the database, using ReadyAPI to inject the messages. It was asynchronous, so I needed the additional test APIs to hook into to get my responses. If your project uses REST and is synchronous (so you get responses to assert against), you don't necessarily need to verify the validation separately; you'll do it by successfully and unsuccessfully injecting messages known to be valid and invalid.
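For the asynchronous case, the shape of the test ends up being inject-then-poll against whatever test API the developers give you to hook into. Roughly like this, where the URLs, message ID and status value are all assumptions rather than your project's actual hooks:

```python
# Sketch only: assumed inject URL, test-hook URL and status values.
import time
import requests

INJECT_URL = "https://test.example.com/api/hl7/messages"        # assumed
STATUS_URL = "https://test.example.com/test-api/messages/{id}"  # assumed developer test hook

def test_async_message_is_eventually_processed():
    message = {"messageId": "MSG-001", "patientId": "12345", "messageType": "ADT^A01"}
    requests.post(INJECT_URL, json=message, timeout=10)

    # Poll the test API until the message has been processed or we give up.
    for _ in range(30):
        resp = requests.get(STATUS_URL.format(id=message["messageId"]), timeout=10)
        if resp.ok and resp.json().get("status") == "PROCESSED":
            return
        time.sleep(2)
    raise AssertionError("message was not processed within the timeout")
```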
I know I've talked at length above, but all of these considerations feed into the approach I use whenever I test anything. I do testing, but my principal job for the most part is defining the integration/API testing strategy that a company should adopt across the programme for each of its projects, so I needed to highlight that you need a lot more information before you can come up with a testing approach for the project you're working on.
Hope this helps,
Rich