Forum Discussion

marcl's avatar
marcl
Occasional Contributor
10 years ago

Failing "message content assertions" in reports

Hello,

 

I've got some test suites which are run nightly (using Jenkins), and for which I receive a report every morning. The report is generated using "testrunner.bat -j", which puts the data in a JUnit-like XML file.
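
The nightly job runs something along these lines (the paths and project file name are placeholders; "-j" turns on the JUnit-style report and "-f" sets the folder it is written to):

testrunner.bat -j -f C:\reports\nightly C:\projects\my-project-soapui-project.xml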

In my tests, I use a lot of "message content assertions". But when one of those fails, I can't seem to find the reason for the failure in the report. It basically looks like this:

 

Failing due to failed test step

My Test Step Failed

[My Test Step] Message Content Assertion failed : failed/compared = 1/160
Status: FAILED
Time Taken: 63
Size: 37917
Timestamp: Tue Oct 27 21:57:06 UTC 2015
TestStep: My Test Step

----------------- Messages ------------------------------
[My Test Step] Message Content Assertion failed : failed/compared = 1/160

----------------- Properties ------------------------------
HTTP Version: HTTP/1.1
Endpoint: http://myhost:12345
Encoding: UTF-8
Method: GET
StatusCode: 200
URL: http://myhost:12345/db/foo?bar=42&filter=all

---------------- Request ---------------------------
Connection: [Keep-Alive]
User-Agent: [Apache-HttpClient/4.3.1 (java 1.5)]
Host: [myhost:12345]
Accept-Encoding: [gzip,deflate]

GET http://myhost:12345/db/foo?bar=42&filter=all HTTP/1.1
Accept-Encoding: gzip,deflate
Host: cas01:38022
Connection: Keep-Alive
User-Agent: Apache-HttpClient/4.3.1 (java 1.5)


---------------- Response --------------------------
Connection: [keep-alive]
#status#: [HTTP/1.1 200 OK]
Content-Length: [37917]
Content-Type: [application/json]

{"some":"data", "I":"obfuscated"}

As you can see, we know that one check in the message content assertion failed, but we have no idea which one.

Is there any way I can get this information without having to re-play the test afterwards from a GUI instance of SoapUI?

 

  • ChristianThomas's avatar
    ChristianThomas
    Occasional Visitor

    Hi Marcl and All

     

    I appreciate this was nearly three years ago! But I came across this post looking for the same answer as Marcl.

     

    I managed to work out something that makes the assertion spit out the value when it fails, so that you can clearly see which value is failing when load testing a massive amount of data.

     

    All you need to do is set a "count" assertion and have it look for the ${DataSource#} value.

    This in turn reports the "missing token", i.e. the failing value / piece of data, in the LoadUI log.

     

    Of course, this will only work if the response contains the value you're entering!
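
    A rough sketch of the idea, with a placeholder DataSource step and column name (adjust them to whatever your project uses). The assertion content is simply the property expansion of the current row's value:

    ${DataSource#CustomerId}

    When the response does not contain that value, the failure message reports it as the missing token, so the log names the exact piece of data that did not match.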

     

    For me this works perfectly, as I am testing over 610k values (pieces of data).

     

    Hopefully this helps someone somewhere!

     

    • TanyaYatskovska's avatar
      TanyaYatskovska
      SmartBear Alumni (Retired)

      Hi Christian,

      Thanks for sharing your solutions with us.

      This helps the Community a lot!

  • nmrao's avatar
    nmrao
    Champion Level 3

    Let us assume the following assertions exist for a test step and you have not renamed or changed their default names.

     

    When a user adds more than one assertion of the same type, say Contains, they end up with default names like this:

     

    • Contains
    • Contains 1

    There is no issue as long as everything goes fine.

    The problem comes only when there are failures.

     

    Give each assertion a unique name so that it can be identified at a glance.

     

    For example, the Contains assertion names could be:

     

    • Does mystep's response contain the value Black?
    • Does mystep's response contain the value MyCustomer?
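
    With names like these, a failing entry in the report points at the exact check, reading roughly like this (the exact wording depends on the assertion type):

    [My Test Step] Does mystep's response contain the value Black? failed : missing token [Black]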

    Hope this is helpful.

    • marcl's avatar
      marcl
      Occasional Contributor

      Thanks for your suggestion; however, I'm specifically asking about "message content" assertions, not "contains" assertions.

      Replacing our "message content" assertions with "contains" assertions is not an option, as we would have to create hundreds of new assertions for most of our test cases.

       

      The problem is not to work out which assertion failed, but to know:

      • which check(s) in the assertion itself failed (in the example above, 1 check amongst 160 failed but we don't know which one)
      • which values were expected, and actually received
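
      For what it's worth, a Script Assertion written in Groovy can surface that level of detail, because its assert message ends up in the failure text of the report. A rough sketch (the field names and expected values below are made up):

      import groovy.json.JsonSlurper

      // Parse the JSON response of this test step
      def json = new JsonSlurper().parseText(messageExchange.responseContent)

      // One entry per check; placeholder fields and values
      def expected = [some: "data", bar: 42]

      expected.each { field, want ->
          def got = json[field]
          // The assert message becomes the failure text, so it names the
          // failing field together with the expected and actual values
          assert got == want : "Field '${field}': expected '${want}' but got '${got}'"
      }

      But that still means spelling every check out by hand, so it is not really a replacement for the message content assertions either.
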
      • nmrao's avatar
        nmrao
        Champion Level 3

        I just took that one as an example; the same approach can be used for other assertions too.

        I can understand that you have too many to rename. But it may still be better to rename the assertions, to avoid the issue you posted.

        So you are free to choose whichever suits your priorities / needs.

         

        In fact, you can include the expected values in the name itself, which might be appropriate at times.
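
        For example (the step name and values are placeholders):

        • Does mystep's response contain status ACTIVE for customer 42?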