Forum Discussion

AAB's avatar
Regular Contributor
4 years ago

ReadyAPI - download a text file from a URL with a Groovy script



I have looked at a dozen possibilities online to download a text file from a website, but none of them actually fits my case.

What I need to do:

* Go to the website that contains a bunch of logfiles (extension = .paldata, but they can actually be saved as text files).

* Search for a file whose name contains a given ID, e.g. 20200730-XyLQpIfwhDv90uazxe1kwQAAABc.paldata.

* Open the file and read it.

* Search for "fsbTransactionId".

* If it's not present --> error message.

* If it's present, read it and stash it in a variable.

I would like to do that for different ID's in the file.
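To make it concrete, here is roughly what I'm trying to achieve, sketched in plain Groovy. This is illustrative only: findMarkerLine is a hypothetical helper, and the sample string stands in for a real .paldata file.

```groovy
// Illustrative sketch only: search the text of a downloaded file for a
// marker string and stash the matching line. findMarkerLine is a
// hypothetical helper, not part of any real setup.
String findMarkerLine(String content, String marker) {
    // return the first line containing the marker, or fail with a message
    def line = content.readLines().find { it.contains(marker) }
    assert line != null : "$marker not found in file"
    return line
}

def sample = "some log line\nfsbTransactionId: 20200730-XyLQpIfwhDv90uazxe1kwQAAABc\nanother line"
def stashed = findMarkerLine(sample, "fsbTransactionId")
// in real use the content would come from the site, e.g.:
// def content = new URL(fileUrl).text
```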

Now, I've found and tried a lot of "HTTPBuilder" methods and I/O read and write methods, but it seems I'm not able to handle the total picture. And installing a standalone Selenium didn't work for me either (cfr. another post of mine).

Does somebody have another idea please?

nmrao ? groovyguy ? NBorovykh Anastasia ?

Thanks in advance


  • AAB's avatar
    4 years ago

    Hi all,


    Thanks for your help... and I've managed to do this with all of your suggestions 😉 I resolved the issue like this:

    Create a testcase; I've called it "CheckHeaders".

    In this testcase, set up these steps:

    * REST Request

    * Groovy script1 (I've called it 'GetIDsFromRawRequest')

    * Delay

    * HTTP request1 (I've called it 'GetFileFromE2E')

    * Groovy script2 (I've called it 'GetFileFromURL')

    * HTTP request2 (I've called it 'GetFileContent')


    REST Request: this points to the REST request you want answers from. Its response headers will give you the ID's you're searching for.

    Groovy script1: 

    import groovy.json.JsonSlurper

    // find the "X-BOSA-ServiceInfo" response header and parse its JSON content
    def serviceInfo
    testRunner.testCase.getTestStepByName("REST Request").testRequest.response.responseHeaders.each {
        if (it.key == "X-BOSA-ServiceInfo")
            serviceInfo = new JsonSlurper().parseText(it.value)
    }
    def applicationID = serviceInfo.ApplicationID
    def fsbTransID = serviceInfo.fsbTransactionId
    def providerID = serviceInfo.ProviderID
    def backendTime = serviceInfo.BackendTime
    assert applicationID != "" : "applicationID is blank"
    assert applicationID != null : "applicationID is null"
    assert fsbTransID != "" : "fsbTransID is blank"
    assert fsbTransID != null : "fsbTransID is null"
    assert providerID != "" : "providerID is blank"
    assert providerID != null : "providerID is null"
    assert backendTime != "" : "backendTime is blank"
    assert backendTime != null : "backendTime is null"
    // stash the values on testcase level so the following steps can use them
    testRunner.testCase.setPropertyValue("appId", applicationID.toString())
    testRunner.testCase.setPropertyValue("fsbTransId", fsbTransID.toString())
    testRunner.testCase.setPropertyValue("providerID", providerID.toString())
    testRunner.testCase.setPropertyValue("backendTime", backendTime.toString())
    log.info "applicationID: " + applicationID
    log.info "fsbTransID: " + fsbTransID
    log.info "providerID: " + providerID
    log.info "backendTime: " + backendTime


    Delay:  this speaks for itself. In my case it takes a while before the logfiles are created and can be downloaded from the E2E.
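As a side note, if a fixed Delay ever turns out to be flaky, the wait could be replaced by a polling Groovy step. A sketch; the step name 'GetFileFromE2E' follows the setup above, and the retry count is an assumption:

```groovy
// Sketch of a polling alternative to a fixed Delay step. The check itself
// is a plain function so it can be tried outside ReadyAPI; the ReadyAPI
// wiring is shown in comments because testRunner/context only exist
// inside a Groovy script step.
boolean fileListed(String pageText, String id) {
    // true once the page lists a .paldata file containing our ID
    pageText != null && pageText.contains(id) && pageText.contains(".paldata")
}

// Inside a ReadyAPI Groovy script step, the loop would look roughly like:
// def id = testRunner.testCase.getPropertyValue("fsbTransId")
// for (int i = 0; i < 10; i++) {
//     testRunner.runTestStepByName("GetFileFromE2E")
//     if (fileListed(context.expand('${GetFileFromE2E#Response}'), id)) break
//     sleep(3000)   // wait 3 seconds between polls
// }
```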

    HTTP Request1: 

    This request points to the website where you want to download the files. This is a "GET" request.

    Groovy script2:

    import groovy.xml.XmlSlurper

    // get the ID that script1 stored on testcase level
    def getID = testRunner.testCase.getPropertyValue("fsbTransId")
    log.info getID
    // read the http body as plain text, without headers
    def response = context.expand( '${GetFileFromE2E#Response}' )
    // parse the page as xml
    def xml = new XmlSlurper().parseText(response)
    // look up the page element where the file link is listed: inspect how the
    // webpage was built and follow the div's and all the elements inside them
    xml.body.div[0].div[0].div[1].table.'*'.each {
        if (it.text().contains(getID) && it.text().contains(".paldata")) {
            log.info "File found! Surf to: " + it.text()
            def url = "" + it.text()
            // put the url on testcase level in a property
            testRunner.testCase.setPropertyValue("url", url)
        }
    }
    // the next step sends a request to that url and asserts that all ID's are present

    HTTP Request2:

    For this request, use the property you've saved on testcase level as the Endpoint: ${#TestCase#url}

    Add a script assertion to this request to get and compare the parameters.

    import groovy.json.JsonSlurper
    import com.eviware.soapui.support.XmlHolder

    def holder = new XmlHolder( messageExchange.responseContentAsXml )
    // get the transaction id from the RequestHTTPHeader part of the file
    def nodegetID = holder.getNodeValue( "//data[1]/RequestHTTPHeader-data[1]/fsbTransactionId[1]" )
    // get the results from ResponseHTTPHeader up to ServiceInfo
    def nodeServiceInfo = holder.getNodeValue( "//data[1]/ResponseHTTPHeader-data[1]/ServiceInfo[1]" )
    log.info nodeServiceInfo
    // use a JsonSlurper to read out the content of ServiceInfo
    def jsonSlurper = new JsonSlurper()
    def parsedJson = jsonSlurper.parseText(nodeServiceInfo)
    log.info "ProviderID: " + parsedJson.ProviderID
    def getproviderID = parsedJson.ProviderID
    def getbackendTime = parsedJson.BackendTime
    def getapplicationID = parsedJson.ApplicationID
    // get the properties that were saved on testcase level
    def getID = context.expand( '${#TestCase#fsbTransId}' )
    def providerID = context.expand( '${#TestCase#providerID}' )
    def backendTime = context.expand( '${#TestCase#backendTime}' )
    def appId = context.expand( '${#TestCase#appId}' )
    assert nodegetID != null
    assert nodegetID == getID
    assert providerID != null
    assert providerID == getproviderID
    assert backendTime != null
    assert backendTime == getbackendTime
    assert appId != null
    assert appId == getapplicationID


    And for me, this does the job! 🙂 


    Happy testing  😉 !

22 Replies

  • richie's avatar
    Community Hero
    Hey AAB,

    In case the other lads/ladies don't come up with anything, I was wondering what your front-end automation skills are like, as this equates to pretty standard fare front-end automation... i.e. I'm thinking about doing it in Java, which ReadyAPI obviously supports.

    If you're not sure how to do it, you could record the screen actions using something like Selenium IDE or Katalon (the browser plugin, not the app), then export the recorded test as Java code. You've then got the basics: search a page for a link, select the link on the page and download a file.
    You could add some Groovy (or Java) to parse the file, find what you're looking for and assert against it. Obviously, in your testcase, before the Groovy step containing the extracted Java and Groovy, you'd need a GET request to retrieve the relevant page.

    I know it sounds a bit of a bodge, and I'm pretty sure you might need some front-end auto import declarations in your script etc., but if you have any front-end automation experience this shouldn't be too difficult. I'm fairly confident even I could do this, and your coding appears better than mine.

    Just a suggestion!

    • groovyguy's avatar
      Champion Level 1

      Another option might be using an "HTTP Request" test step to browse to and retrieve the file; I'm not 100% sure that'd work. If it does, you could then use Groovy to parse the text file from the HTTP response.
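      A rough, untested sketch of that idea; the step name "HTTP Request" is an assumption:

```groovy
// Untested sketch: an "HTTP Request" test step fetches the file, then a
// Groovy script step reads the raw body and searches it. The check itself
// is a plain function; the ReadyAPI wiring is shown in comments.
boolean containsMarker(String body, String marker) {
    body != null && body.contains(marker)
}

// In the Groovy step, roughly:
// def body = context.expand('${HTTP Request#Response}')
// assert containsMarker(body, "fsbTransactionId") : "marker not found in file"
```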

      • richie's avatar
        Community Hero
        An http request step? If that helps then great, but I always thought the http request step was very similar to the REST step... i.e. you're constrained to the http verb methods?
        If it works, can you post how you managed it? 'Cos I always like being educated!


    • AAB's avatar
      Regular Contributor


      Thanks buddy, I agree this could be done with recording tools, but Katalon gave me a headache as it was complaining about my Chrome version. I looked that up on the internet and I was not the only one. As a result, I gave up on that idea and posted my problem here 🙂

      Nevertheless, I'll give it a try next week. I'm working on different projects at the moment, so no time today. I'll come back when I have a solution either way.




    • AAB's avatar
      Regular Contributor

      Hello HimanshuTayal 


      Thanks for the link. I'll give it a try next week.