Forum Discussion

tristaanogre
Esteemed Contributor
13 years ago

StoryQ and TestComplete

I'm looking at figuring out a way to align functional requirements for an application in use in a regulated industry with the automated tests created within TestComplete.  It was suggested that I look at a BDD tool like StoryQ or SpecFlow.



Does anyone have any experience with using either of these two tools within TestComplete?  If so, I'd like to hear/see what you've done.

4 Replies

  • tristaanogre
    Esteemed Contributor
    I've actually sat in on a couple of webcasts for ALMComplete and, quite personally, I LOVE it...  the problem is getting buy-in from the company I'm working for (they're currently using TFS).



    I think StoryQ is more along the lines of documenting those kinds of story-related things internally to the test code rather than linking to an ALM system, but I haven't played with it yet.
  • I am also looking for TestComplete integration with BDD tools like SpecFlow. Is this linking possible?
  • AlexKaras
    Champion Level 3
    Later update:

    Oh, I did not notice that the original question from Robert dates back to 2011... :(

    I'll leave my reply here in the hope that it might still be useful for somebody...


    Hi Robert,



    What follows might or might not be related to the problem you are trying to solve, but maybe it will inspire others...



    Well, I am sure that all of you are aware of the usual problem with automated tests:

    -- Requirements exist for the application;

    -- Manual tests have been created and used for verification;

    -- Automated tests were created based on the manual tests;

    -- Some requirement was changed (but not necessarily documented), the change was implemented by Development, but it was not propagated to the test team;

    -- During testing, the manual tester figures out that the application's behaviour has changed. Eventually, he/she determines that this is not a problem but the result of the code modification;

    -- The corresponding manual test is corrected, but the test automators are not notified.

    -- Finally, we end up in a situation where we have:

      -- The tested application;

      -- Manual tests that do not necessarily correspond to the requirements;

      -- Automated tests that: a) do not necessarily correspond to the manual tests they were created from; and b) because of poor logging, make it hard to say what exactly a given automated test does.



    In order to make this problem more manageable, on my last several projects I have been trying the following approach:

    1) All interactions with UI elements in automated tests are done via library functions. These functions are responsible for making a log record of the interaction in language as close to human as possible. For example, the function responsible for interacting with a text field may log: "Enter 'myLogin' into the field labelled as 'Login'"; the function responsible for an option button control may log something like: "Select 'Visa' option for the 'Pay By' option group"; the function responsible for the menu may log: "Select 'File|Print' command from the main menu"; etc. A sketch of such wrappers follows.
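
    For example, such wrappers might look like this in VBScript (a sketch only; the routine names, parameters, and control types are my assumptions, not part of any TestComplete library):

        ' Hypothetical library wrappers: each performs one UI interaction
        ' and logs it in human-readable form before acting.
        Sub EnterText(oField, sLabel, sValue)
          Call Log.Message(aqString.Format( _
              "Enter '%s' into the field labelled as '%s'", sValue, sLabel))
          Call oField.SetText(sValue)
        End Sub

        Sub SelectMenuItem(oMainMenu, sPath)
          Call Log.Message(aqString.Format( _
              "Select '%s' command from the main menu", sPath))
          Call oMainMenu.Click(sPath)
        End Sub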



    2) Every test must have at least one Log.Checkpoint() line that clearly logs what was checked. For example, the code may look like this:



        ' Check whether the last row of the workflow audit trail table
        ' already shows the name of the final workflow step.
        If (aqObject.CompareProperty( _
            oWFAuditTrailTable.Cell(oWFAuditTrailTable.RowCount - 1, 0).contentText, _
            BuiltIn.cmpEqual, cLastWFStepName, False, BuiltIn.lmNone)) Then
          ' Post a human-readable checkpoint together with a screenshot.
          Call Log.Checkpoint(aqString.Format( _
              "End the test as the final '%s' workflow step was reached", _
              cLastWFStepName), cLastWFStepName, , , page.PagePicture)
          Exit Sub
        End If


    Note that the checkpoint must post the relevant screenshot. This provides a kind of evidence: the test code did not just log that the check succeeded, it gives you something that can be included in a report or used for further analysis of whether the check was performed correctly, or whether it checks something other than what the requirement assumed and therefore requires correction.



    3) No other logging should be done, or such logging must be easy to switch off. All supplementary information that may be useful for analysis must be logged as Extended information by the library functions that interact with the UI. Examples of such forbidden, useless logging (which is quite popular, though): "Button clicked OK"; "File was saved"; "Transaction completed"; etc.
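
    As a sketch of how this can look in VBScript (the wrapper name is hypothetical), the supplementary details can be passed in the AdditionalInformation argument of Log.Message(), so they end up in the Additional Information pane of the log rather than in the main message:

        ' Hypothetical wrapper: the main log line stays human-readable,
        ' while technical details go to the Additional Information pane.
        Sub ClickNamedButton(oButton, sCaption)
          Call Log.Message( _
              aqString.Format("Click '%s' button", sCaption), _
              "Object: " & oButton.FullName)
          Call oButton.Click
        End Sub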



    As a result of the above requirements, an automated test generates a log that:

    a) Can be exported, printed, and handed to a manual tester, who can then repeat all the actions performed by the automated test using the data from the given test run;

    b) Provides, through its checkpoints, a clear and easily identifiable list of what was checked by the given test;

    c) Because it has the form of manual test steps, makes it much easier to compare the test steps with the steps from the user story or requirement and to decide whether the automated test is still relevant or must be corrected because of changes in the tested application;

    d) After the automated test has been verified and approved, lets you archive the original manual test and use the test logs for manual testing. This may solve the problem of keeping manual and automated tests synchronized: if the manual tester finds that the application now behaves differently, he/she will report it to you, making you aware that the test must be corrected; and once the test is corrected and a new log is generated, the manual tester gets the updated manual test 'automatically'.



    Example of the log generated by the test:



    ...

    For the 'Tools:' combo-box control select the 'Create Project' option

    Click 'Go' button

    For the 'Project Name *' text control set the value to '20140702_1801_4fmLyT'a'LNk*!'

    For the 'Project # *' text control set the value to '123456789*12*15*'

    For the 'Workflow Template' combo-box control select the 'Sanity test WF' option

    Click 'Next' button

    Click 'Assign Team' form tab

    For the 'Project Owner' combo-box control select the 'TestUser14032014141729' option

    ...


    Hope the above helps you or someone else...