Forum Discussion

avs262
New Contributor
6 years ago

Desktop recorder

I'm interested in automating the test case creation process. TestComplete's Recorder is great, but instead of figuring out which test cases my team needs to build by interviewing end users and reading application documents, I'd like to install an agent on each end user's machine to record the actions they perform within our application on a daily basis. The recorded actions would generate a log, which we'd then convert into a scripted test case. Does anyone know of any tools/agents that can record actions performed within a desktop application (perhaps even identifying the application objects being interacted with)?

2 Replies

  • MulgraveTester
    Frequent Contributor

    Windows 10 has a built-in Steps Recorder. Just hit the Windows key and type "steps recorder".

  • AlexKaras
    Champion Level 3

    Just off the top of my head:


    > a log which in turn we'd convert into a scripted test 

    By definition, a test is a formal action that verifies that the actual result of certain predefined actions or events corresponds to the expected (required) one.


    With the above definition in mind:

    -- Can you guarantee that your users perform the required actions? Note that the question is not whether they perform actions correctly or incorrectly (that is a different problem), but whether they perform the actions that were required when the given functionality of the tested application was designed and implemented.

    -- Can you guarantee that all actions required for a certain test are executed via clicks and keypresses? For example, when using the calculator, the user can press the '2', '+' and '1' keys, note somewhere on paper that the result was '3', and continue with the next input. How are you going to deal with a case like this?

    -- Can you guarantee that your interpretation of the user's actions obtained from the recorder corresponds to the actual task the user was working on? And can you guarantee that the result you expect corresponds to the expectations of that user?

    -- Assuming 'blind' automation of recorded actions, will you be able to tell your management what functionality was tested, what functionality was not tested, why it was not tested, what your estimation of the risks is for both tested and untested functionality, and what your pass/fail criteria were for the automated actions?

    -- Assuming 'intellectual' automation, don't you think it will require more effort to analyse and generalise several logs, derive actions and expected results from them, and then spend time with the given end user explaining your findings and clarifying missed details, than it would to simply talk with them without any logging?

    -- Do you have an acceptable explanation for management and other stakeholders if they ask why you are checking different things, and checking them differently, than described in the application documents?