Hi,
Just off the top of my head:
> a log which in turn we'd convert into a scripted test
By definition, a test is a formal action to verify that the actual result of certain predefined actions or events corresponds to the expected one.
With the above definition in mind:
-- Can you guarantee that your users perform the required actions? Note, the question is not whether their actions are correct or incorrect (that is a separate problem), but whether they perform the actions that were required when the given functionality of the tested application was designed and implemented.
-- Can you guarantee that all actions required for a certain test are executed via clicks and keypresses? For example, when using the calculator, the user may press the '2', '+' and '1' keys, then note somewhere on paper that the result was '3' and continue with the next input. How are you going to deal with cases like this?
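To make that calculator example concrete, here is a minimal sketch (the log format and the toy Calculator class are my own assumptions, not any real recorder's output) of why a recorded event log is not yet a test: the user's verification step never appears in it.

```python
# Hypothetical example: a recorder captures only UI events, not the
# user's mental verification step ("the result was 3").
recorded_log = [
    {"event": "keypress", "key": "2"},
    {"event": "keypress", "key": "+"},
    {"event": "keypress", "key": "1"},
    {"event": "keypress", "key": "="},
    # The user checked the display and wrote '3' on paper --
    # nothing about that check appears in the log.
]

class Calculator:
    """Toy stand-in for the application under test."""
    def __init__(self):
        self.buffer = ""

    def press(self, key):
        if key == "=":
            self.buffer = str(eval(self.buffer))  # toy arithmetic only
        else:
            self.buffer += key

def replay(log, calculator):
    """Blindly replay recorded keypresses against the application."""
    for entry in log:
        if entry["event"] == "keypress":
            calculator.press(entry["key"])

calc = Calculator()
replay(recorded_log, calc)
# The replay finishes without error whether the display shows 3 or 42:
# the log contains no expected result, so there is nothing to assert.
print(calc.buffer)
```

The point is that `replay` can only detect crashes; the pass/fail criterion lived in the user's head and on the paper, outside anything the recorder could capture.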
-- Can you guarantee that your interpretation of the user's actions obtained from the recorder corresponds to the actual task the user had been working on? And can you guarantee that the result you expect corresponds to the expectations of this user?
-- Assuming 'blind' automation of recorded actions, will you be able to tell your management what functionality was tested, what functionality was not tested and why, what your estimate of the risks is for both the tested and the untested functionality, and what your pass/fail criteria for the automated actions were?
-- Assuming 'intellectual' automation, don't you think it will require more effort to analyse and generalize several logs, derive actions and expected results from them, and then spend time with the given end user explaining your findings and clarifying missed details, than simply to talk with that user without any logging?
-- Do you have an acceptable explanation for management and other stakeholders if they ask why you check different things, and check them differently, than described in the application documents?