One thing of note on the "waste of time" part of documenting test cases: let's use your example of driving to work.
For you, it's natural. Heck, when I drive to work, sometimes I kind of go, mentally, on auto-pilot. I know where all the turns are, where all the exits are, etc. I know EXACTLY what to do every time.
HOWEVER... suppose I have a friend I want to have visit me at the office, and they've NEVER been here before. Even worse, they have never been in this part of the country EVER. If they want to get to my place of employment from my home town, I need to communicate how to get here. And, for that matter, I'd better be pretty clear on the directions, because in the 90-minute drive I have, there are a LOT of turns, exits, roads, etc. Not to mention contingencies like "if it's snowing, forget that route and take this alternate route". Somehow, that communication needs to happen. Even if we go techno-modern and tell them "just use the GPS, here's the address"... well, SOMEONE needed to input all that data into the GPS system in the first place, right?
Consider test plans, analysis, and so on from this perspective. They are the means of communicating to someone else how to do what you just did. The possibility exists for you to get hit by a bus tonight, and SOMEONE needs to know a) what you've already tested, b) what still needs to be tested, and c) how to go about doing so, because that SOMEONE has to ramp up pretty quickly. Even exploratory testing requires some sort of documentation to this effect. If I've spent 5 hours testing an application and someone comes up and says, "How far are you from being done?", what do you respond with? If you don't have a measurement point to say, "Well, out of 40 requirements, I've fully tested 29, partially tested 10, and have one yet to address", you're basically going to answer with "Um... I dunno".
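To make that "measurement point" idea concrete, here's a minimal sketch of the kind of tally I mean. The requirement IDs, the status labels, and the little progress_report function are all made up for the example (in real life this data usually lives in a test management tool or even a spreadsheet), but the point is that having SOME recorded status per requirement is what lets you answer the question at all:

```python
from collections import Counter

# Hypothetical tracking data: requirement ID -> current test status.
# In practice this would come from a test management tool or a spreadsheet.
requirement_status = {
    "REQ-001": "fully tested",
    "REQ-002": "partially tested",
    "REQ-003": "not started",
    # ... one entry per requirement
}

def progress_report(status_by_requirement):
    """Summarize how far testing has progressed across all requirements."""
    counts = Counter(status_by_requirement.values())
    total = len(status_by_requirement)
    return (f"Out of {total} requirements: "
            f"{counts['fully tested']} fully tested, "
            f"{counts['partially tested']} partially tested, "
            f"{counts['not started']} yet to address.")

print(progress_report(requirement_status))
```

With the example data above, that prints "Out of 3 requirements: 1 fully tested, 1 partially tested, 1 yet to address" - the same shape of answer as the 40-requirement example, just smaller.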
In Agile, there's also this idea of "the definition of done". You need some sort of metric to decide when done means done. If there's no analysis or measurement, then you can't define done accurately. Again, the level of documentation may differ from project to project or environment to environment, but somehow the definition needs to be made.
Let's go one step further... back to driving to work. I get a call after half an hour: "Dude, I'm like TOTALLY lost. There was an accident on the route and I got detoured and have NO idea where to go from here." My answer is, "What is the last road that you were on in the directions, and where did you go from there?" If my friend doesn't have that information, I'm just as lost as they are and have no real way of getting them to my office.
Test plans, test cases, test scenarios, and test scripts provide that necessary documentation to evaluate, "OK, something unexpected happened. How did we get here so we can figure out what went wrong?" Again, exploratory testing SHOULD include documenting what you did. If there's no documentation, how do you report the reproduction steps of the bug?
Too often, Agile testing becomes "ad-hoc"... just do it, do it fast, and get it done. I find that to be very dangerous. I do think that sometimes the amount of documentation that is done as part of the testing process is excessive. But I think throwing out the documentation part of the process entirely is just as dangerous.
One more thing: with software increasingly becoming integrated into all levels of society, there are legal implications when software goes wrong. Along with those legal implications comes regulation... PCI, Sarbanes-Oxley, HIPAA, etc. are just some examples. With those comes the necessity of being able to answer an audit. Some regulator comes by and says, "Show me how you are HIPAA compliant in your storage and access of patient records." If you have no documentation of how your tests verified that compliance... well...