We use extensions in a lot of our test cases as well... however, we don't use them during record time. We add calls to the extensions at design time, when writing the test code.
Truthfully, 99.9999% of the time, when you "record" a test, you're going to have to go back and make modifications anyway to a) improve timing, b) add loops and searches, and c) parameterize data entry, etc. You're going to be editing the test case anyway, so you might as well shift the record-time script extensions to design time.
Additionally, we've found that recording is good primarily for research... detecting what the general workflow is in a particular set of actions, doing some initial name mapping, etc. Recording a test case from beginning to end rarely works in more complicated applications, so we don't really bother. Record chunks and segments, add them in, modify, lather, rinse, repeat.
While I'm with you that I don't generally like deprecation (for example, see my comment concerning the removal of the object checkpoint type), consider your own application under test. There are features, functions, bells, whistles, etc., that probably get added and removed all the time that your own end users don't like... and yet you still do it because the product roadmap you're aiming for requires the changes.
I can't speak for the decision processes at SmartBear. I don't like the object checkpoint deprecation. But I have some useful workarounds that allow me to continue to function and still create tests the way I need to. So, while record-time extensions are gone, design time still exists, and that's where I find the most use anyway.