Hmm. Not specifically.
But I know a couple of the people on here make extensive use of them:
https://smartbear.com/account/profile?userid=27549 (Ryan Moran)
and
https://smartbear.com/account/profile?userid=8817 (Jose Pita)
It came up in a thread a while back ...
http://smartbear.com/forums/f74/t91186/core-layout-change-effected-dom-most-items-are/#91215
And one of Jose's posts links to:
https://gist.github.com/jpita/9954138
Which outlines a lot of the helper functions he uses.
I know Ryan uses central functions to which he passes a top-level object and some identification properties; they then search the object tree below it for the object in question, using the supplied properties and a series of filters. But you need to exercise some level of control: if you search from the top of the tree for every single object, I can't see that being good for performance.
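To illustrate the concept (this is just a sketch, not Ryan's actual code or TestComplete's API — the real thing would use something like FindChild against live application objects), a property-based search down an object tree looks roughly like this:

```javascript
// Hypothetical sketch: walk a tree of objects from a given root,
// returning the first node whose properties all match the supplied
// identification properties. The tree shape and property names here
// are invented purely for illustration.
function findByProps(root, props) {
  // Does every requested property match on this node?
  var matches = Object.keys(props).every(function (key) {
    return root[key] === props[key];
  });
  if (matches) return root;
  // Otherwise recurse into the children, returning the first hit.
  var children = root.children || [];
  for (var i = 0; i < children.length; i++) {
    var found = findByProps(children[i], props);
    if (found) return found;
  }
  return null;
}

// Example tree standing in for a mapped application window.
var tree = {
  name: "MainWindow",
  children: [
    { name: "Toolbar", children: [] },
    { name: "Grid", children: [{ name: "SaveButton", type: "button" }] }
  ]
};

var btn = findByProps(tree, { name: "SaveButton", type: "button" });
```

The performance concern above is visible right in the shape of it: every call from the root re-walks the whole tree, which is why you want to pass the lowest sensible parent rather than always searching from the top.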
When you use such functions is another thing you have to factor in. You can simply search out objects on the fly during the run, or you can have a "setup" function which searches out all the objects the run will require in advance and stores them for use during the run. The second approach requires more work and forethought, but would probably have less of a performance impact during the actual test part of the run.
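As a rough sketch of that second, "setup in advance" approach (again invented names, and a trivial resolver standing in for whatever real object search you use):

```javascript
// Hypothetical sketch: resolve every object the run needs once, up
// front, cache them by a friendly alias, and have test steps pull
// from the cache instead of repeating the search mid-run.
function buildCache(resolve, specs) {
  var cache = {};
  for (var alias in specs) {
    // One search per object, paid for before the test steps start.
    cache[alias] = resolve(specs[alias]);
  }
  return function (alias) { return cache[alias]; };
}

// Usage with a trivial stand-in resolver (a real one would search
// the application's object tree using identification properties).
var objects = { save: { name: "SaveButton" }, grid: { name: "Grid" } };
var get = buildCache(
  function (spec) { return objects[spec.alias]; },
  { save: { alias: "save" }, grid: { alias: "grid" } }
);
// get("save") now returns the cached object with no further searching.
```

The trade-off mentioned above is the forethought: you have to know, and list, everything the run will need before it starts, and cached references can go stale if the application rebuilds those objects mid-run.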
I make some use of such things, but I also tend to use name maps (as you'll see from the thread I linked to above) unless something is hugely dynamic. In those cases, I have used helper functions, but they tend to be fairly application-specific (so far, mine have had to use combinations of properties of several sub-elements, which return a higher-level element, which is then stored and has further tests run against it and its child objects), so they wouldn't be much help to you beyond the concept I've just outlined.
Object identification in automated testing is always a hot potato and, as far as I'm concerned anyway, I've yet to find an answer I'd be happy to use in every single scenario.