(On some days, you have to wonder whether everyone in the field has been missing the same thing.)
Over the past few months, I've stumbled across a few papers (some for animation tools, and now some for my research in HCI) which have essentially validated what I've believed for quite a long time in each case: that my intuitions about which projects and approaches we researchers should really be directing our effort into are not only sound (that is, when these ideas are actually developed to completion, they provide significant benefits to the users who really need them), but also actually achievable (albeit after doing a few backflips with some complex techniques that we aren't using yet) rather than flops. Heck, after going through a patch where I thought my intuitions on these things might be off (e.g. thumb RSI from texting, or drag-and-drop being evil in certain cases), it turns out I was probably just a few years ahead of the point where people would eventually figure out that these things were a bit dodgy (e.g. the feedback needed on certain aspects of the Blender 2.5 animation system).
This evening, I came across one of these papers, which finally provided some of the supporting evidence needed to validate what I contend is one of two serious, nay critical, "extinction-event" weaknesses/blindspots that the field has collectively had in its approaches to what I'm currently working on (i.e. highlighting techniques). As I once had to put it:
"We are like cavemen stumbling around in the dark" when it comes to our approaches to these things. Relying on trial and error, designer "intuition", lengthy/costly and dodgy cycles of "user testing" (in some cases consisting of n=2: the developer plus his buddy in the next cubicle), and making designers interpret lawyer-ish nests of "design guidelines" (no matter how nicely phrased each of those may be individually) is complete and utter rubbish. Just ask anyone who's ever worked on a design team and ended up in a heated argument with the other members about which design is better, where each side musters up a bunch of their favourite justifications from whatever psychology they can draw on, yet no one can reach an agreement or appeal to an objective arbiter of the "ground truth" of what's going on. "The current situation is perfectly fine... here's another list of design guidelines for you to mull over" is a valid thing to say... yeah right! (TM - Tui billboard) ;)
I don't completely agree with one of the paper's two main conclusions (as stated in the abstract). IMO, they reached that conclusion because they didn't go far enough: they didn't use tech that appears to be rather foreign to the field (at least from what I've seen), coupled with other existing stuff and hooked up in a way that I've been dreaming of since first stumbling across some of those attempts last year. Of course, I may eventually spot their reasoning after re-reading the paper a few more times, but from what I've seen so far, there isn't really enough there to justify that conclusion and bail out so early. There's just "one more step" that, once taken with the right combination of pieces, will work.
In any case, now I can reinstate the point I had earlier decided to de-emphasize, since it hadn't seemed likely to be easily defensible. Mwahaha... with this, I can now properly punch its lights out...
Phew! That was great to blast out... Now, back to death-marching this bloody bloated draft to completion. Just... argh, still too many ... sections to wrap up.