Sunday, May 8, 2016

CHI2016 - Sketching Papers, and General Discussion of Interesting Research Directions

The annual CHI (Human-Computer Interaction) conference is on this week in San Jose. As one of the "big + important" conferences in Computer Science research, it's always worth keeping an eye on what's happening there to see whether anything interesting comes out of it. So, I duly started checking out the accepted papers, before stumbling across the "sketching" section.

My first thought was, "woah... they have a section on interfaces for sketching tools?!", followed quickly by, "I wonder if there's anything of interest there...?" It turns out that there are two papers here, both of which fall quite squarely into the frame of the type and style of research that I love doing most (i.e. the "fun stuff" I'm doing with Grease Pencil + Pose Sculpting/Sketching, vs the empirical work I currently do for my PhD).

So, what were these papers?
1)  "Skuid: Sketching Dynamic Illustrations Using the Principles of 2D Animation"
2)  "Storeoboard: Sketching Stereoscopic Storyboards"




1) Skuid
This paper introduces the idea of "motion amplifiers" - basically, a set of pre-built motion effects which can be applied to 2D sketches, acting as controllable "building blocks" for constructing simple looping animations on top of those objects.

Perhaps most interesting are the effects themselves: they're named after the classic animation principles handed down from Disney's Nine Old Men - e.g. "Squash and Stretch", "Anticipation", "Follow Through", etc. - and the authors have managed to encapsulate these in a series of algorithms for animating 2D objects, with parameters to control how strongly each one is applied.
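
To make that concrete, here's a minimal toy sketch - my own illustration, not code from the paper - of what one such parameterised effect might look like: squash-and-stretch expressed as a time-varying, volume-preserving scale over the points of a 2D sketch.

    import math

    def squash_and_stretch(points, t, amount=0.3, period=1.0):
        """Apply a looping squash/stretch to a list of (x, y) points.

        t      -- current time in seconds
        amount -- how strongly to push the effect (0 = none)
        period -- length of one loop, in seconds
        """
        # Oscillate between squashed and stretched over each loop
        phase = math.sin(2.0 * math.pi * t / period)
        sy = 1.0 + amount * phase   # vertical stretch factor
        sx = 1.0 / sy               # compensate horizontally to keep "volume" constant
        return [(x * sx, y * sy) for (x, y) in points]

The nice thing about framing effects this way is that each one is just a pure function of time with a couple of knobs - which, as I'll get to below, is exactly what makes layering/combining them possible.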

For me, this work is quite interesting on several levels:
      i) One of the things I'm really interested in is looking into new and faster ways to help artists to capture/express their ideas before they go away, and ideally, make it so that the computer can play a more collaborative role (i.e. entering into a feedback loop with the user to let them experiment with ideas and come up with even better work). 

    Currently, there's still a massive gap between having an idea, trying to "get it down" in some form (crude sketches on paper are often still the best for capturing the raw energy... video footage probably does a good job too), and then being able to see a preview of this on a computer. Ideally, we'd manage to hold on to some of the creative energy that currently gets lost just trying to translate the ideas in your head into a static/2D sketch - one that doesn't even come close to capturing most of what you were trying to do.

    For example, imagine drawing a rough line showing how you want some motion - say, a swinging arm - to move, and having the computer interpret the velocity + general arc of the line to infer an initial motion on the rig; furthermore, imagine that you can then tweak this motion with parameters to push certain aspects even further (e.g. how high or fast it goes, or the amount of delay in various parts of the motion). Something like the little sketch just below, perhaps.
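
    (A rough, back-of-the-envelope sketch of the first step of that idea - everything here is hypothetical, just to pin down what "interpret the velocity + arc" could mean: given a stroke recorded as (x, y, time) samples, estimate the speed along the arc so it can be replayed as keyframes.)

        import math

        def stroke_to_keyframes(samples, fps=24):
            """samples: list of (x, y, t) tuples recorded while drawing a stroke.
            Returns (frame, x, y, speed) keyframes, one per sample."""
            keys = []
            for i, (x, y, t) in enumerate(samples):
                if i == 0:
                    speed = 0.0
                else:
                    px, py, pt = samples[i - 1]
                    dist = math.hypot(x - px, y - py)
                    dt = max(t - pt, 1e-6)   # guard against duplicated timestamps
                    speed = dist / dt        # distance per second along the arc
                keys.append((round(t * fps), x, y, speed))
            return keys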

    Or, for another example, imagine being able to quickly dot out the main melody + rhythm of a song, and then scribble in the backing textures (i.e. rhythm of the accompaniment, density of sound, "color" of the sound - open/closed/melancholic/gritty/heavy+pounding/light+bouncy/robotic-strict/etc.). This raw input would be kept around for later, so that the composer can refer back to these notes as a kind of inspiration/roadmap of what they were trying to achieve... and the computer would quickly come up with suggestions of stuff to fit that outline, so that the user can start previewing what they've got. (Again, a rough sketch of what I mean follows below.)
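
    (A purely illustrative data model for the music version - all names here are hypothetical: keep the raw "dots and scribbles" around as-is, and have the computer match the scribbled texture tags against a library of candidate backing patterns.)

        # The raw input is kept around for the composer to refer back to later
        melody = [(0.0, 'C4'), (0.5, 'E4'), (1.0, 'G4'), (2.0, 'E4')]   # (beat, pitch) dots
        texture = {'density': 'sparse', 'color': ['open', 'light+bouncy']}

        def suggest(patterns, texture):
            """Return library patterns whose tags overlap the scribbled texture."""
            wanted = set(texture['color'])
            return [p for p in patterns if wanted & set(p['tags'])]

        library = [
            {'name': 'arpeggio_A', 'tags': ['open', 'light+bouncy']},
            {'name': 'drone_low',  'tags': ['heavy+pounding', 'gritty']},
        ]
        print(suggest(library, texture))   # -> [{'name': 'arpeggio_A', ...}]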

   (If anyone is interested in seeing either of these two ideas become a reality, I'd love to hear from you - especially if you'd like to help fund or work on these... While it's all really pie-in-the-sky dreams atm, I'll admit that I'm actually quite tempted to try to set up a little research institute here and start working on this sort of stuff full time sometime next year, once my PhD studies are over. Then again, the big question for getting something like that off the ground is finding sufficient financing to make it all work out ;) )

   Anyways, this paper is interesting for this purpose as it tries to define a toolset that allows animators to describe and manipulate their sketches using a lot of the vocabulary and concepts they already use to think about their work, and provides these as tools that they can then use. In certain senses, these are basically my "breakdowner" tools on steroids, if you think about it :)  (And in a way, it's pretty much what I was trying to get at with the "push" and "pull" tools, which are based on the breakdowner technology.)


      ii) The work in this paper is also interesting for what it does in defining this concept of parameterised motion construction (which can be layered and combined in different ways to create new and more complicated effects). In other words, it's a really f***ing awesome reference/case study that I can shove into a chapter of my thesis, where I basically try to outline to the HCI community at large what they've been missing when it comes to <redacted> - all because it seems like everyone's just been stuck in the mindset passed down from generations of design curriculums.  (* Note: I've "redacted" the name of the thing here, to hopefully make it a little harder to join the dots on what I'm referring to... I'd still like to publish a paper on this, ok... thank you!)
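
      To show why I find the "layered and combined" part so useful as a case study: if each amplifier is just a parameterised function of (points, t), as in the earlier squash_and_stretch() toy, then layering effects reduces to plain function composition. Again, this is my own sketch, not the paper's implementation:

          def layer(*effects):
              """Combine several motion effects into one, applied in order."""
              def combined(points, t):
                  for effect in effects:
                      points = effect(points, t)
                  return points
              return combined

          # e.g. a quick, subtle wobble riding on top of a slower, stronger one:
          # wobble = layer(lambda p, t: squash_and_stretch(p, t, amount=0.3, period=2.0),
          #                lambda p, t: squash_and_stretch(p, t, amount=0.1, period=0.5))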


-------------------------

2) Storeoboard
This paper is in my sights for different reasons: basically, it presents an "alternative approach" to one of the many things that Grease Pencil does - freehand storyboarding, with the ability to preview stereo/depth/camera effects.

I'll get the two main points out of the way first:
   i) Yes, I was in fact initially a bit grumpy/disappointed to see that they had not cited my paper.  Well... "a bit grumpy" may be understating it a bit... ;)   But upon reflection, I realised that the paper deadline for CHI was back in September last year (yes, for those unfamiliar with academia, there's basically a 6-12 month lead time on stuff getting published... the work itself may have happened anywhere from 1-2 years prior to that, or even longer if the authors had a backlog of writing to do). My paper was only really accepted around that time (i.e. about 2-3 weeks before the CHI paper deadline - which is basically crunch time for nearly all the research teams and grad students submitting to that conference), and wouldn't become publicly available until about 2 months later - pretty much after the reviewer feedback would have come through, or have been due (in the internal peer review process). In other words, it's probably understandable that they didn't pick up on it yet...

   That said, any future papers which cite the Storeoboard paper had better be prepared to cite mine too :P  (#shamelessplug)   Casual googling should hopefully now at least get both of these to show up together in the search results, so there shouldn't really be any excuse of "not knowing" about other related work... hehehe  :D

   ii) From what I gather from the paper/video, it feels like the Grease Pencil approach should ultimately offer more flexibility + power, while Storeoboard may currently benefit from being a bit closer to existing 2D workflows (i.e. the tool is optimised for 2D drawing, vs GP being one of several tools that Blender has for different workflows).   BUT, that's not really something that a bit of custom Blender-UI skinning magic (aka "Blender 2.8 Workflow Project" --> dedicated GPencil storyboarding edition) won't be able to take care of  ;)
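
   Just to give a flavour of what that "skinning magic" could look like: an untested, minimal sketch of a Blender add-on panel (2.7x-era Python API assumed, operator names from memory) that puts just the storyboarding essentials - draw, erase, flip frames - front and centre in the tool shelf.

       import bpy

       class StoryboardPanel(bpy.types.Panel):
           """Tool shelf tab exposing only the storyboarding essentials."""
           bl_label = "Storyboard"
           bl_space_type = 'VIEW_3D'
           bl_region_type = 'TOOLS'
           bl_category = "Storyboard"

           def draw(self, context):
               layout = self.layout
               layout.operator("gpencil.draw", text="Draw").mode = 'DRAW'
               layout.operator("gpencil.draw", text="Erase").mode = 'ERASER'
               layout.operator("screen.frame_offset", text="Next Frame").delta = 1

       def register():
           bpy.utils.register_class(StoryboardPanel)

       if __name__ == "__main__":
           register()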


