While working on some recent improvements for the Breakdowner, I started daydreaming (as you do when in full feature-development mode, which is one of the things I love most about this process/work ;) about some ideas for how we could make these tools even more awesome than they already are.
In case you haven't kept up with the latest developments in Blender, there are now two sets of tools with Breakdowner-like functionality:
1) The Pose Breakdowner (and the related "Push" and "Relax" tools)
2) The GPencil Interpolate and Interpolate Sequence operators
This post looks into what we could do for both of these - first common features, and then stuff that's more specific to the GPencil case (since it's newer functionality, which also has heaps more room for us to explore).
Oh, and just in case/before anyone gets confused, THE FOLLOWING CONTENTS OF THIS POST ARE ALL ENTIRELY HYPOTHETICAL/IDEAS ONLY. There's no guarantee that any of this will be implemented, though I'm curious how well it would turn out if I did :)
1) Common Feature: Interpolation Weights
In the beginning, the idea of the breakdowner came to me back in 2010, while standing in a bookstore leafing through a copy of Richard Williams' iconic book - The Animator's Survival Kit. (In hindsight, I'm kicking myself a bit that I didn't just buy that copy right then and there; same goes for the "Sintel" edition of 3D World Magazine, but I digress...)
I remember flipping through that inspiring book, looking at all the wonderful drawings, and feeling the energy and, I suppose, the "directness" that a 2D style of animation workflow allows the animator (minus, of course, the fear many a novice - myself included - feels about trying to keep their drawings "on model", so to speak; it's only years later that I recognise that it is exactly this ability to take things "off-model" that is the greatest strength of traditional 2D).
Anyway, one of the things I remember about the topic of breakdowns (and particularly with regard to secondary motion) is the notion that you want some parts of the body to "lag behind" the movements of the rest. Recently, this got me thinking: what if we could try and translate some of these ideas into our animation tools?
At the most basic level, we could just make certain bones/parts of strokes respond "slower" to the breakdowner's interpolation between the two surrounding keyframes. For example, we could have a new sculpt brush that, instead of painting thickness/strength/smoothness, paints "weight" - bones/stroke points with heavier weight (or maybe the other way around) take more effort to finally respond to the interpolation, while those with a lower weight respond more readily. I'm guessing that this could in theory allow for some nice drag effects for floppy ears/tails/hair, or maybe for some "stretchy" smears?
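To make the idea concrete, here's a minimal sketch of what weight-based "lag" could look like. Everything here (the Point type, the breakdown function, using factor ** weight as the easing) is my own made-up illustration, not real Blender code - the point is just that a painted per-point weight eases the interpolation factor, so heavier points lag behind lighter ones but still arrive at the end pose:

```python
# Hypothetical sketch: per-point "weight" easing for breakdowner-style
# interpolation. Point/breakdown are invented names for illustration.

from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float
    weight: float = 1.0  # 1.0 = normal response; higher = lags behind

def breakdown(pose_a, pose_b, factor):
    """Blend between two keyed poses.

    'factor' is the breakdowner slider value in [0, 1]. Each point's
    effective factor is factor ** weight, so heavy points start moving
    slowly but still reach the end pose when factor hits 1.0.
    """
    result = []
    for a, b in zip(pose_a, pose_b):
        eff = factor ** a.weight  # heavier weight => slower start
        result.append(Point(a.x + (b.x - a.x) * eff,
                            a.y + (b.y - a.y) * eff,
                            a.weight))
    return result
```

With this particular easing, a point with weight 3.0 is still at only ~12% of the way across when the slider is at 0.5, which is roughly the kind of drag effect I'm imagining for floppy appendages.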
Something more interesting (though more far-fetched for now) is that perhaps we could do something similar to help animators get the body mechanics they want/need, by painting the distribution of weights on a character for that shot (e.g. more weight in the tummy or the upper back, in the fingertips, and in the tip of the nose), which is then factored into the rig's IK/posing-time behaviour.

I suppose it's always been a bit of a pipe dream of mine to find a way to communicate the intents in a piece of animation to the computer more efficiently - things like being able to click on a region of the body, and then draw a stroke (or sequence of overlapping/refining strokes) which quickly captures: 1) the general Timing (or rather, the overall rhythm - fast-slow-slow-fast or slow-fast-slow), and 2) a Spatial or Force-based encapsulation of where that region needs to move, and the way in which it should approach that (e.g. the arc it should take, and the amount of "force/energy" with which it powers through certain parts).

It'd kind of be like "performance capture", in that the animator is performing the intended motion and the computer captures that as the basis of the final performance - with the key difference being that instead of viewing this capture as the "skeleton to be polished", it's more a set of "guiding intents" for guiding the computer to make the right choices when helping the animator pose the character (e.g. this may include some sort of automated "optimisation-based" approach to block out the intended movements). Anyway, that was a bit of "moonshot" territory there - in the past 3 years, we've finally started making progress on this front again, after a good 20-year hiatus in practically any research being carried out on this stuff AFAIK.
2a) GP Interpolation: Separate interpolation curves for X and Y (or some equivalent)
One thing that struck me when trying to make the "bouncing ball" demo with the interpolation tool was that maybe you don't really want the sketch blending along the X and Y axes at the same constant speed (e.g. maybe varying these independently might allow some kinds of interesting "squeezing" effects?)
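As a toy sketch of what "separate curves per axis" could mean in practice: blend each stroke point with a different easing function on X than on Y. The easing functions and names below are my own illustration, not any existing GPencil API:

```python
# Illustrative sketch: per-axis easing during stroke interpolation.
# Because X and Y move at different "speeds" through the in-betweens,
# the intermediate shapes get a squashed/squeezed look instead of
# translating at one constant rate.

def ease_in(t):
    return t * t                  # slow start, fast finish

def ease_out(t):
    return 1.0 - (1.0 - t) ** 2   # fast start, slow finish

def interpolate_stroke(pts_a, pts_b, t, ease_x=ease_in, ease_y=ease_out):
    """Blend point positions between two keyed strokes,
    easing X and Y independently."""
    tx, ty = ease_x(t), ease_y(t)
    return [(ax + (bx - ax) * tx, ay + (by - ay) * ty)
            for (ax, ay), (bx, by) in zip(pts_a, pts_b)]
```

Halfway through the blend, a point travelling from (0, 0) to (10, 10) would sit at (2.5, 7.5) - already most of the way there vertically, but lagging horizontally, which is the asymmetry a squash-and-stretch in-between wants.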
2b) GP Interpolation: Interpolate along a path
Another approach could be to allow the motion being interpolated between the keyframes (e.g. a vertical up-and-down bounce) to be combined with/controlled by a separate stroke (on a nominated layer). For example, maybe drawing a line across the screen would cause the ball to also travel horizontally as it descends...
Err... maybe scratch that... this may be too much engineering for a special case that animators wouldn't actually use in practice.
3) GP Interpolation: Paperman-style "attached/projected to surface"
Personally, these days I'm increasingly fond of an approach where more of the "smart action" takes place in the tools (i.e. user controlled, offline, easily updatable without breaking old files/work) vs in the evaluation engine (i.e. procedural/hard-baked into the file format + maintenance burden that imposes when adding new functionality).
In other words, the evaluation engine should be able to just take the saved data and display it (i.e. basically, Grease Pencil before layer-based parenting was introduced), without having to run some complex constraint systems + sims + interpolation calculations. Instead, all your interpolation stuff takes place in the tools/keyframing stage -- much like Anzovin Studio's recent work on "curve-with-non-hierarchical-points" rigs, where the point is that instead of the computer trying to interpolate things, it's the animator's job to explicitly specify how the motion should look, and they can get there any way they like - the role of the tools is to help smooth out the journey (e.g. helping them create temporary pose-time-only joint hierarchy relationships to carry out a particular set of point manipulations easily).
There are several key benefits here (which I'm sure many animators would actually agree with once explained... or maybe not):
1) Faster Playback - By making the evaluation engine "dumb" (i.e. just read the current pose info for frame x and apply it to the relevant places), less time is spent recalculating a whole bunch of stuff each time
2) Less File Breakage - When your animation is dependent on something that needs to be evaluated (e.g. some constraint), you have to: (a) hope the implementation of that constraint doesn't change, (b) hope the rig doesn't change
3) It's easier for us to add new features - New capabilities can be bolted on, and chosen to be used where they matter, without having to worry about whether said new feature plays nice with the 10 options/modes on the widgets, and the 100's of ways in which those options interact. In other words, "avoiding maintenance woes from combinatorial explosion"
Anyway, I was thinking about how "Reproject to Surface" could work well as an option for the Interpolation tool (or maybe it's the other way around), giving us an easy way to have strokes that "deform" with some underlying geometry. Done this way, it's a lot easier to do a one-time match-up between geometry and strokes (e.g. detect contact points, track contact points, apply relevant deformations to strokes while interpolating between keyframes) than if we had some modifier that calculates this stuff on the fly, and has to do so without knowing much about the preceding/following history.
Of course, this effort could be moot if we end up just implementing the "deforming modifiers" approach, where all tools would need to be made aware that they may be operating on a proxy version of the geometry with all deformations applied. While I guess that approach may be slightly nicer for artists (i.e. no need to remember to manually force a recalc), it does make several other things a lot harder (e.g. the following idea).
4) GP Editing: Multi-Frame Editing
This isn't strictly to do with the Breakdowner/Interpolation operators, but more to do with editing GP animation in general. Basically, currently there's a bit of a bottleneck with the GP workflow in that if you've got an animation sequence, and then you decide you want to modify bits of the linework (e.g. thickening parts, or changing the appearance of some details), you'll need to reapply all these changes across multiple frames manually.
AFAIK, one of the .jp anime community members out there has an addon with a tool that looks suspiciously like it might implement some functionality to propagate such changes across frames. I can't remember the link offhand, but I remember seeing it in a screenshot on Twitter a few months back. So, clearly something like this would be quite handy/useful to have.
Thinking about this problem, I realised that perhaps there is a role for adding a toggle that makes all sculpt tools operate over strokes in all frames, instead of just those on the active frame. Maybe we'd want to confine it to only what's visible via Onion Skinning (and only do all frames if no onion skins are visible, in which case a dedicated visualisation is used instead to help you see what you're doing). But, in theory, it's probably not that hard to get this working for the sculpt tools (IIRC, I'd just have to modify one function to make it feed more strokes to the brush callback for processing, and then you've got this feature). For the other editing tools though, it's probably a bit more of an overhead (though probably also manageable via the context iterators).
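The shape of that "one function" change might look something like the following. To be clear, Layer, Frame, and the callback wiring here are all stand-ins I've invented for illustration - this is not how the actual GPencil internals look - but it shows the key property: the brush callback stays untouched, and only the set of strokes fed to it is widened:

```python
# Sketch of a multi-frame sculpt toggle: widen the stroke set fed to
# the (unchanged) brush callback. Layer/Frame are invented stand-ins.

from dataclasses import dataclass, field

@dataclass
class Frame:
    strokes: list = field(default_factory=list)

@dataclass
class Layer:
    frames: list = field(default_factory=list)
    active_frame: Frame = None

def strokes_to_sculpt(layer, multiframe=False, onion_frames=None):
    """Yield the strokes a sculpt brush should operate on.

    multiframe off: only the active frame (the current behaviour).
    multiframe on:  every frame, or just those visible as onion skins
                    when 'onion_frames' is given.
    """
    if not multiframe:
        frames = [layer.active_frame]
    elif onion_frames:
        frames = [f for f in layer.frames if f in onion_frames]
    else:
        frames = layer.frames
    for frame in frames:
        yield from frame.strokes

def apply_brush(layer, brush_callback, multiframe=False):
    # One place to widen the stroke set; the brush itself is unchanged.
    for stroke in strokes_to_sculpt(layer, multiframe=multiframe):
        brush_callback(stroke)
```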
However, there is a big BUT here. Our ability to do this (and to do it efficiently) is only really possible if we can assume that the point positions as stored in the file are where the points will actually be. That is, these assumptions fall over when trying to do this with parented layers, since you'd have to recalculate parts of the scene to figure out what sort of transform correction is needed for each frame, in order to know where things are going to end up on different frames. (Speaking of which, I really need to check whether Onion Skinning actually behaves correctly now if we have a parented layer, and the actual parent moves around on the surrounding frames :/ If it doesn't, we can add that to the pile, along with the crashes you'll currently get when trying to paste strokes from one datablock to another due to missing palette colours....)