Saturday, April 14, 2018

Future of Animation Tools - Some WIP Thoughts and Ideas (+ the Role of Pose Sculpting in All This)

As a follow-up to my earlier brief discussion about the general direction for Motion Path editing in Blender, I figured that it's probably time to outline my longer-term vision for the future of the animation tools (mainly in Blender, but also in the wider animation industry at large) in a bit more detail, to give some more context/motivation for my earlier comments.



I had originally intended to release this all a bit later once I had a few more key pieces in place, but, inspired by Raf Anzovin's enlightening blog, Daniel M Lara's Easy Rigging stuff, the exciting developments in the VR-based animation space, and also the 2.8 code quest, I decided to put this out earlier.

In this article, I'm going to try to "throw the ball further" (as Emirates Team NZ would say) and take a more comprehensive view of where we need to be going in the medium to long term, as well as what steps we can take now.

(NOTE: I originally started writing this article in NZ, but in between preparing a new laptop for travel, and travelling to/settling in Amsterdam for Code Quest, I've only had time to really work on finishing this in the past day or so).

1) Ground Rules - What are some of the big issues we face?
First up, what exactly is "wrong" with the current state of animation tools? In no particular order, here are a few:
  1. "Production Quality" rigs are hideously complex. "BlenRig" (by JP-Bouza and used in all the recent Open Movie projects) is a prime example of this (sorry JP, no offense intended). Some of this is perhaps a symptom that there's stuff that the core system isn't doing (otherwise, you wouldn't have needed to chain up a whole bunch of bones to do it), while others actually call into question our assumptions about what a rig should be, and how we should actually be managing cases where rigs effectively have different modes. (In this regard, the more I learn about what people in stop motion are doing, the more I realise that we may be over-complicating some things). Some of the big conflicts we have here are:
    • The tradeoff between having a rig that is "easy to keep on model + quick blocking" vs the "need for applying fine tuning" - On one hand, animators want to be able to have fine control over the final shape of everything (to get the perfect pose/silhouettes/shapes), while on the other hand, you don't want to have to work with too many controls straight out of the box (or else, you'd go slowly mad).
    • The need for different configurations of the rig, to suit animation in different setups. For example, this includes stuff like IK/FK switching, pivots/parents, and even the directionality of chains. As Raf pointed out recently, all of this imposes a layer of additional cognitive overhead in trying to figure out how to get the pose you want, vs just going in and manipulating the character into shape.
    • Technical issues solving all the constraints and counter-correcting rotations/etc. to get the desired poses complicate matters, when the user may only be thinking about adjusting a set of points. On sufficiently complicated rigs, it can be nightmarish trying to fight the constraints to give you the desired pose.
    • Viewport clutter + Complexity of Learning New Rigs - Learning to work with the rigs for a particular show may often end up being like learning a whole other program on top of the basic animation software. In contrast, 2D artists can effectively learn to draw once, and then, they can draw whatever they want, using just the basic tools they have.
  2. Animating long rope/string/hair strands. It's well known that it's hard to animate long stringy things. Even Pixar have struggled with this (e.g. there's an old quote out there about how Victor Navone had to counter-animate the bones controlling the hose of a vacuum cleaner in one of the old WALL-E ads; then, more recently, they had a team working on creating a new spline rig that was used for Hank's tentacles in Finding Dory).
  3. Animating physical contact situations (character <-> prop interactions, character <-> character touch/interactions), and particularly when ropes/strings/hair are involved.
    • Character prop interactions in Blender are somewhat complicated by the Object/Armature split, which causes challenges when animating props that need to be in contact with different characters at different times. Often, TDs currently have to resort to dumping the rig for every single prop a character may ever encounter during the project into the main character rig object, further complicating and weighing things down.
    • For the ultimate test of hand/finger interaction with a length of string, I've dreamed up a little short film. See here for details.
    • Visualising and manipulating how motions play out over time. A lot of our animation tools fall into two categories right now - 1) On-character pose blocking, and 2) Abstract F-Curve tweaking. There's clearly room for improvement here, especially given the vast increases in computing power since these ideas were first conceived. Are there ways to better help animators capture the energy and timing that they're trying to convey? For example,
      1. Perhaps we could have tools to reflow timing / edit multiple frames simultaneously / etc. However, to implement these tools, we would need to be able to evaluate (and ideally cache) the scene's state across many frames at once - see the notes on caching and multi-frame editing later in this post.
      2. Applying "other motion sources" (e.g. physics sims) to an existing animation, to add some extra detail/depth. See physics sims notes below.
      3. Cleaning up and "enhancing" motion capture animation.
      4. Perhaps we could have an even more experimental approach, where we instead focus on allowing animators to define the motions they are trying to achieve - i.e. the paths traced out, speed/acceleration and direction of movement, and the overall "scope + energy" of the movements. For instance, this could work well if we made this a two-step process, where
        1. Animators first define the poses they want to hit (as they do now), using whatever precision plotting/placement tools work best
        2. Then, we use a mocap-like approach (e.g. similar to the "performance-based timing" tools that were demonstrated at BConf a few years back) to let animators craft the transitions in an easier and faster manner.
    • Interactive or Event-driven Animation. Truth be told, I've always been more focussed on/interested in character animation (and particularly prefer stuff that doesn't have "too many legs (TM)" or isn't too horribly slimy/grotesque). That said, after working on some interactive/event-driven stuff for my thesis, where it would've been really handy to have used Blender as a frontend editor to create mockups, I have since come around on the idea of investigating how we can build in some interactive/event-driven features. Examples of things we could do include:
      • Defining "action transition graphs" or things like that where you can define a node graph describing how different actions can be linked to others. With suitable tool support, you can then test how different clusters of animations flow together and interact with each other as they would in a game engine/simulation, allowing you to check how well everything fits together.
      • Letting such ATG graphs be used to power crowd/particle simulation behaviour, and for other things too where you want to again work in a more declarative way (e.g. demolition/fracture/etc. physics sims, where you can build in behaviours that should be triggered under different conditions)
    • Taking Advantage of Physics Sims - Making Sims Art Directable, or Is There Another Way?  Anyone who has worked with physics sims will know what a pain they can be sometimes, while also being thankful that they exist to make some otherwise tricky stuff easier to manage (e.g. wind interactions). Traditionally, the approach has been to try to make the sims more controllable by adding constraints to them. However, is it possible that we can take another approach, and instead use physics sims as tools that we can layer/paint/stencil-onto our work (e.g. use a Pose Sculpting brush to "paint" animation onto the character over time), to keep the best of both worlds? This way, the control is ultimately in the hands of the artist.
    • A focus on creating a simpler approach for animators to manipulate their characters, with more of the freedom that 2D animators have (e.g. just block out whatever shapes you need by drawing/placing them), while keeping the benefits of 3D software (keeping things on model, helping to manipulate things into place when you can't draw them for whatever reason, and dealing with all the fine details (e.g. ruffles/buttons/links/etc. joining costumes together)).
    • Working on scenes with lots of characters -  Regardless of the medium/software, it's always tricky when you've got multiple characters in a scene that all need to be managed. Nevertheless, general Blender 2.8 development should be taking care of this part already.
    • Deformation quality and/or the need for corrective shapes - Judging from the depth of research in these areas, there is no shortage of work being done to try to reduce the problems riggers face when trying to get things to deform in a plausible and aesthetically pleasing manner. While I'm personally not too interested/involved in working on these issues, we often hear of people exploring new approaches here, so hopefully there will soon be further news about some of the ideas floating around.
      • Improving standard bone-skinning (especially around things like elbows).  Things like the Bone Glow and Implicit Skinning approaches easily come to mind here, as this is a problem commonly tackled by academic researchers.
      • Finding new ways to rig fat/blobby/flexible "mass deforming" volumes - The Mesh Deform Modifier approach is already an important part of many fancy rigs, but things could still be a lot simpler. For instance, the mesh deform modifier still ends up needing a whole bunch of helper bones (i.e. one per control point of the deform cage you used). What if we could deform rigs without needing those mesh deform cages?
      • "Muscle" Systems - TBH, I *still* don't really know what exactly people mean when they say they want a "muscle system". What exactly does that entail? I mean, are you after a simulation of skin/flesh volumes moving over some solid/hard/fixed shell (e.g. face shapes sliding over the skull), or is it all about physics sims adding little bits of jiggle whenever things change?
      • Non-Hookean deforms, etc. -  <insert latest buzzword/super-solution here> :P
    • Animation Layering and/or Motion Cleanup - From talking with some animators who do a lot of motion capture cleanup work, it appears that "animation layers" functionality is used a lot for taking the raw captured data and applying a layer of customisations/polish on top of it. There are quite a lot of interesting opportunities and challenges here with regard to developing more advanced tools for managing and/or cleaning up such dense motion-captured data too.
      • Note: For Blender, it was always intended that the NLA system would act as a bit of an Animation Layering system. What's currently missing to support this is some internal code to keep track of all the channels that are currently being animated, what values they have, and a way to set/control what the default values for these channels will be (i.e. when they are not being animated). Since this is such a frequently asked question/request, I thought I'd address this up-front here.
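
To make that "default values" requirement concrete, below is a minimal sketch of why layered evaluation needs this bookkeeping. All names and the data layout are hypothetical - this is not Blender's actual NLA code:

```python
# Minimal sketch: a channel keyed on one layer but not on the layers
# below must still resolve to a well-defined rest value before any
# influence-blending can happen.

def evaluate_layers(layers, defaults, frame):
    """layers: bottom-to-top list of (influence, {channel: eval_fn}).
    defaults: {channel: rest_value} for every animated channel."""
    result = dict(defaults)  # every channel starts at its rest value
    for influence, channels in layers:
        for channel, evaluate in channels.items():
            # Blend this layer's value over whatever lies underneath.
            result[channel] += influence * (evaluate(frame) - result[channel])
    return result

# e.g. a mocap base layer plus a 50%-influence polish layer that only
# touches the head; the arm still resolves via the base layer + defaults.
base = (1.0, {"head.rot": lambda f: 0.25, "arm.rot": lambda f: 1.5})
polish = (0.5, {"head.rot": lambda f: 0.75})
print(evaluate_layers([base, polish], {"head.rot": 0.0, "arm.rot": 0.0}, frame=10))
# -> {'head.rot': 0.5, 'arm.rot': 1.5}
```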
    With these out of the way, let's look at some of these things in a bit more detail, along with my current thoughts on each.


    2) The Role of the Rig
    As mentioned above, I'm increasingly leaning towards pursuing/pushing the Blender rig style towards some of the ideas being presented by Raf. Specifically, the key takeaways from this are:
    • Undirected or Bidirectional Bones, without a fixed hierarchy baked into the rig. Instead, everything becomes focussed on just transforming points (i.e. joint positions), which is generally simpler to work with.
    • Allowing riggers to define multiple "manipulation convenience modes" - dynamic hierarchies/groupings/selections to accomplish a lot of the things like IK-FK switching / switchable pivot points / etc. that seem to be a major pain point. From a rigger's perspective, they might still be defining heaps of crazy setups and so forth for giving animators different control schemes. The difference, though, is that less of this is baked into the evaluation of the rig (i.e. not all of these components need to get evaluated on each animation step/update), with the calculations for different rig components only really happening when you go to edit. (The only challenge is optimising all this enough that interactivity while editing doesn't suffer too much - a toy sketch of the dynamic-hierarchy idea follows this list.)
    • Reducing the importance / hold that "interpolation" has on the way we approach things - While we probably cannot do without it entirely (i.e. even things like the breakdowner are still very much dependent on the endpoints being able to interpolate nicely, in order to choose nice in-betweens), there's a lot to be said about how our need to always be able to interpolate from one state to another causes a lot of headaches.
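
As a toy illustration of the "no baked hierarchy" idea (all names hypothetical), directionality can be derived on the fly from whichever joint the animator grabs, rather than being a fixed property of the rig:

```python
# Joints are bare points plus connectivity; a temporary parent/child
# ordering is built outwards from the grabbed joint at edit time.

def chain_from(grabbed, neighbours):
    """neighbours: {joint: set of connected joints}.
    Returns a {child: parent} mapping rooted at the grabbed joint."""
    parents, stack, seen = {}, [grabbed], {grabbed}
    while stack:
        joint = stack.pop()
        for other in neighbours[joint]:
            if other not in seen:
                parents[other] = joint  # directionality decided at edit time
                seen.add(other)
                stack.append(other)
    return parents

# Grabbing the hand makes the elbow its child; grabbing the shoulder
# flips the very same chain the other way - no IK/FK mode switch needed.
arm = {"shoulder": {"elbow"}, "elbow": {"shoulder", "hand"}, "hand": {"elbow"}}
print(chain_from("hand", arm))      # -> {'elbow': 'hand', 'shoulder': 'elbow'}
print(chain_from("shoulder", arm))  # -> {'elbow': 'shoulder', 'hand': 'elbow'}
```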
    There are also a few other related things I've been thinking about a lot recently. In particular, I had some epiphanies while watching Adam Savage's series about the work the folk at Aardman have been doing on Early Man, and how similar approaches/mindsets may be useful to port back to CGI:
    • "Mix-Ins" for Rigs - That is, in a shot file, you can mix and match different prebuilt "rig components" into your rig to customise the functionality you have available in a particular shot file. (To be clear, these "rig components" can either be rig elements that are bundled with the default Blender distribution - much like Rigify - or perhaps they could be assets provided by the wider Blender community, or they may also be shot/show-specific rigs that were designed by a production's rigging team to deal with certain character/shot specific requirements).
      • IIRC, I mentioned something like this a few years ago in one of these posts (or maybe in a reply to someone's comment). The idea came about in response to the challenges with sharing props between characters, when I started pondering the viability of allowing artists to dynamically "fuse" several different rigs within a shot file as needed (e.g. Frank and the rope rig from Cosmos Laundromat could've been done this way instead).
      • Thinking more about this now, this could also be useful for allowing riggers to provide different configurations for different shot types in a more modular way - in effect, replacing the current mindset of "the rig always contains all the modes for all different situations" with a more freeform "we can easily switch out the arm/spine/face rig for something that works better for this specific shot".
    • Quick "On-Model" Posing/Blocking vs Fine-Grained Control over Final Shape - Closely related to the previous point is the tension between having the rig to allow animators to quickly block out poses (and having the results stay "on-model"), versus the need that animators often have of being able to have fine grained control over the shape of the deformed meshes (necessary for achieving particular silhouettes and shapes to satisfy particular aesthetic requirements).
      • On current rigs, this is often achieved by having multiple tiers of controls - larger/rougher controls that provide the broad/coarse control needed to quickly block out a shot, and then a network of smaller controls (often approaching grid-like coverage of the geometry in places) providing animators with the ability to tweak the shapes with a high degree of control (while still not having direct control over specific vertices).
      • On the Aardman films, these tradeoffs were managed by largely building the puppets from relatively rigid latex parts for the less-flexible-but-quickly-poseable parts (e.g. arms, legs, torsos), and using more malleable plasticine on all the places where animators need fine control (e.g. eyebrows, around mouths, and in one shot, the entire torso), with the understanding that although animators would take longer to work on those shots, they would also have a much higher level of control/flexibility to do what they needed to.
      • Extrapolating from these ideas, we could make it easier for animators to achieve fine control over rigs by allowing direct manipulation of the rig geometry in certain areas. We could make such rigs intuitive by defaulting to a sculpt-focussed setup with no visible controllers. That is, to pose the characters, animators use a sculpt-based approach on the character geometry; then, using face maps and/or more geometry/mesh-specific sculpting tools, the more malleable regions can be hand sculpted with a greater degree of control.

    3) Evaluation and Editing
    In the past, I've always been somewhat against the use of evaluation caches. My main concerns about these center around the memory requirements (i.e. lots) and the overall code complexity (i.e. you have to keep these things in sync and/or manage invalidation events, etc.). Both of these are known to be fairly tricky technical problems to get right (and to a certain degree, can never really be "perfect"). That said, my position is evolving. As Pixar (with Presto) and DreamWorks (with Premo) have demonstrated, it is probably necessary and even beneficial to now be caching geometry, allowing for faster playback without the hassle of trying to evaluate all the geometry in real time, and without worrying about low performance/framerates due to all the constant duplicate recalculation going on.

    What sort of benefits would caching stuff on each frame have? Well, judging by how much easier it often is to edit Grease Pencil data (well, before a lot of the layer parenting and object transform stuff was added), and also the growing popularity of geometry caches (e.g. Alembic) in big production houses, it's reasonable to assume that caching the animated rigs/geometry would make a lot of currently-hard things much easier. For instance, the following suddenly become feasible:
      • Multi-Frame Editing of armatures (just like with Grease Pencil) - Maybe there'd still be some complications, but it'd be a lot easier than now
      • Onion Skinning for animated meshes - This becomes feasible because we'd no longer get bogged down evaluating n+1 frames for each frame drawn, greatly simplifying/speeding up the whole process! Of course, there are physical limits to how much performance we could get (since we're still rendering n+1 copies of the geometry), but all of a sudden, it becomes possible to do!
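
As a concrete illustration, here's a minimal sketch of the general shape of such a per-frame cache (the class and method names are made up; Presto/Premo internals are surely far more involved):

```python
# Evaluated geometry is stored per frame; edits invalidate the cache
# from the edited frame onwards (compare how Presto recaches the
# timeline outwards from the current frame after a change).

class FrameCache:
    def __init__(self, evaluate):
        self._evaluate = evaluate  # frame -> fully evaluated geometry
        self._frames = {}

    def geometry(self, frame):
        # Playback and editing tools pull from here, instead of
        # re-evaluating the whole rig every time.
        if frame not in self._frames:
            self._frames[frame] = self._evaluate(frame)
        return self._frames[frame]

    def invalidate_from(self, frame):
        # A real system would also have to handle interpolation
        # reaching backwards across the edited frame.
        for f in [f for f in self._frames if f >= frame]:
            del self._frames[f]
```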

    This note leads us nicely into our next big area...

    4) Editing Motion Across Time
    By and large, animation tools today are focussed around creating and editing a single frame's state at a time. This is largely a consequence of the hardware limitations in place when most of the major software packages used in the industry were first created. However, there have been massive advancements over the past few decades in raw computing power and availability, meaning that it's time to re-evaluate whether our tools could be doing more for animators!

    First, let's remember that animation is all about motion - or change over time. However, our tools don't do a great job at helping us do this. Instead, they're more orientated towards creating a series of still images! This is because we've currently got effectively two ways of working with our data:
       1) Isolated snapshots of the state of the scene at particular points in time, but with the ability to view/edit this in full 3D, and
       2) Abstract views of the timing/change data, but only in 2D, and in a separate view, represented separately from the stuff it's affecting

    What we really need is a way to have tools that can work across time (i.e. on a wide range of frames). The current single-frame behaviour then becomes like a special/sub-case of this more general approach. For instance, one thing we could do is that instead of having a "current frame" indicator in the timeline that looks like a line, we could instead have a "multiframe range" indicator - a box with start/end range controls + a "current frame" control/line within that range (see mockup below) - that could be used for both multiframe editing and/or visualising the Onion Skinning range:

    The next obvious question to ask is: What should we be able to do now that we have a way of specifying a range of frames to edit?

    There are several options I'm particularly keen to explore:
    1. Motion Sculpt (3D) - Using sculpt tools to apply time contraction/expansion/smoothing effects to motion path points
      • Time Expansion/Contraction - The basic idea here is that the brush would simply adjust F-Curve handles and/or keyframe spacing so that more/less frames are allocated to parts of the curve around the keyframes where the brushes were used
      • Path Smoothing - Instead of going in by hand to try to reduce kinks, perhaps it'd be worth seeing if we can find a more general/semi-automated solution?
      • Performance-Based Timing - Given a set of keyframes, we record the speed of the animator's mouse movements as they trace out the path, using this info to adjust the spacing/timing of keyframes (see the sketch after this list). (This is similar to a presentation at BConf a few years ago). In essence, this approach lets us combine the best of the motion capture world (i.e. dynamic performance/energy capture) with the precision of traditional keyframe animation control.
    2. Motion Stencils / Effect Masking - Using sculpt tools to "paint" animation from one source onto a character. In particular, several key examples of this include:
      • Painting Ragdoll Physics Sims onto body parts - The idea is that you can "paint" ragdoll physics sim effects onto bones to get nice secondary motion effects, and/or realistic "relaxing/settling/contact", with less effort than running a full sim or hand animating the effect yourself. This way, we can allow animators to make more use of physics sims to help create an initial rough version of some fiddly physics-based effects (e.g. hair blowing in the wind, or accurate hand-object contact) while still being able to "art direct" the result. (From the Spring weeklies, this sounds like something that could be quite useful for animating Spring's hair strands, for example.)
      • Painting Motion Captured data onto another rig - The process of cleaning up motion capture data is often a matter of taking what works from the original performance data, ignoring noise/floaty stuff, and then adding our own hand-animated stuff on top. A stencil/masking approach may be a useful interface for letting animators filter down to the key parts of the performance much more easily. We could also develop this further to include stuff like the Push/Relax pose tools, to help exaggerate/dampen aspects of the performance we're masking.
      • Selectively mask/paint effect of Breakdowner/Pose Library tools onto bones by different amounts - Currently the Breakdowner/Pose Library tools get applied to all bones by the same amount. However, what if we could use a paint/brush approach to control how much these tools affect different bones? For example, being able to selectively paint a "smile" or "frown" expression onto parts of the character's face, instead of having the full pose applied each time (or having to select different bone subsets first). (A small sketch of this follows the list.)
    3. Using "Multi Frame" editing tools to apply a single change across a wide range of frames - There are many different examples of how this could be useful. For example:
      • In-Viewport Falloff/Drag Animation - See the Grease Pencil demo for details
      • Adjusting the pose of a body part/prop that doesn't move that much/far over a wide range of frames - By selecting a range of frames and then selecting the controls of interest across those frames, we could modify that pose across all frames, instead of having to use something like the Pose Propagate tool to do so. (As nice as Pose Propagate is, it is still a bit of a mystery to users in some situations - particularly regarding what keyframes it will or will not affect, since it has to guess what is a held pose vs what is an actual "real" pose it shouldn't try to clobber).
    4. F-Curve Sculpting - Why stop at doing sculpt tools in the 3D view? Why not introduce these for editing keyframes too? While it probably wouldn't be that useful for hand-keyed animation, for motion capture/baked animation data (where the curves are typically dense and uniformly sampled, making them difficult to hand edit in general), perhaps it would be worth considering if sculpt-based tools are useful here for doing bulk edits to the data (e.g. smooth/denoise/grabbing/etc.). From what we've seen with Grease Pencil, sculpt works quite well on long chains of densely-spaced points, so it could well work really nicely here too!
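
To make the "performance-based timing" idea (option 1 above) a bit more concrete, here's a rough sketch; the function name and data layout are hypothetical:

```python
# Given blocked keyframes and a recorded trace of the animator's cursor
# (timestamp, fraction of the path covered so far), redistribute the
# keyframes so their spacing matches the performed speed.

def retime_keyframes(frames, trace):
    """frames: sorted keyframe numbers for the blocked poses.
    trace: [(seconds, path_fraction)] samples, with path_fraction
    running from 0.0 at the first pose to 1.0 at the last."""
    t0, t1 = trace[0][0], trace[-1][0]
    span = frames[-1] - frames[0]
    retimed = []
    for f in frames:
        fraction = (f - frames[0]) / span  # where this pose sits along the path
        when = next(t for t, d in trace if d >= fraction)  # when the performer reached it
        retimed.append(round(frames[0] + (when - t0) / (t1 - t0) * span))
    return retimed

# A performer who rushes the first half and lingers over the second:
# the middle pose is pulled earlier, steepening the opening action.
print(retime_keyframes([1, 13, 25], [(0.0, 0.0), (0.5, 0.5), (2.0, 1.0)]))  # -> [1, 7, 25]
```

And a small sketch of the per-bone influence painting from option 2 (plain floats stand in for proper quaternion blending):

```python
# Each bone blends towards the stored pose by the weight painted onto
# it, instead of the whole pose being applied at 100% everywhere.

def apply_pose_with_influence(current, target, painted):
    return {
        bone: value + painted.get(bone, 0.0) * (target[bone] - value)
        for bone, value in current.items()
    }

# Painting a "smile" onto the mouth corners only, leaving the brow alone:
print(apply_pose_with_influence(
    {"mouth.L": 0.0, "mouth.R": 0.0, "brow": 0.0},   # current pose
    {"mouth.L": 1.0, "mouth.R": 1.0, "brow": 1.0},   # stored "smile" pose
    {"mouth.L": 0.9, "mouth.R": 0.9},                # painted influence
))  # -> {'mouth.L': 0.9, 'mouth.R': 0.9, 'brow': 0.0}
```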
    In addition to these tools, we can also have "editable motion paths" (as in, being able to move the keyframes on the paths around (within limits), and have the animation data adjusted). I'm not convinced that this is actually what is often needed and/or that it would work best in many cases (IMO, the Motion Sculpting tools probably solve more of the cases you'd be trying to fix this way anyway). It's also potentially a lot trickier to implement than everything else. But, I'm not ruling this out completely for now - just, that it isn't a personal priority.

    There is also the matter of Onion Skinning. MultiFrame Editing would need something similar, so we'd eventually have to develop a solution to this problem anyway. So far, we haven't implemented Onion Skinning as it would largely be prohibitively expensive to do so (i.e. we'd be evaluating some n+1 frames of data each time we want to draw a single frame!). With the work on the Depsgraph (faster evaluation, with the option of doing it in the background) + a move towards embracing geometry caches (e.g. Alembic caching of animated meshes as a general rule, instead of an obscure hack for certain tricky situations), I think it might be possible to solve this! That's because instead of having to recalculate things all the time at draw time, we'd now be able to cache off the geometry (i.e. watch any Presto demos, and see how it recaches the entire timeline, current frame outwards, upon any change being made), meaning that drawing the onion skins can potentially be a lot faster (i.e. simply a matter of streaming the relevant geometry and rendering it with an appropriate shader). Possibly the main technical issue we have here relates instead to how we render the geometry (i.e. what shaders we use) - specifically, we want a shader that lets animators see the silhouettes of the character shapes, instead of being bogged down with weird glitches in all the places where the mesh intersects itself. One of Raf's blog posts touched on this - the solution he came up with (i.e. showing just an outline silhouette) is an interesting approach; I'm not sure yet about its relative merits though.
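
Continuing the FrameCache sketch from the previous section: once the geometry is cached, drawing onion skins reduces to streaming neighbouring frames through whatever silhouette-style shader we settle on. Roughly (where `draw` is a hypothetical stand-in for that shader):

```python
# No per-skin re-evaluation - just cached geometry, faded and tinted
# by its distance from the current frame.

def draw_onion_skins(cache, current, draw, before=2, after=2):
    for offset in range(-before, after + 1):
        if offset == 0:
            continue  # the current frame is drawn normally elsewhere
        geometry = cache.geometry(current + offset)
        alpha = 1.0 - abs(offset) / (max(before, after) + 1.0)
        draw(geometry, alpha=alpha, tint="red" if offset < 0 else "green")

# Demo (assuming the FrameCache sketch above is in scope), with print()
# standing in for the shader:
cache = FrameCache(lambda frame: f"<geometry @ frame {frame}>")
draw_onion_skins(cache, current=10,
                 draw=lambda geo, alpha, tint: print(geo, round(alpha, 2), tint))
```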



    *) Small improvements to commonly used workflows/tools we can begin doing now
    There are a raft of smaller technical issues (Blender-specific) that need to be resolved before we get into all these other things. For example:
    • Animation Filtering API - Over the past few weeks, I've been doing a bit of planning/investigation work on putting in place Python API support for the animation channel filtering APIs that are available internally for working with animation data. Among the benefits:
      1. Finally, you'll be able to use the same nice stuff we use internally (instead of having to roll your own - see the sketch at the end of this list) to create new specialist editing tools,
      2. It'll be easier to create addons to support keyframe editing in the Timeline - I'm still not keen on supporting this as a built-in feature, given the slippery slope of feature creep that would happen here; hence, I'd like the community to first reach some consensus about the level/depth of functionality needed (in the form of an addon) before we look into formalising it.
    • Control over repeating/cyclic animations - These are a big part of doing interactive/event-driven stuff, so we'll need to put some tweaks in place. Before adding things piecemeal here though, I'd prefer that we put together a comprehensive plan for what's needed (i.e. what we're trying to solve and why), to avoid heaps of overlapping features. That said, some things I'm currently considering include:
      • Defining start/end points for actions/clips - So, instead of the cycle length for each F-Curve being calculated based on its own keyframes, we define a hard limit for the start/end range of an action, and all the curves in there use that as their reference points for cycling. The main question that this raises though is: why don't we just make animators use the NLA in such cases instead?
      • Making it easier to tweak things at the start/end of the cycle (for example, displaying 1-2 keyframes or even the entire range of keyframes for the cycles immediately before/after the "real" ones, perhaps in different colors/faded to make it more obvious what's going on)
      • When inserting keyframes beyond the end of the cycle range, we could do one of the following:
        • Remap the new keyframe back into the source keyframe range - giving us an easy way to tweak the cycle while previewing it on a long-looping timeline (see the remap sketch at the end of this list)
        • Make all the intermediate cycles (between the original range and the current frame) into real keyframes, then insert the keyframe into the latest cycle.  The point here is that it would make it easier to animate "transitions" out of a cyclic motion, and/or overlay variations over repeating motion.
    • Calculation and management of motion paths on arbitrary geometry points - Instead of only tracking bone tips, animators may actually care more about the path traced by the nose-tip instead. This does mean though that path/time based editing in such cases could get rather complex. (Also, it'd be cool to try something like this for GP, though the changing geometry makes this hard)
    • Bone Groups and Selection Sets - Back when I did 2.5, some of the advice/feedback I heard was that Selection Sets in Maya were in many ways overly restrictive and integrated too deeply. However, with the advent of the Selection Sets addon, I think we need to revisit this situation, and perhaps make the builtin Bone Groups more flexible. This is probably not the post for this, but in short, I propose that we keep the "each bone can only have one [main] group" rule (used mainly for colors), and then make it possible for each bone to be part of multiple other groups, so that animators can create their own bone subsets (as now) as they work.
    • Small workflow enhancements for common curve editing tasks - I'm aware that animators from other packages have been gathering around a few addons that provide quite a few useful utilities for things they do a lot. If any animators are interested in getting such functionality into Blender, it might be a good idea to start keeping track of these and identifying the priority items (e.g. via Right Click Select).
    • Managing rig complexity and relationships (drivers/constraints/parenting) - Though the exact implementation and semantics are yet to be determined, it's clear that we probably should move towards a more node orientated presentation of all the relationships involved in a rig. 
      • At the very least (and as a first step), it would probably be a good idea to set up a "rig overview" editor to provide an overview of all the drivers, constraints, and modifiers, to help keep track of where relationships exist
      • As a next step, we can then decide on more of the semantics - i.e. what sort of nodes to allow, how they connect to each other, etc.  One of the big challenges here though is figuring out how to solve the scoping problems (i.e. how many bones/objects do we show nodes for, while keeping everything manageable)
      • (Note: The new depsgraph should have no problem supporting this sort of thing - the question rather is what effect it may have on our ability to write certain classes of editing tools).
    • Driver Creation/Management - As mentioned in William's recent UI design doc, there are a number of things we could do to make it easier to set up and edit drivers (without or in addition to moving to a full relationship graph). Some of these are things that have been floating around on my todo lists for a while now too...
      • Popup panel showing the key bits and pieces (e.g. expression, relationship - linear/non-linear + factor, variables), with a button to open the full Drivers Editor. (OR, a "budget" version of this is just a menu option to open the Drivers Editor with the driver in question focussed). Sure, this breaks the "non-blocking, non-overlapping" concepts, but we also have the search popups now - ultimately, it may be necessary/best sometimes to have such elements.
      • RNA Path Builder Widget - One problem many people have is with figuring out what driver paths they should be using. For a very long time, I've always intended to replace that textbox with either a "breadcrumbs" widget (like you see on any modern file browser), or by retrofitting the property search to support recursively traversing/building up the path. Recently I experimented with the latter approach, but ended up running into some technical difficulties (i.e. we'd effectively have to implement a new set of infrastructure in the UI toolkit, as the existing search gets in our way too much).
    • Fixing NLA Evaluation so that it can be used for Animation Layers. (See notes above about how this can be done - apart from some unresolved design issues about the best way of managing "default values", the rest should be relatively straightforward to implement)
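
Two quick sketches for the items above. First, the kind of channel-walking that addon authors currently have to roll by hand with today's bpy API (and which a proper Animation Filtering API would wrap):

```python
import bpy

# Gather the selected keyframes of an action, e.g. as the starting
# point for a custom editing operator. (Run inside Blender.)

def selected_keys(action):
    for fcu in action.fcurves:
        for key in fcu.keyframe_points:
            if key.select_control_point:
                yield fcu.data_path, fcu.array_index, tuple(key.co)  # co = (frame, value)

action = bpy.context.object.animation_data.action
for data_path, index, (frame, value) in selected_keys(action):
    print(data_path, index, frame, value)
```

Second, the "remap back into the cycle" option for cyclic actions is essentially just modulo arithmetic over the action's declared frame range:

```python
# A key inserted past the action's end is written back into the
# source range, so you can tweak the loop while previewing it on a
# long-looping timeline.

def remap_into_cycle(frame, start, end):
    return start + (frame - start) % (end - start)

# An action spanning frames 1-25 (a 24-frame loop): keying on frame 30
# actually edits frame 6 of the loop.
assert remap_into_cycle(30, 1, 25) == 6
```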

    Code Quest
    With the 2.8 "Code Quest" under way now, you may be wondering which parts of this may end up getting some attention during the Code Quest. At the time of writing, it's unclear whether I'll get time to work on much/any of these things, since there are quite a lot of other larger unresolved issues we'll likely have to focus on in the interim.

    That said, if/when I get time to work on any of this stuff, it'd likely be the smaller issues identified in the section at the end, addressing many issues that animators have often raised with me. (Of course, given the chance, I'd also totally love to spend the time polishing Pose Sculpting once and for all, and working with animators on refining it further... but I'm sure we'll end up having other priorities that need addressing first).

    8 comments:

    1. Interesting stuff. Suffice to say, the one that really stands out is the NLA evaluation for Animation Layering, which is the only real pain point animating in Blender today.

      You're really on to something thinking about Aardman's puppets. Lately you see these extremely complex rigs with hundreds of tiny bones to make a face pliable, but the animator still ends up limited by the nature of skinning (pinching, bulging, collapsing). The answer, I think, is something like AniSculpt, where instead of trying to rig or make and balance shapekeys for every potential possibility the animator just makes the shape they need on the fly.

      Of the interface and interaction ideas being discussed recently, to be honest your Pose Sculpting is the most promising I've seen. A unified sculpting interface for both posing Armatures and making on-the-fly shapekeys (via an updated AniSculpt that works with Linked Groups) would be very intuitive for the animator and tremendously cut down on the set-up time from a rigging standpoint.

    2. The cache idea is really interesting. There is one point in Grease Pencil where we could use this cache: in onion skinning, the current implementation uses the location of the object at the current frame, so all strokes use the current transformation matrix; if the object moves in the previous/next frames, the transformation matrix doesn't change, because we only evaluate the current object location.

      Having a way to know the transformation matrix of an object at a frame other than the current one would solve this issue, but without a cache, the time needed to gather all that data makes this technique impossible.

    3. Great thoughts! What do you think of this as far as an animation tool?

      https://vimeo.com/216753419

      1. That's certainly some very interesting stuff. However, like another deformer system I once saw at one of the Siggraph conferences, the big problem I've always had with these is: "Ok great, but how would I put these deformers into action in practice to create animation?" So that's one of the big unknowns/issues that we need to resolve if we want to allow animators to apply sculpt brushes to their geometry as a way of producing hand-sculpted animations.

      2. I've tested it (in a way similar to Daniel Lara) by adding shape keys to store the deformation. In particular, I use Animall's Shape Key keyframe insertion to store all the frames of deformation in one shape key that can then be added on top of existing animation - sort of like having tweak controls without weight painting. The only drawback is that this method adds keyframes for all points on a mesh, regardless of whether there's any movement. But that wouldn't be hard to clean up. I'm surprised this is not used more often. What I found surprising about Pixar's technique was not that they were sculpting, but that the kelvinlet process preserved volume way better than the grab brush in Blender's current release(s).

    4. Oh my god, this would be great!! You're the best.
      Especially the onion skins - they shouldn't just be real-time; creating a single ghost/snapshot should also be an option.
      Blender is so powerful - everything else about it is awesome, but the animation system has become a drag.
      (The above was written via translation software.)

    5. I have been writing some notes on something similar: https://github.com/adamearle/Back-To-Animation/wiki
