Animato
Firstly, let's briefly review the main concepts of the animation system (nicknamed "Animato") in use since 2.5.
ID Datablocks
At the heart of Blender lies the ID-datablock system, which every Blender user should be familiar with (and if you're not, then you're missing out on a lot when it comes to asset management).
The basics of this are as follows:
All (re)linkable entities in a Blender file (e.g. Scene, Object, Mesh, Curve, Material, Texture, Node Trees, etc. - basically anything that is available as a list in bpy.data.*) are "datablocks". Compared to non-datablock entities such as Vertex Groups, Bones, Keying Sets, F-Curves, and Nodes, the most distinguishing feature of datablocks is that they are all based on (i.e. contain) the "ID Block" structure.
More specifically, in software-engineering terms, all datablocks are subclasses of the "ID Block" (simply "ID" in the code) structure/type. The "ID Block" therefore acts as a kind of serialisation header (much like the base "object" types in many programming languages/libraries - such as QObject in Qt, GObject in GLib, and java.lang.Object in Java): given a pointer to some chunk of data, we have a reliable way of figuring out what type of data it is, along with other identifying metadata about it (including how many other links to it exist, and whether the block was linked in from another file), so that we know it is safe to perform certain operations on that data.
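For illustration, this shared ID metadata is directly visible from the Python API (the object name "Cube" below is just a placeholder):

    import bpy

    ob = bpy.data.objects["Cube"]  # any datablock will do
    print(ob.users)    # how many links/users this datablock currently has
    print(ob.library)  # the source library if linked from another file, else None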
That is why, when making links/relationships between various entities in Blender, we can only create direct links (i.e. pointers) to entities which are themselves datablocks. In all other cases, you'll notice that the link must be made using a string - usually a name, often accompanied by a reference to the datablock that owns the named entity. Examples include: Parent Object + Bone Name (for Bone Parenting), Armature Object + Bone Name (for Constraints), Mesh Object + Vertex Group Name (for Constraints again), etc.
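As a small illustration of this pattern (names below are placeholders), note how the constraint's target is a real datablock pointer, while the bone is referenced by name only:

    import bpy

    ob = bpy.data.objects["Cube"]
    con = ob.constraints.new('COPY_LOCATION')
    con.target = bpy.data.objects["Armature"]  # direct link: a datablock pointer
    con.subtarget = "Bone"                     # indirect link: bone referenced by name (string)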
Datablocks and AnimData
All animatable datablocks (i.e. most of the datablocks available, but not all of them - Image, Action, and Screen are notable exceptions) have another thing in common: in addition to having ID Block headers, they also reserve space to host an "AnimData" block which immediately follows the ID Block header. Note though that even though we reserve a slot for AnimData to live on such datablocks, it may not be filled (i.e. if the datablock has never had keyframes/animation or drivers added to it).
This type of "ID Block + AnimData slot" header is basically what defines whether a given datablock can be animated or not (Note: for convenience, in the code, "IdAdtTemplate" is the special type for this header, which is effectively the superclass of all animatable datablocks).
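From the Python API, that slot shows up as the animation_data property on animatable ID types; a minimal sketch (placeholder names again):

    import bpy

    ob = bpy.data.objects["Cube"]
    print(ob.animation_data)    # None while the AnimData slot has never been filled

    ob.animation_data_create()  # explicitly fill the slot
    print(ob.animation_data)    # now an AnimData instance

    # Non-animatable datablocks (e.g. Image) simply don't expose animation_data.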
Another thing I should point out here is why AnimData is attached to these ID-Blocks. There are two main reasons (and a host of others which I won't get into here). Having the AnimData on datablock-level:
- Makes it easier to reuse actions between different datablocks. If animation data lived instead at Scene level (or worse, as some amorphous soup of data living "somewhere" in Blender's memory ether), you'd have to get into heaps of weird bagging-up and isolation mechanisms that are quite hideous. Besides, we just don't have disembodied soups of data floating around in Blender - for one thing, we'd never be able to (re)load them after saving, or across undo/redo operations!
- Keeps the data-access paths (via RNA) a manageable length and complexity. By using datablocks as the start-points from which we only need to walk a short distance to the data/properties required, we are able to make good use of the wide reach of RNA to access almost any data in Blender.
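This is also what keeps the RNA paths used by animation channels short: they always start from the owning datablock. A small sketch of what this looks like in practice (names are placeholders):

    import bpy

    ob = bpy.data.objects["Armature"]

    # Paths are resolved relative to the owning datablock:
    print(ob.path_resolve('location'))
    print(ob.path_resolve('pose.bones["Bone"].rotation_quaternion'))

    # Keyframes are likewise addressed by (datablock, path, array index):
    ob.keyframe_insert(data_path='location', index=0, frame=1)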
AnimData
The AnimData block basically encapsulates the data for several animation-related capabilities, grouping them together in a consistent, tight-knit, standardised unit which can be used across many different datablock types to provide the same level of animation feature support to all of them.
There are basically 2 components/groups of data within the AnimData block: Animation and Drivers.
AnimData - Animation Part
The animation part is evaluated first (actually, before any other data is evaluated), since its only dependency is time (i.e. the current frame + any subframe offsets for motion blur). Its results, however, must be in place before many other kinds of evaluation operations take place (e.g. driver evaluation, and transform or geometry evaluation). Thus, it only needs to be evaluated on frame change (whether that comes from animation playback, scrubbing, or a baking-induced time change).
There are two parts/components here:
1) NLA Stack - A list of layers (known as tracks), within which there are strips (which instance pre-made/existing actions). Results are accumulated from the first track up to the last track.
2) Active Action - A reference to the Action which is always evaluated on top of (i.e. after) the results of the NLA Stack. This is where all new keyframes always go, and is what the Dopesheet/Action/Graph editors provide a view/window of.
These together define the animation result for a datablock. As far as the overall evaluation process is concerned, this animation "blackbox" is a single atomic step/operation.
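Both components are visible on the AnimData block from Python; a quick way to inspect them (assuming the datablock already has animation data):

    import bpy

    adt = bpy.data.objects["Cube"].animation_data
    if adt:
        # 1) The NLA stack: tracks, each holding strips that instance actions
        for track in adt.nla_tracks:
            for strip in track.strips:
                print(track.name, strip.name, strip.action)

        # 2) The active action, evaluated on top of the NLA result
        print(adt.action)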
AnimData - Drivers Part
The drivers part is evaluated later, as part of the depsgraph-ordered and depsgraph-tagged data update stages, the intention being that the depsgraph has figured out an evaluation order in which drivers are, by and large, scheduled to be evaluated only after all their ancestors are up to date.
Notes:
- Drivers can only be updated after all animation is done (or at least the animation from the same AnimData block, as well as the animation on anything it reads values from or writes values to).
- In practice, most drivers are independent of each other (apart from those which rely on the results of sibling drivers - relationships which a proper depsgraph could exploit). There is no real reason to perform driver evaluation as one blackbox operation, other than that it is the easiest approach when there is no other way of scheduling the individual drivers so that they get evaluated only when all their dependencies are truly ready.
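For reference, this is roughly how a driver and its dependencies get set up via the Python API (a minimal sketch with placeholder names) - each driver variable is exactly the kind of "ancestor" relationship the depsgraph has to account for:

    import bpy

    ob = bpy.data.objects["Cube"]
    fcu = ob.driver_add('location', 0)  # adds a driver F-Curve to ob's AnimData

    drv = fcu.driver
    drv.type = 'SCRIPTED'
    var = drv.variables.new()           # each variable depends on other data
    var.name = 'x'
    var.targets[0].id = bpy.data.objects["Empty"]
    var.targets[0].data_path = 'location.x'
    drv.expression = 'x * 2.0'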
Current System
Animation Evaluation
BKE_scene_update_for_newframe() is used as the entrypoint for performing updates every time the scene/database needs to be updated for a new frame - whether that be for animation playback, scrubbing, dupliframe/ghost drawing, or some baking operation.
One of the first operations here is a call to BKE_animsys_evaluate_all_animation(). This function goes over every single datablock in the entire file/database, and evaluates the animation component of that datablock's AnimData block (if present).
In order to ensure that instancers (e.g. Objects) can override the animation of instanced data (e.g. animation/drivers at Object level controlling the shapekey influence on the attached mesh (ob.data)), datablocks must be evaluated such that "instanced" datatypes always have their animation evaluated before "instancer" datatypes. Let's clarify this with the example we've just mentioned; there, what we have is:
1) ShapeKey Animation - All shapekey datablocks
...
2) Mesh Animation - All mesh datablocks (usually there's nothing to do at this stage for meshes, since there are hardly any parameters at this level!)
...
3) Object Animation - All object datablocks
Of course, there are many other datablock types which also have to be scheduled in between these ones (which I've left out here for clarity). Nevertheless, the main takeaway is that the animation for all datablocks gets evaluated as the first stage of frame-change updates.
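A much-simplified Python sketch of this ordering (evaluate_animdata is a hypothetical stand-in for the per-datablock C-level evaluation; the real code covers many more datablock types):

    import bpy

    def evaluate_animdata(id_block, frame):
        # Hypothetical stand-in for the C-level animation evaluation
        print("evaluating", id_block.name, "at frame", frame)

    frame = bpy.context.scene.frame_current

    # "Instanced" types must come before the "instancer" types that may override them:
    for collection in (bpy.data.shape_keys, bpy.data.meshes, bpy.data.objects):
        for id_block in collection:
            if id_block.animation_data:
                evaluate_animdata(id_block, frame)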
Driver Evaluation
All drivers attached to an AnimData block are currently evaluated at the same time, one after the other, in the order in which they were defined. Although the original design envisaged that the depsgraph would be used to at least sort the drivers within each drivers list, and also to tag the AnimData block with a flag to indicate that its drivers were to be evaluated, neither of these steps is actually done in the current system, since it is too difficult to graft that onto the current depsgraph.
AnimData.Drivers chunks are usually evaluated as the first operation (or one of the first) when each depsgraph-handled datablock is scheduled to run (currently this means Objects, ObData, and, as a special exception, Shapekeys). This usually happens during the BKE_scene_update_tagged() evaluation entrypoint (which gets called as part of the event loop), but can also happen as one of the later stages of BKE_scene_update_for_newframe() - since frame changes naturally mean that nearly all data which is affected by time or animation will need to be recomputed.
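That per-datablock driver list (and its definition order) is directly visible from Python (placeholder object name):

    import bpy

    adt = bpy.data.objects["Cube"].animation_data
    if adt:
        # Drivers are stored - and currently evaluated - in definition order:
        for fcu in adt.drivers:
            print(fcu.data_path, fcu.array_index, fcu.driver.expression)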
Limitations
There are 3 primary limitations that many will have encountered when working with this system:
1) Only driver collections on datablocks that the depsgraph currently knows about/handles get updated with any accuracy. [Work on other parts of the system should have addressed those problems already]
2) Drivers for pose-bone settings (or pose-bone constraint settings) only get evaluated when the Object datablock is first encountered, before pose evaluation takes place. The main problem is that if one of these drivers relies on the pose-evaluation results of another bone or bones in the same armature, lag results, since the driver cannot yet see the updated state of those pose bones when it is evaluated (see the sketch after this list).
3) Animation is applied to all datablocks in the scene for the same timestamp.
- This makes it impossible for any time-offset features to exist (though it's arguable whether these are good to have in the first place!), but also causes a bit of trouble when dealing with objects which get instanced by groups.
[For now though, we'll sidestep the issue of groups/instancing troubles, since the problems there are far more systemic than we can easily tackle quickly - or at least during this GSoC period]
- This ends up causing some havoc when there are sequence strips in the file which have a scene strip instancing another scene full of objects, etc., which then needs to be offset to some other timestamp before it can be evaluated. The ensuing mess causes database state pollution issues - things get very messy very quickly if not handled carefully.
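To make limitation 2 concrete, here is a minimal sketch (placeholder names) of the kind of setup that exhibits the lag - a driver on one bone reading the evaluated pose of a sibling bone:

    import bpy

    arm = bpy.data.objects["Armature"]

    # Driver on BoneB's X location...
    fcu = arm.pose.bones["BoneB"].driver_add('location', 0)
    drv = fcu.driver
    drv.type = 'SCRIPTED'

    # ...which reads BoneA's *evaluated* pose transform:
    var = drv.variables.new()
    var.name = 'x'
    var.type = 'TRANSFORMS'
    var.targets[0].id = arm
    var.targets[0].bone_target = "BoneA"
    var.targets[0].transform_type = 'LOC_X'
    drv.expression = 'x'

    # Since these drivers run before pose evaluation, this driver sees BoneA's
    # pose from the *previous* evaluation - hence the lag described above.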
Where to from here?
Animation
1) Introduce a "time source" node, whose main role is to act as the entrypoint for evaluation on frame-change events, such that we can use standard depsgraph mechanisms for figuring out what needs to be evaluated and how to schedule this all up, but also for making it easier to run queries on the depsgraph to figure out what needs to change on framechanges.
2) Have nodes in the depsgraph for scheduling up the evaluation of AnimData.Animation components (i.e. NLA Stack + Action blackbox operations), instead of having to hardwire up all the datablocks to get evaluated. This should not only solve the problem of different parts of the database getting updated to timestamps they won't need to be set to, but also allow more timely scheduling of this stuff.
3) Animation evaluation nodes need to ensure that AnimData override order constraints are taken into account.
Drivers
Each driver should be able to be evaluated/scheduled up independently of other drivers (i.e. not together all at once, but while still obeying whatever relationships may exist between them).
Not only this, but they should also be able to be interleaved between other evaluation operations, such that they can truly get the results they need (and are able to also set their results while they still matter).
Side Notes
- I've glossed over some details in this post, but that shouldn't get in the way of getting the big picture across.