A common source of confusion for beginner riggers is that even though a particular dependency path may not currently be active, from the perspective of the dependency graph it is still contributing to a cyclic dependency. Much of this stems from the fact that the old depsgraph could only schedule work by looking at the full set of dependencies and creating a single static ordering that would work in all cases.
This post explores a potential solution to this problem and how it could be integrated into the framework we've been looking at.
As a bit of a disclaimer: all things considered, this feature is currently not high on the priority list of things that really need to be present. Despite this, I believe it shouldn't be too difficult to slot into the system at a later date, if and when it becomes a necessity (and we have the capacity to support it properly), at least based on current design ideas, and given that we aren't yet tied to a solid codebase with its own particular set of restrictions.
There are really only a few scenarios in which this type of thing comes up:
- There are two control systems for achieving the same result: one works better in some situations, the other in others. The rigger therefore includes both in the rig, but makes each set of controls track the other, so that after posing with one set, the other set is left in a valid state and can take over whenever it is better suited to the task.
- IK/FK bone schemes on the same chain, blended using a slider. It's somewhat debatable whether such setups are good/valid or not; I personally fall on the side that says they are not, a key factor being that any rotation/scaling on the bones of an active IK chain will affect the end result, potentially producing unintended poses while blending.
- Setups where you can manipulate either of two systems, with evaluation following whichever one is currently being transformed in the UI. By default, one of the two (favoured as the default/base setup, and the one that actually gets keyframed) is used for evaluation purposes and drives the other, in case users want to tweak the pose using the second system at some point.
- There are multiple sets of geometry in the scene, each with a different data density. (Note: this is one of the cases pointed out in the LibEE paper. In their system, they ended up introducing a "culling pass" over the graph to avoid computing branches that weren't actually relevant, but only did so just before evaluation time.)
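To make the last scenario concrete, a LibEE-style culling pass amounts to walking backwards from the outputs that are actually needed and marking only those nodes live, so that the unused-density branch is never evaluated. The following is a minimal sketch; the `Graph` class, node names, and `cull` method are all hypothetical, not real depsgraph API.

```python
# Hypothetical sketch of a "culling pass" over a dependency graph:
# walk backwards from the required outputs and collect only the nodes
# that actually contribute to them. All names are illustrative.

from collections import defaultdict

class Graph:
    def __init__(self):
        self.deps = defaultdict(set)   # node -> nodes it depends on

    def add_dep(self, node, depends_on):
        self.deps[node].add(depends_on)

    def cull(self, required_outputs):
        """Return the set of nodes that must be evaluated."""
        live, stack = set(), list(required_outputs)
        while stack:
            node = stack.pop()
            if node in live:
                continue
            live.add(node)
            stack.extend(self.deps[node])   # follow dependencies backwards
        return live

g = Graph()
g.add_dep("render", "mesh_hi")     # high-density branch, currently shown
g.add_dep("mesh_hi", "armature")
g.add_dep("mesh_lo", "armature")   # low-density branch, currently unused
print(g.cull({"render"}))          # mesh_lo is culled away
```

Note that, as the post points out, LibEE only runs this just before evaluation time; the point of the BranchNSwitch idea below in this post is to make the equivalent decision during evaluation instead.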
Overall, I'm somewhat skeptical about whether these are good ways to work. Nevertheless, within our architecture, it should be possible to accommodate them...
Isolating Complementary Branches
Much like how we isolate mutually cyclic nodes and clump them together into "ID Groups" so that we can more effectively tackle their problems in isolation, it should be possible to detect this particular case too. It will once again present itself as yet another cyclic dependency (due to the links between the chains), with the key observation being that all of those links are hooked up to a single control property/state.
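The clumping step described above is essentially strongly-connected-component detection: any component with more than one node is a set of mutually dependent nodes and becomes a candidate ID Group. A minimal sketch using Tarjan's algorithm, with illustrative node names (the real depsgraph code would of course look quite different):

```python
# Sketch: find cyclic clusters ("ID Group" candidates) as strongly
# connected components of the dependency graph. Tarjan's algorithm;
# all node names are made up for illustration.

def strongly_connected_components(edges):
    """edges: dict node -> iterable of successor nodes."""
    index, lowlink, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def visit(v):
        index[v] = lowlink[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in edges.get(v, ()):
            if w not in index:
                visit(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:          # v is the root of a component
            comp = []
            while True:
                w = stack.pop(); on_stack.discard(w); comp.append(w)
                if w == v:
                    break
            sccs.append(frozenset(comp))

    for v in edges:
        if v not in index:
            visit(v)
    return sccs

# Two control sets tracking each other form a single cyclic cluster:
edges = {"ctrl_A": ["ctrl_B"], "ctrl_B": ["ctrl_A"], "mesh": ["ctrl_A"]}
groups = [c for c in strongly_connected_components(edges) if len(c) > 1]
print(groups)   # one group containing ctrl_A and ctrl_B; mesh stays outside
```

The extra detection work for the switch case would then be a check over each such cluster: do all the cross-links share a single control property?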
At worst, detecting these cases automatically turns out to be quite tricky and error-prone, in which case we add ways for users to explicitly tag (or give hints about) these situations (i.e. when in doubt, remember that we always have the option to "ask the user for more info" to clarify the situation for future runs). For example, the drivers used for this purpose could gain a flag indicating that they may be used this way, or (if we ever adopt a node-tree interface for doing this sort of setup work) we could provide explicit nodes for specifying it.
The proposed solution is basically to have a special node for this purpose – let's call it "BranchNSwitch" – which would live within ID Groups. This is shown in the diagram below:
It would work as follows:
- At build time, we figure out which sets of nodes (i.e. little sub-graphs) would be evaluated for each value of the switch. To keep things simple, we could initially limit it to a binary on/off (or percentage: 0% = A, 100% = B) scheme. The key point is that we try to ensure that everything either branch depends on sits before the branch itself; otherwise, the cyclic resolvers on the ID Group will have to come into play...
- At run time, we initially schedule only the node (or the outer shell of it) where the "switch" (i.e. "C" in the diagram) lives.
- When this node gets evaluated, it checks the value of the switch (NOTE: since this depends on the results of other nodes, it must be done at runtime). The evaluation function is then able to push the chosen set of evaluation nodes/steps (see the upcoming ideas regarding "Static RuleSets" / scheduling) to the front of the queue, or at least to the place where this node sat.
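The three steps above can be sketched as follows. Everything here is hypothetical (the class, the evaluation queue, the 0.5 threshold); it only shows the shape of the idea: branch node lists are precomputed at build time, and evaluating the switch injects the chosen list at the point where the switch itself sat.

```python
# Illustrative sketch of the "BranchNSwitch" idea. Not real depsgraph
# API: the scheduler here is just a deque of node names, and "evaluating"
# a plain node means appending its name to a results list.

from collections import deque

class BranchNSwitch:
    def __init__(self, control, branch_a, branch_b):
        self.control = control                       # callable, 0.0..1.0
        self.branches = {0: branch_a, 1: branch_b}   # precomputed at build time

    def evaluate(self, queue):
        # The control value is only known once its own dependencies have
        # run, so this decision has to happen at evaluation time.
        chosen = self.branches[0] if self.control() < 0.5 else self.branches[1]
        # Push the selected sub-graph to the place where this node sat.
        for node in reversed(chosen):
            queue.appendleft(node)

def run(queue):
    results = []
    while queue:
        node = queue.popleft()
        if isinstance(node, BranchNSwitch):
            node.evaluate(queue)
        else:
            results.append(node)    # stand-in for real evaluation
    return results

# Slider at 0.0 selects the FK branch; "deform" runs after it.
switch = BranchNSwitch(lambda: 0.0, ["fk_1", "fk_2"], ["ik_solve"])
print(run(deque([switch, "deform"])))   # ['fk_1', 'fk_2', 'deform']
```

Note that because `"deform"` was already in the queue behind the switch, the injected branch runs before it here, which is exactly the ordering concern the next paragraph worries about in a multi-threaded setting.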
The specifics of such scheduling would still need to be worked out in detail later, once we have a clearer picture of that part of the system. The main worries here are: (a) a race condition, if nodes depending on the result of the switch manage to run before the selected branch can be scheduled; or (b) a wasteful, sequentially locked single-core situation, where this node evaluates its sub-graph itself, perhaps at the expense of preventing other dependent nodes (hopefully queued up for the same core) from starting work.
Another interesting possibility is that this sort of mechanism could be used by cyclic resolvers as a way of dealing with those pesky "declarative constraints" scenarios (e.g. a system of Limit Distance constraints pointed at each other to keep a bunch of objects close together while still moving freely), by selectively breaking and mending links at different points while cycling through them.
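One way to read "breaking and mending links while cycling" is as iterative relaxation: on each pass, every object is evaluated against a snapshot of the previous pass's positions (its incoming link is temporarily "broken"), and the new positions are then fed back in ("mended") for the next pass. The sketch below shows this for a ring of 1-D Limit Distance constraints; the functions and the clamp behaviour are my own simplification, not Blender's actual constraint math.

```python
# Hedged sketch of a cyclic resolver for a ring of "Limit Distance"
# constraints (A targets B, B targets C, C targets A), in 1-D.
# Each pass reads the previous pass's positions, i.e. the cyclic link
# is broken for the duration of the pass and mended between passes.

def limit_distance(pos, target, max_dist):
    """Clamp pos to lie within max_dist of target (1-D simplification)."""
    d = pos - target
    if abs(d) > max_dist:
        pos = target + max_dist * (1 if d > 0 else -1)
    return pos

def relax(positions, max_dist, passes=10):
    names = list(positions)
    for _ in range(passes):
        prev = dict(positions)   # "broken" snapshot from the last pass
        for i, name in enumerate(names):
            target = prev[names[(i + 1) % len(names)]]   # ring of constraints
            positions[name] = limit_distance(positions[name], target, max_dist)
    return positions

pts = relax({"A": 0.0, "B": 10.0, "C": 20.0}, max_dist=4.0)
# After a few passes, every object sits within max_dist of its target.
print(pts)
```

Whether such fixed-point iteration converges (and how many passes to allow) is exactly the kind of policy a cyclic resolver would have to own; the BranchNSwitch-style scheduling hook would just give it a place to re-inject each pass into the queue.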