A brief warning that some readers may find some of the content here upsetting: there is a "little bit" of math involved here, but I'll try to keep it tame.
The situation
Let's firstly have a look at the sort of scenarios where you're likely to encounter the problems I'll be discussing today. From what I understand, most people run into this when they try setting up "gears" - those circular things with the squarish teeth sticking out that interlock/grate against each other to transfer rotational movement.
Gears - Photo from HowStuffWorks.com (courtesy Emerson Power Transmission Corp.)
For gears of equal size just propagating rotational movement, the Copy Rotation constraint (or the Transform constraint, though that's a bit clumsier) should be sufficient, though I wouldn't really count on it. For any of the other situations, none of the constraints can be expected to do a decent job except when rotations of less than 180 degrees are required.
Some underlying math...
By this stage, you should be wondering about that 180 degree limit I just mentioned. Let's have a look at a little diagram:
Equivalent Rotations: there's more than one way to skin a cat
This diagram shows that each orientation (i.e. rotations are actions, while orientations are the states/results arising from performing rotations) can be described in terms of two rotations - one in the clockwise direction (green), and one in the anti-clockwise direction (red) - around a pivot (the star), which is an axis running straight out of the screen, like a poly's "normal".
Also, as you should have learnt in primary school, there are 360 degrees in a full circle, so there are 180 in a half circle. In the diagram, you can see that a 180 degree rotation gets you halfway around the circle.
Now suppose that I drew a line and labelled the ends a and b (a corresponds to the pivot, and b to the 0 marking). I then place a pencil down on this line, and rotate the pencil in whatever direction by whatever amount I like (but you're not allowed to see how I'm rotating it). Now, I ask you a "simple" question: by what angle did I rotate the pencil away from the line?
After guesstimating a bit, most people are most likely going to respond with some positive number between 0 and 360, assuming clockwise rotation only too (NOTE: hopefully this isn't going to sound too confusing, but mathematicians consider "positive" rotation to be anti-clockwise and "negative" rotation to be clockwise, whereas most people naturally regard/are taught to regard clockwise as positive, probably because that corresponds best to the way clocks run). But how do they know that I didn't rotate the pencil anti-clockwise instead by a different amount, or that I may have rotated the pencil several times before deciding on its final resting place?
The simple answer is that they can't. Just looking at the rotated pencil, you're only seeing a state: the new orientation of the pencil. There is no way to know with absolute certainty how much I rotated that pencil by if you just walked into the room blind. You would need to know a prior state to have some chance at guessing what I did, and then it's possible that I may have rotated the pencil so much that you still really can't tell (i.e. it could have been rotated several full cycles and it'd still look like it had done less than one).
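To put some rough numbers on that, here's a tiny Python sketch (the angles are just made up for illustration) showing that wildly different rotations can all leave the pencil in exactly the same place - which is all an observer ever gets to see:

```python
import math

# Three very different rotations of the pencil (in degrees)...
rotations = [50.0, 410.0, -310.0]   # i.e. 50, 360 + 50, and 50 - 360

# ...all leave it resting at exactly the same orientation
for angle in rotations:
    orientation = angle % 360.0
    print("rotated by %7.1f deg -> resting orientation %.1f deg" % (angle, orientation))
```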
Now getting back to the diagram, and the whole "180 degrees" business.
Due to the infinitely many ways we could represent an orientation as a rotation, we need a way of determining the amount of rotation from some given orientation (so that computers are able to do this consistently). The solution here is that we take the smallest equivalent rotation, giving us a +/- 180 degree range to represent all rotations with. This works well for most rotations, assuming that they are small (which many are), but at the expense of long rotations in one direction, and with the risk of flipping when we get to 180 degrees from 0 (as this is a point of ambiguity). However, as the diagram shows us, this range is enough to cover all the possible orientations we can have.
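In code terms, picking the "smallest equivalent rotation" just boils down to wrapping any angle into the +/- 180 range. Here's a minimal sketch of that idea in plain Python (the helper name is made up, and degrees are used for readability):

```python
def smallest_equivalent(angle_deg):
    """Wrap any rotation angle into the (-180, 180] range,
    i.e. the smallest rotation giving the same orientation."""
    wrapped = angle_deg % 360.0     # first reduce to [0, 360)
    if wrapped > 180.0:
        wrapped -= 360.0            # prefer going the shorter way around
    return wrapped

print(smallest_equivalent(270.0))   # -90.0  (the other way around is shorter)
print(smallest_equivalent(540.0))   # 180.0  (the ambiguous half-turn case)
```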
How does this math relate to Blender?
Blender is computer software, which runs on computers, which are really just number-crunching machines.
To best understand how all this works in Blender, you'll need to understand how transforms are "evaluated" (or handled) from the values you set to what you see:
Diagram: Evaluation Pipeline for Transforms
- Yellow bubbles - transforms represented as separate numeric values for each axis. For example, X-Location, Y-Rotation, Z-Scale
- Blue bubbles - "basic" matrices (more detail about matrices later). As a user, you never really get to interact with this level of matrix.
- Green bubbles - "enhanced" matrices. These are matrices with some user-specified effects applied. These effects include constraints and scripts
- Red bubbles - animation system components. These perform operations on the given transforms, modifying them with respect to other transforms and/or settings, etc.
- Black arrows - step can always happen
- Dotted arrows - either one of the steps will happen (the constraints vs transform matrix bubbles) to produce the end result, OR the step is optional (the parent matrix)
- Grey-arrows - animation system inputs
- O/B - abbreviation for "Object or Bone"
- Constraints only ever receive the transforms they use/work on/reference as matrices, and are evaluated late in the game
- Drivers (part of Animato) are able to use the original transform values as well as matrices, but may not be able to take post-constraint effects into account (for their own object)
When describing this diagram, I mentioned "matrices". Most likely, you're sitting there wondering what the heck they are.
Put simply, you can just consider a matrix as a bunch of numbers arranged like a table/spreadsheet, which together are able to represent a bunch of transforms. It's important to remember that they aren't single numbers, and that they don't always directly correspond to the original transforms you deal with (i.e. you can't directly read off rotation and scale, though location is something of an exception).
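As a rough illustration (plain Python here, not Blender's actual internal code), a simple 2D rotation matrix is just four numbers laid out as a little table, and the original angle isn't sitting in any single cell - it has to be extracted from them:

```python
import math

angle = math.radians(30.0)   # the transform we started from

# A 2x2 rotation matrix: a "table" of numbers representing that rotation
matrix = [[math.cos(angle), -math.sin(angle)],
          [math.sin(angle),  math.cos(angle)]]

# No cell contains "30 degrees" directly - the angle has to be
# decomposed back out of the numbers, e.g. using atan2
recovered = math.degrees(math.atan2(matrix[1][0], matrix[0][0]))
print(recovered)   # 30.0 (and atan2 can only ever give +/- 180)
```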
Now, we're really only focussing on rotations here, so let's describe how that works.
Recall how I talked about rotations vs orientations earlier? Well, the matrices in this case store the orientation only. Therefore, to get from a matrix to "simple" x/y/z rotations, you're going to have to "decompose" the matrix to extract those values. That is, you're going to have to extract the orientation component of the matrix, and then determine the rotation angle per axis using the +/- 180 idea.
However, as you now know, you can't expect to cleanly get out the actual rotation that was performed; you can only get the smallest rotation that will give you that orientation.
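You can see this for yourself with Blender's mathutils module (a small sketch, assuming it's run from Blender's Python console): build a matrix from a "long" rotation, then decompose it again.

```python
import math
from mathutils import Matrix

# Build an orientation from a 270 degree rotation about Z...
mat = Matrix.Rotation(math.radians(270.0), 4, 'Z')

# ...then decompose it back into Euler angles
eul = mat.to_euler()
print(math.degrees(eul.z))   # -90.0, not 270.0: only the smallest
                             # equivalent rotation survives the round trip
```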
What does this all mean? Solutions...
Because there needs to be this decomposition step for rotations when working with matrices, and constraints only work with matrices, constraints will only 'see' rotations in the +/- 180 range. Therefore, when using constraints for long-rotating transforms, you're not going to see the results you expected.
Drivers on the other hand do not have this problem, as they can directly read the original transform values. So the answer to trying to make gears is to use drivers instead.
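As a rough sketch of what that looks like with Blender's Python API (the object names "Gear1" and "Gear2" are just placeholders, and the exact paths/expressions will depend on your rig), the driver reads the raw rotation_euler property of the driving gear, so it happily keeps counting past 180 degrees:

```python
import bpy

gear1 = bpy.data.objects["Gear1"]   # the driving gear (placeholder name)
gear2 = bpy.data.objects["Gear2"]   # the driven gear (placeholder name)

# Add a driver to Gear2's Z rotation
fcurve = gear2.driver_add("rotation_euler", 2)
driver = fcurve.driver
driver.type = 'SCRIPTED'

# A variable reading Gear1's raw Z rotation property (not its matrix),
# so values beyond +/- 180 degrees are preserved
var = driver.variables.new()
var.name = "rot"
var.type = 'SINGLE_PROP'
var.targets[0].id = gear1
var.targets[0].data_path = "rotation_euler[2]"

# Counter-rotate: equal-sized gears turn in opposite directions
driver.expression = "-rot"
```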
Why not fix the constraints then? Couldn't you just...? What about...?
Constraints use matrices only, as matrices act as a really efficient and compact way of storing, transporting, and representing transforms. Matrices also mean that transforms can be combined with each other (applying each transform separately would only end up with objects moving locally to themselves), and that's really the only way some constraints can work.
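To make "combined" a bit more concrete, here's a small mathutils sketch (made-up values, written for recent Blender versions where @ is matrix multiplication): stacking transforms is just a chain of matrix multiplications, and afterwards the individual inputs are no longer separately visible in the result.

```python
import math
from mathutils import Matrix, Vector

# Some made-up component transforms
parent     = Matrix.Translation(Vector((2.0, 0.0, 0.0)))
rotation   = Matrix.Rotation(math.radians(45.0), 4, 'Z')
constraint = Matrix.Translation(Vector((0.0, 1.0, 0.0)))

# Combining them is just a chain of multiplications; the result is a
# single matrix in which the original pieces are all mixed together
final = constraint @ parent @ rotation
print(final)
```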
Furthermore, some constraints will therefore end up changing the transforms quite a bit, so the resulting matrices may no longer resemble the original transforms at all. If we just tried looking at the raw transform values, we could be quite far off target by the time we have gone through a few constraints. And, as mentioned just before, the original transform values cannot be updated to reflect some of these intermediate steps.
A special fix?
For the special case of gears, another solution comes up (as alluded to earlier): why not just use the previous state to work out which way to keep going? Sure, this does let this special case work, and it is also what most physics-sim type things rely on (see the sketch after this list). However, it means, among other things, that it won't work when you're:
1) animating and jumping around and/or scrubbing the timeline to get a good feel for the timing
2) running this on a renderfarm, rendering things out of sequence
3) dealing with motion that goes quite far between samples/evaluations, so that we don't actually see the real amounts of change
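For completeness, here is a minimal sketch of that "remember the previous state" approach in plain Python (the helper names are made up). Note that it only works if every frame is evaluated in order, which is exactly the assumption the points above break:

```python
def wrap180(angle_deg):
    """Smallest equivalent rotation, in the (-180, 180] range."""
    wrapped = angle_deg % 360.0
    return wrapped - 360.0 if wrapped > 180.0 else wrapped

def accumulate(previous_total, new_orientation):
    """Extend the running total by the shortest step that reaches
    the newly observed orientation."""
    return previous_total + wrap180(new_orientation - previous_total)

# Works fine while stepping through the frames in order...
total = 0.0
for observed in [90.0, 170.0, -110.0, -20.0]:   # orientation seen at each frame
    total = accumulate(total, observed)
print(total)   # 340.0 - nearly a full turn accumulated

# ...but jump straight to the last frame (scrubbing, renderfarms, etc.)
# and the history is gone:
print(accumulate(0.0, -20.0))   # -20.0, not 340.0
```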
Therefore, this solution really doesn't work unless things are always going forwards, or unless you bake your results once they work - and even then you've still got to define a good 'default'/'rest' state to reset from. All in all, I currently still do not think this is really acceptable, which is why Blender's constraints don't do this. This is also why we don't have a "secondary motion" constraint yet either.
I still hold onto hope that there is some way that we can solve this dilemma in an "elegant" way.