So, what exactly are these quirks?
1) Each "Vertex Group" actually has 2 parts, instead of being a single entity
That is, there are the "name" tags on Object-level, and then the actual geometry data on Mesh/Curve/Lattice level. These are linked to each other based solely on their order/offset from the start of each list (i.e. an "index" value). Most of the time, when everything happens through the tools in the UI, there aren't any problems. However, from time to time, it is possible to get into situations where the object-level name tags and the underlying geometry data end up out of sync. This of course gives rise to some wacky/unintuitive consequences, which users have only recently started running into (as documented in the links above).
Rationale:
AFAIK, this design dates back over 12 years. It was done this way because we needed a way of having vertex groups for different types of objects (not just meshes). Hence, the name labels were stored on object level instead, so that they would be easier to access in a consistent manner.
Critique:
Sure, this design is wacky, and IIRC it is something that Ton has mentioned we should change sometime. However, by and large, this is an example of the kind of backwards-compatibility-breaking change that is kind of on the backburner while there are more pressing issues for common usage still to be tackled.
Technical Details (the nitty gritty of the implementation):
1) Object->defbase = list of bDeformGroup's
These are the object-level "labels" for vertex groups. This is where the name of each vertex group lives.
2) Mesh -> MDeformVert + MDeformWeight
These are the mesh-level data representations of vertex groups.
- Each MDeformVert corresponds to one vertex in the mesh. For each vertex, the MDeformVert holds an array of MDeformWeights.
-- Each of those MDeformWeights records that vertex's membership in (and weight for) some vertex group, identified via "MDeformWeight.def_nr" (the 0-based index of the vertex group within the owning object's vertex group list).
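To make the index linkage (and how it can break) concrete, here is a minimal Python sketch. The field names (defbase, def_nr, weight) mirror the DNA structs described above, but the little classes themselves are hypothetical stand-ins for illustration, not Blender's actual code or API:

```python
class BDeformGroup:
    """Object-level 'label' for a vertex group (cf. bDeformGroup)."""
    def __init__(self, name):
        self.name = name

class MDeformWeight:
    """One vertex's membership in one vertex group (cf. MDeformWeight)."""
    def __init__(self, def_nr, weight):
        self.def_nr = def_nr  # 0-based index into the object's defbase list
        self.weight = weight

# Object level: just an ordered list of names
defbase = [BDeformGroup("Arm"), BDeformGroup("Leg")]

# Mesh level: one list of MDeformWeights per vertex (cf. MDeformVert)
vertex_weights = [
    [MDeformWeight(0, 1.0)],                         # vertex 0: "Arm" only
    [MDeformWeight(0, 0.5), MDeformWeight(1, 0.5)],  # vertex 1: both groups
]

def group_name(dw):
    """Resolve a weight back to its group name via the index."""
    return defbase[dw.def_nr].name

print(group_name(vertex_weights[1][1]))  # -> "Leg"

# The fragile part: remove a name tag WITHOUT remapping the mesh-level
# def_nr values, and every stored index silently points at the wrong
# slot (or falls off the end of the list entirely).
del defbase[0]
print(group_name(vertex_weights[0][0]))  # -> "Leg", but vertex 0 was
                                         #    actually weighted to "Arm"!
```

This is exactly the kind of desync described above: nothing in the mesh data itself knows which name a def_nr was meant to refer to, so any tool that edits one side without the other corrupts the mapping.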
2) Each Shape Key actually stores an entire copy of the mesh and not just the deltas
That is, shape keys do not store relative positions, and they do not only store the vertices which were explicitly moved to create the new shape.
Rationale:
I'm purely speculating on this one, since I haven't actually heard/seen/read any of the actual reasoning behind this anywhere. But here goes...
The original "Shape Keys" feature (aka "Absolute Shape Keys") came first. From what I understand from talking with Ton about this years ago, this was conceived as being "keyframes for mesh geometry", much like AnimAll does now, except that we didn't have one FCurve per coordinate of each and every vertex. You've got to remember that this was way back in the early 1990's, when directly editing mesh geometry to deform it was still in vogue (and actually technically feasible to do by hand).
Incidentally, this is why shape keys used to be drawn in the IPO Editor as horizontal lines, staggered at different y-values: The vertical height represented which frame they occurred on, as the special "Time" curve (displayed in place of the Basis shape) would intercept these lines -- by adjusting that curve, you could control when each of those absolute shape keys got fired.
Admittedly, I never really understood how to use this back in the day. In fact, I remember finding the combination of these wacky horizontal lines and the tutorials (aka the "2.3 manual") immensely confusing when trying to figure out how to animate "Relative Shape Keys" (more on these in a sec). I never did manage to get it to work back then... Still, for this original use case, it made sense that each shape key stored the locations of all vertices, and that none of these keys stored relative values.
Fast forward a few years, and "Relative Shape Keys" ("RVK's", IIRC) were grafted on top of this system to take advantage of much of the machinery already in place. For Blenderheads from the post-2.5 era, it will probably sound quite foreign that there is actually more than one type of shape key in Blender, and that the ones you normally get out of the box these days are in fact these RVK's. Indeed, the fact that the other type even exists at all is quite obscure!
Now, since RVK's are built on top of the old architecture, the code still ended up creating shape keys by simply capturing all the vertex positions and baking them into the appropriate data structures. Then, at runtime when evaluating shape keys, we'd calculate the effect of each key by first computing the key's offset relative to the basis shape, and then reapplying a scaled copy of that offset on top of the basis:
deformed = basis + (key - basis) * key.strength
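As a sanity check of that formula, here is a tiny self-contained Python sketch of the evaluation step. Coordinates are plain (x, y, z) tuples and the function name is illustrative, not Blender's actual implementation:

```python
def evaluate_shapekeys(basis, keys):
    """Apply relative shape keys per: deformed = basis + (key - basis) * strength.

    basis: list of (x, y, z) vertex coordinates
    keys:  list of (key_coords, strength) pairs, where key_coords stores
           the FULL deformed shape (absolute positions), not deltas
    """
    result = list(basis)
    for key_coords, strength in keys:
        result = [
            tuple(r + (k - b) * strength for r, k, b in zip(rv, kv, bv))
            for rv, kv, bv in zip(result, key_coords, basis)
        ]
    return result

# Basis shape: two vertices
basis = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
# The key stores every vertex's absolute position: vertex 1 moved up by 2
key   = [(0.0, 0.0, 0.0), (1.0, 2.0, 0.0)]

# At strength 0.5, vertex 1 ends up halfway towards the key shape
print(evaluate_shapekeys(basis, [(key, 0.5)]))
# -> [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
```

Note that the subtraction `(key - basis)` happens at evaluation time, every time, precisely because the stored key data is absolute.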
Comments:
Although storing the full geometry is a bit memory intensive, there are some benefits to this approach:
1) If we'd gone with an "only changed" approach, on evaluation, we'd need to be jumping around the mesh data performing a whole bunch of lookups. This introduces some overhead (though I'm not sure exactly how much of a penalty that ends up being vs what we currently do).
2) Our current approach is what allows the feature where you can specify an alternative basis shape for a key to work at all. Since the shape keys weren't actually created with any particular basis shape in mind (from the computer's perspective, I mean, not the artist's), this sort of remapping can happen effortlessly, at no additional cost.
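To see why the remapping is free: because each key stores absolute coordinates, the delta is computed at evaluation time against whichever basis you pick, so swapping in an alternative basis requires no conversion of the stored data. A toy single-vertex example (the function and variable names are illustrative only):

```python
def eval_key(basis, key, strength):
    # deformed = basis + (key - basis) * strength, for one (x, y, z) vertex
    return tuple(b + (k - b) * strength for b, k in zip(basis, key))

key = (1.0, 2.0, 0.0)      # absolute position stored in the shape key

basis_a = (1.0, 0.0, 0.0)  # original basis shape
basis_b = (1.0, 1.0, 0.0)  # alternative basis shape

# Same stored key, different deltas, purely from choosing a different basis:
print(eval_key(basis_a, key, 0.5))  # -> (1.0, 1.0, 0.0)
print(eval_key(basis_b, key, 0.5))  # -> (1.0, 1.5, 0.0)
```

Had the keys stored deltas baked against one specific basis, changing the basis would instead require rewriting every key's data.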
3) Backwards compatibility with all previous Blender versions
4) The situation doesn't appear to be that bad either. Apparently, even Maya does this [4]
Incidentally, speaking of the paper linked there ("Compression and direct manipulation of complex blendshape models using HSS representations and GPU acceleration" by Irving et al.), there have been efforts at WETA and presumably other production houses to address these kinds of issues.