In this post, I'll go over a few of the key issues we need to contend with here, along with some general thoughts I've been mulling over for quite a few years now about what IMO makes an effective UI for editing large amounts of geometry (i.e. vertices, control points, etc.).
Multi-Object Editing Challenges
The two biggest technical challenges are:
1) Almost every operator (i.e. all the tools in Blender that you use) is coded to expect there to be a single "active object" that all the geometry present in edit mode comes from.
As a result, many shortcuts are taken, and assumptions are made about where to find stuff (e.g. other settings we may need, or things like the transforms to apply to points to convert between local space <-> global space <-> screen space). Solving all of these issues will not be easy - though there are some ways we can "fake" things to minimise the amount of code rewriting we'd have to do. (In essence, we'd want to keep the existing code, and tell it what object to operate on instead of letting it try to find the active object; that way, we can make these operators operate on multiple selected objects at once.)
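To make that idea concrete, here's a rough sketch in Blender's Python API (with a hypothetical helper name, and using the 2.8-style '@' matrix operator) of the "pass the object in explicitly" pattern: instead of the tool reaching for the one active object, we loop over the selected objects and hand each one to the same editing routine, converting the offset into each object's local space along the way.

```python
import bpy
from mathutils import Vector

def nudge_selected_verts(obj, world_offset):
    """Hypothetical per-object edit: move an object's selected vertices
    by a world-space offset. This touches obj.data directly, so it assumes
    Object Mode (edit-mode data would go through bmesh instead)."""
    # Vertex coordinates are stored in local space, so convert the
    # world-space offset using the inverse of the object's world matrix.
    local_offset = obj.matrix_world.inverted().to_3x3() @ world_offset
    for v in obj.data.vertices:
        if v.select:
            v.co += local_offset

# The key change: don't ask "what is the active object?" - run the same
# routine once per selected mesh object instead.
offset = Vector((0.0, 0.0, 0.1))
for obj in bpy.context.selected_objects:
    if obj.type == 'MESH':
        nudge_selected_verts(obj, offset)
```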
2) What to do with the Undo Stack
Most modellers will know something about how the undo stack is currently split between "Global" and "Edit Mode". You may even have heard of (or encountered) the problems with shapekeys. But in short, there are no really easy/simple solutions here. Whatever happens, it will be tricky work, and will likely involve a big refactor of this chunk of code. I'd probably rate this as being trickier to do than #1 (on a scale of technical hardness), while #1 is more of a "bulk" problem (with individual tricky cases that will throw some wrenches into the mix).
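As a toy illustration (emphatically not Blender's actual undo code) of why the split stack gets awkward, consider a model with one global stack plus a per-session edit-mode stack that gets collapsed when you leave the mode; with several objects in edit mode at once, each edit step suddenly needs to know whose data it touched, and it's less obvious what a single undo should roll back:

```python
class SplitUndo:
    """A toy model of a split undo system (illustrative only)."""

    def __init__(self):
        self.global_steps = []  # coarse, object-level entries
        self.edit_steps = []    # fine-grained steps inside the current edit session

    def push_global(self, label):
        self.global_steps.append(label)

    def push_edit(self, obj_name, label):
        # With multiple objects in edit mode, every step now has to
        # remember *which* object's data it touched.
        self.edit_steps.append((obj_name, label))

    def exit_edit_mode(self):
        # The whole edit session collapses into one coarse global entry;
        # the fine-grained history inside it is gone.
        if self.edit_steps:
            self.global_steps.append(f"edit session ({len(self.edit_steps)} steps)")
            self.edit_steps.clear()

    def undo(self):
        # Which stack does a single "undo" pop from?
        stack = self.edit_steps or self.global_steps
        return stack.pop() if stack else None
```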
Of course, there are also lots of non-technical issues that we'd have to solve. Specifically, before actually embarking on this, some of the questions that need to be addressed include:
3) Should we be doing this in the first place? What problems does this try to solve? Is it really necessary to support this, or can we do without?
Judging from how frequently this has been requested in the past, there is certainly some demand for such functionality. But is this a true need, one that requires us to sink lots of man-hours into retooling a large codebase to support it?
TBH, I still have my doubts about the merits of doing so. That said, there are certainly tools and use cases where I can see that being able to support multi-object editing would be a significant improvement. Several examples come to mind:
i) Moving several overlapping/connected components (e.g. window frames vs holes in walls) defined in several different objects at the same time, meaning that everything stays in sync
ii) Performing global geometry edits to multiple small objects (e.g. a cloud of rocks/debris) at a time - like sculpting some sort of distortion/blast wave, bulk applying smoothness/roughness/decimation effects to these, performing UV/projection mapping across all of these at once, etc.
iii) Animating multiple interacting characters, with probable multi-prop interactions. (Aaargh! This one is probably the subject of its own dedicated post in future)
All in all, there is probably a case here for supporting this. Not to mention that, from a UX perspective, it would reduce friction, given how people have become accustomed to working with entities in other software.
4) How should it work?
Now this is the question that we really should be dedicating a lot of time to, and it's what I really want to talk about today. To save myself typing out a lot of the same things again, you can refer to my comments on the design task about how IMO some of these things could work (mainly from a technical / mental-models perspective).
In particular, I'd like to draw your attention to my comments regarding "Toggling Objects". I'll talk more about this in a bit, but I think it's critical that during this process, we do not overlook the important advantages that Blender's current approach has over every other tool on the market - the "industry standard" approach.
On a side-note, oh, how I hate that term and the connotations of it... you only need to look at all "modern" housing being built here in NZ these days to realise that "to standard specs" means that you get a house that has low ceilings (where you can literally touch/scrape your head on the ceiling), narrow hallways (~0.8-1.1m IIRC), narrow doorways (~0.9-1.0m IIRC), and narrow toilet rooms (~0.9m wide)... anyway, I digress.
Editing Modes and The Importance of Exclusive Editing
For many years, I've thought long and hard about why exactly traditional graphics-editing programs and tools suck so much. Let's be very clear here, several aspects of their core interaction paradigms suck. Big time.
I'm going to quote the example I wrote in the design discussion about one of the prime "worst aspects" of the way that traditional "standard" tools work:
Let's say you've got this path in Inkscape/MsOffice/etc. with heaps of vertices that you're trying to select + tweak. It takes you a very long time to carefully select the vertices you want to modify. And then, you click slightly off from one of the next verts you were going to select (and let's face it, we're human, so this will and does happen a lot).
In one of the aforementioned apps, what often happens in that situation is a cascade: 1) you lose the selection on the object you've been carefully working on, along with all the carefully selected points, 2) the closest neighbouring object gets selected instead and made the active one, and 3) the nearest point(s) on that neighbouring object get selected instead.
Lo and behold, you may have just lost 5-10 minutes of work, and now have to also get out of edit mode (as trying to get back to the original object is actually often impossible when you really try, despite being so easy to do accidentally!). Now, once you're back at object level, you need to fix up the object-level selection (which may not be easy with lots of tightly interlocking parts that sit almost planar to each other - e.g. a zig-zagging pinball track), before getting back into edit mode to try and get the same selection again. In short, it's my pet peeve: the "lost/misdirected" selection focus while trying to edit the geometry of a particular object.
Initially, I thought it was just that they lacked the Selection Operation/Key vs Action/Editing Operation/Key separation (aka Right Click Select) that Blender famously has. The reason why the Selection/Action split is a good idea is that your key mappings are not overloaded to do different things given slightly different conditions/context. For instance, the "standard" LMB behaviour actually consists of a cluster of 3 different operations - two that are the opposites of each other, and a third one that does something completely different. What are they?
i) A short click, over an item, Selects
ii) A short click, on nothing, Deselects Everything
iii) A long click-drag, initiates the "Move" operation
Can you see the flaw with this scheme yet?
...
How about now?
...
Perhaps it would help if I provided a hint, and invoked "Fitts' Law" - that is, the smaller the target, the harder it is to select it (i.e. it takes you longer to do it, or you have to expend more effort to carefully do so).
Now, let's think about what exactly the targets we're talking about here are: vertices, shown as tiny 3-5 px dots (dots!). To put that in perspective, on a "modern, 27 inch HD screen", a "small" 16x16 px icon is smaller than the tip of your pinkie (i.e. around 3 mm); a dot is a fraction of that! (For reference, to make Blender's keyframes in the Dopesheet easily selectable, I have to add a 7px buffer radius to each point, bringing it closer to the 16x16 "small icons", or else you'd just click and miss most of the time and get really frustrated!)
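To put some rough numbers on that, here's the Shannon formulation of Fitts' law, with an assumed 300 px of cursor travel; the exact constants don't matter, only how quickly the difficulty climbs as the target shrinks:

```python
from math import log2

def index_of_difficulty(distance_px, width_px):
    # Shannon formulation of Fitts' law: MT = a + b * log2(D/W + 1);
    # the log term is the "index of difficulty", in bits.
    return log2(distance_px / width_px + 1)

D = 300  # assumed cursor travel distance, in pixels
print(index_of_difficulty(D, 16))      # ~4.3 bits: a 16x16 px "small icon"
print(index_of_difficulty(D, 4 + 14))  # ~4.1 bits: a 4 px dot plus a 7 px buffer on each side
print(index_of_difficulty(D, 4))       # ~6.2 bits: a bare 3-5 px vertex dot
```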
Ok, so I hope you're starting to get the picture: We've got a field of tiny targets - so small that they approach the point where they actually pose a significant challenge for our limited motor skills to quickly and accurately target, and we often want to select a careful subset of these to perform some operation on them. However, the input/control channel we've got for doing this - i.e. the left-mouse button state, Button Up/Off and Button Down/On - is a binary channel that is mapped to 3 overloaded operations that are highly context dependent. The computer literally has to guess what your intentions are, based on where the Button Down state occurred, and how long it was until Button Up happened again. Oh, and remember, I mentioned earlier that two of those states are like mortal enemies of each other (i.e. Selection and Deselection depend only on whether a tiny target that we often have difficulty targeting with 100% accuracy sits under the cursor), and that the third operation is like a useful shortcut, but could end up backfiring/being misinterpreted to mean "deselect all" or "select some random point from another object"...
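Here's that guessing game sketched as handler pseudocode (hypothetical, not any real toolkit's API), just to make the overloading explicit; a few pixels of error in where the button went down is all it takes to flip between the three branches:

```python
from math import dist

DRAG_THRESHOLD_PX = 4  # assumed: how far the cursor must travel to count as a "drag"

def handle_lmb(press_pos, release_pos, hit_test):
    """hit_test(pos) returns the element under `pos`, or None for empty space."""
    target = hit_test(press_pos)
    dragged = dist(press_pos, release_pos) > DRAG_THRESHOLD_PX

    if target is not None and dragged:
        return ("MOVE", target)        # (iii) click-drag over an item: start moving it
    elif target is not None:
        return ("SELECT", target)      # (i) short click on an item: select it
    else:
        return ("DESELECT_ALL", None)  # (ii) short click on nothing: lose everything
```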
It's not hard to see how easily you can lose a selection you've spent the last 10 minutes working on, because you slightly mis-clicked when starting to try and drag said selection to move it a short distance... multiple times (as grid snapping makes it hard to get it where you want, but the overlaid ghosts make it hard to see the final effect, so it's hard to judge the effect of the move until you've finished it), opening you up to more risk of accidentally mis-clicking again. (Another pet peeve is getting your selection overwritten by accidentally selecting something from a lower layer that now thinks it gets to become the top-most item, making reselecting the smaller items near impossible!) Truth be told, I've lost so many hours to these kinds of quirks that it's not really funny anymore - I really cannot say that "standard" behaviour is actually such a good idea at all!
But really, separation of selection and action isn't even that critical when you think over the overall UX in detail. While I do also have a thing against drag and drop (argh... far too easy to accidentally activate - it's especially nasty on my phone some weekends, when I can pull it out to find that the wallpaper's changed, half of my shortcuts have gone missing, and I've nearly butt-dialled 3 gibberish text messages + 1 outgoing phone call to some VIPs that shouldn't be disturbed), the problems from overloading can be mitigated/contained somewhat simply by removing "deselect by clicking in empty space" from that list. All of a sudden, your operation is half as deadly as it was before, and I probably wouldn't insist so strongly about LMB vs RMB issues.
Still, there's the problem of an errant click performing a "select" on some element that's outside the immediate scope that you care about; that is, while you're thinking about selecting vertices, you've suddenly done an object select, but the object you've now selected may not be one you actually care about (e.g. a relatively big background item, that makes selecting small foreground verts/elements nearly impossible). That problem still remains when you have no "edit modes" (or ones that are very fluid/permeable, where you can "fall out" of them by just looking at them askew, or by botching a click).
Thus, having thought at length about this, I now think that there's something more fundamental here than the Selection/Action thing: Specifically, Blender's editing modes provide a powerful mechanism for ensuring that we *only* operate on parts of a nominated entity/object at the level of granularity (vertex/edge/face) that's suitable for the task at hand - that is, you can't accidentally switch from one object to another, particularly when you've just spent lots of time working on a particular object selecting points to edit.
I'd argue that we really ought to make sure that this aspect of Blender's general UX philosophy remains. We absolutely must retain the use of the "select data -> enter editing mode to exclusively operate on that data at a finer level of granularity" as a core underlying principle. That and avoiding "blank space deselects" (ugh!)....
Mixed Selections - Geometry Points vs Manipulators
Earlier in the 2.8 project, there was also quite a bit of discussion/experimentation with lots of fancy new "Manipulator" widgets - interactive overlays that live in the viewport and facilitate a more point-and-click style of interaction.
As you should hopefully be starting to realise by now, I have some serious reservations/issues about how such things would work. You should probably be able to guess what I'm about to harp on about now too - again, the chief concerns I have here are Ambiguity and Overloading:
* When exactly are the hotkeys you're using intended to be captured by a particular manipulator (BTW, all bets are off if you've got more than one of the bastards visible on screen near the cursor)?
* How do I know that some mouse press I'm performing is supposed to be interpreted as a "select this thing under the cursor" vs "select the manipulator handle" vs "select the object behind/between the manipulator's arms" vs "make that nearby handle jump here"?
* What about when you click-drag? Is that a signal for the manipulator to start some operation along one of its handles, or did you intend to box select another region for it to operate on?
Motion Path Editing
Since I'm ranting about manipulators already, it's not much of a stretch to bring "Editable Motion Paths" into the fray too. Because really, they share many of the same interaction/UX problems that manipulators have.
If we do it the naive way - i.e. having editable motion paths overlaid in object/pose mode, basically only supporting mouse-based operations for select/deselect/move (aka the unholy triad, with all the flaws I've pointed out) - you've got a clunky tool.
* One wrong click, and you're changing operating levels, from "motion path vertices" to selecting objects/bones that activate different sets of motion paths. I don't need to restate my arguments for why this is bad, I hope ;)
* By only allowing mouse-based operations, you're limiting the options you have for achieving smooth curves. That's because you'd need to have operators operating on different data levels (i.e. bones vs path vertices) - so things like nicer box select, smooth operators, sketch-based shaping tools, are all basically either a non-starter, or really awkward to introduce without causing conflicts with other stuff / or feeling really "tacked on".
For this reason, IMO, if you're going to support path editing, you really need to have a separate mode for it. That way, you have the space to actually support the "full-blown", rich-tooling style that Blender offers for operating on different types of data. From there, this leads into lots of other very interesting issues regarding cross-frame editing, and so forth (e.g. things like Multiframe Editing in the GP branch). Those are worth a deeper examination in their own right - look out for some posts about that sometime soon (I hope) :)
Now, assuming we've got the UX side of things sorted, there's another reason why I can't say that I'm that keen on introducing this for Blender: the technical side of solving the tricky IK-style problem of converting edits to the shape of a motion path into the set of transforms on the connected bones/rig. Unfortunately, this is really not a simple problem - and it is complicated to no end by the complexity of rigs (i.e. lots of bones, all the local/inherit stuff, with complex interdependencies/constraints/drivers/parent-contributions that mean that certain tweaks cannot actually work as you plan/intend, along with the fact that you then have to make it all flow together smoothly so the interpolation doesn't do weird things in other parts). It'd be a slightly different story though if everyone used simple rigs (and not monstrosities like BlenRig or Rigify - no offense to the riggers behind these beasts, but the complexity of those things is somewhat freakish), where we could just evaluate them simply on each frame without worrying too much about interpolations/etc.
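For context, the easy half of the problem - building the path in the first place - is just sampling the evaluated rig frame by frame, roughly like the sketch below (real bpy calls, 2.8-style '@' matrix math, with the object/bone names assumed). It's the reverse mapping, from edited path points back through all those constraints/drivers/parents to bone transforms, that has no tidy answer.

```python
import bpy

def sample_bone_path(arm_obj, bone_name, frame_start, frame_end):
    """Sample a pose bone's head position in world space over a frame range."""
    scene = bpy.context.scene
    pbone = arm_obj.pose.bones[bone_name]
    path = []
    for f in range(frame_start, frame_end + 1):
        scene.frame_set(f)  # re-evaluates constraints, drivers, parent chains, etc.
        path.append(arm_obj.matrix_world @ pbone.head)
    return path

# e.g. path = sample_bone_path(bpy.data.objects["Armature"], "hand.L", 1, 100)
# Editing `path` and solving *back* to bone transforms is the hard, IK-like part.
```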
There are, though, some relatively simple tools that I think we *can* provide, to give animators new in-viewport controls for shaping the over-time aspect of their performances. But they are a lot more constrained than the "freeform grab and reshape the peaks of the path" approach that many people instinctively think of when talking about motion path editing. While these tools may look less flashy (especially in tech demos), I'm also reasonably confident that they may actually be the kinds of things that we want to be doing to achieve good results.
Comments
Hi Aligorith,
(For the record, I mostly do rigging and some animation in Blender, as well as modeling/sculpting.)
I get what you say about overloading and I also tend to think modes are a great strength of Blender's interaction model.
However it is important to consider that in the heat of the action - when tweaking an animation, scrubbing back and forth, changing the viewpoint, moving one keyframe slightly - we may want to avoid going through too many steps: selecting a "motion path object", switching to edit mode and making adjustments, then exiting edit mode and selecting the armature again sounds like a process long enough for the animator to have forgotten what they initially intended to do in the first place.
While contexts are good, I think this kind of action lies exactly in the same thought process and should not be externalized - instead some other solution can be found to end all ambiguity as to which click is going to do what, operate on which data, etc. For instance, a modal tool like the ones Campbell recently conceptualized looks like a good solution, doesn't it? A mode within the mode, bound to a hotkey and limiting interactions to the selected bone's motion path data, something like your pose sketching tool.
As for other manipulators and the ambiguities they introduce - seeing as it is incomparably easier to select with LMB when using a tablet (which a good proportion of professionals do nowadays), I'd be satisfied with a global "manipulator switch" button allowing me to select stuff with nothing in the way, and toggle the manipulator on whenever I need it.
I wish you good continuation and needless to say am super grateful for all the goodies you bring to Blender on a regular basis.
Cheers,
Hadrien
Right, I get what you're saying.
I wasn't actually proposing that motion paths need to be separate, full-blown "Blender objects" per se (and indeed, that would actually bring a whole set of other challenges and problems). Instead, I was merely thinking that perhaps we need something like the existing Edit/Pose/Sculpt Modes that objects have, so that we focus on editing the paths only - basically, it would be just a keystroke away (to toggle in/out of that mode), so again, nothing onerous.
Regarding the "Tools" system that Campbell was experimenting with - that is really a whole topic in and of itself, I think. I tried to avoid bringing it into this discussion, as it's currently postponed/on-ice. That said, I do agree that it does seem like a very good fit for things like motion path editing, object pivot point editing (?), and also for many other interactive tools that feature modal interaction with some kind of manipulator widget, where the manipulator itself has quite a few control points/commands that you'd want to be able to activate.
What is not so clear to me atm (and one of the reasons why I suspect it's currently postponed, besides lack of manpower/resources to manage everything) is what the tradeoffs are for Blender's overall UX feel - specifically, right now, a lot of users really like how many of Blender's tools (e.g. transform, extrude, loopcut) provide a very high performance ceiling because, like Vim, they form a kind of vocabulary where you activate the command with a keyboard shortcut, activate some options (again keyboard operated), use directional mouse movements to adjust, and then confirm by clicking. However, these tools will likely encourage a lot of just point-n-click (i.e. good for discoverability and also casual users, but also less fluid/chainable). On second thought though, it's probably not a binary/either-or problem: if these tools deactivate once they've performed a particular operation (e.g. for extrude/loopcut, the manipulator goes away until you reactivate, allowing you to select geometry with the full set of tools/operators), it might not be quite so bad.
But then, we still have the problems of:
* What operators stay as operators? Which ones become tools or part of a tool?
* And when should that tool actually need to become a whole new editing mode?
I see. I don't know the difference it makes internally, but as far as I understand, the working difference between a mode and a "persistent modal tool" as proposed by Campbell is almost just semantics. Correct me if I get it wrong, but I see modes as essentially a way to offer an "activity/workflow-centric" interaction model & keymap for a specific task. Like you mentioned, it prevents accidental switching to some other totally irrelevant action (something Maya users know very well). However doesn't it sound like overkill to dedicate an entire new mode to motion path editing? Unless maybe you see this becoming part of a larger toolset necessitating its own context?
On this subject, I can't quite make out from the videos how you implemented the pose sculpt brushes. Are they a separate mode? Or was this out of scope for your proof of concept?
Deciding which operator gets the tool treatment is a good question. I very much like the straightforward way of doing things in Blender, but some operations are very complicated to tweak without a manipulator or a dedicated context. Most require making a selection beforehand, however in the case of the motion path, if vertices/keyframes are made not to be selectable so as not to interfere with the rest of pose mode (which I think makes sense), then I guess a tool (or a mode) is called for.
I imagine the interaction would go something like this: in pose mode, select bones, hit the D key to enter the "adjust trajectory" tool, which is brush-based and allows tweaking motion path vertices with click-and-drag. Or, default the hotkey to spawn a pie menu containing pose sculpt brushes along with an "edit motion path" brush. Exiting a tool could re-use the current "exit operator" hotkey (spacebar) for consistency.
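A rough sketch of how such a binding could be registered from an add-on, purely for illustration: the "pose.adjust_trajectory" operator idname is hypothetical (no such tool exists), while the keymap registration calls themselves are standard bpy API.

```python
import bpy

# Hypothetical: bind a (non-existent) "adjust trajectory" tool to D in Pose mode.
wm = bpy.context.window_manager
kc = wm.keyconfigs.addon
if kc:
    # "Pose" is the existing Pose-mode keymap; mode-level keymaps are
    # registered with space_type 'EMPTY'.
    km = kc.keymaps.new(name="Pose", space_type='EMPTY')
    kmi = km.keymap_items.new("pose.adjust_trajectory", type='D', value='PRESS')
```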
The undo stack is a thing of its own... How many times have I seen my previous actions simply flushed because I had switched one time too many between object and edit mode?
Conceptually, the undo stack would also record mode changes so that undoing a lot of actions would make the user go through all the steps they had taken, in reverse order, not discriminating between edit mode changes and object mode changes. I understand concept and implementation are different things, though.
Right.
Indeed, it's not easy getting these things right (e.g. personally, I really hate how the new command-grouping behaviour picks the *first* frame change event to store, and not the *last* one - it means I'm constantly inserting keyframes on the wrong frame after undoing. I've tried patching it, but unfortunately, it's a lot trickier than it initially seems)
The edit-mode toggling example you give though is probably one of those that we could optimise more (like we've done for frame changes now).
"What problems does this try to solve?"
This is the fundamental question, and there is absolutely zero point in trying to solve the problem until you have an answer to it.
The vast majority of use cases of multimesh editing can be solved by simplifying join/separation operations, which is mostly a matter of improving armature UI (because most object-level transformations could be done as well, or better, with an armature instead, if it was as easy to do as an object-level transform), and of improving the use of vertex groups, both for modifier modulation (why can't you specify a subsurf vert group?) and as targets for other operations (why can't you give a shrinkwrap modifier a mesh + VG as a target?)
There are some multi-edit problems that are just not solvable without shaking your head and saying, "I dunno lol." What happens when you have two non-uniformly scaled objects with a bevel modifier and you join them with a face? There is no solution to that problem. You make an arbitrary decision. (But it would be unwise to restrict use just because you don't know what to do in a few unlikely situations.)
Ideally, joining and separating objects would be a non-destructive operation. It is not impossible to do that. You can make VGs for each object and take notes on their pre-join state. It's just a UI problem. You can do this with note paper and a calculator, if you have the patience. It would be nice if Blender did it for us.
But for me, what I would want out of a multi-edit would just be a simplified UI. Rigging a spline IK system with hooks is a nightmare because I cannot snap cursor to various elements without a huge number of state changes. This difficulty makes it so that I simply don't bother with this as much as I would like. There are other potential solutions to this problem; there's no reason curve-to-mesh couldn't preserve information, there's no reason that there's no information-preserving armature-to-mesh, no mesh-to-armature.
"It'd be slightly different story though if everyone used simple rigs"
The number one problem with complex rigs is the fact that angles are evaluated as per-axis Euler angles. Most Blender riggers don't really understand Euler angles and what they mean. They give their eyes 2-axis limit rotations and expect interpolation to give them a circle. This, too, is solvable, and not just by demanding they learn how Euler, dawg. Angles as used by riggers are not best expressed as Eulers. They are best expressed as the intersection of ellipsoids and cones and planes, of parametric primitives.
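As a quick toy illustration of that (assuming an eye bone looking down -Y with +/-30 degree limit rotations on its X/pitch and Z/yaw channels, using Blender's mathutils and 2.8-style '@' math): the reachable gaze directions form a bulged square rather than the circular cone the rigger was picturing - at the corners of the limit box the eye sits roughly 41 degrees off rest, not 30.

```python
from math import degrees, radians
from mathutils import Euler, Vector

rest = Vector((0.0, -1.0, 0.0))  # assumed rest gaze direction

for rx, rz in [(30, 0), (0, 30), (30, 30), (-30, 30)]:
    gaze = Euler((radians(rx), 0.0, radians(rz)), 'XYZ').to_matrix() @ rest
    print(f"rx={rx:+d}  rz={rz:+d}  ->  {degrees(gaze.angle(rest)):.1f} deg off rest")
# The single-axis limits give exactly 30.0 deg, but the corner cases give ~41.4 deg.
```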
It's not like that's the only problem. There are a few others (locked tracks are harder to use than they should be, floors are harder to use than they should be, remapping transforms is harder than it should be). But they're all solvable problems. Most of them are solvable in 2.79 Blender, they're just nightmarishly complex with the UI. Hopefully, some kind of rigging nodes system will be powerful enough that we can solve them just once and publish the node group, rather than having to solve them with Rube-Goldberg systems of coincident bones.