In this post, I'll go over a few of the key issues we need to contend with here, along with some other general thoughts I've been mulling over for quite a few years now about what IMO makes an effective UI for editing large amounts of geometry (i.e. vertices, control points, etc.)
Multi Object Editing Challenges
The two biggest technical challenges are:
1) Almost every operator (i.e. all the tools in Blender that you use) is coded to expect a single "active object" that all the geometry present in edit mode comes from.
As a result, many shortcuts are taken, and assumptions are made about where to find stuff (e.g. other settings we may need, or things like the transforms to apply to points to convert between local space <-> global space <-> screen space). Solving all of these issues will not be easy - though there are some ways we can "fake" things to minimise the amount of code rewriting we'd have to perform. (In essence, we'd want to just keep the existing code, and tell it what object to operate on instead of letting it try to find the active object; that way, we can make these operators operate on multiple selected objects at once)
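To make that "tell it what object to operate on" idea concrete, here's a deliberately simplified sketch (plain Python, not the real Blender API - `EditObject`, `translate_verts`, and `run_on_selection` are all made-up names for illustration). The point is just the shape of the refactor: the per-object edit code stays as-is, and a thin wrapper feeds it each selected object explicitly instead of having it go looking for a global "active object".

```python
# Hypothetical sketch: existing per-object edit code, unchanged,
# driven by a wrapper that loops over the selection.

class EditObject:
    def __init__(self, name, verts):
        self.name = name
        self.verts = verts  # list of (x, y, z) points in local space

def translate_verts(obj, offset):
    """Per-object edit code: knows nothing about any 'active object';
    it just operates on whatever object it is handed."""
    dx, dy, dz = offset
    obj.verts = [(x + dx, y + dy, z + dz) for (x, y, z) in obj.verts]

def run_on_selection(selected_objects, offset):
    """Wrapper: feeds each selected object to the existing code,
    so the operator effectively works on the whole selection."""
    for obj in selected_objects:
        translate_verts(obj, offset)

a = EditObject("Cube", [(0, 0, 0), (1, 0, 0)])
b = EditObject("Sphere", [(5, 5, 5)])
run_on_selection([a, b], (1, 2, 3))
print(a.verts)  # [(1, 2, 3), (2, 2, 3)]
print(b.verts)  # [(6, 7, 8)]
```

In reality the "object" being passed around would carry a lot more baggage (matrices, settings, undo handles), but the basic pattern - explicit parameter instead of implicit global - is the same.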
2) What to do with the Undo Stack
Most modellers will know something about how the undo stack is currently split between "Global" and "Edit Mode". You may even have heard of (or encountered) the problems with shapekeys. But in short, there are no really easy/simple solutions here. Whatever happens, it will be tricky work, and will likely involve a big refactor of this chunk of code. I'd probably rate this as being trickier to do than #1 (on a scale of technical hardness), while #1 is more of a "bulk" problem (with individual tricky cases that will throw some wrenches into the mix).
Of course, there are also lots of non-technical issues that we'd have to solve too. Specifically, before actually embarking on this, some of the questions that need to be addressed include:
3) Should we be doing this in the first place? What problems does this try to solve? Is it really necessary to support this, or can we do without?
Judging from how frequently this has been requested in the past, there is certainly some demand for such functionality. But is this a true need, one that justifies sinking lots of man-hours into retooling a large codebase?
TBH, I still have my doubts about the merits of doing so. That said, there are certainly tools and use cases where I can see that being able to support multi-object editing would be a significant improvement. Several examples come to mind:
i) Moving several overlapping/connected components (e.g. window frames vs holes in walls) defined in several different objects at the same time, meaning that everything stays in sync
ii) Performing global geometry edits to multiple small objects (e.g. a cloud of rocks/debris) at a time - like sculpting some sort of distortion/blast wave, bulk applying smoothness/roughness/decimation effects to these, performing UV/projection mapping across all of these at once, etc.
iii) Animating multiple interacting characters, with probable multi-prop interactions. (Aaargh! This one is probably the subject of its own dedicated post in future)
All in all, there is probably a case here for supporting this. Not to mention that, from a UX perspective, it would reduce friction for people who have become accustomed to working this way in other software.
4) How should it work?
Now this is the question that we should really be dedicating a lot of time to, and what I really want to talk about today. To save myself typing out a lot of the same things again, you can refer to my comments on the design task about how IMO some of these things could work (mainly from a technical / mental-models perspective).
In particular, I'd like to draw your attention to my comments regarding "Toggling Objects". I'll talk more about this in a bit, but I think it's critical that during this process, we do not overlook the important advantages that Blender's current approach has over every other tool in the market - the "industry standard" approach.
On a side-note, oh, how I hate that term and the connotations of it... you only need to look at all the "modern" housing being built here in NZ these days to realise that "to standard specs" means that you get a house that has low ceilings (where you can literally touch/scrape your head on the ceiling), narrow hallways (~0.8-1.1m IIRC), narrow doorways (~0.9-1.0m IIRC), and narrow toilet rooms (~0.9m wide)... anyway, I digress
Editing Modes and The Importance of Exclusive Editing
For many years, I've thought long and hard about why exactly traditional graphics-editing programs and tools suck so much. Let's be very clear here, several aspects of their core interaction paradigms suck. Big time.
I'm going to quote the example I wrote in the design discussion about one of the prime "worst aspects" of the way that traditional "standard" tools work:
Let's say you've got this path in Inkscape/MsOffice/etc. with heaps of vertices that you're trying to select + tweak. It takes you a very long time to carefully select the vertices you want to modify. And then, you click slightly off from one of the next verts you were going to select (and let's face it, we're human, so this will and does happen a lot).
In one of the aforementioned apps, a series of things often happens at that point: 1) You lose the selection on the object you've been carefully working on, along with all the carefully selected points, 2) The closest neighbouring object gets selected instead and made the active one, and 3) The nearest point(s) on that neighbouring object get selected instead.
Lo and behold, you may have just lost 5-10 minutes of work, and now have to also get out of edit mode (as trying to get back to the original object is actually often impossible when you really try, despite being so easy to do accidentally!) Now, once you're back at object level, you need to fix up the object-level selection (which may not be easy with lots of tightly interlocking parts that sit almost planar to each other - e.g. a zig-zagging pinball track), before getting back into edit mode to try and get the same selection again. In short, it's my pet peeve: the "lost/misdirected" selection focus while trying to edit the geometry of a particular object.
Initially, I thought it was just that they lacked the Selection Operation/Key vs Action/Editing Operation/Key separation (aka Right Click Select) that Blender famously has. The reason why the Selection/Action split is a good idea is that your key mappings are not overloaded to do different things given slightly different conditions/context. For instance, the "standard" LMB behaviour actually consists of a cluster of 3 different operations - two that are opposites of each other, and a third that does something completely different. These are:
i) A short click, over an item, Selects
ii) A short click, on nothing, Deselects Everything
iii) A long click-drag, initiates the "Move" operation
Can you see the flaw with this scheme yet?
How about now?
Perhaps it would help if I provided a hint, and invoked "Fitts' Law" - that is, the smaller the target, the harder it is to select it (i.e. it takes you longer to do it, or you have to expend more effort to carefully do so).
Now, let's think about what exactly the targets we're talking about here are: vertices, shown as tiny 3-5 px dots (dots!). To put that in perspective, on a "modern, 27 inch HD screen", a "small" 16x16 px icon is smaller than the tip of your pinkie (i.e. around 3 mm); a dot is a fraction of that! (For reference, to make Blender's keyframes in the Dopesheet easily selectable, I have to add a 7px buffer radius to each point, bringing it closer to the 16x16 "small icons", or else you'd just click and miss most of the time and get really frustrated!)
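To put some rough numbers on this, here's a quick back-of-the-envelope calculation using the Shannon formulation of Fitts' Law, which models movement time as T = a + b * log2(D/W + 1), where D is the distance to the target and W is its width. (The 200 px travel distance below is just an assumed, illustrative figure, not a measurement.)

```python
import math

def fitts_id(distance, width):
    """Index of difficulty (in bits), Shannon formulation of
    Fitts' Law: ID = log2(D/W + 1).  Higher = harder to hit."""
    return math.log2(distance / width + 1)

D = 200  # assumed cursor travel distance in px (illustrative only)
id_dot = fitts_id(D, 4)    # a ~4 px vertex dot
id_icon = fitts_id(D, 16)  # a 16x16 px "small" icon

print(f"vertex dot:  {id_dot:.2f} bits")
print(f"small icon:  {id_icon:.2f} bits")
```

Since movement time grows with the index of difficulty, the dot (around 5.7 bits here) costs noticeably more time and care per click than the icon (around 3.8 bits) - and in a selection session, you're paying that cost dozens or hundreds of times over.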
Ok, so I hope you're starting to get the picture: We've got a field of tiny targets - so small that they approach the point where they actually pose a significant challenge for our limited motor skills to quickly and accurately target, and we often want to select a careful subset of these to perform some operation on. However, the input/control channel we've got for doing this - i.e. the left-mouse button state, Button Up/Off and Button Down/On - is a binary channel that is mapped to 3 overloaded operations that are highly context dependent. The computer literally has to guess what your intentions are, based on where the Button Down state occurred, and how long it was until Button Up happened again. Oh, and remember, I mentioned earlier that two of those operations are like mortal enemies of each other (i.e. Selection vs Deselection depends only on whether a tiny target that we often have difficulty targeting with 100% accuracy sits under the cursor), and that the third operation is like a useful shortcut, but could end up backfiring/being misinterpreted to mean "deselect all" or "select some random point from another object"...
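The overloaded scheme described above can be sketched as a toy dispatcher (hypothetical names and threshold, purely for illustration - no real app's event handling is this simple, but the guessing logic is the same shape):

```python
# Toy model of the "unholy triad": one binary input channel
# (button down/up + cursor position) dispatched to three different
# operations purely by guessing from context.

DRAG_THRESHOLD = 5  # px of movement before a press counts as a drag

def interpret_press(hit_item, drag_distance, selection):
    """Return (action, new_selection) for a single LMB press."""
    if drag_distance > DRAG_THRESHOLD:
        return ("move", selection)        # iii) click-drag: start moving
    if hit_item is not None:
        return ("select", {hit_item})     # i) hit something: select it
    return ("deselect_all", set())        # ii) hit nothing: wipe it all

# A selection of 40 vertices, carefully built up over 10 minutes...
selection = {f"v{i}" for i in range(40)}

# ...then one click lands 2 px off the intended vertex (hits nothing):
action, selection = interpret_press(None, 0, selection)
print(action, len(selection))  # deselect_all 0
```

Note that nothing in the input itself distinguishes "I meant to select v41" from "I meant to clear everything" - the dispatcher has to guess, and when it guesses wrong on case ii), all your work evaporates in one click.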
It's not hard to see how easily you can lose a selection you've spent the last 10 minutes working on, because you slightly mis-clicked when starting to try and drag said selection to move it a short distance... multiple times (as grid snapping makes it hard to get it where you want, but the overlaid ghosts make it hard to see the final effect, so it's hard to judge the effect of the move until you've finished it), opening you up to more risk of accidentally mis-clicking again. (Another pet peeve is getting your selection overwritten by accidentally selecting something from a lower layer that now thinks it gets to become the top-most item, making reselecting the smaller items near impossible!) Truth be told, I've lost so many hours to these kinds of quirks that it's not really funny anymore - I really cannot say that "standard" behaviour is actually such a good idea at all!
But really, separation of selection and action isn't even that critical at all when you think over the overall UX in detail. While I do also have a thing against drag and drop (argh... far too easy to accidentally activate this - it's especially nasty on my phone some weekends, when I can pull it out to find that the wallpaper's changed, half of my shortcuts have gone missing, and I've nearly butt-dialled 3 gibberish text messages + 1 outgoing phone call to some VIPs that shouldn't be disturbed), the problems from overloading can be mitigated/contained somewhat simply by removing "deselect by clicking in empty space" from that list. All of a sudden, your operation is half as deadly as it was before, and I probably wouldn't insist so strongly about LMB vs RMB issues.
Still, there's the problem of an errant click performing a "select" on some element that's outside the immediate scope that you care about; that is, while you're thinking about selecting vertices, you've suddenly done an object select, but the object you've now selected may not be one you actually care about (e.g. a relatively big background item, that makes selecting small foreground verts/elements nearly impossible). That problem still remains when you have no "edit modes" (or ones that are very fluid/permeable, where you can "fall out" of them by just looking at it askew, or by botching a click).
Thus, having thought at length about this, I now think that there's something more fundamental here than the Selection/Action thing: Specifically, Blender's editing modes provide a powerful mechanism for ensuring that we *only* operate on parts of a nominated entity/object at the level of granularity (vertex/edge/face) that's suitable for the task at hand - that is, you can't accidentally switch from one object to another, particularly when you've just spent lots of time working on a particular object selecting points to edit.
I'd argue that we really ought to make sure that this aspect of Blender's general UX philosophy remains. We absolutely must retain the use of the "select data -> enter editing mode to exclusively operate on that data at a finer level of granularity" as a core underlying principle. That and avoiding "blank space deselects" (ugh!)....
Mixed Selections - Geometry Points vs Manipulators
Earlier in the 2.8 project, there was also quite a bit of discussion/experimentation with lots of fancy new "Manipulator" widgets - interactive overlays that live in the viewport and facilitate a more point-and-click style of interaction.
As you should hopefully be starting to realise by now, I have some serious reservations about how such things would work. You should probably be able to guess what I'm about to harp on about now too - again, the chief concerns I have here are Ambiguity and Overloading:
* When exactly are the hotkeys you're using intended to be captured by a particular manipulator (BTW, all bets are off if you've got more than one of the bastards visible on screen near the cursor)?
* How do I know that some mouse press I'm performing is supposed to be interpreted as a "select this thing under the cursor" vs "select the manipulator handle" vs "select the object behind/between the manipulator's arms" vs "make that nearby handle jump here"?
* What about when you click-drag? Is that a signal for the manipulator to start some operation along one of its handles, or did you intend to box select another region for it to operate on?
Motion Path Editing
Since I'm ranting about manipulators already, it's not much of a stretch to bring "Editable Motion Paths" into the fray too. Because really, they share many of the same interaction/UX problems that manipulators have.
If we do it the naive way - i.e. having editable motion paths overlaid in object/pose mode, basically only supporting mouse-based operations for select/deselect/move (aka the unholy triad, with all its flaws I've pointed out) - you've got a clunky tool.
* One wrong click, and you're changing operating levels, from "motion path vertices", to selecting objects/bones that activate different sets of motion paths. I don't need to restate my arguments for why this is bad, I hope ;)
* By only allowing mouse-based operations, you're limiting the options you have for achieving smooth curves. That's because you'd need to have operators operating on different data levels (i.e. bones vs path vertices) - so things like nicer box select, smooth operators, sketch-based shaping tools, are all basically either a non-starter, or really awkward to introduce without causing conflicts with other stuff / or feeling really "tacked on".
For this reason, IMO, if you're going to support path editing, you really need to have a separate mode for it. That way, you have the space to actually support the "full-blown", rich-tooling style that Blender offers for operating on different types of data. From there, this leads into lots of other very interesting issues regarding cross-frame editing, and so forth (e.g. things like Multiframe Editing in the GP branch). Those are worth a deeper examination in their own right - look out for some posts about that sometime soon (I hope) :)
Now, assuming we've got the UX side of things sorted, there's another reason why I can't say that I'm that keen on introducing this for Blender: the technical side of solving the tricky IK-solving problem of converting edits to the shape of a motion path into the set of transforms on the connected bones/rig. Unfortunately, this is really not a simple problem - and is complicated to no end by the complexity of rigs (i.e. lots of bones, all the local/inherit stuff, with complex interdependencies/constraints/drivers/parent-contributions that mean that certain tweaks cannot actually work as you plan/intend, along with the fact that you have to then make it all flow together smoothly so the interpolation doesn't do weird things in other parts). It'd be a slightly different story though if everyone used simple rigs (and not monstrosities like Blenrig or Rigify - no offense to the riggers behind these beasts, but the complexity of those things is somewhat freakish), where we can just evaluate them simply on each frame without worrying too much about interpolations/etc.
There are, though, some relatively simple tools through which I think we *can* give animators new in-viewport controls for shaping the over-time aspect of their performances. But they are a lot more constrained than the "freeform grab and reshape the peaks of the path" approach that many people instinctively think of when talking about motion path editing. While these tools may look less flashy (especially in tech demos), I'm also reasonably confident that they may actually be the kinds of things we want to be doing to achieve good results.