Comments on Aligorith's Lair: Thoughts About Geometry-Editing UX issues - Multi Object Editing, Edit Modes, Selection/Action Split, etc.

vasiln (2019-07-10):
..."What problems does this try to solve?"<br /><br />This is the fundamental question, and there is absolutely zero point in trying to solve the problem until you have an answer to it.<br /><br />The vast majority of use cases of multimesh editing can be solved by simplifying join/separation operations, which is mostly a matter of improving armature UI (because most object-level transformations could be done as well, or better, with an armature instead, if it was as easy to do as an object-level transform), and of improving the use of vertex groups, both for modifier modulation (why can't you specify a subsurf vert group?) and as targets for other operations (why can't you give a shrinkwrap modifier a mesh + VG as a target?)<br /><br />There are some multi-edit problems that are just not solvable without shaking your head and saying, "I dunno lol." What happens when you have two non-uniformly scaled objects with a bevel modifier and you join them with a face? There is no solution to that problem. You make an arbitrary decision. (But it would be unwise to restrict use just because you don't know what to do in a few unlikely situations.)<br /><br />Ideally, joining and separating objects would be a non-destructive operation. It is not impossible to do that. You can make VGs for each object and take notes on their pre-join state. It's just a UI problem. You can do this with note paper and a calculator, if you have the patience. It would be nice if Blender did it for us.<br /><br />But for me, what I would want out of a multi-edit would just be a simplified UI. Rigging a spline IK system with hooks is a nightmare because I cannot snap cursor to various elements without a huge number of state changes. This difficulty makes it so that I simply don't bother with this as much as I would like. 
There are other potential solutions to this problem: there's no reason curve-to-mesh couldn't preserve information, and no reason there's no information-preserving armature-to-mesh or mesh-to-armature.

"It'd be slightly different story though if everyone used simple rigs"

The number one problem with complex rigs is that angles are evaluated as per-axis Euler angles. Most Blender riggers don't really understand Euler angles and what they mean. They give their eyes 2-axis rotation limits and expect interpolation to give them a circle. This, too, is solvable, and not just by demanding they learn how to Euler, dawg. Angles as used by riggers are not best expressed as Eulers; they are best expressed as the intersection of ellipsoids, cones, and planes - of parametric primitives.

It's not like that's the only problem. There are a few others (locked tracks are harder to use than they should be, floors are harder to use than they should be, remapping transforms is harder than it should be), but they're all solvable problems. Most of them are solvable even in 2.79 Blender; they're just nightmarishly complex with the UI. Hopefully, some kind of rigging nodes system will be powerful enough that we can solve them just once and publish the node group, rather than having to solve them with Rube Goldberg systems of coincident bones.

Hadrien (2018-03-13):

I see. I don't know the difference it makes internally, but as far as I understand, the working difference between a mode and a "persistent modal tool" as proposed by Campbell is almost just semantics. Correct me if I get it wrong, but I see modes as essentially a way to offer an "activity/workflow-centric" interaction model and keymap for a specific task.
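The Euler-limits complaint in the previous comment can be checked numerically in plain Python (no Blender API; X-then-Y rotation order is assumed purely for illustration): clamping two Euler axes to +/-L does not confine the aimed axis to a circular cone of half-angle L, because the deflection overshoots L when both axes hit their limits at once.

```python
# Demonstration that 2-axis Euler limits do not trace a circular cone.
import math

def deflection(rx, ry):
    """Angle between the rest axis (0, 0, 1) and that axis after rotating
    about X by rx, then about Y by ry (all angles in radians)."""
    # Ry(ry) @ Rx(rx) applied to (0, 0, 1): only the z component matters
    # for the angle from the rest axis.
    z = math.cos(rx) * math.cos(ry)
    return math.acos(z)

limit = math.radians(30)
on_axis = math.degrees(deflection(limit, 0.0))     # exactly 30 degrees
corner = math.degrees(deflection(limit, limit))    # about 41.4 degrees
```

With a 30-degree limit on each axis, the on-axis deflection is 30 degrees as the rigger expects, but at the "corner" (both axes at their limit) it is roughly 41.4 degrees: the swept boundary is square-ish, not the circle the rigger pictured. Hence the comment's point that cone/ellipsoid primitives express these limits better than per-axis Eulers.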
Like you mentioned, it prevents accidental switching to some other totally irrelevant action (something Maya users know very well). However, doesn't it sound like overkill to dedicate an entire new mode to motion path editing? Unless maybe you see this becoming part of a larger toolset necessitating its own context?

On this subject, I can't quite make out from the videos how you implemented the pose sculpt brushes. Are they a separate mode? Or was this out of scope for your proof of concept?

Deciding which operator gets the tool treatment is a good question. I very much like the straightforward way of doing things in Blender, but some operations are very complicated to tweak without a manipulator or a dedicated context. Most require making a selection beforehand; however, in the case of the motion path, if vertices/keyframes are made not to be selectable so as not to interfere with the rest of pose mode (which I think makes sense), then I guess a tool (or a mode) is called for.

I imagine the interaction would go something like this: in pose mode, select bones, hit the D key to enter an "adjust trajectory" tool, which is brush-based and allows tweaking motion path vertices with click-and-drag. Or, default the hotkey to spawn a pie menu containing the pose sculpt brushes along with an "edit motion path" brush. Exiting a tool could reuse the current "exit operator" hotkey (spacebar) for consistency.

Aligorith (2018-03-10):
Right.

Indeed, it's not easy getting these things right. (E.g. personally, I really hate how the new command-grouping behaviour picks the *first* frame change event to store, and not the *last* one - it means I'm constantly inserting keyframes on the wrong frame after undoing. I've tried patching it, but unfortunately, it's a lot trickier than it initially seems.)

The edit-mode toggling example you give, though, is probably one of those we could optimise more (like we've done for frame changes now).

Aligorith (2018-03-10):
I wasn't...Right, I get what you're saying.<br /><br />I wasn't actually proposing that motion paths need to be separate, full-blown "Blender objects" per se (and indeed, that would actually bring a whole set of other challanges and problems). Instead, I was merely thining that perhaps we need something like the existing Edit/Pose/Sculpt Modes that objects have, so that we focus on editing the paths only - basically, it would be just a keystroke away (to toggle in/out of that mode), so again, nothing onerous.<br /><br />Regarding the "Tools" system that Campbell was experimenting with - that is really a whole topic and and of itself I think. I tried to avoid bringing it into this discussion, as it's currently postponed/on-ice. That said, I do agree that it does seem like a very good fit for things like motion path editing, object pivot point editing (?), and also for many other interactive tools that feature modal interaction with some kind of manipulator widget, where the manipulator itself has quite a few control points/commands that you'd want to be able to activate.<br /><br />What is not so clear to me atm (and one of the reasons why I suspect it's currently postponed, besides lack of manpower/resources to manage everything) is what the tradeoffs are for Blender's overall UX feel - specifically, right now, a lot of users really like how many of Blender's tools (e.g. transform, extrude, loopcut) provide a very high performance ceiling because, like Vim, they form a kind of vocabulary where you activate the command with a keyboard shortcut, activate some options (again keyboard operated), use directional mouse movements to adjust, and then confirm by clicking. However, these tools will likely encourage a lot of of just point-n-click (i.e. good for discoverability and also casual users, but also less fluid/chainable). 
On second thought though, it's probably not a binary/either-or problem: if these tools deactivate once they've performed a particular operation (e.g. for extrude/loopcut, the manipulator goes away until you reactivate it, allowing you to select geometry with the full set of tools/operators), it might not be quite so bad.

But then, we still have the problems of:
* Which operators stay as operators? Which ones become tools, or part of a tool?
* And when should a tool actually need to become a whole new editing mode?

Hadrien (2018-03-10):

The undo stack is a thing of its own... How many times have I seen my previous actions simply flushed because I had switched one time too many between object and edit mode?

Conceptually, the undo stack would also record mode changes, so that undoing a lot of actions would make the user go through all the steps they had taken, in reverse order, not discriminating between edit-mode changes and object-mode changes. I understand concept and implementation are different things, though.
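The behaviour Hadrien describes can be sketched as a single stack that records mode switches alongside edits, so undoing walks back through every step in reverse instead of flushing on a mode change. This is a toy model with invented names, not Blender's actual undo implementation:

```python
# Toy undo stack that records mode switches as first-class entries,
# interleaved with data edits (illustrative only, not Blender internals).

class UndoStack:
    def __init__(self):
        self.mode = "OBJECT"
        self.stack = []  # entries: ("edit", undo_fn) or ("mode", previous_mode)

    def do_edit(self, apply_fn, undo_fn):
        """Perform an edit and remember how to revert it."""
        apply_fn()
        self.stack.append(("edit", undo_fn))

    def set_mode(self, new_mode):
        """Switch modes, remembering the mode we came from."""
        self.stack.append(("mode", self.mode))
        self.mode = new_mode

    def undo(self):
        """Pop one step: either revert an edit or step back through a mode switch."""
        if not self.stack:
            return
        kind, payload = self.stack.pop()
        if kind == "mode":
            self.mode = payload
        else:
            payload()
```

With this shape, an edit made in object mode survives a round-trip into edit mode and back; undo simply replays the mode switches in reverse on its way down to the older edits, which is exactly the "not discriminating" behaviour asked for above.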
Hadrien (2018-03-10):

Hi Aligorith,

(For the record, I mostly do rigging and some animation in Blender, as well as modeling/sculpting.)

I get what you say about overloading, and I also tend to think modes are a great strength of Blender's interaction model. However, it is important to consider that in the heat of the action - when tweaking an animation, scrubbing back and forth, changing the viewpoint, moving one keyframe slightly - we may want to avoid going through too many steps: selecting a "motion path object", switching to edit mode and making adjustments, then exiting edit mode and selecting the armature again sounds like a process long enough for the animator to have forgotten what they intended to do in the first place.

While contexts are good, I think this kind of action lies exactly in the same thought process and should not be externalized - instead, some other solution can be found to end all ambiguity as to which click is going to do what, operate on which data, etc. For instance, a modal tool like the ones Campbell recently conceptualized looks like a good solution, doesn't it? A mode within the mode, bound to a hotkey and limiting interactions to the selected bone's motion path data - something like your pose sketching tool.

As for other manipulators and the ambiguities they introduce - seeing as it is incomparably easier to select with LMB when using a tablet (which a good proportion of professionals do nowadays), I'd be satisfied with a global "manipulator switch" button allowing me to select stuff with nothing in the way, and toggle the manipulator on whenever I need it.

I wish you all the best, and needless to say I am super grateful for all the goodies you bring to Blender on a regular basis.

Cheers,

Hadrien