While reading research papers as part of my honours thesis/project work (but also through videos and articles like these), I've also found that many approaches that came to mind had been tried - some successfully, some not so, and others that I wouldn't have thought of but which end up becoming "all roads lead to Rome" designs.
However, at the same time, there have been times when I've been struck with the immediate thought: "Wow! They had that back then? But gee... have our designs regressed?"
For example, take the following demonstration of the Xerox Star, which inspired the WIMP GUI paradigm that has dominated the better part of the last three decades:
For most people, the most significant aspects of this video are that it demonstrates the use of a mouse, the desktop + windows + icons + files metaphors, and the fact that the machine had a graphical display (as opposed to being purely text-based).
However, what was more striking for me was the fact that it had dedicated keys on the keyboard for performing common actions, which worked in and were available across all applications. For example, there were dedicated "Copy" and "Paste" buttons (instead of Ctrl+C, Ctrl+V) that worked in every application, and were clearly labelled on the keyboard as performing that very purpose.
- The big red "STOP" button was also quite amusing, though I'd much rather have seen a big green "GO" button in its place (where the Enter key on the numpad usually sits these days) ;)
- The "Move" button was a bit odd in some ways. But, at the same time I couldn't help but think of Blender - especially if there were dedicated "rotate" and "scale" buttons beside it at the same time. Hehe...
For now, we can only really speculate on what could have been. After viewing that video, I could only wonder why these buttons ended up disappearing. Was this yet another one of the Jobsian quirks (i.e. like getting arrow keys removed, and wanting a single-button mouse) that somehow managed to catch on? Was it just an economics-driven decision (e.g. IBM didn't want to spend money building such keyboards), and/or did most software developers at the time not want the extra work of hooking up their code to such a standardised hardware solution? Was it that a bunch of left-handers complained that they couldn't operate the mouse AND use these keys efficiently without reaching over their keyboards? Or was there just a paradigm shift, where suddenly physical buttons on the keyboard became seriously uncool, and everything had to be point-n-click on the screen?
If it was primarily the last of these (and it probably was), then the recent shifts towards touchscreens and dynamic keyboards provide us with some interesting new opportunities. While the ability to dynamically assign labels to keys is interesting, far more interesting is the ability to create entirely different layouts. Perhaps all that's still missing are those haptic touchscreens (there was some news about this a while back) that provide the sensation of actually pressing a button, and/or a material which doesn't collect so much grease from fingers! Imagine having this working for performing sculpting on a tablet while on the go...
Hotkeys, Modes, Multiplexing, and Spatial Memory
But, let's take a step back and consider why having a set of dedicated buttons for common operations located on the LHS of the keyboard (right under, or quite near, the left hand in its rest position) may be quite an optimal setup. For this, we need to consider a few concepts which, I've been finding, are actually quite closely related:
- Modal vs Non-Modal,
- Space-Multiplexed (many devices/objects, each with a single dedicated function) vs Time-Multiplexed (one device, many functions),
- and Spatial Memory
Anyone who's used the Microsoft Ribbon interface will know pretty well what I'm talking about (more on this later). For example, when you're on the "Home" tab and you want to insert a table, you might start moving your mouse towards the ribbon, only to realise when you get there that you're on the wrong tab ("drat"). One of the PhD students here was studying this, and came up with "CommandMaps", a spatially consistent version of the ribbon that basically splays out every tab of the ribbon on a separate row. Thus, each command is now in a spatially consistent location, which was shown to allow users to quickly click on a target without worrying about mode switching (i.e. we've made a modal -> non-modal conversion). This works because we get rid of the spatial overloading. In other terms, we're going from having a time-multiplexed interface (where the same spatial region can serve multiple functions) to having a space-multiplexed interface (where each spatial location corresponds to a single dedicated function).
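The difference between the two styles can be made concrete with a toy sketch (all the names here are mine, invented for illustration; they don't come from any real toolkit). The point is that a time-multiplexed lookup needs an extra piece of state (the active mode) before a location means anything, whereas a space-multiplexed lookup does not:

```python
# Toy sketch of time- vs space-multiplexed command layouts.
# All identifiers below are hypothetical, made up for illustration.

# Time-multiplexed: one screen region whose meaning depends on the
# active mode (like a ribbon tab) -- the user must check/switch modes first.
TIME_MULTIPLEXED = {
    ("Home", "slot_3"): "insert_table",
    ("Insert", "slot_3"): "insert_chart",
}

def invoke_time_multiplexed(active_tab, slot):
    """Resolving a slot requires knowing the current mode as well."""
    return TIME_MULTIPLEXED[(active_tab, slot)]

# Space-multiplexed: every command gets its own dedicated location
# (like a CommandMap row, or the Star's labelled keys) -- no mode to check.
SPACE_MULTIPLEXED = {
    "row_1_slot_3": "insert_table",
    "row_2_slot_3": "insert_chart",
}

def invoke_space_multiplexed(slot):
    """Location alone identifies the command."""
    return SPACE_MULTIPLEXED[slot]
```

Note how the same physical slot (`slot_3`) maps to two different commands in the time-multiplexed version; that overloading is exactly what defeats spatial memory.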
In many ways, Blender's UI was in fact (many) years ahead of its time. IMO, one of the most notable examples of this is the old Buttons Panels that used to sit across the bottom of the screen. However, I'm not talking about the Horizontal Buttons Panels that were introduced in 2.3 and stayed through until the end of the 2.4 series. Rather, I'm talking about the Buttons Panels as they existed in "ancient" versions of Blender, such as 1.8 (AFAIK one of the first publicly available versions, or at least the earliest version available for download).
|Horizontal buttons panel in Blender 1.8|
Does any of this begin to sound familiar now? Why, this is almost exactly the Microsoft Ribbon, albeit nearly completely text-based, with a lot more buttons (many of which were smaller than some of the big "feature" icon-buttons), and running horizontally across the bottom of the screen instead of the top (*). Oh, and this was some 20 years ago. Duh duh duh duh!
(* ASIDE: I'm still a bit torn on whether putting commands/controls in a big fat panel across the bottom of the screen or across the top is superior, in terms of facilitating rapid access, but also being the most comfortable for motor movements and the easiest to spot.)

It's quite surprising that this very idea, which others are only now starting to adopt, is something we used to have for such a long time, and have since abandoned. Sure, there were limitations with arbitrary-length lists. But, then again, perhaps we should now be reconsidering whether those lists were well placed alongside everything else in the first place! Could we have done it differently?
Anyways, back on topic: as I've mentioned in the past, I think horizontal panels can actually work quite well as an interface component for providing quick access to a mixture of commands and options. However, for this to work, it seems that it really depends on whether or not you can fit everything in (i.e. no scrolling), at a reasonable size (i.e. easy-to-hit targets - Fitts's Law), while maintaining enough prominent landmarks to facilitate quick orientation and recognition.
For vertical panels, a few months ago I did some experiments evaluating a few of the different ways of implementing them, trying to determine the dataset densities for which the various designs were suitable. If/when I get some time, I should really polish up the paper I wrote for it at the time (it was still a bit too light for actual submission to conferences/publication). In any case, hopefully there'll be more I can reveal about this in the coming months.