Augmenting Human Intellect
Being able to harness the power of computers to derive new or useful insights, or "Augmenting Human Intellect" as Doug Engelbart put it, is a very interesting idea at the core of much of his work, and something I don't think we've really managed to achieve on a large scale yet. That said, there have been some promising approaches in recent years towards this goal.
One of the first examples many people will already have heard of is Hans Rosling's "Gapminder" talk at TED:
Examples of similarly impressive systems include Microsoft Pivot, which allowed people to restructure, filter, and browse through collections of data. I remember being quite impressed when I first saw a video demo of it many years ago (though less so when actually playing with it), especially the beautiful dynamic transitions between different perspectives, which made for a seamless environment for exploring a dataset. In practice, though, while it was an interesting tool for browsing and sifting through certain types of datasets, it had a number of downsides, the main one being that it was only useful/usable when suitable collections had been created and curated for the tool. At the time, the datasets were also quite large (relative to my monthly bandwidth back then), so it wasn't really possible to test it out on many datasets; and since these needed to be downloaded, the performance in practice was a lot worse than in the demo (where, IIRC, they probably had locally cached versions of everything). By the time my bandwidth finally caught up, the original servers were practically on the verge of shutting down, so hardly any content was left to view.
Microsoft Live Labs Pivot
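To make "restructure, filter, and browse" a bit more concrete, here's a minimal sketch in plain Python of the kind of faceted filtering Pivot did under the hood (the data and function names here are made up for illustration; Pivot itself worked on curated "collections" with far slicker animated visuals):

```python
# A tiny stand-in for a Pivot-style "collection": items tagged with facets.
movies = [
    {"title": "A", "genre": "drama",  "decade": 1990},
    {"title": "B", "genre": "comedy", "decade": 1990},
    {"title": "C", "genre": "drama",  "decade": 2000},
    {"title": "D", "genre": "comedy", "decade": 2000},
]

def facet_counts(items, facet):
    """Summarise the collection along one facet (the 'restructure' view)."""
    counts = {}
    for item in items:
        counts[item[facet]] = counts.get(item[facet], 0) + 1
    return counts

def apply_filter(items, facet, value):
    """Narrow the collection (the 'filter' step Pivot animated so nicely)."""
    return [item for item in items if item[facet] == value]

print(facet_counts(movies, "genre"))         # {'drama': 2, 'comedy': 2}
print(apply_filter(movies, "decade", 1990))  # browse just the 1990s subset
```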
In more recent times (i.e. last year), an interesting paper was published describing a new system along similar lines: "Gliimpse".
This is a novel take on the classic WYSIWYG vs markup languages problem for creating formatted documents. As in Pivot, the interface presents multiple views/representations of a single underlying document, with animated transitions between them (here, between the markup source and the rendered output). Previously I've sat on the fence wobbling a bit on this issue, though for a while I veered towards preferring markup languages for the greater transparency they afford when dealing with formatting tags and data selection, but also because they allow a greater focus on content, since the final formatting of the text isn't constantly in your face tempting you to tweak it.
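For a rough feel of the mechanics (this is my guess at the general flavour of the trick, not Gliimpse's actual algorithm), the core idea is to keep a per-character mapping between the markup source and the rendered text, so each character can then be animated from its position in one view to its position in the other:

```python
def render_with_mapping(markup):
    """Strip simple *emphasis* markers, recording where each visible
    source character ends up in the rendered string."""
    rendered = []
    mapping = {}  # source index -> rendered index
    for i, ch in enumerate(markup):
        if ch == "*":  # markup syntax: hidden in the rendered view
            continue
        mapping[i] = len(rendered)
        rendered.append(ch)
    return "".join(rendered), mapping

source = "make *this* bold"
rendered, mapping = render_with_mapping(source)
print(rendered)    # -> "make this bold"
print(mapping[6])  # -> 5: the 't' at source index 6 lands at rendered index 5
```

An animated transition would then just interpolate each character's on-screen position between the two views over a few frames, which is presumably where the seamless Pivot-like feel comes from.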
Bret Victor's Work
So, where does Bret Victor come into all of this?
I really love how, after viewing some of his work, you come away with a much deeper understanding (or at the very least, a novel perspective that stretches your horizons) of the interplay between several concepts: how things are represented (e.g. the use of abstractions that let us make sense of things from different perspectives [1]); how you can interact or work with these representations (see the points below about the gap between traditional mathematical notation and what's needed to actually "get" it and work with it successfully [2]); and what we could gain by making these representations more useful to people, harnessing the power of the new mediums/platforms at our disposal to make things more interactive and allow more flexible manipulation [3].
Inventing on Principle
1) The Role of Representations
Perhaps the most important part of this talk for me was the discussion of the role of traditional paper/print-based representations: compact, static representations suited to mass distribution of ideas in a concise form. While this may have been fine (note, I do not say brilliant or perfect) in the past, when paper-based distribution was the only practicable means of long-distance dissemination of new ideas and scholarly findings, in the age of dynamic and interactive displays we can do much better.
As Bret stated, these paper representations were intended to be compact and concise encodings of the concepts being discussed, so that readers could decode the information for themselves from the concise representation they're given, without having any other way of communicating with the original authors. That is, to get full value from such static representations, we need to be able to fully decode all the hidden meanings contained within the publication, drawing upon prior references and knowledge to interpret what's presented.
However, more telling was the part where he goes on to discuss how "real" mathematicians don't actually think about mathematical ideas in terms of these stilted static representations (suitable for printing on paper). Rather, the reason they "get" maths is that they form mental representations of the concepts, representations which make sense to themselves but may in fact be very far removed from the bog of notation used. For example, if it works for you, sloshing buckets of liquid moving through stretchy tubes [parametric stuff/interpolation/remapping], squishy inflatable flotation devices with stripy red and white bands over the top [certain geometric shapes], and bags containing heaps of balls of different colours [sets and probability] may be much more productive mental tools/aids than the strictly "official" representations based on formula manipulation and so forth.
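To pin down what I mean by the bracketed [parametric stuff/interpolation/remapping], here's a minimal sketch (my own, not anything from the talk) of the remapping idea that a "liquid sloshing between buckets" mental model might stand in for:

```python
def lerp(a, b, t):
    """Linearly interpolate between a and b as t sweeps from 0 to 1."""
    return a + (b - a) * t

def remap(x, in_lo, in_hi, out_lo, out_hi):
    """Map x from the range [in_lo, in_hi] into [out_lo, out_hi]."""
    t = (x - in_lo) / (in_hi - in_lo)  # how far along the input range we are
    return lerp(out_lo, out_hi, t)

# e.g. remap a mouse position (0..800 px) to an angle (0..360 degrees)
print(remap(200, 0, 800, 0, 360))  # -> 90.0
```

The formal notation for this is just function composition over normalised ranges, but the "how full is the bucket" picture is arguably what actually does the thinking for you.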
2) Dynamic Manipulation and Inspection
The second half of this talk focussed on the importance and utility of being able to directly manipulate and examine your data dynamically, and hence iterate on ideas or debug problems more quickly. The ability to dynamically modify parameters in the code and see the results reflected in a running instance was cool; however, the network-debugging example was IMO much more impressive and practically useful.
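For a crude approximation of the "tweak parameters in a running instance" idea (a minimal sketch of the general concept only, not Bret's actual tooling; the file name and render function here are hypothetical), you can get surprisingly far just by watching a parameters file and re-rendering whenever it changes:

```python
import json
import os
import time

PARAMS_FILE = "params.json"  # hypothetical file you edit live, e.g. {"radius": 40}

def render(params):
    # Stand-in for redrawing the running scene with the current parameters.
    print("rendering with", params)

def watch_and_rerender():
    last_mtime = 0.0
    while True:
        mtime = os.path.getmtime(PARAMS_FILE)  # assumes the file exists
        if mtime != last_mtime:  # the file was edited: pick up the new values
            last_mtime = mtime
            with open(PARAMS_FILE) as f:
                render(json.load(f))
        time.sleep(0.1)  # poll cheaply so the scene keeps running in between

if __name__ == "__main__":
    watch_and_rerender()
```

It's a far cry from scrubbing a slider and watching the tree branches bend in real time, but the feedback-loop shortening is the same idea.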
The Ladder of Abstraction
Being able to view things from different perspectives is really important for helping us understand them. Selecting the best level to view things from, though, is the crux of really understanding the power of abstractions and how to use them. (For a toy code sketch of this idea, see below, after the notes.)
A few assorted notes:
- Read the essay, and play with the interactive examples. You'll probably find yourself wishing you'd had a tool like this on some past project or other...
- Perhaps I've been looking at this stuff a bit too much because of a paper I'm writing at the moment, but this concept of using different levels of detail/abstraction to look at a problem has come up in a number of places in the academic literature, including:
1) Ben Shneiderman's Visual Information Seeking Mantra of "Overview first, zoom and filter, then details-on-demand" (see http://www.infovis-wiki.net/index.php/Visual_Information-Seeking_Mantra for an overview - excuse the pun),
2) A. Cockburn, A. Karlson and B. Bederson. A Review of Overview+Detail, Zooming, and Focus+Context Interfaces. ACM Computing Surveys, 41(1): 1-31. ACM Press. 2008.
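As for the promised toy sketch of "moving up the ladder": this is my own throwaway version, loosely inspired by the car-steering example in the essay (the simulation itself is a made-up toy). First you stare at one concrete run; then you climb a rung by abstracting over the steering parameter, turning many concrete runs into a single overview:

```python
import math

def simulate(turn_rate, steps=100):
    """Concrete level: one car, one turn_rate, one trajectory."""
    x = y = heading = 0.0
    path = []
    for _ in range(steps):
        heading += turn_rate           # steering algorithm (toy version)
        x += math.cos(heading)
        y += math.sin(heading)
        path.append((x, y))
    return path

# Ground level: examine a single concrete run.
final_x, final_y = simulate(0.05)[-1]
print(f"turn_rate=0.05 ends at ({final_x:.1f}, {final_y:.1f})")

# One rung up: abstract over the parameter by sweeping it,
# summarising each run so the family of behaviours becomes visible.
for turn_rate in [0.01, 0.05, 0.1, 0.2]:
    x, y = simulate(turn_rate)[-1]
    print(f"turn_rate={turn_rate}: distance from start = {math.hypot(x, y):.1f}")
```

Picking which variable to abstract over (time, a parameter, the algorithm itself) is exactly the "selecting the best level" problem mentioned above.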
Stop Drawing Dead Fish
The key insights for me from this video were...
1) Wow! This feels exactly like what I've always felt a proper music notation/creation tool on a computer should actually be like.
I dunno about you, but IMO the current options for creating/composing music on a computer are generally really restrictive. Point-and-click interfaces for this really suck: they are intensely irritating, requiring lots of laborious mouse travel, finicky selections, and knowing in advance how f***ing long you want your piece to be (or else you have to struggle to get the tool to add/remove space when you run out). Meanwhile, the options for keyboard-based notation entry are often still quite awkward due to bad key placement (though kudos to Noteflight, which is the best I've found in this regard, even if it still has some clunky points).
It was really quite inspiring seeing him using two multi-touch touch-screens, freely gesturing/drawing shapes on one device, while chording/holding down temporal modes with the other hand. For a moment there, I really wanted to drop the projects I have on hand and start hacking that now ;)
2) As mentioned above, the tech setup he showed there is awesome.
What with the two touchscreens wirelessly synced together, the ability to still use a physical keyboard to type (where necessary), and the kinds of multimodal interactions you can perform across them, it looks like a great rig. Couple that with perhaps a large monitor facing the user (as we have now), so that you don't need to constantly hunch over the smaller touchscreens: you'll probably develop muscle/spatial memory for the location of items on them eventually, and will no longer really need to look where you're poking, so a large monitor in the usual place for focussing on the output of the actions you're performing would be nice to have. In a way, this is quite similar to the setup that Dreamworks animators seem to use (as shown in the LibEE paper, IIRC).
3) A new approach to visually programming dynamic behaviour?
Although I'm still not 100% convinced about the specifics of the dynamic visual programming scheme he presented (it still feels a bit weird and potentially confusing to me, seeing those "helper" lines get created and used), I have to say it is quite an interesting approach which broadens our horizons for thinking about this problem. Admittedly, how to specify dynamic behaviour for interfaces and novel interaction schemes is an interesting problem I've been pondering for a while. There are times when you can easily visualise a new dynamic interaction style, can probably even quickly sketch it, and know very well how it should all work and look, yet actually implementing it (with all the low-level event and state handling involved) often turns out to be quite a nightmare.
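To illustrate what that low-level "nightmare" looks like in practice, here's a minimal sketch (mine, not anything from the talk) of the kind of explicit state machine you typically end up hand-writing for even a trivially simple drag interaction:

```python
# States for a simple drag interaction, written out the tedious manual way.
IDLE, PRESSED, DRAGGING = "idle", "pressed", "dragging"
DRAG_THRESHOLD = 5  # pixels of movement before a press counts as a drag

class DragHandler:
    def __init__(self):
        self.state = IDLE
        self.start = (0, 0)

    def on_mouse_down(self, x, y):
        self.state = PRESSED
        self.start = (x, y)

    def on_mouse_move(self, x, y):
        if self.state == PRESSED:
            dx, dy = x - self.start[0], y - self.start[1]
            if abs(dx) > DRAG_THRESHOLD or abs(dy) > DRAG_THRESHOLD:
                self.state = DRAGGING  # promote the press to a drag
        elif self.state == DRAGGING:
            print("dragging to", (x, y))  # stand-in for moving the object

    def on_mouse_up(self, x, y):
        if self.state == PRESSED:
            print("click at", (x, y))  # never became a drag: treat as a click
        self.state = IDLE
```

And that's one gesture on one object; the appeal of schemes like the one in the talk is sidestepping this bookkeeping entirely by demonstrating the behaviour directly.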
In the stuff he showed, I liked how you could effectively have "methods/rules" within each of the actions, and then combine these together or even apply them to other entities (though IMO that part, i.e. the second bubbles action, really messes with your head). The parameter extraction was also nice, especially seeing where the idea came from and where he's now taking it.
(As an aside, it must also be said that the demo apps he made look pretty darn slick. A candidate for what Blender-Touch should look like, hmmm?)