Showing posts with label UI. Show all posts

Tuesday, July 29, 2025

Reaction to "Enough AI copilots, we need AI HUDs" - i.e. Thought-Provoking Ideas on Reframing the UI / UX of AI

I just read a very thought-provoking blog post from Geoffrey Litt, building on ideas pitched some 30-ish years ago by Mark Weiser.

Blog Title: "Enough AI Copilots, We Need AI HUDs"

https://www.geoffreylitt.com/2025/07/27/enough-ai-copilots-we-need-ai-huds


My (Non-AI Powered) Summary, with additional spin on top
1) The industry at large is going about this all wrong!

2) The current "me too" industry frenzy of just piling in on top of the "tasteless" (to paraphrase Steve Jobs) zombie train of building digital versions of Tom Riddle's horcrux-diary is stupid.  

    I'm with Mark on this - as those who've followed me for long enough know, I'm very much against all this "chat" and "agent" style crap that the AI-TechBros are currently still hyping up.

3) HCI + Interaction Design specialists/experts such as myself should really be stepping in and stepping up to meet this challenge head on - to get back in there to steer the ship as it were, instead of being backseat passengers to an almost certain trainwreck - by doing what we do best:

    a) Taking a step back, and asking the essential basic questions about what exactly we're trying to really achieve, and **WHY**...

    b) Crucially, not letting the existing framing cloud our judgement, or obscure our own personal ethical + philosophical principles on the direction that we wish to steer technology in

4) What is it that we have to offer then?

   A) Show Don't Tell - If your tech is really that fancy, it doesn't need to be in our faces all the time. We shouldn't *need* to be constantly "conversing" with it, like micro-managing a third-rate annoying minion

   B) An Assistive Superpower - (Now where have we heard *that* before?  ;)    Yep, the emphasis here should be on the tech taking care of the mundane stuff, while using its strengths to highlight stuff we can't figure out as humans...

*NOW* we're talking.
  i) Harnessing tech to help people do what they couldn't otherwise do
  ii) Augmenting + Building Up, NOT Replacing + Subjugating!

These are indeed very much things that are right in my wheelhouse, and problems that I can get behind!  (Whereas the "AI" discourse to date has been an alienating, unpalatable soup of world-destroying slop that makes me and countless others sick to our very core.)

Thanks for the reminder Geoffrey!

 

As for the rest of us:   It's high time we started getting cracking, and righting this ship! There's lots of work to do!

Sunday, April 20, 2025

Thoughts on Windows 11's "Recall" Feature

Seeing one of the latest threads this morning about Win11's Recall feature, I'm not surprised that it does what it does, TBH.

Some of these points overlap with comments I made earlier when news of this feature first broke. I can't easily find those now, but if/when I do, I may amend this post with those notes as well, as they better cover a bunch of other insights I don't think I've captured here as well.

 

EDIT:  Cool, according to this Ars article, they do seem to have put in place most of the reasonable safeguards I'd expect / recommend them to have.

Monday, January 6, 2025

Thoughts on Teaching Human-Computer Interaction University-Level Courses

Those who've followed me on various other platforms will probably have heard bits of this spiel before, maybe worded slightly differently in each instance, but ultimately discussing the same ideas / talking points. So I thought it'd be good to write up a canonical version once and for all - especially since it's unlikely that I'll end up doing this anytime soon anyway  (i.e. never say never, but at least for the foreseeable future in the next 3-5/10 years, I'm honestly not really that interested in a return to academia or teaching).


Key Points:

1) I've seen things. Perhaps a whole lot more than most people get the chance to see, and with a bunch of variables controlled for (or at least held somewhat constant).

2) From my observations, left to their own devices, the majority of students gravitate towards certain habits that, if uncorrected (or undiscovered until too late), mean they miss half the lessons they really ought to learn. From my perspective, they come away with practically nothing (beyond maybe some new trivia that's gone soon after the final exam), having just tried to scrape an easy pass on a course they assumed was one of the "easier" ones they were required to take. At least IMO.

3) "UCAPT" - A useful mnemonic I developed during my time doing this, which I adopted for evaluating students' work (and later in my own practice) to check that all the necessary things have been accounted for.

Thursday, December 26, 2024

UI / App Toolkits + Frameworks - The "Missing Middle-Layer"

Over the past decade and a bit, on almost every project I've worked on, I've come to realise that there is often quite a massive gulf between what the Standard Library + UI Toolkits typically offer, and what you really need when building anything of consequence for the real world. The upshot is that you either spend quite a bit of time re-inventing the following sub-systems for each project, OR you pay the tax of not having them on each and every feature you add (by effectively reimplementing them *per feature* instead, without the benefits that a standardised system brings).

While the initial seeds of what I'm about to discuss date back to around 2010, things really started to take shape around 2018/2019, when I first seriously started mooting the idea of maybe creating my own Programming Language / Environment someday (i.e. "Kea"), a batteries-included environment for doing everything "my way"...

One of the key aspects of that language would be that the following functionality / capabilities would end up being "baked" into the language as first-class citizens. Before fully embarking on that journey again, I thought it would be good to first prototype these systems in other environments where they may help advance the overall state of the software industry. Hence this blog post.

 

The Short List - Core Functionality:

* 1) Property Metadata System  (i.e. something like Blender's "RNA" system)

* 2a) Bidirectional Property Binding (Data Objects <-> UI Widgets)

* 2b) Widget / Factory that creates standard auto-bound widgets, given only the Property ID + host object reference  (i.e. something like Blender's UI widgets)

* 3) Automatic hierarchical property serialisation system  (i.e. something like Skyline-X's Preference Sets)

* 4) Version patching  (i.e. something like what is used for Blender's SDNA system)

   * Includes utilities related to version number management and/or Git branch/revision info

* 5) System for Physically-Based Unit Handling and/or Unit Conversions
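To make items 1-3 above a bit more concrete, here is a minimal Python sketch of how a property-metadata system, change-notification for UI binding, and metadata-driven serialisation might hang together. All the names here (`Prop`, `PropHost`, `Lamp`, etc.) are hypothetical - this is loosely inspired by the idea behind Blender's RNA, not its actual API.

```python
# Hypothetical sketch of items 1-3: a property-metadata registry
# ("RNA"-style), change notification for UI binding, and automatic
# serialisation driven entirely by the metadata. All names invented.

class Prop:
    """Descriptor that carries metadata alongside the value (item 1)."""
    def __init__(self, name, default, unit=None, min=None, max=None):
        self.name, self.default = name, default
        self.unit, self.min, self.max = unit, min, max

    def __set_name__(self, owner, attr):
        # Register this property in the owning class's metadata table
        self.attr = attr
        owner._props = dict(getattr(owner, "_props", {}), **{attr: self})

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return obj.__dict__.get(self.attr, self.default)

    def __set__(self, obj, value):
        # Clamp using the declared metadata...
        if self.min is not None:
            value = max(self.min, value)
        if self.max is not None:
            value = min(self.max, value)
        obj.__dict__[self.attr] = value
        # 2a) Bidirectional binding: notify any bound widgets/observers
        for cb in getattr(obj, "_listeners", {}).get(self.attr, []):
            cb(value)

class PropHost:
    def bind(self, attr, callback):
        """2b) A widget factory would call this with only the property
        ID + host reference, then read the metadata for labels, slider
        ranges, units, etc."""
        self._listeners = getattr(self, "_listeners", {})
        self._listeners.setdefault(attr, []).append(callback)

    def serialize(self):
        """3) Automatic serialisation: walk the metadata registry,
        emitting only values that differ from their defaults."""
        return {k: getattr(self, k) for k, p in self._props.items()
                if getattr(self, k) != p.default}

class Lamp(PropHost):
    energy = Prop("Energy", default=1.0, unit="W", min=0.0)
    distance = Prop("Distance", default=25.0, unit="m", min=0.0)

lamp = Lamp()
seen = []
lamp.bind("energy", seen.append)   # stand-in for a UI slider widget
lamp.energy = -5.0                 # clamped to min=0.0, listener fires
print(seen)              # [0.0]
print(lamp.serialize())  # {'energy': 0.0}
```

The point is that the widget factory (2b) and the serialiser (3) never need per-feature code - they all walk the same metadata registry, which is exactly the "middle layer" that toolkits tend not to ship.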

 

The Short List - Extended Functionality:

* 6) Static/Dynamic Expressions / Drivers, Expression-Evaluation, and/or String Template Substitutions

* 7) A "Datablocks" System + CRUD APIs for managing those

* 8) "User Preferences" system

* 9) "Operator"-like system (for standardised logging / error handling, undo/redo, and background-exec of expensive long-running-processes) - inspired by what Blender uses since 2.5

* 10) Basic Extendable Templates/Base-Implementations for Handling the Following Functionality (Optional)

   * Standard logic for New Project / Load Project / Save Project stuff

   * Standard logic for handling cache files / support data files

   * Built-in per-viewport / view-angle screenshot functionality (with repeatable / savable parametric configuration)

   * Grease Pencil / Built-in Freehand Annotation Tools

   * Node-editor 

* 11) UI Toolkit Extensions + Features

   * Collapsible Panel implementation   (Note: This is surprisingly absent from most UI toolkits in practice)

   * Control-Gain Ladders / Input Convenience mechanisms

   * Popup panel templates

   * "Overlay toolbar / interactive viewport tools" system 

   * Popup info panels / extended menus for Unit Conversions + String Templating functionality
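Similarly, for the "Operator"-like system (item 9), here is a rough Python sketch of what routing every user action through one central dispatcher buys you. The names are hypothetical - this is inspired by, but does not match, Blender's actual operator API.

```python
# Hypothetical sketch of item 9: every user action is an "operator"
# object, so logging, error trapping, and undo/redo come for free
# instead of being re-implemented (or skipped) per feature.

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ops")

class Operator:
    label = "Unnamed Operator"
    def execute(self, ctx):   # returns True on success
        raise NotImplementedError
    def undo(self, ctx):
        raise NotImplementedError

class OperatorStack:
    """Central dispatcher: one place for logging, error handling,
    and the undo history, shared by every feature."""
    def __init__(self):
        self.undo_stack = []

    def run(self, op, ctx):
        try:
            ok = op.execute(ctx)
        except Exception:
            # Standardised error handling: one feature crashing
            # never takes down the app, and always gets logged.
            log.exception("operator %r failed", op.label)
            return False
        if ok:
            log.info("ran %r", op.label)
            self.undo_stack.append(op)
        return ok

    def undo_last(self, ctx):
        if self.undo_stack:
            op = self.undo_stack.pop()
            op.undo(ctx)
            log.info("undid %r", op.label)

class SetValue(Operator):
    """Example operator: set a key in some document context."""
    label = "Set Value"
    def __init__(self, key, value):
        self.key, self.value = key, value
    def execute(self, ctx):
        self.old = ctx.get(self.key)
        ctx[self.key] = self.value
        return True
    def undo(self, ctx):
        ctx[self.key] = self.old

ctx = {"energy": 1.0}
ops = OperatorStack()
ops.run(SetValue("energy", 5.0), ctx)   # ctx["energy"] -> 5.0
ops.undo_last(ctx)                      # ctx["energy"] -> 1.0
```

Background execution of long-running operators would slot into the same `run()` choke-point, which is why having this as a first-class citizen matters so much.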

 

Monday, November 25, 2024

Initial Thoughts on "Shortcut Editing" UIs

Is it just me, or do pretty much all the "Shortcut Editor" UIs in various apps suck?

I don't claim to have a fully fleshed-out + well-considered + tested alternative in this case (unlike with many of the UI problems that I've spent some time toying with), but I just want to throw some ideas out there to kickstart a conversation about these UIs that I think we, as an industry, need to have.

So, without further ado, here are a few ideas for how I'd go about making Shortcut Editor UIs better if / when I try to design one next time.

 

Saturday, November 9, 2024

Principles and Frontiers for Creative Software Tools

Here's the long-promised "Manifesto + Roadmap" for the future of Creative Software Tools that I'd been wanting to publish since May/June 2024, but was stalled from doing so by a bad first encounter with Covid, which sapped my strength for outside-work commitments for a few months.  

NOTE: To get this out, I may just publish it first then amend it later

 ~~~

Dabbling with designing another DCC tool after a short break from that field has reminded me of a whole bunch of untapped / unsolved directions for the future of DCC tools to make them more useful to the humans who use them.

Thursday, October 31, 2024

Wishlist for My Ultimate Photo Management + Editing Tool

Now that my project schedule is freeing up again (and most importantly, I'm finally free of my various university contracts / commitments over the past few years, with the rather onerous IP provisions those came with), my attention has again been turning towards what sorts of projects I may want to start working on in my free time going forwards.

 

The key operative principle though for any such projects I now take on is this:

From now on, any passion-projects I dedicate my free time to (and with full force) will necessarily only be ones that I fully control + own. Unfortunately, experience and hindsight have taught me that merely having something be open source (but still part of someone else's platform / hosted by some other funding org) is ultimately not the answer I once believed it to be.


Note: This is also NOT a firm commitment to actually embarking on building all of these things. But rather, just some open-air brainstorming, hoping that someone will build it all for me (and then not put it behind a hideous subscription-based paywall). Heck, maybe the mere act of brainstorming these designs then releasing them as blueprints to hopefully inspire a whole ecosystem of interfaces should be the actual project!


Enough framing boilerplate. Let's get down to the original topic for today's ramblings:

What my ideal "next-gen" photo management + editing tool solution should look like, were I to go through the effort to set one up.

Thursday, August 8, 2024

Tip: Getting Threaded Conversations Working Consistently Across Your Desktop Outlook Mailbox

Here are some notes on how I've got Outlook set up to make my work mailbox a bit more manageable. Most of these things are probably officially documented "somewhere", but it's nice having a quick guide for getting a setup that seems more sane (for anyone coming from Gmail).

 

This post comes about because, while I had this working in my Inbox, I found that this was not the case for my other folders once I moved threads there. Hence my search for answers. 


Short Instructions  - For Enabling Threading on "Other" Folders (see image above):

1) Go to one of the affected folders
2) Go to "View" tab on Ribbon
3) Enable "Show as Conversations"
4) In the confirmation prompt, click "All Folders" (vs just "This Folder")

I suspect I'll have to go in and redo this for any new folders I add at some point in the future. But at least this gets all the current ones working nicely.


Bonus Tip:  Fixing the need to triple-click on the annoying little triangles to collapse threads

Make sure you've got that "Always Expand Selected Conversations" thing from that menu selected, which fixes that problem (it seems).

Oh, and you need to do that per-folder... unfortunately, this case doesn't show the handy "change for all folders" popup.

Monday, May 6, 2024

Mirrorless Camera Observations - A Few Months In (Part 2 in Series)

As a followup to Part 1, here are my observations on the mirrorless camera having used it for a few more months (and for long periods instead of my old DSLR).

 

Pros:

Unfortunately, this list is currently still a lot smaller than it should probably be... especially given the price tag on this thing.

1)  As mentioned last time, the overall improved sharpness + richer colour range the sensor is capturing is a definite improvement. In particular, being able to use the decent / normal-ish shutter speeds I want at night, and *still* getting the shots with reasonable quality, has been a real benefit when shooting at night and/or in marginal lighting conditions.

2) As expected, when dealing with sharply varying lighting conditions (e.g. shooting birds in a backlit tree, where taking one step to either side can land you in a patch of bright sunlight + a different exposure), having a digital viewfinder is a real improvement, as it lets you see the exposure changes in real time. Also, having this all go via a digital screen, rather than sun-rays passing through a zoom lens direct to eyeball, is a safety measure that's good to have. The same goes for being able to use the viewfinder while recording video. Basically, all the expected benefits of shooting with a mirrorless camera (and consequently having "live view" in the viewfinder) are proving to be just as I expected.

3) When it works, the auto-focus is good for tracking moving objects - Operative word: "when" it works... More on this later.

4) Having 3 dials (or actually 4, if counting the one on the new-style lenses) available for controlling various settings is a big improvement on only having 2! Particularly as the third one can now be mapped to controlling ISO in manual mode, making that mode actually useful if you want to lock in camera behaviours with the other 2 and then use ISO to get the desired exposure (i.e. typically underexposing relative to what the camera's metering thinks the scene requires).


Cons:

This list is unfortunately still a lot longer than I'd like, with a bunch of these being ever-frustrating things that grind you down every time you use it. (Nothing puts these into focus as much as switching back to shooting on the old DSLR for a change, and suddenly no longer having to deal with most of this crap.)

Sunday, January 21, 2024

Mirrorless Camera Observations - First Week Impressions

Recently I bit the bullet and proceeded to get a new camera to supplement my ageing (and also apparently somewhat ailing though still trustworthy) 7D DSLR. After sticking with one camera for over a decade, switching to anything else was always going to come with a learning curve. What I didn't expect though was what some of those learnings would be!

 

Short Summary of Key Points

Pros:

* Between a nice new higher-quality standard lens and new sensor tech (and being FF this time too), images are a lot sharper in general. Landscape shots especially (e.g. treelines against sunsets in particular, but also tiny text on small labels on things in frame) are now often very sharp + clear with this setup, whereas it used to be somewhat hit and miss whether the same applied with my old standard lens setup.

* There are now 3 dials on the body that can be used to adjust various things, along with an additional mappable control-ring on the new-generation lenses. As you'd expect, in Full Manual Mode, these dials have been mapped so that the two that used to be present still control shutter speed + aperture as before, with the new one handling ISO - exactly as I'd been wishing for many years.

* New sensor == Higher ISO levels you can use (and with less obvious grain when that happens)

* Auto-focus available during video recording

* Can use EVF for "through the lens" live-view preview of what you're recording during video recording - which is better for situations where holding at arms length to see the LCD was problematic

* Can also choose to use the LCD screen in  "pop out" mode out the side, with some angling support available

* A whole bunch of new + more advanced autofocus modes + settings to choose from, along with a wider array of focus points that can be used

 

Wednesday, December 22, 2021

QML Quirks - A laundry list of bizarre happenings, bugs, and dodgy incomplete crap

Over the past few years, I've built a fair few UIs using QML (Qt's DSL for writing UI code) - proper ones, including one for a mission-critical / safety-of-life application, and another powering a tool used across a large group of non-CS types. In other words, things that had to work, and not just be interactive "nice to have" toys (aka research prototypes).

Memorably, I was once asked during an interview whether I would recommend using QML, and how it compares to the more battle-tested QWidgets. At the time, I'd only really used it for a bunch of research prototypes (implementing HCI experiments, to be exact), where it presented a great environment for implementing the kinds of dynamic non-traditional interfaces I needed. For that it was great, and saved a lot of time. But, admittedly, it did also throw up a bunch of glitches (e.g. randomly sampling garbage from the wrong texture buffers - or even other applications; particles not showing when chaining several scenes together, but being fine when used in isolation; etc.). At the time, I could only attribute some of these to me perhaps combining a few too many highly experimental techniques in areas where the framework hadn't been tested so thoroughly.

However, knowing what I do now, I would strongly recommend against using it unless you're building something non-mission-critical that's loaded with animations / dynamic effects. Sure, you may be able to knock out a prototype quite quickly - but at some point, often at an ill-timed moment, you will randomly stumble across one or more intractable bugs / quirks from left field that have you scrambling to rewrite / refactor the whole lot.

 

Disclaimer: Just to be clear - I generally do still like the idea of the QML language, and I think it does many things right. However, there are also many ways in which the "declarative paradigm" is really awkward to work with (*ahem* creating dialogs / temporary items / sequential-flow-based-types / etc.) 

More disconcertingly though, implementation-wise, it is seriously lacking in quality / completeness / stability, etc. in enough ways that mean that I cannot in good conscience recommend any new greenfield projects to start adopting it now.

(Plus, the fact that the embedded scripting / logic programming language it uses is Javascript... blegh!)

[Web Browser UX Proposal] Bookmarks 2.0 - Load / Save "Page Snapshots" Instead of "Bookmarks"

It's been a while since I last posted anything here. Since then, the Blogger editor UX has changed a bit (and I'd say, quite detrimentally in a few key areas) making it a pain to write + post anything here. At some point, I'll likely end up converting this blog to a statically generated format, so I'll have full control over the longevity + setup of it, as that's been a recurring issue with Google properties for a while now.

Speaking of UX issues, here's a proposal for a way to solve one of the bigger issues we have with web browsers currently. Specifically, it aims to improve the usability of bookmarks, reduce the reliance on having to keep so many tabs open for certain reasons, and may also help the Internet Archive in its valiant efforts to keep on top of the endless churn of the web.

Note: While writing this, I've been considering setting up a web browser dev env to tinker with doing this myself (and probably fix several dozen other annoyances in the process) - but that's probably just holiday-mode brain trying to take on too many side projects that will have to be dropped as soon as the daily work-year grind starts up again.

Anyway, just thought I'd post this here to get a bit of visibility onto it.


Sunday, August 11, 2019

How (Busy) Software Engineers/Scientists Use Their Computers - A Datapoint

Having spent a few years doing HCI research, I know very well the importance and value of getting data points (any data points at all) about how people set up and organise their workspaces - especially for anyone involved in Operating System UX, Personal Information Management (PIM), File Systems, or Web Browser UX work.

Today, I thought I'd make a quick post outlining my personal workflow, in the hopes that this will be a useful datapoint for anyone out there designing these systems (*wink wink* Microsoft ;).  Admittedly, the way I work is probably a bit of an outlier, but I think there are many elements here that should be of general value. Hopefully this will be of use to some people out there who study this stuff :)

So, without further ado, here is an overview of my typical working environment.


Friday, March 9, 2018

Thoughts About Geometry-Editing UX issues - Multi Object Editing, Edit Modes, Selection/Action Split, etc.

Just a little earlier, I spent some time reviewing and commenting on T54242 regarding the design issues behind making Blender able to have multiple objects in Edit/Sculpt/Paint Modes at the same time. As outlined there, there are certainly a few rather prickly issues that we'd need to resolve to be able to do this.

In this post, I'll go over a few of the key issues we need to contend with here, along with some other general thoughts I've been mulling over for quite a few years now about what IMO makes an effective UI for editing large amounts of geometry (i.e. vertices, control points, etc.)


Tuesday, February 13, 2018

The Battle Against Compartmentalisation - Curriculum Design Challenges

Recently there has been a lot of discussion about the need for ethics courses as part of Computer Science / Software Engineering (or really, any form of engineering) degrees, for obvious reasons. While I don't dispute the need for such courses (and indeed, I firmly applaud and welcome the introduction of these as integral parts of the curriculum), experience suggests that we do need to think about how we're presenting such material to students.

Specifically, I have doubts about whether the standard model of "let's just include a course in there to tick that requirement off" is actually the most effective way of doing it. Examples of such courses include "Programming in Matlab", "Ethics", "Security", and to a lesser degree, parts of HCI/Usability/UX. Many of these also carry a bit of added baggage in that they are often designated as "required" courses for a particular degree (more on this later).

From personal experience (and from observing students over many years), topics like this cannot be easily "compartmentalised" into a "tidy little thing that you think about separately from other things". That is, you can't really say, "Here is the body of knowledge you need to know. Memorise it, and pull it out of a hat when you need it". Instead, true understanding and mastery of such material is actually only achieved by adopting the "fundamental mindset" involved. For example, a few crude examples of fundamental mindsets for the aforementioned fields are:
     * Security - Trust no inputs - Treat everything that the "user" inputted as being potentially compromised, and a potential attempted attack.
      - Alternative mindset: How can I hack/crack this?

     * Ethics - How could this go wrong in the worst case? Who could get hurt/harmed, how bad would that be, and are there better alternatives that won't cause anyone that sort of trouble?
      -  Alternative Summary 1: Don't be a jerk
      -  Alternative Summary 2: Would you subject yourself and your loved ones to this? Your mother? Your children/future-children/grandkids/great-grandkids?

     * Usability - Humans are clumsy and stupid creatures of habit (with limited memories, limited attention spans, limited physical capabilities, and a whole bunch of other handicaps).   The problem therefore becomes - how do we try to reduce confusion and/or the potential for things to go wrong so that the bumbling apes can still get their jobs done.
       - Alternative Summary: Could I still use this thing when drunk, sick, injured, all of the above, and I couldn't look up the code/docs to check what's going on?

     * Programming - Computers are idiots - They can perform complex operations, lots of them, very very fast. But you have to precisely tell them what to do, when to do it, and how to do it first.
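As a tiny (purely illustrative) Python snippet of the "trust no inputs" mindset in action - the point being that this habit has to appear at every boundary where external data enters the program, which is exactly why it can't be compartmentalised into a single "security course" box:

```python
# Illustrating the "trust no inputs" mindset: the same validation
# habit has to appear everywhere user data crosses a boundary --
# it cannot live in one "security module" you call once and forget.

def set_volume(raw: str) -> int:
    """Parse a volume level typed by the user (hypothetical example)."""
    try:
        value = int(raw)
    except ValueError:
        # Never assume the input is even the right *kind* of thing
        raise ValueError(f"not a number: {raw!r}")
    if not 0 <= value <= 100:
        # ...and never trust that it's in range either
        raise ValueError(f"out of range: {value}")
    return value

print(set_volume("70"))        # 70
# set_volume("70; rm -rf /")   # rejected, not silently passed along
```

The "how can I hack/crack this?" alternative mindset is just this same check viewed from the attacker's side: every place this function is missing is a place to probe.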

However, you can only adopt/absorb a fundamental mindset if you spend time developing a new set of skills and/or adopting new ways of thinking about problems.


Sunday, January 8, 2017

Voice Controlled AI Devices - A Reaction Post

In response to the article about voice-controlled boxes being activated by a news broadcast covering how a kid managed to buy a dollhouse + cookies off Amazon via voice control.

Interpreting sound has never been an easy thing. Not for humans, and definitely not for computers! If you actually think about it, it's not that hard to imagine how hard it is for a computer to understand speech and sounds. For example:
   * How many times have you had trouble understanding someone's accent? Or had a misunderstanding because you misheard someone's muffled speech over a noisy/faint/crackling/unreliable phone?  Well, guess what: for a computer doing voice recognition, the only input it's got is the sound coming in from the microphone... which of course is mixed in with everything else going on sonically in that environment (e.g. TVs, smartphones, gaming consoles, music players, rangehoods, kitchen equipment, aircon, running taps, open windows/traffic-noise/neighbours, bickering flatmates, etc.). And that's not to mention that the users may be out of range of the microphone, or the microphones may be cheap trash bought at bargain-basement prices and wired backwards...

   * How many times have you been watching a film or TV show, and found yourself lurching for the fire escape as a siren sounded on screen? Or reached for your phone, only to realise that it wasn't your phone ringing, but that of the lady at the next table? Or perhaps you've responded to someone calling your name, only to find that a stranger had been calling another stranger, and not you (the now slightly embarrassed sucker trying to pretend you didn't just not-answer to your name). Clearly, even us humans get it wrong quite often, but at least we often have the benefit of *context*, the ability to use our other senses to disambiguate the situation, and a few other "on-the-fly" techniques. (This probably goes some way towards explaining why people like me really don't like answering phonecalls or having to call people on the phone...) Anyways, if it's hard for us humans to get this stuff right, expect the computers to have an even harder time disambiguating all of this!


Inspired by all this, I wondered what a "day in the life" of one of these voice recognition boxes would be like, when deployed in a domestic environment that's kindof far from the "idealised model-human" fantasy that designers often find themselves falling back on... The answer: it would feel like being a lost and isolated operative thrust into a war zone - "hostile enemy territory"...

Saturday, September 17, 2016

Annoying Habits of Computer Science/Software Engineering (Students) Designing UIs...

Over the past few years, I've had the opportunity to have a front-seat view of how groups of 3rd year computer science/software engineering students approach the problem of designing a UI. It has been said in a few places (citation needed) that the way group projects end up playing out for class projects and in real life is largely similar(ly awful). Thus, given that many of these folk will end up in the workforce in the next few months to a year's time as the newest batch of "professionals", if what I've seen is anything to go by, no wonder we're kindof perpetually doomed...

It's also no wonder then that we're often burdened with so many absolutely terrible systems for what-should-be-mundane/trouble-free processes like activating cards or making use of various services for the first time, etc. Or, nastier problems like the current religious dogma + regime of "automatic software updates" that regularly foist themselves at you every other day, usually at the least convenient times, and from time to time leaving a colossal mess behind when they're done.


Sunday, May 8, 2016

CHI2016 - Sketching Papers, and General Discussion of Interesting Research Directions

The annual CHI (Human-Computer Interaction) conference is on this week in San Jose. As one of the "big + important" conferences in Computer Science research, it's always interesting/important to keep an eye on what's happening there to see if any interesting things come out of it. So, I duly started checking out the accepted papers, before stumbling across the "sketching" section.

My first thought was, "woah... they have a section on interfaces for sketching tools?!", followed quickly by, "I wonder if there's anything of interest there...?" It turns out that there are two papers here, both of which fall quite squarely into the frame of the type and style of research that I love doing most (i.e. the "fun stuff" I'm doing with Grease Pencil + Pose Sculpting/Sketching, vs the empirical work I currently do for my PhD).

So, what were these papers?
1)  "Skuid: Sketching Dynamic Illustrations Using the Principles of 2D Animation"
2)  "Storeoboard: Sketching Stereoscopic Storyboards"


Sunday, March 27, 2016

Driver Workflow Improvements - "Property Eyedropper" for Quicker Setup, and Other Features

It's taken a few days of intense hacking and investigation, but late this afternoon I finally succeeded in implementing a new workflow for setting up drivers that should help streamline the process. Many have been requesting a faster way of doing this for years, and it's been on my todo list for a while; this Easter I finally got around to it.

Here's a little demo of how this now works:




Thursday, March 10, 2016

Workspace Upgrade: New 27'' Monitor

Today, I finally went out and got myself a nice big 27-inch 4K IPS monitor for use at home. I'd been stewing over and investigating this move for several weeks now (though the idea of getting a larger external monitor for use at home has probably been floating around for over half a year now).