Here's the long-promised "Manifesto + Roadmap" for the future of Creative Software Tools that I'd been wanting to publish since May/June 2024, but was stalled from doing so when a bad first encounter with Covid sapped my strength for taking on outside-work commitments for a few months.
NOTE: To get this out, I may just publish it first and then amend it later.
~~~
Dabbling with designing another DCC tool after a short break from that field has reminded me of a whole bunch of untapped / unsolved directions for the future of DCC tools - directions that would make them more useful to the humans who use them.
PART 1 - UNTAPPED / UNEXPLORED OPPORTUNITIES
1.1) Always start with an infinite / unbounded canvas
Across every major category of software, we see the same story: when the user starts up the application, they are either presented with a pre-sized box, OR are forced to decide up front and tell the computer what the size of that box / grid should be.
For a medium that is actually unconstrained by the limitations of physical media (i.e. we are not actually using physical pieces of paper here, which necessarily have fixed sizes and dimensions, and which must then be sorted relative to each other in some fixed order), does it not strike anyone how odd it is that:
* Our word processors start with a default screen that shows a portrait-orientated A4 sheet of paper, with the cursor placed at a fixed position from the edges due to the page margins. WHY?
* Our drawing / painting tools by and large require defining a bounded canvas of a particular size to draw on, with the size corresponding to whatever final output page / image-size is required. WHY?
* Our music notation applications present us with a set of staves divided into ~32 bars (or whatever number), with each bar filled with a whole-bar rest by default, and bar lines that rigidly divide everything into a grid into which notes must be slotted. WHY?
* For DAWs / MIDI sequencer rolls / etc. - again, there's this obsession with first defining a number of cells + divisions within those cells before being able to start creating.
WHY?!
Why must it be like this?!
I strongly believe that we should have maximum flexibility to just create as we want - plonking down whatever we want into the machine as fast as we can to capture what we want to express, unencumbered by what (loosely paraphrasing an early Pixar animator who used to be a puppeteer) amounts to "trying to dance while filing your taxes, one step at a time".
Why can we not just plonk stuff directly anywhere on the infinite space, wherever we please? Then, as our ideas develop, we could carve these snippets up into chunks (which we can start dividing into physically-sized / size-restricted cells), and either move them around relative to each other (to try out how they fit), or add connecting lines between them to test out different ways of flowing between all the various snippets.
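To make this concrete, here's a minimal sketch (in Python, with entirely hypothetical names - this is not any existing tool's data model) of what an "infinite canvas first" document model could look like:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Snippet:
    content: object                    # text, strokes, notes, audio clips, ...
    x: float                           # free position - no page, no grid
    y: float

@dataclass
class Chunk:
    snippets: list                     # grouping carved out once ideas firm up
    page_size: Optional[tuple] = None  # physical sizing applied last, if ever

@dataclass
class Link:
    src: Snippet                       # connection between snippets, for
    dst: Snippet                       # testing out different flows

@dataclass
class Canvas:
    snippets: list = field(default_factory=list)
    chunks: list = field(default_factory=list)
    links: list = field(default_factory=list)

# Plonk things down anywhere first; structure comes later:
canvas = Canvas()
canvas.snippets.append(Snippet("chorus melody", x=-40.0, y=9001.5))
```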
1.2) Built-In Version Control / Evolution History Graph
Closely related to being able to create links between different snippets is the idea of a built-in Version Control / Evolution History Graph tool that you can use to experiment with different versions of a project (i.e. trying out different versions of certain aspects, switching between them, etc.) and keep track of all this without having to use an external tool (which then requires reloading your work each time).
Working with Git opened my eyes to the possibilities of being able to "try out" different versions / approaches, and to easily move between them to evaluate their merits. But trying to do this with most other tools quickly turns out to be prohibitively difficult to even attempt!
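As a rough illustration, here's a minimal sketch of what such an in-application history graph might look like - hypothetical names throughout, and obviously a real implementation would store deltas rather than full state copies:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class Version:
    state: dict                   # snapshot of (a subset of) project state
    parents: list = field(default_factory=list)
    label: str = ""
    id: str = field(default_factory=lambda: uuid.uuid4().hex[:8])

class HistoryGraph:
    def __init__(self, initial_state: dict):
        root = Version(state=dict(initial_state), label="initial")
        self.versions = {root.id: root}
        self.branches = {"main": root.id}
        self.current = "main"

    def commit(self, state: dict, label: str = "") -> str:
        # Record a new version on the current branch
        parent = self.branches[self.current]
        v = Version(state=dict(state), parents=[parent], label=label)
        self.versions[v.id] = v
        self.branches[self.current] = v.id
        return v.id

    def branch(self, name: str):
        # Fork a new line of experimentation from the current version
        self.branches[name] = self.branches[self.current]
        self.current = name

    def checkout(self, name: str) -> dict:
        # Switch branches and return that branch's state for reloading -
        # all inside the tool, with no external round-trip
        self.current = name
        return self.versions[self.branches[name]].state
```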
1.3) Grease Pencil / Built-In Annotation Tools
My experiences building and integrating Grease Pencil into Blender taught me that an in-built annotation tool is a powerful, invaluable capability for every DCC tool to have - one that works across all areas of the tool, saves annotations as part of the project files, and allows them to be shared + potentially used as an input method that user-defined scripts / addons can take advantage of.
I'm biased about this, but yes, I think more tools should have something similar built in!
1.4) Batch Mode
Another idea I've been tinkering with for a few years is a "batch" mode for experimenting with the effects of various variables. This is sort of like an "Evolution History Graph" (+ built-in experiments notebook) rolled into one, but with less manual work reverting/changing configs - instead, you can look at all of the configurations simultaneously.
The key point is being able to have the tool display a grid where it automatically calculates the effects of manipulating multiple variables/parameters across a range of values for each, saving multiple snapshots of the thing under test in its various states so you can examine the impact of each change (e.g. looking at the object you're modelling from a set of different viewports/vantage-points, or having an "interactive viewport" where you can tumble the object to an arbitrary viewpoint interactively (as if doing it in the main viewport), with all the other configurations then also showing what happens when you do that).
This is not restricted to 3D/CAD modelling situations: it also applies to things like checking the effect of adjusting a padding value / layout calculation in some UI code, where you need to perform a series of actions to bring up the element under test (under different configurations - e.g. different font sizes / adjustable parameter combinations), and save off a wall / grid of these configurations to compare them more quickly. (Yes, I recently had a situation where I wished I had this exact tool.)
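For illustration, here's a minimal sketch of the parameter-sweep core such a batch mode would need; render_snapshot is a hypothetical callback standing in for whatever the host tool uses to produce a snapshot:

```python
from itertools import product

def batch_grid(parameters: dict, render_snapshot):
    """parameters maps each parameter name to the list of values to try.

    Calls render_snapshot(config) once per combination, returning a list
    of (config, snapshot) pairs for display in a comparison grid.
    """
    names = list(parameters)
    results = []
    for values in product(*(parameters[n] for n in names)):
        config = dict(zip(names, values))
        results.append((config, render_snapshot(config)))
    return results

# Example: sweep font size against padding for a UI-layout check
snapshots = batch_grid(
    {"font_size": [10, 12, 14], "padding": [2, 4, 8]},
    render_snapshot=lambda cfg: f"<image for {cfg}>",  # stand-in renderer
)
```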
Note: Implementing something like this requires a working implementation of section 2.3 (below) - otherwise, we'd need some kind of "automation agent" to drive the UI in the background to get these results.
PART 2 - IMPLEMENTATION CONCERNS
2.1) "Standard" Functionality that the Programming Language / Application Frameworks Should Just Provide Already
It's 2024 already, and yet every time I start a new project, I'm faced with a massive mountain to climb to rebuild all of the "standard" infrastructure needed for building an actually usable tool.
These things include:
* 2.1.1) Blender-like UI toolkit that focusses on "properties" instead of "widgets", with auto-bindings + auto-generated widgets with sane/project-customisable default behaviours based on the properties.
* NOTE: I practically built something like this in "Python + PyQt5 + QML" for TPMS Studio (using a lot of metaproperty magic). It worked relatively well, though there were also a whole bunch of rough edges. Maybe I'll rebuild a new version of this (taking into account some of the limitations that plagued the old one) and release it, as I think it's important that the industry at large has access to something like this to see what they're missing.
* NOTE: For all the newfangled web-bro folk - what you call "Reactive" UI design patterns only partially approaches what we had before we introduced our property engine. In other words, you're still a generation behind!
* As part of this Property-Widget system, the Property definitions include things like (see the sketch after this list):
* Physical Units (and automatic conversion between the common ones)
* Range Constraints (i.e. absolute min/max values, but also "soft" min/max + adaptive defaults based on state of other variables)
* Linkable Datablocks system
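As a rough illustration of what I mean by "properties instead of widgets", here's a minimal sketch of a property-centric definition - hypothetical names, loosely in the spirit of Blender's bpy.props rather than any real API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FloatProperty:
    name: str
    default: float = 0.0
    unit: str = "none"                 # e.g. "length" -> auto mm/cm/m display
    hard_min: float = float("-inf")    # absolute clamp on the value
    hard_max: float = float("inf")
    soft_min: Optional[float] = None   # "soft" UI slider range
    soft_max: Optional[float] = None

    def widget_spec(self) -> dict:
        # The toolkit auto-generates + auto-binds a widget from this spec,
        # instead of the programmer hand-wiring a widget per property.
        return {
            "type": "slider",
            "label": self.name,
            "range": (self.soft_min if self.soft_min is not None else self.hard_min,
                      self.soft_max if self.soft_max is not None else self.hard_max),
            "unit": self.unit,
        }

# A tool declares its parameters as properties...
wall_thickness = FloatProperty("Wall Thickness", default=2.0, unit="length",
                               hard_min=0.0, soft_min=0.5, soft_max=10.0)
# ...and the UI layer derives the widget automatically:
print(wall_thickness.widget_spec())
```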
* 2.1.2) "Template Variables / Driver-Expressions" system for making fields depend on each other (or on other global constants/settings - either from the application, or from the current project), and/or a way to do live / one-off instant expression evaluations instead.
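A minimal sketch of how such driver expressions might be evaluated is below; a real system would use a dependency graph and a sandboxed parser rather than a raw eval(), and all names here are hypothetical:

```python
import math

def evaluate_driver(expression: str, properties: dict) -> float:
    # Expose only the project's properties + math functions to the expression
    namespace = {"__builtins__": {}, **vars(math), **properties}
    return eval(expression, namespace)  # sketch only - not hardened!

properties = {"width": 200.0, "margin": 10.0}
# A field driven by other fields, re-evaluated whenever they change:
content_width = evaluate_driver("width - 2 * margin", properties)
print(content_width)  # -> 180.0
```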
* 2.1.3) Auto-serialisation engine for dumping + reloading project state + doing version-patching (i.e. something like a "Serde + Blender SDNA + Skyline-X Preference Sets" mash-up). Time and time again, I've learned that one of the best favours you can do yourself is to set something like this up very early on, as it saves you so much effort (+ gives you an easy way to go in and figure out what may be going wrong by doing easy state dumps).
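As a rough sketch of the idea (assuming dataclass-based state and JSON dumps - real "SDNA"-style schema capture is far more involved than this):

```python
import json
from dataclasses import dataclass, asdict

FILE_VERSION = 2

def patch_state(data: dict, from_version: int) -> dict:
    # Upgrade older dumps step by step to the current schema
    if from_version < 2:
        data.setdefault("units", "mm")   # hypothetical field added in v2
    return data

@dataclass
class ProjectState:
    name: str
    units: str = "mm"

def save(state: ProjectState, path: str):
    # Human-readable dumps double as a debugging aid for state inspection
    with open(path, "w") as f:
        json.dump({"version": FILE_VERSION, "state": asdict(state)}, f)

def load(path: str) -> ProjectState:
    with open(path) as f:
        blob = json.load(f)
    data = patch_state(blob["state"], blob["version"])
    return ProjectState(**data)
```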
* 2.1.4) User Preferences / Settings system (e.g. for standard paths, and default settings) - See also 2.2
* 2.1.5) Tools / Operators system - A standardised system for handling all the inevitable infrastructure around logging which tools got activated and when, handling any error conditions (or success status reports) they emit, and, importantly, offloading expensive operations so they don't block the main UI thread (i.e. so we can show interruptible progress-report indicators).
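A minimal sketch of such an operator wrapper follows - hypothetical names, and glossing over how progress events get marshalled back onto the UI event loop:

```python
import threading

class Operator:
    def __init__(self, name: str):
        self.name = name
        self.cancelled = threading.Event()   # set by the UI's cancel button

    def execute(self, report_progress):
        raise NotImplementedError

    def run_in_background(self, report_progress, on_done):
        # Run off the UI thread so the progress indicator stays interruptible
        def worker():
            try:
                result = self.execute(report_progress)
                on_done("FINISHED", result)
            except Exception as err:         # surface errors as status reports
                on_done("ERROR", err)
        threading.Thread(target=worker, daemon=True).start()

class RemeshOperator(Operator):
    def execute(self, report_progress):
        for i in range(100):
            if self.cancelled.is_set():      # honour user interruption
                return None
            report_progress(i / 100)         # drives the progress indicator
        return "remeshed"
```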
* 2.1.6) Built-in "notebook" system for adding documentation / notes about what you're doing in your project
* 2.1.7) Built-in "viewport screenshotting tool" with state-snapshotting metadata saved into the file (as per TPMS Studio / draw.io). The idea is that you can define a fixed region (which gets saved to the file, and can be repeatedly reused) for making screenshots, with auto-naming based on a user-specified template/format, AND with the settings used to produce each screenshot saved into the image itself (so you can directly load the screenshot images back into the program and start/resume editing from that exact saved state).
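For illustration, here's a minimal sketch of the "state saved into the image" part using Pillow's PNG text chunks; the state dict and naming template here are hypothetical:

```python
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_screenshot(image: Image.Image, state: dict, template: str) -> str:
    meta = PngInfo()
    meta.add_text("app_state", json.dumps(state))   # full settings snapshot
    path = template.format(**state)                 # user-specified naming
    image.save(path, pnginfo=meta)
    return path

def resume_from_screenshot(path: str) -> dict:
    # Reload the exact editing state from the image itself
    return json.loads(Image.open(path).text["app_state"])
```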
2.2) Tool Settings vs Application Defaults
Towards the end of my time with Blender (and again with my most recent CAD tool project), one need that has repeatedly come up is having some way of saving settings/presets for various tools. More work is needed on figuring out good ways of dealing with this (i.e. is it per-project, or bound to the machine the tool runs on, or a mix of both, or something else?). One possible resolution scheme is sketched below.
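For example, a minimal sketch of a layered lookup, where project overrides beat per-machine preferences, which beat the tool's factory defaults - just one possible policy, not a claim about how it should be:

```python
from collections import ChainMap

factory_defaults = {"bevel.segments": 2, "bevel.width": 1.0}
machine_prefs    = {"bevel.segments": 4}    # bound to this machine
project_settings = {"bevel.width": 0.5}     # travels with the project file

# First layer containing the key wins:
resolved = ChainMap(project_settings, machine_prefs, factory_defaults)
print(resolved["bevel.segments"], resolved["bevel.width"])  # -> 4 0.5
```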
2.3) Ability to fork the current "application state" so that we can run expensive operations in the background on subsets of the state (without affecting what the user is currently working on)
Towards the end of my time working on Blender, this is something that my original Depsgraph + Evaluation Graph design was supposed to handle - the separate "Components" it entailed would have let us just calculate stuff with a smaller subset (instead of duplicating the entire database, or resorting to the confusing-nonsense-fallacy that is "Copy on Write"). The main problems I was struggling with were how the ownership of all these component evaluation contexts would work, and how to flush "animated" state (on properties) back to the main DB that the UI was bound to.
On TPMS Studio, I ended up building an alternative/stupider system (based on just building a separate copy of the current app state, without binding up any of the UI hooks). In that case, my hands were partially tied: too much of the state was tied into that central backbone, so I couldn't easily tell it not to instantiate all of the sub-objects in the application state tree.
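For what it's worth, here's a minimal sketch of the shape of the idea - deep-copying just a subset of components for a background job; the hard parts (ownership, and flushing results/animated state back to the main DB) are exactly what remains unsolved:

```python
import copy
import threading

class AppState:
    def __init__(self):
        # Only the evaluation-relevant components; UI bindings live elsewhere
        # and are deliberately never copied (hypothetical placeholder data)
        self.scene = {"mesh": ["tri0", "tri1"], "modifiers": ["subsurf"]}

    def fork_subset(self, keys):
        # Deep-copy just the components the background job needs,
        # rather than duplicating the entire database
        return {k: copy.deepcopy(self.scene[k]) for k in keys}

def run_in_background(state: AppState, compute, on_done):
    snapshot = state.fork_subset(["mesh", "modifiers"])
    threading.Thread(target=lambda: on_done(compute(snapshot)),
                     daemon=True).start()

# The user keeps editing state.scene while compute() runs on the snapshot;
# how results flush back to the main DB is the open problem described above.
```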
Hopefully I'll eventually crack this problem in one of my next systems.