Thursday, June 20, 2024

Thoughts About "AI" (Winter 2024 Edition) - AKA: No, I do NOT want to have to "talk" to your "chatbot"

I'm briefly interrupting coverage of my Music Visualisation Project for a quick rant about the topical "AI" issues that are all the rage right now.

 

My current position on all this "AI" hype is:

1) TBH, I bloody HATE all this "me too" bandwagon-jumping crap that's going around at the moment, and hope it all blows over sooner rather than later - just like the "Crypto", "NFT", and "Metaverse" fads before it. The sooner the better!

See also this "supremely on point" blog post ;) - https://ludic.mataroa.blog/blog/i-will-fucking-piledrive-you-if-you-mention-ai-again/

 

2) The UX of all these "AI" tools is fundamentally flawed, i.e.

     "I do NOT want to have to fucking 'talk' to your bloody 'chatbot' to do stuff!"

 

3) The majority of all this "AI" hype is being poured in all the wrong directions:

    "We should be focussing our efforts on helping people do what they cannot do otherwise (i.e. augmenting human abilities),  NOT trying to replace them  (i.e. destructive misery causing)"

    That right there is perhaps the best way to sum up the ethical line / standard I use to decide what I spend my time working on. I'm only interested in working on stuff that betters humanity's ability to do things they otherwise wouldn't be able to do without the technology. Other stuff (e.g. ad networks, DRM, fintech, killer robots, facial recognition, tracking + surveillance tech, making people / industries / etc. "redundant") I refuse to work on (and really, I do a categorically *awful* job at anything I'm not interested in...)

 

4)  In that light, will I work on or play with AI stuff at some point?

     Short Answer:  If AI is the right tool for the job, I will consider it.

     Operative words: "right tool"

     So far, none of the problems I've been working on have required reaching into that toolset, so I haven't bothered delving too deeply into it. But if an opportunity arises where AI presents a better solution than we could otherwise achieve, I will consider it then.

     Prime Example: With some of the image generation + editing tech out there now, we finally have a set of powerful tools for fixing a whole bunch of previously prohibitively difficult-to-fix problems, giving us the ability to do spot fixes for defects that would've previously ruined many images / videos. In that sense, these user-guided "repair" tools are precisely the "powerful magic fix-it tools" that we've all dreamed of having all these years, and so, by my previously stated principles, they may well be the right tool for the job in those cases. But using these tools to construct entire fabrications from scratch, trained off everyone's data (however ill-gotten)? Nope - that's pretty much something that should not be done!


Back to point 2...

The fatally flawed UX premise of AI:

AKA  - "I do NOT want to have to fucking 'talk' to your bloody 'chatbot' to do stuff!"

 

It's weird that, just when we'd almost eliminated the need to actually talk to people to get things done, we're now trying to undo 30 years of progress! (Except now it's not people on the other end, but dumb, infuriating machines...)

 

Surely I can't be the only one out there who's always been mildly panicked at having to make phone calls (or answer them)... or worse, having to tell a cashier a complex order (ugh... the planning + translating + remembering that requires... ugh)...

Or even worse, those old horror-show back-and-forth merry-go-round calling marathons where you become the de-facto MITM ringleader / organiser trying to get everyone to agree on a time to meet up... 😖

 

And yet, here we are in 2024.

Every "tech" company, their legions of minions, and grifters spinning up side-hustles to try to cash in are all trying to shoehorn "AI chatbots" into *everything*, as if there's a "giant unmet need" for being able to "just tell the computer what you want in natural language"...

 

Err nope! Nope. NOPE!

Maybe it's because I'm old enough to still remember how badly the "voice recognition chatbots" of the 2000s worked (or even up until a few years ago - you'll still easily find them on the support lines of any random big-enough company...)

Or maybe it's because I'm not a "Suit" (aka one of those manager / MBA types) who spends all day on the phone yelling at people to do stuff (and prefers that mode of "getting things done")

 

I mean, maybe it's because I'm one of those old-school folks who *can* design a system using nothing but a pencil + a wad of paper (or really, just 1-2 sheets in most cases, if I'm being honest) and no internet connection (but preferably with direct lines of communication to actual expert end users of said system, if a good outcome is desired)... For folks like me, delegating work to a "minion" or "underling" often ends up being worse than just finding a way to type faster and getting it done myself, with less frustration + effort spent translating + breaking down the task (+ solving all the tricky bits they encounter along the way)...

I mean, by the time I've done all that, I've practically done 90% of the bloody work already!

(From the various anecdotes I've read, Milt Kahl fell into much the same category - a true control-freak with the skill of a master craftsman)


/end rant

 

Hopefully the next post here will be back to the usual programming - i.e. a more "positive" and "constructive" topic, of which there are several I could write about, given time + energy (damned COVID).
