“The sad thing about artificial intelligence is that it lacks artifice and therefore intelligence.” —Jean Baudrillard
When Jim Jones and his 900+ followers knocked back their deadly shots of diphenhydramine, promethazine, chlorpromazine, chloroquine, diazepam, chloral hydrate, cyanide, and—to the eternal chagrin of the Kraft Heinz company, whose cherished kids’ sugar-water brand was incorrectly but permanently welded to the event—Flavor Aid, there had to have been at least one survivor thinking, now, finally, at long last, the scales will fall from their eyes.
In a manner of speaking it was true. Jones’ followers were notably quieter after their deaths. If Jonestown were to happen today, we could train up a chatbot to keep the tumult going post-poison.
Too soon? Well, get used to it. There is not a single corner of the human spirit which will go unexploited in the quest for more meat for the AI grinder. Or, if I may borrow a phrase from Linda McMahon, the A1 grinder. Linda may be little more than a perambulatory mollusk, but she can malapropriate with the best of them, and in accidentally relabeling the algorithmic fad sweeping the nation like MySpace or the pet rock, she did us an enormous favor. In choosing A1, the hoary old steak sauce originally created for the king we never had—George IV—Linda hit, piñata-like, on an excellent metaphor: flavorful if tiresome in small amounts, repulsive at scale, utterly devoid of nutritional value.
I have a technical matter I wish to get off my chest: The unwanted interlocutor that is being shoved into every orifice of our lives, which threatens to turn every textual exchange into lengthy, largely unread clots of soggy and redundant language, which will allow the scamming industry to operate at heretofore unimagined scale and effectiveness, which will give talentless managers license to slough off highly professional creative workers—these hallucinating sentence machines are not AI in any sense of the word. I mean, they are certainly artificial, but they are no more intelligent than a mechanical watch or my Honda Fit. It’s an algorithm. You could work through it yourself using paper and pen, if you had millions of years to spare.
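If you doubt the paper-and-pen claim, here is the whole trick in miniature: a toy next-word table and a weighted coin flip. The words and counts below are invented for illustration; a real model has billions of learned weights rather than a hand tally, but the procedure is the same arithmetic.

```python
import random

# Toy next-word counts, the kind you could tally by hand.
# (Invented for illustration; not drawn from any real corpus.)
counts = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 2},
    "dog": {"sat": 1, "barked": 3},
}

def next_word(word, rng):
    """Pick the next word with probability proportional to its count."""
    options = counts[word]
    words = list(options)
    weights = [options[w] for w in words]
    return rng.choices(words, weights=weights)[0]

def generate(start, length, seed=0):
    """Emit words one at a time until we hit the length cap or a dead end."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length and out[-1] in counts:
        out.append(next_word(out[-1], rng))
    return " ".join(out)

print(generate("the", 3))
```

Scale the table up by a few billion entries and swap the hand tally for gradient descent, and you have the general shape of the thing. At no step does understanding enter.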
So-called generative AI isn’t even new. ELIZA, the first such program, dates all the way back to the mid-1960s. ELIZA would never be mistaken for ChatGPT, but the former is the Wright Brothers’ Flyer to the Airbus A350—distinct in appearance, but sharing DNA.
Except that neither ELIZA nor ChatGPT does anything particularly useful, whereas an Airbus A350 can carry you to someplace where people aren’t so goddamned excited about ChatGPT.
So generative algorithms have been kicking around for a minute. I had a program for my old Mac in the mid-90s that would generate paragraphs and pages of text meant to read like it’d been excerpted from Immanuel Kant. None of it made any sense, but then neither does Kant. It was impressive, though—there was no limit to the amount of garbage it could generate, and it was grammatically correct. So why did Kant Generator Pro fail to spark a revolution?1
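The approach fits in a few lines: recursively expand a hand-written grammar, picking among alternatives at random. The rules below are invented stand-ins for illustration, and I make no claim that they resemble the program’s actual grammar files.

```python
import random

# A toy context-free grammar in the spirit of Kant Generator Pro.
# (These rules are invented for illustration, not taken from the program.)
grammar = {
    "SENTENCE": [
        ["NP", "is the condition of the possibility of", "NP", "."],
        ["we can deduce that", "NP", "is a representation of", "NP", "."],
    ],
    "NP": [["the", "ADJ", "NOUN"], ["our", "NOUN"]],
    "ADJ": [["transcendental"], ["a priori"], ["noumenal"]],
    "NOUN": [["intuition"], ["manifold"], ["judgement"]],
}

def expand(symbol, rng):
    """Recursively expand a symbol; anything not in the grammar is a terminal."""
    if symbol not in grammar:
        return [symbol]
    out = []
    for part in rng.choice(grammar[symbol]):
        out.extend(expand(part, rng))
    return out

rng = random.Random(7)
print(" ".join(expand("SENTENCE", rng)))
```

Grammatically impeccable, endlessly prolific, meaning-free: the whole pitch, thirty years early.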
Well I reckon in part because a revolution wasn’t needed. It was the mid-90s. We were busy foolishly adopting the Internet. These days it’s no longer the mid-90s—perhaps you’ve noticed. We’re not adopting anything except PTSD and it’s been a good long while since Silicon Valley showed us anything worth oohing and ahhing over. Stock prices are in grave danger of not exploding irrationally and failing to provide massive paper wealth to investors whose last eight-ball is just a faint patina of dust on a Loverboy CD case.
And what did our genius class do, faced with a long future living with the slim margins that have made the restaurant industry a distant memory?2 They took a page from the past. ELIZA was distinct from many of the generative systems that followed—right up until the current crop—for one simple reason: the chatbot front end.
I’d like to avoid wading into a bunch of academic papers, but I’ll provide citations to anyone who wants them. In brief, chatbots do basically one thing: convince people that there are things going on under the hood that, well, aren’t. ELIZA looks ridiculously primitive by comparison to ChatGPT, but that’s only relative. In 1966 it convinced a lot of people it was intelligent, and the most significant factor in that perception was that people talked to it (well, typed to it), and it talked back. Nowadays this is known as the ELIZA effect, and it’s considered an anti-pattern—that is to say, the use of chatbot interfaces is considered deceptive, and ethical software developers tend to avoid them. The same goes for avatars and voice interfaces, especially cute ones. Those automated calls from insurance companies and banks, the ones that almost seem to be from humans given the naturalistic pauses and intonation—they use this technology for a reason. And it’s not because programmers are showing off. It’s because these sorts of interfaces disarm your rational mind.
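For the curious, ELIZA’s best-known move is small enough to show: match a keyword pattern, then reflect the user’s own words back as a question. The rules below are a paraphrase for illustration, not Weizenbaum’s actual script.

```python
import re

# Pronoun swaps so the user's words can be mirrored back at them.
reflections = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Keyword patterns and response templates, tried in order.
# (Invented examples in the style of the original, not its real rule set.)
rules = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    return " ".join(reflections.get(w, w) for w in fragment.lower().split())

def respond(line):
    for pattern, template in rules:
        match = pattern.match(line)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # the all-purpose fallback

print(respond("I am worried about my future"))
```

That is the whole magic act: pattern, substitution, question mark. The modern version has more parameters, not more mind.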
Which is frankly a necessary move because otherwise smart people would look at this “tool” with the skepticism it deserves. Everyone is running around trying to find a problem for which the quasi-random generation of text is the solution. This is not how tools work. People didn’t have hammers for a decade before someone realized they would be useful for driving nails.
People sure as shit would not be accepting A1 as a replacement for real, effective tools like search engines. And yet here we are. Search, if one were to be uncharitable, grants access to falsehoods and requires discrimination from the user; A1 generates falsehoods and asks the user to simply accept what is given them. This is not an improvement. Quite the contrary, it is a baffling capitulation to an aggressively marketed toy/weapon.
But search is only the tip of the assberg. Our institutions—corporations, schools, governments—are falling over themselves to “get ahead” of A1, even though it’s not at all clear where “ahead” is. All that is certain is that the desire to be in the top spot is trumping all other considerations—veracity, authenticity, originality; all are disposable, all are to be sacrificed at the altar of “cutting edge,” despite the fact that we are not, the vast majority of us, AI researchers, and simply using a product more pervasively than some competitor no more puts anyone on the cutting edge than would switching to that peanut butter and jelly they sell in one jar.
The promise and fear of A1 is that it will usurp jobs. Fear because, well, as a people we don’t like being unemployed. Promise because of the vaunted “efficiency” that A1 will supposedly goose. Don’t count on either one. A1 will certainly change jobs, and it’s hard to see how for the better. It’s not going to result in a net decline in jobs. Quite the contrary: a whole new class of jobs has already emerged, encapsulated in the world’s dumbest neologism: prompt engineers. A1, see, is so intuitive, so ready to meet the user on his or her level, that we somehow need engineers to talk to these digital brainiacs.
Of course, there’s nothing of engineering in prompt engineering. It’s more like algorithm whispering—one studies the product as though it were a strange and skittish animal, and after a two-week course you too can issue prompts so esoteric as to defy human understanding, which contradicts the whole point of chatbots. And let’s not allow the word “prompt” to slink offstage like a supporting actor in a production of Springtime for Hitler. What tool has to be prompted? I told my wrench to remove a nut, but it turned on my TV instead. I guess I need to modify my prompt.
It’s fucking ridiculous.
All this to the side, the most egregious effect of A1 has been the prioritization of product over process. I know there are those willing to knock down everything good and just in the world simply to get their product to market fastest, but there’s no reason to foist this empty-headed haste onto the rest of us. Of all people it was a YouTuber, Drew Gooden, who delivered what I feel is the most concise takedown of A1: “I would rather make something shitty on my own than watch a computer make something good.”
I understand of course that Coca-Cola doesn’t want shitty, not for any price. But forcing creative people to bypass the entire creative process in exchange for making a selection from a range of equally banal and garish images or dull text is a guaranteed path to shitty not just for the moment but for all the kids coming up. I suspect that the CEOs and managers who are pushing this garbage down into their organizations like expanding sealant foam are incapable of such long-term, holistic thinking. That tracks—a firehose of text is well suited to the predominant creative form of the day; that is to say, content. Content—that featureless gray sludge we pour daily into templates. Devoid of meaning, devoid of communication, devoid of truth. You can claim you only use it for rough drafts, but there’s no avoiding the exhaustive need for content—just any goddamn garbage to fill the gaps. And after all, what better metaphor for the post-truth Trump era, in which words, unmoored from humanity, are mere bludgeons in crass power plays?
We are surrounded, all of us, by cultists, asking us all to drink poison. Yet you still have a choice. Drink, or fight.3 I’ll take the latter road, thank you very much. You should too, if you know what’s good for you.
1. The best thing about Kant Generator Pro was its included “excuses” module, which generated—you guessed it—excuses. A choice selection: “I stupidly plunged a leather punch through my son's leg, and when I was waiting for the repairman to get to my house there was this unspeakable fire, and then while I was scraping the strewn debris from my teeth, there was this explosion. Then I suffered a bout of severe paranoia, then there was this terrible hail storm, and then while I was scraping the bone chips from the floor, I suffered a petite mal seizure…”
2. Remember restaurants? Me neither.
3. Or drink and fight if you’re Irish of course.