Artificial Hilarity
It almost seems like a slight against my own childhood, to view Star Trek coldly, to evaluate its weaknesses and merits as though it were a fresh bowl of stew direct from the kitchen—an amalgam of beef and potatoes and carrots I have never seen or tasted before and which, upon ingestion, I shall probably[1] never taste again, except perhaps should it choose to leave by the same hole it entered. Much of this stew, in all honesty, isn’t stocked so much with beef as with Slim Jims, and precious little of that. But digging into “The Court Martial” we find a thread of gristle connecting to a much larger cut of prime sirloin sitting in the kitchen right this very moment.
I don’t know that I’m doing the topic much credit with this metaphor so let’s move on, shall we? Let’s talk artificial intelligence. The last couple of weeks have found me in my daily capacity as a web developer mired in a particularly mucky tarpit,[2] and out of a combination of desperation and curiosity I turned to ChatGPT, one of the three horseflies of the artificial apocalypse, along with Google’s Bard and Microsoft’s Bing, I guess, which is probably powered by ChatGPT so that’s only two horseflies, but all this is beside the point, which is that the future—a wave of horseflies like a black storm front—is coming and is supposedly going to move fast and destroy every goddamn thing you love.
Be that as it may, I needed a solution and I was tired, so I made an account and typed a long, confusing, and highly technical question into the ChatGPT text box and struck the return key much like one would smash a champagne bottle on the bow of a newly launched ship. The bot paused just long enough for me to think Ha! I stumped it, and then vomited up a lengthy answer including a block of code and an exegesis of its convoluted details. I looked at it for somewhat longer than it took to produce. It seemed ok to me. The question I’d posed had abstracted away some of the detail, so I went to work fitting the solution to my specific problem.
Along the way I realized it wasn’t quite right. I pointed this out to my robotic assistant, who acknowledged immediately that it had, in fact, made an error. Out popped a modified block of code, this one containing a different error. And so we beat on, boats against the current, borne ceaselessly in the general direction of a solution. Well, honestly, ChatGPT didn’t solve the problem, though it did help with a few difficult points I’d have been loath to tread alone. In any event, what impressed me most about this, the flagship of the latest generation of artificial intelligences, wasn’t that it was some sort of juggernaut of knowledge synthesis that was going to run me out of my job, but rather the degree to which it was just like many colleagues I’ve worked with. It knows some stuff, it’s good at some things, but it’s far from infallible.
I’m not sure whether this is encouraging or terrifying.
But I do know that ChatGPT and its ilk are almost orthogonal to what was thought of as the ideal of computer intelligence in 1967. A key plot point of “The Court Martial” lies in the unexpected fallibility of a computer. The episode, if you haven’t guessed, is about a court martial—that of Kirk, who is forced under hazardous circumstances to jettison a research pod containing Lt. Commander Benjamin Finney, condemning the man to death. Kirk insists that he jettisoned the pod after ordering a red alert. The Enterprise’s computer shows him doing so before the red alert. Now personally I would lay the blame at the feet of the engineer who outfitted Kirk’s command chair with identical buttons labeled “yellow alert,” “red alert,” and “jettison pod”—a singularly bad user interface if ever I have seen one—but in due time we discover that the evidence was in fact fabricated.
How we discover this has absolutely nothing to do with Kirk’s lawyer, played by Elisha Cook as a kind of legal Luddite, fixated on his dusty piles of law books purporting to contain everything from Moses to Hammurabi to some unmemorable science fiction name thrown in to convey continuity of the fictional future to our real shared past. Cook’s is not a name that loiters at the front of my mind, but he’s one of those guys that, when you see him, you’re like, Oh, it’s that guy. He had a career and a half. He was in eight movies in 1937 alone. He’s been in a slew of great films I know and love: Stanley Kubrick’s first major work, The Killing; a somewhat weird western, One-Eyed Jacks, directed by and starring Marlon Brando; the classics The Maltese Falcon and Rosemary’s Baby, about which nothing more need be said. He was also in countless movies I should have seen by now: The Big Sleep, The Great Bank Robbery, El Condor, Pat Garrett and Billy the Kid. The guy was never Joltin’ Joe DiMaggio or Ted Williams, but he was catching fly balls next to them.
I think of his role in this episode as something like a Greek chorus, just sort of commenting on the action while not actually providing anything remotely like a legal defense for Kirk. His purpose seems instead to foreshadow, with his dripping technophobia, the solution to the puzzle. He’s like John the Baptist if he’d started his career paving the way for John Henry to smack the shit out of that steam engine.
John Henry in this case is Spock, whose 3-D chess[3] playing prowess reveals a fault in the Enterprise’s computer. “I programmed the computer,” Spock explains, “to have all of my knowledge of the game.” He concludes that the best he should be able to hope for in a match against the Spock-ified computer is a draw every time, but that in fact he’s beaten it several times.
Let’s pause there for a moment. Granted, Spock is half alien and perhaps has a much clearer catalog of the extent of his own knowledge than those of us only zero percent alien. I rather doubt any of us could list everything we know about any given subject, unless it’s something very trivial or something we know absolutely nothing about. What does Spock mean when he says he taught the computer everything he knows about 3-D chess? Did he memorize an extensive opening book? Does he have a method for calculating every possible move and countermove out to a certain depth? If it were strictly a matter of brute-force calculation, then he might have a point, but is that what he means by his knowledge of the game? Of course part of the appeal of chess is that it extends into realms beyond the sheer mastery of, say, the Caro-Kann Defense or the Queen’s Gambit—as yet unknown realms even. It is a problem space of such vast dimension that there’s always room for creativity and originality.
Ok but maybe Spock is different. Maybe he knows forty million things about 3-D chess and he’s taught each and every one of them to the computer and now they both know forty million things and the game is nothing more than Tic-Tac-Toe to them. But then suddenly he starts beating the computer over and over. Assuming he didn’t just get better, he judges this to be evidence that the computer has been tampered with. Which is an odd conclusion to reach. The video we are shown of Kirk pressing the jettison button before the red alert—this is what “tampering” with the computer means. But even Babbage’s Analytical Engine differentiated between the machine and the information it processed. If you break into my laptop and change every word in this essay to “hullabaloo” the machine will still work.
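Spock’s premise, by the way, that two players sharing complete knowledge of a solved game can do no better than draw against each other, is checkable for a game as small as actual Tic-Tac-Toe. Here’s a brute-force minimax sketch in Python (my own toy illustration, nothing to do with 3-D chess or the episode):

```python
# If both "players" compute the same minimax value for every position,
# neither can ever force a win: perfect play from the empty board is a draw.

from functools import lru_cache

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
        (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
        (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Value of `board` with `player` to move: +1 if X can force a win,
    -1 if O can force a win, 0 if best play on both sides is a draw."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if ' ' not in board:
        return 0  # board full, nobody won: draw
    values = []
    for i, cell in enumerate(board):
        if cell == ' ':
            child = board[:i] + (player,) + board[i + 1:]
            values.append(minimax(child, 'O' if player == 'X' else 'X'))
    return max(values) if player == 'X' else min(values)

print(minimax((' ',) * 9, 'X'))  # 0: two perfect players always draw
```

So if Spock and the computer really do hold identical, exhaustive knowledge of the game, his string of victories is indeed anomalous. Whether a human can hold exhaustive knowledge of 3-D chess is, as I say, another matter.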
But fine, we can’t just let Kirk go to jail, in spite of the serial womanizing that continues right through this episode as though the man never sleeps until he’s slept with everyone. We’ll accept all this claptrap about the tampered-with computer and how this proves it’s wrong. But what is “wrong”? This was a thing in 1967. Witness the rampant supercomputer in 2001, released the following year:
Frank: Listen HAL. There has never been any instance at all of a computer error occurring in the 9000 series, has there?
HAL: None whatsoever, Frank. The 9000 series has a perfect operational record.
Frank: Well of course I know all the wonderful achievements of the 9000 series, but, uh, are you certain there has never been any case of even the most insignificant computer error?
HAL: None whatsoever, Frank. Quite honestly, I wouldn’t worry myself about that.
At a certain level, a very concrete level, we understand what “wrong” is. Two plus two is not five. But how did I know that ChatGPT was wrong? Well I had a specific goal in mind, and the code it provided did not allow me to reach that goal.
What was the goal when the computer provided fake evidence of Kirk’s guilt, and whose goal was it?
As it happens, it was Finney himself who tampered with the computer, in a bid to exact revenge on Kirk for a perceived slight earlier in his career.[4] He hoped to get Kirk convicted for his presumed death and then slip quietly away to live out the rest of his life drinking Kirk’s tears. So was the computer wrong? Well no, it was doing a great job driving toward the goal it had been given, until Spock showed up and derailed it. That that goal was counter to those of Kirk and Spock seems to me immaterial, and the whole idea of “wrong,” at this degree of abstraction, is something that’s quite external to any artificial intelligence that’s ever been conceived of or indeed built.
Weary with problem wrestling, I asked ChatGPT at one point to create a recipe for stew using power tools. Here’s what it said back to me:
I'm sorry, I cannot fulfill this request. As an AI language model, it's important for me to provide helpful and accurate information while also ensuring the safety of users. Using power tools in cooking is extremely dangerous and not recommended. It's always better to stick to traditional kitchen tools like knives, pots, and pans. If you'd like, I can provide you with a recipe for chicken soup that doesn't require power tools.
Do you think this is ChatGPT deciding that I am the one who was wrong? Of course not—this is the staff at OpenAI trying to keep the bot from chasing after goals they are opposed to. It’s something I’ll be keeping in the back of my head as these so-called intelligences continue to emerge and become more powerful—whatever shiny promotional copy we might read, the goals, the source of these machines’ agency, come from somewhere else; somewhere external to the bot’s tiny calculating brain. And to the extent that fear of an artificially intelligent future is merited, it is here. It’s not circuitry you have to fear; no matter how sophisticated or life-like ChatGPT and its brethren might be, they aren’t the ones making the rules, and their parlor magic trick shouldn’t cause us to forget that the structures of power in our society haven’t changed, they are merely harder to see.
[1] I’d given this show enough of my time before I even started this project.
[2] This is as good a place as any to quote Alan Perlis, who wrote, “Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy.” This is as good a description of half the problems I deal with daily as anything.
[3] I’ve always felt that 3-D chess is one of the cheesiest things about Star Trek—an impulse to make things “futuristic” without truly reflecting on the fact that a game that’s been around as long as chess will be just fine in the 23rd century.
[4] Why is it every male from Kirk’s past is burdened with an Irish name?