What if I'm wrong?
This is Western Coffee—notes on building the creative body. Last time: AI reads. The whole series is here. Please share this email; you can sign up free below.
Like Sam Altman, I learned to write code on an Apple (mine a bit more primitive than his) in the early ’90s. Sure, I played games and, anticipating both my career and the text you’re reading now, printed out newsletters. But for a time what really engrossed me was writing programs, however simple—line after line in Applesoft BASIC. From age 8, I intuited that the computer’s most salient potential was as a creative tool. To program was to summon, from nothing, a freestanding entity, something that could operate in the world on its own. If anything, it was more creative to code than to write, because what you coded could wield a creative potency of its own.
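If you never typed one yourself: a complete Applesoft BASIC program could be as short as a couple of numbered lines. Here is a generic illustration of the kind of thing I mean, not a reconstruction of anything I actually saved:

    10 REM LOOP FOREVER, FOR ILLUSTRATION
    20 PRINT "HELLO FROM THE APPLE"
    30 GOTO 20

Even at that scale, the summoning was real: type RUN and the machine would carry on without you until you stopped it.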
If I’d liked writing programs just a little bit more than I liked writing words—or maybe if my education had emphasized different things—I’d probably be working to advance the very technology about which I’m now writing more and more words of warning: artificial intelligence. Indeed, I’ve flirted with creative technology throughout my adult life. As an editor, I collaborated closely with engineers and designers—designing and engineering a few things myself, even learning to write JavaScript, a vastly more complex and mind-bending instrument than the BASIC of my childhood.
I say all of this to set up that I’m not an obvious or inevitable skeptic of AI. In fact, I have nothing against AI in itself; I actually share the belief animating many AI-building efforts that superintelligence is part of our creative destiny—a pretty much inexorable output of human problem-solving and ingenuity.
The problem is how, and how quickly, we’re going about it: the incentives, the prevailing cultural and economic vectors, the likelihood that an almost infinitely powerful and flexible technology—in our context, without mitigating factors that seem to me unlikely at present—will amplify the worst ills of our civilization.
In philosophy, we talked about a style of argument called prima facie, or “on its face.” On its face, AI (specifically artificial general intelligence, or AGI, which, the smart bets have it, we will see sooner than we think) will be the most dangerous invention of humankind by an order of magnitude—the first cognitive competitor in our history, one whose scalability means it will almost instantly outcompete us. (People may argue against this, but I think if you’re modeling things logically—not just following the blow-by-blow of what’s the case right now—it really is true prima facie.) On its face—because it is software created by humans—AI is unpredictable, chaotic, steeply disposed to escape our control. Safeguards against this are definitionally as easy to remove as they are to install. On its face, AI has much the same destructive potential in the hands of bad actors as a nuclear bomb, with none of the logistical hurdles to obtaining it.
I’ve started saying our best hope is that the first time things go really wrong, they go wrong enough to produce a total, globally state-enforced (if that continues to mean something, and that’s a big if) consensus against the unchecked use of AGI—kind of like nuclear arms agreements—but not wrong enough to be the worst catastrophe in human history. I believe that is our best hope.
And yet. People who are closer to the technology, who understand the code and the models and the weights—while they concede, to varying degrees but almost across the board, an existential peril—say they believe it is more likely that advancing AI will make human life better.
It’s taken a while to wind up to it, but I want to engage with that claim from this particular corner of thought, from the view that our purpose as human beings is to embody creativity. What would it look like if things go well?
If it’s easy to catastrophize about AI, it’s also easy to picture more utopian outcomes—starting, for example, with some superintelligent help mastering nuclear fusion. AI is already helping to discover novel drug candidates, and within a few years we’ll have paradigm-turning models of every system in the body. (Indeed, if I could surgically exempt medical innovation from my pull-the-plugism, I’d be tempted.) If energy cheapens and proliferates fast enough, if the government and corporate custodians of the technology are well meaning and stable enough, if we quickly establish ample systems for creating and harvesting the new bounty of goods worldwide, if the medical advances are shared freely, and if the most devoted capitalists are willing to renounce the game of maximizing their ludicrously vast advantage over everyone else (lol)—then the accelerating replacement of the human workforce could be a liberation, the heralded age of abundance. Maybe we’ll greet this abundance not with the excess that is so often the pendulum swing against constraint; maybe we’ll all be content to see one another content. Maybe we won’t continue that old human pattern of replacing each conflict or anxiety with another as soon as it lifts. Maybe no one will use their iPad to invent a human virus with 100 percent lethality and a silent three-month transmissible incubation period.
Mahler, Bruckner, Sibelius, Beethoven, Rachmaninoff—they all made more daring, trippier work as they got older. I would love to hear Bruckner’s 13th Symphony, or his 25th. The thought of spending a day or a week or a month inside a simulation of his process, feeling an intelligence modeled on the arc of his work, watching melodies and counterpoint and orchestration and recapitulation fall into place, mapping his references and allusions—and influencing that process, working as a co-author with him, infusing some late-Romantic earworms of my own—that might be the most pleasurable creative experience I can imagine. A teacher is what we’re talking about. A teacher with endless understanding and all the time in the world, who knows your brain down to the synapse. Anyone who wanted to could get a dozen times smarter without even tweaking their biology.
Or, I’d love to live inside the closest possible recreation of my novel subject’s 19th-century life, to inhabit an effortless synthesis of the documents and letters and two centuries of research and the photos and records of the relevant sites, to hear the way everyone around him might have spoken, to interview them, to watch different theories of his behavior and intentions play out—to be able to write with my creativity and embodied experience, but with that depth of contextual understanding.
And speaking of bodies, what athlete doesn’t want to understand what’s going on with hers? The tech companies have known for some time now how much we crave data about ourselves, how much we want to quantify our performance and progress. Any workout is better with a good coach, because there’s always—always—something you could be doing a little better, something you’re not aware of yet. Here, too, the potential to learn, to be taught, is sumptuous. There’s a world in which AGI ushers in a paradise of the curious.
OK. That was a serious effort to summon a brighter future. We can’t rule it out. It would require more wisdom on the part of more people, including ourselves, than we’re accustomed to. The trend lines are bad. But creating is the antidote to despair—and we will be litigating these questions, all of us, before you know it. If I’d followed the software-writing path, I might be reading right now about my own CEO saying my job would be one of the first to be replaced, and I would understand how little use there was in resisting it. So then I’d be thinking about which left turn to make, about how to evolve and survive. However this turns out, it’s not a useless exercise to start molding your best-case scenario.
Kindly send me your thoughts, questions, and provocations: dmichaelowen@gmail.com.