Staring Out Across the New Mexico Desert
Gerald Murnane, in his “memoir of the turf” (Murnane 2015), describes having spent much of his life trying to devise a “reliable and profitable” system for betting on horses. “In my early years as a follower of racing”, he writes, “I tried to pick winners haphazardly, but in 1952, when I was only thirteen, I began recording the recent form of every winner, hoping to discover some recurrent pattern that would help me predict future winners.” His stated purpose was to be able to “rent a comfortable flat in Dandenong Road, Armadale; to own a small car; to join a middle-level golf club; and to put together a library of a few hundred volumes of fiction and poetry, along with a select collection of long-playing records”. But if he’s anything like me, he also had a second purpose, parallel to the first: to unearth some hidden meaning or knowledge about the world.
Unlike Gerald Murnane, I’m not interested in horse racing, but I’ve always sensed this near-irresistible allure in AI and ML systems. I think that’s why I’ve spent so many hours watching videos of AlphaZero and AlphaStar, learning about and fiddling with GPT-X and DALL-E Y, studying statistical modelling techniques, searching Google Scholar with terms like “principal component analysis” and “cluster analysis”, reading stories of people who found and exploited novel strategies, and so on. Sometimes these efforts do generate new knowledge – for example, AlphaZero has inspired new chess and go strategies – but mostly they give only the vague beginnings of an answer.
When Albert Einstein was four or five years old, his father gave him a compass. No matter how he turned the compass, the needle still pointed in the same direction. He wrote: “I can still remember – or at least believe I can remember – that this experience made a deep and lasting impression upon me. Something deeply hidden had to be behind things.” The desire is to uncover hidden patterns, to crack the code, to catch a glimpse of one or another mystery.
I personally am chiefly interested in question-answering oracles. Agents only interest me to the extent that they reveal their knowledge in the strategies they adopt. I don’t watch AlphaStar or AlphaZero because I enjoy seeing them win; I rarely follow sports. I watch them because I want to see how they win.
I don’t think I’m the only one who has this interest. Socrates, Lycurgus, Croesus and Alexander the Great all consulted the Oracle of Delphi. Fernando Pessoa, who was a brilliant poet but a muddled thinker, cast hundreds of horoscopes. But you’ll note that this is a lazy sort of curiosity. In consulting oracles, I’m not exploring the world; I’m taking a guided tour. I don’t create hypotheses and try to prove or falsify them; I wait for somebody or something to tell me the answer. In fact, I don’t even check that the answers make sense, though they do often disappoint me. (It might not be a coincidence that one of the maxims inscribed in the forecourt of the Temple of Apollo at Delphi, where the Pythia prophesied, is “surety brings ruin”.)
All this puts me in something of an awkward position. In October I’ll be leaving my job as an engineer at a cryptocurrency quant hedge fund to join Rethink Priorities’ AI Governance and Strategy research team. On the one hand, I’m curious and excited to see what the future of AI holds. But on the other hand, I think there’s a non-negligible probability that AI causes a global catastrophe of unprecedented scale in my lifetime. And these two possibilities scale together: the more capable an AI is of revealing mysteries, the more powerful it must be, and the more powerful it is, the more dangerous it is for us. I’d rather not be one of those people who stared out across the New Mexico desert in the 1940s only to die from cancer in the 1950s.