The Biggest Game in Town

2022-06-18 • 19 min read • comment via LW

The end result is our present age’s maladaptation and creative incapacity. We must, at this point, either accept the death of our civilization or else opt for artificial adaptation, since natural, instinctive adaptation has failed. (Pessoa 2001)

– Álvaro de Campos

I started reading about Game B after a friend asked what I thought about it. I’ve now spent about four to six hours reading about it and still do not know whether it’s original, interesting or correct. Maybe that alone says something, though I should point out that I’ve mostly read and listened to publicly available texts and recordings – I’m not privy to private discussions within the Game B community. I admire Game B theorists’ desire to create a better world and to uphold what they might call a healthy “information ecology”. That’s not nothing and the rest of this post should be read in light of that. Hopefully my vague ruminations will bring someone else the clarity I didn’t quite get.

Summary

I describe Game B, a worldview and community that aims to forge a new and better kind of society. It calls the status quo Game A and what comes after Game B. Game A is the activity we’ve been engaged in at least since the dawn of civilisation, a Molochian competition over resources. Game B is a new equilibrium, a new kind of society that’s not plagued by collective action problems.

While I agree that collective action problems (broadly construed) are crucial in any model of catastrophic risk, I think that

  1. civilisations like our current one are not inherently self-terminating[1] (75% confidence);
  2. there are already many resources allocated to solving collective action problems (85% confidence); and
  3. Game B is unnecessarily vague (90% confidence) and suffers from a lack of tangible feedback loops (85% confidence).

Game B: What It Is

The project of Game B was conceived about a decade ago by one Jim Rutt, a cheery businessman and later chairman of the Santa Fe Institute, and a group of collaborators.[2] They’d originally wanted to start a new political party – the Emancipation Party – but soon pivoted to the project that’s now known as Game B, partly because they realised that no Gen Yers or Gen Zers would be interested in joining something so dinosaurian as a new political party.[2:1] Nowadays the project is connected with the Daniel Schmachtenberger/Rebel Wisdom cluster of “sensemakers” and with the intellectual dark web, though it has a vibe of its own.[3]

The philosophy of Game B is basically what you’d get if you designed a worldview around the tragedy of the commons.

But let’s begin at the beginning. Game A is what we’ve been doing (or in Game B terms, playing) since the Neolithic Revolution, or maybe since we started using tools and languages.[4] It’s characterised by intense competition leading to increased extraction of natural resources and accelerating technological progress.[5] Together, these generate significant risks of human extinction[6] … Like any game, Game A has a set of rules, not only about what answers to give, but even which questions to ask; you can think of it as, in Gramscian terms, a cultural hegemony or, in capitalist terms, the sense that “there is no alternative”.[7]

Readers of this blog may have already read Scott Alexander’s classic Meditations on Moloch; if not, let me gently nudge you to do so now. The post describes a number of “multipolar traps” (henceforth “Molochian traps”), situations where we all make locally optimal decisions that together have globally very bad outcomes, and where it’s difficult for an individual to coordinate with others in order to break the status quo.
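
To make the structure concrete, here is a minimal two-player sketch in Python – the payoff numbers are made up for illustration, not taken from Alexander’s post. Defecting is each player’s best response no matter what the other does, yet mutual defection leaves both worse off than mutual cooperation would have:

```python
# A toy multipolar trap: a symmetric two-player game with made-up payoffs.
# Higher numbers are better for the player receiving them.
PAYOFFS = {
    # (my move, their move) -> my payoff
    ("cooperate", "cooperate"): 3,  # everyone shows restraint: good for all
    ("cooperate", "defect"):    0,  # I show restraint alone: I get outcompeted
    ("defect",    "cooperate"): 4,  # I defect on a cooperator: locally optimal
    ("defect",    "defect"):    1,  # everyone defects: globally bad
}

def best_response(their_move: str) -> str:
    """The move that maximises my payoff, holding the other player fixed."""
    return max(("cooperate", "defect"), key=lambda my: PAYOFFS[my, their_move])

# Defecting is the dominant strategy: it's my best move either way...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...so the stable outcome is mutual defection (payoff 1 each), even though
# mutual cooperation (payoff 3 each) would be better for everybody.
print(PAYOFFS["defect", "defect"], "<", PAYOFFS["cooperate", "cooperate"])
```

Scale that structure up to many players and many rounds and you get the traps Alexander describes: each actor keeps making the locally optimal move, and no one can unilaterally escape.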

Game A is the land where Moloch rules. It’s designed to solve a set of problems for large communities with scarce resources.[8] The problems are: resource extraction, defection (in the game-theoretic sense) and intergroup competition. The solutions include: hierarchies of power, free markets, religion, law, international treaties and professional armies.

Game B is what comes after. Game B has a new set of rules, new questions and answers: it’s a new paradigm. Game B is still designed to solve problems for large communities with scarce resources, but its solutions avoid Molochian traps. So instead of hierarchies of power, for example, you have decentralisation and new forms of democracy, where (the idea is) decision-makers are those whom a decision will actually affect.[9]

How will Game B come to be? The idea is that we are in a time of flux, and will need to find a new equilibrium; Game B is meant to create an “attractor” which will pull the world into a new, better equilibrium … A group of people will get together, create a “Proto-B Community”, grow and eventually become its own little Game B civilisation, so successful that all others will be drawn to join or emulate it.[10] Because Game A solutions fail to solve many collective action problems, Game A civilisations don’t reach their full potential; Game B civilisations will solve most collective action problems and therefore outcompete Game A civilisations.[11] (I don’t think Game B theorists have any delusions about the ambition and difficulty of their project.)

Again – how will Game B come to be, in practice? No one knows, really. Quoting the Game B Wiki: “Through analogy, Game B players gather together to feel their way up each hill with their toes, sensing for the loamy untrodden ground beneath them, slowly inching forward, listening for signals from one another, adjusting at each step to orient themselves toward the flag that is barely visible through the gloaming.” Somewhat similarly, Jim Rutt talks about what he calls “the adjacent possible”, which to me just sounds like gradualism. In general, there seems to be a focus on adopting a fruitful stance or attitude over pursuing some specific agenda or project, though apparently there’s some disagreement about that within the community.[12]

So much for what it is. Now for what I make of it …

Collective Action Problems Are Important

Game B puts a lot of emphasis on collective action problems, competition, Molochian traps and so on.[11:1] I agree with Game B theorists that these are very important.

Yudkowsky (2017) divides problems causing civilisations to “get stuck” into three categories:

  1. Cases where the decision lies in the hands of people who would gain little personally, or lose out personally, if they did what was necessary to help someone else;
  2. Cases where decision-makers can’t reliably learn the information they need to make decisions, even though someone else has that information; and
  3. Systems that are broken in multiple places so that no one actor can make them better, even though, in principle, some magically coordinated action could move to a new stable state.

Game B talks a lot about these, though in my opinion it never states them as clearly as Yudkowsky does. I think they’re really important failure modes, though they only constitute some of the drivers of catastrophic risk. My feeling is that fully solving these problems is neither necessary nor sufficient for attaining a radically better society, but that they’re a key part of any good model of catastrophic risk.[13]

Let’s take the risk of engineered pandemics as an example. Suppose a pandemic that kills >50% of all humans is in no one’s interest. But if they think the risk involved is low enough, it’s plausibly in the interest of some epidemiologists to do gain-of-function research. Here we have an example of all three of Yudkowsky’s categories:

  1. Epidemiologists and others make decisions that impact every human. In case of failure, the epidemiologist stands to lose a lot, but the probability of failure is so low that the expected downsides are really only significant when they affect many people.[14] (The sketch after this list runs the numbers.) Similarly, biotech corporations are incentivised to carry out research in order to reap future profit, even when such technology may increase catastrophic risk, harming everyone (including biotech corporations).
  2. Some decision-makers don’t have the information needed to make good judgments of gain-of-function research risks or downplay those risks since their careers depend on it.
  3. If one epidemiologist stops doing gain-of-function research, their career may suffer for it, and someone else may take the funding they would’ve gotten and carry out the same research anyway. Similarly, if one lab decides to be transparent about past accidents, their reputation will suffer (relative to other labs) and they’d have a harder time getting funding, whereas if all labs did so that might not be the case.
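
Here is a quick sketch of that first asymmetry, using the made-up numbers from footnote 14 (1 utility point of private gain, 1,000 points lost per death, a 0.00001% chance of killing half of humanity, researcher included):

```python
# Made-up numbers from footnote 14: why the private and global expected
# values of the same risky research diverge so sharply.
gain = 1                 # utility the researcher gets from doing the work
death_cost = 1_000       # utility lost per death
p_catastrophe = 1e-7     # 0.00001%, i.e. one in ten million
population = 8_000_000_000

# Private view: the researcher risks only their own life.
ev_private = gain - p_catastrophe * death_cost
print(f"EV for the researcher: {ev_private:.4f}")     # 0.9999

# Global view: the same gamble, counting every death it would cause
# (half of humanity dies in the catastrophe scenario).
ev_global = gain - p_catastrophe * death_cost * 0.5 * population
print(f"EV for humanity:      {ev_global:,.0f}")      # -399,999
```

The work looks clearly worth doing from the inside and clearly not worth doing from the outside – that’s category 1 in a nutshell.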

These considerations are important but – and I think Game B theorists agree with this – they are not everything.

First, even if we were to solve all these problems as well as we could hope for, there would still be significant catastrophic risks. For example, it could be that, given the information available to all of us, we are fundamentally mistaken about how risky some activity is. Or, even in a situation where we can eliminate risks for humans thanks to everyone taking part in the relevant decisions, we do things that are risky for future generations or non-human animals. Or everyone is weakly incentivised to do the globally optimal thing, but someone just has a fundamentally depraved value system and is able to unilaterally do a lot of damage in spite of those incentives. I’m not sure “meta-protocols for hyper-collaboration” and “win-win structures” guard against those failure modes, which are real.

Second, even if we don’t fully solve these problems, we can still reduce the catastrophic risks involved. For example, we could generate technology and knowledge that allows us to guard against or mitigate pandemics. Or we could implement or strengthen partial solutions using methods of cooperation that we’re already familiar with; Ord (2020), for example, recommends increasing the budget of the Biological Weapons Convention group, giving the WHO more funding and power and introducing regulation to ensure DNA synthesis is screened for pathogens, among other things.

The Current System Can Probably Go on for a Long Time

The philosophy of Game B holds that we live in a really fragile world and that the current system – Game A – will therefore self-terminate.[15][11:2] But I don’t see any reason why our current system – capitalism and all[16] – can’t in theory go on for 200+ years; specifically, I think it’s 45% likely that >50% of humans live in places with free markets and privately owned businesses (say, at least as free as those in China today) in the year 2222 (to take one example). I do think existential risk is unacceptably high[17] and that collective action problems contribute to this risk, but I don’t see quite the same risks that Game B theorists do, and I think we can reduce risks within the current (Game A) system.

I guess an important thing to remember here is that the current system actually gives us a lot of solutions for dealing with Molochian traps. That’s how we’ve made it this far. It’s great if we can improve the ways we cooperate, but Game B gives me the sense that our current ways of cooperating are fundamentally broken and cannot be fixed without a paradigm shift, which I think is untrue. (For more on this, see Zvi Mowshowitz’s response to Meditations on Moloch.)

Two areas of existential risk often brought up in Game B discourse are natural resource depletion and climate change, so let me say something about those in particular.[18]

Natural Resource Depletion Probably Won’t Destroy Our Long-Term Potential

While modern technology does come with significant existential risk, resource extraction probably doesn’t? I think humanity would recover from all but the most extreme civilisational collapse scenarios, so for natural resource depletion to be an existential risk, it would have to be really, really bad.

So how do things look? We’re probably not going to run out of phosphorus (which is crucial in food production) any time soon. It looks like we’ve stopped increasing the amount of land we use for growing things; our food production gains are now purely efficiency gains. Water scarcity is perhaps more serious. As the number of humans grows and some water sources get depleted, we get less freshwater per capita (Lal 2015), though food production continues to increase even so (Dinar, Tieu, and Huynh 2019). As Lal (2015) notes, if all freshwater supplies were divided equally among Earth’s people, the per capita supply would be 3x what we need; scarcity is and will likely remain geographically heterogeneous, as dry and densely populated regions suffer most, but generally “Earth’s water resources are adequate to meet the present and future demands”. So natural resource scarcity seems like a problem, but I have a hard time even imagining how it would cause total human extinction or anything close to it in the next century, say.

Robert Wiblin has also looked into natural resource depletion and seems bearish, writing that “[s]omeone who confidently tells you almost all natural resources will become much more or less abundant should be treated with suspicion” (but see comments for some additional discussion). Ord (2020, 167) puts the existential risk due to natural resource scarcity in the next century at <0.1%[19], though I think that’s pessimistic.

Game B texts discuss resource depletion because it’s a paradigmatic Molochian trap: extraction is good for those who extract and use the resources, but bad for all of us if we run out, and hard to slow down (because you get outcompeted). I don’t think it’s as large a problem as Game B theorists seem to think, so this seems like an area where collective action problems are not (yet) catastrophic.

Climate Change Probably Won’t Destroy Our Long-Term Potential

Benjamin Hilton recently published a well-researched article on climate change. He writes:

The Intergovernmental Panel on Climate Change (IPCC) Sixth Assessment Report is, to our knowledge, the most authoritative and comprehensive source on climate change. The report is clear: climate change will be hugely destructive. We’ll see floods, famines, fires, and droughts – and the world’s poorest people will be affected the most. But even when we try to account for unknown unknowns, nothing in the IPCC’s report suggests that civilisation will be destroyed.

In other words: the direct consequences of climate change will be very bad, but our current system can very likely weather them. Ord (2020, 167) agrees, putting the existential risk due to climate change in the next century at 0.1%.

Tons of People Are Already Trying to Solve Collective Action Problems

For every collective action problem endangering a civilisation, there are a hundred analogous collective action problems happening within that civilisation.

Example: There’s a large firm with many divisions. The head of each division wants to increase their budget. Because the heads of all the divisions compete for the same pool of money, they start using underhanded tactics (sucking up to their boss, spreading rumours about their competitors, fibbing to make themselves look good). Whoever doesn’t, falls behind. As a result, the whole firm’s norms deteriorate … eventually the firm starts floundering, harming everybody, etc.

That’s a Molochian trap. If a firm like that figures out how to coordinate better, it’ll have an advantage over its competitors. The same thing applies to any large organisation.

In other words, there are millions of people out there trying to think up ways of solving collective action problems and mostly failing, so our default assumption should be that it’s really, really hard to improve things on the margin. If that’s the case, then we’ll either never fully solve these problems (because they’re beyond our ability to solve) or we’ll eventually solve them by default, because we’re already strongly incentivised to do so. Both outcomes seem to make work on Game B less valuable.

Objection: These people are not really trying to coordinate. Neither the CEO nor the board is selflessly pursuing the company’s interests; they are pursuing their own interests, which only partly align with the company’s. A concerted effort could improve things much faster. Reply: I feel like, even if they’re only partially incentivised to improve coordination, the sheer number of people doing it, and the selection pressure on companies in the marketplace, mean we should be seeing about as much progress as we can hope to see. However, I’m not sure about this.

Doubts about Epistemics

I have two concerns that are less about the philosophy of Game B and more about the project of Game B as it’s currently implemented. One, Game B seems unnecessarily vague. Two, Game B lacks tangible feedback loops. These things make it less legible, and less likely to correct course should it need to do so.

Game B Is Vague

Game B is futurology, and futurology will always be speculative. But it doesn’t have to be vague. I had a hard time engaging with Game B arguments, I think as a result of this vagueness. (And here I should note that I haven’t dug that deep into the Game B literature, so it’s possible more specific claims appear somewhere and I just haven’t come across them yet.)

First, I’d love to see Game B theorists try to quantify things. For example, the Game B Wiki happily lists a bunch of characteristics that a Game B civilisation will have (e.g. it’s “the flag on the hill for an omni-win civilization that maximizes human flourishing” and “the environment that maximizes collective intelligence, collaboration, and increasing omni-consideration”); it also lists a bunch of design criteria (e.g. it should be characterised by “win-win structures” not “win-lose structures” and “post-growth & evolving homeostasis” not “growth”, etc.). Of course reading this leaves one thinking, “Sure, win-win sounds better than win-lose, but how do we actually get there?” but let’s set that thought aside for now. These characteristics are still not specific enough for me to really reason about. It would be better, in my view, if it said things like “A Game B civilisation is one where absolute inflation-adjusted GDP growth never exceeds 5% in a decade, life expectancy is >100 years and the PPP-adjusted poverty rate is below 0.01%, among other things.” Of course these measures don’t need to capture all aspects of “post-growth” and “flourishing” in order to be useful; they are useful because it’s easier to disagree with them, because they’re specific.

Second, I’d love to see Game B theorists come up with counterarguments to their ideas, and then rebut them. I haven’t seen anything like this in my admittedly shallow dive into Game B writings. I’d contrast that with Holden Karnofsky’s blog post series “All Possible Views About Humanity’s Future Are Wild”, where he immediately brings up two potential objections after summarising his own view. That gives me confidence that he’s considered his argument critically and a better understanding of what his view really is.[20]

Third, I’d love to see Game B theorists link claims about the future to predictions. For example, the Wiki says that “we will either get the emergence up into a higher degree of order or an entropic drop down into a lower degree of order”. That’s vague. What do Game B theorists think is the probability of a 20% reduction in world population within a five-year period happening in the next 100 years? What do they think is the probability that >50% of habitable area is covered by forest in 2100 (currently it’s 38%, according to Ritchie and Roser (2021))? They would probably be able to operationalise these questions better than I have here, but either way their answers would be really helpful in understanding their model.

Game B Suffers from a Lack of Tangible Feedback Loops

How and when will Game B theorists find out that they are on the wrong track (if they are)? I feel like they could in theory go on writing the same things for decades without ever adjusting course, even if they’re wrong. That doesn’t seem ideal.

The problem is similar to discussions about a “meta-trap” in effective altruism, i.e. spending all one’s resources on growing the movement and no resources on actually doing stuff. The Game B Wiki has a projects category but it (1) is not very impressive and (2) includes projects started by people outside the Game B community. (Again I should note that my shallow search probably hasn’t found every relevant project.) I would like to see the Game B community anchor its theory in practical projects; if it did, it would be easier to interpret and evaluate the theory. For example, Game B theorists talk about “self-governance”, but people like Audrey Tang and Glen Weyl combine ideas about democracy with applied projects that aim to improve democratic processes.

If the Game B community launched a bunch of projects and saw all of them fail, that would at minimum be an indication that their theories need to be revised. If they do so and the projects succeed, that should both make them more confident in their theory and also proud at having achieved something practically useful! As an added bonus, an impressive track record would contribute to the growth of the Game B community.

Objection: We don’t really know what these kinds of projects should look like. What we need is more research. That’s why there’s more talk about which stances and principles one should adopt. For example, there is this list of suggested principles, like truth-seeking and self-actualisation. Reply: Fine, but you still need some way of figuring out if you’re on the right track (and correcting course if you’re not). Otherwise you risk getting lost in the woods of ideas. I haven’t seen anything like that in the Game B project yet, though of course, as I’ve mentioned, I haven’t spent that much time looking.

Conclusion

Game B seems like it might or might not be interesting, might or might not be original and might or might not be a more or less accurate model of the world. It’s hard to evaluate, which is one of its problems. It was especially hard to evaluate for me because the Game B community is my outgroup, with a vocabulary that sounds alien to me, with assumptions that I’m not aware of and with ways of thinking that I don’t grok. Overall, though, I can say that I agree with some aspects of the Game B model (e.g. collective action problems are important) and disagree with others (e.g. we probably don’t need a total paradigm shift in order to reduce catastrophic risk to a sustainable level).

References

Dinar, Ariel, Amanda Tieu, and Helen Huynh. 2019. “Water Scarcity Impacts on Global Food Production.” Global Food Security 23 (December): 212–26. https://doi.org/10.1016/j.gfs.2019.07.007
Lal, Rattan. 2015. “World Water Resources and Achieving Water Security.” Agronomy Journal 107 (4): 1526–32. https://doi.org/10.2134/agronj15.0045
Ord, Toby. 2020. The Precipice: Existential Risk and the Future of Humanity. Hachette Books.
Pessoa, Fernando. 2001. The Selected Prose of Fernando Pessoa. Translated by Richard Zenith. Grove Press.
Ritchie, Hannah, and Max Roser. 2021. “Forests and Deforestation.” Our World in Data.
Yudkowsky, Eliezer. 2017. Inadequate Equilibria: Where and How Civilizations Get Stuck. Machine Intelligence Research Institute.

Footnotes

  1. Obviously nothing lasts forever. When I say they aren’t inherently self-terminating, I mean roughly that they can plausibly last for >1,000 years. I don’t mean that they can outlast the heat death of the universe. ↩︎

  2. See the first 3 minutes of The Story of Game B for the early history of Game B; and at 4:05: “Hmm, we did pretty good with baby boomers – good enough that, from an early test marketing perspective, it would say ‘Proceed.’ With Gen Xers we did about half as well as we did with boomers, which from our marketing experience would’ve said ‘Probably improvable by fine-tuning over a period of time to make it good enough to launch.’ … Millennials – there weren’t Zs yet – were a complete washout: we did not get a single millennial to join. Not one! And we went back and talked to some millennials and it was quite uniform that the idea of a political party was anathema. It was as if you’d said catshit, right? Emancipation Party – catshit!” ↩︎ ↩︎

  3. Jim Rutt refers to the work of Daniel Schmachtenberger and others as “the San Diego interpretation” of Game B (The Story of Game B, 15:05) and emphasises that it’s different from other variants, including the original idea. My description of Game B here should be read in light of this; I refer to Game B as I’ve understood it (from reading and hearing a variety of sources) but I’m not sure whether what I describe is the definitive or even majority view.

    As for the intellectual dark web, Bret Weinstein was part of the meetings that spawned Game B in the early 2010s. His brother seems vaguely involved.

    I was previously mostly unfamiliar with all these groups. ↩︎

  4. “I wouldn’t describe the agricultural revolution as the beginning of Game A … In a way you could say it was a major step function in the beginning of the anthropocene … But I would take it back to stone tools and fire …” (Daniel Schmachtenberger, Game B Dialogos, 48:30). ↩︎

  5. From “An Initiation to Game B” starting at 7:50: “Here we stand before the cradle of civilisation at the birth of Game A. As competition increased, the tribe began to lose its instinctual ability to sustain the principles of wholeness and regeneration … Game A is the game of growth, rivalry, control and accumulation … The race to accumulate and control accelerated faster and faster and competition began to dominate cooperation. Though necessary to advance the human tribe, Game A would prove to be fragile … Because Game A’s technology takes from nature without bothering to give back, it is fundamentally exploitative. The strategy of rivalry and accumulation led to the exponential development of complicated technologies.” ↩︎

  6. From “An Initiation to Game B” starting at 9:43: “Since these [advanced] technologies require extraction from the geosphere, eventually they become lethal to the biosphere. As the game continues to advance, our global ecosystems are collapsing – all the while our technologies are gaining the potential for destruction and depletion on a vast scale … When the principles of wholeness and regeneration are abandoned there is only one fate: self-termination.” ↩︎

  7. From “An Initiation to Game B” starting at 11:05: “The parasite you carry with you is an object of control and blind adherence to the rules of Game A. It was set upon you in your earliest years. The parasite makes sure that you conform to Game A, by conditioning you to view yourself as separate from nature and from other people, so that you forget the principle of wholeness and purpose as a steward of the earth.” ↩︎

  8. “Game A is primarily characterized by scarcity and thus rivalrous or win-lose dynamics: How do we increase our resources production? How do we divide up the scarce resources? How do we compete with other groups of people?” (From the Game B Wiki.) ↩︎

  9. “[Game B should] be non-hierarchical – that’d be a fundamental value …” (Jim Rutt, The Story of Game B, 7:45). The rest (decentralisation and solving the principal-agent problem) is kind of me inferring what Game B theorists would probably say. ↩︎

  10. From “An Initiation to Game B” starting at 13:40: “Restoring this co-creative engine to the human tribe will once again allow cooperation to out-compete competition – leading to an exodus out of Game A.” ↩︎

  11. “We need to rigorously align the incentive of every agent … with the well-being of every other agent and with the commons, with no gap. And to the degree that there’s a gap, meaning that two agents have misaligned interests and you have direct competition, which can express itself militarily or through corporate competition … or to the degree that someone’s interests and the well-being of the commons are misaligned, then you get externality – as you run exponentially more energy through an incentive system that internalises harm, or that causes direct harm, exponential externality is existential.” (Daniel Schmachtenberger, Game B (and Game A) in a Minute.) ↩︎ ↩︎ ↩︎

  12. From “An Initiation to Game B” starting at 11:45, and explaining how to make progress towards Game B: “By starting with what you did at the outset of this journey. Asking more questions than trying to provide answers will open your mind. By first finding what is yours to do then building meaningful relationships with those whom you need to work with. This will open your heart. And finally by acting and creating from a place of integrity within your tribe. This will train your body to play Game B.”

    Jim Rutt says that the disagreement between those who focused on changing as an individual versus those who wanted to build new and better institutions was “the main fission that caused the community to kind of set upon itself in a not-too-pleasant way” (Jim Rutt, The Story of Game B, 26:50). ↩︎

  13. AI alignment might be an exception here? You can think of the alignment problem as a principal-agent problem. Maybe solving this coordination problem as well as the ones between AI labs, governments and other actors is enough to create glorious, aligned artificial superintelligence! ↩︎

  14. Say the epidemiologist gets 1 utility point from the knowledge, prestige, promotion or whatever they get out of doing gain-of-function research and -1,000 utility points from dying. Say the work has a 0.00001% probability of killing 50% of humanity (including the epidemiologist). The expected value of doing the research when you consider only the epidemiologist’s life is 1 - 0.0000001 ✕ 1,000 = 0.9999. But if you consider the whole of humanity, the expected value is 1 - 0.0000001 ✕ 1,000 ✕ 0.5 ✕ 8,000,000,000 = -399,999. ↩︎

  15. Here is Daniel Schmachtenberger, for example: “If we are gaining the power of gods, then without the love and wisdom of gods, we self-destruct.” I guess he was unwittingly channeling E. O. Wilson, who expressed basically the same sentiment when he wrote: “The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology.”

    Jim Rutt mentions that existential risk was not originally a part of Game B thought: “[In the early days of Game B] we thought that the political and economic systems could collapse from their own internal contradictions, but we were not yet focused on bigger questions [like] ‘Is our social operating system literally heading for suicide?’ I would say those conversations became part of the discussion in the Game B world, though [they] probably weren’t dominant. The dominant sense was that the world was not good …” (The Story of Game B, 22:05). ↩︎

  16. From “An Initiation to Game B” starting at 11:35: “The [Game A hegemony] knows only one God: the modern, growth-based economy.” Though Game B theorists would probably agree that socialism, too, is a Game A solution – as are all political and economic systems implemented at scale so far. ↩︎

  17. I follow Ord (2020) here in using “existential risk” to mean a destruction of our long-term potential. That could be either via actual extinction or by getting locked into a stable suboptimal state. ↩︎

  18. Specifically, I’m considering the next century or two, not e.g. thousands or millions of years into the future. In general, I’m somewhat confused about which timelines Game B is meant to apply to. If a Game B civilisation never materialises, can we expect a civilisational collapse in the next 100 years? 1,000 years? 10,000 years? I have no idea what Game B theorists would say, though Jim Rutt does mention as a condition for a Game B civilisation that it should be able to “exist for centuries at least” (The Story of Game B, 7:45) (which seems rather unambitious given our current centuries-old civilisations). ↩︎

  19. To be precise, he puts the existential risk due to all non-nuclear and non-climate change environmental damage (including cascading biodiversity loss) at 0.1%. ↩︎

  20. Let me caveat that by pointing out that Karnofsky is an effective altruist and therefore in my ingroup, whereas Game B theorists are my outgroup; it’s possible that I treat Karnofsky differently for that reason. In a weird way, it feels like Game B is a vague echo of effective altruism, and sensemaking a vague echo of rationality, though the comparisons are tenuous and break down quickly. ↩︎