Could Game Theory Be Used To Prevent Human Extinction?

Game theory is a powerful tool for understanding strategic behaviour in economics, business, and politics. But some experts say its true power may lie in its ability to help us navigate a perilous future.

Illustration: Jim Cooke

Still, this idea remains controversial. There are many debates over whether game theory could really help us prevent an existential disaster, whether that’s a nuclear war, a malicious AI — or even an alien invasion.

A Theory Of Social Situations

Before we get too far into the discussion, it’s worth quickly reviewing a few fundamental game theory concepts. If you’re already familiar, just skip ahead to the next section.

Game theory helps decision makers analyse and choose strategies that constitute the best reply to the actions, or potential actions, of others. For this reason it has been called the theory of social situations, though the “other player” need not be a single individual. It could be a group of individuals, a corporation, a country, or even a natural phenomenon.

Utilitarians are particularly fond of game theory because it’s concerned with the way rational, self-interested agents interact to bring about the most desirable, or in some cases the least bad, outcomes. In any game theoretic scenario, a decision maker must identify the agents or phenomena they’re concerned with, and then assign a utility function to the outcomes. A utility function assigns a value to each outcome such that outcomes with higher utilities are always preferred to outcomes with lower utilities. As self-interested agents, we’re constantly trying to “maximise” our own “utility”.
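
To make that concrete, here’s a minimal sketch in Python (the outcomes and values are invented for illustration) of what a utility function does: it scores outcomes so that higher-scoring ones are always preferred.

```python
# A toy utility function: it maps outcomes to numbers so that
# outcomes with higher utility are always preferred (invented values).
utility = {
    "mutual cooperation": 3,
    "mutual defection": 1,
    "being exploited": 0,
    "exploiting the other": 5,
}

def preferred(outcome_a: str, outcome_b: str) -> str:
    """Return whichever outcome a utility-maximising agent prefers."""
    return max(outcome_a, outcome_b, key=utility.get)

print(preferred("mutual cooperation", "mutual defection"))  # mutual cooperation
```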

Dramatic — and existential — game theoretic scenarios were featured extensively in Christopher Nolan’s The Dark Knight.

Game theory was designed to deal with the interdependence of decision makers. It deals with situations where what you do depends on what I do, and vice versa. The classic example, of course, is the Prisoner’s Dilemma, a problem in which two prisoners have to choose between admitting their shared crime or keeping silent, with different sentences contingent upon what each of them has to say. A prisoner will get off scot free if they rat on a partner who remains silent (known as defecting), with the silent partner getting the maximum sentence. If they both rat on each other, each gets a medium sentence. But if they both stay silent, both get token sentences, which is the best overall result (known as cooperating). Yet logic would dictate, through the minimax principle (i.e. you should minimise the possibility of a worst-case scenario), that you should talk.
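
To see why, here’s the dilemma as a few lines of Python, with illustrative sentence lengths. Whatever your partner does, talking leaves you with less prison time:

```python
# Prisoner's Dilemma with illustrative sentence lengths (years).
# Each entry: (my sentence, partner's sentence) for (my move, their move).
SENTENCES = {
    ("silent", "silent"): (1, 1),    # mutual cooperation: token sentences
    ("silent", "talk"):   (10, 0),   # partner defects on me
    ("talk",   "silent"): (0, 10),   # I defect on my partner
    ("talk",   "talk"):   (5, 5),    # mutual defection: medium sentences
}

def best_reply(their_move: str) -> str:
    """My sentence-minimising move, holding my partner's move fixed."""
    return min(("silent", "talk"), key=lambda me: SENTENCES[(me, their_move)][0])

# Talking is the best reply whatever the partner does -- yet mutual
# silence (1 year each) beats mutual talking (5 years each).
print(best_reply("silent"), best_reply("talk"))  # talk talk
```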

The Prisoner’s Dilemma exists around us and it reveals, sometimes quite tragically, the behaviour pattern of interacting people. Sometimes, choices that seem logical, natural or ideal can lead to mutual damage and destruction. It also reveals that a disparity sometimes exists between individual rationality and group rationality.

Indeed, in non-cooperative game theoretic scenarios, the “best” choice for an individual sometimes results in collective disaster. John Nash earned the Nobel Prize in economics in 1994 for his work on what is now known as the “Nash equilibrium”: a state of play in which each player knows the strategies of the other players, and no player can benefit by changing their own strategy while the others keep theirs unchanged. For example, I can either work hard (cooperate) or slack off and just look busy (defect). But because my company will give me a raise regardless, I might as well slack off.
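
Here’s that work-or-slack game as a minimal sketch (the payoff numbers are invented): checking every combination of strategies confirms that mutual slacking is the only Nash equilibrium.

```python
# The work-or-slack game, with invented payoffs: the raise comes
# regardless of effort, and working costs something.
RAISE, EFFORT_COST = 10, 3
MOVES = ("work", "slack")

def payoff(my_move: str) -> int:
    """My payoff depends only on my own effort, since the raise is unconditional."""
    return RAISE - (EFFORT_COST if my_move == "work" else 0)

def is_nash(me: str, other: str) -> bool:
    """Neither player can gain by unilaterally switching strategy."""
    return (payoff(me) >= max(payoff(m) for m in MOVES)
            and payoff(other) >= max(payoff(m) for m in MOVES))

print([(a, b) for a in MOVES for b in MOVES if is_nash(a, b)])
# [('slack', 'slack')] -- mutual slacking is the only equilibrium
```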

Shall We Play A Game?

Since the field’s inception, game theorists have won no fewer than a dozen Nobel Prizes, mostly for work in economics. But game theory has also been applied to geopolitics, foreign relations, and strategic risk assessment.

Back in the 1950s during the Cold War, mathematicians Merrill Flood and Melvin Dresher undertook experiments as part of the RAND Corporation’s investigations into game theory. The state-sponsored group was looking to apply game theory to global nuclear strategy. It was around this time that computer scientist and mathematician John von Neumann came up with the strategy of Mutually Assured Destruction (MAD). In 1960, RAND futurist and Cold War strategist Herman Kahn advocated a more reasoned approach. In his book, On Thermonuclear War, he conceived of the Doomsday Machine, which he described as “an idealized (almost caricaturized) device”, to illustrate the danger of taking MAD to its extreme. Kahn’s work was later parodied in Dr. Strangelove, though he never advocated the hypothetical device as a practical deterrent.

That same year, economist and foreign affairs expert Thomas Schelling published a book, The Strategy of Conflict, that pioneered the study of bargaining and strategic behaviour, or conflict behaviour, through a game theoretic lens. His applications of game theory to warfare and nuclear disarmament were among the first to effectively apply the discipline to real life. In 2005, along with Robert Aumann, he won the Nobel Prize in Economic Sciences “for having enhanced our understanding of conflict and cooperation through game-theory analysis”.

Indeed, he presented a nuanced and creative application of game theory to important social, political and economic problems. He showed that persons or groups can actually strengthen their position by overtly worsening their own options, that the capability to retaliate can be more useful than the ability to resist an attack, and that uncertain retaliation is more credible and more efficient than certain retaliation. His counterintuitive insights proved to be of great relevance for conflict resolution and efforts to avoid war.

Writing in the Washington Post, Schelling’s former student, Michael Kinsley, provides an interesting example:

So you’re standing at the edge of a cliff, chained by the ankle to someone else. You’ll be released, and one of you will get a large prize, as soon as the other gives in. How do you persuade the other guy to give in, when the only method at your disposal — threatening to push him off the cliff — would doom you both?

Answer: You start dancing, closer and closer to the edge. That way, you don’t have to convince him that you would do something totally irrational: plunge him and yourself off the cliff. You just have to convince him that you are prepared to take a higher risk than he is of accidentally falling off the cliff. If you can do that, you win. You have done it by using probability to divide a seemingly indivisible threat. And a smaller threat can be more effective than a bigger one. A threat to drag both of you off the cliff is not credible. A threat to take a 60 per cent chance of that same thing might be credible.
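
Kinsley’s cliff example can be restated in expected-value terms. The utilities in this sketch are pure assumptions, but they show how “using probability to divide the threat” changes the other player’s calculation:

```python
# Expected-value sketch of the cliff game. The utilities are assumptions:
# falling is catastrophic, conceding merely stings, winning is nice.
DEATH, CONCEDE, WIN = -1000.0, 0.0, 100.0

def hold_out_value(p_fall: float) -> float:
    """My expected utility if I refuse to give in while my opponent
    dances us towards the edge with probability p_fall of a fall."""
    return p_fall * DEATH + (1 - p_fall) * WIN

# Conceding is worth 0, so once p_fall passes WIN / (WIN - DEATH),
# about 9%, the rational move is to give in.
for p in (0.05, 0.10, 0.60):
    print(f"p(fall) = {p:.2f}: holding out is worth {hold_out_value(p):+.0f}")
```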

Schelling said that deterrents must be credible to work. Military theorists such as Paul Huth have argued that threats are credible if the defending state possesses both the military capabilities to inflict substantial costs on an attacking state in an armed conflict, and the attacking state believes that the defending state is resolved to use its available military force. But as Schelling pointed out, a “credible threat” can sometimes come in the form of appearing a bit crazy or unhinged. In fact, some defenders of Richard Nixon claimed that his apparent insanity was actually a purposeful strategy to enhance the deterrent power of America’s nuclear arsenal.

Game theory, it’s clear, can lead to some very strange and even dangerous conclusions.

Post Cold War Uncertainty

Game theory, which takes a simplified view of interactions, was effective during the Cold War when the world was dominated by two prominent state actors, the U.S. and U.S.S.R. But now that the world has gone from a bipolar geopolitical arrangement to a multipolar one, things are considerably trickier.

Sean Gallup/Getty

For example, back in April when Russia was threatening Ukraine, some commentators worried about an eventual Russian invasion of Estonia and an ensuing NATO-led war. Political scientists like Jay Ulfelder now worry that this is part of a larger trend, and that peaceful settlements are becoming harder to find. Disturbingly, game theory supports this assertion. In a recent New York Times column, economist Tyler Cowen wrote:

The point from game theory is this: The more peacefully that disputes are resolved, the more that peaceful resolution is expected. That expectation, in turn, makes peace easier to achieve and maintain. But the reverse is also true: As peaceful settlement becomes less common, trust declines, international norms shift and conflict becomes more likely. So there is an unfavorable tipping point.

In the formal terminology of game theory, there are “multiple equilibria” (peaceful expectations versus expectations of conflict), and each event in a conflict raises the risk that peaceful situations can unravel. We’ve seen this periodically in history, as in the time leading up to World War I. There is a significant possibility that we are seeing a tipping point away from peaceful conflict resolution now.
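
One way to visualise the “multiple equilibria” point is with a toy feedback model in which the expectation of peace feeds on itself. The dynamics below are an invented illustration, not Cowen’s model:

```python
# A toy feedback model of peace expectations (all numbers invented).
# Each period, the share of disputes settled peacefully moves towards
# what recent history leads actors to expect.
def step(peace_share: float) -> float:
    """Peace begets peace, conflict begets conflict; 0.5 is the tipping point."""
    return peace_share ** 2 / (peace_share ** 2 + (1 - peace_share) ** 2)

for start in (0.55, 0.45):
    share = start
    for _ in range(25):
        share = step(share)
    print(f"starting at {start:.2f}, the system settles near {share:.2f}")
# starting just above the tipping point leads to near-universal peace;
# starting just below it unravels towards universal conflict.
```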

In the case of a potential conflict between NATO and Russia, game theory would suggest that NATO is not posing a credible threat. As noted in The Economist:

[The] last decision [for NATO] is whether or not to respond to a Russian invasion [of Estonia] by attacking Russia. The problem here is that the payoff to NATO’s big military powers to attacking Russia is hugely negative. A third world war fought with conventional weapons is among the best possible outcomes, with nuclear war being among the worst. The payoff to not attacking Russia, by contrast, is a small cost (to countries not called Estonia, or Latvia or Lithuania, or maybe Poland). It is difficult to imagine the key NATO governments risking thousands, or perhaps millions, of citizens’ lives for the integrity of Estonian territory.

So we then move to the penultimate decision. If the payoff to invasion is higher than that to not invading we can conclude that Russia will invade. Here we run into a little trouble since, on the face of things, not invading clearly entails a higher payoff, at least in terms of Russian welfare. But the identity of the decision-taker is important here. Clearly Mr Putin is willing to accept some economic cost to Russia to obtain foreign territory, so if our western eyes reckon it’s idiotic to invade we’re obviously not perceiving Mr Putin’s utility function correctly. The man gets something out of expanding Russia, throwing NATO for a loop, and generally reliving the bad old days. So it’s possible that Mr Putin will perceive the payoff to invading Estonia as positive. In that case, it is hard to imagine that American military threats will discourage him. Odds are decent that Mr Putin will start nibbling away at the Baltics after finishing with Ukraine.
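
The Economist’s reasoning here is a textbook backward induction, and it can be sketched in a few lines of Python. The payoff numbers below are illustrative assumptions, not estimates:

```python
# Backward-induction sketch of the invasion game (payoffs are
# illustrative assumptions, not estimates of anything real).
NATO_PAYOFFS = {"attack": -100, "acquiesce": -5}   # NATO's final decision

def nato_response() -> str:
    """NATO picks its best reply at the last node of the game tree."""
    return max(NATO_PAYOFFS, key=NATO_PAYOFFS.get)  # -> "acquiesce"

def russia_decision(territory_bonus: int) -> str:
    """Russia moves first, anticipating NATO's best reply. The bonus
    stands in for how much the decision-maker values expansion."""
    invasion_cost = 10   # sanctions, casualties, isolation
    if nato_response() == "attack":
        invade_payoff = -100
    else:
        invade_payoff = territory_bonus - invasion_cost
    return "invade" if invade_payoff > 0 else "hold back"

# With Western eyes (low value on territory) invasion looks idiotic;
# with a different utility function, the same game tree says "invade".
print(russia_decision(territory_bonus=0))    # hold back
print(russia_decision(territory_bonus=30))   # invade
```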

So what is NATO to do? As we’ll get to in just a bit, this is where game theory starts to fall a bit flat.

Navigating Extinction Risks

As noted, game theory has been used in the past to address existential risks, or at least one in particular: nuclear armageddon. Looking ahead, as human civilisation prepares to manage the next generation of self-inflicted apocalyptic threats, some philosophers have turned to game theory for potential guidance.

One such thinker is Oxford University’s Nick Bostrom. He came up with the maxipok principle, which states that we should:

Maximise the probability of an ‘OK outcome’, where an OK outcome is any outcome that avoids existential catastrophe.

In other words, and from a utilitarian perspective, the loss in expected value resulting from an apocalyptic catastrophe is so enormous that the goal of reducing existential risks should be the most important consideration whenever we act out of an impersonal concern for humankind as a whole. Thus, we should adopt a policy that influences the order in which various technological capabilities are attained — a principle he calls Differential Technological Development.

According to this rule, we should deliberately slow down the development of dangerous technologies, particularly the ones that raise the level of existential risk, and accelerate the development of beneficial technologies, especially those that might protect humanity from the risks posed by nature or by other technologies. Futurists Luke Muehlhauser and Anna Salamon have taken Bostrom’s idea one step further by proposing Differential Intellectual Progress, in which society advances its collective wisdom, philosophical sophistication, and understanding of risks faster than its technological power.

At best, however, maxipok should be used as a rule of thumb, not as some kind of moral compass or ultimate decision-making principle. As Bostrom notes,

It is not a principle of absolute validity, since there clearly are moral ends other than the prevention of existential catastrophe. The principle’s usefulness is as an aid to prioritisation. Unrestricted altruism is not so common that we can afford to fritter it away on a plethora of feel-good projects of suboptimal efficacy. If benefiting humanity by increasing existential safety achieves expected good on a scale many orders of magnitude greater than that of alternative contributions, we would do well to focus on this most efficient philanthropy.

It’s also important to note that maxipok differs from the popular maximin principle, which suggests we should choose the action with the best worst-case outcome. Bostrom claims that, since we cannot completely eliminate existential risk, the maximin principle would require us to choose the action that has the greatest benefit under the assumption of impending extinction. That would imply that we should “all start partying as if there were no tomorrow” — which Bostrom agrees is as implausible as it is undesirable.
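
The contrast between the two rules can be made concrete with a toy example. The actions, utilities, and the assumption that safety work halves the chance of doom are all invented for illustration:

```python
# Maximin versus maxipok on a toy choice. The utilities, the scenarios,
# and the assumption that safety work halves the risk are all invented.
ACTIONS = {
    "party like there's no tomorrow": {"doom": 5, "no doom": 2},
    "invest in existential safety":   {"doom": 0, "no doom": 10},
}

def maximin(actions):
    """Choose the action whose worst-case utility is highest."""
    return max(actions, key=lambda a: min(actions[a].values()))

def maxipok(actions, p_doom=0.1):
    """Choose the action that maximises the probability of an OK outcome."""
    def p_ok(action):
        p = p_doom / 2 if "safety" in action else p_doom
        return 1 - p
    return max(actions, key=p_ok)

# If doom is assumed to be coming, partying has the better "worst case";
# maxipok instead rewards whatever shrinks the chance of catastrophe.
print(maximin(ACTIONS))   # party like there's no tomorrow
print(maxipok(ACTIONS))   # invest in existential safety
```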

As noted, the maxipok principle helps with prioritisation. It can also serve as a guide when doing a cost/benefit analysis of potentially destructive technologies.

But as noted by philosopher Anders Sandberg:

There are unpredictable bad technologies, but they are not immoral to develop. However, developers do have a responsibility to think carefully about the possible implications or uses of their technology. And if your baby-tickling machine involves black holes you have a good reason to be cautious.

Of course, “commensurate” is going to be the tricky word here. Is a halving of nuclear weapons and biowarfare risk good enough to accept a doubling of superintelligence risk? Is a tiny probability existential risk (say from a physics experiment) worth interesting scientific findings that will be known by humanity through the entire future? The MaxiPOK principle would argue that the benefits do not matter or weigh rather lightly. The current gain-of-function debate shows that we can have profound disagreements — but also that we can try to construct institutions and methods that regulate the balance, or inventions that reduce the risk. This also shows the benefit of looking at larger systems than the technology itself: a potentially dangerous technology wielded responsibly can be OK if the responsibility is reliable enough, and if we can bring a safeguard technology into place before the risky technology, it might no longer be unacceptable.

As Sandberg correctly points out, maxipok (and even maximin/minimax) can only be taken so far; it’s helpful, but not sufficient.

What’s more, these strategies represent subjective preferences; they can describe existing preferences, but they are not really prescriptive — they capture what people do do, not what they should do. And game theory is not concerned with how individual people make decisions in the face of uncertainty and ambiguity; that is the domain of a field called decision theory.

Staving Off An Alien Invasion

Here’s another way that game theory could help us avoid extinction, albeit a more speculative one.

As we search for extraterrestrial intelligence (SETI), we have no way of knowing if aliens are friendly or not, making the practice of Active SETI a dangerous one indeed. Messages sent into deep space could alert hostile aliens to our presence. So what are we to do?

According to mathematician Harold de Vladar, game theory may be able to help. He argues that the SETI problem is essentially the Prisoner’s Dilemma in reverse. Mutual silence for the prisoners is the equivalent of mutual broadcasting for aliens, presenting the best result for both civilisations. And instead of a selfish prisoner ratting out his accomplice, selfish aliens could remain silent in hopes that another civilisation takes the risk of shouting out into the cosmos.

New Scientist elaborates:

In the classic version of the prisoner’s dilemma, each selfishly rats on the other. But as we do not know the character of any aliens out there, and as it is difficult to put a value on the benefits to science, culture and technology of finding an advanced civilisation, de Vladar varied the reward of finding aliens and the cost of hostile aliens finding us. The result was a range of optimal broadcasting strategies. “It’s not about whether to do it or not, but how often,” says de Vladar.

One intriguing insight was that as you scale up the rewards placed on finding aliens, you can scale down the frequency of broadcasts, while keeping the expected benefit to Earthlings the same. Being able to keep broadcasts to a minimum is good news, because they come with costs – rigging our planet with transmitters won’t come cheap – and risk catastrophic penalties, such as interstellar war.
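
That trade-off is easy to sketch. Every parameter below (the probabilities of contact and hostility, the reward, the costs) is a guess, but the pattern de Vladar describes falls out: scale the reward up tenfold, broadcast a tenth as often, and the expected benefit stays the same while the expected costs shrink.

```python
# Sketch of de Vladar's broadcast trade-off. Every parameter here is a
# guess: p_contact, p_hostile, the reward R, the cost C, and upkeep.
def net_payoff(q, R, C=5_000, p_contact=0.01, p_hostile=0.001, upkeep=1.0):
    """Expected payoff of broadcasting with probability q per period."""
    return q * (p_contact * R - p_hostile * C - upkeep)

# Scale the reward up tenfold and the broadcast rate down tenfold:
# the expected benefit (q * p_contact * R) is unchanged, while the
# expected costs and risks shrink with q.
for R, q in ((1_000, 0.50), (10_000, 0.05)):
    benefit = q * 0.01 * R
    print(f"R={R:>6}, q={q:.2f}: benefit {benefit:.1f}, net {net_payoff(q, R):.2f}")
```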

It’s an interesting strategy, but one predicated on far too many unknowns.

Not An Entirely Valid Approach

These various scenarios and strategies are all very interesting. But could they really help humanity avert an existential catastrophe? I contacted Future of Humanity Institute research fellow Stuart Armstrong to learn more.

“The unsexy truth is that game theory’s main contribution to risk mitigation is identifying areas where game theory should not be allowed to be valid,” he told io9. “What’s more, the problem is that game theory, when it works, simply says what will happen when idealised players are in a certain competitive situation. It merely illustrates situations where the game theoretic outcome is a very bad one, which motivates us to change the terms of the competitive situation.”

He offered the example of global warming.

“Game theory tells us that everyone benefits from overall cuts in emissions, and benefits from being able to emit themselves. So everyone wants everyone else to reduce emissions, while emitting themselves,” he says. “But the Nash equilibrium suggests that everyone will continue to emit, so the planet will eventually burn up.”
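
Armstrong’s emissions game is essentially the Prisoner’s Dilemma with more players. Here it is for two, with invented numbers: emitting is the better reply whatever the other side does, yet mutual emission leaves everyone worse off than mutual restraint.

```python
# The emissions game for two players (invented numbers): each emitter
# pockets a private gain but imposes a shared cost on everyone.
PRIVATE_GAIN = 5
SHARED_DAMAGE_PER_EMITTER = 4

def payoff(i_emit: bool, other_emits: bool) -> int:
    gain = PRIVATE_GAIN if i_emit else 0
    damage = SHARED_DAMAGE_PER_EMITTER * (int(i_emit) + int(other_emits))
    return gain - damage

# Emitting is the better reply whatever the other player does...
for other in (False, True):
    assert payoff(True, other) > payoff(False, other)
# ...so everyone emits, even though mutual restraint beats mutual emission.
print(payoff(True, True), payoff(False, False))   # -3 0
```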

To avoid that fate, Armstrong says we need to step out of game theory and utilise such things as multilateral agreements or similar interventions which can change our assumptions.

He also says that game theory has similar implications for arms races in artificial intelligence. In the race to develop powerful AI first, some developers may skimp on safety issues. It also means that “public goods, like existential risk defences (such as asteroid deflection initiatives) will be underfunded, absent some international agreement (everyone would be tempted to ‘free ride’ on the defence provided by someone else).”

Armstrong says that the models used in game theory are always a simplification of reality, so they’re not always valid.

“You could argue that mugging, for instance, is a low-risk activity, so more people should indulge in it,” he says. “It’s likely that some models have a Nash equilibrium where almost everyone is a mugger, and the police are too overwhelmed to do anything about it.”

Consequently, there are legitimate and illegitimate uses of these models.

“An illegitimate use of such a model is to say ‘well, it looks like there will be a future of mugging!’ A legitimate use of it would be to suggest that there are forces in society that prevent mugging going to its natural equilibrium. This could be social norms, ethical values, ignorance on the part of the would-be muggers, expectation that the police would react to contain an increase in mugging before it became uncontrollable, or something not modelled. Then we could start investigating why the model and reality diverged — and try and keep it that way.”

Finally, Armstrong pointed out that prisoners, when subject to the Prisoner’s Dilemma, often avoid defecting. So there are potential non-regulatory tools (such as reputation) to avoid game theoretic attractors.
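
A minimal illustration of that point: in an iterated Prisoner’s Dilemma (standard illustrative payoffs below), a reputation-style strategy like tit-for-tat sustains cooperation with its own kind and limits what a habitual defector can extract.

```python
# An iterated Prisoner's Dilemma with standard illustrative payoffs,
# showing how repetition (a stand-in for reputation) rescues cooperation.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))     # (30, 30): cooperation holds
print(play(tit_for_tat, always_defect))   # (9, 14): defection caps out early
```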

Taken together, it’s evident that game theory is probably not the best approach for avoiding existential risks. It’s over-simplified, non-prescriptive, and at times dangerous. But as Armstrong points out, it can alert us to potential problems in our thinking, which can be corrected before disaster strikes.

Additional source: Stanford Encyclopedia of Philosophy.

