My news feed has been dominated over the last week by arguments both for and against a pause on AI development, prompted by this open letter by the Future of Life Institute, which called for:
AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
Chas Hasbrouck has an excellent post summarising the various views on AI held by different groups (and examples of the people belonging to each group). Tyler Cowen then suggested on the Marginal Revolution blog that we should be considering the game theory of this situation (see also his column on Bloomberg - as Hasbrouck notes, Cowen is one of the 'Pragmatists'). I want to follow up on Cowen's suggestion, and look at the game theory. However, things are a little complicated, because it isn't clear what the payoffs are in this game. There is so much uncertainty. So, in this post, I present three different scenarios, and work through the game theory of each of them. For simplicity, each game has two players (call them Country A and Country B), and each player has two strategies (pause development on AI, or speed ahead).
Scenario #1: AI Doom with any development
In this scenario, if either country speeds ahead and the other doesn't, the outcomes are bad, but if both countries speed ahead, the planet faces an extinction-level event (for humans, at the least). The payoffs for this scenario (listed as payoff to Country A, payoff to Country B) are shown in the table below.

| | Country B: Pause development | Country B: Speed ahead |
| --- | --- | --- |
| Country A: Pause development | 0, 0 | -10, -5 |
| Country A: Speed ahead | -5, -10 | Extinction, Extinction |
To find the Nash equilibrium in this game, we use the 'best response method'. To do this, we track, for each strategy of each player, the best response of the other player. Where both players are selecting a best response, each is doing the best they can, given the choice of the other player (this is the definition of Nash equilibrium). In this game, the best responses are:
- If Country B chooses to pause development, Country A's best response is to pause development (since a payoff of 0 is better than a payoff of -5);
- If Country B chooses to speed ahead, Country A's best response is to pause development (since a payoff of -10 is better than extinction);
- If Country A chooses to pause development, Country B's best response is to pause development (since a payoff of 0 is better than a payoff of -5); and
- If Country A chooses to speed ahead, Country B's best response is to pause development (since a payoff of -10 is better than extinction).
In this scenario, both countries have a dominant strategy to pause development. Pausing development is always better for a country, no matter what the other country decides to do (pausing development is always the best response).
For anyone who believes in this scenario, pausing development will seem like a no-brainer, since it is a dominant strategy.
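The best response method can be sketched in a few lines of code. This is a minimal illustration, not the original calculation: the payoffs are the numbers quoted in the best responses above, and -100 is an assumed stand-in for the extinction outcome (any sufficiently negative number gives the same answer).

```python
from itertools import product

PAUSE, SPEED = "pause", "speed ahead"

# Scenario #1 payoffs as (payoff to Country A, payoff to Country B).
# -100 is an assumed stand-in for extinction.
scenario_1 = {
    (PAUSE, PAUSE): (0, 0),
    (PAUSE, SPEED): (-10, -5),
    (SPEED, PAUSE): (-5, -10),
    (SPEED, SPEED): (-100, -100),
}

def nash_equilibria(payoffs):
    """Best response method: keep the cells where each player's
    strategy is a best response to the other player's strategy."""
    strategies = (PAUSE, SPEED)
    found = []
    for a, b in product(strategies, strategies):
        a_best = all(payoffs[(a, b)][0] >= payoffs[(alt, b)][0] for alt in strategies)
        b_best = all(payoffs[(a, b)][1] >= payoffs[(a, alt)][1] for alt in strategies)
        if a_best and b_best:
            found.append((a, b))
    return found

print(nash_equilibria(scenario_1))  # [('pause', 'pause')]

# Pausing is dominant for Country A: it is a best response
# to both of Country B's strategies (and likewise for B, by symmetry).
assert all(scenario_1[(PAUSE, b)][0] >= scenario_1[(SPEED, b)][0]
           for b in (PAUSE, SPEED))
```

Because the game is symmetric, the same dominance check holds for Country B.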
Scenario #2: AI Doom if everyone speeds ahead
In this scenario, if both countries speed ahead, the planet faces an extinction-level event (for humans, at the least). However, if only one country speeds ahead, then AI alignment can keep up, preventing the extinction-level event. The country that speeds ahead earns a big advantage. The payoffs for this scenario (listed as payoff to Country A, payoff to Country B) are shown in the table below.

| | Country B: Pause development | Country B: Speed ahead |
| --- | --- | --- |
| Country A: Pause development | 0, 0 | -2, 10 |
| Country A: Speed ahead | 10, -2 | Extinction, Extinction |
Again, let's find the Nash equilibrium using the best response method. In this game, the best responses are:
- If Country B chooses to pause development, Country A's best response is to speed ahead (since a payoff of 10 is better than a payoff of 0);
- If Country B chooses to speed ahead, Country A's best response is to pause development (since a payoff of -2 is better than extinction);
- If Country A chooses to pause development, Country B's best response is to speed ahead (since a payoff of 10 is better than a payoff of 0); and
- If Country A chooses to speed ahead, Country B's best response is to pause development (since a payoff of -2 is better than extinction).
In this scenario, there is no dominant strategy. However, there are two Nash equilibria, which occur when one country speeds ahead, and the other pauses development. Neither country will want to be the country that pauses, so both will hold out, hoping that the other country will pause. This is an example of the chicken game (which I have discussed here). If both countries speed ahead, hoping that the other country will pause, we will end up with an extinction-level event.
For anyone who believes in this scenario, pausing development will seem like a good option, even if only one country pauses development. However, no country will willingly buy into being the one that pauses.
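The two equilibria in this chicken game can be checked the same way as in the first scenario. Again, a hypothetical sketch: the payoffs are the numbers quoted in the best responses above, with -100 as an assumed stand-in for extinction.

```python
from itertools import product

PAUSE, SPEED = "pause", "speed ahead"

# Scenario #2 payoffs as (payoff to Country A, payoff to Country B).
# -100 is an assumed stand-in for extinction.
scenario_2 = {
    (PAUSE, PAUSE): (0, 0),
    (PAUSE, SPEED): (-2, 10),
    (SPEED, PAUSE): (10, -2),
    (SPEED, SPEED): (-100, -100),
}

def nash_equilibria(payoffs):
    """Best response method: keep the cells where each player's
    strategy is a best response to the other player's strategy."""
    strategies = (PAUSE, SPEED)
    found = []
    for a, b in product(strategies, strategies):
        a_best = all(payoffs[(a, b)][0] >= payoffs[(alt, b)][0] for alt in strategies)
        b_best = all(payoffs[(a, b)][1] >= payoffs[(a, alt)][1] for alt in strategies)
        if a_best and b_best:
            found.append((a, b))
    return found

print(nash_equilibria(scenario_2))
# [('pause', 'speed ahead'), ('speed ahead', 'pause')]
```

The two asymmetric cells are both equilibria, which is exactly the chicken structure: each country prefers to speed ahead while the other pauses, and the mutual speed-ahead cell is the disaster neither wants.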
Scenario #3: AI Utopia
In this scenario, if both countries speed ahead, the planet reaches an AI utopia. The fears of an extinction-level event do not play out, and everyone is gloriously happy. However, if only one country speeds ahead, then the outcomes are good, but not as good as they would be if both countries sped ahead. Also, the country that speeds ahead earns a big advantage. The payoffs for this scenario (listed as payoff to Country A, payoff to Country B) are shown in the table below.

| | Country B: Pause development | Country B: Speed ahead |
| --- | --- | --- |
| Country A: Pause development | 0, 0 | -2, 10 |
| Country A: Speed ahead | 10, -2 | Utopia, Utopia |
Again, let's find the Nash equilibrium using the best response method. In this game, the best responses are:
- If Country B chooses to pause development, Country A's best response is to speed ahead (since a payoff of 10 is better than a payoff of 0);
- If Country B chooses to speed ahead, Country A's best response is to speed ahead (since utopia is better than a payoff of -2);
- If Country A chooses to pause development, Country B's best response is to speed ahead (since a payoff of 10 is better than a payoff of 0); and
- If Country A chooses to speed ahead, Country B's best response is to speed ahead (since utopia is better than a payoff of -2).
In this scenario, both countries have a dominant strategy to speed ahead. Speeding ahead is always better for a country, no matter what the other country decides to do (speeding ahead is always the best response).
For anyone who believes in this scenario, speeding ahead will seem like a no-brainer, since it is a dominant strategy.
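One more run of the same sketch confirms the dominance argument for this scenario. The payoffs are the numbers quoted in the best responses above; the value of 20 for utopia is an assumption for illustration (any number above 10 gives the same result).

```python
from itertools import product

PAUSE, SPEED = "pause", "speed ahead"

# Scenario #3 payoffs as (payoff to Country A, payoff to Country B).
# 20 is an assumed stand-in for utopia.
scenario_3 = {
    (PAUSE, PAUSE): (0, 0),
    (PAUSE, SPEED): (-2, 10),
    (SPEED, PAUSE): (10, -2),
    (SPEED, SPEED): (20, 20),
}

def nash_equilibria(payoffs):
    """Best response method: keep the cells where each player's
    strategy is a best response to the other player's strategy."""
    strategies = (PAUSE, SPEED)
    found = []
    for a, b in product(strategies, strategies):
        a_best = all(payoffs[(a, b)][0] >= payoffs[(alt, b)][0] for alt in strategies)
        b_best = all(payoffs[(a, b)][1] >= payoffs[(a, alt)][1] for alt in strategies)
        if a_best and b_best:
            found.append((a, b))
    return found

print(nash_equilibria(scenario_3))  # [('speed ahead', 'speed ahead')]

# Speeding ahead is dominant for Country A: it is a best response
# to both of Country B's strategies (and likewise for B, by symmetry).
assert all(scenario_3[(SPEED, b)][0] >= scenario_3[(PAUSE, b)][0]
           for b in (PAUSE, SPEED))
```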
Which is the 'true' scenario? I have no idea. No one has any idea. We could ask ChatGPT, but I strongly suspect that ChatGPT will have no idea as well. [*] What the experts believe we should do depends on which of the scenarios they believe is likely to be playing out. Or perhaps, given that any of the three scenarios (or one of millions of other potential scenarios, with different players and payoffs) could be playing out, the precautionary principle should apply? The problem there, though, is that if any country pauses development, the best response in every scenario except the first is for the other countries to speed ahead. So, unless all countries can be convinced to apply the precautionary principle, pausing development is simply unlikely.
We live in interesting times.
*****
[*] Actually, I tried this, and ChatGPT refused to offer an opinion; instead, it said: "...it is crucial that policymakers and stakeholders work together to develop standards and guidelines for responsible AI development and deployment to minimize potential risks and maximize benefits for society as a whole." Thanks, ChatGPT.