How to make the future better (other than by reducing extinction risk)
A summary of a new essay.
What projects today could most improve a post-AGI world?
In “How to make the future better”, I lay out some areas I see as high-priority, beyond reducing risks from AI takeover and engineered pandemics.
These areas include:
Preventing post-AGI autocracy
Improving the governance of projects to build superintelligence
Deep space governance
Working on AI value-alignment; figuring out what character AI should have
Developing a regime of AI rights
Improving AI for reasoning, coordination and decision-making.
Here’s an overview.
First, preventing post-AGI autocracy. Superintelligence structurally leads to concentration of power: post-AGI, human labour soon becomes worthless; those who can spend the most on inference-time compute have access to greater cognitive abilities than anyone else; and the military (and whole economy) can in principle be aligned to a single person.
The risk from AI-enabled coups in particular is detailed at length here. To reduce this risk, we can try to introduce constraints on coup-assisting uses of AI, diversify military AI suppliers, slow autocracies via export controls, and promote credible benefit-sharing.
Second, governance of ASI projects. If there’s a successful national project to build superintelligence, it will wield world-shaping power. We therefore need governance structures—ideally multilateral or at least widely distributed—that can be trusted to reflect global interests, embed checks and balances, and resist drift toward monopoly or dictatorship. Rose Hadshar and I give a potential model here: Intelsat, a successful US-led multilateral project to build the world’s first global communications satellite network.
What’s more, for any new major institutions like this, I think we should make their governance explicitly temporary: they should come with reauthorization clauses stating that the law or institution must be reauthorized after some period of time.
Intelsat gives an illustration: it was created under “interim agreements”; after five years, negotiations began for “definitive agreements”, which came into force four years after that. The fact that the initial agreements were only temporary helped get non-US countries on board.
Third, deep space governance. This is crucial for two reasons: (i) the acquisition of resources within our solar system is a way in which one country or company could get more power than the rest of the world combined, and (ii) almost all the resources that can ever be used are outside of our solar system, so decisions about who owns these resources are decisions about almost everything that will ever happen.
Here, we could try to prevent lock-in, by pushing for an international understanding of the Outer Space Treaty under which de facto grabs of space resources (“seizers keepers”) are clearly illegal.
Or, assuming the current “commons” regime breaks down given how valuable space resources will become, we could try to figure out in advance what a good alternative regime for allocating space resources might look like.
Fourth, working on AI value-alignment. Though corrigibility and control are important to reduce takeover risk, we also want to focus on ensuring that the AI we create positively influences society in the worlds where it doesn’t take over. That is, we need to figure out the “model spec” for superintelligence - what character it should have - and how to ensure it has that character.
I think we want AI advisors that aren’t sycophants, and aren’t merely trying to fulfill their users’ narrow self-interest - at least in the highest-stakes situations, like AI for political advice. Instead, we should want them to nudge us to act in accordance with the better angels of our nature.
(And, though it might be more difficult to achieve, we can also try to ensure that, even if superintelligent AI does take over, it (i) treats humans well, and (ii) creates a more flourishing AI civilisation than it would have done otherwise.)
Fifth, AI rights. Even just for the mundane reason that it will be economically useful to give AIs rights to make contracts (etc), as we do with corporations, I think it’s likely we’ll soon start giving AIs at least some rights.
But what rights are appropriate? An AI rights regime will affect many things: the risk of AI takeover; the extent to which AI decision-making guides society; and the wellbeing of AIs themselves, if and when they become conscious.
In the future, it’s very likely that almost all beings will be digital. The first legal decisions we make here could set precedent for how they’re treated. But there are huge unresolved questions about what a good society involving both human beings and superintelligent AIs would look like. We’re currently stumbling blind into one of the most momentous decisions that will ever be made.
Finally, deliberative AI. AI has the potential to be enormously beneficial for our ability to think clearly and make good decisions, both individually and collectively. (And, yes, it has the potential to be enormously destructive here, too.)
We could try to build and widely deploy AI tools for fact-checking, forecasting, policy advice, macrostrategy research and coordination; this could help ensure that the most crucial decisions are made as wisely as possible.
I’m aware that there are a lot of different ideas here, and that these are just potential ideas - more proof of concept than fully fleshed-out proposals. But my hope is that work on these areas - taking them from inchoate to tractable - could help society to keep its options open, to steer any potential lock-in events in better directions, and to equip decision-makers with the clarity and incentives needed to build a flourishing, rather than a merely surviving, future.