What sort of post-superintelligence society should we aim for?
The case for ‘viatopia’
Many of the biggest companies in the world are racing to build superintelligence — artificial intelligence that far exceeds the capability of the best humans across all domains. This will not merely be one more invention. The magnitude of the transformation will be beyond that of the printing press, or the steam engine, or electricity; more on a par with the evolution of Homo sapiens, or of life itself.
Yet almost no one has articulated a positive vision for what comes after superintelligence. Few people are even asking, “What if we succeed?” Even fewer have tried to answer.1
The speed and scale of the transition mean we can’t just muddle through. Without a positive vision, we risk defaulting to whatever emerges from market and geopolitical dynamics, with little reason to think that the result will be anywhere close to as good as it could be. We need a north star, but we have none.
This essay is the first in a series that discusses what a good north star might be. I begin by describing a concept that I find helpful in this regard:
Viatopia: an intermediate state of society that is on track for a near-best future, whatever that might look like.2
Viatopia is a waystation rather than a final destination; etymologically, it means “by way of this place”. We can often describe good waystations even if we have little idea what the ultimate destination should be. A teenager might have little idea what they want to do with their life, but know that a good education will keep their options open. Adventurers lost in the wilderness might not know where they should ultimately be going, but still know they should move to higher ground where they can survey the terrain. Similarly, we can identify what puts humanity in a good position to navigate towards excellent futures, even if we don’t yet know exactly what those futures look like.
In the past, Toby Ord and I have promoted the related idea of the “long reflection”: a stable state of the world where we are safe from calamity, and where we reflect on and debate the nature of the good life, working out what the most flourishing society would be. Viatopia is a more general concept: the long reflection is one proposal for what viatopia would look like, but it need not be the only one.34
I think that some sufficiently-specified conception of viatopia should act as our north star during the transition to superintelligence. In later essays I’ll discuss what viatopia, concretely, might look like; this note will just focus on explaining the concept.
We can contrast the viatopian perspective with two others. First, utopianism: that we should figure out what an ideal end-state for society is, and aim towards that. Needless to say, utopianism has a bad track record.5 From Plato’s Republic onwards, fiction and philosophy have given us scores of alleged utopias that look quite dystopian to us now. Members of every generation have been confident they understood what a perfect society would look like, and they have been wrong in ways their descendants found obvious. We should expect our situation to be no different, such that any utopia we design today would look abhorrent to our more-enlightened descendants. We should have more humility than the utopian perspective suggests.
The second perspective, which futurist Kevin Kelly called “protopianism” and Karl Popper decades earlier called “piecemeal engineering”, is motivated by the rejection of utopianism.6 On this alternative perspective, we shouldn’t act on any big-picture view of where society should be going. Instead, we should just identify whatever the most urgent near-term problems are, and solve such problems one by one.7
There is a lot to be said in favour of protopianism, but it seems insufficient as a framework to deal with the transition to superintelligence. Over the course of this transition, we will face many huge problems all at once, and we’ll need a way of prioritising among them. Should we accelerate AI, to cure disease and achieve radical abundance as fast as possible? Or should we slow down and invest in increased wisdom, security, and ability to coordinate? Protopianism alone can’t help us; or, if it does, it might encourage us to grab short-term wins at the expense of humanity’s long-term flourishing.
Viatopianism offers a distinctive third perspective. Unlike utopianism, it cautions against the idea of having some ultimate end-state in mind. Unlike protopianism, it attempts to offer a vision for where society should be going. It focuses on achieving whatever society needs to be able to steer itself towards a truly wonderful outcome.
What would a viatopia look like? To answer this question, we need to identify what makes a society well-positioned to reach excellent futures. John Rawls coined the idea of primary goods: things that rational people want whatever else they want.8 These include health, intelligence, freedom of thought, free choice of occupation, and material wealth. We could suggest an analogous concept of societal primary goods: things that it would be beneficial for a society to have, whatever futures people in that society are aiming towards.
What might these societal primary goods be? They could include:
Material abundance
Scientific knowledge and technological capability
The ability to coordinate to avoid war and other negative-sum competition
The ability to reap gains from trade
Very low levels of catastrophic risk
Beyond societal primary goods, we should also favour conditions that enable society to steer itself towards the best states, and away from dystopias. This could include:
Preserving optionality, so a wide variety of futures remain possible.
Cultivating people’s ability and motivation to reflect on their values.
Structuring collective deliberations so that better arguments and ideas win out over time.
Designing decision-making processes that help people realise what they value as fully as possible.
Ensuring sufficient stability that these viatopian structures cannot be easily overturned.
But this list is provisional: intended to illustrate what viatopia might look like, rather than define it.
The transition to superintelligence will be the most consequential period in human history, and it is beginning now. During this time, people will need to make some enormously high-stakes decisions, which could set the course of the future indefinitely. Aiming toward some narrow conception of an ideal society would be a mistake, but so would just trying to solve problems in an ad-hoc and piecemeal manner. Instead, I think we should make decisions that move us towards viatopia: a society that, even if it doesn’t know its ultimate destination, has equipped itself with the resources, wisdom, and flexibility it needs to steer itself towards a future that’s as good as it could be.
You can read this post on the Forethought website, along with the rest of our research.
AI company leaders have typically pointed to particular ways in which AI will be beneficial for society. Dario Amodei describes this at greatest length in Machines of Loving Grace; Sam Altman in Moore’s Law for Everything and Planning for AGI and beyond; Demis Hassabis and Elon Musk have made comments across various interviews (see e.g. here and here for Hassabis and here and here for Musk). Some of the named benefits include curing disease, improving mental health, radical abundance and prosperity, and very high-quality education.
But this is a far cry from a complete positive vision for a post-AGI future. AGI won’t result in a world that’s just like ours except we’re richer and have better health; it will transform society. Such a vision needs to grapple with the many changes that AGI would bring about; I give an overview of these challenges in Preparing for the Intelligence Explosion (co-authored with Fin Moorhouse).
There are some other limited exceptions that tackle parts of the problem. For example, Nick Bostrom’s Letter from Utopia describes just how good things could get in a post-AGI world. In Deep Utopia, Bostrom has an extended and interesting discussion of how life could be meaningful once survival, work, and progress no longer require anything of us.
And Eric Drexler has introduced the concept of Paretopia. He powerfully makes the case that (i) AI-driven abundance means that everyone, by working together, can get vastly more of what they want and that (ii) for most people, as long as they get some share of the post-AI abundance, ensuring that such abundance occurs at all is much more important than trying to get an even larger share if it does come about.
More precisely: a viatopia is a society whose expected value is at least 50% that of a guarantee of a best feasible outcome.
A best feasible outcome is an outcome at the 99.99th percentile of how well things could go, judged from today. The probabilities invoked here are epistemic probabilities: the subjective credences a highly intelligent and well-informed observer would have.
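The two footnotes above can be stated compactly. As a sketch (the symbols here are mine, not from the essay): let $V$ denote the value of how the future goes, with epistemic probabilities over outcomes as judged from today. Then:

$$
s \text{ is a viatopia} \iff \mathbb{E}[V \mid s] \;\ge\; 0.5 \cdot v^{*},
$$

where $v^{*}$ is the value of a best feasible outcome, i.e. the 99.99th percentile of $V$. In words: conditional on reaching state $s$, the expected value of the future is at least half of what a guaranteed near-best outcome would deliver.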
I define an “outcome” as the whole history of a society. So, for example, one could have the characteristically nonconsequentialist view that any future for society that is achieved via a bad process (e.g. a dictator seizes power and then implements their benevolent will) could not amount to a near-best outcome.
I intend for the concept of viatopia to be useful for those with many different moral perspectives, including non-consequentialism; in some cases that might require minor departures from the above definition. For views that reject the idea that value can be cardinal, we could define viatopia directly as a state that has a very high probability of resulting in a near-best outcome and a very low probability of resulting in an astronomically bad outcome. Some forms of non-consequentialism reject the idea of impartial value altogether; on such views, we could talk about the expected choiceworthiness of different states of society instead.
And, in particular, given the sheer scale of cognitive abundance that superintelligence could unlock, the reflective process might not need to last very long in calendar time. So I think it’s unwise to bake in the idea that the viatopian state needs to last a long time.
Another account which you could interpret as a proposal for viatopia is Robert Nozick’s idea of “meta-utopia”, where many different communities pursue different utopian visions, which people are free to leave as they wish, and where no one can impose their utopian vision on others (Anarchy, State, and Utopia, p. 312). Scott Alexander’s concept of “Archipelago” is similar, as is my concept of a “morally exploratory world” in What We Owe the Future. In my account, at least, the core idea is that individual free choice would lead to the best societies winning out over time.
And it has a bad track record even if we put aside the atrocities committed in the name of utopian ideals, and utopianism’s tendency towards totalitarianism.
It’s also related to the ideal of “liberal neutrality” in political philosophy: that the state should have no view on the moral good.
This is like the idea of “hill-climbing” algorithms: take whatever small actions will improve things from where you currently are, rather than trying to work out what hill in the landscape is highest and walking straight towards it, even if that means going downhill initially.
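The hill-climbing analogy can be made concrete. Here is a minimal sketch in Python (the function names and parameters are illustrative, not from the essay): a greedy local search over a one-dimensional landscape, which takes whatever small step improves things from where it currently stands.

```python
def hill_climb(f, x, step=0.1, iters=1000):
    """Greedy local search: repeatedly take whichever small move improves f,
    and stop when no neighbouring move is an improvement (a local peak)."""
    for _ in range(iters):
        # Consider only the immediate neighbours of the current position.
        best = max([x + step, x - step], key=f)
        if f(best) <= f(x):
            break  # no small move helps: we are on top of *some* hill
        x = best
    return x

# On a single-peaked landscape, greedy steps find the summit.
peak = hill_climb(lambda x: -(x - 3) ** 2, x=0.0)
```

The limitation mirrors the worry about protopianism in the main text: the procedure reliably climbs the hill it starts on, but nothing guarantees that hill is the highest one in the landscape, and it will never go downhill to cross a valley towards a taller peak.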



This is well-meaning, but it suffers from the usual ambiguous use of "we". If "we" were able to have a productive discussion about where we wanted to go, "we" would already have had it. The problem is that the vast majority of "us" are unwilling or unable to have such a discussion, and even the more enlightened of "us" can't seem to agree on anything of significance, so the idea that "we" might reach some sort of useful consensus is an intellectual utopianism in itself.
Great post! Excited for the series. What do you think of the idea that we should be aiming for a Viatopia "all the way down"? Perhaps we should *always* maintain epistemic humility, never giving a non-provisional answer to the Socratic question writ large ("How ought we (the society/cosmos) to live?"), even eons onwards. Perhaps we should aim for a society that is always on-track to become an even better society that is capable of being on-track to become an— . Not sure if this is fully coherent, but the prospect of the world keeping its options always open, always open to evolving, always open to positive paradigm shifts, seems much more attractive than the prospect of the world initially being a Viatopia and then succumbing to what *seemed to it* a grand excellent vision but what is in objective truth a narrow end state (that its benighted condition couldn't fathom to be a narrow pitiable end state).
Also, wouldn't a safe, aligned, general superintelligence be vastly more suited to answering these difficult questions than us humans?