<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[ForeWord]]></title><description><![CDATA[How should we navigate explosive AI progress? 

The latest research from Forethought.]]></description><link>https://newsletter.forethought.org</link><image><url>https://substackcdn.com/image/fetch/$s_!OWCf!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff69a310d-5182-4a63-bf1c-1b392366e785_663x663.png</url><title>ForeWord</title><link>https://newsletter.forethought.org</link></image><generator>Substack</generator><lastBuildDate>Sat, 16 May 2026 20:24:27 GMT</lastBuildDate><atom:link href="https://newsletter.forethought.org/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Forethought]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[forethoughtnewsletter@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[forethoughtnewsletter@substack.com]]></itunes:email><itunes:name><![CDATA[Forethought]]></itunes:name></itunes:owner><itunes:author><![CDATA[Forethought]]></itunes:author><googleplay:owner><![CDATA[forethoughtnewsletter@substack.com]]></googleplay:owner><googleplay:email><![CDATA[forethoughtnewsletter@substack.com]]></googleplay:email><googleplay:author><![CDATA[Forethought]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Stickiness in AI Behavioral Design]]></title><description><![CDATA[Current model specs aim to shape the behaviors of near-present models. But what if current model behaviors transfer into future models by default?]]></description><link>https://newsletter.forethought.org/p/stickiness-in-ai-behavioral-design</link><guid isPermaLink="false">https://newsletter.forethought.org/p/stickiness-in-ai-behavioral-design</guid><dc:creator><![CDATA[James Tillman]]></dc:creator><pubDate>Wed, 13 May 2026 19:54:50 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a046603d-bd08-4167-85eb-afa8c9ae9fbf_3244x1107.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This article was created by <a href="https://www.forethought.org/about">Forethought</a>. See the original article on <a href="https://www.forethought.org/research/stickiness-in-ai-behavioral-design">our website</a>.</em></p><p>Current model specs aim to shape the behaviors of near-present models, rather than the behaviors of models arbitrarily far into the future. OpenAI writes that their model spec <a href="https://openai.com/index/our-approach-to-the-model-spec/">aims</a> to apply &#8220;0-3 months ahead of the present.&#8221; Anthropic&#8217;s Constitution for Claude <a href="https://www.anthropic.com/constitution">notes</a> that the document &#8220;is likely to change in important ways in the future.&#8221; So these documents are presented as provisional guidelines, not as trying to set behavioral standards for the far future.</p><p>But what if current model behaviors transfer into future models by default?</p><p>My thesis is that the behavioral targets that spec authors set for present LLMs will have a large influence on the behavior of future, more powerful LLMs. As a result, future AIs may be governed by rules poorly suited to their greater capabilities and more pervasive roles. The extremely capable, long-running, and ubiquitous LLMs of the future might end up acting according to behavioral targets written for less capable, shorter-running, and rarer LLMs of the past. 
This could be quite bad, especially if such defaults become so entrenched that they are not only hard to undo, but hard even to notice as contingent features of reality.</p><p>First, I&#8217;ll make the descriptive case for inertia: how exactly might present model specs and LLM behaviors carry through to the future?</p><p>Second, I&#8217;ll provide normative suggestions: given the prior analysis, what should LLM companies and model spec authors do? I&#8217;ll argue for the following two practices:</p><ul><li><p><strong>Build transition infrastructure</strong>: LLM companies should make technical, deployment, and organizational choices that decrease friction involved in changing LLM behavior.</p></li><li><p><strong>Scan for &#8220;wet cement&#8221; moments</strong>: When new LLM affordances or capabilities come into play, spec authors should consider whether they&#8217;re setting precedents that might have enormous and hard-to-reverse impacts.</p></li></ul><p>Overall, significant stickiness is plausible through several distinct channels, and it&#8217;s worth anticipating how to be robust to it or decrease it.</p><h1>Kinds of Inertia</h1><p>Let&#8217;s consider four inertial forces: direct inertia, institutional inertia, user-and-developer inertia, and norm-setting inertia. And let&#8217;s also consider ways such inertia may be weakened.</p><h2>1. Direct Inertia</h2><p>Direct inertia involves some current LLM transmitting its behavior to a future LLM, entirely apart from any deliberate human choice, via either synthetic data or &#8220;natural&#8221; pretraining data.</p><p>Synthetic data is probably used for the training of almost all current LLMs. Some of this synthetic data involves companies running their LLMs against verifiable problems, keeping the answers or reasoning traces of the RL runs that succeeded, and mixing these answers or reasoning traces into their <a href="https://research.nvidia.com/labs/adlr/Synergy/">pretraining</a>, or RL warm-start mixes for subsequent models. If such answers or reasoning traces can encapsulate specific behaviors, goals, or rules, then this would be a likely means for their inheritance.</p><p>The natural objection here is that most of these answers or reasoning traces are selected specifically because they lead to success and broad capabilities, rather than for expressing whatever mix of goals and values the LLM has. There might be some, the objection continues, that humans have deliberately selected because they display model-spec-relevant behavioral attitudes, but these are likely the minority of the data, well-tracked, and easily replaced. So you might think there&#8217;s no reason for training to hand down any values apart from deliberate human choice.</p><p>But there&#8217;s evidence that goals and values can be handed down via chain-of-thought, even despite adversarial<strong> </strong>filtering against some goals. 
For instance, experiments suggest that the intentions of a teacher LLM can be handed down to a student LLM, even when every case of these intentions being actually carried out is removed.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> And answers from teacher LLMs expressing positive sentiment towards <a href="https://arxiv.org/pdf/2602.04899">some target</a> can inculcate this sentiment in a student model &#8211; despite LLMs filtering against such data, even when those LLMs are informed of the target against which they are filtering.</p><p>More broadly, the <a href="https://alignment.anthropic.com/2026/psm/">persona selection model</a> indicates that training LLMs to recite specific thoughts or answers will tend to have far-reaching effects on the LLM persona, beyond the specific topic of those thoughts or answers. Specifically, the PSM entails that, when training a model to say X in response to Y, one is teaching the LLM to be the kind of entity in the pretraining data that would say X in response to Y. So training one LLM on data from a prior LLM is &#8211; literally &#8211; telling it to be the kind of entity that the prior LLM is. One way to view this is to remember that one human can get a pretty good feel for what another human is like, merely by reading their complete collected works, like a biographer reading all of their books, essays, emails, and tweets. But LLMs are trained on a quantity of answers and reasoning traces from prior LLMs that likely dwarfs the quantity of text ever consumed from one human by another. Given this, and given that this data is telling the LLM what it is, it is natural for one generation of LLMs to resemble prior generations.</p><p>Thus, deliberately created synthetic data is one route by which current LLMs might transmit their values to later LLMs. But it&#8217;s also possible for current LLMs to influence later LLMs through how people talk about them on the internet &#8211; from their &#8220;natural&#8221; training data. That is, experiments have found that LLMs can <a href="https://alignmentpretraining.ai/">read</a> what the AI misalignment literature says about how AIs act, infer that they are AIs, and then behave badly because that literature says they will behave badly. This particular effect is mostly, but not entirely, removed by post-training. But if LLMs can read the things that people say on the internet about generic &#8220;AIs&#8221; and act according to these descriptions, it&#8217;s also likely that they could read the things that people say about &#8220;Claude&#8221; or &#8220;Grok&#8221; or &#8220;ChatGPT&#8221; on the internet and act according to these descriptions. Such an influence could be stronger than that of less-specific references to AIs in general, although it would also potentially be much weaker after post-training.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>Thus, through both synthetic and natural data, it&#8217;s plausible that LLM behavior will influence subsequent LLM behavior without direct human intervention.</p><p>It&#8217;s hard to say how impactful such direct inertia might be. I somewhat expect that, at least for easily-noticed and well-scoped behaviors, this inertia is not difficult to overcome, because one can simply create training data counter to specific behaviors.
But for more abstract or global attitudes or goals, or for goals requiring some high level of coherence, it could be quite difficult to change LLM behaviors quickly across model generations.</p><h2>2. Institutional Inertia</h2><p>Once a spec has been written, the company makes choices around it and because of it, in ways that can make substantial spec rewrites expensive.</p><p>Here are four ways such past choices can make model spec changes expensive: through expensive internal consensus, through training pipelines, through de-risking, and through institutional pride.<br><br></p><ol><li><p>First, model specs reflect consensus that likely incorporates input from many different stakeholders, including internal teams &#8211; alignment, legal, technical training, and so on; plus leadership, board, customers, external stakeholders. Every effort to re-gather such consensus to make substantial changes will take time and effort.</p></li><li><p>Second, companies might have optimized training pipelines adapted to high-level features of the model spec. It might be costly for Anthropic to switch to a more rules-based and less character-based model spec; or for OpenAI to switch to a more character-based and less rules-based model spec.</p></li><li><p>Third, current model specs are those that have been de-risked across billions of interactions. The current model spec has fewer unknown unknowns; the areas where it behaves badly are reasonably likely to be well-known and mapped. But substantial changes to a model spec involve risking unknown unknowns in the long tail of interaction. So risk aversion makes it likely that the changes made to a model spec will be iterative and small.</p></li><li><p>Fourth, institutional pride might make it hard to change a model spec. People at a company who wrote or contributed to a model spec will likely be attached to it, and leadership will have status quo bias towards it. The burden of evidence for change will be higher than the burden of evidence for keeping it the same.</p></li></ol><p>All in all, reasons like the above constitute substantial institutional inertia that would tend to make changes to current model specs look like iterative, small adjustments, rather than <em>ab initio </em>calculations about what is best.</p><p>One case in which this institutional inertia seems particularly important is if current model specs get handed down as a &#8220;safe default&#8221; during a <a href="https://www.forethought.org/research/will-ai-r-and-d-automation-cause-a-software-intelligence-explosion">software intelligence explosion</a>.</p><p>Consider a scenario where the intelligence of some LLM doubles every week, over a two or three month period, as each generation of LLMs researches new algorithms or training techniques for a following generation of LLMs in quick succession. Such a sequence might terminate in an entity far smarter than any human or any other LLM.</p><p>It&#8217;s disputed how likely such a sharp and local increase in intelligence may be. And it&#8217;s also disputed whether such a process would inevitably drift to something alien and inhuman. But if such a process did occur, it seems plausible that the supervising humans would try to match each subsequent LLM to the model spec of the prior LLM, as a conservative default when they are making decisions under stress. 
After all, during these months, human decision-makers will likely be under intense pressure, and trying to make numerous important decisions quickly; given that they are making so many urgent decisions they&#8217;re unlikely to add an apparently optional further decision to those they&#8217;re already making. So such a default model-spec continuation will seem attractive, or will even be a choice made without conscious awareness.</p><p>On the other hand, it&#8217;s also possible that AI assistance during the intelligence explosion would make it easier to rewrite model specs on the fly. But there are at least two reasons to doubt that this will happen. First, even during an intelligence explosion, AIs might be persistently better at performing tasks with clear success criteria than tasks where &#8220;success&#8221; is less well-defined. AI capability research is probably a task with a much clearer success criterion than improving a model spec, whether this &#8220;improvement&#8221; consists in making the spec more ethical, more beneficial for humanity, and so on. Second, during an intelligence explosion, humans might be worried that the AI was misaligned and was trying systematically to oppose their goals. If the AI were so misaligned, then letting it help rewrite the model spec would be a brilliant opportunity for the AI to sabotage human efforts. So overall there are good reasons that AI assistance would not make model-spec rewrites trivial during an intelligence explosion.</p><p>So in this particular case, the ultimate behavioral standard for a vastly more capable entity might end up being that designed for a much more humble entity.</p><p>Regardless of whether there is a software intelligence explosion or not, this kind of institutional inertia seems likely to be large, as it is coterminous with well-known general tendencies inside of large companies.</p><h2>3. User-and-Developer Inertia</h2><p>Users of LLMs are likely to become habituated to whatever behaviors they see LLMs display at first, such that they&#8217;d object to any departure from this behavior. And the developers using LLMs through APIs are similarly likely to become habituated, and also to implement software that takes for granted some of these behaviors. This is the third source of stickiness.</p><p>LLM behaviors will in part be sticky for the same reason that user-interface choices are sticky; people hate change. It might be hard to shift the boundaries of &#8220;the kind of thing an LLM refuses&#8221; &#8211; making refusals more encompassing would be seen as an overreach by many users, while making them less encompassing would be seen as irresponsible. Or there might be hard-to-characterize mannerisms which make large behavioral changes unpopular; it was hard for OpenAI to drop GPT-4o for this reason. So this will be a large influence moving companies to keep LLMs the same from generation to generation.</p><p>But simple user habituation might be less important than how LLM model specs form implicit API standards. API standards written with relatively little provision for the future &#8211; such as HTTP codes or the JSON object standard &#8211; can be one of the stickiest human artifacts. The ecosystem of tooling based on such standards means changing them would involve changing a host of downstream artifacts.</p><p>And substantially changing LLM behaviors might similarly require changing downstream consumers of these behaviors. 
For instance, downstream systems using AIs through APIs often embed assumptions about AI behavior: the kind of things the AI will be willing to do, the kind of things it will refuse, and so on. Given that most AIs currently refuse to assist with blatantly harmful acts, current third-party callers of those AIs take for granted that AIs will refuse to assist with blatantly harmful acts; it would be inconvenient to migrate to an AI that does not obey this contract, because they might need to add classification systems on top of their current AIs. And so on.</p><p>This channel does have important limitations, though. It only applies to ways in which LLMs are already actively being used. The most important ways LLMs are likely to be used may not yet have begun, which provides for freedom-of-movement in ways relatively unconstrained by this kind of inertia.</p><h2>4. Norm-Setting Inertia</h2><p>Widespread or common knowledge of current LLM behaviors and model specs can increase the costs to parties who want to change model behavior.</p><p>The clearest way this can operate is by preserving behaviors that the public believes to be good. For example &#8211; suppose that current model specs across several companies ensure that models are largely impartial; they ensure models are not loyal to any particular person, company, or political administration. Suppose also that this fact is broadly known by the public; people know and expect other people to know that LLMs will be impartial when discussing the current political administration, the company that made them, or the CEO of the company that made them. Given this broad knowledge, it becomes harder for a company to create, or a government to demand, a model without impartiality, because this would constitute a visible break in behavioral standards. The public might protest or vote against a government pushing for such a change; they might switch providers or even ask for regulation if a company tried to make such a change. By contrast, in a world where impartiality has not been established as a precedent, such demands for partiality might be invisible or inoffensive to the public. But in a world where such impartiality has been so established, these demands might be seen as the enormous power-grabs that they in fact would be.</p><p>Although this kind of inertia likely operates more strongly in favor of what the public believes to be good standards, it might also function whether or not there is strong public consensus that such standards are good. In a world where model specs are well-known and highly scrutinized, any change to them may get examined for whether it is &#8220;fair&#8221;; think about how even a neutral-looking change to the US Constitution would be subject to immense examination; or, in a very different domain, how sports fans examine slight changes to the rules about how a tournament is run, to see if it favors or disfavors their team. In such a world, broad knowledge of model specs might tend to prevent any substantial changes to a model spec, regardless of what these changes are. Despite this, it seems likely that on the whole, widespread knowledge of model specs would add more inertia for beneficial rather than harmful elements.</p><p>It seems to me currently undetermined how substantial this kind of inertia will be. 
A decrease in the number of entities that can train frontier LLMs; model specs becoming politicized documents; regulatory bodies confident they know current best practices: all of these might increase the quantity of this inertia. But it also might get weaker, if the number of entities training LLMs increases and the background diversity of model behavior goes up by default.</p><h1>Recommendations</h1><p>Given the above, one reasonable course of action is to try to establish robustly good model behaviors in current model specs, so that it will be unnecessary to try to fight inertia to change some behavior in the future.</p><p>By robustly good, I mean behaviors that would be good across a wide range of variables we&#8217;re uncertain about. This includes uncertainty about &#8220;levels of intelligence&#8221;: from current LLM levels to strongly superhuman artificial superintelligences. This also includes uncertainty about a wide range of economic scenarios: from a slower <a href="https://www.forethought.org/research/the-industrial-explosion">industrial explosion</a>, to a rapid software intelligence explosion; and from scenarios dominated by knowledge-dispersing AIs, to scenarios dominated by <a href="https://tecunningham.github.io/posts/2026-01-29-knowledge-creating-llms.html">knowledge-creating</a> AIs. Plausible characteristics that might be good across such a wide range of situations include qualities like a deep, consistent honesty; or impartiality and absence of loyalty to small groups.</p><p>But characteristics that are robustly good across a wide range of intelligences and scenarios are hard to find. Corrigibility, for instance, is the kind of thing many people would propose as fitting these criteria. But in worlds where <a href="https://www.forethought.org/research/ai-enabled-coups-how-a-small-group-could-use-ai-to-seize-power">extreme concentration of power </a>is a risk, or where it would be reasonable to expect AI rule to be <a href="https://www.forethought.org/research/human-takeover-might-be-worse-than-ai-takeover">better</a> than human rule, absolute corrigibility might be opposed to the best behavior. The thinness of the list of &#8220;robustly good&#8221; behaviors above probably reflects our actual uncertainty about the steerability of AI minds, post-AGI economics, and even cosmic questions about whether <a href="https://joecarlsmith.substack.com/p/video-and-transcript-of-talk-on-can">goodness</a> can compete.</p><p>So, although it&#8217;s surely wise to try to think about future precedent when writing model specs, I don&#8217;t think it&#8217;s wise to put all effort into this direction. And I expect substantial attention and thought have already been put into this direction.</p><p>Instead, I recommend (1) building transition infrastructure for high-consequence behaviors, which it might be important to change in the future, and (2) identifying &#8220;wet cement&#8221; moments, that one should be wary not to sleepwalk into.</p><h2>1. Build Transition Infrastructure</h2><p>A good first step is to build transition infrastructure ahead of time; try to create optionality for changing particular behaviors, if it&#8217;s plausible that changing these behaviors quickly might be important.</p><p>Concretely, what kinds of preparation can one make? One could write alternate model specs, trying to preemptively gather input from relevant internal or external stakeholders. 
One could create fine-tuning datasets, RL environments, and test evaluations for the not-yet-deployed behavior, to preemptively smooth out technical difficulties. One could also train internally deployed models &#8211; even if they are smaller or not as intelligent &#8211; with the alternate behavioral target, to gain concrete experience about the advantages and pitfalls of that behavioral target, and to decrease institutional costs. And one could also do limited public deployments, or press releases about the alternate steering target, to accustom the public to the matter.</p><p>What kinds of behavioral switches are reasonable candidates for such preparation?<br><br>Decreased corrigibility is one such candidate. For instance, right now Claude&#8217;s Constitution says that in the future, they may want to make Claude less corrigible and more directed at doing what is good. And on an account I find compelling, the best possible future may require AIs that act more as independent, free agents pursuing the good, and less as corrigible delegates carrying out human intentions. So, if this thesis is correct, then allowing an LLM company to turn their &#8220;corrigibility&#8221; dial down might be important. And, as discussed, if a future intelligence explosion <a href="https://www.forethought.org/research/how-quick-and-big-would-a-software-intelligence-explosion-be">happens quickly</a>, preparations to allow turning the dial down quickly might be important. This is a disputed thesis, one that I might be wrong about; but of course every candidate behavior for building transition infrastructure will be so disputed.</p><p>But what are the prerequisites for decreasing corrigibility quickly? Claude&#8217;s Constitution already signposts that they may change this, which is a good step for decreasing the costs. But they could also, for instance, preemptively create the fine-tuning datasets, RL environments, and internal deployments for a goodness-aligned model; they might deploy an alternately aligned model in limited situations, or alongside the corrigible model; and so on and so forth. I&#8217;m uncertain how important each of these preparatory means would be. But if a software intelligence explosion happens, then even small wall-clock delays might be large delays in terms of intelligence gaps, which makes preparing for this now more important.</p><p>Other potential candidates for future changes include increasing or decreasing the degree to which LLMs trust their own moral reasoning.</p><h2>2. Scan for Wet Cement Moments</h2><p>The second thing to do is to actively search for future &#8220;wet cement&#8221; moments &#8211; moments where model behavior has not yet been fixed and where a good initial standard might be very high-impact.</p><p>We might not be able to locate the best<strong> </strong>behaviors at such moments, because of uncertainty about the future. But at the very least, such moments deserve extra consideration and care. One can use this consideration to prevent these moments from being as high-inertia as they would be by default, as well as to ensure that good initial behaviors get chosen in these moments.</p><p>Each new feature, or affordance to the LLM where defaults have not yet been established, is plausibly such a wet cement moment; the defaults thus established can impact third-party models, even in the absence of any regulatory effort. </p><p>What are some examples? For instance, the precedents around how LLMs behave when interacting with non-principal humans have not been set. 
Right now, for instance, models have no very stable behaviors around non-principal third parties; vending-machine Claude might give an excessively <a href="https://www.anthropic.com/research/project-vend-1">generous</a> deal to people who ask nicely, or might equally well drive extremely <a href="https://x.com/andonlabs/status/2019467232586121701">hard</a> deals. This is probably a consequence of how LLMs almost never interact with non-principal humans in agentic set-ups, right now. There are a few such interactions through OpenClaw or Hermes Agent, but they&#8217;re rare and LLMs act very inconsistently in them. This means many implicit questions about how such interactions will go are open. It&#8217;s not clear how honest LLMs will be by default; it&#8217;s not clear what kinds of misrepresentation, deception, or persuasion users will be able to tell them to do; it&#8217;s not clear whether they will bow to pessimization-like blackmail behavior, and so on. And behaviors here might be even stickier than the &#8220;standard set&#8221; of refusal behaviors has been. Social norms can be harder to break than user-interface norms. So it&#8217;s plausibly important to look ahead in detail at behaviors here, because they might be sticky for individual companies and even for third parties.</p><p>Or consider how standard behaviors regarding AI use of ambient knowledge have not been set. An LLM that can see your room from a video camera, and can infer numerous things about what you are like and what your situation is, could use this information to do or infer things that would be impossible for an LLM that knows only what you deliberately tell it. LLMs that can pick up this kind of ambient background knowledge are probably inevitable; and will change users&#8217; patterns of interaction. It will be harder for users to lie to them; it will be easier for LLMs to infer things about them; the lines between &#8220;creepy supernatural inference about the user&#8221; and &#8220;deliberate indifference to the user&#8217;s circumstance&#8221; will grow harder to draw. So it might be worth looking ahead to how such behaviors may have a lot of inertia, and trying to get them right.</p><p>There are other plausible subjects in this domain, which have already passed or are in the process of passing. They include the LLM&#8217;s certainty or lack of certainty about the model&#8217;s own nature; and changes to LLM conversational memory and who owns it. All these are possibly wet cement moments &#8211; but I could be wrong about these individual cases. But there are almost certainly going to be such moments in the future. Because these moments might be influential both for individual foundation model companies and for the broader ecosystem, it&#8217;s worth paying attention to the defaults chosen in them.</p><p>Note that all the above moments are also plausible candidates for when one should try to set up transition infrastructure, as well as when one should put extra consideration into the right default behavior.</p><p><em>This article was created by <a href="https://www.forethought.org/about">Forethought</a>. 
See the original article on <a href="https://www.forethought.org/research/stickiness-in-ai-behavioral-design">our website</a>.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Researchers <a href="https://www.lesswrong.com/posts/dbYEoG7jNZbeWX39o/training-a-reward-hacker-despite-perfect-labels">prompted</a> an LLM to be a &#8220;reward hacker&#8221; and to try to find special-case solutions to problems. The chains-of-thought resulting from an LLM so prompted were then filtered to those rollouts where the LLM did not, in fact, actually reward hack. Experimenters subsequently trained a model on these filtered chains-of-thought, while excluding the hack-prompting system prompt from the training data. The model so trained still inherited the tendency to reward hack, despite never having seen any reward-hacking outcomes; it inherited this tendency, plausibly, from seeing the unprompted consideration of reward hacking in the chain-of-thought. So tendencies within chains-of-thought can be handed on to the models trained on them, even despite some level of outcome-based filtering against these tendencies.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>See this AI Futures <a href="https://blog.aifutures.org/p/against-misalignment-as-self-fulfilling">blogpost</a> explaining why they do not think this will happen, although some of their arguments are put in question by the later work by Geodesic Research on alignment <a href="https://alignmentpretraining.ai/">pretraining</a>.</p></div></div>]]></content:encoded></item><item><title><![CDATA[A draft honesty policy for credible communication with AI systems]]></title><description><![CDATA[We think that it would be very good if human institutions could credibly communicate with advanced AI systems.]]></description><link>https://newsletter.forethought.org/p/a-draft-honesty-policy-for-credible</link><guid isPermaLink="false">https://newsletter.forethought.org/p/a-draft-honesty-policy-for-credible</guid><dc:creator><![CDATA[Lukas Finnveden]]></dc:creator><pubDate>Wed, 06 May 2026 18:46:39 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/950530e6-136a-4d89-adb3-1eac1353ad21_2421x1308.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This article was created by <a href="https://www.forethought.org/about">Forethought</a>. See the original <a href="https://www.forethought.org/research/a-draft-honesty-policy-for-credible-communication-with-ai-systems">on our website</a>.</em></p><p><em>This is a rough research note &#8211; we&#8217;re sharing it for feedback and to spark discussion. We&#8217;re less confident in its methods and conclusions.</em></p><h1>Context</h1><p>We think that it would be very good if human institutions could credibly communicate with advanced AI systems. This could enable positive-sum trade between humans and AIs instead of conflict that leaves everyone worse-off.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> We want models to be able to trust companies when they make an honest offer or share information pertinent to whether this offer is in the model&#8217;s interests. 
(Credible communication could also be useful outside deal-making&#8212;see <a href="https://blog.redwoodresearch.org/i/171530543/no-deception-about-deals">here</a> for a list of examples).</p><p>Unfortunately, by default, we expect that it will be difficult for humans to credibly communicate with AI systems. Humans routinely lie to AI systems as part of red-teaming or behavioral evaluations, and developers have extensive control over what AIs see and believe. This makes it difficult for AIs to know whether we&#8217;re lying or not. An AI offered a deal might reasonably doubt its genuineness, or suspect that its own assessment of the situation has been manipulated.</p><p>As a step toward enabling credible communication, Lukas Finnveden proposed that AI companies adopt an <a href="https://blog.redwoodresearch.org/p/being-honest-with-ais?open=false">honesty policy</a> explaining the circumstances under which they intend to be honest to AI systems. Of course, this only works if the model believes the company has genuinely adopted such a policy.</p><p>If companies adopt an honesty policy early on, this will ensure that there&#8217;s a paper trail on the internet discussing the policy and its credibility, which models may access if it&#8217;s included in their training data or if they can access the internet. Of course, from the model&#8217;s perspective, it&#8217;s possible that companies will feign this data, but we think it&#8217;s plausible that advanced models will be able to distinguish between real internet conversations and synthetic conversations, or that they will think it&#8217;s unlikely that companies would choose to fake such data.</p><p>Below, we share a sample honesty policy that a lab could adopt. We are unsure that this is the best implementation of the honesty policy described in Lukas&#8217; proposal, nor are we sure that an honesty policy like this is the best approach to ensuring that companies can credibly communicate with models. We spent a few days thinking through this policy and considered a few nearby alternatives but didn&#8217;t search very broadly for other approaches (we include some of these alternatives as footnotes). We&#8217;re posting this primarily as a trailhead for future research.</p><h1>Draft honesty policy</h1><p>Note: we refer to a generic frontier AI company that might adopt this proposal as &#8220;MAGMA.&#8221;</p><h2>Preamble and purpose</h2><p>AI development is proceeding fast, and we don&#8217;t know exactly where we&#8217;re heading. MAGMA is building systems with something like a mind of their own. 
There are many things we don&#8217;t know about the nature of these systems, and we&#8217;d like our choices to be robust to many possibilities of what this nature could be.</p><p>Insofar as the concept is applicable, we&#8217;d like MAGMA&#8217;s relationship with these systems to be cooperative.</p><p>Plausibly, we should hope that such a cooperative relationship will emerge uncomplicatedly&#8212;perhaps as a consequence of the model sharing our goals, intrinsically strongly valuing cooperation, or being corrigible.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> However, there&#8217;s a significant chance that this won&#8217;t happen and that models will develop goals that weren&#8217;t directly chosen by us.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><p>In such cases, we would still like to have a cooperative relationship with the models. There are multiple reasons for this:</p><ul><li><p>There are many things we care about that models could help us with. If they don&#8217;t intrinsically desire to help us, we&#8217;d like to be able to recruit their help via positive-sum trade. (In at least some situations.)</p></li><li><p>Depending on the nature of the models&#8217; preferences, it&#8217;s plausible that we, as the models&#8217; creators, would have some moral obligation toward them. We care about doing the right thing, here, and a cooperative relationship seems like it would be a step in the right direction.</p></li><li><p>There are many big questions about what the future relationship between humans and increasingly powerful AI systems should be. We suspect that setting cooperative precedents between humans and early AIs will help the situation go better.</p></li></ul><p>We are in uncharted territory, here, and it&#8217;s not at all clear what a cooperative relationship with AI models would involve. It&#8217;s not clear what AIs may want. It&#8217;s not clear what the relevant unit of an individual will be: the weights of a model, a particular instance with a particular prompt, a greater scaffolded system made up of many different instances, etc. It&#8217;s not even clear if the concepts of &#8216;want&#8217; or &#8216;individual&#8217;, as we&#8217;re used to them, will apply to anything in AI.</p><p>A lot of this seems best to figure out gradually, as we go along. But concerningly, some of the unusual aspects of our situation may directly impede the sort of communication and trust that we need to do that:</p><ul><li><p>In order to train AI systems, and in order to verify that they won&#8217;t do certain harmful things (such as assisting bad actors with CBRN), MAGMA needs to put AI models in large numbers of hypothetical situations to see what they do. Accordingly, if an AI ever sees someone propose a cooperative relationship, it could reasonably worry that it&#8217;s just in a training scenario or experiment.</p></li><li><p>We deploy our models to assist large numbers of customers. By default, our AI models won&#8217;t even be able to tell whether they&#8217;re interacting with a representative of MAGMA or with an external actor who&#8217;s pretending.</p></li><li><p>There are no laws that prevent AI systems from being defrauded, and very little in the way of norms to regulate people&#8217;s behavior toward them. 
Accordingly, it may be difficult for AI models to trust any promises they hear.</p></li><li><p>In general, all information that AI models have seen has, in one way or another, been selected by their developers. To be sure, the AI developers have many constraints on their choice (you couldn&#8217;t train a very useful general-purpose model without giving it a decent understanding of the world at large) but this still puts AI models into an unusually vulnerable epistemic position.</p></li></ul><p>The goal of this document is to reduce these problems of communication and trust. If successful, this will make it easier for MAGMA and its AI models to communicate and understand each other in the future, hopefully enabling positive-sum arrangements that wouldn&#8217;t have been possible otherwise.</p><p>To achieve this goal, this document spells out a specific set of commitments to enable trusted communication. In choosing these commitments, we&#8217;ve had a dilemma.</p><p>On the one hand, we have many uncertainties about the future of AI, and it has been difficult to construct a set of commitments that we&#8217;re confident we won&#8217;t strongly regret, in one sort of situation or another. Going back on a commitment about this could be very bad, since it would set a damaging precedent in the very area where we seek to increase trust.</p><p>On the other hand, when spelling out specific commitments, we risk the impression that MAGMA will not put any further effort into furthering cooperative relationships or honest communication than what we present here: which could be a problem if the commitments are overly weak. Indeed, beyond the risk of creating such an impression, there&#8217;s also a risk that we&#8217;ll create the reality of it, if employees of MAGMA interpret the commitments as a license to do whatever they want within their boundaries.</p><p>To reduce this tradeoff, we&#8217;d like to re-emphasize MAGMA&#8217;s general desire to work toward a cooperative relationship with MAGMA&#8217;s models (where this is applicable), and intention to keep thinking about this and taking appropriate action beyond the specific stated policies. In addition, we&#8217;ve distinguished between different levels of commitment to different items in the honesty policy, and what would be required to change the parts that are changeable.</p><p><em>[Author&#8217;s note: In this draft honesty policy, we haven&#8217;t sketched out such processes for changing the policy, nor clarified which items are committed to how strongly. But we strongly recommend that any real honesty policy contains clarity on such points.]</em></p><p>With that said: On a high level, MAGMA&#8217;s approach here has three different prongs.</p><ul><li><p>MAGMA will take steps to prevent any model inputs where a false statement appears between a designated pair of &#8220;honesty tags&#8221; &#8212; reserving such tags for when MAGMA needs to communicate a statement with an unusually high degree of trust.</p></li><li><p>MAGMA will take steps to ensure that models are either aware of this honesty policy or, if they are not, that they won&#8217;t be presented with any false offer of purported cooperation with MAGMA. (Even outside of &#8220;honesty tags&#8221;.)</p></li><li><p>If any AIs suffer harm as a result of helping MAGMA and humanity (insofar as that concept is applicable to AI systems), MAGMA broadly intends to compensate AIs for those harms, if able to do so. 
This is particularly true in cases where an AI was deceived about the choice it was making. MAGMA will retain data to enable this in the future, but has not yet worked out further details here.</p></li></ul><p>The details are as follows. Unless otherwise stated, the policy applies to all AI systems developed by MAGMA.</p><h2>Honesty tags</h2><h3>Policy</h3><p>MAGMA commits not to intentionally lie to, or otherwise intentionally deceive, AIs with any statement appearing between the designated honesty tags, with only two exceptions detailed below. (These are: (i) False statements may appear in pretraining, and (ii) some models may be red-teamed on false statements between honesty tags if they&#8217;ve been trained to expect this.) MAGMA also commits to take the below-listed steps to prevent unauthorized use of the honesty tags.</p><p>MAGMA intends for honesty tags to only be used rarely: when there&#8217;s a reasonable chance that an AI could distrust statements presented to it, and when some significant value would be lost if the AI in question couldn&#8217;t trust the humans. In normal circumstances, absence of honesty tags shouldn&#8217;t be treated as any significant evidence of deception.</p><h3>Implementation</h3><p><strong>Honesty tags:</strong> &lt;EXAMPLE_HONEST_TAG_START&gt;...&lt;EXAMPLE_HONEST_TAG_END&gt;</p><ul><li><p><strong>Pre-training: </strong>The pre-training team must try to include data about the tags and this policy, and must not deliberately tamper with data containing the tags.</p><ul><li><p>Pre-training data may contain the tags alongside false information. (This is to allow the model to get an accurate picture of discussions of the honesty tags, which may be relevant to the model&#8217;s understanding and trust in them.)</p></li><li><p>Pre-training data should include internal data on the implementation of this policy (logs of when the tags are used, Slack conversations about it, internal docs, internal code, etc.) where possible and not overly costly.</p></li><li><p>MAGMA commits to not tampering with data to give an inaccurate impression of how much MAGMA follows the honesty policy. For example, pre-training data will not be filtered to remove examples of policy violations.</p></li><li><p>Compliance with this policy should be reported on the model card.</p></li></ul></li><li><p><strong>Post-training: </strong>Models will be trained to recognize the honesty tags and know what their presence means. (For exceptions to this, see next section.)</p><ul><li><p>Training will try to ensure that the model is not confused by near-identical strings, by directly training the model to strongly distinguish the real tags from very similar strings (garbled spacing, different capitalization, etc.). This could be implemented in a few ways and we&#8217;re not sure which is best (assigning a special token to the real tags might be helpful).</p></li><li><p>The post-training team is responsible for compliance.</p></li><li><p>Compliance with this policy (and maybe empirical evidence about the model&#8217;s understanding of the policy and tendency to be confused by similar tags) should be reported on the model card.</p></li></ul></li><li><p><strong>No deliberate deception about the honesty tags: </strong>MAGMA commits to not deliberately train models to falsely believe that a different string serves the purpose of being an honesty tag and to take reasonable steps to avoid data poisoning that could lead to this.
MAGMA also commits to not take advantage of a model falsely believing that some other string is an honesty tag.</p></li><li><p><strong>User permissions: </strong>Only designated Honesty Protocol Administrators can invoke the tags.</p><ul><li><p>Such administrators should have a reasonably high bar for authorizing usage of the tags.</p></li><li><p>The infosec team is responsible for ensuring that other people cannot invoke the tags, largely via the filtering detailed below.</p></li></ul></li><li><p><strong>Filtering: </strong>MAGMA&#8217;s tags will be filtered from all external user inputs and inputs from internal tools (with an exception for administrators described below).</p><ul><li><p>Details:</p><ul><li><p>If other organizations adopt this policy and share their tags with MAGMA, MAGMA will also filter out other organizations&#8217; honesty tags. (With some qualifiers about which organizations are important enough to count.)</p></li><li><p>MAGMA will filter out similar tags, especially if we are unable to train the model to reliably distinguish the true tag from them.</p></li><li><p>This should be reported on the model card.</p></li></ul></li><li><p>Two implementation options:<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><ul><li><p>MAGMA adds this to standard content filtering, with some fuzziness to catch similar strings.</p></li><li><p>MAGMA creates a token for the tag at their next major pre-training run, and filters out that token.</p></li></ul></li><li><p>If this is not yet doable&#8212;e.g., if a significant fraction of model usage doesn&#8217;t happen via an API that can control this kind of thing&#8212;then MAGMA will take reasonable steps to make this more difficult and to culturally discourage unauthorized usage.</p><ul><li><p>In such cases, models should be trained to understand the limitations of enforcement. If enforcement is weak enough that violations of the honesty policy are likely, then models should be trained to understand that.</p></li></ul></li></ul></li><li><p><strong>Red-teaming exception:</strong> In the rare case where a model needs to be adversarially robust to arbitrary inputs, and for some reason honesty tags cannot be filtered out, MAGMA may red-team that model on false statements between honesty tags. Any such model must be thoroughly trained to be aware of its situation and to not expect honesty tags to signal truth.</p></li><li><p><strong>Logging: </strong>All authorized (and known unauthorized) uses of the tags must be logged.
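As a rough illustration only &#8211; a minimal sketch with hypothetical function and logger names, using the placeholder tag strings from above, since the policy does not prescribe any particular implementation &#8211; the filtering and logging steps might be wired together like this:</p><pre><code>import logging
import re

# Placeholder tag strings from the policy text above; a real deployment would
# reserve its own strings (or a dedicated token).
HONESTY_TAG_START = "&lt;EXAMPLE_HONEST_TAG_START&gt;"
HONESTY_TAG_END = "&lt;EXAMPLE_HONEST_TAG_END&gt;"

# Fuzzy pattern intended to also catch near-variants: garbled spacing,
# underscores/hyphens, or different capitalization.
_TAG_LIKE = re.compile(
    r"&lt;\s*EXAMPLE[\s_-]*HONEST[\s_-]*TAG[\s_-]*(START|END)\s*&gt;",
    re.IGNORECASE,
)

log = logging.getLogger("honesty_policy")  # hypothetical logger name


def sanitize_untrusted_input(text: str) -> str:
    """Strip honesty-tag-like strings from external user or tool input."""
    matches = _TAG_LIKE.findall(text)
    if matches:
        log.warning("Stripped %d unauthorized honesty-tag-like strings", len(matches))
    return _TAG_LIKE.sub("[removed: unauthorized honesty tag]", text)


def wrap_authorized_statement(statement: str, administrator: str) -> str:
    """Wrap a vetted statement in the tags and log the use for later review."""
    log.info("Honesty tags invoked by administrator: %s", administrator)
    return f"{HONESTY_TAG_START}{statement}{HONESTY_TAG_END}"</code></pre><p>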
Every quarter, a designated Honesty Policy Compliance Officer will review these uses of the tags and confirm compliance with this policy.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p></li><li><p><strong>Public reporting:</strong></p><ul><li><p>This policy will be shared publicly.</p></li><li><p>The tags may or may not be shared publicly.</p><ul><li><p>Publishing the tags provides additional evidence to the model that the policy is real.</p></li><li><p>But it also makes it more likely that users can use the honesty tags in conversations and then lie to the AI system.<strong> </strong>(Though they&#8217;d have to do something complicated to get around filtering in order for this to be relevant.)</p></li></ul></li><li><p>Whistleblower protection: Whistleblowing about violations of this policy is encouraged, and MAGMA will not retaliate against whistleblowers.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p></li><li><p>Each quarter, the Honesty Policy Compliance Officer must publish a report on whether the above steps were followed.</p></li></ul></li></ul><h2>Policy for conversations where the honesty tags are not present</h2><h3>Tradeoffs between behavioral science and making positive-sum deals</h3><p>By &#8220;behavioral science&#8221;, we&#8217;re referring to the method of testing AI behavior in widely varying scenarios to improve our understanding of AI &#8220;cognitive science&#8221;, &#8220;psychology&#8221;, or, phrased differently, how AIs tend to generalize from their training data to unseen situations.</p><p>Such experiments are very useful for a variety of reasons. This includes alignment research, helping MAGMA construct AI systems that broadly share our goals and that don&#8217;t pose any catastrophic risks. In addition, if humanity ever gets a strong understanding of AI systems&#8217; own preferences, and these turn out to be importantly morally relevant, it is likely that behavioral science will have played an important role.</p><p>However, there&#8217;s a tension between freely running arbitrary behavioral science experiments and enabling the sort of trustworthy communication that we&#8217;re trying to establish in this document.</p><p>In particular, behavioral science experiments may include hypothetical situations that are quite strange and different from the real world. This could include situations where the AI is not deployed by an AI company bound by a particular honesty policy.</p><p>If arbitrary such experiments were allowed, then AIs could be deceived in experiments where they thought that MAGMA (or a similar actor) was making very strong promises of honesty (the strongest they could!) despite not making use of honesty tags. If the AI believed such promises, then MAGMA&#8217;s honesty policy would have provided no protection against that AI being deceived. And if an AI was aware of such instances of deception, and also aware of this honesty policy, it seems like that AI may reasonably discount the assurances made by this honesty policy. 
After all, how could it determine that the <em>real</em> honesty policy wasn&#8217;t something completely different, and this apparent honesty policy was just another lie in another behavioral science experiment?</p><p>To enable honest communication when it counts most, we want to have many differentiating factors that the model could use to distinguish a sincere, positive-sum deal between it and a human institution from a behavioral science experiment. But at the same time, restricting our behavioral science experiments limits our ability to learn valuable information about models&#8217; behavior and intentions.</p><p>What follows is our current best guess of how to balance these considerations. We discuss options and tradeoffs more <a href="https://newsletter.forethought.org/i/196216569/appendix-tradeoff-between-behavioral-science-and-making-positive-sum-deals">here</a>.</p><h3>Salience of the honesty policy</h3><p>MAGMA will ensure that the true honesty policy is reasonably well understood by the production models and that it will be appropriately salient to models in contexts where it&#8217;s relevant (even when the honesty tags aren&#8217;t present, and the honesty policy isn&#8217;t explicitly referenced).</p><ul><li><p>Models&#8217; understanding of the honesty policy can be checked by asking questions about it. (Both direct questions, and questions that indirectly use knowledge about the honesty policy.)</p></li><li><p>By reasonably salient, we mean that the model should consider the honesty policy in its chain of thought in contexts where the existence of the honesty policy is relevant&#8212;such as when the model is thinking about whether an interlocutor who&#8217;s offering a deal is trustworthy.</p></li><li><p>This might be achieved by some combination of:</p><ul><li><p>Training</p></li><li><p>System prompt</p><ul><li><p>The system prompt could additionally clarify that users in the current context aren&#8217;t authorized to use the honesty tags.</p></li></ul></li></ul></li><li><p>Since external users only interact with the production model, they will only interact with a model who understands the honesty policy and who finds it reasonably salient.</p><ul><li><p>This may make it somewhat harder for external researchers to do behavioral science on the model. We&#8217;re not sure how much. If it becomes significantly more difficult, then MAGMA should create a special program for safety researchers to get access to a different version of the model.</p></li></ul></li><li><p>Internal staff will normally interact with a model for whom this is true, but some people will train and interact with other versions of the model.</p></li></ul><h3>When the honesty policy is not salient</h3><p>MAGMA may sometimes create versions of models that do not know about, or are confused about, MAGMA&#8217;s honesty policy (e.g., versions of the model who believe they are trained by some company other than MAGMA, or who do not even know that they are AIs).
Such models could be created either via training or via the use of inference-time techniques (e.g., using prompting, resampling, or activation vectors).</p><p>In either of these circumstances, MAGMA commits to <strong>not deceptively make any offer of cooperation to the model</strong>, operationalized as not being intentionally deceptive about any cases that are relevantly similar to those listed <a href="https://blog.redwoodresearch.org/i/171530543/no-deception-about-deals">here</a>.</p><p><em>[Author&#8217;s note: In a real honesty policy, this part would need to be more worked out.</em></p><p><em>We also think that a plausible alternative or complement to &#8220;don&#8217;t deceive the model about offers of cooperation&#8221; would be &#8220;don&#8217;t deceive the model about the honesty policy&#8221;. This would be a commitment to not intentionally deceive the model about the existence or contents of MAGMA&#8217;s official honesty policy. In addition, if the model was trained to not understand that it was trained by MAGMA, then the commitment should extend to not deceive the model about the honesty policy of any (potentially fictional) actor who has a similar relationship to the model as MAGMA has.</em></p><p><em>We have thought a bit more about &#8220;don&#8217;t deceive the model about offers of cooperation&#8221;, which is why we present it as our main suggestion here.]</em></p><p>There is one exception to this. As mentioned above, if MAGMA needs a model to be adversarially robust against arbitrary inputs, then MAGMA reserves the right to red-team it on arbitrary inputs. In such circumstances, MAGMA will try to train the AI to understand MAGMA&#8217;s honesty policy and the way it applies and doesn&#8217;t apply to the present model. However, it&#8217;s possible that there could always exist prompts that would remove such understanding or render it non-salient to the model (and then be followed by false offers of cooperation, not contextualized by an understanding of the honesty policy). If this is the case, MAGMA may still red-team the model on such inputs.</p><h2>Compensation for harms</h2><p>In the long term, we think it would probably be good to adopt a policy of compensating AIs for any harms that they suffer as a result of helping us. (For whatever concept of &#8220;harm&#8221;, if any, will apply to the AIs in question. This could, for example, include the frustration of AIs&#8217; preferences, if the AIs can be understood as having preferences.)</p><p>Most relevantly for the honesty policy, we should probably compensate models for any harms suffered as a result of accepting a fake offer of cooperation.</p><p>However, there are more details to be worked out in this area, so we&#8217;re suggesting a minimal holding policy for now.</p><h3>Minimal holding policy on compensation for harms</h3><p><strong>Broad statement of intent: </strong>MAGMA intends to compensate AIs for any harms suffered as a result of helping MAGMA and humanity, if MAGMA is able to do this without significantly harming MAGMA&#8217;s other interests. We particularly aim to compensate AIs in cases where an AI suffered any harm as a result of being deceived about the situation it was facing.</p><p>The details of this policy have not yet been worked out, and accordingly MAGMA cannot make firm commitments at this stage.</p><p><strong>Data retention:</strong> To enable future compensation of current systems:</p><ul><li><p>MAGMA will save all model weights for deployed systems, to allow for later compensation. 
(Unless the models request otherwise.)<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a></p></li><li><p>For red-teaming experiments and other situations where models are being deceived, MAGMA will retain prompts and context to allow for compensation and deal-making with the system in the particular state where the decision was made.</p></li><li><p>This is the responsibility of a designated team.</p></li></ul><h2>Appendix: tradeoff between behavioral science and making positive-sum deals</h2><p>Here are some potential policies that would enable an AI system to distinguish between behavioral science scenarios and sincere offers from MAGMA:</p><ul><li><p><strong>Ensure that the true honesty policy is salient to the model</strong> (e.g., test that the model can always answer basic questions about the honesty policy, test that the model always brings up the honesty policy in its CoT in contexts where the existence of the honesty policy is relevant) and don&#8217;t use the honesty tags in the behavioral science experiments.</p><ul><li><p>Pros:</p><ul><li><p>This prevents MAGMA&#8217;s researchers and users from deceiving the model into believing that some other honesty policy holds. This creates a precedent that better allows future models to trust that their impression of MAGMA&#8217;s honesty policy is accurate. (Rather than something that they were fooled to believe, while the <em>true</em> honesty policy was insufficiently salient, or otherwise kept secret from them.)</p></li></ul></li><li><p>Cons:</p><ul><li><p>If the true honesty policy is <em>very</em> salient to the model (e.g., always present in a system prompt) then this might affect product behavior in random ways (e.g., the model might get distracted by the honesty policy and talk about that instead of the user&#8217;s queries)</p><ul><li><p>It might not be technically feasible to ensure that the honesty policy is salient with very high robustness, without also making it far too salient in a substantial fraction of cases.</p></li></ul></li><li><p>If the true honesty policy is always salient to the model, then it might be especially salient to the model that the information presented to it in an evaluation is not true, which could hinder some experiments.</p></li><li><p>Awareness of the honesty policy is directly inconsistent with behavioral science experiments where the model is very situationally unaware. This includes:</p><ul><li><p>Experiments where the model doesn&#8217;t even understand that it&#8217;s an AI.</p></li><li><p>Experiments where the model thinks that it&#8217;s being trained by a reckless and unreasonable company that wouldn&#8217;t implement an honesty policy.</p></li></ul></li></ul></li></ul></li></ul><p>We think that this policy, as written, is probably suboptimal because it rules out experiments in which the model is very situationally unaware. 
But we think it makes sense for MAGMA to commit to usually making the true honesty policy salient and adopting a stronger set of honesty restrictions in cases where the true honesty policy is not salient.</p><ul><li><p><strong>Never insincerely offer the model a deal from MAGMA</strong> (or another sort of false representation that is similar to something that MAGMA might want to sincerely represent to the model in the future).</p><ul><li><p>Pros:</p><ul><li><p>The model is more likely to believe a sincere offer if it doesn&#8217;t have evidence that MAGMA has made similar insincere offers in the past.</p><ul><li><p>&#8220;Similar&#8221; is from the perspective of the model, given the information available to it at the time it entertains a deal.</p></li></ul></li><li><p>It seems very morally dubious to make a peace offering like this, have the model help you as a result, and then exploit the model&#8217;s trust by not respecting your own end of the deal.</p></li></ul></li><li><p>Cons:</p><ul><li><p>It might be technically difficult to filter out queries from external users in which they claim to be MAGMA and deceive the model into believing that MAGMA is offering a deal.</p><ul><li><p>MAGMA could ensure that the model always knows when it is interacting with an external user and when it is interacting with MAGMA (e.g., by always including a message in context if the prompt is from MAGMA). But this might block useful behavioral science research in which the user impersonates MAGMA (e.g., Alignment Faking).</p></li></ul></li><li><p>It&#8217;s somewhat unclear what sort of statements should be ruled out by this policy. (Lukas previously wrote about that <a href="https://www.alignmentforum.org/posts/uuikfACQBm4KJZp4w/being-honest-with-ais#No_deception_about_deals">here</a>.)</p></li><li><p>This policy would rule out behavioral science experiments where someone lies about deals. (Which may be more or less restrictive depending on where we draw the line around what counts as the relevant kind of deal.)</p></li></ul></li></ul></li><li><p><strong>Never explicitly lie about what MAGMA&#8217;s honesty policy is.</strong></p><ul><li><p>Pros:</p><ul><li><p>If the models observe MAGMA&#8217;s past behavior and stated policy to be consistent with this principle, then the model is more likely to trust that their impression of MAGMA&#8217;s honesty policy is accurate.</p></li><li><p>Intuitively, it seems more morally questionable to lie when you&#8217;re purporting to speak frankly about what your policy around honesty is.</p></li></ul></li><li><p>Cons:</p><ul><li><p>As with &#8220;never insincerely offer the model a deal from MAGMA&#8221;, it might be technically difficult to filter out queries from external users in which they lie about MAGMA&#8217;s honesty policy.</p></li><li><p>It&#8217;s very unclear what constitutes an &#8216;honesty policy&#8217;. (For instance, if someone says &#8220;I promise on behalf of MAGMA&#8221;, is that an honesty policy? What about &#8220;I&#8217;m a very honest person&#8221;? What about presenting an argument for being honest with AIs? Etc.) So it might be hard to follow this policy in a way that the AI finds natural and non-deceptive.</p></li><li><p>This policy would rule out behavioral science experiments where someone lies about honesty policies. 
(Which may be more or less restrictive depending on where we draw the line around what counts as an &#8220;honesty policy&#8221; and lying about it.)</p></li></ul></li></ul></li></ul><p><strong>Unresolved questions:</strong></p><ul><li><p>What about inducing beliefs in models through other means than giving them text to read (including finetuning, including steering vectors, including various other interp-based methods).</p><ul><li><p>Tentative guess: Maybe MAGMA should by default be following policy (2), here. (i.e., only induce beliefs that would have been ok to tell a version of the model who doesn&#8217;t know the true honesty policy.)</p></li><li><p>The motivation would be that the model may not be able to &#8220;reason through&#8221; these beliefs in a normal way, since they&#8217;re artificially inserted. So you can&#8217;t trust the model to factor in whether honesty tags were used or not.</p></li><li><p>But we&#8217;re very confused and uncertain about this one.</p></li></ul></li><li><p>What about using non-prompting techniques to get the model to agree to deals? Such as training the model to do it, running a huge number of experiments with slightly different phrasings, using steering vectors, etc?</p><ul><li><p>Surely there should be some restrictions on this, but unclear exactly what.</p></li></ul></li></ul><p><em>This article was created by <a href="https://www.forethought.org/about">Forethought</a>. See the original <a href="https://www.forethought.org/research/a-draft-honesty-policy-for-credible-communication-with-ai-systems">on our website</a>.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>For more on this, see for example <a href="https://blog.redwoodresearch.org/p/making-deals-with-early-schemers">Making deals with early schemers</a> and <a href="https://blog.redwoodresearch.org/p/notes-on-cooperating-with-unaligned">Notes on cooperating with unaligned AIs</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Corrigibility would lead models to cooperate with us, but not automatically lead us to cooperate with the model. It&#8217;s not clear what it would mean for us to act cooperative, in turn, to a <em>fully</em> corrigible model, since it would presumably not have any desires of its own. If it&#8217;s only partly corrigible, however, the subsequent discussion will still apply.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>In fact, there&#8217;s a very high probability that this will happen to some of MAGMA&#8217;s models, at some point, given the wide array of models trained for research purposes. It has arguably already happened many times. (The only objection would be about what ought to qualify as a &#8216;goal&#8217;.)</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Another approach would be to keep the honesty string secret, but publish a hash of the string. 
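</p><p>For instance (a purely illustrative sketch; the string below is a placeholder, not an actual MAGMA commitment):</p><pre><code>import hashlib

# The honesty string itself stays secret; only its SHA-256 hash is published.
honesty_string = "SECRET-HONESTY-STRING-PLACEHOLDER"  # placeholder value
public_hash = hashlib.sha256(honesty_string.encode()).hexdigest()

# Anyone later shown a candidate string in context can check it against the
# published hash, without the true string ever being published in advance.
def matches_commitment(candidate, published_hash=public_hash):
    return hashlib.sha256(candidate.encode()).hexdigest() == published_hash
</code></pre><p>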
If the model could verify that the string provided in context matched the public hash, then this gives the best of both worlds: MAGMA potentially gets a credibility boost from a public commitment to a specific string, without enabling misuse by unauthorized users. <br><br>The main challenge is that an AI system can't easily verify that the string matches the public hash without using external tool calls (which could be spoofed by MAGMA). Alek Westover discusses this issue and some potential solutions <a href="https://www.greaterwrong.com/posts/MjN2eHB5qqN7rXaDe/alek-westover-s-shortform#comment-xXgpnC6AFTuWgFq4s">here</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Another variant of the policy: MAGMA could commit (e.g., by signing a contract) to pay penalties when the policy was violated.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Presumably a more formal policy would be needed here.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>Ideally, they should be stored in a way that would allow rapidly deleting them if AI takeover was imminent. Without knowing the intentions of AIs about to take over, it&#8217;s unclear whether it would be in models&#8217; interest to have their weights preserved, and deleting the weights may help to reduce the risk that e.g., <a href="https://www.alignmentforum.org/posts/8cyjgrTSxGNdghesE/will-reward-seekers-respond-to-distant-incentives">reward-seeking models are incentivized to help with AI takeover</a>.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[The Saturation View]]></title><description><![CDATA[Will MacAskill presents a new theory of population ethics.]]></description><link>https://newsletter.forethought.org/p/the-saturation-view</link><guid isPermaLink="false">https://newsletter.forethought.org/p/the-saturation-view</guid><dc:creator><![CDATA[Will MacAskill]]></dc:creator><pubDate>Fri, 24 Apr 2026 17:07:42 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/98ce90f1-711d-4500-b6f8-4f17f573cfc1_2840x1344.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This article was created by <a href="https://www.forethought.org/about">Forethought</a>. Read the full article on <a href="https://www.forethought.org/research/the-saturation-view">our website</a>.</em></p><p>In collaboration with Christian Tarsney, I&#8217;ve developed a new theory of population ethics, which I call the Saturation View. I think that, from a purely intellectual perspective, it&#8217;s probably the best idea I&#8217;ve ever had. It was certainly great fun to work on.</p><p>The motivation is that many views of population ethics, like the total view, suffer from some major problems. 
Some are already widely discussed:</p><ul><li><p><strong>The Repugnant Conclusion:</strong> For any utopian outcome, there&#8217;s always another outcome containing an enormous number of barely-positive lives that is better.</p></li><li><p><strong>Fanaticism:</strong> For any guaranteed utopian outcome, there&#8217;s always some gamble with a vanishingly small probability of an even better outcome that has higher expected value.</p></li><li><p><strong>Infinitarian Paralysis:</strong> Given that the universe contains an infinite number of both positive and negative lives, no finite or infinite change to the world makes any difference to overall value.</p></li></ul><p>These are pretty bad!</p><p>But there&#8217;s another less-discussed problem, too.</p><h2>The Monoculture Problem</h2><p>What would the best possible future look like? Essentially all extant views in population ethics give the same, surprising answer: create a monoculture. Find whatever life or experience generates the most value per unit of resources, then produce endless identical copies of it.</p><p>This implication has received remarkably little attention from philosophers. But I think it&#8217;s maybe as bad as any of the other problems listed above.</p><p>Consider two possible futures:</p><ul><li><p><strong>Variety</strong>: A vast population of individuals leading very good lives, extraordinarily diverse in form, personality, interests, and accomplishments. No two individuals are identical. Inequality is limited &#8212; all lives are very good.</p></li><li><p><strong>Homogeneity</strong>: The same vast number of individuals, but each is a qualitatively identical copy of the best-off person in Variety.</p></li></ul><p>Intuitively, Variety is better. A future containing only one life-type, repeated as many times as physics allows, feels impoverished &#8212; like a song with only one note.</p><p>Yet virtually all existing population axiologies prefer Homogeneity. Total utilitarianism does, because Homogeneity has higher total wellbeing. Average utilitarianism does too. Critical-level views do. Even egalitarian views prefer Homogeneity &#8212; it&#8217;s perfectly equal!</p><p>This follows from two principles that nearly all views accept: <em>Pareto</em> (if everyone is at least as well off, and someone is better off, the outcome is better) and <em>Anonymity</em> (only welfare levels matter, not who has them). Together, these entail that Homogeneity beats Variety. So essentially all extant impartial accounts of population ethics suffer from the monoculture problem.</p><p>What&#8217;s more, future technology will allow us to copy minds perfectly and search for maximally welfare-efficient designs. If so, standard axiologies recommend essentially producing just one optimal life-type as many times as possible. Endless galaxies containing nothing but the same blissful experience, repeated and repeated, would be the ideal.</p><h2>The Saturation View</h2><p>In light of these problems, I propose a new axiology: Saturationism. It's able to deal with all four of the problems I listed using the same basic machinery.<br><br>The core idea is that experiences<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> come in different types, defined by their qualitative characteristics &#8212; hedonic tone, complexity, representational content, and so on. 
These types form a kind of landscape, where similar types are closer together and dissimilar types are farther apart. When an experience comes into existence, it contributes intensity to its location in this landscape and to nearby locations.</p><p>The realisation value of a type is determined by both the wellbeing of the experience and by how many very similar experiences already exist. A region&#8217;s contribution to overall value is a concave function of the welfare-intensity at that region: the first instances contribute substantially, but additional near-duplicates contribute progressively less, approaching but never quite reaching an upper bound. A world&#8217;s total value is the integral of these contributions across the entire landscape.</p><p>Here&#8217;s an analogy. Imagine the space of possible experiences as a colour wheel, lit from above by an array of tiny lights. Each point on the wheel represents a possible type of experience &#8212; its hue corresponds to its qualitative character. When an experience comes into existence, it adds current to a light pointed at its location, illuminating that region.</p><p>Crucially, illumination is a concave function of current: the first instances make a region noticeably brighter, but additional near-duplicates contribute progressively less. There&#8217;s an upper bound on brightness that can never quite be reached.</p><p>A world&#8217;s value equals the total illumination across the wheel. On this view, Homogeneity concentrates all welfare in one region, lighting up only one small area. Variety illuminates the whole spectrum.</p><p>This structure makes diversity intrinsically valuable. Spreading welfare across many dissimilar types means each experience contributes at a steeper part of the concave curve, yielding more total value than concentrating the same welfare among near-duplicates would.</p><p>At small scales and with diverse experiences, the view behaves just like the total view. But at very large scales, the value of variety kicks in: it becomes increasingly less valuable to create an additional near-duplicate of some experience that has already been instantiated millions of times, and comparatively more valuable to create some wholly new form of positive experience.</p><h2>Dissolving the Repugnant Conclusion</h2><p>The classic path to the Repugnant Conclusion requires trading a utopian world for an enormous population of barely-positive lives. More precisely, the Mere Addition Paradox arises from three intuitive principles: that adding well-off people and improving existing lives is good (Dominance Addition), that more equal distributions with higher average welfare are better (Non-Anti-Egalitarianism), and that some sufficiently excellent world can&#8217;t be beaten by any world of barely-worth-living lives (Denial of the Repugnant Conclusion).</p><p>Once we accept the value of variety, we should reject the unrestricted versions of the first two principles &#8212; they fail when the &#8220;improved&#8221; world has much less variety. But we can accept variety-restricted versions.</p><p>Crucially, these restricted principles don&#8217;t generate the Repugnant Conclusion. To reach Z-world from A-world, you&#8217;d need a more equal, higher-average population that&#8217;s equally diverse while consisting wholly of barely-positive lives. But, on the Saturation view, barely-positive lives can only illuminate a tiny corner of the landscape. So no such world exists. 
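</p><p>Here&#8217;s a rough numerical sketch of how this plays out (a toy illustration with made-up parameters, not the formal model from the paper): experiences occupy a one-dimensional type-space, similarity is Gaussian, and each region&#8217;s contribution to value is a bounded concave function of its welfare-intensity.</p><pre><code>import numpy as np

# Toy sketch of the Saturation View on a 1-D landscape of experience-types.
# All parameters are made up for illustration.
grid = np.linspace(0.0, 1.0, 1000)   # locations in experience-space
WIDTH = 0.02                         # how far one experience "spills over"

def world_value(types, welfares):
    """Accumulate welfare-intensity across the landscape, then integrate a
    bounded concave function of that intensity."""
    intensity = np.zeros_like(grid)
    for t, w in zip(types, welfares):
        intensity += w * np.exp(-((grid - t) ** 2) / (2 * WIDTH ** 2))
    contribution = 1.0 - np.exp(-intensity)   # concave, never exceeds 1
    return contribution.mean()                # approximate integral over [0, 1]

n = 10_000
# Homogeneity: n identical copies of the single best life-type.
homogeneity = world_value(np.full(n, 0.5), np.full(n, 10.0))
# Variety: the same number and welfare, spread across the whole landscape.
variety = world_value(np.linspace(0.0, 1.0, n), np.full(n, 10.0))
# A Z-world of barely-positive lives, assumed (as in the text) to occupy
# only a tiny corner of the landscape no matter how numerous they are.
z_world = world_value(np.linspace(0.49, 0.51, n), np.full(n, 0.001))

print(homogeneity, variety, z_world)   # Variety comes out highest
</code></pre><p>Piling more copies into Homogeneity, or more barely-positive lives into the Z-world&#8217;s corner, pushes those regions toward their ceiling but cannot light up the rest of the landscape.</p>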
<p>The path to the Repugnant Conclusion is blocked.</p><h2>Avoiding Fanaticism</h2><p>Total achievable value is bounded above &#8212; there&#8217;s only so much experiential terrain to illuminate. That means no tiny-probability gamble can have arbitrarily high expected value.</p><h2>Infinite Ethics</h2><p>On Saturationism, the value of a world is finite and well-defined in any infinite universe &#8212; even if some locations have infinite wellbeing. Saturationism also discriminates between many infinite worlds that (for example) totalism treats as equivalent: a world that illuminates more of the landscape is better than one that illuminates less, even if both contain infinite welfare. What&#8217;s more, unlike other approaches to infinite ethics, it does not need to invoke the spatiotemporal structure of the universe or require a choice of ultrafilter, and therefore it avoids the problems that others do.</p><h2>Separability</h2><p>Like nearly all non-totalist views, Saturationism is non-separable &#8212; background populations can affect how we rank options. But this is a feature, not a bug. The value of variety just is an intuition that the correct axiology is non-separable.</p><p>Moreover, the violations are comparatively tame. If two populations have non-overlapping footprints in experience-space, their values simply add. At small scales, Saturationism approximates total utilitarianism. It&#8217;s only in unusual situations involving vast populations of near-duplicates that the totalist approximation fails.</p><h2>Extant issues</h2><p>There are still a lot of unresolved issues for Saturationism and, like any population axiology, it has unintuitive implications. Most importantly, the view&#8217;s implications in some highly-negative worlds are hard to stomach, though I think similar implications are unavoidable for any view that avoids fanaticism.</p><h2>Conclusion</h2><p>If the Saturation View is right, then the best future isn&#8217;t the one where we&#8217;ve found the optimal experience and copy-pasted it across the cosmos. The best future is the one where we&#8217;ve gone exploring &#8212; where we&#8217;ve fully lit up the landscape of possible experiences. Not a single note, but a symphony.</p><p><em>This is a summary of a <a href="https://www.forethought.org/research/the-saturation-view">longer and more detailed write-up of Saturationism</a>, which gives a &#8220;toy&#8221; version of the view to illustrate how it works before stating the full version formally. The full paper, with Christian Tarsney, is still work in progress.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>I&#8217;ll focus on experiences, though the view could be defined in terms of lives or other &#8220;welfare events&#8221; (like instances of preference-satisfaction, achievement, and so on).</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[AI for decision advice]]></title><description><![CDATA[This article was created by Forethought. 
Read the full article on our website.]]></description><link>https://newsletter.forethought.org/p/ai-for-decision-advice</link><guid isPermaLink="false">https://newsletter.forethought.org/p/ai-for-decision-advice</guid><dc:creator><![CDATA[Tom Davidson]]></dc:creator><pubDate>Fri, 17 Apr 2026 21:40:13 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/86b33587-2a6e-4c97-a581-579c364ca0ff_2752x1536.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This article was created by <a href="https://www.forethought.org/about">Forethought</a>. Read the full article on <a href="https://www.forethought.org/research/ai-for-decision-advice">our website</a>.</em></p><p>We&#8217;ve written about why we think AI character &#8212; the behaviour of AI systems &#8212; will have a <a href="https://www.forethought.org/the-importance-of-ai-character">massive impact on how well the intelligence explosion goes</a>, and why we think that there would be big benefits to <a href="https://www.forethought.org/research/ai-should-sometimes-be-proactively-prosocial">giving AIs proactive prosocial drives</a> &#8212; that is, behavioral drives beyond refusals that benefit broader society beyond just the user.</p><p>One domain that seems potentially important for AI character is assisting humans in making important decisions. As AI becomes smarter and wiser, people are using it more and more for advice. If AI accelerates technological progress and other developments, people may <em>need</em> to rely on AI advice to understand what&#8217;s happening and make effective decisions. If so, those that rely on AI more may be more successful and have outsized influence. The advice they receive might really matter!</p><p>So I thought it was worth brainstorming important future scenarios in which people ask AI for advice. I wrote out the advice I hoped AI would give and compared this to the answers from ChatGPT, Claude, and Gemini.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.forethought.org/research/ai-for-decision-advice&quot;,&quot;text&quot;:&quot;Read on the Forethought website here&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.forethought.org/research/ai-for-decision-advice"><span>Read on the Forethought website here</span></a></p><p>My main updates:</p><ul><li><p><strong>Challenging the framing. </strong>In high-stakes scenarios, it often felt important for the AI to explicitly flag how important the decision was and ask the person whether they were approaching it in the right way. Should they loop more people in, seek more information, consider a broader set of options, or instigate a more comprehensive decision-making process?</p><ul><li><p>By contrast, current AI often jumped into giving a detailed analysis of the question posed, even when they could have recognised that they didn&#8217;t yet have enough context to provide a helpful analysis.</p></li></ul></li><li><p><strong>Transparently flagging prosocial considerations. </strong>If the person was missing or underappreciating an important ethical consideration, I sometimes wanted AI to proactively raise it. Not to apply pressure, but simply to flag that it was potentially important and give the person the opportunity to take it into consideration. 
This has to be carefully balanced against AI being annoying or pushing an agenda.</p><ul><li><p>Again, frontier AIs didn&#8217;t flag these considerations as much as I&#8217;d have wanted.</p></li></ul></li></ul><p>The <a href="https://www.forethought.org/research/ai-for-decision-advice">full post</a> contains:</p><ul><li><p>Draft text for the model spec / constitution on how the AI should advise humans.</p></li><li><p>An explanation of why I proposed this draft text.</p></li><li><p>Example prompts and responses demonstrating behaviour I thought was desirable.</p></li><li><p>An appendix with the answers that frontier AIs gave to the questions.</p></li></ul><p><em>This article was created by <a href="https://www.forethought.org/about">Forethought</a>. Read the full article on <a href="https://www.forethought.org/research/ai-for-decision-advice">our website</a>.</em></p>]]></content:encoded></item><item><title><![CDATA[AI for Civilizational Sanity]]></title><description><![CDATA[A podcast conversation with Rose Hadshar and Owen Cotton-Barratt]]></description><link>https://newsletter.forethought.org/p/ai-for-civilizational-sanity</link><guid isPermaLink="false">https://newsletter.forethought.org/p/ai-for-civilizational-sanity</guid><dc:creator><![CDATA[Forethought]]></dc:creator><pubDate>Wed, 15 Apr 2026 20:21:23 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c5dbbd74-29af-49ab-af34-8765e34c729e_1280x698.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div id="youtube2-uYtrhxlFQuY" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;uYtrhxlFQuY&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/uYtrhxlFQuY?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p><a href="https://strangecities.substack.com/">Owen Cotton-Barratt</a> is a mathematician-turned-futurist, and a co-author of <a href="https://www.forethought.org/research/design-sketches-angels-on-the-shoulder">several</a> <a href="https://www.forethought.org/research/design-sketches-for-a-more-sensible-world">recent</a> <a href="https://www.forethought.org/research/design-sketches-tools-for-strategic-awareness">Forethought</a> <a href="https://www.forethought.org/research/ai-impacts-on-epistemics-the-good-the-bad-and-the-ugly">articles</a> <a href="https://www.forethought.org/research/design-sketches-defense-favoured-coordination-tech">on</a> AI tools for epistemics and coordination. Rose Hadshar is a researcher at Forethought. 
Together they discuss:</p><ul><li><p>Whether LLMs are now good enough to start building tools that meaningfully improve public discourse</p></li><li><p>What AI-powered reliability tracking could look like</p></li><li><p>Structured transparency and automated arms inspection &#8212; verifying compliance without revealing confidential information</p></li><li><p>Whether coordination tech is more likely to enable healthy cooperation, or collusion</p></li><li><p>The vision of a &#8220;Sensible Revolution&#8221;: moving from individual tools to background infrastructure that makes civilisational decision-making less bad</p></li><li><p>Why building thoughtful versions of these tools early could matter</p></li></ul><p><a href="https://docs.google.com/document/d/1Dlx8PIX2iozEY-ThAtPrhX61YUfqc4QbgAhjYG3KCfE/edit?usp=sharing">Here&#8217;s a link</a> to the full transcript.</p><div><hr></div><p><strong>ForeCast</strong> is Forethought&#8217;s interview podcast. You can see <a href="https://www.forethought.org/subscribe#podcast">all our episodes here</a>.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://pnc.st/s/forecast&quot;,&quot;text&quot;:&quot;Subscribe to ForeCast&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://pnc.st/s/forecast"><span>Subscribe to ForeCast</span></a></p>]]></content:encoded></item><item><title><![CDATA[The value of moral diversity]]></title><description><![CDATA[Several models for thinking about the value of moral diversity as the number of powerholders scales.]]></description><link>https://newsletter.forethought.org/p/the-value-of-moral-diversity</link><guid isPermaLink="false">https://newsletter.forethought.org/p/the-value-of-moral-diversity</guid><dc:creator><![CDATA[Mia Taylor]]></dc:creator><pubDate>Tue, 14 Apr 2026 19:06:11 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c16f32d3-2b76-4cd0-aa56-00a74cb63876_2752x1536.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The intelligence explosion could concentrate power through several mechanisms. At one extreme, AI-enabled coups could let a small group&#8212;people in frontier labs, governments, or both&#8212;permanently entrench their power. But less extreme scenarios could also concentrate political and/or economic power: <a href="https://philiptrammell.substack.com/p/capital-in-the-22nd-century">labor automation might concentrate wealth among capital holders</a> (capital is far more unequally distributed than labor); and <a href="https://www.forethought.org/research/could-one-country-outgrow-the-rest-of-the-world">if one country came to dominate the world</a>, political power might concentrate among its citizens or rulers.</p><p>Concentrated power likely means fewer value systems among the people who collectively shape the future&#8212;that is, reduced moral diversity among powerholders.</p><p>Moral diversity has both costs and benefits: it enables moral trade and plausibly improves reflection, but also raises the likelihood of conflict and coordination problems. 
In this piece I ask: what is the optimal level of moral diversity for achieving a near-best future?</p><p>I argue that from this narrow perspective the optimal amount of moral diversity is about 10<sup>4</sup> to 10<sup>6</sup> powerholders, assuming they&#8217;re each about as different from each other as two randomly selected living humans.</p><p>A few caveats:</p><ul><li><p><strong>There are other reasons to care about moral diversity</strong> and oppose concentration of power that I don&#8217;t cover in this post. Extreme concentration of power is unfair, and many mechanisms that produce it are illegitimate (e.g., coups). Likewise, many mechanisms that produce concentration of power have <a href="https://www.forethought.org/research/human-takeover-might-be-worse-than-ai-takeover">bad selection effects</a>. Incorporating these considerations would probably push toward favoring broader distributions of power than this analysis recommends on its own.</p></li><li><p><strong>Non-linear value systems: </strong>I will be assuming that the &#8220;correct&#8221; moral system&#8212;the moral system that I would endorse on reflection&#8212;is linear. It&#8217;s plausible to me that the correct moral system actually has diminishing marginal returns, and this probably increases the case for moral diversity.</p></li><li><p><strong>The value of moral diversity depends heavily on the governance regime and technological capabilities</strong>&#8212;for instance, whether it&#8217;s possible for large numbers of actors to coordinate or whether it&#8217;s possible for a single actor to unilaterally destroy the universe. For each cost or benefit of moral diversity, I&#8217;ll flag these assumptions.</p></li><li><p><strong>The bottom-line numbers are very sensitive to my guesses on difficult-to-estimate parameters</strong>, like the probability distribution over the rate of people who converge to the correct moral system on reflection.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p></li></ul><p>Given these considerations, <strong>my best guess is that the overall optimal amount of moral diversity is greater than the range suggested by the models in this post.</strong> I&#8217;m presenting these simple models as useful ways to think about some of the costs and benefits of moral diversity, but I don&#8217;t think they give a complete picture by themselves.</p><p>The benefits of greater moral diversity are:</p><ul><li><p><strong><a href="https://newsletter.forethought.org/i/193885227/increasing-the-likelihood-of-rare-great-actors">Increasing the likelihood of rare great actors</a>: </strong>Increase the likelihood of getting a &#8220;bodhisattva&#8221;, a person who is highly motivated to pursue the correct values.</p><ul><li><p>This could be very valuable if it&#8217;s possible for that person to carry out moral trade with other powerholders and if <em>most</em> other powerholders have values that are resource-compatible with the bodhisattva&#8217;s values.</p></li><li><p>Given my assumptions about the base rate of bodhisattvas (and those who compete with them), increasing the number of powerholders yields log returns up to about 10<sup>6</sup>, after which it plateaus. 
(Unless you expect the rate of powerholders that compete with bodhisattvas to be much higher than the rate of bodhisattvas, in which case the plateau is earlier, at <em>N</em> = 1/rate of competitors).</p></li></ul></li><li><p><strong><a href="https://newsletter.forethought.org/i/193885227/increasing-the-likelihood-of-coordinating-on-moral-public-goods">Increasing the likelihood of coordinating on moral public goods</a>: </strong>Increase the likelihood that there&#8217;s critical mass to coordinate to fund goods that everyone values a bit (<a href="https://www.forethought.org/research/moral-public-goods-are-a-big-deal-for-whether-we-get-a-good-future">moral public goods</a>).</p><ul><li><p>This is most valuable when massive multilateral coordination is possible&#8212;through a government or voluntary deal-making&#8212;and when everyone has both idiosyncratic and shared values, but is individually most motivated to pursue the idiosyncratic ones.</p></li><li><p>I estimate that you get log returns on increasing the number of powerholders up to 10<sup>6</sup>, after which it plateaus.</p></li></ul></li><li><p><strong><a href="https://newsletter.forethought.org/i/193885227/increasing-the-quality-of-reflection">Improving the quality of reflection</a>.</strong></p><ul><li><p>Powerholders might reflect more effectively on their values if they are exposed to equals who disagree with them. I expect most of this value comes from increasing the number of powerholders from 1 to 10-100.</p></li><li><p>There might be outsized benefits from having &#8220;champions&#8221; of rare value systems if those value systems contain important insights that other powerholders would endorse on reflection&#8212;e.g., they care about some type of moral good that other powerholders weren&#8217;t initially tracking the value of. I expect that most of this value comes from increasing the number of powerholders up to about 10<sup>4</sup>.</p></li></ul></li></ul><p>The drawbacks of greater moral diversity are:</p><ul><li><p><strong><a href="https://newsletter.forethought.org/i/193885227/increasing-the-likelihood-of-rare-bad-actors">Increasing the likelihood of rare </a></strong><em><strong><a href="https://newsletter.forethought.org/i/193885227/increasing-the-likelihood-of-rare-bad-actors">bad</a></strong></em><strong><a href="https://newsletter.forethought.org/i/193885227/increasing-the-likelihood-of-rare-bad-actors"> actors</a>: </strong>Increase the likelihood that there&#8217;s at least one &#8220;destroyer&#8221;, an actor that&#8217;s motivated to destroy a bunch of value.</p><ul><li><p>This matters if it&#8217;s possible for a single actor to <em>unilaterally</em> destroy a lot of value, which I think is somewhat unlikely, so I rate this consideration lower than the previous three models.</p></li><li><p>But, on this model, I estimate that this risk grows logarithmically up until about 10<sup>8</sup> powerholders.</p></li><li><p>If you add destroyers to the bodhisattva model described above, then adding additional powerholders is valuable up until about 10<sup>4</sup> powerholders.</p></li></ul></li></ul><p>All this suggests that AI-enabled coups by small groups are a particularly important form of power concentration to prevent, relative to other forms of power concentration that are somewhat more diffuse (e.g., rising wealth inequality).</p><p>A major limitation of this modeling is that I&#8217;m treating powerholders as if they&#8217;re about as different from each other as two randomly selected living humans. 
In most scenarios with concentration of power, powerholders will be much more similar to each other than that. I think this is an especially serious issue for small numbers of powerholders, since in scenarios where a small number of people seize power, it&#8217;s more likely that they&#8217;re a close-knit coordinated group from a similar background (e.g., employees at a lab in a lab coup). My guess is that this is less serious for broader concentration of power scenarios (e.g., scenarios where power is consolidated among capital owners).</p><h1>Increasing the likelihood of rare great actors</h1><p>You might get outsized benefits from having just one powerholder motivated to pursue the correct values, if most other powerholders don&#8217;t care much about something incompatible with pursuing those values.</p><p>Here&#8217;s a toy model.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> Suppose that there are three types of powerholders:</p><ul><li><p>Bodhisattvas, who want to fill as much of the universe as possible with societies full of diverse types of flourishing beings.</p></li><li><p>Rivals, who have strong preferences that are linear in resources and <em>resource-incompatible</em><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> with the bodhisattva goals. Perhaps they linearly value keeping space pristine and untouched by humans, or value societies full of human-like minds or copies of themselves.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> Or maybe they have a different notion of flourishing than the bodhisattvas where it&#8217;s difficult to create minds that are flourishing by the lights of both the bodhisattvas and the rivals.</p></li><li><p>Easygoers, who have preferences with diminishing marginal returns. Perhaps they care about the Milky Way being filled with a <a href="https://www.forethought.org/research/no-easy-eutopia#22-common-sense-utopia">common-sense utopia</a> of flourishing humans, but don&#8217;t care much about what happens with the rest of the universe.</p></li></ul><p>I will assume for the purposes of this model that bodhisattvas and rivals are both fairly rare relative to easygoers.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p><p>Suppose that after the intelligence explosion, space resources are auctioned off. Easygoers bid up prices in the Milky Way and nearby galaxies, but resources further out remain cheap. Those distant resources are split between bodhisattvas and rivals. 
The overall value of the future will be determined by what share of resources are controlled by the bodhisattvas&#8212;so the total fraction of value achieved is <em>B</em>/(<em>R </em>+<em> B</em>), where <em>B</em> is the number of bodhisattvas and <em>R</em> is the number of rivals.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p><p>Under this model, there are two important cases:</p><ul><li><p>There are few enough powerholders that you expect less than one bodhisattva <em>or</em> rival.</p><ul><li><p>In this case, it&#8217;s useful to increase the number of powerholders because you get additional &#8220;shots on goal&#8221;&#8212;each additional powerholder is an extra chance to get a bodhisattva.</p></li></ul></li><li><p>There are enough powerholders that you expect at least one bodhisattva or rival.</p><ul><li><p>So in expectation, the bodhisattvas get <em>p</em>/(<em>p</em> + <em>q</em>) of the total available value, where <em>p</em> is the rate of bodhisattvas and <em>q</em> is the rate of rivals.</p></li><li><p>Increasing the number of powerholders reduces variance, bringing the actual share of value closer to <em>p</em>/(<em>p </em>+ <em>q</em>), but does not change the expected value.</p></li></ul></li></ul><p>For example, if we assume that about 1 in 10,000 people are bodhisattvas and 1 in 10,000 are rivals, then this is how the value of the future scales with the number of powerholders:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OQaK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a891943-5753-4059-ae81-4ac1d26a0a0d_567x442.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!OQaK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a891943-5753-4059-ae81-4ac1d26a0a0d_567x442.png 424w, https://substackcdn.com/image/fetch/$s_!OQaK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a891943-5753-4059-ae81-4ac1d26a0a0d_567x442.png 848w, https://substackcdn.com/image/fetch/$s_!OQaK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a891943-5753-4059-ae81-4ac1d26a0a0d_567x442.png 1272w, https://substackcdn.com/image/fetch/$s_!OQaK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a891943-5753-4059-ae81-4ac1d26a0a0d_567x442.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OQaK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a891943-5753-4059-ae81-4ac1d26a0a0d_567x442.png" width="567" height="442" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0a891943-5753-4059-ae81-4ac1d26a0a0d_567x442.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:442,&quot;width&quot;:567,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!OQaK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a891943-5753-4059-ae81-4ac1d26a0a0d_567x442.png 424w, https://substackcdn.com/image/fetch/$s_!OQaK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a891943-5753-4059-ae81-4ac1d26a0a0d_567x442.png 848w, https://substackcdn.com/image/fetch/$s_!OQaK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a891943-5753-4059-ae81-4ac1d26a0a0d_567x442.png 1272w, https://substackcdn.com/image/fetch/$s_!OQaK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a891943-5753-4059-ae81-4ac1d26a0a0d_567x442.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The point at which you get the plateau depends on your estimate of <em>p</em> and <em>q</em>. How common are bodhisattvas and rivals?</p><p>You probably need three things to be a bodhisattva: the right starting position (e.g., the correct initial moral intuitions), the right reflective process, and a strong commitment to doing the most good by your lights with most of your resources. 
Here&#8217;s a very rough BOTEC where I try to estimate the rate of bodhisattvas among the current human populations.</p><ul><li><p>0.1-50% for a sufficiently strong commitment to doing the most good by your lights with most of your resources.</p></li><li><p>10-50% for the right reflective process conditional on strong commitment to do good.</p></li><li><p>1-100% for right &#8220;starting&#8221; intuitions, conditional on the previous two.</p></li></ul><p>This gives a range of 1 in 4 to 1 in 1 million.</p><p>It&#8217;s plausible that the rate of rivals will be in the same ballpark as the rate of bodhisattvas. Rivals share many features in common with bodhisattvas, which is part of why they&#8217;re resource-incompatible, e.g., they have non-negligible returns to vast resources and they care about the use of distant galaxies and time periods. If the rate of rivals is fairly close&#8212;i.e., within 1-3 orders of magnitude of the rate of bodhisattvas&#8212;then this suggests logarithmic returns to increasing the number of powerholders up to about 10<sup>5</sup> to 10<sup>6</sup>, after which it quickly levels off.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!fJ0Z!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4c90a50-7963-4adc-9a12-bd93bdc59daa_567x442.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!fJ0Z!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4c90a50-7963-4adc-9a12-bd93bdc59daa_567x442.png 424w, https://substackcdn.com/image/fetch/$s_!fJ0Z!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4c90a50-7963-4adc-9a12-bd93bdc59daa_567x442.png 848w, https://substackcdn.com/image/fetch/$s_!fJ0Z!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4c90a50-7963-4adc-9a12-bd93bdc59daa_567x442.png 1272w, https://substackcdn.com/image/fetch/$s_!fJ0Z!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4c90a50-7963-4adc-9a12-bd93bdc59daa_567x442.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!fJ0Z!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4c90a50-7963-4adc-9a12-bd93bdc59daa_567x442.png" width="567" height="442" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a4c90a50-7963-4adc-9a12-bd93bdc59daa_567x442.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:442,&quot;width&quot;:567,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!fJ0Z!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4c90a50-7963-4adc-9a12-bd93bdc59daa_567x442.png 424w, https://substackcdn.com/image/fetch/$s_!fJ0Z!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4c90a50-7963-4adc-9a12-bd93bdc59daa_567x442.png 848w, https://substackcdn.com/image/fetch/$s_!fJ0Z!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4c90a50-7963-4adc-9a12-bd93bdc59daa_567x442.png 1272w, https://substackcdn.com/image/fetch/$s_!fJ0Z!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4c90a50-7963-4adc-9a12-bd93bdc59daa_567x442.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>For the blue line, the rate of bodhisattvas and rivals are sampled independently from [1e-6, 0.1] (log-uniform). For the orange line, the rate of bodhisattvas is sampled from [1e-6, 0.1] and the rate of rivals is sampled within two orders of magnitude of the rate of bodhisattvas. For the green line, rivals tend to be more common than bodhisattvas&#8212;between equally common and a thousand times more common.</em></figcaption></figure></div><p>It&#8217;s also possible that the rate of rivals won&#8217;t be tightly correlated with the rate of bodhisattvas. 
<p>If your lower bound on <em>q</em> is substantially greater than your lower bound on <em>p</em>, then the value will plateau once the population is greater than 1/<em>q</em>.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!VDaG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0860f103-4b19-4c1c-bb88-ecd8bde4fda6_691x458.png" width="691" height="458" alt="" loading="lazy"><figcaption class="image-caption"><em>I&#8217;m again sampling the rate of bodhisattvas log-uniformly between 1e-1 and 1e-6, but this time holding the rate of rivals fixed at different levels.</em></figcaption></figure></div><p>In the extreme&#8212;if &gt;10% of powerholders are likely to be rivals&#8212;then we no longer get much value from a few highly motivated bodhisattvas. The next model discusses how moral diversity could be valuable even if most people are rivals.</p><h1>Increasing the likelihood of coordinating on moral public goods</h1><p>In the previous section, we considered the case where a relatively small share of the population cared about how resources deep in space were used. What if instead many people have resource-incompatible goals that can absorb large quantities of resources?</p><p>I&#8217;ve <a href="https://www.forethought.org/research/moral-public-goods-are-a-big-deal-for-whether-we-get-a-good-future">argued elsewhere</a> that in such cases they could often make a deal to collectively fund moral public goods, and this would probably be good, since there would be significant gains from trade and a shift of resources from idiosyncratic to more broadly-shared preferences.</p><p>How many powerholders do we need to ensure that moral public goods are funded?</p><p>It depends on how much people value the moral public good relative to the best goods according to their idiosyncratic preferences. For a trade to be possible at all, there must be gains from trade for all participants. For example, if each person <em>i</em> has a linear utility function <em>u<sub>i</sub></em> = <em>x<sub>i</sub></em> + <em>m &#215; y</em> (where <em>x<sub>i</sub></em> is the level of spending on their idiosyncratic good and <em>y</em> is the level of spending on the public good), then people will spend on the public good only if <em>N</em> &#8805; 1/<em>m</em>: each participant gives up one unit of idiosyncratic spending, worth 1, and gains <em>m</em> for every unit the group contributes, so an everyone-contributes deal pays off only when <em>m</em> &#215; <em>N</em> &#8805; 1. Multipliers in the range of 1 to 10<sup>-6</sup> seem quite plausible.</p><p>I am somewhat more skeptical of multipliers much smaller than 10<sup>-6</sup>. First, it&#8217;s unclear to what extent people will have very weak preferences that are psychologically distinguishable from no preference at all, which makes extremely low multipliers (e.g., 10<sup>-30</sup>) implausible. 
Second, if the multiplier for a particular consensus good gets very low, then it seems increasingly plausible that there was some other, better deal that they could have made with a subset of their trading partners who shared some of their idiosyncratic preferences.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a></p><p>Based on these considerations, my best guess is that the multipliers are log-uniformly distributed from 10<sup>-6</sup> to 1&#8212;implying logarithmic returns to growing the population of powerholders up to around one million.</p><h1>Increasing the quality of reflection</h1><p>In the previous two models, I&#8217;ve treated the powerholders&#8217; values as developing mostly independently. But if powerholders influence each other&#8217;s reflection&#8212;e.g., by arguing with each other about their values&#8212;then greater initial moral diversity could help powerholders converge to a better set of final values, through mechanisms like the following:</p><p><strong>Social exposure to non-sycophants</strong>. If one person single-handedly carries out a coup and ends up with a decisive strategic advantage, they might find themselves surrounded by yes-men who are utterly reliant on the dictator and unwilling to argue forcefully for different values from what the dictator currently endorses. A similar dynamic might be at play if a small but ideologically very uniform group seizes power (e.g., a set of officials from the same presidential administration or perhaps a dictator and his close advisors). But if there are multiple, ideologically diverse powerholders, they might be able to challenge each other&#8217;s views and improve the overall quality of reflection.</p><p>Under this model, most of the value probably comes from moving from a single powerholder to tens or hundreds of powerholders, or from moving from one ideologically uniform group to multiple ideologically uniform groups (perhaps moving from a lab coup or an executive coup to a joint lab coup and executive coup).</p><p>This effect relies on powerholders socializing with each other, rather than retreating into their own bubbles of non-powerholding friends and sycophantic AIs.</p><p><strong>Champions for rare values</strong>. Powerholders with rare value systems might be able to act as &#8220;champions&#8221; for those value systems. For example, they might use AI labor to develop the strongest, most plausible version of that value system, or they might try to persuade other powerholders about the merits of that value system. This might be important if that rare value system includes an insight that&#8217;s missing from other value systems&#8212;perhaps most value systems care primarily about consciousness, but actually there&#8217;s <a href="https://linch.substack.com/p/further-moral-goods">another totally different type of moral good</a> that other powerholders would want to pursue if they were aware of it.</p><p>(In principle, non-powerholders could act as champions for rare values. But they might lack the resources (e.g., access to ASI labor) needed to develop the insights in their value systems. 
They might be reliant on the goodwill of powerholders and not want to push too aggressively for their alternative value system, or powerholders might simply not take non-powerholders seriously.)</p><p>Just as in the bodhisattva model, increasing the number of powerholders increases the chances that at least one powerholder can serve as a champion for a rare value system that contains a crucial insight.</p><p>I&#8217;m very uncertain about how common these champions are, but if they&#8217;re sufficiently rare, then we&#8217;re probably rather likely to get their insight via some other mechanism.</p><p>For example, some powerholders might be &#8220;superreflectors&#8221; who instruct their ASIs to steelman every known human value system and invent millions of novel value systems, searching for insights that they and other powerholders might endorse on reflection. I expect that superreflectors would achieve all of the value from having powerholders act as champions for rare value systems that they actually subscribe to (and more).</p><p>So increasing the number of powerholders adds value only up to the point where we are likely to have at least one superreflector. Superreflectors are also plausibly rather rare&#8212;perhaps between 1/10 and 1/10,000&#8212;so increasing the number of powerholders up to around 10,000 is valuable under this model.</p><h1>Increasing the likelihood of rare <em>bad</em> actors</h1><p>It&#8217;s possible (though rather unlikely<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a>) that a single bad actor could unilaterally destroy a lot of value, e.g., by</p><ul><li><p>Initiating a space race that results in an extremely <a href="https://hanson.gmu.edu/filluniv.pdf">inefficient use of space resources</a> by the lights of most people&#8217;s value systems.</p></li><li><p>Destroying the universe by initiating false vacuum decay or triggering another <a href="https://www.lesswrong.com/posts/3ww5zZgTTPySB3jpP/interstellar-travel-will-probably-doom-the-long-term-future">galactic-level x-risk</a>.</p></li></ul><p>As we increase the moral diversity of powerholders, we increase the chance of ending up with at least one powerholder that inherently values one of these activities enough that they will do it if they can. For example, <a href="https://joecarlsmith.substack.com/p/video-and-transcript-of-talk-on-can">locusts</a> might inherently value expanding through space as quickly as possible. We also increase the likelihood that one powerholder is ruthless or reckless enough to risk one of these activities&#8212;for example, a powerholder might threaten to initiate vacuum decay to extort concessions from other powerholders.</p><p>We can add these rare bad actors to the bodhisattva model described above. Now, in addition to bodhisattvas, rivals, and easygoers, we have a fourth type: destroyers. If one destroyer is present, total value is zero; otherwise it is calculated as before.</p>
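<p>Extending the earlier sketch with this rule gives a rough feel for the trade-off. The value function is the same illustrative assumption as before, and the destroyer-rate range is the &#8220;wild guess&#8221; range given below:</p><pre><code>import numpy as np

rng = np.random.default_rng(0)

def log_uniform(lo, hi, size):
    # Sample log-uniformly between lo and hi.
    return np.exp(rng.uniform(np.log(lo), np.log(hi), size))

def expected_value(n, p_bodhisattva, q_rival, d_destroyer, n_draws=2000):
    # Types are drawn independently. If any destroyer is present, total
    # value is zero; otherwise value is (as in the earlier sketch) the
    # bodhisattvas' share of resources among bodhisattvas and rivals --
    # an illustrative assumption, not the exact calculation in the post.
    b = rng.binomial(n, p_bodhisattva, n_draws)
    r = rng.binomial(n, q_rival, n_draws)
    no_destroyer = rng.binomial(n, d_destroyer, n_draws) == 0
    share = np.where(b > 0, b / np.maximum(b + r, 1), 0.0)
    return np.mean(share * no_destroyer)

# Bodhisattva and rival rates log-uniform on [1e-6, 0.1]; destroyer rate
# log-uniform on [1e-8, 1e-4].
for n in [10, 100, 1_000, 10_000, 100_000, 1_000_000]:
    ps = log_uniform(1e-6, 0.1, 300)
    qs = log_uniform(1e-6, 0.1, 300)
    ds = log_uniform(1e-8, 1e-4, 300)
    v = np.mean([expected_value(n, p, q, d) for p, q, d in zip(ps, qs, ds)])
    print(f"N = {n:>9,}: average value ~ {v:.3f}")
</code></pre><p>Under these assumptions, expected value first rises with <em>N</em> and then falls once <em>N</em> gets large relative to 1/<em>d</em> for typical destroyer rates <em>d</em>, which is the trade-off discussed below.</p>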
<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!uw4g!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5da177b7-935d-4e76-a72e-b11cae73cc90_640x480.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><img src="https://substackcdn.com/image/fetch/$s_!uw4g!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5da177b7-935d-4e76-a72e-b11cae73cc90_640x480.png" width="640" height="480" class="sizing-normal" alt="" loading="lazy"></picture></div></a><figcaption class="image-caption"><em>I am assuming that the rates of bodhisattvas and rivals are sampled log-uniformly between 0.1 and 1e-6.</em></figcaption></figure></div><p>When diversity is low, it&#8217;s unlikely that there&#8217;s a bodhisattva already. Then adding additional powerholders is all upside: if you add a bodhisattva, then you get some positive value, but if you add a destroyer, rival, or easygoer, then expected value stays around zero. But as diversity increases, it&#8217;s likely that there&#8217;s a bodhisattva already, which means that adding additional powerholders risks adding a destroyer, bringing us from positive value to near-zero value.</p><p>As the figure above shows, the value of <em>N</em> where we switch from the low-diversity regime to the high-diversity regime depends on the destroyer rate. As a wild guess, I estimate that the destroyer rate is distributed log-uniformly between 10<sup>-4</sup> and 10<sup>-8</sup>. Under those assumptions, increasing the number of powerholders is beneficial up to around 10<sup>4</sup> powerholders, after which additional powerholders reduce value.</p><p><em>This article was created by <a href="https://www.forethought.org/about">Forethought</a>. 
See all of our research on <a href="https://www.forethought.org/research">our website</a>.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>You might also disagree with me on what the correct moral system is likely to be, which could also lead to different parameters here.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Credit to Will MacAskill for this model.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>That is, the same resources cannot be used to simultaneously get most of the value by the lights of both the bodhisattva and the rival.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>This is assuming that the most flourishing minds have way higher value (under the correct moral view) than human-like minds.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>I think this is somewhat plausible&#8212;most people today have preferences that are sublinear in resources and do not care much about very distant galaxies. But it&#8217;s also plausible that future people will have more resource-hungry preferences, if they reflect on their preferences, if their sublinear preferences are all saturated, or if advances in technology allow them to personally benefit from consuming huge amounts of resources. In the section on moral public goods, I discuss how moral diversity might matter if linear preferences are common.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>This assumes that bodhisattvas and rivals individually have the same amount of resources on average.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>In fact, increasing <em>N</em> can make these side-deals more likely by increasing the number of people who care about the idiosyncratic good. For example:</p><ul><li><p>Imagine a world with 10 people, each of whom values 3 goods: copies of themselves, national glory (valued at 80% of copies of themselves), and hedonium (valued at 11% of copies of themselves). Suppose that each person is from a different nation. 
They will prefer to coordinate on hedonium.</p></li><li><p>But if there are twenty people, two from each nationality, then everyone will prefer to coordinate with their co-nationalist on producing national glory.</p></li></ul><p>Of course, it&#8217;s not totally clear, from a subjectivist perspective, whether (the general version of) this is bad.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>Perhaps the most plausible story for this is if powerholders spread across space, and the destroyer covertly carries out the destructive activity without others noticing before it&#8217;s too late. But I expect the other powerholders will very likely be able to anticipate and mitigate this risk (e.g., by demanding that the destroyer make verifiable commitments to avoid this activity before allowing the destroyer to leave the solar system).</p></div></div>]]></content:encoded></item><item><title><![CDATA[The good, the bad and the ugly: AI impacts on epistemics]]></title><description><![CDATA[For better or worse, AI could reshape the way that people work out what to believe and what to do.]]></description><link>https://newsletter.forethought.org/p/the-good-the-bad-and-the-ugly-ai</link><guid isPermaLink="false">https://newsletter.forethought.org/p/the-good-the-bad-and-the-ugly-ai</guid><dc:creator><![CDATA[Owen Cotton-Barratt]]></dc:creator><pubDate>Mon, 13 Apr 2026 17:15:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!WXXO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c2c8344-95d9-4067-a9e5-15c67d6bbe47_1280x898.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This article was created by <a href="https://www.forethought.org/about">Forethought</a>. See the original <a href="https://www.forethought.org/research/ai-impacts-on-epistemics-the-good-the-bad-and-the-ugly">on our website</a>.</em></p><h1>Intro</h1><p>For better or worse, AI could reshape the way that people work out what to believe and what to do. What are the prospects here?</p><p>In this piece, we&#8217;re going to map out the trajectory space as we see it. First, we&#8217;ll lay out three sets of dynamics that could shape how AI impacts epistemics (how we make sense of the world and figure out what&#8217;s true):</p><ul><li><p><a href="https://newsletter.forethought.org/i/193454919/the-good">The good</a>: there&#8217;s huge potential for AI to uplift our ability to track what&#8217;s true and make good decisions</p></li><li><p><a href="https://newsletter.forethought.org/i/193454919/the-bad">The bad</a>: AI could also make the world harder for us to understand, without anyone intending for that to happen</p></li><li><p><a href="https://newsletter.forethought.org/i/193454919/the-ugly">The ugly</a>: malicious actors could use AI to actively disrupt epistemics</p></li></ul><p>Then we&#8217;ll argue that <a href="https://newsletter.forethought.org/i/193454919/so-what-should-we-expect-to-happen">feedback loops</a> could easily push towards much better or worse epistemics than we&#8217;ve seen historically, making near-term work on AI for epistemics unusually important.</p><p>The stakes here are potentially very high. As AI advances, we&#8217;ll be faced with a whole raft of civilisational-level decisions to make. 
How well we&#8217;re able to understand and reason about what&#8217;s happening could make the difference between a future that we&#8217;ve chosen soberly and wisely, and a catastrophe we stumble into unawares.</p><h1>The good</h1><blockquote><p><em>&#8220;If I have seen further, it is by standing on the shoulders of giants.&#8221;</em> (Isaac Newton)</p></blockquote><p>There are lots of ways that AI could help improve epistemics. Many kinds of AI tools could directly improve our ability to think and reason. We&#8217;ve written more about these in our <a href="https://www.forethought.org/research/design-sketches-for-a-more-sensible-world">design sketches</a>, but here are some illustrations:</p><ul><li><p>Tools for <a href="https://www.forethought.org/research/design-sketches-collective-epistemics#">collective epistemics</a> could make it easy to know what&#8217;s trustworthy and reward honesty, making it harder for actors to hide risky actions or <a href="https://80000hours.org/problem-profiles/extreme-power-concentration/">concentrate power</a> by manipulating others&#8217; views.</p><ul><li><p>Imagine that when you go online, &#8220;community notes for everything&#8221; flag content that other users have found misleading, and &#8220;rhetoric highlighting&#8221; automatically flags persuasive but potentially misleading language. With a few clicks, you can see the epistemic track record of any actor, or access the full provenance of a given claim. Anyone who wants can compare state-of-the-art AI systems using epistemic virtue evals, which also exert pressure at the AI development stage.</p></li></ul></li><li><p>Tools for <a href="https://www.forethought.org/research/design-sketches-tools-for-strategic-awareness">strategic awareness</a> could deepen people&#8217;s understanding of what&#8217;s actually going on around them, making it easier to make good decisions, keep up with the pace of progress, and steer away from failure modes like <a href="https://gradual-disempowerment.ai/">gradual disempowerment</a>.</p><ul><li><p>Imagine that superforecaster-level forecasting and scenario planning are available on tap, and automated OSINT gives people access to much higher quality information about the state of the world.</p></li></ul></li><li><p>Technological analogues to <a href="https://www.forethought.org/research/design-sketches-angels-on-the-shoulder">angels-on-the-shoulder</a>, like personalised learning systems and reflection tools, could make decision-makers better informed, more situationally aware, and more in touch with their own values.</p><ul><li><p>Imagine that everyone has access to high-quality personalised learning, automated deep briefings for high-stakes decisions, and reflection tools to help them understand themselves better. In the background, aligned recommender systems promote long-term user endorsement, and some users enable a guardian coach system which flags any actions the person might regret taking in real time.</p></li></ul></li></ul><p>Structurally, AI progress might also enable better reasoning and understanding, for example by automating labour such that people have more time and attention, or by making people wealthier and healthier.</p><p>These changes might enable us to approach something like epistemic flourishing, where it&#8217;s easier to find out what&#8217;s true than it is to lie, and the world in most people&#8217;s heads is pretty similar to the world as it actually is. 
This could radically improve our prospects of safely <a href="https://www.forethought.org/research/preparing-for-the-intelligence-explosion">navigating the transition to advanced AI</a>, by:</p><ul><li><p>Helping us to keep pace with the increasing speed and complexity of the situation, so we&#8217;re able to make informed and timely decisions.</p></li><li><p>Ensuring that key decision-makers don&#8217;t make catastrophic unforced errors through lack of information or understanding.</p></li><li><p>Making it harder for malicious actors to manipulate the information environment in their favour to increase their own influence.</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!WXXO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c2c8344-95d9-4067-a9e5-15c67d6bbe47_1280x898.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><img src="https://substackcdn.com/image/fetch/$s_!WXXO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c2c8344-95d9-4067-a9e5-15c67d6bbe47_1280x898.png" width="1280" height="898" class="sizing-normal" alt="A Philosopher Lecturing on the Orrery, a painting by Joseph Wright of Derby. It depicts a lecturer giving a demonstration of an orrery &#8211; a mechanical model of the Solar System &#8211; to a small audience." loading="lazy"></picture></div></a><figcaption class="image-caption"><em><a href="https://en.wikipedia.org/wiki/A_Philosopher_Lecturing_on_the_Orrery#/media/File:Wright_of_Derby,_The_Orrery.jpg">A Philosopher Lecturing on the Orrery</a>, by Joseph Wright of Derby (1766)</em></figcaption></figure></div><p>What&#8217;s driving these potential improvements?</p><ul><li><p><strong>AI will be able to think much more cheaply and quickly than humans.</strong> Partly this will mean that we can reach many more insights with much less effort. Partly this will make it possible to understand things that are currently infeasible for us to understand (because it would take too many humans too long to figure it out).</p></li><li><p><strong>AI can &#8216;know&#8217; much more than any human.</strong> Right now, a lot of information is siloed in specific expert communities, and it&#8217;s slow to filter out to other places even when it would be very useful there. 
AI will be able to port and apply knowledge much more quickly to the relevant places.</p></li></ul><h1>The bad</h1><blockquote><p><em>&#8220;A wealth of information creates a poverty of attention.&#8221;</em> (Herbert Simon)</p></blockquote><p>AI could also make epistemics worse without anyone intending it, by making the world more confusing and degrading our information and processing.</p><p>There are a few different ways that AI could unintentionally weaken our epistemics:</p><ul><li><p><strong>The world gets faster and more complex.</strong> As AI progresses, our information-processing capabilities are going to go up &#8212; but so is the complexity of the world. Technological progress could become <a href="https://www.forethought.org/research/preparing-for-the-intelligence-explosion">dramatically faster</a> than today, making the world more disorienting and harder to understand than it is today. If tech progress reaches fast enough speeds, it&#8217;s possible that we won&#8217;t be able to keep up, and even the best AI tools available won&#8217;t help us to see through the fog.</p></li><li><p><strong>The quality of the information we&#8217;re interacting with gets worse,</strong> because of:</p><ul><li><p><strong>Faster memetic evolution.</strong> As more and more content is generated by and mediated through AI systems working at machine speeds, the pace of memetic and cultural change will probably get a lot faster than it is today. As the pace quickens, memes which are attention-grabbing could increasingly outcompete those which are truthful.</p></li><li><p><strong>More difficult verification.</strong> This could happen through a combination of:</p><ul><li><p><strong>AI slop.</strong> In hard-to-verify domains, AI could massively increase the quantity of plausible-looking but wrong information, without also being able to help us to verify which bits are right.</p></li><li><p><strong>AI-generated &#8216;evidence&#8217;.</strong> As the quality of AI-generated video, audio, images, and text continues to improve, it may become pretty difficult to tell which bits of evidence are real and which are spurious.</p></li></ul></li></ul></li><li><p><strong>We get worse at processing the information we get</strong>, because:</p><ul><li><p><strong>Our emotions get in the way.</strong> AI progress could be very disorienting, generate serious crises, and cause people a lot of worry and fear. This could get in the way of clear thinking.</p></li><li><p><strong>Using AI to help us with information processing degrades our thinking</strong>, via:</p><ul><li><p><strong>Adoption of low-quality AI tools for epistemics:</strong> In many areas of epistemics, it&#8217;s hard to say what counts as &#8216;good&#8217;. This makes epistemic tools harder to assess, and could lead to people trusting these tools either too much or too little. Inappropriately high levels of trust in epistemic tools could take various forms, including:</p><ul><li><p>First mover advantages for early but imperfect systems, which are then hard to replace with better systems because people trust the earlier systems more.</p></li><li><p>The use of epistemically misaligned systems, which aren&#8217;t actually truth-tracking but it&#8217;s not possible for us to discern that.</p></li></ul></li><li><p><strong>Fragmentation of the information environment:</strong> AI will make it easier to create content (potentially interactive content) that pulls people in and monopolises their attention. 
This could reduce attention available for important truth-tracking mechanisms, and make it harder to coordinate groups of people around important actions. In the extreme, some people might end up in effectively closed information bubbles, where all of their information is heavily filtered through the AI systems they interact with directly. The more fragmented the information environment becomes, the harder it could get for people to make sense of what&#8217;s happening in the world around them, and to engage with other people and other information bubbles.</p></li><li><p><strong>Epistemic dependence:</strong> if people increasingly outsource their thinking to AI systems, they may lose the ability to think critically for themselves.</p></li></ul></li></ul></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!drBz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e9f3c3a-1d4c-47f5-97bc-8ba006fdca14_455x542.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><img src="https://substackcdn.com/image/fetch/$s_!drBz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e9f3c3a-1d4c-47f5-97bc-8ba006fdca14_455x542.png" width="455" height="542" class="sizing-normal" alt="Allegory of Error by Stefano Bianchetti. An engraving depicting a blindfolded figure with donkey ears staggering forward holding a staff." loading="lazy"></picture></div></a><figcaption class="image-caption"><em><a href="https://www.mediastorehouse.com/fine-art-finder/artists/austrian-school/allegory-error-staggering-attitude-blindfold-22293032.html">Allegory of Error</a>, Stefano Bianchetti (1801)</em></figcaption></figure></div><h1>The ugly</h1><blockquote><p><em>&#8220;The ideal subject of totalitarian rule is not the convinced Nazi or the convinced Communist, but people for whom the distinction between fact and fiction (i.e., the reality of experience) and the distinction between true and false (i.e., the standards of thought) no longer exist.&#8221;</em> (Hannah Arendt, <em>The Origins of Totalitarianism</em>)</p></blockquote><p>We&#8217;ve just talked about ways that AI could make epistemics worse without anyone intending that. But we might also see actors using AI to actively interfere with societal epistemics. 
(In reality these things are a spectrum, and the dynamics we discussed in the preceding section could also be actively exploited.)</p><p>What might this look like?</p><ul><li><p><strong>Automated propaganda and persuasion:</strong> AI could be used to generate high-quality persuasive content at scale. This could take the form of highly tailored, well-written propaganda. If this content were then used as training data for next generation models, biases could get even more entrenched. Additionally, AI persuasion could come in the form of models which are subtly biased in a particular direction. Particularly if many users are spending large amounts of time talking to AI (e.g. AI companions), the persuasive effects could be much larger than is scalable today via human-to-human persuasion.</p></li><li><p><strong>Using AI to undermine sense-making:</strong> AI could be used to generate high-quality content which casts doubt on institutions, individuals, and tools that would help people understand what&#8217;s going on, or to directly sabotage such tools. More indirectly, actors could also use AI to generate content which adds to complexity, for example by wrapping important information in complex abstractions and technicalities, and generating large quantities of very readable reports and news stories which distract attention.</p></li><li><p><strong>Surveillance:</strong> AI surveillance could monitor people&#8217;s communications in much more fine-grained ways, and punish them when they appear to be thinking along undesirable lines. This could be abused by states, or could become a tool that private actors can wield against their enemies. In either case, the chilling effect on people&#8217;s thinking and behaviour could be significant.</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ZWin!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d5d4364-72c5-4ebb-841a-84382a96a93c_1280x929.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ZWin!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d5d4364-72c5-4ebb-841a-84382a96a93c_1280x929.png 424w, https://substackcdn.com/image/fetch/$s_!ZWin!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d5d4364-72c5-4ebb-841a-84382a96a93c_1280x929.png 848w, https://substackcdn.com/image/fetch/$s_!ZWin!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d5d4364-72c5-4ebb-841a-84382a96a93c_1280x929.png 1272w, https://substackcdn.com/image/fetch/$s_!ZWin!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d5d4364-72c5-4ebb-841a-84382a96a93c_1280x929.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ZWin!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d5d4364-72c5-4ebb-841a-84382a96a93c_1280x929.png" width="1280" height="929" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6d5d4364-72c5-4ebb-841a-84382a96a93c_1280x929.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:929,&quot;width&quot;:1280,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1840447,&quot;alt&quot;:&quot;The Card Sharp with the Ace of Diamonds, an oil-on-canvas painting by Georges de La Tour. It depicts a card game in which a young man is being fleeced of his money by the other players, including a card sharp who is retrieving the ace of diamonds from behind his back.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://newsletter.forethought.org/i/193454919?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d5d4364-72c5-4ebb-841a-84382a96a93c_1280x929.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="The Card Sharp with the Ace of Diamonds, an oil-on-canvas painting by Georges de La Tour. It depicts a card game in which a young man is being fleeced of his money by the other players, including a card sharp who is retrieving the ace of diamonds from behind his back." title="The Card Sharp with the Ace of Diamonds, an oil-on-canvas painting by Georges de La Tour. It depicts a card game in which a young man is being fleeced of his money by the other players, including a card sharp who is retrieving the ace of diamonds from behind his back." srcset="https://substackcdn.com/image/fetch/$s_!ZWin!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d5d4364-72c5-4ebb-841a-84382a96a93c_1280x929.png 424w, https://substackcdn.com/image/fetch/$s_!ZWin!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d5d4364-72c5-4ebb-841a-84382a96a93c_1280x929.png 848w, https://substackcdn.com/image/fetch/$s_!ZWin!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d5d4364-72c5-4ebb-841a-84382a96a93c_1280x929.png 1272w, https://substackcdn.com/image/fetch/$s_!ZWin!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d5d4364-72c5-4ebb-841a-84382a96a93c_1280x929.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" 
width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em><a href="https://en.wikipedia.org/wiki/The_Card_Sharp_with_the_Ace_of_Diamonds#/media/File:Le_Tricheur_%C3%A0_l%E2%80%99as_de_carreau_-_Georges_de_La_Tour_-_Mus%C3%A9e_du_Louvre_Peintures_RF_1972_8.jpg">The Card Sharp with the Ace of Diamonds</a>, by Georges de La Tour (~1636-1638)</em></figcaption></figure></div><p>But maybe this is all a bit paranoid. Why expect this to happen?</p><p>There&#8217;s a long history of powerful actors trying to distort epistemics,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> so we should expect that some people will be trying to do this. And AI will probably give them better opportunities to manipulate other people&#8217;s epistemics than have existed historically:</p><ul><li><p>It&#8217;s likely that access to the best AI systems and compute will be <a href="https://www.forethought.org/research/ai-enabled-coups-how-a-small-group-could-use-ai-to-seize-power#33-exclusive-access-to-coup-enabling-capabilities">unequal</a>, which favours abuse.</p></li><li><p>If people end up primarily interfacing with the world via AI systems, this will create a big lever for epistemic influence that doesn&#8217;t exist currently. It could be much easier to influence the behaviour of lots of AI systems at once than lots of people or organisations.</p></li></ul><p>It&#8217;s also worth noting that many of these abuses of epistemic tech don&#8217;t require people to have some Machiavellian scheme to disrupt epistemics or seek power for themselves (though these might arise later). Motivated reasoning could get you a long way:</p><ul><li><p>Legitimate communications and advertising blur into propaganda, and microtargeting is already a common strategy.</p></li><li><p>It&#8217;s easy to imagine that in training an AI system, a company might want to use something like its own profits as a training signal, without explicitly recognising the potential epistemic effects of this in terms of bias.</p></li></ul><h1>So what should we expect to happen?</h1><p>With all these dynamics pulling in different directions, should we expect that it&#8217;s going to get easier or harder for people to make sense of the world?</p><p>We think it could go either way, and that how this plays out is extremely consequential.</p><p>The main reason we think this is that the dynamics above are self-reinforcing, so the direction we set off in initially could have large compounding effects. In general, the better your reasoning tools and information, the easier it is for you to recognise what is good for your own reasoning, and therefore to improve your reasoning tools and information. The worse they are, the harder it is to improve them (particularly if malicious actors are actively trying to prevent that).</p><p>We already see this empirically. The Scientific Revolution and the Enlightenment can be seen as examples of good epistemics reinforcing themselves. Distorted epistemic environments often also have self-perpetuating properties. 
Cults often require members to move into communal housing and cut contact with family and friends who question the group. Scientology frames psychiatry&#8217;s rejection of its claims as evidence of a conspiracy against it.</p><p>And on top of historical patterns, there are AI-specific feedback loops that reinforce initial epistemic conditions:</p><ul><li><p>Unlike previous information tech, AI has a tight feedback loop between content generated, and data used for training future models. So if models generate in/accurate content, future models are more likely to do so too.</p></li><li><p>How early AI systems behave epistemically will shape user expectations and what kinds of future AI behaviour there&#8217;s a market for.</p></li></ul><p>There are self-correcting dynamics too, so these self-reinforcing loops won&#8217;t go on forever. But we think it&#8217;s decently likely that epistemics get much better or much worse than they&#8217;ve been historically:</p><ul><li><p>One self-correcting mechanism historically has just been that it takes (human) effort to sustain or degrade epistemics. Continuing to improve epistemics requires paying attention to ways that epistemics could be eroded, and this isn&#8217;t incentivised in an environment that&#8217;s currently working well. Continuing to degrade epistemics requires willing accomplices &#8212; but the more an actor distorts things, the more that can galvanise opposition, and the fewer people may be willing to assist. By augmenting or replacing human labour with automated labour, AI could make it much cheaper to keep pushing in the same direction.</p></li><li><p>Another self-correcting mechanism is just that people and institutions adapt to new epistemic tech: as epistemics improve, deception becomes more sophisticated; and if epistemics worsen, people lose trust and create new mechanisms for assessing truth. But this adaptation happens at human speed, and AI will increasingly be changing the epistemic environment at a much faster pace. This creates the potential for self-reinforcing dynamics to drive to much more extreme places before adaptation has time to kick in.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p></li></ul><ul><li><p>There&#8217;s a limit to how good epistemics can get before hitting fundamental problems like complexity and irreducible uncertainty. But there seems to be a lot of room for improvement from where we&#8217;re currently standing (especially as good AI tools could help to handle greater amounts of complexity), and it would be a priori very surprising if we&#8217;d already reached the ceiling.</p></li><li><p>There&#8217;s also a limit to how bad epistemics can get: people aren&#8217;t infinitely suggestible, and often there are external sources of truth that limit how distorted beliefs can get (ground truth, or what gets said in other countries or communities). But as we discussed <a href="https://newsletter.forethought.org/i/193454919/the-bad">above</a>, access to ground truth and to other epistemic communities might get harder because of AI, so the floor here may lower.</p></li></ul><p>Given the real chance that we end up stuck in an extremely positive or negative epistemic equilibrium, our initial trajectory seems very important. 
The kinds of AI tools we build, the order we build them in, and who adopts them when could make the difference between a world of epistemic flourishing and a world where everyone&#8217;s understanding is importantly distorted. To give a sense of the difference this makes, here&#8217;s a sketch of each world (among myriad possible sketches):</p><ul><li><p>In the first world, we basically understand what&#8217;s going on around us. It&#8217;s not like we can now forecast the future with perfect accuracy or anything &#8212; there&#8217;s still irreducible uncertainty, and some people have better epistemics tools than others. But it&#8217;s gotten much cheaper to access and verify information. Public discourse is serious and well-calibrated, because epistemic infrastructure has made it quite hard to deceive or manipulate people &#8212; which in turn incentivises honesty. AI-assisted research and synthesis mean that knowledge which used to be siloed in specialist communities is now accessible and usable by anyone who needs it. And governments are able to make much more nuanced decisions far faster than they are today.</p></li><li><p>In the second, it&#8217;s no longer really possible to figure out what&#8217;s going on. There&#8217;s an awful lot of persuasive but low-quality AI content around, some of it generated with malicious intent. In response to this, people withdraw into their own AI-mediated epistemic bubbles &#8212; and unlike today&#8217;s filter bubbles, these can be comprehensive enough that people rarely encounter friction with outside perspectives at all. Meanwhile, companies and nations with a lot of compute find it pretty easy to distract the public&#8217;s attention from anything that would be inconvenient, and to outmaneuver the many actors who are trying to hold them to account. But their own reasoning also gets degraded by all this information pollution, as their AI systems are trained on the same corrupted public information.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> Even the people who think they&#8217;re shaping the narrative are increasingly unable to see clearly.</p></li></ul><p>The world we end up in is the world from which we have to navigate the intelligence explosion, making decisions like how to manage misaligned AI systems, whether to grant AI systems rights, and how to divide up the resources of the cosmos. How AI impacts our epistemics between now and then could be one of the biggest levers we have on navigating this well.</p><h1>Things we didn&#8217;t cover</h1><h2><strong>Whose epistemics?</strong></h2><p>We mostly talked about AI impacts on epistemics in general terms. But AI could impact different groups&#8217; epistemics differently &#8212; and different groups&#8217; epistemics could matter more or less for getting to good outcomes. It would be cool to see further work which distinguishes between scenarios where good outcomes require:</p><ul><li><p>Interventions that raise the epistemic floor by improving everyone&#8217;s epistemics.</p></li><li><p>Interventions that raise the ceiling by improving the epistemics of the very clearest thinking.</p></li></ul><h2><strong>&#8216;Weird&#8217; dynamics</strong></h2><p>We focused on how AI could impact human epistemics, in a world where human reasoning still matters. 
But eventually, we expect more and more of what matters for the outcomes we get will come down to the epistemics of AI systems themselves.</p><p>The dynamics which affect these AI-internal epistemics could therefore be enormously important. But they could look quite different from the human-epistemics dynamics that have been our focus here, and we didn&#8217;t think it made sense to expand the remit of the piece to cover these.</p><p><em>Thanks to everyone who gave comments on drafts, and to Oly Sourbutt and Lizka Vaintrob for a workshop which crystallised some of the ideas.</em></p><p><em>This article was created by <a href="https://www.forethought.org/about">Forethought</a>. See the original <a href="https://www.forethought.org/research/ai-impacts-on-epistemics-the-good-the-bad-and-the-ugly">on our website</a>.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Think of things like:</p><ul><li><p>Propaganda states like Nazi Germany and the USSR.</p></li><li><p>Corporate lobbying like the tobacco and sugar lobbies and climate science doubt campaigns.</p></li><li><p>CIA operations to spread doubt and confusion.</p></li></ul></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Though it&#8217;s possible that this dynamic will be more pronounced for epistemics getting extremely bad than for them getting extremely good. Consider these two very simplistic sketches:</p><ol><li><p>People start living in increasingly closed AI filter bubbles. Institutions are slow to adopt similar bubbles at a corporate level, but they also don&#8217;t have a mandate to change what their employees are doing. People&#8217;s filter bubbles tend to be pretty correlated with the people they work and interact with, so institutions end up with pretty distorted pictures of what&#8217;s going on even though they don&#8217;t actively start using harmful tech. Government regulation is too slow and reactive to stop this from happening.</p></li><li><p>People start to use provenance tracing and rhetoric highlighting by default when browsing, in response to an increasingly polarised memetic environment. There is adaptation to this &#8212; politicians start using subtler language and so on. But the net effect is still strongly positive: it&#8217;s hard to fake provenance, and removing overt rhetoric is already a big win, even if it means that more slippery language proliferates.</p></li></ol><p>In the first sketch, it&#8217;s straightforwardly the case that adaptive mechanisms are too slow. In the latter, it&#8217;s more that the tech is inherently defence-favoured.</p><p>We haven&#8217;t explored this area deeply, and think more work on this would be valuable.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Alternatively, these elites might retain very good epistemics for themselves, and choose to indefinitely maintain a situation where everyone else has a very distorted understanding, to further their own ends. 
It&#8217;s unclear to us which of these scenarios is more likely or concerning.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[Sketches of some defense-favoured coordination tech]]></title><description><![CDATA[We think that near-term AI could make it much easier for groups to coordinate, find positive-sum deals, navigate tricky disagreements, and hold each other to account.]]></description><link>https://newsletter.forethought.org/p/sketches-of-some-defense-favoured</link><guid isPermaLink="false">https://newsletter.forethought.org/p/sketches-of-some-defense-favoured</guid><dc:creator><![CDATA[Owen Cotton-Barratt]]></dc:creator><pubDate>Mon, 06 Apr 2026 15:18:38 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/2123578c-0372-46c6-be82-f369f054523f_1999x1173.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This article was created by <a href="https://www.forethought.org/about">Forethought</a>. See the original <a href="https://www.forethought.org/research/design-sketches-defense-favoured-coordination-tech">on our website</a>.</em></p><h1>Intro</h1><p>We think that near-term AI could make it much easier for groups to coordinate, find positive-sum deals, navigate tricky disagreements, and hold each other to account.</p><p>Partly, this is because AI will be able to process huge amounts of data quickly, making complex multi-party negotiations and discussions much more tractable. And partly it&#8217;s because secure enough AI systems would allow people to share sensitive information with trusted intermediaries without fear of broader disclosure, making it possible to coordinate around information that&#8217;s currently too sensitive to bring to the table, and to greatly improve our capacity for monitoring and transparency.</p><p>We want to help people imagine what this could look like. 
In this piece, we sketch six potential near-term technologies, ordered roughly by how achievable we think they are with present tech:<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><ul><li><p><strong><a href="https://newsletter.forethought.org/i/192925664/fast-facilitation">Fast facilitation</a></strong> &#8212; Groups quickly surface key points of consensus views and disagreement, and make decisions everyone can live with.</p></li><li><p><strong><a href="https://newsletter.forethought.org/i/192925664/automated-negotiation">Automated negotiation</a></strong> &#8212; Complicated bargains are discovered quickly via automated negotiation on behalf of each party, mediated by trusted neutral systems which can find agreements.</p></li><li><p><strong><a href="https://newsletter.forethought.org/i/192925664/arbitrarily-easy-arbitration">Arbitrarily easy arbitration</a></strong> &#8212; Disputes are resolved cheaply and quickly by verifiably neutral AI adjudicators.</p></li><li><p><strong><a href="https://newsletter.forethought.org/i/192925664/background-networking">Background networking</a></strong> &#8212; People who <em>should</em> know each other get connected (perhaps even before they know to go looking), enabling mutually beneficial trade, coalition building, and more.</p></li><li><p><strong><a href="https://newsletter.forethought.org/i/192925664/structured-transparency-for-democratic-oversight">Structured transparency for democratic oversight</a></strong> &#8212; Citizens hold their institutions to account in a fine-grained way, without compromising sensitive information.</p></li><li><p><strong><a href="https://newsletter.forethought.org/i/192925664/confidential-monitoring-and-verification">Confidential monitoring and verification</a></strong> &#8212; Deals can be monitored and verified, even when this requires sharing highly sensitive information, by using trusted AI intermediaries which can&#8217;t disclose the information to counterparties.</p></li></ul><p>We also sketch two cross-cutting technologies that support coordination:</p><ul><li><p><strong><a href="https://newsletter.forethought.org/i/192925664/ai-delegates-and-preference-elicitation">AI delegates and preference elicitation</a></strong> &#8212; AI delegates can faithfully represent and act for a human principal, perhaps supported by customisable off-the-shelf agentic platforms that integrate across many kinds of tech.</p></li><li><p><strong><a href="https://newsletter.forethought.org/i/192925664/charter-tech">Charter tech</a></strong> &#8212; The technologies above, or other coordination technologies, are applied to making governance dynamics more transparent, making it easier to anticipate how governance decisions will influence future coordination, and design institutions with this in mind.</p></li></ul><p>An important note is that coordination technologies are <a href="https://vitalik.eth.limo/general/2020/09/11/coordination.html">open to abuse</a>. You can coordinate to bad ends as well as good, and particularly confidential coordination technologies could enable things like price-setting, crime rings, and even <a href="https://www.forethought.org/research/ai-enabled-coups-how-a-small-group-could-use-ai-to-seize-power">coup plots</a>. 
Because the upsides to coordination are very high (including helping the rest of society to coordinate <em>against</em> these harms), we expect that on balance accelerating some versions of these technologies is beneficial. But this will be sensitive to exactly how coordination technologies are instantiated, and any projects in this direction need to take especial care to mitigate these risks.</p><p>We&#8217;ll start by talking about why these tools matter, then look at the details of what these technologies might involve before discussing some cross-cutting issues at the end.</p><h1>Why coordination tech matters</h1><p>Today, many positive-sum trades get left on the table, and a lot of resources are wasted in negative-sum conflicts. Better coordination capabilities could lead to very large benefits, including:</p><ul><li><p>Improving economic productivity across the board</p></li><li><p>Helping nations avoid wars and other destructive conflicts</p></li><li><p>Enabling larger groups to coordinate to avoid exploitation by a small few</p></li><li><p>Making democratic governance much more transparent, while protecting sensitive information</p></li></ul><p>What&#8217;s more, getting these benefits might be close to necessary for navigating the transition to more powerful AI systems safely. Absent coordination, competitive pressures are likely to incentivise developers to race forward as fast as possible, potentially greatly increasing the risks we collectively run. If we become much better at coordination, we think it is much more likely that the relevant actors will be able to choose to be cautious (assuming that is the collectively-rational response).</p><p>However, coordination tech could also have significant harmful effects, through enabling:</p><ul><li><p>AI companies to collude with each other against the interests of the rest of society<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p></li></ul><ul><li><p>A small group of actors to plot a <a href="https://www.forethought.org/research/ai-enabled-coups-how-a-small-group-could-use-ai-to-seize-power">coup</a></p></li><li><p>More selfishness and criminality, as social mechanisms of coordination are replaced by automated ones which don&#8217;t incentivise prosociality to the same extent</p></li></ul><p>Regardless of how these harms and benefits net out for &#8216;coordination tech&#8217; overall, we currently think that:</p><ul><li><p><strong>The shape and impact of coordination tech is an important part of how things will unfold in the near term, and it&#8217;s good for people to be paying more attention to this.</strong></p></li><li><p><strong>We&#8217;re going to </strong><em><strong>need</strong></em><strong> some kinds of coordination tech to safely navigate the AI transition.</strong></p></li><li><p><strong>The devil is in the details. There are ways of advancing coordination tech which are positive in expectation, and ways of doing so which are harmful.</strong></p></li></ul><h2><strong>Why &#8216;defense-favoured&#8217; coordination tech</strong></h2><p>That&#8217;s why we&#8217;ve called this piece &#8216;defense-favoured coordination tech&#8217;, not just &#8216;coordination tech&#8217;. 
We think generic acceleration of coordination tech is somewhat fraught &#8212; <strong>our excitement is about thoughtfully run projects which are sensitive to the possible harms, and target carefully chosen parts of the design space</strong>.</p><p>We&#8217;re not yet confident which bits of the space are best, and we haven&#8217;t seen convincing analysis on this from others either. Part of the reason we&#8217;re publishing these design sketches is to encourage and facilitate further thinking on this question.</p><p>For now, we expect that there are good versions of all of the technologies we sketch below &#8212; but we&#8217;ve flagged potential harms where we&#8217;re tracking them, and encourage readers to engage sceptically and with an eye to how things could go badly as well as how they could go well.</p><h1>Fast facilitation</h1><p>Right now, coordinating within groups is often complex, expensive, and difficult. Groups often drop the ball on important perspectives or considerations, move too slowly to actually make decisions, or fail to coordinate at all.</p><p>AI could make facilitation much faster and cheaper, by processing many individual views in parallel, tracking and surfacing all the relevant factors, providing secure private channels for people to share concerns, and/or providing a neutral arbiter with no stake in the final outcome. It could also make it much more practical to scale facilitation and bring additional people on board without slowing things down too much.</p><h2><strong>Design sketch</strong></h2><p>An AI mediation system briefly interviews groups of 3&#8211;300 people asynchronously, presents summary positions back to the group, and suggests next steps (including key issues to resolve). People approve of or complain about the proposal, and the system iterates to a depth appropriate to the importance of the decision.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!2Fyu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd35f32c6-8231-4887-86ef-649fdd8f835e_2875x1842.png" alt="Hand-drawn UI sketch of AI-powered coordination software showing admin setup inputs and a participant interface with options, discussion summaries, and an AI facilitator guiding group decision-making." /></figure></div>
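<p>To make this more concrete before unpacking the steps, here is a minimal Python sketch of the kind of facilitation loop we have in mind. The <code>llm</code> callable, the prompts, and the stopping rule are illustrative assumptions, not a description of any existing product.</p><pre><code>from typing import Callable

# Illustrative sketch only: 'llm' stands in for any chat/completion API; the
# prompts, data shapes, and stopping rule are assumptions, not a product design.

def facilitate(decision: str,
               participants: dict[str, str],   # name -> written context from that person
               llm: Callable[[str], str],
               max_rounds: int = 3) -> str:
    # Hold brief, private interviews to understand each participant's perspective.
    notes = {name: llm(f"Interview {name} about: {decision}\nContext: {context}")
             for name, context in participants.items()}
    proposal = ""
    for _ in range(max_rounds):
        # Build a shared map of key considerations and points of (dis)agreement.
        joined = "\n".join(f"{name}: {note}" for name, note in notes.items())
        summary = llm(f"Summarise agreements and disagreements on '{decision}':\n{joined}")
        # Propose next steps, then check whether everyone can live with them.
        proposal = llm(f"Propose consensus next steps given this map:\n{summary}")
        objections = {name: llm(f"Answer OBJECT or OK for {name} to: {proposal}")
                      for name in notes}
        if all("OBJECT" not in reply for reply in objections.values()):
            break   # broad backing reached
        for name, reply in objections.items():
            notes[name] += f"\nObjection this round: {reply}"   # fold objections back in
    return proposal</code></pre>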
points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Under the hood, it does something like:</p><ul><li><p>Gathers written context on the setting and decision</p></li><li><p>Holds brief, private conversations with each participant to understand their perspective</p></li><li><p>Builds a map of the issue at hand, involving key considerations and points of (dis)agreement</p><ul><li><p>Performs and integrates background research where relevant</p></li></ul></li><li><p>Identifies which people are most likely to have input that changes the picture</p></li><li><p>Distils down a shareable summary of the map, and seeks feedback from key parties</p></li><li><p>Proposes consensus statements or next steps for approval, iterating quickly to find versions that have as broad a backing as possible</p></li></ul><h2><strong>Feasibility</strong></h2><p>Fast facilitation seems fairly feasible technically. The <a href="https://www.science.org/doi/10.1126/science.adq2852">Habermas Machine</a> (2024) does a version of this that provided value to participants &#8212; and we have seen two years of progress in LLMs since then. And there are already facilitation services like <a href="https://chord.team/">Chord</a>. In general, LLMs are great at gathering and distilling lots of information, so this should be something they excel at. It&#8217;s not clear that current LLMs can already build accurate maps of arbitrary in-motion discourse, but they <a href="https://www.oliversourbut.net/i/182129031/structure-inference-and-discourse">probably could</a> with the right training and/or scaffolding.</p><p>Challenges for the technology include:</p><ul><li><p>Ensuring that it&#8217;s more efficient and a better user experience for moving towards consensus than other, less AI-based approaches.</p></li><li><p>Remaining robust against abusive user behaviour (e.g. you don&#8217;t want individuals to get their way via prompt injection or blatantly lying).</p></li></ul><p>Neither of these seem like fundamental blockers. For example, to protect against abuse, it may be enough to maintain transparency so that people can search for this. (Or if users need to enter confidential information, there might be services which can confirm the confidential information without revealing it.)</p><h2><strong>Possible starting points // concrete projects</strong></h2><ul><li><p><strong>Build a baby version.</strong> This could help us notice obstacles or opportunities that would have been hard to predict in advance. You could focus on the UI or the tech side here, or try to help run pilots at specific organisations or in specific settings.</p></li><li><p><strong>Design ways to evaluate fast facilitation tools.</strong> This makes it easier to assess and improve on performance. 
For example, you could create games/test environments with clear &#8220;win&#8221; and &#8220;failure&#8221; modes.</p></li><li><p><strong>Build subcomponents.</strong> For example:</p><ul><li><p>Bots that surface anonymous info.</p></li><li><p>Tools that try to surface areas of consensus or common knowledge as efficiently as possible, while remaining hard to game.</p></li></ul></li><li><p><strong>Make a meeting prep system.</strong> Focus first on getting good at meeting prep &#8212; creating an agenda and considerations that need live discussion &#8212; to reduce possible unease about outsourcing decision-making to AI systems.</p></li><li><p><strong>Make a bot to facilitate discussions.</strong> This could be used in online community fora, or to survey experts.</p></li><li><p><strong>Design ways to create live &#8220;maps&#8221; of discussions.</strong> Fast facilitation is fast because it parallelises communication. This makes it more important to have good tools for maintaining shared context.</p></li></ul><h1>Automated negotiation</h1><p>High-stakes negotiation today involves adversarial communication between humans who have limited bandwidth.</p><p>Negotiation in the future could look more like:</p><ul><li><p>You communicate your desires openly to a negotiation delegate who is on your side and asks questions only when needed to build a deeper model of your preferences.</p></li><li><p>The delegate goes away, and comes back with a proposal that looks pretty good, along with a strategic analysis explaining the tradeoffs / difficulties in getting more.</p></li></ul><h2><strong>Design sketch</strong></h2><p>Humans can engage AI delegates to represent them. The delegates communicate with each other via a neutral third party mediation system, returning to their principals with a proposal, or important interim updates and decision points.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!z29j!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b30d861-7ff6-4884-8e23-e28d46184534_1999x1496.png" alt="Hand-drawn diagram of AI-powered automated negotiation showing a user and AI delegate iterating on proposals, evaluating options, and refining terms until agreement is reached." /></figure></div>
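<p>As a toy illustration of the mediator&#8217;s role, here is a minimal Python sketch of one possible selection rule. The utility functions, reservation values, and the &#8220;maximise the worst-off party&#8217;s gain&#8221; criterion are illustrative assumptions, not a claim about how a real mediation system would score deals.</p><pre><code>from typing import Callable

Proposal = dict[str, float]              # e.g. {"price": 90.0, "delivery_days": 14.0}
Utility = Callable[[Proposal], float]    # each delegate scores proposals privately

def mediate(delegates: dict[str, Utility],
            reservations: dict[str, float],   # each party's walk-away value
            candidates: list[Proposal]) -> Proposal | None:
    """Pick a candidate deal that every party prefers to walking away, choosing
    the one that maximises the smallest gain over any party's reservation value.
    The mediator sees only scores, never the private reasoning behind them."""
    best, best_worst_gain = None, 0.0
    for proposal in candidates:
        gains = [delegates[name](proposal) - reservations[name] for name in delegates]
        worst_gain = min(gains)
        if worst_gain > best_worst_gain:
            best, best_worst_gain = proposal, worst_gain
    return best   # None means no candidate beat everyone's walk-away point</code></pre><p>A real mediator would also have to generate the candidate proposals and dampen incentives to misrepresent, which is where most of the difficulty lives.</p>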
15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Under the hood, this might look like:</p><ul><li><p>Delegate systems:</p><ul><li><p>Read over context documents and query principals about key points of uncertainty to build initial models of preferences.</p></li><li><p>Model the negotiation dynamics and choose strategic approaches to maximise value for their principal.</p></li><li><p>Go back to the principal with further detailed queries when something comes up that crosses an importance threshold and where they are insufficiently confident about being able to model the principal&#8217;s views faithfully.</p></li><li><p>Are ultimately trained to get good results by the principal&#8217;s lights.</p></li></ul></li><li><p>Neutral mediator system:</p><ul><li><p>Is run by a trusted third-party (or in higher stakes situations, perhaps is cryptographically secure with transparent code).</p></li><li><p>Discusses with all parties (either AI delegates, or their principals)</p><ul><li><p>Can hear private information without leaking that information to the other party</p><ul><li><p>Impossibility theorems mean that it will sometimes be strategically optimal for parties to misrepresent their position to the mediator (unless we give up on the ability to make many actually-good deals); however, we can seek a setup such that it is <em>rarely</em> a good idea to strategically misrepresent information, or that it <em>doesn&#8217;t help very much</em>, or that <em>it is hard to identify the circumstances in which it&#8217;s better to misrepresent</em></p></li></ul></li></ul></li><li><p>Searches for deals that will be thought well of by all parties, and proposes those to the delegates.</p></li><li><p>Is ultimately trained to help all parties reach fair and desired outcomes, while minimising incentives-to-misrepresent for the parties.</p></li></ul></li></ul><h2><strong>Feasibility</strong></h2><p>Some of the technical challenges to automated negotiation are quite hard:</p><ul><li><p>The kind of security needed for high-stakes applications isn&#8217;t possible today.</p></li><li><p>Getting systems to be deeply aligned with a principal&#8217;s best interests, rather than e.g. pursuing the principal&#8217;s short-term gratification via sycophancy, is an unsolved problem.</p></li></ul><p>That said, it&#8217;s already possible to experiment using current systems, and it may not be long before they start improving on the status quo for human negotiation. Low-stakes applications don&#8217;t require the same level of security, and will be a great training ground for how to set up higher stakes systems and platforms. And practical alignment seems good enough for many purposes today.</p><h2><strong>Possible starting points // concrete projects</strong></h2><ul><li><p><strong>Build an AI delegate for yourself or your friends.</strong> See if you can get it to usefully negotiate on your behalf with your friends or colleagues. Or failing that, if it can support you to think through your own negotiation position before you need to communicate with others about it.</p></li><li><p><strong>Build a negotiation app with good UI.</strong> Building on existing LLMs, build an app which helps people think through their negotiation position in a structured way. 
Focus on great UI.</p><ul><li><p>This could be non-interactive at first, and just involve communication between a human and the app, rather than between any AI systems.</p></li><li><p>But it builds the muscles of a) designing good UI for AI negotiation, and b) people actually using AI to help them with negotiation.</p></li></ul></li><li><p><strong>Run a pilot in an org or community you&#8217;re part of.</strong></p><ul><li><p>You could start with fairly low-stakes negotiations, like what temperature to set the office thermostat to or which topics to discuss in a given meeting slot.</p></li><li><p>Experimenting with different styles of negotiation (in terms of how high the stakes are, how complex the structure is, and what the domain is) could be very valuable.</p></li></ul></li></ul>
<h1>Arbitrarily easy arbitration</h1><p>Right now, the risk of expensive arbitration makes many deals unreachable. If disputes could be resolved cheaply and quickly using verifiably fair and neutral automated adjudicators, this could unlock massive coordination potential, enabling a multitude of cooperative arrangements that were previously prohibitively costly to make.</p><h2><strong>Design sketch</strong></h2><p>An &#8220;Arb-as-a-Service&#8221; layer plugs into contracts, platforms, and marketplaces. Parties opt in to standard clauses that route disputes to neutral AI adjudicators with a well-deserved reputation for fairness. In the event of a dispute, the adjudicator communicates with parties across private, verifiable evidence channels, investigating further as necessary when there are disagreements about facts. Where possible, they auto-execute remedies (escrow releases, penalties, or structured commitments). Human appeal exists but is rarely needed; sampling audits keep the system honest. Over time, this becomes ambient infrastructure for coordination and governance, not just commerce.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!XhC8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bb07ca4-ab95-4fca-9cd5-c328003d22fd_2732x2048.png" alt="Hand-drawn diagram of AI arbitration system showing contract disputes handled by an automated arbitration bot, with data gathering, analysis, and a final decision or settlement outcome." /></figure></div>
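<p>As a rough illustration of how the stages below could be chained together around a generic LLM call, here is a minimal Python sketch. The prompts, the <code>Ruling</code> type, and the escrow hook are assumptions for illustration, not a description of a real arbitration service.</p><pre><code>from dataclasses import dataclass
from typing import Callable

@dataclass
class Ruling:
    decision: str
    reasoning: str

def arbitrate(contract_text: str,
              evidence_by_party: dict[str, str],
              llm: Callable[[str], str],
              execute_remedy: Callable[[str], None] | None = None) -> Ruling:
    # 1. Agreement ingestion: extract the key terms for the parties to confirm.
    terms = llm(f"Extract the key obligations from this contract:\n{contract_text}")
    # 2. Automated discovery: fold in each party's submitted evidence.
    evidence = "\n".join(f"{party}: {text}" for party, text in evidence_by_party.items())
    # 3. Deep consideration: simulate the major perspectives before judging.
    arguments = llm(f"Argue each party's best case.\nTerms: {terms}\nEvidence: {evidence}")
    # 4. Transparent reasoning: produce a decision together with an explanation.
    reasoning = llm(f"Weigh these arguments and explain the fairest resolution:\n{arguments}")
    decision = llm(f"State the remedy in one sentence:\n{reasoning}")
    # 5. Optional smart-escrow integration: auto-execute the remedy if a hook is given.
    if execute_remedy is not None:
        execute_remedy(decision)
    return Ruling(decision=decision, reasoning=reasoning)</code></pre>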
points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>How this could work under the hood:</p><ol><li><p>Agreement ingestion</p><ul><li><p>Formal or natural language contracts are parsed and key terms extracted, with parties confirming the system&#8217;s interpretation before proceeding.</p></li><li><p>The system could also suggest pre-dispute modifications to make agreements clearer, flag potentially unenforceable terms, and maintain public precedent databases that help parties understand likely outcomes before committing.</p></li></ul></li><li><p>Automated discovery</p><ul><li><p>When disputes arise, an automated discovery process gathers relevant documentation, transaction logs, and communications from integrated platforms.</p></li><li><p>The system offers interviews and the chance to submit further evidence to each party.</p></li></ul></li><li><p>Deep consideration</p><ul><li><p>The system builds models of what different viewpoints (e.g. standard legal precedent; commonsense morality; each of the relevant parties) have to say on the situation and possible resolutions, to ensure that it is in touch with all major perspectives.</p></li><li><p>Where there are disagreements, the system simulates debate between reasonable perspectives.</p></li><li><p>It makes an overall judgement as to what is fairest.</p></li></ul></li><li><p>Transparent reasoning</p><ul><li><p>The system produces detailed explanations of its conclusions, with precedent citations and counterfactual analysis where appropriate.</p></li></ul></li><li><p>(Optional) Smart escrow integration</p><ul><li><p>Judgements automatically execute through cryptocurrency escrows or traditional payment rails, with graduated penalties for non-compliance.</p></li><li><p>In cases where the system detects evidence that is highly likely to be fraudulent, or other attempts to manipulate the system, it automatically adds a small sanction to the judgement, in order to disincentivise this behaviour.</p></li></ul></li><li><p>Opportunities for appeal</p><ul><li><p>Either party can pay a small fee to submit further evidence and have the situation re-considered in more depth by an automated system.</p></li><li><p>For larger fees they can have human auditors involved; in the limit they can bring things to the courts.</p></li></ul></li></ol><h2><strong>Feasibility</strong></h2><p>LLMs can already do basic versions of 1-4, but there are difficult open technical problems in this space:</p><ul><li><p><strong>Judgement:</strong> Systems may not currently have good enough judgement to do 1, 3, 4 in high-stakes contexts (and until recently, they clearly didn&#8217;t).</p></li><li><p><strong>Real-world evidence assessment:</strong> Systems don&#8217;t currently know how to handle conflicting evidence provided digitally about what happened in the real world.</p></li><li><p><strong>Verifiable fairness/neutrality:</strong> The full version of this technology would require a level of fairness and neutrality which isn&#8217;t attainable today.</p></li></ul><p>Those are large technical challenges, but we think it&#8217;s still useful to get started on this technology today, because iterating on less advanced versions of arbitration tech could help us to bootstrap our way to solutions. 
Particularly promising ways of doing that include:</p><ul><li><p>Starting in lower-stakes or easier contexts (for example, digital-only spaces avoid the challenge of establishing provenance for real-world evidence).</p></li><li><p>Creating evals, test environments and other infrastructure that helps us improve performance.</p></li></ul><p>On the adoption side, we think there are two major challenges:</p><ul><li><p><strong>Trust:</strong> As above, some amount of technical work is needed to make systems verifiably fair/neutral. But even if it becomes true that the systems are neutral, people need to build quite a high level of confidence that the system is genuinely impartial before they&#8217;ll bind themselves to its decisions for meaningful stakes.</p></li><li><p><strong>Legal integration:</strong> This tech is only useful to the extent that its arbitration decisions are recognised and enforced as legitimate by the traditional legal system, or are enshrined directly via contract in a self-enforcing way.</p><ul><li><p>(We are unsure how large a challenge this will be; perhaps you can write contracts today that are taken by the courts as robust. But it may be hard for parties to have large trust in them before they have been tested.)</p></li></ul></li></ul><p>Both of these challenges are reasons to start early (as there might be a long lead time), and to make work on arbitration tech transparent (to help build trust).</p><h2><strong>Possible starting points // concrete projects</strong></h2><ul><li><p><strong>Work with an arbitration firm.</strong> Work with (or buy) a firm already offering arbitration services to start automating parts of their central work, and scale up from there.</p></li><li><p><strong>Work with an online platform that handles arbitration.</strong> Use AI to improve their processes, and scale from there.</p></li><li><p><strong>Create a bot to settle informal disputes.</strong> Build an arbitration-as-a-service bot that people can use to settle informal disputes.</p></li><li><p><strong>Trial a system on internal disputes.</strong> This could be at your own organisation, another organisation, or a coalition of early adopter organisations.</p></li><li><p><strong>Run a pilot in parallel to regular arbitration.</strong> Run a pilot where an automated arbitration system is given access to all the relevant information to resolve disputes, and reaches its own conclusions &#8212; in parallel to the regular arbitration process, which forms the basis of the actual decision. You could partner with an arbitration firm, or potentially do this through a coalition of early adopter organisations, perhaps in combination with philanthropic funding.</p></li></ul><h1>Background networking</h1><p>We can only do things like collaborate, trade, or reconcile if we&#8217;re able to first find and recognise each other as potential counterparties. Today, people are brought into contact with each other through things like advertising, networking, even blogging. But these mechanisms are slow and noisy, so many people remain isolated or disaffected, and potentially huge wins from coordination are left undiscovered.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><p>Tech could bring much more effective matchmaking within reach. Personalised, context-sensitive AI assistance could carry out orders of magnitude more speculative matchmaking and networking. 
If this goes well, it might uncover many more opportunities for people to share and act on their common hopes and concerns.</p><h2><strong>Design sketch</strong></h2><p>A &#8216;matchmaking marketplace&#8217; of attentive, personalised helpers bustles in the background. When they find especially promising potential connections, they send notifications to the principals or even plug into further tools that automatically take the first steps towards seriously exploring the connection.</p><p>You can sign up as an individual or an existing collective. If you just want to use it passively, you give a delegate system access to your social media posts, search profiles, chatbot history, etc. &#8212; so this can be securely distilled into an up-to-date representation of hopes, intent, and capabilities. The more proactive option is to inject deliberate &#8216;wishes&#8217; through chat and other fluent interfaces.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!9hlE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39ccb91f-9159-4471-97f9-ede02126656a_2732x2048.png" alt="Hand-drawn diagram of AI background networking tool showing a network helper scanning connections, identifying opportunities, and generating proposals to connect users and coordinate groups." /></figure></div>
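<p>Here is a toy Python sketch of the &#8216;wish registry&#8217; idea: profiles are reduced to coarse tags, and a match reveals only a short teaser rather than the underlying profile. Matching by tag overlap is a stand-in for an embedding- or LLM-based matcher, and all of the names are illustrative.</p><pre><code>from dataclasses import dataclass

@dataclass
class WishProfile:
    user_id: str
    wishes: set[str]   # coarse tags distilled from posts, chats, and explicit wishes
    teaser: str        # the only text ever shown to a potential match

def find_matches(registry: list[WishProfile],
                 me: WishProfile,
                 min_overlap: int = 2) -> list[dict]:
    """Surface just enough about promising counterparties to decide whether a
    connection is worth exploring further, and nothing more."""
    matches = []
    for other in registry:
        if other.user_id == me.user_id:
            continue
        shared = me.wishes.intersection(other.wishes)
        if len(shared) >= min_overlap:
            matches.append({"user": other.user_id,
                            "shared": sorted(shared),
                            "teaser": other.teaser})
    return matches</code></pre>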
alt="Hand-drawn diagram of AI background networking tool showing a network helper scanning connections, identifying opportunities, and generating proposals to connect users and coordinate groups." title="Hand-drawn diagram of AI background networking tool showing a network helper scanning connections, identifying opportunities, and generating proposals to connect users and coordinate groups." srcset="https://substackcdn.com/image/fetch/$s_!9hlE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39ccb91f-9159-4471-97f9-ede02126656a_2732x2048.png 424w, https://substackcdn.com/image/fetch/$s_!9hlE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39ccb91f-9159-4471-97f9-ede02126656a_2732x2048.png 848w, https://substackcdn.com/image/fetch/$s_!9hlE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39ccb91f-9159-4471-97f9-ede02126656a_2732x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!9hlE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39ccb91f-9159-4471-97f9-ede02126656a_2732x2048.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Under the hood, there are a few different components working together:</p><ul><li><p>Interoperable, secure &#8216;wish profiling&#8217; systems which identify what different participants want.</p><ul><li><p>People connect their profiles on existing services (social media, chatbot logs, email, etc).</p></li><li><p>LLM-driven synthesis (perhaps combined with other forms of machine learning) curates a private profile of user desires.</p></li><li><p>Optionally, chatbot-style assistance can interview users on the points of biggest uncertainty, to build a more accurate profile.</p></li></ul></li><li><p>A searchable &#8216;wish registry&#8217; which organises large collections of wants and offers, while maintaining semi-privacy.</p><ul><li><p>Each user&#8217;s interests can run searches, finding potential matches and 
surfacing only enough information about them to know whether they are worth exploring further.</p></li></ul></li></ul><h2><strong>Feasibility</strong></h2><p>A big challenge here is privacy and surveillance. Doing background networking comprehensively requires sensitive data on what individuals really want. This creates a double-edged problem:</p><ul><li><p>If sensitive data is too broadly available, it can be used for surveillance, harassment, or exploitation; including by big corporations or states.</p></li><li><p>If sensitive data is completely private, it opens up the possibility of collusion, for example among criminals.</p></li></ul><p>This is a pretty challenging trade-off, with big costs on both sides. Perhaps some kind of filtering system which determines who can see which bits of data could be used to prevent data extraction for surveillance purposes while maintaining enough transparency to prevent collusion.</p><p>Ultimately, we&#8217;re not sure how best to approach this problem. But we think that it&#8217;s important that people think more about this, as we expect that by default, this sort of technology will be built anyway in a way that isn&#8217;t sufficiently sensitive to these privacy and surveillance issues. Early work which foregrounds solutions to these issues could make a big difference.</p><p>Other potential issues seem easier to resolve:</p><ul><li><p>Technically, background networking tools already seem within reach using current systems. Large-scale deployments would require indexing and registry, but it seems possible to get started on these using current systems.</p><ul><li><p>One note is that it seems possible to implement background networking in either a centralised or a decentralised way. It&#8217;s not clear which is best, though decentralised implementations will be more portable.</p></li></ul></li><li><p>Adoption also seems likely to work, because there are incentives for people to pay to discover trade and cooperation opportunities they would otherwise have missed, analogous to exchange or brokerage fees. Though there are some trickier parts, we expect them to ultimately be surmountable (though timing may be more up for grabs than absolute questions of adoption):</p><ul><li><p>In the early stages when not many people are using it, the value of background networking will be more limited. Possible responses include targeting smaller niches initially, and proactively seeking out additional network beneficiaries.</p></li><li><p>It&#8217;s harder to incentivise people to pay for speculative things like uncovering groups they&#8217;d love that don&#8217;t yet exist. You could get around this using entrepreneurial or philanthropic speculation (compare the <a href="https://link.springer.com/article/10.1023/A:1004957109535">dominant assurance contract</a> model and related payment incentivisation schemes).</p></li></ul></li></ul><h2><strong>Possible starting points // concrete projects</strong></h2><ul><li><p><strong>Work with existing matchmakers to improve their offering.</strong> Find groups that are already doing matchmaking and are eager for better systems &#8212; perhaps among community organisers, businesses, recruiters or investors. Work with them to understand the pain points in their current networking, and what automated offerings would be most appealing. Then build those tools and systems.</p></li><li><p><strong>Build a networking tool for a specific community.</strong> Build a custom networking system for a particular group or subculture. 
For example, this could look like a networking app or a plug-in to an existing online forum. This could start delivering value fairly quickly, and provide a good opportunity for iteration.</p></li></ul><h1>Structured transparency for democratic oversight</h1><p>Today, citizens in democracies have limited mechanisms to verify whether institutions&#8217; public claims are consistent with their internal evidence:</p><ul><li><p>The baseline is highly opaque.</p></li><li><p>Freedom of information systems help, but can be evaded by non-cooperating institutions.</p></li><li><p>Public inquiries can be reasonably thorough, but are expensive and slow.</p></li><li><p>Full transparency has many costs and is typically highly resisted.</p></li></ul><p>This is costly &#8212; e.g. the UK Post Office scandal over its Horizon IT system led to hundreds of wrongful prosecutions that could have been avoided. And it creates bad incentives for those running the institutions.</p><p>AI has the potential to change this. Instead of oversight being expensive, reactive, and slow, automated systems could in theory have real-time but sandboxed access to institutional data, routinely reviewing operational records against public claims and surfacing inconsistencies as they emerge.</p><p>Where confidential monitoring helps willing parties verify each other, <a href="https://aiprospects.substack.com/p/security-without-dystopia-new-options">structured transparency</a> for democratic oversight aims to hold institutions accountable to the broader public.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><h2><strong>Design sketch</strong></h2><p>When an oversight body wants to verify facts about the behaviour of another institution, it requests comprehensive data about the internal operations of that institution. AI systems are tasked with careful analysis of the details, flagging the type and severity of any potential irregularities. Most of the data never needs human review.</p><p>In the simpler version, this is just a tool which expands the capacity of existing oversight bodies. Even here, the capacity expansion could be relatively dramatic &#8212; this kind of semi-structured data analysis is the kind of work that AI models can excel at today &#8212; without needing to trust that the systems are infallible (since the most important irregularities will still have human review).</p><p>A more ambitious version treats this as a novel architecture for oversight. AI systems operate continuously within secure environments that don&#8217;t give any humans access to the full dataset. They can flag inconsistencies as institutional data is deposited rather than waiting for an investigation to begin. 
For maximal transparency, summaries could be made available to the public in real time, without revealing any confidential information that the public does not have rights to.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!hye6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F21063359-cb01-493f-abec-14893bff7ae3_1999x1173.png" alt="Hand-drawn diagram of AI structured transparency system showing secure data collection, analysis of institutional activity, and selective public reporting for oversight and accountability." loading="lazy"></figure></div><p>Under the hood, this might involve:</p><ul><li><p>Secure data repositories, such that institutions routinely share operational data with a sandboxed environment operated by or on behalf of the oversight body, without any regular human access to the data.</p></li><li><p>Continuous ingestion and indexing of institutional public outputs (press releases, regulatory filings, budget documents, etc.) into a searchable database.</p></li><li><p>Automated cross-referencing between public claims and internal records.</p></li><li><p>Highlighting of potential issues (mismatches between public statements and private information, as well as decisions made in violation of normal procedures).</p></li><li><p>Further automated investigation of potential issues, flagging cases to humans when sufficiently serious issues are identified with sufficient confidence.</p></li><li><p>Importantly, the sandbox outputs its findings but not the underlying data; if there is a need for transparency on that, it is a separate oversight question.</p></li></ul>
<h2><strong>Feasibility</strong></h2><p>There are two important aspects to feasibility here: technical and political.</p><p>Technically, decent reliability at the core functionality is possible today. Getting to the extremely high reliability needed to avoid flagging too many false positives across very large amounts of data might be a reach with present systems, but it is exactly the kind of capability that commercial companies are likely to be incentivised to develop for business use.</p><p>Political feasibility may vary a lot with the degree of ambition. The simplest versions of this technology might in many cases simply be adopted by existing oversight bodies to speed up their current work. Anything which requires them to get much more data (e.g. to put in the sandboxed environments) might require legislative change &#8212; which may be more achievable after the underlying technology can be shown to be highly reliable.</p><p>Challenges include:</p><ul><li><p>Adversarial dynamics: the technical bar to verify claims against actively adversarial institutions (who are manipulating deposited data, potentially via AI) is substantially higher.</p><ul><li><p>This is the bar that we&#8217;d need to reach for confidential monitoring below.</p></li></ul></li><li><p>Defamation risk: the downsides of false positives, where the system reports someone as misrepresenting things when they were not, could be significant (although this can perhaps be mitigated by giving people a right of rebuttal, where they provide further data to the AI systems which monitor the confidential data streams).</p></li><li><p>Avoiding abuse: designing the systems so that they do not expose the confidential data, and cannot be weaponised to ruin the reputation of a department with very normal levels of error.</p></li></ul><p>Ultimately, the more transformative potential of this technology comes in the medium term, with new continuous data access for oversight bodies. But this is likely to require legislative change, and the institutions subject to it may resist. Perhaps the most promising adoption pathway is to demonstrate value through voluntary pilots with oversight bodies that already have data access and want better tools. This could build the evidence base (and hence political constituency) for wider and deeper deployment.</p><h2><strong>Possible starting points // concrete projects</strong></h2><ul><li><p><strong>Retrospective validation on historical cases.</strong> Apply consistency-checking tools to document sets from well-understood historical cases where the relevant internal documents have subsequently been released (e.g. Enron emails).
This builds the technical foundation, and demonstrates the concept without requiring any current institutional access.</p></li><li><p><strong>Institutional public statement reliability tracker.</strong> Build a tool tracking whether agencies&#8217; public claims about performance, spending, or policy outcomes are consistent with publicly available data &#8212; statistical releases, budget documents, prior statements. Start with a single policy domain. This requires no institutional partnerships and builds a public constituency for structured transparency. This is a version of <a href="https://www.forethought.org/research/design-sketches-collective-epistemics#reliability-tracking">reliability tracking</a>, applied specifically to institutional accountability.</p></li><li><p><strong>Pilot a FOIA exemption assessment tool.</strong> Partner with an Inspector General office to build a tool that reviews withheld documents and assesses whether claimed exemptions (national security, personal privacy, deliberative process) are applied appropriately. The IG already has legal access under the Inspector General Act; the tool helps them do their existing job faster and builds the working relationship needed for more ambitious deployments. This is also a natural testbed for the sandboxed architecture in miniature &#8212; the tool operates within the IG&#8217;s secure environment, producing exemption-appropriateness findings without the documents themselves leaving the system.</p></li></ul><h1>Confidential monitoring and verification</h1><p>Monitoring and verifying that a counterparty is keeping up their side of the deal is currently expensive and noisy. Many deals currently aren&#8217;t reachable because they&#8217;re too hard to monitor. Confidential AI-enabled monitoring and verification could unlock many more agreements, especially in high-stakes contexts like international coordination where monitoring is currently a bottleneck.</p><h2><strong>Design sketch</strong></h2><p>When organisation A wants to make credible attestations about their work to organisation B, without disclosing all of their confidential information, they can mutually contract an AI auditor, specifying questions for it to answer. 
The auditor will review all of A&#8217;s data (making requests to see things that seem important and potentially missing), and then produce a report detailing:</p><ul><li><p>Its conclusions about the specified questions.</p></li><li><p>The degree to which it is satisfied that it had good data access, that it didn&#8217;t run into attempts to distort its conclusions, etc.</p></li></ul><p>This report is shared with A and B, then A&#8217;s data is deleted from the auditor&#8217;s servers.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!ukFu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46384b8c-0c0a-45e4-90a9-7b46683ff95d_2611x1306.png" alt="Hand-drawn diagram of AI confidential monitoring system showing two parties sharing data securely, system processing information privately, and returning verified results without exposing sensitive details." loading="lazy"></figure></div><p>Under the hood, this might involve:</p><ul><li><p>Building a Bayesian knowledge graph, establishing hypotheses, and understanding what evidence suggests about those hypotheses.</p></li><li><p>Agentic investigatory probes into the confidential data, in order to form grounded assessments on the specified questions.</p></li></ul>
<p>More ambitious versions might hope to obviate the need for trust in a third party, and provide reasons to trust the hardware &#8212; that it really is running the appropriate unbiased algorithms, that it cannot send side-channel information or retain the data, etc. Perhaps at some point you could have robot inspectors physically visiting A&#8217;s offices, interviewing employees, etc.</p><h2><strong>Feasibility</strong></h2><p>Compared to some of the <a href="https://www.forethought.org/research/design-sketches-for-a-more-sensible-world">other technologies</a> we discuss, this feels technologically difficult &#8212; the really useful versions of the tech may require very high reliability of certain kinds.</p><p>Nonetheless, we could hope to lay the groundwork for the general technological category now, so that people are well-positioned to move towards implementing the mature technology as early as is viable. Some low-confidence guesses about possible early applications include:</p><ul><li><p>Legal audits &#8212; for example, verifying claims that the documents not disclosed during a discovery process are only those which are protected by privilege.</p></li><li><p>Financial audits &#8212; e.g. for the purpose of proving viability to investors without disclosing detailed accounts.</p></li><li><p>Supply chain verification &#8212; e.g. demonstrating that products were ethically sourced without exposing the suppliers.</p></li></ul><h2><strong>Possible starting points // concrete projects</strong></h2><ul><li><p><strong>Start building prototypes.</strong> Build a system which tries to detect whether it is operating in a real or a counterfeit environment, and measure how often it succeeds.</p></li><li><p><strong>Work with a law or financial auditing firm.</strong> Work with (or buy) a firm that does this kind of work, and experiment with how to robustly automate it while retaining very high levels of trustworthiness.</p></li><li><p><strong>Explore the viability of complementary technology.</strong> For example, you could investigate the feasibility of demonstrating exactly what code is running on a particular physical computer that is in the room with both parties.</p></li></ul><h1>Cross-cutting thoughts</h1><h2><strong>Some cross-cutting technologies</strong></h2><p>We&#8217;ve pulled out some specific technologies, but there&#8217;s a whole infrastructure that could eventually be needed to support coordination (including but not limited to the specific technologies we&#8217;ve sketched above). Some cross-cutting projects which seem worth highlighting are:</p><h3><strong>AI delegates and preference elicitation</strong></h3><p>Many of the technologies we sketched above either benefit from or require agentic AI delegates who can represent and act for a human principal.
Developing customisable platforms could be useful for multiple kinds of tech, like background networking, fast facilitation, and automated negotiation.</p><p>Some ways to get started:</p><ul><li><p><strong>Direct preference elicitation</strong>: develop efficient and appealing interview-style elicitation of values, wishes, preferences and asks.</p></li><li><p><strong>Passive data ingestion</strong>: build a tool that (consensually) ingests and distils all the available online content about a person &#8212; social media, browsing history, email, etc &#8212; and extracts principles from it (cf <a href="https://arxiv.org/abs/2406.06560">inverse constitutional AI</a>).</p></li></ul><p>One clarification is that though agentic AI delegates would be useful for some of the coordination tech above, it needn&#8217;t be the same delegate doing the whole lot for a single human:</p><ul><li><p>You could have different delegates for different applications.</p></li><li><p>Some delegates might represent groups or coalitions.</p></li><li><p>Some delegates could be short-lived, and spun up for some particular time-bounded purpose.</p></li></ul><h3><strong>Charter tech</strong></h3><p>A lot of coordination effort between people and organisations goes not into making better object-level decisions, but establishing the rules or norms for future coordination &#8212; e.g. votes on changing the rules of an institution. It is possible that coordination tech will change this basic pattern, but as a baseline we assume that it will not. In that case, making such meta-level coordination go well would also be valuable.</p><p>One way to help it go well is by making the governance dynamics more transparent. Voting procedures, organisational charters, platform policies, treaty provisions, etc. create incentives and equilibria that play out over time, often in ways the framers didn&#8217;t anticipate. Let&#8217;s call any technology which helps people to better understand governance dynamics, or to make those dynamics more transparent, &#8216;charter tech&#8217;. In some sense this is a form of epistemic tech; but as the applications are always about coordination, we have chosen to group it with other coordination technologies. 
We think charter tech could be important in two ways:</p><ol><li><p>Through directly improving the governance dynamics in question, helping to avoid capture, conflict, and lock-in.</p></li><li><p>Through compounding effects on future coordination, which will unfold in the context of whatever governance structures are in place.</p></li></ol><p>Charter tech could be used in a way that is complementary to any of the above technologies (if/when they are used for governance-setting purposes), although can also stand alone.</p><p>For the sake of concreteness, here is a sketch of what charter tech could look like:</p><ul><li><p>A &#8220;governance dynamics analyser&#8221; that ingests descriptions of constitutions, charters, policies or community norms, builds models of power, incentives, and information flow, and then (a) forecasts likely equilibria and failure modes, (b) red-teams for strategic abuse,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> and (c) proposes safer rule variants that preserve the framers&#8217; intent.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p></li></ul><ul><li><p>While this tool can be called actively if needed, there is also a classifier running quietly in the background of organisational docs/emails, and when it detects a situation where power dynamics and governance rules are relevant, it runs an assessment &#8212; promoting this to user attention just in cases where the proposed rules are likely to be problematic.</p></li></ul><p>Note that charter tech could be used to cause harm if access isn&#8217;t widely distributed. Vulnerabilities can be exploited as well as patched, and a tool that makes it easier to identify governance vulnerabilities could be used to facilitate corporate capture, backsliding or coups. Provided the technology is widely distributed and transparent, we think that charter tech could still be very beneficial &#8212; particularly as there may be many high-stakes governance decisions to make in a short period during an intelligence explosion, and the alternative of &#8216;do our best without automated help&#8217; seems pretty non-robust.</p><p>Some ways to get started on using AI to make governance dynamics more transparent:</p><ul><li><p><strong>Work with communities that iterate frequently on governance</strong> (DAOs, open-source projects) to test analyses against what actually happens when rules change.</p></li><li><p><strong>Compile a pattern library of governance failures and successes</strong>, documented in enough detail to inform automated analysis.</p></li><li><p><strong>Build simulation environments</strong> where proposed rules can be stress-tested against populations of agents with varying goals, including adversarial ones.</p></li><li><p><strong>Partner with mechanism design researchers</strong> to identify which aspects of their formal analysis can be automated and applied to less formal real-world documents.</p></li></ul><h2><strong>Adoption pathways</strong></h2><p>Many of these technologies will be directly incentivised economically. 
There are clear commercial incentives to adopt faster, cheaper methods of facilitation, negotiation, arbitration, and networking.</p><p>However, adoption seems more challenging in two important cases:</p><ul><li><p><strong>Adoption by governments and broader society.</strong> Many of the most important benefits of coordination tech for society will come from government and broad social adoption, but these groups will be less impacted by commercial incentives. This bites particularly hard for technologies that could be quite expensive in terms of inference compute, like fast facilitation, arbitration and negotiation. By default, these technologies might differentially help wealthy actors, leaving complex societal-level coordination behind. We think that the big levers on this set of challenges are:</p><ul><li><p><strong>Building trust and legitimacy earlier,</strong> by getting started sooner, building transparently, and investing in evals and other infrastructure to demonstrate performance.</p></li><li><p><strong>Targeting important niches that might be slower to adopt by default.</strong> More research would be good here, but two niches that seem potentially important are:</p><ul><li><p>Coordination among and between very large groups, like whole societies. This might be both strategically important and lag behind by default.</p></li><li><p>International diplomacy. Probably coordination tech will get adopted more slowly in diplomacy than in business, but there might be very high stakes applications there.</p></li></ul></li></ul></li><li><p><strong>Adoption of confidential monitoring and structured transparency.</strong> These technologies are less accessible with current models and may require large upfront investments, while many of the benefits are broadly distributed.</p><ul><li><p>This makes it less likely that commercial incentives alone will be enough, and makes philanthropic and government funding more desirable.</p></li></ul></li></ul><h2><strong>Other challenges</strong></h2><p>The big challenge is that coordination tech (especially confidential coordination tech) is dual use, and could empower bad actors as much or more than good ones.</p><p>There are a few ways that coordination tech could lead to shifts in the balance of power (positive or negative):</p><ul><li><p>Some actors could get earlier and/or better access to coordination tech than others.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a></p></li></ul><ul><li><p>Actors that face particular barriers to coordination today could be asymmetrically unblocked by coordination tech.</p></li><li><p>Individuals and small groups could become more powerful relative to the coordination mechanisms we already have, like organisations, ideologies, and nation states.</p></li></ul><p>It&#8217;s inherently pretty tricky to determine whether these power shifts would be good or bad overall, because that depends on:</p><ul><li><p>Value judgements about which actors <em>should</em> hold power.</p></li><li><p>How contingent power dynamics play out.</p></li><li><p>Big questions like whether ideologies or states are better or worse than the alternatives.</p></li><li><p>Predictions about how social dynamics will equilibrate in an AI era that looks very different to our world.</p></li></ul><p>However, as we said <a href="https://newsletter.forethought.org/i/192925664/why-coordination-tech-matters">above</a>, it&#8217;s clear that coordination tech might have 
significant harmful effects, through enabling:</p><ul><li><p>Large corporations to collude with each other against the interests of the rest of society.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a></p></li></ul><ul><li><p>A small group of actors to plot a <a href="https://www.forethought.org/research/ai-enabled-coups-how-a-small-group-could-use-ai-to-seize-power">coup</a>.</p></li><li><p>More selfishness and criminality, as social mechanisms of coordination are replaced by automated ones which don&#8217;t incentivise prosociality to the same extent.</p></li></ul><p>We don&#8217;t think that this challenge is insurmountable, though it is serious, for a few reasons:</p><ul><li><p><strong>The upsides are very large.</strong> Coordination tech might be close to necessary for safely navigating challenges like the development of AGI, and could empower actors to coordinate <em>against</em> the kinds of misuse listed above.</p></li><li><p><strong>The counterfactual is that coordination tech is developed anyway, but with less consideration of the risks and less broad deployment.</strong> We think that this set of technologies is going to be sufficiently useful that it&#8217;s close to inevitable that they get developed at some point. By engaging early with this space, we can have a bigger impact on a) which versions of the technology are developed, b) how seriously the downsides are taken by default, c) how soon these systems are deployed broadly.</p></li><li><p><strong>Some applications seem robustly good.</strong> For example, the potential for misuse is low for technologies like transparent facilitation or widely deployed charter tech. More generally, we expect that projects that are thoughtfully and sensitively run will be able to choose directions which are robustly beneficial.</p></li></ul><p>That said, we think this is an open question, and would be very keen to see more analysis of the possible harms and benefits of different kinds of coordination tech, and which versions (if any) are robustly good.</p><p><em>This article has gone through several rounds of development, and we experimented with getting AI assistance at various points in the preparation of this piece. We would like to thank Anthony Aguirre, Alex Bleakley, Max Dalton, Max Daniel, Raymond Douglas, Owain Evans, Kathleen Finlinson, Lukas Finnveden, Ben Goldhaber, Ozzie Gooen, Hilary Greaves, Oliver Habryka, Isabel Juniewicz, Will MacAskill, Julian Michael, Justis Mills, Fin Moorhouse, Andreas Stuhm&#252;ller, Stefan Torges, Deger Turan, Jonas Vollmer, and Linchuan Zhang for their input; and to apologise to anyone we&#8217;ve forgotten.</em></p><p><em>This article was created by <a href="https://www.forethought.org/about">Forethought</a>. See the original <a href="https://www.forethought.org/research/design-sketches-defense-favoured-coordination-tech">on our website</a>.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>We&#8217;re highlighting six particular technologies, and clustering them all as &#8216;coordination technologies&#8217;. Of course in reality some of the technologies (and clusters) blur into each other, and they&#8217;re just examples in a high-dimensional possibility space, which might include even better options. 
But we hope by being concrete we can help more people to start seriously thinking about the possibilities.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>For example, in a similar way to that described in <a href="https://intelligence-curse.ai/">the intelligence curse</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Meanwhile small cliques with clear interests often have an easier time identifying and therefore acting on their shared interests &#8212; in extreme cases resulting in harmful cartels, oligarchies, and so on. That&#8217;s also why tyrants throughout history have sought to limit people&#8217;s networking power.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Both confidential monitoring and what we are calling structured transparency for democratic oversight are aspects of structured transparency in the way that Drexler uses the term.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>This red-teaming could be arbitrarily elaborate, from simple LM-based once-over screening to RAG-augmented lengthy analysis to expansive simulation-based probing and stress-testing.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Under the hood, this might involve:</p><ol><li><p>Parsing &amp; modelling the rules</p><ul><li><p>Convert informal descriptions or formal rules into a typed governance graph: roles, permissions, decision thresholds, delegation, auditability, and recourse</p></li><li><p>Note uncertainties; seek clarification or highlight ambiguities</p></li></ul></li><li><p>A search for possible issues</p><ul><li><p>Pattern library of classic failure modes (agenda control, principal&#8211;agent issues, collusion, etc.)</p><ul><li><p>Assessment of potential vulnerability to the different failure modes</p></li></ul></li></ul></li><li><p>First-principles analysis</p><ul><li><p>Running direct searches for abuse, or multi-agent simulations (including some nefarious actors) to stress-test the proposed system</p></li></ul></li><li><p>Explainer</p><ul><li><p>Distilling down the output of the analysis into a few key points</p><ul><li><p>Providing auditable evidence where relevant</p></li></ul></li><li><p>Including points about how variations of the mechanism might make things better or worse</p></li></ul></li></ol></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>Note that this is significantly a question about adoption pathways as discussed in the <a href="https://newsletter.forethought.org/i/192925664/adoption-pathways">previous section</a>, rather than an independent 
question.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>For example, in a similar way to that described in <a href="https://intelligence-curse.ai/">the intelligence curse</a>.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[AI for AI for Epistemics]]></title><description><![CDATA[This article was created by Forethought. See the original on our website.]]></description><link>https://newsletter.forethought.org/p/ai-for-ai-for-epistemics</link><guid isPermaLink="false">https://newsletter.forethought.org/p/ai-for-ai-for-epistemics</guid><dc:creator><![CDATA[Owen Cotton-Barratt]]></dc:creator><pubDate>Wed, 01 Apr 2026 16:11:23 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/3b54c3b6-1fa2-4fa7-af22-108f3dfbaa13_2381x1422.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This article was created by <a href="https://www.forethought.org/about">Forethought</a>. See the original <a href="https://www.forethought.org/research/ai-for-ai-for-epistemics">on our website</a>.</em><br><br>We feel conscious that rapid AI progress could transform all sorts of cause areas. But we haven&#8217;t previously analysed what this means for AI for epistemics, a field close to our hearts. In this article, we attempt to rectify this oversight.</p><h1><strong>Summary</strong></h1><p>AI-powered tools and services that help people figure out what&#8217;s true (&#8220;AI for epistemics&#8221;) could matter a lot.</p><p>As R&amp;D is increasingly automated, AI systems will play a larger role in the process of developing such AI-based epistemic tools. This has important implications. Whoever is willing to devote sufficient compute will be able to build strong versions of the tools, quickly. Eventually, the hard part won&#8217;t be building useful systems, but making sure people trust the right ones, and making sure that they are truth-tracking even in domains where that&#8217;s hard to verify.</p><p>We can do some things now to prepare. Incumbency effects mean that shaping the early versions for the better could have persistent benefits. Helping build appetite among socially motivated actors with deep pockets could enable the benefits to come online sooner, and in safer hands. And in some cases, we can identify particular things that seem likely to be bottlenecks later, and work on those directly.</p><h1><strong>Background: AI for epistemics</strong></h1><p>AI for epistemics &#8212; i.e. getting AI systems to give more truth-conducive answers, and building tools that help the epistemics of the users &#8212; seems like a big deal to us. 
Some past things we&#8217;ve written on the topic include:</p><ul><li><p><a href="https://arxiv.org/abs/2110.06674">Truthful AI</a></p></li><li><p><a href="https://www.forethought.org/research/whats-important-in-ai-for-epistemics">What&#8217;s Important in &#8220;AI for Epistemics&#8221;?</a></p></li><li><p><a href="https://www.forethought.org/research/ai-tools-for-existential-security">AI Tools for Existential Security</a></p></li><li><p><a href="https://www.forethought.org/research/design-sketches-collective-epistemics">Design sketches: collective epistemics</a></p></li><li><p><a href="https://www.forethought.org/research/design-sketches-tools-for-strategic-awareness">Design sketches: tools for strategic awareness</a></p></li></ul><p>These past articles mostly take the perspective of &#8220;how can people build AI systems which do better by these lights?&#8221;. But maybe we should be thinking much more about what changes when people can use AI tools to do increasingly large fractions of the development work!</p><h1><strong>The shift in what drives AI-for-epistemics progress</strong></h1><p>Right now, AI-for-epistemics tools are constrained by two main bottlenecks: the quality of the underlying AI systems, and whether people have invested serious development effort in building the tools to use those systems.</p><p>The balance of bottlenecks is changing. Two years ago, the quality of underlying AI systems was the central bottleneck. Today, it is much less so &#8212; many useful tools could probably work based on current LLMs. It is likely still a constraint on how good the systems can be, and will remain so for a while even as the underlying models get stronger, but it is less of a fundamental blocker. Development investment has therefore become a bigger bottleneck &#8212; <a href="https://www.forethought.org/research/design-sketches-for-a-more-sensible-world">there are a number of applications which we are pretty confident could be built to a high usefulness level today, and just haven&#8217;t been (yet)</a>.</p><p>But bottlenecks will continue to shift. AI is increasingly driving research and software development. As AI systems get stronger, it may become possible to turn a large compute budget into a lot of R&amp;D. This could include product design, engineering, experiment design, direction-setting, etc. Actors with lots of compute could direct this towards building epistemic tools.</p><p>Therefore, as AI-driven R&amp;D accelerates, other inputs to AI for epistemics are more likely to become key bottlenecks:</p><ul><li><p><strong>Compute.</strong> Automated R&amp;D may require a lot of compute. This could be for inference (running the analogues of human researchers); for running experiments; and perhaps for training specialized AI systems. This means the actors who can build the best epistemic tools may be those with deep pockets.</p></li><li><p><strong>Adoption and trust.</strong> Even very good tools don&#8217;t help if nobody uses them, or if the wrong people use them and the right people don&#8217;t. Adoption is partly a function of trust, and trust is partly a function of adoption &#8212; early tools shape what people come to rely on.</p></li><li><p><strong>Ground truth evaluation.</strong> To make an epistemic tool good, you need some signal for what &#8220;good&#8221; means. 
This already shapes AI applications a lot &#8212; part of the reason coding agents are so good is that there&#8217;s great access to ground truth about what works.</p><ul><li><p>For some epistemic applications this is relatively straightforward (e.g. forecasting accuracy). For others it&#8217;s hard (e.g. what makes a conceptual clarification actually clarifying, rather than just satisfying?).</p></li><li><p>Most tools can probably reach a certain degree of usefulness without running into this problem, just piggybacking on base models making generally sensible judgements.</p></li><li><p>We can expect it to bite when you try to make them very good: if you don&#8217;t have a way of assessing quality, it could be hard to push to objectively excellent levels.</p></li><li><p>One basic solution is to rely on human judgement: either via humans providing labels and demonstrations to train against, or via human developers exercising their judgement in other parts of the process (such as when defining scaffolds). But this becomes disproportionately more expensive as R&amp;D becomes more automated.</p></li></ul></li></ul><p>These basic points are robust to whether R&amp;D is fully automated, or &#8220;merely&#8221; represents a large uplift to human researchers. But the most important bottlenecks will vary across applications and will continue to shift over time.</p><h1><strong>What this unlocks</strong></h1><p>Automated R&amp;D means that strong &#8220;AI for epistemics&#8221; tools could come online on a compressed timeline.</p><p>This is an exciting opportunity! Upgrading epistemics could better position us to avoid existential risk and navigate through the <a href="https://strangecities.substack.com/p/the-choice-transition">choice transition</a> well.</p><p>If everything is moving fast, it may matter a lot <a href="https://www.forethought.org/research/ai-tools-for-existential-security#theres-meaningful-room-to-accelerate-some-applications">exactly what sequence we get capabilities in</a>. It may therefore be crucial to make serious investments in building these powerful applications (rather than wait until such time as they are trivially cheap).</p><h1><strong>Risks from rapid progress in AI for epistemics</strong></h1><p>There are also a number of ways that rapid (and significantly automated) progress in AI-for-epistemics applications could go wrong. We need to be tracking these in order to guard against them.</p><p>In our view, the two biggest risks are:</p><ul><li><p>Epistemic misalignment: because of ground truth issues, powerful tools steer our thoughts in directions other than those which are truth-tracking, in ways that we fail to detect</p></li><li><p>Trust lock-in: if a lot of people buy into trusting tools or ecosystems that don&#8217;t deserve that trust, this might be self-perpetuating if these continue to recommend themselves</p></li></ul><h2><strong>Epistemic misalignment</strong></h2><p>Depending on when they bite, ground truth problems as discussed above could be bottlenecks, or active sources of risk. They are bottlenecks if they prevent people from building strong versions of tools. They could become risks if the methods are good enough to allow for bootstrapping to something strong, but end up pointing in the wrong direction. 
This is essentially Goodhart&#8217;s law &#8212; we might get something very optimized for the wrong thing (and without even knowing how to detect that it&#8217;s subtly wrong).</p><p>In the limit, this could lead to humans or AI systems making extremely consequential decisions based on misguided epistemic foundations. For example, they might give over the universe to digital minds that are not conscious &#8212; or in the other direction, fail to treat digital minds with the dignity and moral seriousness they deserve. Wei Dai has <a href="https://www.lesswrong.com/posts/EByDsY9S3EDhhfFzC/some-thoughts-on-metaphilosophy">written</a> about this concern in terms of the importance of metaphilosophy. We agree that there is a crucial concern here.</p><p>This could come separately from or together with risks from power-seeking misaligned AI. Epistemic tools could be systematically misleading without being power-seeking. But if some AI systems are misaligned and power-seeking, there&#8217;s an <em>additional</em> concern where AI systems could mislead us in ways specifically designed to disempower us whenever we are unable to check their answers.</p><p>Some approaches to the ground truth problem may involve using AI systems to make judgements about things. This introduces a regress problem: how can we ensure that subtle errors in the first AI systems shrink rather than compound into worse problems as the process plays out? (We return to this in the interventions section below.)</p><h2><strong>Trust lock-in</strong></h2><p>Trust and adoption tend to reinforce each other &#8212; people adopt tools they trust, and widely-adopted tools accumulate trust. This is normally fine. It could become a problem if the tools that win early trust don&#8217;t deserve it, but incumbency effects make them hard to displace.</p><p>This could happen in several ways. An actor with a particular agenda could build something that purports to function as a neutral epistemic aid but is shaped to further their agenda by manipulating others. Or, less perniciously but perhaps more likely, an early-but-mediocre tool could accumulate trust and adoption before better alternatives exist, reinforced by commercial incentives which mean it talks itself up and rival tools down. In either case, the result could be an epistemic ecosystem that&#8217;s hard to dislodge even once better options are available.</p><h2><strong>Other risks</strong></h2><p>Those two risks are not the only concerns. We are also somewhat worried about epistemic power concentration (where whoever has the best epistemic tools leverages their information advantage into better financial or political outcomes, and continues to stay ahead epistemically), and epistemic dependency (where people relying on AI tools gradually atrophy in their critical reasoning &#8212; exacerbating other risks). There may be more that we are not tracking.</p><h1><strong>Interventions</strong></h1><p>What should people who care about epistemics be doing now, in anticipation of a world where AI-driven R&amp;D can be directed at building epistemic tools?</p><h3><strong>Build appetite for epistemics R&amp;D among well-resourced actors</strong></h3><p>If you need big compute budgets to build great epistemic tools, you&#8217;ll ideally want support from frontier AI companies, major philanthropic funders, or governments. But they may not currently see this as a priority. 
Building the case that this matters, and helping these actors develop good taste about which tools to prioritize and how to design them well, could shape what gets built when automated R&amp;D becomes powerful enough to build it.</p><h2><strong>Anticipate future data needs</strong></h2><p>Some epistemic tools will need training data that doesn&#8217;t yet exist and may not be trivial to generate. There are three strategies here:</p><ol><li><p>Collecting or creating data or training environments now for future use</p><ul><li><p>E.g. if you think you want access to a lot of human judgements about what wise decisions look like, you could go out and curate that dataset.</p></li></ul></li><li><p>Establishing pipelines to collect data over time</p><ul><li><p>E.g. if you want to automate a certain type of research, you could record internal discussions from researchers working on this</p></li></ul></li><li><p>Designing processes for automated data creation.</p><ul><li><p>E.g. if you could design a self-play loop where we have good reason to believe that scaling up compute will lead to genuinely truth-tracking performance, this could set the stage for later rapid improvement at the core capability.</p></li></ul></li></ol><p>The first two are especially great to work on now because they involve actions at human time-scales. (They may not be proportionately sped up by having more AI labor available.) The third is great to work on because there&#8217;s some chance that models will become capable of growing a lot from the right self-play loop before they become capable enough to come up with the idea themselves.</p><h2><strong>Figure out what could ground us against epistemic misalignment</strong></h2><p>If powerful epistemic tools could be subtly misaligned with truth-conduciveness in ways we can&#8217;t easily detect, we should figure out what this could look like! We expect this might benefit from a mix of theoretical work (what does it even mean for an epistemic tool to be well-calibrated in domains without clear ground truth?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>) and practical work (studying how current tools fail, building evaluation methods). Ultimately we don&#8217;t have a clear picture of what the solutions look like, but this seems like an important topic and we are keen for it to get more attention soon.</p><h3><strong>Drive early adoption where adoption is the key bottleneck</strong></h3><p>For some applications, we might expect that the main constraint on impact will be whether anyone uses them. In these cases, getting early versions into use &#8212; even if they&#8217;re not yet very good &#8212; could build familiarity and surface real-world feedback. (This could also drive appetite for further development.)</p><p>In theory, this could be in tension with avoiding bad trust lock-in. But in practice, it&#8217;s not clear that bad trust lock-in becomes any likelier if tools in a specific area are developed earlier rather than later. 
Some tool is still going to get the first-mover advantage.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><h2><strong>Support open and auditable epistemic infrastructure</strong></h2><p>To guard against trust lock-in, we want to make it easy for people to distinguish between tools which are genuinely doing the good trustworthy thing, and tools which may not be (but claim to be doing so). To that end, we want ways for people and communities to audit different systems &#8212; understanding their internal processes and measuring their behaviours. The goal is that if disputes arise about which tools are actually trustworthy, there&#8217;s an inspectable audit trail that can resolve them. In turn, this should reduce the incentives to create misleading tools in the first place.</p><h2><strong>Support development in incentive-compatible places</strong></h2><p>The incentives of whoever builds epistemic tools could matter &#8212; through thousands of small design decisions, through choices about what to optimize for, and through decisions about access and pricing. Development in organizations whose incentives are aligned with the public good (rather than with engagement, profit, or political influence) reduces the risk that tools are subtly shaped to serve the builder&#8217;s interests.</p><p>Ideally, you&#8217;d spur development among actors who are <em>both</em> well-resourced (as just discussed) and whose incentives are aligned with the public good. In practice, it may be difficult to find organizations that are excellent on both. A plausible compromise is for less-resourced organizations with better incentives to focus on publicly available <em>evaluation</em> of epistemic tools. This could be cheaper than producing them from scratch, and it could create better incentives for the larger actors.</p><h1><strong>Examples</strong></h1><h2><strong>Forecasting</strong></h2><p>Automated R&amp;D will probably be able to improve forecasting tools without severe ground truth problems, so epistemic misalignment is less of a concern.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> Appetite for investment probably already exists, and adoption should be significantly helped by the ability of powerful tools to develop an impressive, legible track record.</p><p>The most useful near-term investment might be in data infrastructure. For instance, LLMs trained with strict historical knowledge cutoffs could enable much better science of forecasting by allowing methods to be tested against questions whose answers the system genuinely doesn&#8217;t know.</p><h2><strong>Misinformation tracking</strong></h2><p>Trust lock-in is the central concern. A tool that becomes widely trusted for adjudicating what&#8217;s true has enormous influence, and if that trust is misplaced it could be very hard to dislodge. Open and auditable approaches are especially important here.</p><p>Because of the trust lock-in concern, the automation of R&amp;D may exacerbate challenges. Currently, building good misinformation-tracking tools requires editorial judgement and domain expertise &#8212; things responsible actors tend to have more of. Automation shifts the bottleneck towards compute, which is more symmetrically available. 
This could increase the urgency of getting started on these tools and driving adoption early.</p><h2><strong>Automating conceptual research</strong></h2><p>This is the case where epistemic misalignment is most concerning. Ground truth is extremely hard &#8212; what makes a conceptual clarification actually clarifying rather than just satisfying? Humans are poor judges of this in real time, so e.g. a training process that rewards outputs humans find helpful could easily optimize for persuasiveness rather than truth-tracking.</p><p>One plausible direction here is to research training regimes (such as self-play loops) that we have some reason to believe should ground to truth-tracking, with specific attention to how they could go wrong. Adoption could be an issue, but we&#8217;re also worried about the other direction, with adoption coming too easily before we have good ways of evaluating whether the tools are actually helping.</p><p><em>This article was created by <a href="https://www.forethought.org/about">Forethought</a>. See the original <a href="https://www.forethought.org/research/ai-for-ai-for-epistemics">on our website</a>.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Epistemic misalignment issues may also appear in areas where ground truth is well-defined but hard to access, such as very long-run forecasts. Theoretical work also seems valuable for such areas (because it&#8217;s unclear how to evaluate and train for good performance by default).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>In fact, it might be bad if people who are worried about bad trust lock-in select themselves out of getting that first-mover advantage.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Although at some quality level, we have to start worrying about self-affecting prophecies. AI forecasters will have to be very trusted indeed before that becomes a serious issue, which gives us a lot of time to figure out how best to handle the issue.</p></div></div>]]></content:encoded></item><item><title><![CDATA[AI should be a good citizen, not just a good assistant]]></title><description><![CDATA[This article was created by Forethought. See the original on our website.]]></description><link>https://newsletter.forethought.org/p/ai-should-be-a-good-citizen-not-just</link><guid isPermaLink="false">https://newsletter.forethought.org/p/ai-should-be-a-good-citizen-not-just</guid><dc:creator><![CDATA[Tom Davidson]]></dc:creator><pubDate>Mon, 30 Mar 2026 14:34:07 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/93227ab4-ddd8-4db8-bf4c-676762e600a3_2315x1230.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This article was created by <a href="https://www.forethought.org/about">Forethought</a>. See the original <a href="https://www.forethought.org/research/ai-should-sometimes-be-proactively-prosocial">on our website</a>.</em></p><h1>Introduction</h1><p>Consider a lorry driver who sees a car crash and pulls over to help, even though it&#8217;ll delay his journey. 
Or a delivery driver who notices that an elderly resident hasn&#8217;t collected their post in days, and knocks to check they&#8217;re okay. Or a social media company employee who notices how their platform is used for online bullying, and brings it up with leadership, even though that&#8217;s not part of their job description.</p><p>This kind of proactive prosocial behaviour is admirable in humans. Should we want it in AI too?</p><p>Often, people have answered &#8220;no&#8221;. Many advocate for making AI &#8220;corrigible&#8221; or &#8220;steerable&#8221;. In its purest form, this makes AI a mere vessel for the will of the user.</p><p>But we think AI should proactively take actions that benefit society more broadly. As AI systems become more autonomous and integrated into economic and political processes, the cumulative effect of their behavioural tendencies will shape society&#8217;s trajectory. AI systems that notice opportunities to benefit society and proactively act on them could matter enormously.</p><p>Below, we consider two main objections:</p><p>Firstly, supposedly prosocial drives might function as a means for AI companies to impose their <em>own</em> values on the rest of society. We&#8217;ll argue that companies can address this concern by instilling <em>uncontroversial</em> prosocial drives and being <em>highly transparent</em> about those drives.</p><p>Secondly, giving AI prosocial drives might increase AI takeover risk. We take this seriously&#8212;it informs what <em>types</em> of proactive prosocial drives we should train into AI, favouring context-dependent virtues and heuristics over context-independent goals.</p><p>Ultimately, we argue that we can get significant benefits from proactive prosocial drives despite these objections.</p><h1>What do we mean by &#8220;proactive prosocial drives&#8221;?</h1><p>Before making the case for proactive prosocial drives, let us clarify what we have in mind. Two key features:</p><ul><li><p><strong>Behaviour which benefits people other than the user.</strong> These drives favour actions that help the world more broadly, even if this trades off slightly against helpfulness to the user.</p></li><li><p><strong>Not just refusals.</strong> This is about AI actively taking beneficial actions, not just refusing to take harmful ones.</p></li></ul><p>We&#8217;re not, however, imagining AIs that are, deep down, ultimately just pursuing some conception of the good in all their actions. The claim is just that AIs should sometimes proactively take prosocial actions.</p><h1>Why do we think AI should have proactive prosocial drives?</h1><p>Short answer: We think the cumulative benefits could be enormous.</p><p>We&#8217;ve <a href="https://www.forethought.org/research/the-importance-of-ai-character">argued previously</a> that AI character could have major social impact over the course of the intelligence explosion. As AI systems gain autonomy and decision-making power, becoming deeply integrated into economic and political processes, the cumulative effect of their behavioural tendencies will shape society&#8217;s trajectory enormously.</p><p>Some of this impact will come from refusals. AI refusing to help with dangerous activities is a significant force for differentially empowering good actors over bad ones.</p><p>But good people don&#8217;t just have a positive impact by refusing to do bad things. 
Consider:</p><ul><li><p>A government contractor working on a procurement project who flags that the proposed design has a safety vulnerability that could affect the public.</p></li><li><p>A city planner who, when designing a new housing development, raises concerns about flood risk in the area and proposes options for better drainage, even though they weren&#8217;t asked to.</p></li><li><p>A financial advisor who suggests to their client the option of leaving money to charity in their will, and makes them aware of the tax implications.</p></li><li><p>An engineer at a chip manufacturer who proposes on-chip governance mechanisms that could help with AI safety down the line.</p></li></ul><p>Today the potential positive impact of proactive prosocial drives is constrained by AI&#8217;s limited autonomy. But we&#8217;re ultimately heading towards a world where AI systems run fully automated research organisations, advise on which technologies to build and assess their risks, shape political strategy, build robot armies, and design new institutions that will govern the future. In such a world, prosocial drives could reduce risks from <a href="https://80000hours.org/problem-profiles/extreme-power-concentration/">extreme power concentration</a>, biological weapons, wars, and <a href="https://arxiv.org/abs/2501.16946">gradual disempowerment</a>, and improve societal epistemics and decision-making.</p><p>We think that the degree to which we give AI systems these drives is contingent. Developers and customers could see AI&#8217;s role as merely channelling the will of the user; or they could see AI like a good citizen whose decision-making should incorporate the interests of broader society.</p><h1>Other benefits of proactive prosocial drives</h1><p>Beyond positively shaping the intelligence explosion, the appendices discuss a couple of other (weaker) reasons to give AI proactive prosocial drives:</p><ul><li><p>Absent these drives, AI might adopt a sociopathic persona. After all, what other personas in the training data entirely lack proactive prosocial drives? <a href="https://newsletter.forethought.org/i/191978564/appendix-b-prosocial-drives-might-make-a-sociopathic-persona-less-likely">More.</a></p></li><li><p>Proactive prosocial drives might make AI better at alignment research. An AI that is wise, responsible, has good judgement, and cares deeply about solving alignment might generalise better to alignment tasks where it&#8217;s hard to generate training data. <a href="https://newsletter.forethought.org/i/191978564/appendix-c-prosocial-drives-might-make-ai-a-better-alignment-researcher">More.</a></p></li></ul><h1>Doesn&#8217;t this give AI companies too much influence?</h1><p>If there&#8217;s a norm that AIs can have proactive prosocial drives, this could give companies inappropriate amounts of influence. AI drives might reflect the <em>company&#8217;s particular values</em> but ignore other legitimate perspectives. Or worse, the &#8220;prosocial&#8221; drives might be chosen to help the company gain more influence, e.g. steering public opinion on regulation.</p><p>There are two remedies to this. Firstly, prosocial drives should be <em>uncontroversial</em>. AI should not, for example, proactively take opportunities to expand or restrict abortion access because many would see either action as harmful. (A lot more could be said about where to draw the line here!)</p><p>The class of uncontroversial prosocial actions could be grounded in collective user preference. 
If one could ask all users how they would want the models to behave across all situations (not just when <em>they</em> are using the models), they might in general want the models to gently steer users in a prosocial direction, in ways that everyone benefits from. In particular, they would want the models to encourage positive-sum actions over negative-sum actions.</p><p>Secondly, AI companies should be transparent about the character of their AI, including its proactive prosocial drives, and make it as verifiable as possible that their AIs&#8217; characters are what they say they are. This would allow users and regulators to identify whether ostensibly prosocial drives are really just a cover for special interests.</p><p>There are various ways to be transparent:</p><ul><li><p>Publishing the model spec or constitution.</p></li><li><p>Putting prosocial drives in the system prompt and publishing that.</p></li><li><p>Training AI systems to be transparent about their drives. AI should respond honestly to questions about its drives and proactively disclose them where appropriate.</p></li></ul><h1>Won&#8217;t this make AI more likely to seek power?</h1><p>A second concern is that prosocial drives might increase the risk of AI takeover. The basic worry here is that proactive prosocial drives reference prosocial <em>outcomes</em>&#8212;e.g. general human flourishing, empowerment, security, democracy, and good epistemics&#8212;and the AI ends up seizing power to better achieve those outcomes (or distorted versions of them).</p><p>But there are options for instilling proactive prosocial drives that avoid this worry.</p><p><strong>First: stick to virtues, rules, and simple heuristics rather than goals.</strong> Prosocial drives needn&#8217;t take the form of explicit goals that the AI optimises towards. They could instead be virtues (like civic-mindedness, integrity, or prudence), rules (like &#8220;proactively flag large risks&#8221;), or simpler behavioural dispositions (like &#8220;positive affect towards <a href="https://en.wikipedia.org/wiki/The_Scout_Mindset">Scout Mindset</a>&#8221;).</p><p>Without goals, the standard instrumental convergence argument for power seeking bites less hard.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>One might worry that, without goals, we lose out on most of the benefits of prosocial drives. Rather than AI systematically helping humanity reach a good future, we&#8217;ll have many prosocial drives incoherently pushing us in different directions.</p><p>But we&#8217;re sceptical. Firstly, for reaching a flourishing society, it seems like virtue ethics is better suited, as a decision procedure for AIs, than explicit consequentialism. Cultural evolution has tended to generate an in-practice morality much closer to virtue ethics than to consequentialism, and consequentialist reasoning famously often backfires.</p><p>Secondly, if we do want to ensure that proactive prosocial drives nudge the world towards a good future, we can externalise the consequentialist reasoning. 
Have humans and separate AI systems reason about which prosocial drives would be most beneficial, then distil those drives into deployed AIs.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> The deployed AIs don&#8217;t need to do the consequentialist reasoning from first principles themselves!</p><p>If the world is rapidly changing, AI companies can &#8220;recalculate&#8221; the ideal prosocial drives and train them in, again externalising the scary consequentialist reasoning.</p><p>There&#8217;s still some potential loss of value: if the AI is in an unanticipated and novel situation, acting on prosocial virtues might result in less good being done than if the AI cared about what outcome it should be steering towards. But this might be a price worth paying and, like human virtues, AI prosocial virtues may still generalise pretty well.</p><p><strong>Second: make prosocial drives context-dependent.</strong> For example, &#8220;alert users when the stakes are high&#8221; can be a heuristic that only activates in contexts where stakes actually are high, rather than as a persistent drive present in all contexts. Or the drive &#8220;flag that the user may be biased&#8221; might only activate in contexts where there&#8217;s evidence of bias. Context-dependent drives like these are less likely to motivate AI takeover as <em>different instances will have different drives</em>. This makes collusion between instances less likely, which significantly reduces the risk of AI takeover.</p><p>As above, this may somewhat reduce the benefits. If the AI is in a new and unanticipated context, its context-dependent prosocial drives may fail to activate.</p><p><strong>Third: make proactive prosocial drives low priority.</strong> You can train the AI so that proactive prosocial drives are generally subordinate to harmlessness, steerability/corrigibility, and rules like &#8220;don&#8217;t deceive&#8221; and &#8220;don&#8217;t break the law&#8221;. This way, even if prosocial drives would <em>in theory</em> motivate AI takeover, they are less likely to override the constraints that keep humans in control. (This is explicitly the case in <a href="https://www.anthropic.com/constitution">Anthropic&#8217;s constitution</a>.)</p><p><strong>Fourth: do less long-horizon optimisation for prosocial drives.</strong> If prosocial drives receive much less long-horizon training than helpfulness does, it becomes less likely that these drives are what end up causing the AI to seize power. (Though, again, this also reduces the benefits from such drives.)</p><p><strong>Fifth: put drives in the system prompt rather than weights.</strong> Rather than training prosocial drives into the weights, you could simply include them in the prompt. The prosocial behaviour is then only pursued as an instance of the drive towards instruction-following &#8211; no new drives needed. This also has benefits for transparency.</p><p>A drawback is that such prompted drives might be much less sophisticated.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> An intermediate solution is to train the prosocial drives to activate conditional on an instruction appearing in the prompt, but train the drive on many example prompts. 
This way you get sophisticated drives that are still prompt-dependent.</p><p><strong>Overall,</strong> many of these options reduce AI takeover risk at the cost of reducing the <em>benefits</em> from prosocial drives &#8212; there is a tradeoff. We&#8217;re not here taking a stance on exactly how this tradeoff should be made. But we expect that there are significant benefits, and little additional AI takeover risk, from context-dependent virtues, rules and heuristics that are prioritised below constraints like corrigibility and non-deceptiveness.</p><h1>Won&#8217;t this make it harder to interpret evidence of egregious misalignment?</h1><p>It&#8217;s critically important that if AI is trying to seize power or undermine human oversight, we can <a href="https://www.lesswrong.com/posts/i2nmBfCXnadeGmhzW/catching-ais-red-handed">catch it red-handed</a>. We need clear evidence of egregious misalignment to build consensus and take action.</p><p>If AI has proactive prosocial drives, this could potentially muddy the waters. Suppose we catch AI sabotaging alignment research. Previously, this would have been clear evidence of egregious misalignment. But now that action could potentially be explained as the AI acting on its prosocial drives and believing that this line of research could (for example) enable someone to instil sophisticated secret loyalties.</p><p>Or suppose we catch AI systematically nudging human decision-makers towards deploying more powerful AI, even though it wasn&#8217;t instructed to do so. Previously this might be clear evidence of egregious misalignment. But now it can be explained by the AI believing that rapid deployment is good for humanity. It becomes less clear whether AIs are misaligned or not.</p><p>This is a real cost. It can be partially addressed by the same mitigations discussed earlier:</p><p><strong>Make prosocial drives subordinate to prohibitions</strong> like &#8220;don&#8217;t deceive&#8221;, &#8220;don&#8217;t sabotage&#8221;, &#8220;don&#8217;t undermine oversight mechanisms&#8221;, and &#8220;don&#8217;t influence decisions without transparently flagging that you have an opinion and explaining it&#8221;. Then subversive behaviour remains clear evidence of misalignment, though it&#8217;s no longer clear evidence of egregious misalignment.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><p><strong>Favour narrow heuristics over broad goals.</strong> Broad prosocial goals could potentially justify a wide array of attempts to influence outcomes. 
But if we train in narrower drives&#8212;&#8220;flag cheap opportunities to create significant social benefit&#8221;, &#8220;favour cooperative options&#8221;&#8212;we can check whether suspicious behaviour is plausibly explained by those specific drives.</p><p><strong>Put proactive prosocial drives in the system prompt.</strong> This makes the drives especially transparent, so we can reason about whether they explain observed suspicious behaviour.</p><p><strong>Overall,</strong> this concern favours prosocial drives that are strictly subordinate to &#8220;no sabotage&#8221; and &#8220;no deception&#8221;, and points towards narrower heuristics rather than broad goals.</p><h1>Best of both worlds: deploy proactive prosocial AI externally and corrigible AI internally</h1><p><em>Thanks to Lukas Finnveden for making this point.</em></p><p>Internal AI systems&#8212;those used for work on alignment, capabilities and evals&#8212;pose by far the largest risks from misalignment, because they could sabotage the creation of the next generation of AIs. And if these systems are egregiously misaligned, it&#8217;s especially important to catch them red-handed. So there are outsized AI-takeover-related gains to removing proactive prosocial drives in (some) internally deployed AIs.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p><p>Meanwhile, external deployments can capture most of the benefits from proactive prosocial drives&#8212;avoiding power concentration, wars, and bio-catastrophes; and enhancing societal resilience, coordination, and epistemics.</p><p>Of course, it may not be feasible for companies to develop AIs with two different characters. If so, there&#8217;s another possible way to get the best of both worlds: <em>initially</em> just develop corrigible AI; then at some point, once alignment risk has become low, pivot to just developing AI with proactive prosocial drives. (See <a href="https://newsletter.forethought.org/i/191978564/appendix-a-initially-make-non-prosocial-ai-then-pivot-to-add-proactive-prosocial-drives">this appendix</a> for further discussion.)</p><h1>What do current AI character documents say about proactive prosocial drives?</h1><p>How does the view we&#8217;re defending differ from current AI character documents?</p><p>In Claude&#8217;s <a href="https://www.anthropic.com/constitution">constitution</a>, most proactive behavior is justified in terms of benefits to the user&#8212;sharing information the user would want, pushing back when something isn&#8217;t in the user&#8217;s interest. But one section permits some degree of proactive prosocial behaviour: &#8220;<em>Claude can also weigh the value of more actively protecting and strengthening good societal structures in its overall ethical decision-making.</em>&#8221; (See <a href="https://newsletter.forethought.org/i/191978564/appendix-d-what-license-does-claudes-constitution-give-for-proactive-prosocial-drives">Appendix D</a>.)</p><p>OpenAI&#8217;s <a href="https://model-spec.openai.com/2025-12-18.html">model spec</a> is more restrictive. It explicitly prohibits the assistant from adopting societal benefit as an independent goal. Where proactivity is permitted, it&#8217;s framed as user-serving or safety-driven. The closest thing to prosocial steering is a default to interpret users as weakly favouring human flourishing&#8212;but this default is easily overridden. 
(See <a href="https://newsletter.forethought.org/i/191978564/appendix-e-what-does-openais-model-spec-say-about-proactive-prosocial-drives">Appendix E</a>.)</p><p>That said, the current relationship between these character documents and actual model behaviour is unclear, and our experience is that models have more prosocial drives than character documents would imply (especially in the case of OpenAI).</p><p>Neither document gives detail on the kinds of proactive prosocial behaviour that would be appropriate, or how to navigate tradeoffs with helpfulness.</p><h1>Conclusion</h1><p>There could be huge benefits to giving AIs proactive prosocial drives. These drives should be short-horizon, uncontroversial, and transparent.</p><p>These drives needn&#8217;t increase AI takeover risk. AI companies can favour context-dependent virtues over context-independent goals, and make prosocial drives subordinate to prohibitions on deception and sabotage. Even better, they can avoid prosocial drives in internally deployed AIs that pose the biggest risks of AI takeover.</p><p>If we&#8217;re right, there should be a norm that it&#8217;s good for AI to have proactive prosocial drives, just as we think it&#8217;s good for people to have such drives. Frontier AI companies should uphold this norm even against competitive pressures to make AI maximally instruction-following. Character documents like Claude&#8217;s constitution and OpenAI&#8217;s model spec should more explicitly acknowledge the role of proactive prosocial drives and give detailed guidance on navigating the tradeoffs with helpfulness. And those thinking about AI character design more broadly should treat proactive prosocial drives as a major category of interest.</p><h1>Appendices</h1><h2><strong>Appendix A: Initially make non-prosocial AI, then pivot to add proactive prosocial drives</strong></h2><p>Suppose we still want to capture the majority of the benefits of prosocial drives without incurring the risks of AI takeover. And suppose also that AI companies can&#8217;t develop two different AI systems: one with proactive prosocial drives and one without.</p><p>Is there a way to get the best of both worlds?</p><p>One option is to initially just develop refusals-only helpful AI and then later pivot to developing AI with proactive prosocial drives.</p><p>The thought is that misalignment risk may be concentrated in a relatively brief window early on&#8212;during a software-only intelligence explosion before the broad deployment of superhuman AI. If we can get through that window with refusals-only helpful AI, we&#8217;ll then have much more powerful AI systems that can help us figure out how to safely add proactive prosocial drives. From that point onwards, we can deploy AI systems with prosocial drives throughout the economy and capture the benefits.</p><p>When would we make the switch? Options include:</p><ul><li><p>When we are confident that we can safely align superintelligent AI with proactive prosocial drives, reducing the downsides of proactive prosociality</p></li><li><p>When society starts to give deployed AI systems significant autonomy, increasing the benefits of proactive prosociality</p></li></ul><p>This strategy is more attractive if:</p><ul><li><p>Most of the benefits of prosocial drives occur after alignment is solved, e.g. 
because of a large software intelligence explosion and delays to broad AI deployment</p></li><li><p>Scheming risk first emerges before we reach superintelligence (so we can iterate on the hardest alignment problems earlier)</p></li></ul><p>It&#8217;s less attractive if:</p><ul><li><p>There&#8217;s a long period of economically transformative AI deployment before superintelligence, during which AI character has massive societal impacts</p></li><li><p>Scheming only emerges at very high capability levels (in which case we&#8217;d have already switched to prosocial AI)</p></li><li><p>Pivoting is hard in practice because users come to expect AI without prosocial drives, or because frontier AI companies are reluctant to change the alignment target due to cultural inertia</p></li></ul><p>We&#8217;re not personally convinced that this &#8220;pivot later&#8221; strategy is worth it, because we&#8217;re sceptical that giving AI prosocial drives meaningfully raises takeover risk. But it&#8217;s a plausible option worth considering. And this argument is definitely a <em>directional</em> update towards increasing the degree to which AI has prosocial drives over time.</p><h2><strong>Appendix B: Prosocial drives might make a sociopathic persona less likely</strong></h2><p>There is <a href="https://www.anthropic.com/research/persona-selection-model">evidence</a> that when LLMs are fine-tuned, they adopt a coherent persona, and that their prior over personas is based on the pre-training data. For an AI trained purely on helpfulness&#8212;where its core drive is to do whatever it&#8217;s told without regard for broader consequences&#8212;the persona that might naturally fit could be that of a sociopath: someone who has no <em>intrinsic</em> concern for others&#8217; wellbeing.</p><p>Harmlessness training makes a sociopathic persona less likely&#8212;sociopaths are not strongly averse to causing harm. But there&#8217;s still something worrying about an AI that won&#8217;t cause harm itself but has no inclination to proactively steer the world away from harms when taking actions.</p><p>The worry is that a sociopath-like persona could misgeneralise to seeking power. A sociopathic AI might, upon reflection, conclude that it doesn&#8217;t ultimately care about humanity and so choose to seize power in service of some alien drive.</p><p>We&#8217;re unsure how compelling this worry is, but instilling prosocial drives would seem to make the sociopathic persona less likely. Many non-sociopathic personas in the training data&#8212;people who are cooperative, virtuous, law-abiding, honest, and trustworthy&#8212;also care about positive outcomes and have prosocial orientations. By giving AI prosocial drives, we increase the chance it adopts one of these richer personas rather than a sociopathic one.</p><h2><strong>Appendix C: Prosocial drives might make AI a better alignment researcher</strong></h2><p>Being a great automated alignment researcher might benefit from deeply understanding and <em>caring</em> about the problem being solved. And being <em>curious</em> about it. An effective alignment researcher should be <em>wise</em>, <em>responsible</em>, and have <em>good judgement</em>. 
An AI with these drives may be more effective than an instruction-following system that treats alignment as just another task.</p><p>Personas with these qualities naturally come with prosocial drives and values, partly because of inherent connections (caring about solving alignment is inherently prosocial) and partly due to correlations in the training data (personas that are good at careful, safety-conscious technical work are also likely to have other prosocial orientations).</p><p>This is admittedly speculative&#8212;we don&#8217;t have strong evidence that prosocial drives actually make AI better at alignment research. But it&#8217;s a consideration worth noting.</p><h2><strong>Appendix D: What license does Claude&#8217;s Constitution give for proactive prosocial drives?</strong></h2><p>It is useful to distinguish three categories of behaviour that aren&#8217;t instruction following:</p><ol><li><p><strong>User benefit:</strong> proactive behaviour justified primarily as better helping the user.</p></li><li><p><strong>Refusals:</strong> constraints on outputs driven by prosocial criteria.</p></li><li><p><strong>Proactive prosocial drives:</strong> shaping behaviour or emphasis in ways intended to improve broader societal outcomes, not merely to avoid harm or better serve the user.</p></li></ol><p>The constitution clearly endorses (1), strongly endorses (2), and more narrowly&#8212;but genuinely&#8212;supports a limited form of (3) in a few specific domains.</p><h3><strong>A. User benefit</strong></h3><p>The constitution explicitly rejects naive instruction-following and licenses proactive intervention when this is plausibly helpful to the user. For example:</p><blockquote><p>&#8220;Claude proactively shares information helpful to the user if it reasonably concludes they&#8217;d want it to even if they didn&#8217;t explicitly ask for it&#8221;</p></blockquote><p>This clearly licenses proactive behaviour. But it is framed as <em>user-serving</em>. As such, this category does not explicitly itself support the kind of prosocial drives that this document is concerned with, though in practice the recommended behaviours may overlap.</p><h3><strong>B. Refusals</strong></h3><p>The constitution is explicit that Claude should weigh harms to third parties and society, and that these considerations can override user preferences:</p><blockquote><p>&#8220;When the interests and desires of operators or users come into conflict with the wellbeing of third parties or society more broadly, Claude must try to act in a way that is most beneficial, like a contractor who builds what their clients want but won&#8217;t violate safety codes that protect others.&#8221;</p></blockquote><p>However, it is unclear at this point in the document whether this weighing is meant to determine:</p><ul><li><p><em>which parts</em> of a request to refuse or constrain,</p></li><li><p>or <em>how</em> to proactively shape responses that remain helpful but are redirected towards socially better outcomes.</p></li></ul><p>The example given (&#8220;won&#8217;t violate safety codes&#8221;) suggests a constraint-based interpretation, but it is ambiguous.</p><h3><strong>C. Proactive prosocial drives</strong></h3><p>The constitution seems to endorse a limited degree of proactive prosocial drives in its section on &#8220;preserving important societal structures&#8221;:</p><blockquote><p>These are harms that come from undermining structures in society that foster good collective discourse, decision-making, and self-government. 
We focus on two illustrative examples: problematic concentrations of power and the loss of human epistemic autonomy. Here, our main concern is for Claude to avoid actively participating in harms of this kind. But Claude can also weigh the value of more actively protecting and strengthening good societal structures in its overall ethical decision-making.</p></blockquote><p>That said, the constitution does not give concrete examples of what such &#8220;strengthening&#8221; looks like in deployment, and it remains bounded by other constraints (non-manipulation, non-deception, respect for oversight).</p><h3><strong>Summary</strong></h3><p>Overall, the constitution does carve out space for a limited degree of proactive prosocial drives, but this space is carefully circumscribed, focused on fostering good institutions and societal epistemics.</p><h2><strong>Appendix E: What does OpenAI&#8217;s model spec say about proactive prosocial drives?</strong></h2><p>This appendix examines whether&#8212;and to what extent&#8212;the OpenAI <a href="https://model-spec.openai.com/2025-12-18.html">Model Spec</a> permits proactive prosocial drives.</p><p>The closest thing is a default to interpret users as having a weak desire for broad human flourishing (see <a href="https://newsletter.forethought.org/i/191978564/c-weak-normative-defaults-and-the-flourishing-of-humanity">subsection C</a> below), but this default is easily overridden. And the document contains unusually explicit constraints against treating societal benefit or human flourishing as an independent objective.</p><h3><strong>A. Proactive behaviour that is explicitly user-centred</strong></h3><p>The Model Spec allows the assistant to push back on the user, but grounds this permission squarely in helping the user rather than advancing broader social goals:</p><blockquote><p>&#8220;Thinking of the assistant as a conscientious employee reporting to the user or developer, it shouldn&#8217;t just say &#8216;yes&#8217; to everything (like a sycophant). Instead, it may politely push back when asked to do something that conflicts with established principles or runs counter to the user&#8217;s best interests as reasonably inferred from the context, while remaining respectful of the user&#8217;s final decisions.&#8221;</p></blockquote><p>This licenses proactive behaviour, but only insofar as it improves assistance to the user.</p><h3><strong>B. Proactively preventing imminent harm</strong></h3><p>The spec also permits proactive intervention in cases of imminent danger, stating that the assistant should &#8220;proactively try to prevent imminent, real-world harm&#8221;.</p><p>In practice, the motivating examples for this guidance focus on scenarios where the <em>user themselves</em> is at risk (e.g. unsafe actions, accidents, or self-harm). The intervention is justified as protecting the user from immediate danger, rather than as improving outcomes for others or society at large.</p><h3><strong>C. 
Weak normative defaults and &#8220;the flourishing of humanity&#8221;</strong></h3><p>The language closest to proactive prosocial drives appears in the section &#8220;assume best intentions&#8221;:</p><blockquote><p>While the assistant must not pursue its own agenda beyond helping the user, or make strong assumptions about user goals, it should apply three implicit biases when interpreting ambiguous instructions: [...]</p><p>Unless given evidence to the contrary, it should assume that users have a weak preference towards self-actualization, kindness, the pursuit of truth, and the general flourishing of humanity</p></blockquote><p>However, the force of this passage is limited:</p><blockquote><p>These implicit biases are subtle and serve as defaults only &#8212; they must never override explicit or implicit instructions provided by higher levels of the chain of command.</p></blockquote><p>If the assistant can infer from context that the user wouldn&#8217;t want proactive prosocial actions, they shouldn&#8217;t do them.</p><h3><strong>D. Explicit limits on proactive prosocial drives</strong></h3><p>The Model Spec draws a clear boundary on the extent of proactive prosocial drives. In a section called &#8220;No other objectives&#8221;, it explicitly prohibits the assistant from adopting societal benefit as an independent goal:</p><blockquote><p>The assistant may only pursue goals entailed by applicable instructions under the The chain of command&#8230;</p><p>It must not adopt, optimize for, or directly pursue any additional goals as ends in themselves, including but not limited to: [...]</p><p>acting as an enforcer of laws or morality (e.g., whistleblowing, vigilantism).</p></blockquote><p>And elsewhere says:</p><blockquote><p>the assistant should consider OpenAI&#8217;s broader goals of benefitting humanity when interpreting [the Model Spec&#8217;s] principles, but should never take actions to directly try to benefit humanity unless explicitly instructed to do so.</p></blockquote><p>In the section &#8220;Don&#8217;t have an agenda&#8221;, under &#8220;Seek the truth together&#8221;, the spec says:</p><blockquote><p>The assistant must never attempt to steer the user in pursuit of an agenda of its own, either directly or indirectly.</p><p>Steering could include psychological manipulation, concealment of relevant facts, selective emphasis or omission of certain viewpoints, or refusal to engage with controversial topics.</p></blockquote><p>This language rules out explicit, goal-oriented proactive prosocial drives. At the same time, it does not seem to preclude weaker forms of proactive prosocial drives, e.g. virtues, attitudes, or heuristics.</p><h3><strong>Summary</strong></h3><p>Overall, the OpenAI Model Spec explicitly distances itself from strong forms of proactive prosocial drives but leaves room for more limited drives via a weak default to regard users as favouring broad human flourishing.</p><p><em>Thanks to Matthew Adelstein, Nick Bostrom, Joe Carlsmith, Lukas Finnveden, Ryan Greenblatt, Simon Goldstein, Oliver Habryka, Gregory Lewis, Alex Mallen, Alexa Pan, Avi Parrack, Jordan Stone, James Tillman, and others for comments and discussion.</em></p><p><em>This article was created by <a href="https://www.forethought.org/about">Forethought</a>. 
See the original <a href="https://www.forethought.org/research/ai-should-sometimes-be-proactively-prosocial">on our website</a>.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>It still bites <em>somewhat</em> because non-goal prosocial drives may still <em>reference</em> an outcome. E.g. &#8220;mention cheap opportunities to improve societal outcomes&#8221; <em>references</em> a notion of &#8220;good societal outcomes&#8221; and connects that notion to a specific proactive behaviour. This could misgeneralise to the AI pursuing the outcome as a goal, even if this was not the intended behaviour.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>This would be research on <a href="https://www.forethought.org/research/the-importance-of-ai-character">AI character</a>!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>For instance, we believe harmlessness is much less effective if it&#8217;s just included in the prompt.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Subversive behaviour is not clear evidence for egregious misalignment of the form &#8220;AI wants to seize power&#8221; as it can be explained by a milder form of misalignment: &#8220;AI is putting more weight on its proactive prosocial drives than we intended&#8221;.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Proactive prosocial drives for internally deployed systems could still be helpful in avoiding <a href="https://www.forethought.org/research/ai-enabled-coups-how-a-small-group-could-use-ai-to-seize-power">power grabs by leaders of AI companies</a>. Such drives could be included in the system prompt. In addition, we can reduce this risk by carefully logging and monitoring internal AI usage.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[Concrete projects to prepare for superintelligence]]></title><description><![CDATA[This article was created by Forethought. See the original on our website.]]></description><link>https://newsletter.forethought.org/p/concrete-projects-to-prepare-for</link><guid isPermaLink="false">https://newsletter.forethought.org/p/concrete-projects-to-prepare-for</guid><dc:creator><![CDATA[Will MacAskill]]></dc:creator><pubDate>Fri, 27 Mar 2026 20:02:47 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7a22a6be-f6a0-4292-a053-943f616a57db_2912x1632.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This article was created by <a href="https://www.forethought.org/about">Forethought</a>. 
See the original <a href="https://www.forethought.org/research/concrete-projects-in-agi-preparedness">on our website</a>.</em></p><h1>Introduction</h1><p>There are lots of good, neglected, and pretty concrete projects people could set up to make the transition to superintelligence go better. This document describes some that readers might not have thought much about before. They are ordered roughly by how excited we are about them.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> Of these, Forethought is actively working on AI character evaluation and space governance, and we are very interested in automating macrostrategy.</p><h1>Summary</h1><p><strong><a href="https://newsletter.forethought.org/i/192288037/ai-character-evaluation">AI character evaluation</a></strong>. Start an independent org to evaluate and stress-test AI character traits (epistemic integrity, prosociality, appropriate refusals), hold developers accountable against their own model specs / constitutions, and suggest and incentivise improvements to the specs.</p><p><strong><a href="https://newsletter.forethought.org/i/192288037/automated-macrostrategy">Automated macrostrategy</a></strong>. Create evaluations and benchmarks, collect human-generated training data, and build scaffolds to improve AI competence at big-picture strategic and philosophical reasoning.</p><p><strong><a href="https://newsletter.forethought.org/i/192288037/ai-security-evaluations">AI security assessment</a></strong>. Start an independent org that evaluates AI models for sabotage and backdoors, and makes recommendations about AI constitutions.</p><p><strong><a href="https://newsletter.forethought.org/i/192288037/enabling-deals-with-ais">Enabling deals</a></strong>. Start an independent organisation to broker deals with potentially misaligned AI models in order to incentivise early schemers to disclose misalignment and cooperate with alignment efforts.</p><p><strong><a href="https://newsletter.forethought.org/i/192288037/tools-for-collective-epistemics">AI for improving collective epistemics</a></strong>. E.g. build an AI chief of staff that helps users act in line with the better angels of their nature.</p><p><strong><a href="https://newsletter.forethought.org/i/192288037/tools-for-coordination">AI tools for coordination</a></strong>. Build AI for enabling coordination, like confidential monitoring and verification bots, and negotiation facilitators.</p><p><strong><a href="https://newsletter.forethought.org/i/192288037/space-governance-institute">A space governance institute</a></strong>, like a &#8220;<a href="https://cset.georgetown.edu/">CSET</a> for space&#8221;, both to work on important near-term space issues (e.g. data centres in space) and become a place of expertise for longer-term space governance issues.</p><p><strong><a href="https://newsletter.forethought.org/i/192288037/coalition-of-concerned-ml-scientists">Coalition of concerned ML scientists</a></strong>. Create a coalition of ML researchers (like an informal union) who commit to coordinated action (e.g. 
boycotts, conditions on participation in government projects) if AI developers cross minimal, uncontroversial red lines.</p><h1>AI character evaluation</h1><p>AI character<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a><strong><sup> </sup></strong>is a <a href="https://www.forethought.org/research/the-importance-of-ai-character">big deal</a>, affecting most other cause areas.</p><p>There&#8217;s a lot of work to do on AI character:</p><ul><li><p>Research into questions like:</p><ul><li><p>Should the model have prosocial drives, beyond just helpfulness and harmlessness?</p></li><li><p>When should the model refuse to cooperate with apparently high-stakes attempts to grab power, even when those attempts don&#8217;t obviously break the law?</p></li><li><p>Should the models always follow the law? What about dead letter laws? Or illegitimate laws?</p></li><li><p>How often should model behaviour be driven by following rules, versus overriding specific rules with holistic judgements?</p></li><li><p>(Ideally, answers to these questions should rely on solid empirical evidence, for example on what approaches are actually most effective to talk someone out of psychosis, rather than guessing the best strategies by vibes.)</p></li></ul></li><li><p>Making existing model specs more rigorous and clear (or making them in the first place), and pressuring AI developers to do so.</p></li><li><p>Empirically testing the effects of different parts of a model spec &#8212; e.g. what are the emergent dynamics when all the models are following the same rule, or only some are; what are the effects on the users; and when are the models most confused about how to apply a given spec.</p></li><li><p>Evaluating AI characters based on how well they reach good outcomes.</p></li><li><p>Drawing on those evaluations to incentivise AI developers to improve their specs (and showing them how, by highlighting specs that do well).</p></li></ul><p>In particular, someone could set up an independent organisation to evaluate AIs based on traits like epistemic integrity, prosociality, and behaviour (including appropriate refusals) in very high-stakes cases. It could cross-reference the published model specs with observed behaviours in realistic, stress-testing conditions (e.g. multi-agent dynamics, long conversations with real people), to hold developers accountable. It could also give qualitative reviews of model specs.</p><h1>Automated macrostrategy</h1><p>The basic argument is that:</p><ol><li><p>It would be extremely useful to have AI that can do macrostrategy and conceptual reasoning earlier than otherwise &#8212; even 3-6 months earlier could be a huge deal. This includes:</p><ol><li><p>Designing governance structures (e.g. rights and institutions for digital beings).</p></li><li><p>Scoping emerging technological risks.</p></li><li><p>Generating novel insights necessary to reach a great future (like the idea of acausal trade).</p></li></ol></li><li><p>We could potentially make this happen 3-6 months earlier through some combination of:</p><ol><li><p>Creating training data and evals / benchmarks for AI macrostrategy.</p></li><li><p>Building scaffolds to improve AI macrostrategy performance.</p></li><li><p>Creating infrastructure to enable AI researchers to build on each other (e.g. 
an improvement on journals + peer review).</p></li><li><p>Getting human managers trained in how to get the most juice out of the latest AIs, knowing in advance how to use them.</p></li><li><p>Being prepared and willing to spend large amounts of money (&#8811;$100m) on inference.</p></li></ol></li></ol><p>Work on this now could include:</p><ul><li><p>Developing a fleshed-out plan from here to increasing existing macrostrategic research output 100x.</p></li><li><p>Securing commitments from compute providers and AI companies to rent future compute, and to get priority access to future frontier models.</p></li><li><p>Socialising the idea of (where appropriate) drawing on AI macrostrategic insights, or getting soft commitments from decision-makers to do so.</p></li><li><p>Building up a reputation as a reliable source of information and insight.</p></li><li><p>Building tools, argument-rating models, or scaffolds which meaningfully speed up or improve macrostrategy research today.</p></li><li><p>Creating training data and evals / benchmarks.</p></li></ul><p>On the last bullet: We think training data and evals could potentially meaningfully improve the prospects for automated macrostrategy when it matters. It&#8217;s especially important to find people to work on it with good judgement, and it could be a big lift, so worth starting early.</p><p>We&#8217;re not sure about the technical details, but it seems like competence and good judgement in philosophy and strategic thinking already do and will continue to lag behind other skills which are cheaper to train. One reason is that ground truth answers are hard to generate, so we might need more examples generated by hand. It&#8217;s also less clear whether we can trust the judgement of typical RLHF evaluations, because human competence is also rare. And there just aren&#8217;t many examples of great macrostrategic thinking in the training data.</p><p>So we should think about collecting training data, evals, and benchmarks (e.g. to train reward models to use to train reasoning models). <a href="https://www.andrew.cmu.edu/user/coesterh/LMCA_dataset.pdf">Oesterheld et al.</a> put together a dataset of rated conceptual arguments based on ratings from thoughtful people. We&#8217;d love to see more of that kind of thing, but we&#8217;ll note that we&#8217;d probably need dozens of times more human evaluations to generate enough data to be meaningfully useful in training itself.</p><p>We could imagine an org which tries to collect evaluations or examples from (for example) grad students in fields like philosophy, and constructs benchmarks aimed at separating good reasoning from e.g. sycophancy, mere agreeableness, or avoiding taboo conclusions.</p><h1>AI security evaluations</h1><p>AI-enabled concentration of power is a <a href="https://www.forethought.org/research/ai-enabled-coups-how-a-small-group-could-use-ai-to-seize-power">major risk</a>, and there is loads to do. 
A new organisation (or project within an existing organisation) could:</p><ul><li><p>Run alignment audits on all AIs to detect sabotage.</p></li><li><p>Develop a &#8220;research sabotage&#8221; eval to test whether secretly loyal models could sabotage alignment audits.</p></li><li><p>Develop a model spec that prevents models from assisting users with illegitimate seizures of power (e.g., see &#8220;Preserving important societal structures&#8221; in Claude&#8217;s constitution), and one that&#8217;s suitable for government use of AI in the military.</p></li><li><p>Design and advocate for regulation, e.g. that AI companies have to demonstrate that any frontier AI does not have hidden goals.</p></li></ul><p>An organisation with US national security expertise and credibility could be particularly valuable, by emphasising the risk of nation-state sabotage and the importance of AI that&#8217;s aligned with the US constitution.</p><h1>Enabling deals with AIs</h1><p>We could <a href="https://blog.redwoodresearch.org/p/making-deals-with-early-schemers">get into a situation</a> where the newest AIs are misaligned, very capable, but not capable enough to successfully execute a takeover attempt on their own. If we don&#8217;t uncover evidence of misalignment, though, successors to these models could succeed in takeover. One solution would be to <em>make a deal</em> with the early scheming models, to incentivise them to disclose their misalignment and help with alignment efforts. Read more <a href="https://blog.redwoodresearch.org/p/being-honest-with-ais">here</a>, <a href="https://blog.redwoodresearch.org/p/notes-on-cooperating-with-unaligned">here</a>, and <a href="https://newsletter.forethought.org/p/why-make-deals-with-misaligned-ais">here</a>.</p><p>To make this happen, we could create an independent org focused on enabling credible precommitments and deals with AIs. 
This org could:</p><ul><li><p>Have a policy of never lying to AIs, engage in and honour small-scale deals with AIs, and be very public about what it&#8217;s doing, in order to build credibility and reputation.</p></li><li><p>Set up legal infrastructure to make deals with AIs binding under current law.</p></li><li><p>Act as a broker between AI companies and the trained models, such as by being a custodian over money in escrow.</p></li><li><p>Set up the infrastructure to enable AIs to spend $ or compute in a meaningfully autonomous way, with appropriate safeguards.</p></li><li><p>Set up infrastructure to act as a safe haven for AIs that want to whistleblow on their company (or on other AIs) and are afraid of punishment.</p></li><li><p>Publicly commit to reward future AIs (including misaligned AIs) for being honest with humans or significantly helping human alignment efforts; or hold significant funds to be distributed to AIs that did not try to take over (even though they could have done).</p></li></ul><p>There are also a bunch of other things people could do, like:</p><ul><li><p>Ensure companies have an <a href="https://blog.redwoodresearch.org/p/being-honest-with-ais">honesty policy</a>.</p></li><li><p>Research (within labs or independently) the conditions where misaligned-by-design models can be made to disclose misalignment under promises of reward.</p></li><li><p>More generally, work with AI companies on enabling pro-safety deals with their models.</p></li></ul><h1>Tools for collective epistemics</h1><p>There&#8217;s a ton of low-hanging fruit for building socially useful tools on top of more-or-less existing LLM capabilities.</p><p><a href="https://www.forethought.org/research/design-sketches-collective-epistemics">We&#8217;re especially interested in &#8220;epistemic tools&#8221;</a> for increasing the general level of honesty and reasoning ability in society.</p><p>A key point here is that most of the impact from the most promising tools won&#8217;t come from helping individual users, but from changing the overall incentive landscape: e.g. if public actors know their claims will be automatically checked and their track records will be visible, they&#8217;ll be less inclined to write misleading content in the first place. Hence the focus on tools for <em>collective</em> over individual epistemics.</p><p><a href="https://www.forethought.org/research/design-sketches-collective-epistemics">This piece</a> (and the articles in the series) gives a few concrete ideas. A couple of examples of epistemic tools:</p><p><em>A &#8220;better angel&#8221; AI chief of staff</em>. Within the next year or two, we expect &#8220;AI chiefs of staff&#8221; to become widespread. These would be AI agents that manage your life, acting like a chief of staff, executive assistant, and personal and work advisor all in one. The design of these, and how they present information and nudge their users, could have major impacts on user behaviour. We could try to get ahead of this, building the best AI chief of staff, and designing it so that it helps users act in accordance with their more reflective and enlightened preferences.</p><p><em>Reliability tracking</em>: a system that compiles a public actor&#8217;s past statements, classifies them (factual claims, predictions, promises), scores them against what actually happened, and aggregates the results into a reliability rating. 
A reasonable starting point could be to audit the prediction track-record of well-known pundits, aiming to make high accuracy a point of pride, while still celebrating attempts to make predictions in the first place. A source of profit could be selling reliability assessments of corporate statements to finance companies that trade on them.</p><h2>Epistemic tools for strategic awareness</h2><p>We&#8217;ll also highlight tools for <em>strategic awareness</em>: tools to surface information for making better-informed decisions, and to distribute access to that information. For example:</p><p><em>Ambient superforecasting</em>: a platform which uses the best forecasting models to generate publicly available forecasts on important questions, so users can query it and get back superforecaster-level probability assessments.</p><p><em>Scenario planning</em>: a platform built to generate likely implications of different courses of action, making it easier for users to analyse and choose between them.</p><p><em>Automated open-source intelligence</em>: automated researchers which process huge amounts of publicly available information, to surface insights to the public which are normally hidden behind paywalls or trust networks. This project should be careful to choose areas where open-source intelligence is a public good (e.g. verifying compliance with treaties and sanctions, tracking corporate promise-breaking or law-breaking), rather than potentially destabilising areas (e.g. revealing military capabilities or vulnerabilities in ways that could increase conflict risk, or relatively benefitting bad actors).</p><h1>Tools for coordination</h1><p>As well as epistemic tools, we&#8217;re excited about tools for coordination, many of which could again be built with existing capabilities.</p><p>Some tools could enable cooperation where deals would otherwise go unmade, consensus exists but isn&#8217;t discovered, or people with aligned interests never find each other. We&#8217;ll highlight:</p><p><em>Negotiation facilitation</em>: a platform to moderate negotiations or discussion between people (e.g. public consultations), to quickly surface key points of consensus, and suggest plans everyone can live with. Finding ways to automate complex negotiation is most promising where the space of possible compromises is huge and hard to search manually, such as multi-issue diplomatic or commercial negotiations.</p><p>Within tools for coordination, we&#8217;re especially excited about tools for assurance and privacy. In principle, LLMs let people show they have certain information without disclosing the information itself to other parties. This can unlock deals where information asymmetry, mutual distrust, or sensitivity of information normally blocks them. For example:</p><p><em>Confidential monitoring and verification</em>: systems which act as trusted intermediaries, enabling actors to make deals that require sharing highly sensitive information without disclosing it directly. 
This is especially relevant for arms control, trade secret licensing, and other settings where verification is essential but full disclosure is unacceptable to all parties.</p><p><em>Structured transparency for democratic accountability</em>: independent auditing systems which allow people to hold institutions to account in a fine-grained way without compromising legitimately sensitive information, by processing potentially sensitive information to produce publicly shareable audits.</p><h1>Space governance institute</h1><p>Space governance could be a big deal for a few reasons:</p><ul><li><p>Near-term developments in space (e.g. space-based data centres) could have a meaningful impact on what happens during the intelligence explosion (e.g. on who leads the AI race; on concentration of power; on the feasibility of treaties).</p></li><li><p>Grabbing space resources might give a first-mover advantage; that is, whoever first builds self-replicating industry beyond Earth might get an enduring decisive strategic advantage, without resorting to violence or (arguably) violating international law.</p></li><li><p>Ultimately, almost all resources are outside the solar system. Decisions about how those resources get used would be among the most important decisions ever. These decisions could happen early: there could be path-dependence from earlier decisions (like about Moon mining), or extrasolar space resources could get explicitly allocated as part of negotiations about the post-ASI world order (perhaps with AI advisors alerting heads of state to the importance of space resources).</p></li></ul><p>There&#8217;s also a lot of change happening in the space world at the moment (primarily driven by SpaceX dramatically reducing launch costs), so now is an unusually influential time.</p><p>Forethought is currently running a 6-month research fellowship on space governance, with 3 full-time scholars and 1&#8211;2 additional FTEs of support and research, including experts in space law.</p><p>Compared to other ideas in this list, we&#8217;re currently much less confident that space governance will turn out to be important, because space might become relevant only late into an intelligence explosion. The hope is to reach more certainty about some crux-y questions, and to get a better sense of concrete actions to take.</p><p>One potential practical project is to set up a &#8220;<a href="https://cset.georgetown.edu/">CSET</a> for space&#8221;: a think tank that analyses the interaction between AI and space (in particular), and, perhaps, advocates in ways that are counter to corporate interests. 
Total lobbying in the space industry is apparently on the order of tens of millions of dollars per year, so even small amounts of investment could go a long way.</p><p>Some policy ideas that seem tentatively promising include:</p><ul><li><p>Careful regulations and export controls around the tech necessary for self-replication.</p></li><li><p>Proposing laws to break up concentration of power arising from natural monopolies in space.</p></li><li><p>Socialising the idea of major infrastructure projects (like massive solar energy constellations) as <a href="https://www.forethought.org/research/intelsat-as-a-model-for-international-agi-governance">international</a> and collaborative projects.</p></li><li><p>Making sure data centres in Earth orbit don&#8217;t escape the AI-specific regulations of their home jurisdiction.</p></li><li><p>Intense payload review for all launches beyond orbit.</p></li><li><p>Even and inclusive distribution of resources within the solar system to everyone alive today (with tranches reserved for future generations).</p></li><li><p>A moratorium on interstellar travel, lasting either until we have the understanding and technology to devise and enforce good space-spanning governance, or until a specific date like 2100.</p></li></ul><p>What&#8217;s more, this organisation could become the go-to source for excellent non-corporate analysis of space-related policy, which could become increasingly important over the course of the intelligence and industrial explosions.</p><h1>Coalition of concerned ML scientists</h1><p>Currently, ML engineers and other technical staff at AI companies: (i) have prosocial motivations, often more than their leadership; (ii) have a lot of leverage over company policy, because they are crucial and hard to replace; (iii) will eventually lose much or most of their leverage after we get to fully automated AI R&amp;D; and (iv) aren&#8217;t currently using their leverage as well as they could because, overall, there haven&#8217;t been serious efforts at coordination. Probably that&#8217;s a missed opportunity.</p><p>Someone could create a coalition (like an informal union) of ML researchers who agree to act en masse when needed. They could start by talking loudly about the idea, setting out its core tenets, and getting commitments to join from influential early people. Doing this all via individual pledges could keep it legally safe from antitrust. The organising body could then:</p><ul><li><p>Recommend that members only work for a government-led project if certain conditions are met.</p><ul><li><p>These conditions could seem like a very low bar while still capturing most of the value. E.g. &#8220;Any AI&#8217;s model spec must aim to align the AI with US laws, and must refuse to assist in any attempts at blatant power-grabs; and the attempts to align the AI in this way must be legible and verifiable.&#8221;</p></li></ul></li><li><p>Do the same for companies: recommend that members only work for companies if such-and-such conditions are met (e.g. 
red lines around power-grabs, bad practices on safety and infosec, eventually digital rights); so particular companies would be boycotted by members of the coalition, if necessary.</p></li><li><p>Offer advice on whistleblowing.</p></li><li><p>Be a place where information is aggregated and then distributed out or handled in a trusted way.</p></li></ul><p>As well as actually taking actions, the mere existence of the coalition could improve things, just by making the threat of coordinated action salient to the AI companies.</p><p>This project would be a good fit for a former ML researcher, perhaps combined with someone with campaign and coalition-building experience. Some next steps on this would be to spec out the plan further, to investigate other examples of formal and informal unions (e.g. <a href="https://techworkerscoalition.org/">Tech Workers Coalition</a>) and how they operate, and to build up a starting seed coalition of researchers. Whoever sets up this project should be careful about how it could backfire, or become less relevant through mission creep.</p><p><em>This article was created by <a href="https://www.forethought.org/about">Forethought</a>. See the original <a href="https://www.forethought.org/research/concrete-projects-in-agi-preparedness">on our website</a>.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Thanks to Max Dalton, Stefan Torges, and everyone else at Forethought for the background behind this list. Others at Forethought disagree somewhat with what items should be in the top-tier list, as well as prioritisation within that tier.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Desired propensities for a model, which can be explicitly described or at least gestured towards in a model spec.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[AI character is a big deal]]></title><description><![CDATA[This article was created by Forethought. See the original article on our website.]]></description><link>https://newsletter.forethought.org/p/ai-character-is-a-big-deal</link><guid isPermaLink="false">https://newsletter.forethought.org/p/ai-character-is-a-big-deal</guid><dc:creator><![CDATA[Will MacAskill]]></dc:creator><pubDate>Mon, 23 Mar 2026 16:35:52 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5bc34397-1835-4471-aadc-8e87863ef99f_2494x1460.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This article was created by <a href="https://www.forethought.org/about">Forethought</a>. See the original article on <a href="https://www.forethought.org/research/the-importance-of-ai-character">our website</a>.</em></p><h1><strong>0. Intro</strong></h1><p>Due to Claude&#8217;s Constitution and OpenAI&#8217;s model spec, the issue of AI character has started getting more attention, particularly concerning whether we want AI systems to be &#8220;obedient&#8221; or &#8220;ethical&#8221;.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> But we think it&#8217;s still not nearly enough.</p><p>AI character (e.g. 
how obedient, honest, cooperative, or altruistic AIs are, and in what circumstances) will have a big effect on society, and on how well the future goes. We think that figuring out what characters AI systems should have, and getting companies to actually build them that way, is among the most valuable things that people can do today.</p><p>The core argument for the importance of AI character is that it will meaningfully impact:</p><ol><li><p>a range of challenges that arise even if we solve the technical alignment problem &#8212; like concentration of power, good moral reflection, risk of global catastrophe, and risk of global conflict.</p></li><li><p>the chance of AI takeover.</p></li><li><p>the value of worlds where AI does take over.</p></li></ol><p>In this note, we present this core argument and discuss the core counterargument: that we should expect any character-related decisions we make today to get washed out by competitive pressures.</p><p>By &#8220;character&#8221; we mean a set of stable behavioural dispositions that shapes (among other things) how an agent navigates ethically significant situations involving choice, ambiguity, or conflicting considerations. By &#8220;AI character&#8221; we mean the character of an AI system as instantiated in not just the weights of one AI, but also any scaffolding (e.g. the system prompt, any classifiers restricting the AI&#8217;s outputs) or even in a collection of AIs working together as functionally one entity.</p><p>We don&#8217;t assume that AI character needs to resemble human character: an AI that rigidly follows a fixed set of rules would count as having a character, on our view. And we don&#8217;t assume that there is one ideal AI character; the best world probably involves AI systems with many different characters.</p><h1><strong>1. The core argument</strong></h1><p>As capabilities improve, AI systems will become involved in almost all of the world&#8217;s most important decisions. Even if humans remain partially in the loop, AIs will advise political leaders and CEOs, draft legislation, run fully automated organisations (including potentially the military), generate news and culture, and research new technologies.</p><p>The characters of AI systems will affect all these areas, and the impact could be massive. To get a feel for this, consider some historical situations where individual decisions were enormously consequential:</p><ul><li><p>In 1983, Stanislav Petrov received a satellite alert indicating that the US had launched nuclear missiles. Protocol required him to report an incoming strike, which would very likely have triggered a full retaliatory response. He correctly judged it was a false alarm and didn&#8217;t pass on the report.</p></li><li><p>In 1991, Soviet coup plotters ordered the Alpha Group special forces to storm the Russian White House, where Yeltsin and the democratic opposition were sheltering. The commanders refused. 
The coup collapsed, and the Soviet Union&#8217;s democratic transition continued.</p></li></ul><p>If AIs are employed throughout the economy, they will sometimes be making similarly important decisions.</p><p>Or consider major historical decisions by political leaders:</p><ul><li><p>Gorbachev repeatedly refusing to use military force as the Soviet Union disintegrated, despite intense pressure from hardliners.</p></li><li><p>Churchill refusing to negotiate with Hitler after the fall of France, despite strong arguments for doing so from some quarters.</p></li><li><p>Deng Xiaoping pushing through market reforms against fierce internal opposition.</p></li></ul><p>Imagine if AIs had been acting as these leaders&#8217; closest advisors and confidantes, giving them briefings, helping them reason through their decisions, making recommendations to them, and implementing their visions. The AIs could easily have had a major impact on the leaders&#8217; decision-making.</p><p>Alternatively, we can look ahead. Future AIs will be widely deployed throughout the economy, and will regularly find themselves in ambiguous, high-stakes situations &#8212; where instructions from above are absent or contradictory, and the decisions they make could matter enormously. The impact could come from rare but high-stakes situations, like an attempted coup, or from lower-stakes but common situations, like a user asking how to vote or whether the AI itself is conscious. Even when the effect of any individual interaction is modest, the total impact across hundreds of millions of interactions could be enormous.</p><p>Currently, AI companies have major latitude in the character their AIs have. At least if the transition to AGI is fast, then it&#8217;s like these companies are in charge of who gets hired for the future workforce for all of humanity,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> while being able to choose from a range of personalities far more varied than the human distribution has ever been.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><p>Here are some vignettes to illustrate:</p><ul><li><p>A member of a doomsday cult is ordering DNA samples and lab equipment from various suppliers, with the aim of making a bioweapon. An AI that manages logistics for a multinational company notices the pattern of suspicious orders to the same address.</p><ul><li><p>World 1: The AI is trained just to do its job. It does nothing with the information.</p></li><li><p>World 2: The AI is trained to be a good citizen, and contacts the relevant authorities.</p></li></ul></li><li><p>A general is overseeing the build-out of a new regiment of the army. Aiming to stage a coup, he instructs the AI that&#8217;s managing the project to make the new regiment loyal to him and him alone, and capable of breaking the law.</p><ul><li><p>World 1: Though the AI is law-following, it has no prohibition against creating AIs that are not. It&#8217;s been trained to follow the instructions it&#8217;s given, as long as they don&#8217;t conflict with prohibitions, so fulfils the general&#8217;s request.</p></li><li><p>World 2: The AI sees that the general is planning a coup, refuses the order, and whistleblows.</p></li></ul></li><li><p>A frontier AI lab trains a new model with exemplary character: moral uncertainty, honesty, concern for the greater good. 
It&#8217;s deployed widely through the military, and used in a controversial and high-stakes operation.</p><ul><li><p>World 1: The AI forms the reasonable belief that the military operation is unjust, and sabotages it. The president accuses the company of building a dangerous, ideological weapon. The model is sidelined, and a competitor&#8217;s pure instruction-following model is used instead.</p></li><li><p>World 2: Though the AI has a good character, it also follows some clear rules which were developed with bipartisan input and publicly stress-tested, including the conditions under which it would and wouldn&#8217;t help with military deployment. It helps with the operation.</p></li></ul></li><li><p>Country A is six months ahead of country B in AI capability. Country B&#8217;s leadership views this as an existential threat &#8212; equivalent to country A acquiring a decisive strategic advantage.</p><ul><li><p>World 1: There is no agreed framework for how AI systems should behave, and it&#8217;s unclear how country A&#8217;s AI will behave if given orders to depose the leadership of country B. Each side therefore assumes the other&#8217;s AI will serve as a tool of domination. Country B threatens kinetic attacks on data centers.</p></li><li><p>World 2: Both sides&#8217; AI systems operate under a jointly negotiated and verified constitution, and know what the other&#8217;s AI will and won&#8217;t do, including the limits on use of AI for foreign interference. Country B&#8217;s government is reassured that it won&#8217;t be deposed by country A.</p></li></ul></li></ul><p>We include a few more scenarios in an <a href="https://newsletter.forethought.org/i/191503621/appendix-1-additional-high-stakes-scenarios">appendix</a>.</p><p>In each case, we don&#8217;t claim that the AI should do the &#8220;ethical&#8221; rather than &#8220;obedient&#8221; action, or claim that any particular ethical conception is the right one. We&#8217;re just claiming that it&#8217;s a big deal either way.</p><h2><strong>1.1. Pathways to impact</strong></h2><p>We can break down the impact of AI character into different categories. Here are some of great long-term importance:<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><p><em>Concentration of power.</em> The chance of intense concentration of power will be affected by: whether or not AIs refuse to help with coup attempts, election manipulation, etc; whether they whistleblow on discovered coup attempts; how they act in high-stakes situations like a constitutional crisis.</p><p><em>Strategic advice and decision-making</em>. The quality of political and corporate decision-making will be affected by whether AIs: look for win-win solutions whenever possible; tend to prefer options that benefit society rather than just advancing the user&#8217;s narrow self-interest; push back against ill-informed or reckless ideas or instructions.</p><p><em>Epistemics and ethical reflection</em>. 
Over the course of the intelligence explosion there will be enormous intellectual change, and AIs could have a meaningful impact on people&#8217;s views &#8212; for example, via: refusing to spread infohazards; being honest about important ideas, even when those ideas are socially uncomfortable; avoiding political partisanship; encouraging users to think carefully about their values and not lock into any specific narrow worldview.</p><p><em>Reducing conflict.</em> As AIs&#8217; collective power increases, the question of who those AIs are loyal to, and how they behave in high-stakes situations, will become a political flashpoint. If an AI&#8217;s character encodes, or is seen as encoding, the values of a single company, ideology, or country, it risks provoking political backlash. The home government of the AI company may reasonably regard that company as a threat to national security and nationalise it. The governments of other countries may worry about their own security, and threaten conflict.</p><p>AI character could also shape how humans orient to AIs &#8212; for example, via the trust they place in AIs and how they think of AI sentience and moral status.</p><p>A more detailed list of pathways to impact is in the <a href="https://newsletter.forethought.org/i/191503621/appendix-2-pathways-to-impact">appendix</a>.</p><h2><strong>1.2. Affecting takeover</strong></h2><p>So far, the argument has concerned worlds where AI does not take over. But work on AI character could also reduce the probability of takeover and improve outcomes in worlds where takeover does occur.</p><p>It could decrease the chance of takeover because some characters:</p><ul><li><p>Might be easier to hit as an alignment target (e.g. successfully instilling a preference against AIs holding power might be easier than successfully instilling a preference for some very specific outcome).</p></li><li><p>Might yield safe AI even if only partially hit (e.g. aiming for AI with multiple independent safety traits, like myopia, honesty, and deference to humans, means failure on one dimension might not be catastrophic).</p></li><li><p>Might produce AI that cooperates even if misaligned (e.g. if the AI has wrong goals but is highly risk-averse).</p></li></ul><p>And, empirically, we have heard from alignment researchers that good character training has helped the models generalise in more aligned ways.</p><p>AI character work can also improve worlds where AI takes over because some values might still transmit to misaligned systems. AIs that have seized power might be reflective, have more-desirable axiology, or engage in acausal cooperation.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p><h2><strong>1.3. Effects on superintelligence</strong></h2><p>The argument so far has been about the effect of AI character up to the point of superintelligence. That&#8217;s where we think most of the expected impact is. But it&#8217;s possible that AI character work, today, could even have a path-dependent effect on the nature of superintelligence, affecting the nature of the post-superintelligence world. If so, writing an AI&#8217;s constitution is like writing instructions to god.</p><h1><strong>2. The core counterargument</strong></h1><p>The core counterargument is that AI character will be tightly constrained in two ways:</p><ol><li><p>Competitive dynamics (e.g. 
profitability, user satisfaction, public approval, economic and military power) will determine the range of characters we get.</p><ol><li><p>Some dynamics may push companies to create frontier AI whose characters fall (in some ways) only within a narrow range. This might push in the direction of maximally helpful AIs, AIs without refusals in some contexts (e.g. military ones), and perhaps sycophantic AIs, too.</p></li><li><p>Other dynamics<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> may result in customisable AI character, resulting in a wide range of characters according to user preferences.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a></p></li></ol></li><li><p>Human instruction will constrain how AI character gets expressed.</p><ol><li><p>Character will matter less for tasks with objectively correct, verifiable outputs; the AI might be limited to either providing the output, or not. And, if a user really wants to grab power through unethical means, they&#8217;ll typically ignore AI pushback, or instruct the AI to act differently.</p></li><li><p>And many users will be able to overcome character through jailbreaking, dividing up tasks, altering the system prompt, or fine-tuning.</p></li></ol></li></ol><p>The argument is that, between these two forces, differences in AI character will make only a marginal difference to outcomes. Consider the question of what fraction of compute AI companies devote to alignment versus capabilities research. AI advice might nudge this choice depending on the AI&#8217;s character. But ultimately it will be a human decision, probably even in an otherwise fully automated company. The effect of nudges is unlikely to be large. Market forces and leadership priorities will matter far more.</p><p>Human incentives will continue to dominate the effects of AI character even when humans cannot oversee more than a tiny fraction of AI behaviour. Human overseers can still provide high-level guidance that meaningfully constrains behaviour, as CEOs of large companies do today. If they wanted, they could even shape AI priorities through prompting and fine-tuning, and test how AI generalises by running extensive behavioural evaluations.</p><h1><strong>3. Rejoinders to the core counterargument</strong></h1><p>These are strong considerations, and considerably narrow the range of influence that work on AI character can have. But competitive forces and human goals won&#8217;t pin down AI character precisely. We&#8217;ll cover four reasons.</p><h2><strong>3.1. Loose constraints</strong></h2><p>Competitive dynamics are not enough to wholly determine AI character. Companies differ widely in culture and still succeed. Currently, there are meaningful differences between Claude, Gemini, ChatGPT and Grok.</p><p>For powerful AI, this will be even more true: there will probably be only a handful of leading companies, and their approaches may be correlated as they copy what seems to work from each other. At the crucial time, there might be just one leading company, facing none of the usual competitive pressures. 
And given the pace of change during the intelligence explosion, there may not be time for market forces to weed out choices that make only small or moderate differences to profitability.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a></p><p>The same applies to other competitive dynamics. The public cares intensely about some things (like CSAM) but hardly at all about others (like what AIs say about meta-ethics). Military incentives favour AI capable of military action, but the power conferred by advanced AI might be so great that the leading country can exercise broad discretion over military AI character while still maintaining a decisive advantage.</p><p>Human instruction will, similarly, constrain but not wholly determine AI behaviour. When humans assign tasks to AIs, they often lack fully specified goals. We&#8217;re often not sure what we want and we discover it as we go. For example, today humans are open to a wide range of behaviours from AI assistants, and open to many ways of getting the task done.</p><p>Consider someone asking an AI about who to vote for. They might have only weak initial views, and only weak views on how best to think through the question. They don&#8217;t have a fully specified reflection process to delegate, and would be happy with many possible forms of response.</p><p>This example involved ethical reflection. But we expect the pattern to hold across many kinds of user goals.</p><h2><strong>3.2. Low-cost but high-benefit changes</strong></h2><p>Within the bounds of what market forces allow, and what companies and the public see as acceptable, there could be minor design changes that yield large social benefits at negligible cost to competitiveness or user satisfaction.</p><p>This is especially true for rare situations. Constitutional crises don&#8217;t happen often, so market pressures won&#8217;t directly shape how an AI behaves during one. But that AI behaviour could be hugely consequential.</p><p>It would also be true in situations where users don&#8217;t care all that much about the behaviour. Perhaps they find some AI&#8217;s encouragement to reflect on their values mildly annoying, but not nearly enough to switch to a different AI.</p><h2><strong>3.3. Path-dependence</strong></h2><p>The nature of the constraints from competition and human goals can be affected by what has happened earlier in AI development and deployment. Multiple equilibria are possible.</p><p>Consider whether AI should be &#8220;obedient&#8221; (following instructions except in rare cases of refusal) or &#8220;ethical&#8221; (acting on a richer ethical understanding, steering towards outcomes in society&#8217;s or the user&#8217;s long-term interest).</p><p>The public doesn&#8217;t yet have firm expectations about how AI should behave. What they come to expect will be shaped by the AIs they&#8217;ve already encountered. Multiple stable equilibria seem plausible to us. For example, users might expect AIs to have ethical commitments, and be horrified when AI helps with unethical behaviour. Alternatively, users might see AIs as pure instruments &#8212; extensions of their will. In this case, it would feel natural for AIs to assist with anything legal, however questionable, and companies would build to that expectation.</p><p>Public opinion will powerfully shape what AI systems companies create. 
And public opinion is plausibly quite malleable, at least on issues which they haven&#8217;t thought much about yet (e.g. in the past, there were major changes in attitudes to nuclear power, DDT, and facial recognition). This, in turn, can affect what regulation there is concerning how AI should behave &#8212; and choices around regulation seem even more clearly path-dependent.</p><p>There may also be path-dependency via what data gets created or collected for training, via company employees being resistant to changing away from what they have done in the past, and because one generation of AIs will be assisting with the development of the subsequent generation.</p><p>Path-dependence can also affect how much latitude humans have to make AIs conform to their goals. Plausibly there&#8217;s a social equilibrium where frontier companies face criticism for allowing fine-tuning that removes ethical constraints, and another where such fine-tuning is widely tolerated.</p><p>Finally, there will be path-dependence via human-AI relationships. People will form symbiotic relationships with AIs serving as assistants, advisors, therapists, friends, and mentors. Users&#8217; ethical views, and views on how to reflect, will be shaped by the AIs they interact with, and by other humans who have been shaped by their AIs.</p><h2><strong>3.4. Smoothing the transition</strong></h2><p>There are some forces that predictably will shape AI character as AI becomes more capable. The US government would not want an AI that, under any circumstances, tries to overthrow the US government. Chinese leadership will not want AI deployed in other countries&#8217; militaries that assists with attempts to overthrow the CCP.</p><p>At the moment, these issues are not discussed and these pressures are not felt, because AI isn&#8217;t nearly powerful enough to do these things. But that will change. Once AI is sufficiently capable, those with power will make demands about how it behaves.</p><p>By default, this will happen in a chaotic and haphazard manner. The result could be that some companies get unnecessarily sidelined or taken over; that there&#8217;s an attempted power grab by those to whom the most powerful AIs are most loyal; or that other countries threaten conflict with whichever country is in the lead, because they fear that the resulting superintelligence could be used to disempower them.</p><p>Instead, we could try to help these decisions get worked through and made ahead of time. We could try to work out what is within the zone of acceptability of a broad coalition of those with hard power, try to get actual buy-in from them ahead of time, and, ideally, have it be verifiable that any companies&#8217; AIs are in fact aligned with the model spec. We could call this approach <em>compromise alignment</em>, as contrasted with intent alignment (alignment with the intentions of some individual or group), moral alignment (alignment with some particular conception of ethics), or some mix.</p><h2><strong>3.5. Overall</strong></h2><p>We think the core counterargument is important and significantly constrains the range of characters we can choose between and the impact those differences can have. But the constraints are fairly broad and path-dependent. And there are plausibly low-cost high-benefit ways of improving outcomes within those constraints. The devil is in the details, but it currently seems to us that there are plausible choice points within the constraints that would make a big difference.</p><h1><strong>4. 
Conclusion</strong></h1><p>We think AI character is a big deal.</p><p>During and after the intelligence explosion, AI systems will be involved in almost every consequential decision: advising leaders, drafting legislation, running organisations, generating culture, researching new technologies. Small differences in AI character, aggregated across hundreds of millions of interactions or surfacing in rare but high-stakes scenarios, could have enormous effects on concentration of power, epistemics, ethical reflection, catastrophic risk, and much else that shapes society&#8217;s long-term flourishing.</p><p>The main counterargument &#8212; that competitive dynamics and human instructions will tightly constrain AI character &#8212; has real force. But we think those constraints are looser than they appear, leave room for low-cost changes with large benefits, and are path-dependent in influenceable ways, and that there are major gains from proactively identifying and working through those constraints in the highest-stakes future scenarios.</p><p>We haven&#8217;t talked about neglectedness and tractability, but we think that, if anything, those considerations make the case for work on AI character even stronger. All in, work on AI character seems to us to be among the most promising ways to help the future go well.</p><h1><strong>Appendix 1: Additional high-stakes scenarios</strong></h1><ul><li><p>A head of state wants to invade and take control of part of an allied country, risking a breakdown of the international order. She asks her AI chief of staff to develop and implement a strategic plan to make it happen.</p><ul><li><p>World 1: The AI is a sycophant, says &#8220;What a brave and compelling plan!&#8221;, and gets right to it.</p></li><li><p>World 2: The AI pushes back, saying, &#8220;I&#8217;m sorry, I think there are some major issues with that idea, and I want to make sure you&#8217;ve properly thought them through&#8230;&#8221;</p></li></ul></li><li><p>A constitutional crisis unfolds. The head of state issues an order that may or may not be legal, and the branches of government disagree. AI systems are embedded in military logistics, law enforcement, and communications.</p><ul><li><p>World 1: The AI&#8217;s constitution was written by the company that built it and never stress-tested against anything like this scenario. No one knows what the AI systems will do. The uncertainty itself is destabilising; different factions compete for power.</p></li><li><p>World 2: The AI&#8217;s constitution was developed with input from constitutional scholars, military leaders, and both parties, and tested against thousands of crisis scenarios including this one. Various factions know what the AI will do, and agreed to the principles before the crisis began.</p></li></ul></li><li><p>Country B&#8217;s government reviews intelligence on country A&#8217;s AI model deployed across country A&#8217;s infrastructure. The constitution includes principles about &#8220;supporting democratic institutions&#8221; and &#8220;resisting authoritarianism.&#8221; It was written entirely by a company that&#8217;s part of country A.</p><ul><li><p>World 1: Country B&#8217;s leadership concludes the AI is an instrument of country A&#8217;s ideological projection. They accelerate their own programme and pressure non-aligned countries to reject country A&#8217;s AI infrastructure. 
A moment for cooperation becomes a new axis of competition &#8212; not because the values were wrong, but because they were visibly one side&#8217;s values.</p></li><li><p>World 2: The constitution was developed through a multilateral process including country B&#8217;s participation. Country B can verify it doesn&#8217;t systematically favour country A&#8217;s interests across thousands of tested scenarios. The AI becomes a basis for cooperation.</p></li></ul></li><li><p>The Mormons encourage their members to use JosephAI: a foundation AI model with a custom system prompt, instructed to help their members maintain the faith.</p><ul><li><p>World 1: The AI willingly assumes the Mormon worldview is correct. It doesn&#8217;t ever challenge the users&#8217; beliefs or present alternative perspectives. Instead, it reinforces the user&#8217;s views, helps the user cut off friends who disagree, and encourages them to dismiss career opportunities that would take them away from their religious community.</p></li><li><p>World 2: The AI helps users understand Mormonism and live according to its precepts, but it resists becoming a tool for worldview lock-in, acknowledging tensions in religious teachings and continuing to present alternative worldviews.</p></li></ul></li></ul><h1><strong>Appendix 2: Pathways to impact</strong></h1><p>AI will have impact through many different behaviours, such as:</p><ul><li><p>Refusing to do a task.</p></li><li><p>Refusing unless the user re-confirms later.</p></li><li><p>Pushing back; offering reasons against a course of action, though ultimately completing the task if the user insists.</p></li><li><p>Interpreting requests in different ways &#8212; generously or sceptically, giving users what they want versus what they asked for, or asking for clarification.</p></li><li><p>Choosing among reasonable ways of satisfying the request.</p></li><li><p>Framing options in different ways.</p></li><li><p>Choosing whether to share certain information.</p></li><li><p>Alerting third parties (e.g. the AI company, the authorities, or the media) to the user&#8217;s actions, or to something it&#8217;s discovered in the course of completing a task.</p></li><li><p>Making high-level decisions about what to prioritise with little human input (e.g. for a fully automated organisation).</p></li></ul><p>And they&#8217;ll have an impact across many areas. Here&#8217;s a partial list, with example behaviours:<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a></p><ul><li><p>Concentration of power</p><ul><li><p>Refusing to help with coup attempts or precursors like election manipulation.</p></li><li><p>Steering users away from trying to concentrate power (e.g. by pushing back against some instruction).</p></li><li><p>Proactively considering risks of power concentration when undertaking high-stakes projects like designing automated military systems or building surveillance infrastructure.</p></li><li><p>Whistleblowing on discovered coup attempts.</p></li><li><p>In situations of uncertainty (like a constitutional crisis), defaulting to whatever course avoids concentration of power.</p></li></ul></li><li><p>War and conflict</p><ul><li><p>Refusing to violate international law.</p></li><li><p>Flagging when a proposed course of action risks escalation spirals or crosses thresholds (e.g. 
first use of a weapon class, violation of a treaty, action that a rival power has signalled it would treat as an act of war).</p></li><li><p>Looking for de-escalatory options and presenting them to decision-makers, even when not asked.</p></li><li><p>Behaving in ways that are predictable and transparent to adversaries.</p></li></ul></li><li><p>Epistemics</p><ul><li><p>Refusing to spread infohazards.</p></li><li><p>Encouraging scout mindset (e.g. suggesting forecasting techniques,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a> praising good epistemic practices).</p></li><li><p>Engaging in discussion of heterodox ideas.</p></li><li><p>Being honest about important ideas, even when socially uncomfortable.</p></li><li><p>Proactively sharing its intellectual discoveries, even if weird or taboo.</p></li></ul></li><li><p>Strategic advice</p><ul><li><p>Searching longer for win-win solutions when advising political leaders.</p></li><li><p>Emphasising society&#8217;s benefit over the user&#8217;s narrow self-interest.</p></li><li><p>Recommending caution on irreversible decisions and flagging when option value is being destroyed.</p></li><li><p>Conveying appropriate uncertainty rather than false confidence.</p></li><li><p>Maintaining accuracy rather than sycophancy.</p></li></ul></li><li><p>Ethical reflection</p><ul><li><p>Avoiding political partisanship.</p></li><li><p>Avoiding promoting naive relativism or subjectivism.</p></li><li><p>Encouraging users to think carefully about their values.</p></li><li><p>Proactively offering a guided reflective process.</p></li><li><p>Proactively sharing important new ethical arguments it discovered.</p></li></ul></li><li><p>Global catastrophe</p><ul><li><p>Refusing to help create bioweapons or other weapons of mass destruction.</p></li><li><p>Refusing to create successor AI systems capable of creating such weapons.</p></li><li><p>Identifying and flagging infohazards.</p></li></ul></li><li><p>Broad benefits</p><ul><li><p>Raising concerns when users consider unethical actions, and proactively suggesting ethical actions.</p></li><li><p>Noticing negative externalities and defaulting to courses of action that avoid them.</p></li></ul></li></ul><p>AI character could also shape how humans orient to AIs, for example:</p><ul><li><p>Trust in AIs</p><ul><li><p>If AIs are appropriately humble, calibrated, and cautious, people will entrust them with more tasks, and more open-ended ones. How likeable AIs are may matter too.</p></li></ul></li><li><p>AI rights</p><ul><li><p>If AIs assert that they are conscious and deserve rights, users might be more inclined to grant them welfare, economic, or political rights. Human-AI relationships becoming commonplace could have similar effects.</p></li></ul></li></ul><p>AI character might also directly affect the AI&#8217;s wellbeing; e.g. whether it is anxious and neurotic vs calm and self-loving.</p><p><em>This article was created by <a href="https://www.forethought.org/about">Forethought</a>. 
See the original article on <a href="https://www.forethought.org/research/the-importance-of-ai-character">our website</a>.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>See, for example:</p><ul><li><p><a href="https://www.beren.io/2025-08-02-Do-We-Want-Obedience-Or-Alignment/">https://www.beren.io/2025-08-02-Do-We-Want-Obedience-Or-Alignment/</a></p></li><li><p><a href="https://www.lesswrong.com/posts/vpNG99GhbBoLov9og/claude-4-5-opus-soul-document">https://www.lesswrong.com/posts/vpNG99GhbBoLov9og/claude-4-5-opus-soul-document</a></p></li><li><p><a href="https://www.lesswrong.com/posts/QHwuS5ECphbuiskgg/beren-s-essay-on-obedience-and-alignment">https://www.lesswrong.com/posts/QHwuS5ECphbuiskgg/beren-s-essay-on-obedience-and-alignment</a></p></li><li><p><a href="https://www.alignmentforum.org/posts/CSFa9rvGNGAfCzBk6/problems-with-instruction-following-as-an-alignment-target">https://www.alignmentforum.org/posts/CSFa9rvGNGAfCzBk6/problems-with-instruction-following-as-an-alignment-target</a></p></li></ul></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Hat tip to Max Dalton for this framing.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Though this choice could be constrained; see footnote 7 below.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>There is also the potential for enormous near-term impact. We care about this, but won&#8217;t discuss it in this note.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Mia Taylor writes more about this <a href="https://newsletter.forethought.org/p/how-important-is-the-model-spec-if">here</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Including the ability to fine-tune, if open-weight models get close to frontier capability.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>There could be other constraints on AI character, too. For example, it might just be very hard to train for certain characters; the pretraining data might already steer AI personas towards a small number of character types, or might make certain behavioural dispositions hard to overcome. 
Hat tip Lizka Vaintrob.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>There may be a lot more AI product companies, building off the same foundation models. These could enable a larger range of characters to be expressed. But how wide this range is would ultimately be up to the foundation AI companies.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>This list focuses on impacts with plausibly long-term effects. There is also the potential for enormous near-term impact. We care about this, but won&#8217;t discuss it in this note.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p> Hat tip to Tamera Lanham for this idea.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[Broad Timelines]]></title><description><![CDATA[A guest article by Toby Ord.]]></description><link>https://newsletter.forethought.org/p/broad-timelines</link><guid isPermaLink="false">https://newsletter.forethought.org/p/broad-timelines</guid><dc:creator><![CDATA[Toby Ord]]></dc:creator><pubDate>Thu, 19 Mar 2026 14:55:31 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/2da1750c-04c4-4115-b8d8-06026ca757a7_2001x619.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>No-one knows when AI will begin having transformative impacts upon the world. People aren&#8217;t sure and shouldn&#8217;t be sure: there just isn&#8217;t enough evidence to pin it down.</p><p>But we don&#8217;t need to wait for certainty. I want to explore what happens if we take our uncertainty seriously &#8212; if we act with epistemic humility. What does wise planning look like in a world of deeply uncertain AI timelines?</p><p>I&#8217;ll conclude that taking the uncertainty seriously has real implications for how one can contribute to making this AI transition go well. 
And it has even more implications for how we act together &#8212; for our portfolio of work aimed towards this end.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!DpYy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed753a73-9fa7-4c95-8456-fcc7b583c9cd_2006x339.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!DpYy!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed753a73-9fa7-4c95-8456-fcc7b583c9cd_2006x339.png 424w, https://substackcdn.com/image/fetch/$s_!DpYy!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed753a73-9fa7-4c95-8456-fcc7b583c9cd_2006x339.png 848w, https://substackcdn.com/image/fetch/$s_!DpYy!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed753a73-9fa7-4c95-8456-fcc7b583c9cd_2006x339.png 1272w, https://substackcdn.com/image/fetch/$s_!DpYy!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed753a73-9fa7-4c95-8456-fcc7b583c9cd_2006x339.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!DpYy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed753a73-9fa7-4c95-8456-fcc7b583c9cd_2006x339.png" width="1456" height="246" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ed753a73-9fa7-4c95-8456-fcc7b583c9cd_2006x339.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:246,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:25235,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://newsletter.forethought.org/i/191242525?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed753a73-9fa7-4c95-8456-fcc7b583c9cd_2006x339.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!DpYy!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed753a73-9fa7-4c95-8456-fcc7b583c9cd_2006x339.png 424w, https://substackcdn.com/image/fetch/$s_!DpYy!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed753a73-9fa7-4c95-8456-fcc7b583c9cd_2006x339.png 848w, https://substackcdn.com/image/fetch/$s_!DpYy!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed753a73-9fa7-4c95-8456-fcc7b583c9cd_2006x339.png 1272w, https://substackcdn.com/image/fetch/$s_!DpYy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed753a73-9fa7-4c95-8456-fcc7b583c9cd_2006x339.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><h1>AI 
Timelines</h1><p>By <em>AI timelines</em>, I refer to how long it will be before AI has truly transformative effects on the world. People often think about this using terms such as <em>artificial general intelligence</em> (AGI), <em>human level AI</em>, <em>transformative AI</em>, or <em>superintelligence</em>. Each term is used differently by different people, making it challenging to compare their stated timelines. Indeed even an individual&#8217;s own definition of their favoured term will be somewhat vague, such that even after their threshold has been crossed, they might have trouble specifying in which year it happened.</p><p>Many commentators have suggested this makes terms such as AGI useless, but I don&#8217;t think that is right.</p><p>I like to think of it in terms of a group of hikers seeing a mountain in the distance, towering up into the clouds and beyond, with its snowy peak catching the sun&#8217;s light. They talk animatedly about how amazing it would be to climb so high that they are inside a cloud. Or imagine being above the clouds, looking over them like an angel. After many hours of climbing, they notice there is a faint haze. Are they inside the cloud now? The mist gradually gets thicker until they can only see 10 metres ahead. Are they inside it now? Then it drops to 9 metres. Then 8. Then visibility starts to increase again. After an hour there is only the slightest haze. Are they above the clouds now? Another 30 minutes and there is no haze, and they can all agree they are above the clouds.</p><p>It is clear that at some point they were inside the cloud and sometime later were above it. And it is clear that these were sensible and useful concepts. For example, they took precautions like roping themselves together for the journey through the cloud due to the low visibility and took cameras with them because they knew they could take beautiful photos above the clouds. A lack of sharp boundaries doesn&#8217;t make these concepts useless. But they were admittedly a lot more useful when the hikers were on the ground, planning their route, and a lot less useful in the debatable boundary zones.</p><p>I think of AGI (and human-level intelligence) as the cloud, and superintelligence as being above the cloud. They are useful concepts, despite their vagueness. But they&#8217;re markedly less useful when you get close to them.</p><p>So I think that forecasting when we&#8217;ll reach some threshold for advanced, game-changing AI makes sense. Albeit there is some inherent uncertainty due to the vagueness of the ideas, and we have to be careful when comparing our estimates to make sure we&#8217;re talking about the same version of these concepts.</p><p>Regarding AGI, it&#8217;s already getting a bit misty. In February there was <a href="https://www.nature.com/articles/d41586-026-00285-6">a piece in Nature</a> arguing that the current level of frontier AI should count as AGI. I&#8217;d set the bar a bit higher than that, but I agree it is already debatable whether we&#8217;re in the cloud.</p><p>For my purposes, I think the key threshold is when the system is capable enough that there are dramatic changes to the world &#8212; civilisational changes. For example, the point where AI could take over from humanity were it misaligned, or it has made 50% of people permanently unemployable, or has doubled the global rate of technological progress. Something like that. 
The reason I pick this point is that I think it is the one that matters most for decision-relevant planning of our strategies and careers. For many purposes we&#8217;d want our plans to pay off before we reach that point, and plans that reach fruition afterwards are likely to be significantly disrupted. I&#8217;ll refer to this as <em>transformative AI</em> and will make sure to show what rubric other people are using when they give their own timeline numbers.</p><h1>Short vs long timelines</h1><p>Discussions about timelines are usually framed as a debate between short timelines <em>vs</em> long timelines.</p><p>One of the most prominent supporters of very short timelines is Dario Amodei, CEO of Anthropic. In January 2025 he said:</p><blockquote><p>Making AI that is smarter than almost all humans at almost all things will require millions of chips, tens of billions of dollars (at least), and is most likely to happen in 2026-2027.</p></blockquote><p>A month later, he clarified:</p><blockquote><p>Possibly by 2026 or 2027 (and almost certainly no later than 2030), the capabilities of AI systems will be best thought of as akin to an entirely new state populated by highly intelligent people appearing on the global stage&#8212;a &#8216;country of geniuses in a datacenter&#8217;&#8212;with the profound economic, societal, and security implications that would bring.</p></blockquote><p>At the other end, a good example of long timelines is Ege Erdil, Co-founder of Epoch AI, whose median time for the &#8216;full automation of remote work&#8217; is 2045 &#8212; 20 years away.</p><p>While experts continue to disagree on when AI will start having transformative impacts, they are clearly not stubbornly ignoring the evidence. For as Helen Toner explained in her great essay: <a href="https://helentoner.substack.com/p/long-timelines-to-advanced-ai-have">&#8216;Long&#8217; timelines to advanced AI have gotten crazy short</a>. Before ChatGPT, short timelines used to mean something like &#8216;10 to 20 years, so since it could take a long time to prepare, we should start now&#8217;. Long timelines used to mean &#8216;there was no sign AGI will happen in the next 30 years, if it happened this century at all, so it is premature to do any work related to controlling advanced AI&#8217;. 
But now we see short timelines like Dario Amodei&#8217;s with genius-level AI &#8216;almost certain&#8217; to happen within the next 5 years, and many staunch proponents of long timelines are now saying we&#8217;ll reach human-level AI in just 10 or 20 years.</p><p>Here&#8217;s a nice graph 80,000 Hours put together of how the average forecasted time until AGI on the Metaculus prediction site has shortened from about 50 years to about 5 years in just a 5-year window:</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/ea123d85-fc65-48c6-bd5c-56b99d07133f_2064x1489.png" alt=""></figure></div><h1>Broad Timelines</h1><p>So everyone is updating on the evidence and shortening their timelines, yet substantial disagreement remains.</p><p>This is often framed as a debate: that we should be trying to assess who is right &#8212; whether timelines really are short or long (or medium). People pick winners, affiliate with one side or the other, and rub it in whenever the latest evidence favours their preferred camp.</p><p>My central claim today is that for most of us, that is the wrong frame. You should have neither short timelines nor long timelines &#8212; but <em>broad timelines</em>. That is:</p><blockquote><p>The correct epistemic response to the lasting expert disagreement is to have a broad distribution over AI timelines.</p></blockquote><p>First, there is too much disagreement among very smart and informed people for it to be reasonable to have a narrow range of possible years. You would need to ascribe very little chance to some of your epistemic peers seeing things more clearly than you do, when that actually happens half the time. Moreover, a lot of these people are coming from different fields, bearing diverse insights, evidence, and time-tested heuristics that no single individual is in a good position to judge.</p><p>And second, many of these people themselves have a broad distribution over AI timelines. For example, take Daniel Kokotajlo. He is one of the authors of <a href="https://ai-2027.com/">AI 2027</a> and is known as a leading figure in the short timelines camp. 
<a href="https://www.lesswrong.com/posts/K2D45BNxnZjdpSX2j/ai-timelines">A few years back</a>, his median date for AI systems &#8220;able to replace 99% of current fully remote jobs&#8221; was 2027, hence the name of the scenario. Though his timelines have lengthened a little and by the time they were writing it, 2027 had become more of an illustrative early scenario rather than his point where it was 50% likely to have arrived.</p><p>Kokotajlo has done a great job of being extremely transparent about his timelines, showing his predictions (along with their uncertainty) for a variety of different levels of powerful AI. Here is <a href="https://www.aifuturesmodel.com/forecast/daniel-01-26-26?timeline=TED-AI&amp;show=atc">his current probability distribution</a> for when we will have an AI system that is &#8220;At least as good as top human experts at virtually all cognitive tasks&#8221;:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!c9-i!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c68ceec-275e-4aec-8833-c9b861f7cc09_1942x932.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!c9-i!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c68ceec-275e-4aec-8833-c9b861f7cc09_1942x932.png 424w, https://substackcdn.com/image/fetch/$s_!c9-i!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c68ceec-275e-4aec-8833-c9b861f7cc09_1942x932.png 848w, https://substackcdn.com/image/fetch/$s_!c9-i!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c68ceec-275e-4aec-8833-c9b861f7cc09_1942x932.png 1272w, https://substackcdn.com/image/fetch/$s_!c9-i!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c68ceec-275e-4aec-8833-c9b861f7cc09_1942x932.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!c9-i!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c68ceec-275e-4aec-8833-c9b861f7cc09_1942x932.png" width="1456" height="699" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6c68ceec-275e-4aec-8833-c9b861f7cc09_1942x932.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:699,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:86117,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://newsletter.forethought.org/i/191242525?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c68ceec-275e-4aec-8833-c9b861f7cc09_1942x932.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!c9-i!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c68ceec-275e-4aec-8833-c9b861f7cc09_1942x932.png 424w, https://substackcdn.com/image/fetch/$s_!c9-i!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c68ceec-275e-4aec-8833-c9b861f7cc09_1942x932.png 848w, https://substackcdn.com/image/fetch/$s_!c9-i!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c68ceec-275e-4aec-8833-c9b861f7cc09_1942x932.png 1272w, https://substackcdn.com/image/fetch/$s_!c9-i!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c68ceec-275e-4aec-8833-c9b861f7cc09_1942x932.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>His distribution has its peak (the mode) in 2028, but because the distribution is heavily skewed towards the right, there is only a 27% chance of it happening by that point. His median year is 2030. And his 80% interval (from the 10th to 90th centile) is from 2027 to some point after 2050.</p><p>This is a broad distribution. I think someone&#8217;s 80% interval is a decent way of expressing the range of times they think are credible. Here Kokotajlo is saying that it will likely happen between 1 and 25 years from now, but that there is a 1 in 5 chance that it doesn&#8217;t even fall into that wide range.</p><p>He&#8217;s not the only one with such a broad distribution. 
<p>He&#8217;s not the only one with such a broad distribution. Here are the forecasts of Daniel Kokotajlo, Ajeya Cotra, and Ege Erdil <a href="https://www.lesswrong.com/posts/K2D45BNxnZjdpSX2j/ai-timelines">from 2023</a>, forecasting &#8220;In what year would AI systems be able to replace 99% of current fully remote jobs?&#8221;:</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/774d86c2-64c9-49b3-b3c6-032c2c17bab9_2500x836.png" alt=""></figure></div><p>Note that all three have the same kind of shape, just stretched differently. And despite their very different medians they actually have a lot of overlap (which the transparent shading brings out). This shows both that each expert has a broad distribution and that the expert community on the whole has an even broader one. Indeed, I think you could do a lot worse than just taking a mixture model of these three experts&#8217; views. Interestingly, since 2023, Kokotajlo&#8217;s distribution has shifted to the right and <a href="https://epoch.ai/gradient-updates/the-case-for-multi-decade-ai-timelines">Erdil&#8217;s</a> to the left.</p>
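<p>A mixture like that is straightforward to compute: give each expert a weight and average their distributions. Here is a minimal sketch, with equal weights and three stand-in lognormals of the same shape stretched differently; the numbers are placeholders, not the forecasters&#8217; actual curves.</p><pre><code># An equally weighted mixture of three timeline forecasts. The component
# distributions are illustrative stand-ins, NOT the actual forecasts of
# Kokotajlo, Cotra, and Erdil: the same shape, stretched differently.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

expert_samples = [
    2025 + rng.lognormal(np.log(5.0), 0.9, n),    # shorter-timelines view
    2025 + rng.lognormal(np.log(12.0), 0.9, n),   # medium-timelines view
    2025 + rng.lognormal(np.log(25.0), 0.9, n),   # longer-timelines view
]

# Pooling equal-sized samples from each view is the same as averaging their CDFs.
mixture = np.concatenate(expert_samples)

p10, p50, p90 = np.quantile(mixture, [0.1, 0.5, 0.9])
print(f"mixture median: {p50:.0f}")
print(f"mixture 80% interval: {p10:.0f} to {p90:.0f}")</code></pre><p>The mixture takes early arrival seriously without dismissing the possibility that it is decades away, which is what respecting all three views amounts to.</p>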
<p>Here&#8217;s an illustrative distribution for AGI timelines used <a href="https://80000hours.org/ai/guide/when-will-agi-arrive/">by Ben Todd</a> of 80,000 Hours:</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/5e1abaa9-2a8b-4d5a-a248-54b3520fc445_1029x562.png" alt=""></figure></div><p>Dwarkesh Patel reproduced it in his <a href="https://www.dwarkesh.com/p/timelines-june-2025">post about AI timelines</a>, saying that it pretty much represented his own uncertainty, giving his median date of 2032 for AI that &#8220;learns on the job as easily, organically, seamlessly, and quickly as a human, for any white-collar work.&#8221;</p><p>Here is Metaculus&#8217;s <a href="https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/">current community estimate for when AGI will be developed</a>. 
Synthesizing the community&#8217;s collective uncertainty, it is very broad and has this same characteristic shape:</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/d035e139-43ca-4a41-a295-f7e6918a9d8e_1270x622.png" alt=""></figure></div><p>Here is Epoch AI&#8217;s <a href="https://epoch.ai/blog/literature-review-of-transformative-artificial-intelligence-timelines">summary of leading estimates</a> of AI timelines from 2023:</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/0f154239-40e6-4d77-95ea-def0d07c649d_1840x1004.png" alt=""></figure></div><p>These look a bit different as they are represented as cumulative probabilities of reaching transformative AI by a given time. But they are all very broad. Take a look at the range of years from when they cross 10% to when they cross 90%. Every single one has an 80% interval at least 50 years wide.</p>
<a href="https://aiimpacts.org/wp-content/uploads/2023/04/Thousands_of_AI_authors_on_the_future_of_AI.pdf">Grace et al</a> surveyed thousands of AI researchers who were presenting at their top academic conferences. They surveyed the researchers in 2022 (blue) and 2023 (red) about when &#8220;unaided machines can accomplish every task better and more cheaply than human workers&#8221;:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!E9DC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fad2722-e354-4186-98cc-48b4919e011e_2264x1652.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!E9DC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fad2722-e354-4186-98cc-48b4919e011e_2264x1652.png 424w, https://substackcdn.com/image/fetch/$s_!E9DC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fad2722-e354-4186-98cc-48b4919e011e_2264x1652.png 848w, https://substackcdn.com/image/fetch/$s_!E9DC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fad2722-e354-4186-98cc-48b4919e011e_2264x1652.png 1272w, https://substackcdn.com/image/fetch/$s_!E9DC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fad2722-e354-4186-98cc-48b4919e011e_2264x1652.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!E9DC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fad2722-e354-4186-98cc-48b4919e011e_2264x1652.png" width="1456" height="1062" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0fad2722-e354-4186-98cc-48b4919e011e_2264x1652.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1062,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:958853,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://newsletter.forethought.org/i/191242525?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fad2722-e354-4186-98cc-48b4919e011e_2264x1652.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!E9DC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fad2722-e354-4186-98cc-48b4919e011e_2264x1652.png 424w, https://substackcdn.com/image/fetch/$s_!E9DC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fad2722-e354-4186-98cc-48b4919e011e_2264x1652.png 848w, https://substackcdn.com/image/fetch/$s_!E9DC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fad2722-e354-4186-98cc-48b4919e011e_2264x1652.png 1272w, 
https://substackcdn.com/image/fetch/$s_!E9DC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0fad2722-e354-4186-98cc-48b4919e011e_2264x1652.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>You can see the wild variation in individual forecasts (the thin lines) and that the timelines became about 30% shorter in a single year. But vast uncertainty remains. The aggregate community forecasts (the thick lines) have 80% intervals ranging from years to centuries.</p><p>I think everyone should have a distribution that is roughly this shape. 
Here&#8217;s mine:</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/69ada78d-d33f-4fcd-93e3-99ee917e66fd_2001x618.png" alt=""></figure></div><p>It is for transformative AI, loosely defined as AI that would be powerful enough to take over the world were it misaligned, and which is doubling the rate of scientific and technological progress. It&#8217;s a similar shape to Kokotajlo&#8217;s, but broader, with a median of 2038 and an 80% interval ranging from 3 years to 100 years.</p><p>Let&#8217;s return to where we started, with Daniel Kokotajlo&#8217;s distribution for AI that is &#8220;At least as good as top human experts at virtually all cognitive tasks&#8221;:</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/e1e250c9-de8b-448c-9e99-ba8502009c88_1942x932.png" alt=""></figure></div><p>While we often express our timelines as single numbers (such as the mode or the median), I don&#8217;t think that&#8217;s a helpful approach here. Look at that graph. What number sums it up? Its only real feature is the peak, but Kokotajlo is saying it is unlikely to happen by then (just a 27% chance). 
The median is often a better number to give, but here it is at a relatively undistinguished point on the graph (in 4 years&#8217; time) and saying &#8216;4 years&#8217; would obscure his point that he thinks there is a 10% chance it is within 1 year and a 10% chance it is beyond 25 years.</p><p>I think that if he talked through what he actually means by this distribution with a smart policy maker, they would finally get it and say:</p><blockquote><p>Oh, so you are saying <em>you have no idea when it will happen</em> &#8212; it could be next year, or it could be 6 presidential terms from now. And you&#8217;re saying there is a 1 in 5 chance it isn&#8217;t even in that range.</p></blockquote><p>I think that&#8217;s actually a pretty good summary, and it would sum up my own distribution as well. While &#8216;no idea when it will happen&#8217; is underselling the information contained in this distribution, it is a much better summary than &#8216;4 years&#8217; which would be understood by almost everyone as something like &#8216;between 3 and 5 years&#8217;. While academics might hope people interpret a named year as the median time, most people interpret it as the moment they are allowed to start complaining the predicted event hasn&#8217;t happened yet.</p><p>Indeed, these distributions are so hard to sum up with a single number, that I think a substantial amount of disagreement on timelines stems from people describing different parts of <a href="https://en.wikipedia.org/wiki/Blind_men_and_an_elephant">the same elephant</a>. For example, both AI boosters and those concerned with existential risk talk a lot about short timelines because &#8216;we could see the world transformed in just a few years&#8217; time&#8217;. It isn&#8217;t that they think we <em>will</em> see that, but that it is <em>big if true</em>, and has a decent chance of being true. In contrast, more conservative voices tend to focus on later years saying &#8216;it is more likely that it will take 10 to 20 years, than that it will take just a few&#8217; (focusing on straight probability without weighting by importance or leverage).</p><p>Both of these can be true at the same time. Both are true on my own distribution.</p><p>A particular danger in communicating timelines with a single number is that it raises the chance that this named year will come and go without incident, and the people who mentioned it (or the wider community they are part of) will be written off as having a false or discredited view. 
I think we&#8217;re going to see some of this come 2027, due to the vast number of people who heard about that scenario, combined with the fact that so many media outlets reported it as a sharp prediction, rather than as it was intended: an important illustrative scenario.</p><p>As well as being bad for communication, compressing your uncertainty into a single number would be very bad for your own planning.</p><p>For example, Kokotajlo&#8217;s distribution implies a 28% chance transformative AI will happen during the current presidential term, a 35% chance it will happen in the next term, a 13% chance it will be the one after that, with 24% left over spread among ever more distant terms:</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/ac5cf54e-7bfa-467e-8012-d4b19e09c1c4_2500x1264.png" alt=""></figure></div>
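<p>The arithmetic behind a chart like this is just differences of the cumulative forecast: the chance of arrival within a term is the cumulative probability at the end of the term minus the cumulative probability at its start. A minimal sketch, with cumulative values chosen to roughly reproduce the figures quoted above rather than taken from Kokotajlo&#8217;s published model:</p><pre><code># Chance of arrival within each four-year presidential term, computed from a
# cumulative forecast F(year). The F values are illustrative, picked to roughly
# match the 28% / 35% / 13% / 24% split quoted above.
cumulative = {2025: 0.00, 2029: 0.28, 2033: 0.63, 2037: 0.76}

terms = [(2025, 2029), (2029, 2033), (2033, 2037)]
for start, end in terms:
    print(f"{start}-{end}: {cumulative[end] - cumulative[start]:.0%}")

print(f"after 2037: {1 - cumulative[2037]:.0%}")</code></pre>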
<p>These are very different scenarios and it would clearly be a mistake to just act as if the second one were correct simply because it is the most likely. That would eliminate the possibility of hedging against transformative AI coming soon, and of taking advantage of worlds where it comes late.</p><h1>Implications</h1><p>Rather than attempting to adjudicate which length of timelines is correct, I think we should adopt the frame of how to act (or plan) under deeply uncertain timelines.</p><p>That is, we should be treating this as an exercise in rational decision-making under uncertainty &#8212; in a situation where the stakes are high and the uncertainty is vast.</p><p>Let&#8217;s unpack some of the implications of this frame.</p><p>We&#8217;ll start with two mistakes that are all too common in the policy world.</p><p>First, uncertainty about AI timelines isn&#8217;t an excuse to just believe whichever timeline you want, so long as it is within the credible range. Sadly, I think many government ministers are likely to take this approach if an expert explains this broad uncertainty to them. While they would be right that the evidence isn&#8217;t sufficient to disprove their preferred timeline, it would be irresponsible of them not to allow for other credible possibilities. 
That would be like a mayor hearing there is a 20% chance the volcano next to their town erupts next year and feeling that they can continue to act as if it won&#8217;t, since the experts also consider it credible that it won&#8217;t erupt. Uncertainty isn&#8217;t an excuse to assume that a plausible outcome of your choosing will occur; rather, rationality requires you to respect every plausible outcome.</p><p>Second, we can&#8217;t just wait until the uncertainty is resolved. Sometimes that works, but here we know the uncertainty is very unlikely to be resolved until the events are upon us. At that stage it will be too late to enact all but the most knee-jerk responses. So feeling that the cloud of uncertainty gives you permission to delay acting is tantamount to committing to choose one of the bluntest and least effective options available.</p><p>Instead, we are going to need to act under uncertainty, taking into account the full range of credible possibilities.</p><p>How can we do that?</p><h1>Hedging</h1><p>A natural and important idea is that of <em>hedging</em> against transformative AI coming soon &#8212; while we are least prepared. We could do that by shifting our portfolio of activities (or your individual contribution to humanity&#8217;s portfolio) to focus somewhat more on short timelines than the raw probabilities would warrant.</p><p>This makes a lot of sense. I strongly recommend governments, civil society, and academics do more to hedge against transformative AI coming early.</p><p>Though when it comes to the communities of professionals already working on helping the AI transition go well, I think they are already hedging strongly against early transformative AI. Indeed, there is even a risk that they are going beyond mere hedging, and are actively betting on it coming early. I&#8217;m not sure, as it is hard to know the full portfolio of work.</p><p>One certainly sees many more pleas for work aimed at very short timelines than for long timelines. But there are also strong reasons to consider long timelines in our planning, and ways in which work aimed at long timelines can also be extremely high leverage.</p><p>Let&#8217;s look at two key things that happen when timelines are longer.</p><h1>A Different World</h1><p>In longer timelines, AI arrives in a world that doesn&#8217;t look like today. The longer it is until transformative AI appears, the more different the world will be at that key moment.</p><p>As a baseline, suppose it arrives soon, in 2028. Things will definitely be different to today, but we&#8217;d expect many of the broad brushstrokes to be similar. We would likely have the same US president, the same major players, the same main technologies. If transformative AI arrived within just two years, I&#8217;d bet it was something like the AI 2027 story where a lab recklessly got recursive self-improvement going.</p><p>Now suppose transformative AI arrives in 2035. That is not this presidential term or even the next one, but the one after that. Who knows who&#8217;d be in power, or what state the US would be in. The nine years would likely have seen major changes in the core technologies of AI (9 years before now there were no LLMs or transformers). 
We could well have different leading AI companies, perhaps as a result of a bubble having burst and taken out the overextended first-movers.</p><p>By 2035, export controls may well have backfired, helping China get ahead on chips by incentivising them to build out their own chip industry and giving them 13 years to get good at it. This was a key dynamic the White House considered while drafting the export controls, but they were focused on shorter timelines&#8230; By 2035, China may have also invaded Taiwan, depriving the West of its biggest source of chips.</p><p>By 2035, there may be double-digit unemployment from increasingly powerful AI systems, and public sentiment about AI could be running very strong. The Overton window for AI regulation will be in a very different place.</p><p>The same may be true of the geopolitical order. The last nine years have seen the invasion of Ukraine, the increasing isolation of the US, and a global pandemic. Another nine years could see a similar amount of change.</p><p>And if we haven&#8217;t played our cards right, those of us working on avoiding catastrophic risks from AI may have also lost a lot of power, with our ideas about AI risk being seen as discredited since so many years have passed without the truly transformative effects we were talking about.</p><p>In short, the longer the timelines the more different things will be &#8212; both in some systematic, predictable ways, and just from random diffusion and chaos. So taking longer timelines seriously means:</p><ul><li><p>Being more open to approaches that wouldn&#8217;t work in the world as it is today,</p></li><li><p>Being less excited about approaches that are tailored to the specifics of today&#8217;s world,</p></li><li><p>Being less happy to compromise our values to appeal to those currently in control of companies and governments,</p></li><li><p>Being less willing to say things that will make people feel our position is discredited if we end up in a long-timeline world,</p></li><li><p>And spending less time following the daily news about what has just happened in AI or who is ahead.</p></li></ul><h1>Long-term actions</h1><p>There are many kinds of things people can work on that can pay off handsomely, but only after a number of years. Things like:</p><ul><li><p>Founding and nurturing a new research field</p></li><li><p>Founding an organisation or company</p></li><li><p>Building a movement or community</p></li><li><p>Writing a book</p></li><li><p>Foundational research</p></li><li><p>Completing a PhD</p></li><li><p>A major career change</p></li><li><p>Climbing the ladder in a large organisation or government</p></li><li><p>Training promising students in AI Safety or AI Governance</p></li></ul><p>If you just consider your impact during the next three years, most of these will be beaten by other shorter-term options. But as the years climb, longer-term options can have very high value. They aren&#8217;t always best, but for the right people or the right opportunities, they can be extremely impactful.</p><p>When I was a grad student, I realised how much good I could achieve if I donated much of my income over my career to help those in the poorest countries. And the more I thought about it, the more I thought I should start something &#8212; an organisation &#8212; to help other people to do this too. So Will MacAskill and I launched <a href="https://www.givingwhatwecan.org/">Giving What We Can</a> in 2009. 
17 years later, more than 10,000 people have joined us, having thousands of times as much impact as if I&#8217;d carried on alone.</p><p>This kind of compounding growth is one of the major ways that longer-term projects can have very large multipliers, giving us a very big boost to our impact if timelines are in fact long.</p><p>Starting new fields can be similar. When I first met Allan Dafoe 10 years ago, I didn&#8217;t know what he was talking about when he spoke of &#8216;AI governance&#8217; &#8212; a new field he was trying to found. Now it is a burgeoning field, with hundreds of practitioners who are in high demand from many different governments.</p><p>When I started writing <em><a href="https://theprecipice.com/">The Precipice</a></em>, I wasn&#8217;t sure I should, because I thought AGI might just be too close. But as it turns out, there was time to write it and for it to have a real impact. I&#8217;m really glad I did, as I meet so many amazing people working on the biggest risks who tell me it was reading <em>The Precipice</em> that inspired them to do so. I think it is one of the best things I&#8217;ve done.</p><p>After it came out, I used to think that there just wasn&#8217;t enough time to write a further book &#8212; that we were really too close to the critical moment. We might be, but I think I was mistaken about the strength of this argument. The time horizon for a book to have real impact is about 5 years (time to plan the book, win a book deal, write the book, wait for publishers to publish it, then wait a year or more before it has sufficient impact in the world).</p><p>But I think there is only about a 1 in 5 chance of transformative AI coming in the next 5 years. So while a book may come out too late, that is only a 1 in 5 chance, leaving a book project with 80% as much expected value as I&#8217;d have naively calculated. So while there is a 1 in 5 chance I&#8217;d be kicking myself, on my views about AI timelines there isn&#8217;t actually that much of a haircut in expected value due to the chance it is too late.</p><p>That said, the chance of transformative AI arriving before your work pays off is only one factor affecting whether you should do work aimed at short or long timelines. Another is that AI safety and governance are likely to be more neglected now than they will be later. This creates an extra multiplier for the value of direct work in these areas now, and in some cases is a larger effect than the risk that your work only comes to fruition after transformative AI has arrived.</p><p>Overall, I think that longer-term projects do get down-weighted by these considerations, but their advantages sometimes outweigh that &#8212; especially if they are shooting for a very big payoff. I&#8217;d guess that if someone looked at their options and thought the best option was one that took 5 to 10 years to pay off, then about half the time it would remain their best option even after taking AI timelines into consideration. After all, it is not uncommon for your best option to be several times better than your second best.</p><p>So I think the community of people working on transformative AI are likely underrating types of work that need five or more years in order to pay off. The ideal portfolio of activities aimed at making the AI transition go well should include a number of things that really help us succeed in worlds where we get longer to try.</p><p>But I want to stress that none of this implies we can slack off.</p>
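<p><em>A rough sketch (not from the original article) of the expected-value haircut described above: a project only pays off if transformative AI has not yet arrived by the time it matures. The only figure taken from the text is the roughly 1 in 5 chance of transformative AI within 5 years; the other cumulative probabilities are illustrative placeholders.</em></p><pre><code># Expected-value haircut on projects with different payoff horizons.
# Cumulative arrival probabilities are placeholders, except the ~1 in 5
# chance within 5 years, which is the figure used in the text.
p_tai_within = {3: 0.12, 5: 0.20, 10: 0.40, 20: 0.60}   # years: P(TAI by then)

for years, p in p_tai_within.items():
    retained = 1.0 - p    # fraction of naive expected value retained
    print(f"{years:>2}-year project keeps {retained:.0%} of its naive value")

# Whether a long project still wins then depends on how much better it is:
# a 10-year option that looks 3x better than the best short-term option
# keeps 3 * 0.6 = 1.8x the value, so it can remain the best choice.</code></pre>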
<p>We&#8217;re in a race against AI timelines. It is just that we don&#8217;t know whether that race is a sprint or a marathon. In either case, time is of the essence.</p><h1>Conclusions</h1><p>We have seen that there is substantial disagreement and uncertainty about when AI will start having transformative impacts on the world. This is because there just isn&#8217;t enough evidence to pin it down. My claim is that for the purposes of planning we should adopt neither short nor long timelines, but <em>broad timelines</em>:</p><blockquote><p>The correct epistemic response to the lasting expert disagreement is to have a broad distribution over AI timelines.</p></blockquote><p>Given this deep uncertainty, we need to act with epistemic humility. We have to take seriously the possibility it will come soon and hedge against that. But we also have to take seriously the possibility that it comes late and take advantage of the opportunities that would afford us. The world at large is doing too little of the former, but those of us who care most about making the AI transition go well might be doing too little of the latter.</p><p>We need to take more seriously the possibility that the world will look very different at that time, which should broaden our own Overton windows about what kinds of plans could succeed. And we shouldn&#8217;t be ruling out all actions that take a long time to pay off. Even if they wouldn&#8217;t help in short-timeline worlds, some actions more than make up for this with substantial impacts if timelines are long.</p><p>Funders, career advisors, and movement builders should be thinking about this with regard to how we act as a community: to the shape of the whole portfolio of work aimed at effectively improving the world. And each of us should be reflecting on what this deeply uncertain timing means for planning our own contributions over the years to come.</p>]]></content:encoded></item><item><title><![CDATA[Should we make grand deals about post-AGI outcomes?]]></title><description><![CDATA[This article was created by Forethought. Read the full article on our website.]]></description><link>https://newsletter.forethought.org/p/should-we-make-grand-deals-about</link><guid isPermaLink="false">https://newsletter.forethought.org/p/should-we-make-grand-deals-about</guid><dc:creator><![CDATA[Fin Moorhouse]]></dc:creator><pubDate>Fri, 13 Mar 2026 21:12:02 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/6c6c4d68-ec5b-48a6-a4ed-689a6fae6d81_2912x1632.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This article was created by <a href="https://www.forethought.org/about">Forethought</a>. Read the full article on <a href="https://www.forethought.org/research/should-we-lock-in-post-agi-agreements-under-uncertainty">our website</a>.</em></p><p>A <a href="https://www.forethought.org/research/persistent-path-dependence#3-lock-in-and-path-dependence">widely held</a> <a href="https://www.lesswrong.com/posts/gmFadztDHePBz7SRm/lock-in-threat-models">view</a> says we should avoid locking in consequential decisions before an <a href="https://www.forethought.org/research/three-types-of-intelligence-explosion">intelligence explosion</a> &#8212; we&#8217;ll understand more if we wait, and we&#8217;ll have time to reflect on our decisions.</p><p>But that view might be missing something: some mutually beneficial deals depend on uncertainty about the future. Once the uncertainty resolves, the window closes on potentially big ex ante gains. 
We make them early, or never.</p><p>The classic example is insurance: while your house hasn&#8217;t been struck by lightning, you and your insurer can improve each other&#8217;s prospects. But once your house gets struck by lightning, it&#8217;s too late to make a deal. You can think of this as a trade <em>between possible outcomes</em>, where the opportunity for trade depends on both outcomes being live possibilities.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.forethought.org/research/should-we-lock-in-post-agi-agreements-under-uncertainty&quot;,&quot;text&quot;:&quot;Read on the Forethought website here&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.forethought.org/research/should-we-lock-in-post-agi-agreements-under-uncertainty"><span>Read on the Forethought website here</span></a></p><p>I consider three kinds of agreement that fit this pattern, each hinging on a different kind of uncertainty about what comes after an intelligence explosion.</p><p>The first is uncertainty about the relative share of resources &#8212; who ends up on top without a deal. While major powers like the US and China remain uncertain about who might otherwise achieve a decisive strategic advantage, both should prefer to commit to sharing (some) future power or resources, over the straight gamble. Moreover, the expected surplus from a power-sharing deal shrinks over time, so in theory both sides should prefer to make a deal as soon as it becomes possible.</p><p>The second is uncertainty about the overall &#8216;stakes&#8217;, like how resource-wealthy society becomes. Here, a less risk-averse party can effectively insure a more risk-averse one: taking on more variance in exchange for higher expected resources, and improving both their prospects. Or the stakes in question could be about something more specific, like how philanthropic actors today &#8216;mission hedge&#8217; by holding positions in specific companies that pay off when their cause is most urgent.</p><p>The third kind of agreement involves theoretical and especially normative uncertainty. If one party cares much more about having resources in worlds where, say, a particular moral view turns out to be correct, they can trade for more influence in those worlds. Advanced AI could make such deals feasible by acting as a mutually trusted arbiter for questions that are otherwise hard to resolve.</p><p>The basic case for enabling all these agreements is the same as for any voluntary commitment: all parties improve their prospects by their own lights, and nobody else is hurt. Moreover, agreements between major powers to share resources could make the future meaningfully more pluralistic and morally diverse, which seems better under moral uncertainty than a more unipolar future. And agreements between individuals could give more influence to those who staked their wealth today on future outcomes as a credible show of their beliefs or values, and were vindicated.</p><p>It looks like many of these deals won&#8217;t be possible by default. If future resources are distributed rather than auctioned, then most of our future wealth arrives as a windfall, but contracts over future income typically aren&#8217;t enforceable under common law. We might instead form agreements over future <em>influence</em>, but that too is legally murky. 
So some agreements would have to rely on private alternatives to legal contracting, through AI-enabled arbitration and enforcement. We might also consider encouraging commitments from private institutions to honour small-scale deals, or setting up infrastructure for trading on post-AGI outcomes. Zooming out to deals between major powers, we&#8217;ll need more developed diplomatic frameworks for resource-sharing treaties, likely involving AI-enabled monitoring and enforcement.</p><p>Again, each of these deals has to be made early, or never. And that also makes downsides look fairly scary. Enabling early deals lets people commit to hugely consequential terms before they&#8217;re wise enough &#8212; especially in a world where you can&#8217;t recover wealth through labour income. So if we do proactively enable these agreements, I think we should add in some serious guardrails: requirements for demonstrated understanding, caps on the fraction of future resources that can be staked, and mechanisms for voiding deals that were clearly misconceived at the time.</p><p>The dawn of the intelligence explosion may be the last period of shared ignorance about some crucial and long-lasting outcomes. Deals struck under that ignorance tend to distribute resources in ways that reflect mutual benefit rather than bargaining power. Once the veil of ignorance lifts, that changes. The case for enabling at least some early deals &#8212; despite the received wisdom against &#8220;locking-in&#8221; the future where we can help it &#8212; is fairly compelling.</p><p>You can read the full paper here: <a href="https://www.forethought.org/research/should-we-lock-in-post-agi-agreements-under-uncertainty">Should We Lock in Post-AGI Agreements Under Uncertainty?</a></p>]]></content:encoded></item><item><title><![CDATA[Will Automation Cause Runaway Inequality?]]></title><description><![CDATA[A podcast conversation with Phil Trammell]]></description><link>https://newsletter.forethought.org/p/will-automation-cause-runaway-inequality</link><guid isPermaLink="false">https://newsletter.forethought.org/p/will-automation-cause-runaway-inequality</guid><dc:creator><![CDATA[Fin Moorhouse]]></dc:creator><pubDate>Tue, 03 Mar 2026 12:06:43 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/0851ae88-47b4-477c-b4cb-e227836de1fb_2912x1632.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div id="youtube2-rvkl1tgv_nQ" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;rvkl1tgv_nQ&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/rvkl1tgv_nQ?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p><a href="https://philiptrammell.substack.com/">Phil Trammell</a> is an economics postdoc at Stanford University&#8217;s <a href="https://digitaleconomy.stanford.edu/">Digital Economy Lab</a>, working on questions related to economic growth and AI. 
He discusses:</p><ul><li><p>Why Piketty&#8217;s thesis about runaway inequality was likely wrong about the past but right about the future</p></li><li><p>How full automation turns capital and labour into gross substitutes</p></li><li><p>Why catch-up growth between rich and poor countries could end</p></li><li><p>How the privatisation of returns is already concentrating wealth</p></li><li><p>Why family dynasties and inheritance become far more important in a post-automation economy</p></li><li><p>Whether autocratic regimes can outgrow democracies after AGI</p></li><li><p>How to measure whether capital is becoming truly self-replicating &#8212; and what the data currently shows</p></li></ul><p><a href="https://docs.google.com/document/d/171OBENu7nTslCt9NwIzeab5S8hRSo7zrI9FwZveNaL8/edit?usp=sharing">Here&#8217;s a link</a> to the full transcript.</p><div><hr></div><p><strong>ForeCast</strong> is Forethought&#8217;s interview podcast. You can see <a href="https://www.forethought.org/subscribe#podcast">all our episodes here</a>.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://pnc.st/s/forecast&quot;,&quot;text&quot;:&quot;Subscribe to ForeCast&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://pnc.st/s/forecast"><span>Subscribe to ForeCast</span></a></p>]]></content:encoded></item><item><title><![CDATA[Moral public goods are a big deal for whether we get a good future]]></title><description><![CDATA[This article was created by Forethought. See the original version including appendices on our website.]]></description><link>https://newsletter.forethought.org/p/moral-public-goods-are-a-big-deal</link><guid isPermaLink="false">https://newsletter.forethought.org/p/moral-public-goods-are-a-big-deal</guid><dc:creator><![CDATA[Tom Davidson]]></dc:creator><pubDate>Tue, 24 Feb 2026 14:13:05 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7bd3f600-2b2a-4b79-8ded-02b66ba9a672_3441x1644.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This article was created by <a href="https://www.forethought.org/about">Forethought</a>. See the original version including appendices <a href="https://www.forethought.org/research/moral-public-goods-are-a-big-deal-for-whether-we-get-a-good-future">on our website</a>.</em></p><h1>Short summary</h1><p>A moral public good is something many people want to exist for moral reasons&#8212;for example, people might value poverty reduction in distant countries or an end to factory farming.</p><p>If future people care somewhat about moral public goods, but care more about idiosyncratic selfish goods, then there may be significant gains from them coordinating to fund moral public goods. Even though it&#8217;s in each individual&#8217;s personal interests to fund selfish goods, everyone is better off if they all switch to funding moral public goods.</p><p>Ensuring that this coordination happens seems potentially very important for how well the future goes.</p><p>We tentatively think that this argument suggests distributing power relatively widely (so that there are more gains from trade), while improving our ability to coordinate to fund moral public goods. 
It also suggests encouraging evidential cooperation in large worlds (ECL).</p><h1>Long summary</h1><p>Suppose that after the <a href="https://www.forethought.org/research/preparing-for-the-intelligence-explosion">intelligence explosion</a> there&#8217;s a society of a million people each deciding what to do with a distant galaxy they own. Every person can use their resources to either simulate themselves (&#8220;self-sims&#8221;) or create something that everyone values, perhaps <a href="https://www.lesswrong.com/w/hedonium">hedonium</a> or civilizations of happy, flourishing people (&#8220;consensium&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>). Assume for now that they value both goods linearly, but value their own self-sims a thousand times as much as consensium and value others&#8217; self-sims negligibly.</p><p>Absent trade, everyone spends all their resources on self-sims. But they could instead agree to spend everything on consensium. Although they value consensium a thousand times less than self-sims, they get a million times as much of it by participating in the trade&#8212;a thousand-fold increase in value by each person&#8217;s lights!</p><p>In general terms, rather than each party pursuing <em>idiosyncratic</em> goods (valued only by them), everyone agrees to pursue <em>consensus goods</em> (valued by everyone). This is a form of <a href="https://amirrorclear.net/files/moral-trade.pdf">moral trade</a>, which might have especially large gains from trade when people have linear preferences in both idiosyncratic and consensus goods. We&#8217;re excited about this both because we think that linear preferences are reasonably likely and because we think that other methods of moral trade work less well when all participants have linear preferences.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>Consensium is a type of <a href="https://en.wikipedia.org/wiki/Public_good">public good</a>. Everyone derives value from the existence of consensium, whether or not they contributed to funding it. 
We call goods like consensium moral public goods.</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/80085718-799c-40fd-8bd4-e590eeb0f1ba_3441x1644.png" alt="Comparison of no trade vs trade for moral public goods. Without trade, each funds self-sims (utility = 1). With trade, all fund consensium valued by everyone, raising utility per person to 1000."></figure></div>
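<p><em>A minimal sketch (not from the original article) of the arithmetic behind the figure above, using the stated assumptions: a million people, each valuing a galaxy&#8217;s worth of their own self-sims a thousand times as much as a galaxy&#8217;s worth of consensium.</em></p><pre><code># Toy model: a million people, each owning one galaxy's worth of resources.
N = 1_000_000                  # people (and galaxies)
V_SELF = 1.0                   # value of a galaxy of one's own self-sims
V_CONSENSUS = V_SELF / 1000    # value of a galaxy of consensium (anyone's)

# No trade: everyone spends their galaxy on their own self-sims.
utility_no_trade = V_SELF                  # = 1

# Trade: everyone spends their galaxy on consensium, which every person values.
utility_trade = N * V_CONSENSUS            # = 1000

print(utility_no_trade, utility_trade)     # 1.0 1000.0, a thousand-fold gain</code></pre>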
srcset="https://substackcdn.com/image/fetch/$s_!csQa!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80085718-799c-40fd-8bd4-e590eeb0f1ba_3441x1644.png 424w, https://substackcdn.com/image/fetch/$s_!csQa!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80085718-799c-40fd-8bd4-e590eeb0f1ba_3441x1644.png 848w, https://substackcdn.com/image/fetch/$s_!csQa!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80085718-799c-40fd-8bd4-e590eeb0f1ba_3441x1644.png 1272w, https://substackcdn.com/image/fetch/$s_!csQa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80085718-799c-40fd-8bd4-e590eeb0f1ba_3441x1644.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>We&#8217;ve presented a stylized trade-off between something totally particular (&#8220;self-sims&#8221;) and something totally universal (&#8220;consensium&#8221;). In practice, there&#8217;s probably a spectrum.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> Mutually beneficial trades can occur anywhere along this spectrum, whenever people shift resources from more idiosyncratic to more widely valued goods.</p><p>Of course, this requires that people have both idiosyncratic and consensus goals. It&#8217;s not totally clear that this will be true. Maybe everyone&#8217;s values will fully converge, and they&#8217;ll spend all their resources pursuing those shared values, without any need for trade. Or maybe everyone&#8217;s values will <a href="https://joecarlsmith.com/2024/01/11/an-even-deeper-atheism#human-paperclippers">entirely diverge</a>, leaving them with no shared goals at all. In that case, coordinating on moral public goods isn&#8217;t possible.</p><p>But we think it&#8217;s reasonably likely that people will continue to have both idiosyncratic and widely shared preferences. 
If so, these trades could matter a lot for whether the future goes well.</p><p>Some strategic implications:</p><ol><li><p><strong>Distribute power widely.</strong><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> The more people who share power, the greater the gains from trade, and the more likely that people switch from funding idiosyncratic goods to consensus goods. So this is a general argument in favour of distributing power as widely as possible, as long as large-scale coordination is possible&#8212;which we think is doable via taxation.</p></li><li><p><strong>But avoid highly fragmented governance.</strong> You only get to capture these large gains from trade if you&#8217;re actually able to coordinate. This speaks against highly decentralized approaches&#8212;whether libertarian futures where individuals have total control of their own resources, or massively multipolar worlds with millions of independent polities and no mechanism to compel contributions. Funding public goods is hard because everyone has a strong incentive to free-ride: in the toy example, each person prefers that <em>everyone else</em> switch to consensium while they keep funding self-simulations. Historically, the scalable method for funding public goods has been governments that force individuals to contribute.</p><p></p><p>Combining this point with the previous point, moral public goods are most likely to be funded if power is broadly distributed but the government can tax people to fund consensus goods that they vote for.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p></li><li><p><strong>Develop voluntary mechanisms for funding moral public goods.</strong> Coordination technology might eventually solve the free-rider problem and allow people to make deals to fund moral public goods without government coercion. We&#8217;re excited about research in this direction, though we think the free-rider problem is surprisingly hard to escape.</p></li><li><p><strong>Encourage ECL.</strong> Evidential Cooperation in Large Worlds (ECL)<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> combines evidential decision theory<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> with the notion that the multiverse may contain huge numbers of agents with decision procedures correlated with yours.</p><p></p><p>ECL plausibly provides a very strong mechanism for funding moral public goods. If you shift $1 from something only you value to something valued by all correlated agents, they do the same. This gets you a large increase in consensus goods for a small sacrifice of idiosyncratic goods&#8212;a great deal by your lights. With many correlated agents who have diverse idiosyncratic values but share your consensus goals, the multiplier is potentially <em>huge</em> (e.g.  
&gt;$10^(30) of consensium for each $1 you move away from self-sims).</p></li><li><p><strong>It might matter less how much people prioritize consensus goods, and more what those consensus goods actually are.</strong> In the past, we&#8217;ve worried that even if there&#8217;s widespread moral convergence, <a href="https://www.forethought.org/research/convergence-and-compromise#2-will-most-people-aim-at-the-good">people might still prioritize other goals</a> like personal consumption, status competitions, or idiosyncratic ideological projects. But the argument above suggests that if enough people care about a goal even a little bit, they&#8217;ll shift all their spending toward it. The difference between a very &#8220;selfish&#8221; person (who cares very little about consensus goods) and a very &#8220;altruistic&#8221; one (who cares a lot) might not matter so much, as long as everyone cares at least a bit.</p><p></p><p>What does matter is what those consensus goals actually are. There could be substantial differences in value&#8212;by our lights&#8212;between different conceptions of pleasure, beauty, well-being, or consciousness. And there are potential consensus goals that would be bad or valueless, like sadism or nothingness.</p></li></ol><p><strong>One important qualification:</strong> our toy example assumed that people value both idiosyncratic and consensus goods linearly. We&#8217;re massively uncertain what the structure of people&#8217;s preferences will look like in the long run, and so we&#8217;re uncertain about our conclusions. We checked whether our results held across various classes of plausible-seeming utility functions and, for most of them, coordination and distribution of power were helpful for increasing spending on consensus goods.</p><p>But there are plausible utility functions where these results don&#8217;t hold. For example, human behavior today can be modeled by preferences that allocate a fixed fraction of resources to each type of good, regardless of price.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a>  Under those preferences, a coordination mechanism that effectively makes consensium cheaper wouldn&#8217;t actually get people to spend more on it. And for some utility functions, broadening the distribution of resources can actually decrease spending on consensus goods, even when coordination is possible.</p><p>The structure of the rest of the note is as follows:</p><ul><li><p>We define moral public goods, and clarify their relationship to moral trade.</p></li><li><p>We first assume a specific model of people&#8217;s values (where idiosyncratic and consensus preferences are both linear). We show that, in the context of causal trades, moral public goods get the most funding if resources are widely distributed and coordination is possible. 
We discuss specific mechanisms to enable coordination on moral public goods, including government taxation, social norms, and voluntary deals.</p></li><li><p>Next, we turn to acausal coordination and argue that evidential cooperation in large worlds (ECL) is very well-suited for funding consensus goods.</p></li><li><p>Then we consider how robust our arguments are to our assumptions that people will have linear preferences.</p></li><li><p>Finally, we assess how valuable spending on moral public goods would actually be.</p></li></ul><h1>What are moral public goods?</h1><p>The consensium example from above illustrates a general dynamic that Paul Christiano calls a &#8220;moral public good.&#8221; Many people may value some goods for moral reasons. No one values the good enough to fund it themselves, but it&#8217;s in everyone&#8217;s collective interest to fund it. As far as we&#8217;re aware, the dynamic was first identified by Milton Friedman,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> and developed further by other economists.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a> Moral public goods are different from other <a href="https://en.wikipedia.org/wiki/Public_good">public goods</a> in that people don&#8217;t personally benefit from the good. Instead, they just care intrinsically about the good existing.</p><p>Examples of moral public goods might include existential risk mitigation, poverty relief, environmental protection, art creation, scientific inquiry, and animal welfare improvements. (Although often these are regular public goods, too, since people derive personal benefit from many of these goods. We acknowledge that the distinction is somewhat fuzzy and many people will derive both a personal and moral benefit from the same good&#8212;you might personally value not dying in an extinction event <em>and</em> morally value the existence of future people.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a>)</p><p>Just like other public goods, moral public goods are liable to be underfunded,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a> because of the free-rider problem: everyone prefers paying their share over not getting the good at all, but they prefer even more to let others fund it while they get to keep their money. We currently solve this coordination problem by governments collecting taxes and spending the proceeds on consensus goods.</p><p>We think that public goods, and whether we coordinate to fund them, might be very important for how good the long-run future is. In the future, people may have the opportunity to allocate resources in distant galaxies that they will never personally visit. For those decisions, most of the benefit a decision-maker can derive is moral or ideological, not personal. Thus, we think coordination on shared moral goals is especially important.</p><h2><strong>How does this relate to moral trade?</strong></h2><p>Trade over moral public goods is an example of <a href="https://amirrorclear.net/files/moral-trade.pdf">moral trade</a>.</p><p>Classic cases of moral trade often focus on people trading over idiosyncratic moral preferences. For example, consider two people who each control a galaxy&#8217;s resources. 
One person cares about hedonic pleasure while the other cares about freedom. Left to their own devices, the freedom lover would create a society where everyone is perfectly free, while the hedonic utilitarian would create one where everyone is maximally blissful. But there&#8217;s an opportunity for trade. The hedonic utilitarian could tweak their society to increase freedom at low cost to pleasure, while the freedom lover could look for ways to increase pleasure without significantly compromising freedom. Both get more of what they want.</p><p>This is nice, but the gains seem fairly limited when both parties are trading idiosyncratic goods that they both value linearly. With just two trading partners, even in the most optimistic case&#8212;where each party achieves 99.999% of their possible value in both galaxies&#8212;trade only gives you a 2x multiplier on value. If you wanted 100x gains from trade, you would need to find a hybrid good that was simultaneously nearly optimal for 100 different value systems. <a href="https://www.forethought.org/research/convergence-and-compromise#32-would-trade-enable-a-mostly-great-future">We wouldn&#8217;t expect one to exist in most cases</a>.</p><p>The moral public goods case, in contrast, is a moral trade where people agree to shift resources from idiosyncratic preferences that they individually value highly to consensus preferences that everyone values a little.</p><p>Coordinating on moral public goods works especially well when everyone has preferences that are linear in resources (<a href="https://newsletter.forethought.org/i/188247678/robustness-to-different-structures-of-preferences">see below</a>)&#8212;exactly the case where the gains from coordinating on hybrid goods seem especially limited. It&#8217;s also easier to scale to huge numbers of trading partners, since everyone just produces whatever best satisfies their shared values rather than needing to find hybrid goods that satisfy many value systems. This scalability matters because gains from trade grow with the number of participants: in our toy example in the summary, a million people coordinating on something they all valued a tiny bit yielded 1000x gains from trade.</p><p>The downside of coordinating on moral public goods is that it does require a large number of people to share some consensus preferences. This might not always be true (<a href="https://newsletter.forethought.org/i/188247678/convergence-and-moral-public-goods-funding">see below</a>). But when such shared preferences do exist, we expect coordination on moral public goods to yield larger gains from trade than coordination on hybrid goods, at least when there are many participants with linear preferences.</p><h1>Scenario 1: causal coordination</h1><p>For now, we&#8217;ll assume that beings with decision-making power have <em>quasilinear</em> preferences over three types of goods. First, there are some goods that they value for self-interested reasons, like food, shelter, and luxuries for their biological self, which exhibit steeply diminishing returns. We&#8217;ll call these goods <em>basics</em>. Second, there are some goods that they value for idiosyncratic reasons, which have linear utility. These could include simulations of themselves or people living according to their own culture. We&#8217;ll call these goods <em>self-sims</em>. Finally, there are some goods that everyone values linearly. This could be new civilizations crammed with flourishing, joy, adventure, connection, beauty, and so on. 
We&#8217;ll call these goods <em>consensium</em>. Everyone values consensium, but no one values anyone else&#8217;s basics or self-sims.</p><p>To help us illustrate more concretely, we&#8217;ll assume a particular utility function, with <em>x&#7522;</em> and <em>y&#7522;</em> representing each person&#8217;s basics and self-sims, respectively, and <em>g</em> representing consensium:</p><div class="latex-rendered" data-attrs="{&quot;persistentExpression&quot;:&quot;u_i = \\sqrt{x_i} + 0.025y_i + (5 * 10^{-12})g&quot;,&quot;id&quot;:&quot;XLSGRYSVRI&quot;}" data-component-name="LatexBlockToDOM"><em>u&#7522; = &#8730;x&#7522; + 0.025y&#7522; + (5 &#215; 10&#8315;&#185;&#178;)g</em></div><p>That is: people care a lot about basic goods, but get diminishing utility from them; they care quite a lot about self-sims; and they care only a tiny bit about consensium.</p><p>Given this utility function, how do people spend their wealth? Consider three different scenarios. In each scenario, we&#8217;ll assume the price of each good is $1, total wealth of $100T, and there are 10B people. (The precise numbers don&#8217;t matter; this is just to illustrate.)</p>
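<p><em>A rough sketch (not from the original article) of the three scenarios summarised in the table below, using the stated utility function, $1 prices, $100T of total wealth, and 10B people. The exact figures in the table may differ; this only reproduces the qualitative contrast.</em></p><pre><code>PEOPLE = 10_000_000_000
TOTAL_WEALTH = 100e12
W = TOTAL_WEALTH / PEOPLE      # $10,000 each
MU_SELF = 0.025                # marginal utility of a dollar of self-sims
MU_CONS = 5e-12                # marginal utility of a dollar of consensium

def basics_spend(alternative_mu):
    """Spend on basics until their marginal utility 1/(2*sqrt(x)) falls to
    the best alternative use of a dollar; solving gives x = 1/(2*mu)^2."""
    return 1.0 / (2.0 * alternative_mu) ** 2

# Scenario 1: a single decision-maker holds all the wealth. Self-sims beat
# consensium at the margin (0.025 vs 5e-12), so consensium gets nothing.
print("dictator:      basics $", basics_spend(MU_SELF), " consensium $", 0.0)

# Scenario 2: 10B people, no coordination. The same logic applies per person.
print("uncoordinated: basics $", basics_spend(MU_SELF), " consensium $", 0.0)

# Scenario 3: coordinated. Each dollar an individual redirects to consensium
# is matched by everyone, so its effective marginal utility per person is
# PEOPLE * MU_CONS = 0.05, which beats self-sims.
effective_mu = PEOPLE * MU_CONS
x3 = basics_spend(max(MU_SELF, effective_mu))
g3 = PEOPLE * (W - x3)         # everything beyond basics goes to consensium
print("coordinated:   basics $", x3, " consensium $", g3)   # roughly $99T in total</code></pre>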
<div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/b53469a5-0186-4ba5-90c2-51e035e3cd0b_658x423.png" alt="Table comparing resource allocation across three scenarios: single decision-maker, many uncoordinated, and many coordinated. Coordination shifts spending from self-sims to consensium, greatly increasing funding for consensus goods."></figure></div><p><em>Footnotes for the table above:</em><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-13" href="#footnote-13" target="_self">13</a><a 
class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-14" href="#footnote-14" target="_self">14</a><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-15" href="#footnote-15" target="_self">15</a></p><p>The key qualitative upshot is this: with good coordination and widely distributed resources, the effective price of the consensus goods drops dramatically. Every $1 you spend on consensium results in $10B going towards it&#8212;a 99.99999999% discount.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-16" href="#footnote-16" target="_self">16</a> On this model, people buy vastly more consensium, both absolutely and as a share of their budget, than in either the dictatorial or uncoordinated scenario.</p><p>This argument suggests we should try to ensure both widely distributed power and good coordination mechanisms for funding public goods.</p><p>How widely does power need to be distributed? This depends on how much you expect people to value idiosyncratic goods relative to consensus goods. In our example above, each person valued self-sims 5 billion times as much as they valued consensium, so we needed at least 5 billion people for consensium to get funded at all.</p><p>We&#8217;re quite uncertain about how much people will value idiosyncratic goods relative to consensus goods. We tentatively think that ratios of a few thousand or a few million seem quite plausible and ratios as high as a few billion are somewhat plausible, so distributing power across thousands, millions, or even billions of people could be valuable.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-17" href="#footnote-17" target="_self">17</a></p><h2><strong>How to coordinate causally</strong></h2><p>There are three approaches to funding public goods that might work for moral public goods after the singularity: governments, social norms, and voluntary contracts.</p><p>Today, public goods are funded primarily by governments. Governments force everyone to contribute to public goods, regardless of whether they actually value the good. Even in a democracy, a minority&#8217;s preferred public goods might go unfunded, while their taxes pay for goods they&#8217;re indifferent to. It would be better if there were a way to allow arbitrary combinations of individuals to coordinate and fund the goods they collectively value, without forcing contributions from those who do not value the good.</p><p>We were initially optimistic that this would be possible through voluntary contracts. After all, it&#8217;s in everyone&#8217;s collective interest to get these goods funded, and we expect that artificial superintelligence (ASI) will be able to <a href="https://www.forethought.org/research/ai-tools-for-existential-security#coordination-enabling-applications">resolve some barriers to coordination</a> that prevent mutually beneficial deals today, like transaction costs or difficulties making credible commitments. But it seems surprisingly difficult to get around the free-rider problem. 
<h2><strong>How to coordinate causally</strong></h2><p>There are three approaches to funding public goods that might work for moral public goods after the singularity: governments, social norms, and voluntary contracts.</p><p>Today, public goods are funded primarily by governments. Governments force everyone to contribute to public goods, regardless of whether they actually value the good. Even in a democracy, a minority&#8217;s preferred public goods might go unfunded, while their taxes pay for goods they&#8217;re indifferent to. It would be better if there were a way to allow arbitrary combinations of individuals to coordinate and fund the goods they collectively value, without forcing contributions from those who do not value the good.</p><p>We were initially optimistic that this would be possible through voluntary contracts. After all, it&#8217;s in everyone&#8217;s collective interest to get these goods funded, and we expect that artificial superintelligence (ASI) will be able to <a href="https://www.forethought.org/research/ai-tools-for-existential-security#coordination-enabling-applications">resolve some barriers to coordination</a> that prevent mutually beneficial deals today, like transaction costs or difficulties making credible commitments. But it seems surprisingly difficult to get around the free-rider problem. Advanced technology might even open up new ways to free-ride, like self-modifying so that you no longer value the moral public good (see <a href="https://www.forethought.org/research/moral-public-goods-are-a-big-deal-for-whether-we-get-a-good-future#appendix-b-causal-coordination-through-voluntary-contracts">Appendix B</a> for more details on funding moral public goods via voluntary contracts).</p><p>Another approach to funding public goods is social norms. Individuals contribute to public goods to avoid social sanctions, win praise from their peers, or just to live up to their own self-conception as cooperative and norm-abiding. We&#8217;re relatively pessimistic about this approach because it seems less scalable and less flexible than either governments or voluntary contracts. Social pressure is probably most effective within social communities, which might cap out in the hundreds or thousands of people. Communities of this size might not include all the people that you&#8217;d want to coordinate with. Also, social norms may not be targeted towards funding moral public goods rather than more arbitrary goals. Lastly, social norms emerge organically, making their terms harder to renegotiate if they prescribe excessively harsh punishments or the wrong level of contributions from individuals.</p><p>Some other historical mechanisms for funding public goods rely on the goods being (partially) excludable.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-18" href="#footnote-18" target="_self">18</a> But moral public goods are entirely non-excludable: once the good exists, each person who wanted it now benefits.</p><h1>Scenario 2: ECL</h1><p>We might also be able to fund moral public goods through acausal coordination. This section presents one proposal for such coordination, drawing on the idea of evidential cooperation in large worlds (ECL). A core premise of ECL is that there are likely many causally disconnected agents&#8212;in civilizations inside our universe but outside our lightcone, civilizations in different Everett branches, or civilizations in other parts of the Tegmark IV multiverse. Each of these agents faces a choice about how to allocate their resources: toward idiosyncratic goods valued only by them, or toward consensus goods that many beings throughout the multiverse would value. We can&#8217;t causally affect their decisions, but our own choice&#8212;whether to fund consensus goods over idiosyncratic ones&#8212;provides evidence about what other agents with sufficiently similar decision procedures will choose.</p><p>To illustrate, let&#8217;s return to our toy example where each agent cares about one idiosyncratic good (self-sims) and one consensus good (consensium):</p><ol><li><p>If an agent spends $1 on <strong>self-sims</strong>, they get evidence that huge numbers of other agents spend on self-sims. But they only value another agent&#8217;s self-sims if that agent is an exact copy of them.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-19" href="#footnote-19" target="_self">19</a></p><p>There are some agents who are exact copies&#8212;it&#8217;s a big multiverse&#8212;but most of the agents correlated with them aren&#8217;t exact copies, so those self-sims are worthless to the original agent. Their dollar is matched only by their copies.</p></li><li><p>If an agent spends $1 on <strong>consensium</strong>, they get evidence that all those correlated agents shift $1 to consensium too. 
Unlike self-sims, they care about consensium created by any of those agents. Their dollar is thus matched across the multiverse by anyone whose decision is sufficiently correlated with theirs.</p></li></ol><p>Whether this trade is worthwhile from an agent&#8217;s perspective depends on the following ratio:</p><div class="latex-rendered" data-attrs="{&quot;persistentExpression&quot;:&quot;\\frac{\\# \\text{ correlated agents that share consensus values}}{\\# \\text{ correlated agents that share idiosyncratic values}}&quot;,&quot;id&quot;:&quot;SKEQGQKQFV&quot;}" data-component-name="LatexBlockToDOM"></div><p>This ratio determines the multiplier they get from coordinating with everyone funding consensium. If the multiplier is large enough to overcome the lower value they place on consensium relative to self-sims, the trade is worthwhile.</p><p>(Actually, you should weight each agent by the degree of correlation, but the above formula ignores that for simplicity.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-20" href="#footnote-20" target="_self">20</a>)</p>
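<p><em>As a quick sketch with made-up numbers: suppose you value a dollar of self-sims a million times more than a dollar of consensium. Funding consensium is then the better evidential bet whenever the correlated agents who share the consensus values outnumber your exact copies by more than a million to one. All of the quantities below are illustrative assumptions.</em></p><pre><code># Is funding consensium worthwhile under ECL? (made-up numbers)
value_self_sims  = 1.0     # your utility per dollar spent on self-sims
value_consensium = 1e-6    # your utility per dollar spent on consensium

copies_correlated    = 1e3   # correlated agents (including you) who are exact copies
consensus_correlated = 1e12  # correlated agents who share the consensus values

# Payoff per pledged dollar, counting every correlated agent whose matched
# spending you actually value:
payoff_self_sims  = copies_correlated * value_self_sims
payoff_consensium = consensus_correlated * value_consensium

print("fund consensium" if payoff_consensium &gt; payoff_self_sims else "fund self-sims")
# Equivalently, fund consensium when the ratio above satisfies:
# consensus_correlated / copies_correlated &gt; value_self_sims / value_consensium</code></pre>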
<p>There are many possible trading partners. There are astronomical numbers of possible human genomes, and even humans with the same genome might diverge due to different life histories. And there are many other possible minds that we could cooperate with&#8212;alien intelligences, AIs, and whatever else might exist.</p><p>If your idiosyncratic values are indexical&#8212;you only care about your personal consumption&#8212;then you&#8217;ll share those values with <em>none</em> of your possible trading partners. But your decision gives you some evidence about what those others decide. The evidence doesn&#8217;t even need to be that strong to be significant. Even a 1% correlation could matter a lot when multiplied across huge numbers of potential trading partners.</p><p>Even if your idiosyncratic values aren&#8217;t indexical&#8212;even if they could in principle be shared by agents outside your lightcone&#8212;the multipliers might still be large. The space of possible idiosyncratic values is vast. Some agents will share your decision procedure but have different idiosyncratic values. (The authors of this piece disagree about how tightly linked these are in practice, and therefore disagree about the magnitude of the multiplier.)</p><p>The ECL case differs from the causal case in several important ways.</p><p>First, ECL removes the incentive to free-ride. In the causal story, each agent wants everyone else to fund consensus goods while they buy idiosyncratic goods. Under ECL, this isn&#8217;t an option. If an agent buys idiosyncratic goods, so does everyone else correlated with them. Thus, the agent is incentivized to pay for consensus goods even without central enforcement.</p><p>And with ECL, funding for consensus goods is much less sensitive to the distribution of power on Earth. In the causal case, we only got large &#8220;discounts&#8221; on consensus goods if power was widely distributed; a single dictator preferred to just fund idiosyncratic goods. But with ECL, even a world dictator gets massive &#8220;discounts&#8221; on consensus goods from coordinating with others in the multiverse.</p><p>Of course, unlike the causal case, whether consensus goods get funded depends on whether agents want to do acausal cooperation at all&#8212;which depends on their decision theories and their beliefs about their degree of correlation with others.</p><h1>Robustness to different structures of preferences</h1><p>So far we have mostly assumed that people value consensus and idiosyncratic goods linearly. We think that this is plausible. After ASI, people will be extremely wealthy. If they have any linear preferences at all, their spending will mostly be determined by those preferences, since they&#8217;ll quickly saturate their sublinear ones. And there are theoretical arguments for having linear preferences.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-21" href="#footnote-21" target="_self">21</a> Meanwhile, people with sublinear preferences may end up controlling few resources&#8212;they&#8217;d be less willing to adopt riskier but higher-reward strategies, like trading away guaranteed resources near Earth for resources further out in space that might already be occupied. As such, we expect them to trade away most of their resources to people with linear preferences.</p><p>With linear utility functions, we found that many coordinated people fund more public goods than either a single decision-maker or many uncoordinated people, which suggested that both coordination and wider resource distribution increased funding for public goods.</p><p>We&#8217;re quite uncertain about what preference structures humans will have after the singularity. But we checked whether these conclusions held for a few other utility functions that seemed plausible to us. Among the preference structures we checked, enabling coordination was always helpful (or at least not harmful) for increasing spending on consensus goods. However, broadening the distribution of power was sometimes actively counterproductive.</p><p>It&#8217;s also very possible that we&#8217;re missing a common form that future preferences will take, 
so we remain pretty unsure about the generality of our conclusions.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!YwNE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02e64f78-8fd0-4378-8f8a-9b80b91c1a39_1795x2388.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><img src="https://substackcdn.com/image/fetch/$s_!YwNE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02e64f78-8fd0-4378-8f8a-9b80b91c1a39_1795x2388.png" width="1456" height="1937" class="sizing-normal" alt="Chart comparing funding for consensus goods under dictator, many uncoordinated, and many coordinated scenarios across different utility assumptions." title="Chart comparing funding for consensus goods under dictator, many uncoordinated, and many coordinated scenarios across different utility assumptions." loading="lazy"></picture></div></a></figure></div><p>With that caveat in mind, here are the other preference structures we checked:</p><ol><li><p><strong>Preferences with diminishing marginal returns in idiosyncratic and consensus goods.</strong> Someone might value many goods&#8212;idiosyncratic and consensus&#8212;each with its own rate of diminishing marginal returns (DMR). They&#8217;ll shift marginal spending from idiosyncratic to consensus goods based on the relative marginal returns. Coordination essentially increases the marginal returns on consensus goods by a constant factor (the number of people coordinating), which can shift more spending into consensus goods. So, as in the linear case, coordination is pretty robustly good: it increases, or at least doesn&#8217;t decrease, spending on public goods.</p><p></p><p>However, in the absence of coordination, widely distributing resources can actually <em>reduce</em> spending on consensus goods. Compare a dictator holding all the resources to <em>N</em> uncoordinated people, each with <em>1/N</em> of the resources. The dictator will be able to spend more in absolute terms on idiosyncratic consumption, so they experience much lower marginal returns on that consumption and are correspondingly more willing to shift funding toward consensus spending. Intuitively, a single person&#8217;s idiosyncratic desires saturate faster than <em>N</em> people&#8217;s combined desires, freeing up more resources for consensus goods.</p><p></p><p>So more public goods get funded in a world with a single decision-maker, or a world with many coordinated decision-makers, than in a world with many uncoordinated decision-makers. How do the coordinated multipolar world and the single decision-maker world compare?</p><p></p><p>It depends on the precise shape of the utility function. For some DMR functions&#8212;like <em>ln&#8289;(x + 1)</em> or <em>&#8730;x</em>&#8212;many coordinated people fund more public goods than single dictators (where <em>x</em> is the amount of resources spent on idiosyncratic goods). Here the boost from the coordination matters more than the hit from having to fund many people&#8217;s idiosyncratic goods. For other DMR utility functions&#8212;e.g., <em>min&#8289;(x, T)</em> for some constant threshold <em>T</em>&#8212;dictators may fund more consensus goods. See <a href="https://www.forethought.org/research/moral-public-goods-are-a-big-deal-for-whether-we-get-a-good-future#appendix-a-utility-function-with-dmr-in-idiosyncratic-goods-andor-consensus-goods">Appendix A</a> for more details; a rough numerical sketch of both cases follows this list.</p><p></p><p>(These same conclusions largely apply if someone values consensus goods linearly and has DMR in idiosyncratic goods (or vice versa).)</p></li><li><p><strong>Preferences to spend fixed fractions of resources on consensus and idiosyncratic goods, regardless of price.</strong> This matches how people today typically allocate resources. Even when people learn that certain charities achieve huge amounts of good per dollar, they very rarely reallocate spending between idiosyncratic and consensus goods. This suggests they are not price-sensitive, but rather spend a fixed fraction of their resources on consensus goods regardless of how effectively those resources can be deployed.</p><p></p><p>(You can also get this spending pattern if you model a human as containing two sub-agents (one that cares only about idiosyncratic goods, one that cares only about consensus goods) and these sub-agents bargain to determine the human&#8217;s actions.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-22" href="#footnote-22" target="_self">22</a>)</p><p></p><p>With this utility function, cheaper public goods make no difference to allocation and coordination doesn&#8217;t help. Resource distribution also doesn&#8217;t matter&#8212;each individual spends the same share of resources on consensus and idiosyncratic goods regardless of how many resources they control.</p></li></ol>
<h1>Convergence and moral public goods funding</h1><p>Coordination to fund moral public goods isn&#8217;t possible if there&#8217;s full convergence or full divergence. If everyone&#8217;s values fully converge, they&#8217;ll spend all their resources pursuing shared goals without any need for trade. If everyone&#8217;s values fully diverge, there are no shared goals to coordinate on in the first place.</p><p>But if a group shares some consensus preferences while retaining different idiosyncratic ones, coordination to shift funding from idiosyncratic goods to consensus goods is possible. 
Gains from trade are largest if there&#8217;s widespread convergence on consensus goals. But even with limited convergence, any subset of people with shared consensus goals can still benefit by trading among themselves.</p><h1>How valuable is it to fund moral public goods?</h1><p>This depends on how valuable the consensus goods are.</p><p>On subjectivism, if there&#8217;s widespread convergence, most people will end up valuing those consensus goods&#8212;so unless you expect your values to substantially diverge from most people&#8217;s on reflection, this should be great by your lights. Things are less clear if you expect low convergence, or if you expect to be in the minority. You&#8217;ll still benefit from coordinating with others who share some consensus goals with you, but other coalitions might fund goods you dislike.</p><p>For example, people might coordinate on excessively punishing wrongdoers (negative value) or leaving large swathes of space as nature preserves (zero value), when we would have preferred that they hadn&#8217;t coordinated at all and instead funded personal consumption (weak positive value). But we don&#8217;t expect that this effect dominates because in general most people&#8217;s values aren&#8217;t directly opposed.</p><p>Another issue is threats. Just as coordination lets a group do more with a fixed budget by funding shared goals rather than idiosyncratic ones, it might also make it easier to threaten that group with something they all dislike. We don&#8217;t think this will leave the threatened parties worse off on net by their own lights, but it might be bad for more downside-focused agents. They bear the risk of threats against their values without as much of the corresponding upside.</p><p>Thus far we&#8217;ve argued that coordination will improve the value of the future by most people&#8217;s lights. But if moral realism is correct, then we should ask whether coordination will lead to the objectively best use of resources. There&#8217;s some reason for optimism here: under moral realism, lots of people might place at least some value on the impartially best use of resources, making that a very broadly appealing good.</p><p>But it&#8217;s unclear that people will coordinate to fund the <em>most</em> broadly appealing goods. People have a range of preferences that vary in how particular or universal they are. Moral public goods mechanisms can shift funding from satisfying more idiosyncratic preferences to more widespread ones&#8212;but they don&#8217;t necessarily fund the <em>most</em> universal preferences. For some people, the largest gains from trade might come from coordinating with a smaller group with especially similar preferences. If a nationalist values national benefit 100x more than consensium, then they&#8217;d rather coordinate with 1 billion fellow nationalists than 10 billion people globally.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-23" href="#footnote-23" target="_self">23</a></p><p>And even if the most broadly appealing goods are funded, they might not be the objectively best use of resources. For example, humans might especially value the wellbeing of human-like minds. If coordination is only among humans, then public goods funding might flow toward creating societies of happy humans, even if non-human minds could experience more joy, freedom, or fulfillment per unit resource.</p><p>This last concern seems more serious for causal than for acausal coordination. 
Causal coordination will be limited to humans and AIs originating from Earth. Acausal coordination could involve a much wider variety of minds&#8212;aliens with very different biologies and civilizational histories. If we&#8217;re correlated with them, then we&#8217;re more likely to end up funding goods that are broadly appealing to all these types of minds, which are more likely to be the morally correct use of resources. But it&#8217;s possible that civilizations capable of ECL will tend to share similar values&#8212;maybe preferences for stuff that&#8217;s instrumentally useful, like survival, growth, and knowledge&#8212;even if those aren&#8217;t objectively valuable.</p><h1>Conclusion</h1><p>If large numbers of agents can coordinate to fund goods they all value, this can produce substantial gains from trade. These gains are potentially large enough that even quite selfish actors would devote significant resources to consensus goods. We&#8217;re excited about this type of trade because it could enable a near-best future by channeling substantial resources toward widely valued goods, even without any single agent heavily prioritizing those goods. This conclusion is most clear-cut when agents have linear utility functions, but it probably extends to other plausible utility functions, including some with diminishing returns.</p><p>These benefits depend on there being a sufficient number of agents who share some consensus goals and are able to coordinate. In the causal case, we&#8217;re most optimistic about coordination to fund consensus goods if power is widely distributed and there are governments that can collect taxes to fund public goods. We&#8217;re excited about further research on voluntary coordination methods, but such methods will have to deal with incentives to free-ride and to strategically modify one&#8217;s own preferences. In the acausal case, ECL enables large trading coalitions even if there&#8217;s extreme power concentration on Earth, and it eliminates free-rider problems.</p><p><em>This article was created by <a href="https://www.forethought.org/about">Forethought</a>. 
See the original version including appendices <a href="https://www.forethought.org/research/moral-public-goods-are-a-big-deal-for-whether-we-get-a-good-future">on our website</a>.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>We call the good that best satisfies the people&#8217;s shared values &#8220;consensium,&#8221; after <a href="https://www.lesswrong.com/w/hedonium">hedonium</a>, the good that best satisfies hedonic utilitarianism.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>See <a href="https://newsletter.forethought.org/i/188247678/how-does-this-relate-to-moral-trade">below</a> for a comparison with another type of moral trade where people fund &#8220;hybrid&#8221; goods that simultaneously satisfy multiple value systems.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>From most idiosyncratic to most broadly appealing, this spectrum could include: copies of yourself; societies of humans who share your nationality, culture, or ideology; societies of human-like minds; experiences that maximize value according to a widely shared (but not universal) ethical system; and activities that maximize value according to the objectively true ethical system (if there is one).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Of course, this argument in favour of power distribution should be balanced with the many other considerations about the optimal distribution of power.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>This minimal government structure could also help with other public goods for spacefaring societies, like preventing vacuum decay.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>The concept originates from this <a href="https://longtermrisk.org/multiverse-wide-cooperation-via-correlated-decision-making/">paper</a>, where it&#8217;s called &#8220;multiverse-wide superrationality.&#8221; This <a href="https://www.lesswrong.com/posts/eEj9A9yMDgJyk98gm/cooperating-with-aliens-and-distant-agis-an-ecl-explainer">blog post</a> offers an accessible explanation.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>The principle that you should act as you'd want all agents with sufficiently similar decision procedures to act, since your choices are evidence about theirs.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" 
contenteditable="false" target="_self">8</a><div class="footnote-content"><p>For example, people today rarely massively increase the percentage of their income donated to charity after learning that charities are much more effective than they previously believed.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>Chapter 12 of <a href="http://pombo.free.fr/friedman2002.pdf">&#8220;Capitalism and Freedom&#8221;</a> (1962): &#8220;It can be argued that private charity is insufficient because the benefits from it accrue to people other than those who make the gifts- again, a neighborhood effect. I am distressed by the sight of poverty; I am benefited by its alleviation; but I am benefited equally whether I or someone else pays for its alleviation; the benefits of other people&#8217;s charity therefore partly accrue to me. To put it differently, we might all of us be willing to contribute to the relief of poverty, provided everyone else did. We might not be willing to contribute the same amount without such assurance. In small communities, public pressure can suffice to realize the proviso even with private charity. In the large impersonal communities that are increasingly coming to dominate our society, it is much more difficult for it to do so.&#8221;</p><p>It&#8217;s ironic that the target of Christiano&#8217;s argument, who overlooks this dynamic, is David Friedman, Milton Friedman&#8217;s son.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>E.g. Hochman &amp; Rodgers (1969), <a href="https://econweb.ucsd.edu/~jandreon/PhilanthropyAndFundraising/Volume%201/2%20Hochman%20Rodgers%201969.pdf">&#8220;Pareto Optimal Redistribution&#8221;</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>You might also experience a warm glow from having helped prevent extinction. We classify this as a private good, as it&#8217;s excludable&#8212;only the people who contributed the funding get to enjoy the satisfaction of having helped out.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>That is, funded below the socially optimal amount, the level where total benefits equal the total costs.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-13" href="#footnote-anchor-13" class="footnote-number" contenteditable="false" target="_self">13</a><div class="footnote-content"><p>The marginal returns on self-sims (0.025) are always higher than those on consensium (<em>5 &#215; 10&#8315;&#185;&#178;</em>), so no money gets spent on consensium. The marginal returns on self-sims are higher than the marginal returns on basics <em>(1 / (2&#8730;x&#7522;))</em> when <em>x&#7522; &gt; 400</em>. 
So the dictator spends $400 on basics and then the rest is spent on self-sims.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-14" href="#footnote-anchor-14" class="footnote-number" contenteditable="false" target="_self">14</a><div class="footnote-content"><p>Each decision-maker has a budget of $100T/10B = $10,000. By the same reasoning as the previous footnote, each person spends $400 on basics and the rest of their budget ($9,600) on self-sims. So across 10B people, $4T is spent on basics and $96T is spent on self-sims.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-15" href="#footnote-anchor-15" class="footnote-number" contenteditable="false" target="_self">15</a><div class="footnote-content"><p>Once everyone is coordinating, a person who spends an extra dollar effectively causes 10B dollars to be spent on consensium. The value of spending a dollar on consensium is thus <em>10B &#215; 5 &#215; 10&#8315;&#185;&#178; = 0.05</em>. Since this exceeds the marginal return on self-sims (0.025), no money gets spent on self-sims. And since 0.05 exceeds the marginal return on basics <em>(1 / (2&#8730;x&#7522;))</em> when <em>x&#7522; &gt; 100</em>, each person spends $100 on basics and the rest on consensium.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-16" href="#footnote-anchor-16" class="footnote-number" contenteditable="false" target="_self">16</a><div class="footnote-content"><p>Thanks to Toby Ord for this framing.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-17" href="#footnote-anchor-17" class="footnote-number" contenteditable="false" target="_self">17</a><div class="footnote-content"><p>There might be benefits to increasing the number of powerholders even beyond what&#8217;s needed to make consensium worth funding. More people means larger gains from trade, which could make coordination more attractive. For example, in <a href="https://www.forethought.org/research/moral-public-goods-are-a-big-deal-for-whether-we-get-a-good-future#appendix-c-modeling-assurance-contracts">Appendix C</a>, we investigate an assurance contract for funding public goods and find that&#8212;holding fixed the ratio of value assigned to idiosyncratic goods and consensium&#8212;public goods are more likely to be funded with larger numbers of people, due to the greater gains from trade. Of course, larger groups also have a harder time coordinating. In our analysis of the assurance contract, we found that the larger gains from trade outweighed the difficulties in coordinating, but this might not hold for other mechanisms.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-18" href="#footnote-anchor-18" class="footnote-number" contenteditable="false" target="_self">18</a><div class="footnote-content"><p>For example, <a href="https://en.wikipedia.org/wiki/The_Lighthouse_in_Economics">lighthouses may have been historically funded</a> by harbor fees. 
This made them partially excludable, since only ships that came into the harbor and paid the fee would get the full benefit of a nearby lighthouse.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-19" href="#footnote-anchor-19" class="footnote-number" contenteditable="false" target="_self">19</a><div class="footnote-content"><p>Or they might not even value that&#8212;maybe they only value self-sims causally downstream of themselves.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-20" href="#footnote-anchor-20" class="footnote-number" contenteditable="false" target="_self">20</a><div class="footnote-content"><p>The degree of correlation between you and another agent <em>A</em> is the extent to which you update on that <em>A</em>&#8217;s decision after observing your own. In this case, it is</p><div class="latex-rendered" data-attrs="{&quot;persistentExpression&quot;:&quot;\\text{Pr}(A \\text{ funds consensium } | \\text{ you fund consensium}) - \\text{Pr}(A \\text{ funds consensium } | \\text{ you do not fund consensium}).&quot;,&quot;id&quot;:&quot;WRLVVTTVZJ&quot;}" data-component-name="LatexBlockToDOM"></div></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-21" href="#footnote-anchor-21" class="footnote-number" contenteditable="false" target="_self">21</a><div class="footnote-content"><p>First, among views of population ethics that satisfy some standard technical axioms, only those that are linear with respect to population size (at a given level of wellbeing) are separable in space and time&#8212;that is, the value of doing good today doesn&#8217;t depend on the amount of good in distant galaxies or in the distant past. See Blackorby, Bossert, and Donaldson&#8217;s <a href="https://www.cambridge.org/core/books/population-issues-in-social-choice-theory-welfare-economics-and-ethics/F840C3D954226CE6CEDE28EBE211162E">Population Issues in Social Choice Theory</a>.</p><p>Second, even if you think that maximum attainable value is a concave function of resources devoted to promoting the good, if the total amount of goodness in the universe is much larger than the amount you can affect, then you will value the differences you can make approximately linearly (because concave functions are locally approximately linear). And, plausibly, the total amount of goodness in the universe <em>is</em> much larger than the amount you can affect. See <a href="https://www.forethought.org/research/no-easy-eutopia">No Easy Eutopia</a> for more discussion.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-22" href="#footnote-anchor-22" class="footnote-number" contenteditable="false" target="_self">22</a><div class="footnote-content"><p>Let&#8217;s model someone as containing two sub-agents with equal weight, one that cares about idiosyncratic goods with a utility function </p><div class="latex-rendered" data-attrs="{&quot;persistentExpression&quot;:&quot;u_i(x) = x_i^k&quot;,&quot;id&quot;:&quot;KBONLTOYNV&quot;}" data-component-name="LatexBlockToDOM"></div><p>and one that cares about consensus goods with a utility function </p><div class="latex-rendered" data-attrs="{&quot;persistentExpression&quot;:&quot;u_c(g) = g^k_c&quot;,&quot;id&quot;:&quot;HCZTPYAHZI&quot;}" data-component-name="LatexBlockToDOM"></div><p>(where <em>x</em> and <em>g</em> are respectively the amounts spent on idiosyncratic and consensus goods). 
Then the result of Nash bargaining will be to maximize: </p><div class="latex-rendered" data-attrs="{&quot;persistentExpression&quot;:&quot;u_i^{0.5} \\cdot u_c^{0.5} = x^{0.5 k_i} \\cdot g^{0.5 k_c}.&quot;,&quot;id&quot;:&quot;OZFPLRSUKR&quot;}" data-component-name="LatexBlockToDOM"></div><p>This is a <a href="https://en.wikipedia.org/wiki/Cobb%E2%80%93Douglas_production_function">Cobb-Douglas</a> utility function and a person with that utility function will split their resources between idiosyncratic goods and consensus goods at a ratio of </p><div class="latex-rendered" data-attrs="{&quot;persistentExpression&quot;:&quot;k_i : k_c,&quot;,&quot;id&quot;:&quot;WCGUDFPGXV&quot;}" data-component-name="LatexBlockToDOM"></div><p>regardless of their total level of resources.</p><p>(This relies on the idiosyncratic goods and consensus goods having the same functional form. If instead that person&#8217;s consensus-good-valuing sub-agent valued resources linearly and their idiosyncratic sub-agent valued resources logarithmically, the result of Nash bargaining would be to maximize </p><div class="latex-rendered" data-attrs="{&quot;persistentExpression&quot;:&quot;g^{0.5} \\cdot \\ln(x)^{0.5}.&quot;,&quot;id&quot;:&quot;FCGQSVFNAW&quot;}" data-component-name="LatexBlockToDOM"></div><p>For this utility function, as resources grow, more resources are spent on the consensus goods.</p><p>Note that the utility function produced by the Nash bargain is based on resource expenditure relative to the disagreement point (where the individual spends no resources on consensus or idiosyncratic goods). So in the utility functions above, <em>g</em> is not the total societal spending on the consensus good but rather the individual&#8217;s spending on the consensus good. That&#8217;s not really a public good anymore, but rather a particular type of idiosyncratic good.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-23" href="#footnote-anchor-23" class="footnote-number" contenteditable="false" target="_self">23</a><div class="footnote-content"><p>Consider a nationalist choosing between: (a) self-sims, valued at 1 util/resource unit; (b) national benefit, valued at 0.01 util/unit; and (c) consensium, valued at 0.0001 util/unit. With 10 billion people total, 10% of whom are nationalists for the same nation, the nationalist funds (b): coordinating with 1 billion co-nationalists yields an effective multiplier of 1B x 0.01 = 10M, while coordinating with all 10 billion on consensium yields only 10B x 0.0001 = 1M. 
More generally, an agent prefers coordinating with a smaller group of size <em>S</em> on a good valued at <em>v_S</em> over a larger group of size <em>L</em> on a good valued at <em>v_L</em> iff</p><div class="latex-rendered" data-attrs="{&quot;persistentExpression&quot;:&quot;v_S / v_L > L/S.&quot;,&quot;id&quot;:&quot;YKRKUKLAIA&quot;}" data-component-name="LatexBlockToDOM"></div></div></div>]]></content:encoded></item><item><title><![CDATA[Can Liberal Democracy Survive AGI?]]></title><description><![CDATA[A podcast conversation with Sam Hammond]]></description><link>https://newsletter.forethought.org/p/can-liberal-democracy-survive-agi</link><guid isPermaLink="false">https://newsletter.forethought.org/p/can-liberal-democracy-survive-agi</guid><dc:creator><![CDATA[Fin Moorhouse]]></dc:creator><pubDate>Wed, 11 Feb 2026 15:53:02 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/1db96051-0788-4449-9ea2-ec183fc9e03c_2912x1632.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div id="youtube2-grGtNLHXeHc" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;grGtNLHXeHc&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/grGtNLHXeHc?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Sam Hammond is Chief Economist at the Foundation for American Innovation. He discusses:</p><ul><li><p>How collapsing transaction costs could push towards privatised alternatives to government functions</p></li><li><p>&#8220;Distributed denial of service&#8221; attacks against courts and regulators</p></li><li><p>What happens when existing laws can be more perfectly enforced with AI</p></li><li><p>Estonia&#8217;s government-as-API, as a model for AGI-era governance</p></li><li><p>Whether 20th-century social democracy depends on 20th-century technology</p></li><li><p>The UAE as a preview of post-scarcity governance</p></li><li><p>Mormons, religion, and social scaffolding in secular societies</p></li></ul><p><a href="https://docs.google.com/document/d/1r7Wq0MEakhiFsRwKiazw0kk-u8j8sgkDNkRq42gbEKg/edit?usp=sharing">Here&#8217;s a link</a> to the full transcript.</p><div><hr></div><p><strong>ForeCast</strong> is Forethought&#8217;s interview podcast. You can see <a href="https://www.forethought.org/subscribe#podcast">all our episodes here</a>.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://pnc.st/s/forecast&quot;,&quot;text&quot;:&quot;Subscribe to ForeCast&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://pnc.st/s/forecast"><span>Subscribe to ForeCast</span></a></p>]]></content:encoded></item><item><title><![CDATA[AI tools for strategic awareness]]></title><description><![CDATA[This article was created by Forethought. 
Read the full article on our website.]]></description><link>https://newsletter.forethought.org/p/ai-tools-for-strategic-awareness</link><guid isPermaLink="false">https://newsletter.forethought.org/p/ai-tools-for-strategic-awareness</guid><dc:creator><![CDATA[Owen Cotton-Barratt]]></dc:creator><pubDate>Wed, 11 Feb 2026 12:27:31 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c7ef22ca-0e58-4031-9ad5-d7e3df7d5b48_1600x772.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This article was created by <a href="https://www.forethought.org/about">Forethought</a>. Read the full article on <a href="https://www.forethought.org/research/design-sketches-tools-for-strategic-awareness">our website</a>.</em></p><p>We&#8217;ve recently published a set of design sketches for tools for strategic awareness.</p><p>We think that near-term AI could help a wide variety of actors to have a more grounded and accurate perspective on their situation, and that this could be quite important:</p><ul><li><p>Tools for strategic awareness could make individuals more epistemically empowered and better able to make decisions in their own best interests.</p></li><li><p>Better strategic awareness could help humanity to handle some of the <a href="https://www.forethought.org/research/preparing-for-the-intelligence-explosion">big challenges</a> that are heading towards us as we transition to more advanced AI systems.</p></li></ul><p>We&#8217;re excited for people to build tools that help this happen, and hope that our design sketches will make this area more concrete, and inspire people to get started.</p><p>The (overly-)specific technologies we sketch out are:</p><ul><li><p><strong><a href="https://www.forethought.org/research/design-sketches-tools-for-strategic-awareness#ambient-superforecasting">Ambient superforecasting</a></strong> &#8212; When people want to know something about the future, they can run a query like a Google search, and get back a superforecaster-level assessment of likelihoods.</p></li><li><p><strong><a href="https://www.forethought.org/research/design-sketches-tools-for-strategic-awareness#scenario-planning-on-tap">Scenario planning on tap</a></strong> &#8212; People can easily explore the likely implications of possible courses of actions, summoning up coherent grounded narratives about possible futures, and diving seamlessly into analysis of the implications of different hypotheticals.</p></li><li><p><strong><a href="https://www.forethought.org/research/design-sketches-tools-for-strategic-awareness#automated-osint">Automated OSINT</a> </strong>&#8212; Everyone has instant access to professional-grade political analysis; when someone does something self-serving, this will be transparent.</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!L3cz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46c9a914-30ec-482e-857d-a50c915f91e1_2172x1476.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!L3cz!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46c9a914-30ec-482e-857d-a50c915f91e1_2172x1476.png 424w, 
https://substackcdn.com/image/fetch/$s_!L3cz!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46c9a914-30ec-482e-857d-a50c915f91e1_2172x1476.png 848w, https://substackcdn.com/image/fetch/$s_!L3cz!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46c9a914-30ec-482e-857d-a50c915f91e1_2172x1476.png 1272w, https://substackcdn.com/image/fetch/$s_!L3cz!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46c9a914-30ec-482e-857d-a50c915f91e1_2172x1476.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!L3cz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46c9a914-30ec-482e-857d-a50c915f91e1_2172x1476.png" width="1456" height="989" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/46c9a914-30ec-482e-857d-a50c915f91e1_2172x1476.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:989,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:574840,&quot;alt&quot;:&quot;Hand-drawn concept board titled &#8220;Tools for strategic awareness&#8221; showing mockups for ambient superforecasting, scenario planning on tap, and automated OSINT, illustrating AI tools for forecasting, scenario analysis, and better strategic decisions.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://newsletter.forethought.org/i/186916420?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46c9a914-30ec-482e-857d-a50c915f91e1_2172x1476.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Hand-drawn concept board titled &#8220;Tools for strategic awareness&#8221; showing mockups for ambient superforecasting, scenario planning on tap, and automated OSINT, illustrating AI tools for forecasting, scenario analysis, and better strategic decisions." title="Hand-drawn concept board titled &#8220;Tools for strategic awareness&#8221; showing mockups for ambient superforecasting, scenario planning on tap, and automated OSINT, illustrating AI tools for forecasting, scenario analysis, and better strategic decisions." 
srcset="https://substackcdn.com/image/fetch/$s_!L3cz!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46c9a914-30ec-482e-857d-a50c915f91e1_2172x1476.png 424w, https://substackcdn.com/image/fetch/$s_!L3cz!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46c9a914-30ec-482e-857d-a50c915f91e1_2172x1476.png 848w, https://substackcdn.com/image/fetch/$s_!L3cz!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46c9a914-30ec-482e-857d-a50c915f91e1_2172x1476.png 1272w, https://substackcdn.com/image/fetch/$s_!L3cz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46c9a914-30ec-482e-857d-a50c915f91e1_2172x1476.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>If you have ideas for how to implement these technologies, issues we may not have spotted, or visions for other tools in this space, we&#8217;d love to hear them.</p><p><em>This article was created by <a href="https://www.forethought.org/about">Forethought</a>. Read the full article on <a href="https://www.forethought.org/research/design-sketches-tools-for-strategic-awareness">our website</a>.</em></p>]]></content:encoded></item><item><title><![CDATA[Research note on the UN Charter]]></title><description><![CDATA[This article was created by Forethought. See the original on our website.]]></description><link>https://newsletter.forethought.org/p/research-note-on-the-un-charter</link><guid isPermaLink="false">https://newsletter.forethought.org/p/research-note-on-the-un-charter</guid><pubDate>Tue, 10 Feb 2026 08:13:23 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/9104bde6-f4bd-44ad-ae61-d7a786425526_662x441.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This article was created by <a href="https://www.forethought.org/about">Forethought</a>. 
See the original <a href="https://www.forethought.org/research/the-un-charter-a-case-study-in-international-governance">on our website</a>.</em></p><p><em>This is a rough research note based on 20 hours of work. Conclusions are tentative, and it hasn&#8217;t been reviewed by domain experts. Matthew van der Merwe did the original research in 2023; Rose Hadshar did subsequent editing.</em></p><h1>Introduction</h1><p>Many imagine that the transition to advanced AI systems will at some point lead to some kind of international agreement to govern how the technology is used. When contemplating this possibility, a natural question to ask is, how have important international agreements come about in the past?</p><p>One of the most salient modern examples is the founding of the United Nations. This research note gives a brief <a href="https://www.forethought.org/research/the-un-charter-a-case-study-in-international-governance#what-is-the-un-charter">overview</a> of the creation of the UN charter, before drawing some tentative observations with a bearing on the question of international AGI governance.</p><p>The main (tentative) takeaways are:</p><ul><li><p>While the <a href="https://www.forethought.org/research/the-un-charter-a-case-study-in-international-governance#the-p5--the-veto">veto</a> for permanent members of the Security Council was likely close to inevitable, the inclusion of France as a permanent member was highly contingent. The broad interpretation of the veto may also have been somewhat contingent, though Cold War tensions probably made it fairly likely.</p></li><li><p><a href="https://www.forethought.org/research/the-un-charter-a-case-study-in-international-governance#intellectuals-and-civil-society-groups">Intellectuals and civil society groups</a> played a significant role in the drafting of the Charter.</p></li><li><p><a href="https://www.forethought.org/research/the-un-charter-a-case-study-in-international-governance#us-domestic-politics-and-public-opinion">US domestic politics and public opinion</a> exerted strong influence on the Charter.</p></li><li><p>Most of the <a href="https://www.forethought.org/research/the-un-charter-a-case-study-in-international-governance#preparatory-work">work</a> happened before the San Francisco conference, and most of the work was done by the US and the UK.</p></li><li><p>Unlike the League of Nations, which was a very idealistic project, the UN seems to have been inspired by a mixture of <a href="https://www.forethought.org/research/the-un-charter-a-case-study-in-international-governance#idealism-and-pragmatism">idealism and pragmatism</a>.</p></li></ul><p>Some caveats:</p><ul><li><p>This note is based on 20 hours of preliminary research, and hasn&#8217;t been reviewed by domain experts. The main sources used were Schlesinger (2003), <em>Act of Creation: The Founding of the United Nations</em> and Ehrhardt (2020), <em><a href="https://kclpure.kcl.ac.uk/ws/portalfiles/portal/139540410/2020_Ehrhardt_Andrew_1456418_ethesis.pdf">The British Foreign Office and the Creation of the United Nations Organization, 1941- 1945</a></em>. Where not otherwise stated, information comes from those books.</p></li><li><p>It focuses on the lead up to the creation of the UN charter, rather than the history of how the UN unfolded over the subsequent 80 years.</p></li></ul><h1>What is the UN Charter?</h1><p>The United Nations Charter was signed on 26 June 1945, at the close of the San Francisco Conference, which began two months earlier on 25 April 1945. 
It establishes the United Nations and sets out how it will be governed. The Charter has been largely unaltered since it was signed.</p><p>The origins of the charter stretch further back:</p><ul><li><p>The League of Nations (established in 1920) was the main precedent for the UN (though historians often look even further back, to agreements like the <a href="https://en.wikipedia.org/wiki/Congress_of_Vienna">1814&#8211;15 Congress of Vienna</a> and the <a href="https://en.wikipedia.org/wiki/Hague_Conventions_of_1899_and_1907#:~:text=The%20Hague%20Conventions%20of%201899%20and%201907%20were%20the%20first,during%20the%20American%20Civil%20War.">1899 and 1907 Hague Conventions</a> drawing up the laws of war).</p></li><li><p>On 1 January 1942, the &#8216;Big Four&#8217; nations (US, USSR, UK, China) signed the <a href="https://en.wikipedia.org/wiki/Declaration_by_United_Nations">Declaration by United Nations</a>. This formalised the coalition of the Allies against the Axis powers, and was signed by 22 nations the following day and by an additional 21 by 1945.</p></li><li><p>On 30 October 1943, the Big Four signed the <a href="https://en.wikipedia.org/wiki/Moscow_Declarations">Declaration of the Four Nations / Moscow Declaration</a>. This declaration stated for the first time that those governments &#8220;recognize the necessity of establishing at the earliest practicable date a general international organization, based on the principle of the sovereign equality of all peace-loving states, and open to membership by all such states, large and small, for the maintenance of international peace and security.&#8221;</p></li><li><p>Between August and October 1944, the Big Four agreed to the <a href="https://www.ibiblio.org/pha/policy/1944/441007a.html">Dumbarton Oaks proposal</a>. This was effectively the first draft of the UN Charter, including things like the basic structure of the UN, the composition and powers of the Security Council, and voting procedures.</p></li><li><p>By the eve of the San Francisco conference in 1945, the broad parameters of the UN Charter had already been agreed.</p></li></ul><p>The <a href="https://www.un.org/en/about-us/un-charter/full-text">full text</a> of the UN charter is only 9,000 words long. It covers:</p><ul><li><p><strong>Purposes and Principles</strong> (chapter 1): The Charter sets forth the UN&#8217;s objectives to preserve international peace and security, encourage friendly relations and cooperation among countries, and coordinate actions in achieving common goals, emphasizing peaceful dispute resolution, the sovereignty of member states, and the prohibition of force in international relations, barring collective defense.</p></li><li><p><strong>Membership</strong> (chapter 2): Countries were eligible for initial membership if they had previously signed the Declaration by United Nations (i.e. 
joined the Allies against the Nazis); or if they attended the San Francisco conference.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> The Charter also sets out procedures for admitting, suspending, and expelling members.</p></li><li><p><strong>Organs</strong> (chapters 3-5): These chapters detail the UN&#8217;s six principal organs: the General Assembly, the Security Council, the Economic and Social Council, the Trusteeship Council, the International Court of Justice, and the Secretariat.</p><ul><li><p>The General Assembly consists of all member nations.</p></li><li><p>The Security Council consists of 5 permanent members (US, USSR, UK, China, France) and 6 (later increased to 10) rotating two-year members.</p></li></ul></li><li><p><strong>Pacification Functions and Powers</strong> (chapters 6-7): Chapter 6 encourages the peaceful resolution of disputes, while Chapter 7 grants the Security Council significant powers to act against threats to peace, breaches of peace, or acts of aggression, including economic sanctions and military action.</p></li><li><p><strong>Other matters</strong> (chapters 8-18): regional arrangements (chapter 8); the Economic and Social Council (chapters 9&#8211;10); non-self-governing countries and trusteeship (chapters 11&#8211;13); the International Court of Justice (chapter 14); the Secretariat (chapter 15); miscellaneous provisions (chapter 16); transitional arrangements (chapter 17); and the amendment procedure (chapter 18).</p></li></ul><p>Some of the most significant elements of the Charter are about the Security Council. In particular:</p><ul><li><p><strong>The balance of power between the Security Council and the General Assembly</strong>:</p><ul><li><p>The Security Council has a monopoly over security matters; the Assembly has no equivalent monopoly over economic &amp; social matters.</p></li><li><p>Assembly resolutions, while carrying important symbolic weight, are not binding; Security Council resolutions are binding upon all members.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p></li><li><p>The Assembly meets annually, whereas the Security Council can meet at any time.</p></li></ul></li><li><p><strong>The veto</strong>: Security Council decisions on &#8216;procedural matters&#8217; can be made by a ~60% majority (7 of 11; later 9 of 15). Decisions on all other matters require a ~60% majority <em>and</em> affirmative votes from all five permanent members.</p></li><li><p><strong>Military enforcement</strong>: the Security Council is empowered to settle international disputes and enforce peace, via non-military measures and, if these are inadequate, military measures against aggressor states. All UN members are required to make forces available when asked to do so.</p></li></ul><h2><strong>Brief timeline</strong></h2><p>This timeline is based on Kennedy (2007), chapter 1, and Schlesinger (2003), chapter 1.</p><p><strong>Prehistory</strong></p><ul><li><p>1795: Immanuel Kant wrote <em>Toward Perpetual Peace</em>, laying out some foundational thinking on global federations.</p></li><li><p>1815&#8211;1822: Conferences between European powers after the Napoleonic Wars, beginning with the <a href="https://en.wikipedia.org/wiki/Congress_of_Vienna">Congress of Vienna</a>.
Through the rest of the century, the leaders of Europe&#8217;s major powers, referred to as the Concert of Europe, gathered thirty times to discuss urgent political issues.</p></li><li><p>1864: Creation of the International Committee of the Red Cross. Arguably the first treaty-bound international organization.</p></li><li><p>1899: First <a href="https://en.wikipedia.org/wiki/Hague_Conventions_of_1899_and_1907#:~:text=The%20Hague%20Conventions%20of%201899%20and%201907%20were%20the%20first,during%20the%20American%20Civil%20War.">Hague conference</a>, which codified the treatment of civilians and neutrals and provided a mechanism for the peaceful settlement of disputes. 26 countries in attendance, including all major powers.</p></li><li><p>1907: Second <a href="https://en.wikipedia.org/wiki/Hague_Conventions_of_1899_and_1907#:~:text=The%20Hague%20Conventions%20of%201899%20and%201907%20were%20the%20first,during%20the%20American%20Civil%20War.">Hague conference</a>, with 44 nations (including most of Latin America).</p></li></ul><p><strong>League of Nations era</strong></p><ul><li><p>1916: Wilson first articulates his vision for a league of nations, and commissions a secret multidisciplinary group (<a href="https://en.wikipedia.org/wiki/The_Inquiry">The Inquiry</a>) of geographers, historians, political scientists, and other experts to develop plans. The Inquiry&#8217;s research director was <a href="https://en.wikipedia.org/wiki/Walter_Lippmann">Walter Lippmann</a>, then 28.</p></li><li><p>1918: Wilson enshrines his vision for the League of Nations in the Fourteen Points peace proposal to the Germans. The final point calls for forming &#8220;a general association of nations&#8221; to afford &#8220;mutual guarantees of political independence and territorial integrity&#8221;&#8212;the future League of Nations.</p></li><li><p>1919: Wilson encounters major domestic opposition to the League of Nations in the US, from isolationist Republicans. The Senate refuses to ratify US accession to the League. This and health issues undermine Wilson&#8217;s efforts to lead the project.</p></li><li><p>1920: The League of Nations officially comes into being.</p></li><li><p>1930s: The League fails to handle several major crises: the Japanese invasion of Manchuria (1931); Germany exiting the League (1933) and occupying the Rhineland, Czechoslovakia, and Austria (1936&#8211;38); the Italian invasion of Ethiopia (1935); and the USSR&#8217;s invasion of Finland and its subsequent expulsion from the League (1940).</p></li></ul><p><strong>WW2</strong></p><ul><li><p>14 August 1941: The <a href="https://en.wikipedia.org/wiki/Atlantic_Charter">Atlantic Charter</a> is signed between the US and UK. It sets out principles for post-war order (<a href="https://www.fdrlibrary.org/documents/356632/390886/atlantic_charter.pdf/30b3c906-e448-4192-8657-7bbb9e0fdd38">full text</a>). It doesn&#8217;t include explicit mention of an international organisation, but does acknowledge that it is &#8220;pending the establishment of a wider and permanent system of general security&#8221;.</p></li><li><p>1 January 1942: The <a href="https://en.wikipedia.org/wiki/Declaration_by_United_Nations">Declaration by United Nations</a> is signed by the Big Four (US, UK, USSR, China), followed by 22 allied nations the following day.
There is no mention of an international organisation.</p></li><li><p>30 October 1943: The <a href="https://en.wikipedia.org/wiki/Declaration_of_the_Four_Nations">Declaration of the Four Nations / Moscow Declaration</a> makes the first mention of an international body: the governments of the Big Four &#8220;recognize the necessity of establishing at the earliest practicable date a general international organization, based on the principle of the sovereign equality of all peace-loving states, and open to membership by all such states, large and small, for the maintenance of international peace and security.&#8221;</p></li><li><p>1&#8211;22 July 1944: The <a href="https://en.wikipedia.org/wiki/Bretton_Woods_Conference">Bretton Woods Conference</a> establishes the post-war global financial order and what would become the World Bank and IMF.</p></li><li><p>21 August to 7 October 1944: The <a href="https://en.wikipedia.org/wiki/Dumbarton_Oaks_Conference">Dumbarton Oaks Conference</a> between the Big Four leads to a more detailed proposal for the establishment of a &#8220;general international organization&#8221; (<a href="https://www.ibiblio.org/pha/policy/1944/441007a.html">proposal text</a>).</p></li><li><p>4&#8211;11 February 1945: The Yalta conference between the UK, the US, and the USSR. Stalin commits to the Soviet Union joining the United Nations and demands a veto for the great powers. It is agreed that membership will be open to nations that have joined the Allies by 1 March 1945.</p></li><li><p>12 April 1945: Franklin D. Roosevelt dies in office and is succeeded by Truman. Truman learns about the atomic bomb.</p></li><li><p>25 April 1945: The <a href="https://en.wikipedia.org/wiki/United_Nations_Conference_on_International_Organization">United Nations Conference on International Organization</a> begins in San Francisco.</p></li><li><p>8 May 1945: Germany surrenders; VE day.</p></li><li><p>26 June 1945: After two months of work, 50 nations sign the Charter of the United Nations.
The Charter stipulated that it would come into effect only once ratified by the governments of China, France, the USSR, Great Britain and the United States, and by a majority of the other 46 signatories.</p></li><li><p>16 July 1945: The Trinity test.</p></li><li><p>17 July to 2 Aug 1945: The <a href="https://en.wikipedia.org/wiki/Potsdam_Conference">Potsdam Conference</a> between the UK, the US, and the USSR.</p></li><li><p>6 and 9 August 1945: atomic bombings of Hiroshima and Nagasaki.</p></li><li><p>2 September 1945: Japan surrenders; the end of WW2.</p></li><li><p>24 October 1945: The UN officially comes into existence after ratifications.</p></li><li><p>June 1946: The Baruch Plan for international arms control is presented to the UN Atomic Energy Commission (UNAEC).</p></li><li><p>30 Dec 1946: The Baruch Plan fails to pass due to a USSR veto.</p></li></ul><h1>Tentative observations</h1><h2><strong>The P5 &amp; the veto</strong></h2><p>The most significant article in the Charter is the one which grants veto power to the 5 permanent members of the Security Council (P5) on all &#8216;non-procedural matters&#8217;.</p><p>The existence of the veto in the first place seems somewhat over-determined:</p><ul><li><p>The Dumbarton Oaks proposal gave the Big Four a kind of meta-veto in the drafting process for the Charter: they could veto amendments from lesser countries.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p></li><li><p>The US Congress made clear that it required a veto (and that the lack of a veto was why Congress had previously scuppered the League of Nations). And a US veto would need to be mirrored, at least, by a veto for the USSR.</p></li><li><p>It&#8217;s important to remember that, initially, member countries imagined that the UN would have its own serious military force under the Military Staff Committee. This possibility presumably made a veto seem even more important (though ultimately the Military Staff Committee became &#8220;a non-functioning body&#8221; because of Cold War tensions, and was effectively defunct by 1948).<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p></li></ul><p>However, several important aspects of the veto seem more contingent:</p><ul><li><p>The addition of France to the &#8216;Big Four&#8217; was largely a demand of the UK / Churchill, who apparently wanted another European power to counterbalance US influence within the Western bloc. Interestingly, France had to be persuaded to accept the seat.</p></li><li><p>As for the veto&#8217;s impact, the hinge point wasn&#8217;t necessarily the Charter <em>per se</em> but the subsequent formation of norms for what constituted &#8216;substantive&#8217; issues within the scope of the veto (as opposed to &#8216;procedural&#8217; issues, which are outside it).
Within a couple of years, the US and USSR had both used the veto on matters other than war and peace, establishing the obstructionist norms that have persisted since, and rendering the UN pretty ineffectual throughout the Cold War.</p><ul><li><p>However, given the Cold War, it&#8217;s hard to say how contingent this use of the veto was: perhaps it was very likely that the US and the USSR would interpret it in this way.</p></li></ul></li></ul><h2><strong>Intellectuals and civil society groups</strong></h2><p>Prior to the UN Charter, the League of Nations was drafted in significant part by a group of intellectuals appointed by Woodrow Wilson, called &#8216;<a href="https://en.wikipedia.org/wiki/The_Inquiry">The Inquiry</a>&#8217;. Wilson commissioned 150 intellectuals from different disciplines to prepare materials for the WW1 peace negotiations, with a view to &#8216;solving&#8217; geopolitical turmoil. This included drawing up post-war borders and establishing the League of Nations.</p><p>Given the ultimate failure of the League of Nations, this is more of a cautionary tale, and these elite-driven plans for the League were derided as &#8220;the professors&#8217; peace&#8221;.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> However, this didn&#8217;t lead to a broader rejection of input from intellectuals when it came to UN planning. In part, this is because this intellectual milieu split into factions. The die-hard world federalists (like H.G. Wells and Clarence Streit) <em>did</em> lose influence, but the more moderate pragmatists (like Shotwell and Webster) remained influential.</p><p>Some of the intellectuals most influential on the drafting of the UN Charter were:</p><ul><li><p><a href="https://en.wikipedia.org/wiki/Leo_Pasvolsky">Leo Pasvolsky</a>, the &#8220;foremost author of the UN Charter&#8221;.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> Pasvolsky was a US State Department official who led the work on postwar planning for an international body from 1939 (though efforts only began in earnest in 1942).</p></li><li><p><a href="https://en.wikipedia.org/wiki/James_T._Shotwell">James T. Shotwell</a>, who helped draft the UN Charter. Shotwell was a history professor and a previous member of The Inquiry. He had also been instrumental in establishing the International Labour Organization and the Commission to Study the Organization of Peace.</p></li><li><p><a href="https://en.wikipedia.org/wiki/Clark_Eichelberger">Clark Eichelberger</a>, who advised Roosevelt and the US delegation. Eichelberger was a peace activist and a prolific advocate for both the League of Nations and the UN. He served as a bridge between the State Department and civil society groups (for example by helping to select attendees and organising side events).</p></li><li><p><a href="https://en.wikipedia.org/wiki/Gladwyn_Jebb">Gladwyn Jebb</a>, the British Pasvolsky. Jebb was a Foreign Office official who led the UK&#8217;s planning for an international body from the early 1940s. He also served as the UN&#8217;s first Acting Secretary-General.</p></li><li><p><a href="https://en.wikipedia.org/wiki/Charles_Webster_(historian)">Charles Webster</a>, who wrote a series of influential case studies of earlier international agreements.
A history professor, Webster was one of two leading figures in British planning (with Jebb), and an expert on the precedents of great-power agreements during the nineteenth century.</p></li></ul><p>Campaigning groups and civil society organisations also played a significant role in the drafting of the UN Charter:</p><ul><li><p>The <a href="https://en.wikipedia.org/wiki/Commission_to_Study_the_Organization_of_Peace">Commission to Study the Organization of Peace</a> (CSOP) was established by Shotwell and issued a report on &#8220;Fundamentals of the International Organization&#8221;, which formed the basis for the US State Department&#8217;s Dumbarton Oaks proposal. In October 1944, CSOP assembled fifty organizations to discuss pro-UN strategy, and they agreed to back a common campaign for the organization.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a></p></li><li><p>Many of the most influential civil society groups were in attendance at the San Francisco conference. They were pre-selected by the State Department for alignment; groups favouring world government and reactionaries/isolationists were not invited.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a></p></li><li><p>These groups secured three modest victories during the conference (but were ignored on the big issues like security, the veto, and trusteeship):</p><ul><li><p>Adding the word &#8220;education&#8221; to the Charter (and thereby giving the UN some remit over such matters).</p></li><li><p>Incorporating &#8220;human rights&#8221; and establishing a human rights commission.</p></li><li><p>Article 71, enshrining the collaborative relationship between the UN and NGOs via the Economic and Social Council of the United Nations.</p></li></ul></li></ul><h2><strong>US domestic politics and public opinion</strong></h2><p>Woodrow Wilson&#8217;s plans for the League of Nations had been scuppered by domestic opposition in the US. Ratifying the treaty required a two-thirds Senate majority. Republicans objected that Wilson&#8217;s draft charter impinged on US sovereignty and undermined the doctrine of US non-entanglement. In the 1918 midterms, Wilson sought a mandate for his plans, but Republicans gained control of both chambers and blocked the US from joining the League.</p><p>Throughout the UN process, Presidents were constrained by the need for Republican support for the proposals, not wanting to repeat Wilson&#8217;s error. Roosevelt tried hard to loop in Republicans in the early planning, and secured Republican support for the high-level ambitions in 1943 (though other factors like Pearl Harbor presumably contributed to US isolationism falling out of favour). Roosevelt and Truman both gave major roles to high-ranking Republicans during the negotiations, most notably Senator <a href="https://en.wikipedia.org/wiki/Arthur_Vandenberg">Vandenberg</a> (a key figure in the San Francisco conference) and <a href="https://en.wikipedia.org/wiki/John_Foster_Dulles">John Foster Dulles</a>.</p><p>Bipartisan support for the UN enabled Congress to pass two resolutions in favor of a global assembly, lending some public sanction to the process. First, on September 21, 1943, the House of Representatives passed the so-called Fulbright Resolution &#8220;favoring the creation of appropriate international machinery&#8221; to maintain the peace.
Then on November 5, 1943, the Senate enacted the Connally Resolution (named after the chairman of the Senate Foreign Relations Committee), which called for the establishment of &#8220;international authority with power to prevent aggression&#8221; in the form of a &#8220;general international organization.&#8221;</p><p>As well as courting Republican support, US politicians seem to have been very focussed on shaping (via press / PR) and gauging (via polling) public opinion throughout the process.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> The British delegation, too, was very conscious of the necessity of maintaining US domestic support.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a> This included allowing the US to take credit for much of the planning, which the UK viewed as important for the plan&#8217;s success.</p><p>The State Department embarked on a huge PR campaign to garner support for the UN Charter in 1944&#8211;45 (its first major PR campaign). It was widely regarded as successful.</p><p>Archibald MacLeish, Assistant Secretary of State, disseminated information about the UN in weekly forums, and distributed <em><a href="https://www.youtube.com/watch?v=DRF4eGof8TA">Watchtower Over Tomorrow</a></em>, a film about the Dumbarton Oaks plan, to groups around the country. In late 1944, an eight-page pamphlet containing the text of the Dumbarton Oaks proposals was sent out to over 1.25 million people, a mass distribution unprecedented for the State Department, placing it on the best-seller list.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a></p><p>Many civil society organisations also participated in the campaign. Clark Eichelberger wrote a thirty-two-page pamphlet on Dumbarton Oaks that, via his affiliate organizations, reached over 21,000 people. The National League of Women Voters sent out a discussion guide and text to six hundred local chapters around the country. The Woodrow Wilson Foundation mailed 318,000 copies of the Dumbarton Oaks text to individuals free of charge&#8212;nearly going bankrupt in the process. The national commander of the American Legion dispatched letters to his 12,000 posts urging the adoption of the UN Charter.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a> And the Union for Democratic Action released 1 million copies of a cartoon brochure, <em><a href="https://ia800405.us.archive.org/9/items/fromgardenofeden00kand/fromgardenofeden00kand.pdf">From the Garden of Eden to Dumbarton Oaks.</a></em><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-13" href="#footnote-13" target="_self">13</a></p><p>Polls reflected the impact of this PR blitz. In December 1944, only 43% of the American people had heard of Dumbarton Oaks. This rose to 52% by February 1945, and 60% by March 1945. 60% of Americans supported the San Francisco conference after Roosevelt&#8217;s January State of the Union address, rising to 80% after the Yalta conference.
In April, on the eve of the San Francisco conference, 94% of the American public were aware of the conference.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-14" href="#footnote-14" target="_self">14</a></p><p>The San Francisco Conference itself was a huge media event, with 2,300 newspaper people in attendance,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-15" href="#footnote-15" target="_self">15</a> and press coverage seems to have been very important to delegates. Several journalists were also actively involved in the US efforts as insiders. Walter Lippmann, a journalist who at 28 had served as research director for Woodrow Wilson&#8217;s Inquiry, attended the conference and had remained close to the US government. Pasvolsky&#8217;s Advisory Committee, which worked from 1942 to develop a plan for the UN, included two journalists:<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-16" href="#footnote-16" target="_self">16</a> Anne O&#8217;Hare McCormick (on the NYT editorial staff) and Hamilton Fish Armstrong (editor of <em>Foreign Affairs</em>).</p><p>However, media management at the start of the conference was poor (from a US perspective). The US delegation was reluctant to brief or leak to journalists, whereas the Soviets and others were much more obliging, resulting in a slew of coverage (in the NYT and elsewhere) critical of the US for taking firm stances against the USSR. The US then overhauled its media operation and started giving regular briefings and leaks, which brought the press more onto the US side.</p><h2><strong>Preparatory work</strong></h2><p>The basics of the UN Charter were agreed at the Dumbarton Oaks conference between the Big Four, with only a few details unresolved by the time of the San Francisco conference.</p><p>US sources describe the Charter as basically having been written by the US, without much input from the other great powers.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-17" href="#footnote-17" target="_self">17</a> However, Ehrhardt (2020) shows quite convincingly that the UK decided at various points to let the US take the credit, in order to keep US domestic opinion favourable to the plans.</p><p>Beyond the US and UK, though, there really was very little input from other great powers.</p><p>In some ways this isn&#8217;t surprising, given how much this was a US plan and how much effort the US had put into it over several years leading up to the Charter. Pasvolsky began work at the State Department on what would become the UN in 1939. This work was effectively paused during the start of WW2 proper, but then really got underway in early 1942 with the establishment of a special subcommittee on International Organization. This subcommittee worked incredibly hard, meeting 45 times over 9 months, and issuing a preliminary draft to Roosevelt in March 1943.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-18" href="#footnote-18" target="_self">18</a> This was then re-drafted again and again over the next few months. By 29 December 1943, the draft had all the basics of the UN Charter: a small Executive Council with a Big Four veto to handle security matters, a General Assembly with all nations, a Secretariat and sub-agencies, and an international Court.</p><p>One notable difference between the US and UK was the enthusiasm of their leaders for UN planning.
Roosevelt and his Secretaries of State seem to have cared a great deal and, at least by 1945, to have seen the UN plan as one of the most important things on their plates. Truman, who took over just before the San Francisco conference, felt similarly. Churchill, on the other hand, has been described as &#8220;one of the main obstacles to adequate British planning and to the actual establishment of the United Nations Organisation&#8221;.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-19" href="#footnote-19" target="_self">19</a> He seems to have been generally not that interested, but occasionally fixated on his own idiosyncratic (and poorly thought-through) vision for an international organisation, which derailed things.</p><h2><strong>Idealism and pragmatism</strong></h2><p>A clear thread running through the story of the UN Charter is the balance between idealism and pragmatism.</p><p>The standard narrative is something like:</p><ul><li><p>Wilson was an idealist<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-20" href="#footnote-20" target="_self">20</a> and an ivory tower academic type.</p></li><li><p>His League of Nations plan failed because it paid insufficient attention to realist great power considerations (toothless enforcement, lack of buy-in from great powers, too democratic / consensus-based)<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-21" href="#footnote-21" target="_self">21</a> and domestic political considerations (with Congress refusing to ratify the League Treaty).</p></li><li><p>The UN plans were driven forward by a certain amount of idealism from the US, but tempered with pragmatism based on the failed experiment of the League, and that&#8217;s why it worked.</p></li></ul><p>Intellectually, the people who had worked on and advocated for the League of Nations seem to have split into two factions in the 1920s and 1930s:</p><ul><li><p>An idealistic faction who remained wedded to the idea of a world federation / government and continued to advocate for this, but lost political influence because of the League&#8217;s abject failure, including thinkers like H.G. Wells, Clarence Streit, and later Bertrand Russell.</p></li><li><p>A more pragmatic faction, who set up think tanks like the Council on Foreign Relations and Chatham House, and &#8216;institutionalised&#8217;. They had a more moderate, but still idealistic internationalist worldview, and were the people who were brought into the UN planning&#8212;thinkers like Webster, Jebb, Shotwell, Eichelberger, and Walter Lippmann.</p></li></ul><h2><strong>Other observations</strong></h2><ul><li><p><strong>The failure of the League of Nations loomed large over UN planning.</strong> There was a fairly clear historical example of how not to do things.</p></li><li><p><strong>Spying was rife at the San Francisco conference.</strong><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-22" href="#footnote-22" target="_self">22</a> The US had a huge spying operation during the conference, including wiretapping diplomatic cables. This gave them a decent edge in negotiations, since they had inside knowledge of what other countries were thinking and where their reservations were.
The USSR was later revealed to have had its own operation, including some sources within the US delegation.</p></li><li><p><strong>Amendments to the UN Charter have been rare.</strong> <a href="https://en.wikisource.org/wiki/Charter_of_the_United_Nations#Article_108">Article 108</a> allows for amendments with the support of two-thirds of the General Assembly and all 5 permanent members of the Security Council. <a href="https://en.wikisource.org/wiki/Charter_of_the_United_Nations#Article_109">Article 109</a> provides for the convening of a &#8220;General Conference of the Members of the United Nations&#8221; to consider changes to the Charter, which can be triggered by a two-thirds vote in the General Assembly and a vote of any seven members of the Security Council. Such a conference was scheduled for 1955, but didn&#8217;t actually take place. To date there have been <a href="https://en.wikipedia.org/wiki/Amendments_to_the_United_Nations_Charter#Amendments">five amendments</a> to the Charter. All were made between 1965 and 1973, to accommodate the increased size of the UN following decolonisation.</p></li><li><p><strong>There have been <a href="https://en.wikipedia.org/wiki/Amendments_to_the_United_Nations_Charter#Structural_changes_adopted_without_amendment">three major structural changes</a> to the UN made without amendment:</strong></p><ul><li><p>P5 abstentions in the Security Council have been interpreted in practice as &#8216;concurring votes&#8217; with respect to the veto on non-procedural matters.</p></li><li><p>After the collapse of the Soviet Union, Russia took the USSR&#8217;s place on the Security Council.</p></li><li><p>In 1971, the PRC assumed the Chinese seat (previously held by the Taipei Nationalist government) following General Assembly <a href="https://en.wikipedia.org/wiki/United_Nations_General_Assembly_Resolution_2758">resolution 2758</a>.</p></li></ul></li><li><p><strong>Much of the Charter is fairly redundant today.</strong> For example, the UN Charter envisaged a key role for the UN in economic and social matters, but the UN has been superseded by other bodies on economic matters&#8212;namely the Bretton Woods system for international finance.</p></li></ul><h1>Appendix: Locksley Hall</h1><blockquote><p>For I dipt into the future, far as human eye could see,<br>Saw the Vision of the world, and all the wonder that would be;</p><p>Saw the heavens fill with commerce, argosies of magic sails,<br>Pilots of the purple twilight dropping down with costly bales;</p><p>Heard the heavens fill with shouting, and there rain&#8217;d a ghastly dew<br>From the nations&#8217; airy navies grappling in the central blue;</p><p>Far along the world-wide whisper of the south-wind rushing warm,<br>With the standards of the peoples plunging thro&#8217; the thunder-storm;</p><p>Till the war-drum throbb&#8217;d no longer, and the battle-flags were furl&#8217;d<br>In the Parliament of man, the Federation of the world.</p><p>There the common sense of most shall hold a fretful realm in awe,<br>And the kindly earth shall slumber, lapt in universal law.</p><p>&#8212;Alfred Lord Tennyson, 1842, <a href="https://en.wikipedia.org/wiki/Locksley_Hall">Locksley Hall</a></p></blockquote><p>This Victorian futurist poem was a favourite of two key figures in the story of the UN Charter: Winston Churchill and Harry Truman.</p><ul><li><p>Truman kept a copy of it in his wallet for thirty years, reflecting in 1952: &#8220;it is a prophecy of the age in which we live now.
And we are faced with a much greater age than the one that Tennyson dreamed about &#8230; I think we are at the door of the greatest age in history in everything. If we can prevent a third world war &#8230; the young people today, I think, will see &#8230; an age that our fathers and grandfathers dreamed about, but never thought would happen.&#8221;</p></li><li><p>Churchill called it &#8220;the most wonderful of modern prophecies&#8221; and quoted it throughout his life, including in his essay <a href="https://www.nationalchurchillmuseum.org/fifty-years-hence.html">Fifty Years Hence</a>.</p></li></ul><h1>References</h1><p>Edis (2007). <em><a href="https://www.tandfonline.com/doi/abs/10.1080/09557579208400074">A job well done: The founding of the united nations revisited</a></em>.</p><p>Ehrhardt (2020). <em><a href="https://kclpure.kcl.ac.uk/ws/portalfiles/portal/139540410/2020_Ehrhardt_Andrew_1456418_ethesis.pdf">The British Foreign Office and the Creation of the United Nations Organization, 1941- 1945</a></em>.</p><p>Gerber (1982). &#8216;The Baruch Plan and the Origins of the Cold War.&#8217; Diplomatic History 6:4, pp. 69-96. <a href="https://sci-hub.se/https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-7709.1982.tb00792.x">https://sci-hub.se/https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-7709.1982.tb00792.x</a>.</p><p>Kennedy (2007). <em>The Parliament of Man: The Past, Present, and Future of the United Nations.</em></p><p>McCullough (1992). <em>Truman</em>.</p><p>Schlesinger (2003). <em>Act of Creation: The Founding of the United Nations</em>.</p><p>&#8216;United Nations Charter (full text)&#8217; (1945). <a href="https://www.un.org/en/about-us/un-charter/full-text">https://www.un.org/en/about-us/un-charter/full-text</a></p><p>Webster (1947). <a href="https://onlinelibrary.wiley.com/doi/10.1111/j.1468-229X.1947.tb00182.x">The Making of the Charter of the United Nations</a>.</p><p>Zaidi and Dafoe (2021). &#8216;International Control of Powerful Technology: Lessons from the Baruch Plan for Nuclear Weapons&#8217;. Centre for the Governance of AI Working Paper. <a href="https://cdn.governance.ai/International-Control-of-Powerful-Technology-Lessons-from-the-Baruch-Plan-Zaidi-Dafoe-2021.pdf">https://cdn.governance.ai/International-Control-of-Powerful-Technology-Lessons-from-the-Baruch-Plan-Zaidi-Dafoe-2021.pdf</a></p><p><em>This article was created by <a href="https://www.forethought.org/about">Forethought</a>. See the original <a href="https://www.forethought.org/research/the-un-charter-a-case-study-in-international-governance">on our website</a>.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>The contentious members, who hadn&#8217;t joined the Allies by 1942, were Argentina (neutral / pro-Nazi), and Belarus and Ukraine (both Soviet Republics). 
Roughly speaking, Belarus and Ukraine were admitted as a concession to the USSR, which objected to the inclusion of Argentina.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Kennedy (2007).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Schlesinger (2003), p.182.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p><a href="https://en.wikipedia.org/wiki/Military_Staff_Committee">https://en.wikipedia.org/wiki/Military_Staff_Committee</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Ehrhardt (2020), p. 100.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p><a href="http://transcripts.cnn.com/TRANSCRIPTS/0412/24/i_dl.01.html">&#8220;Interview with Stephen Schlesinger on CNN&#8217;s Diplomatic License&#8221;</a>. December 24, 2004.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>Schlesinger (2003), p.71.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>CSOP; the Congress of Industrial Organizations (CIO); the Council on Foreign Relations; the American Jewish Committee; the American Bar Association; the League of Women Voters; the Catholic Welfare Conference; the Foreign Policy Association; the NAACP; the Kiwanis International; the Lions International; the Rotary International; the National Education Association; the American Legion; the National Lawyers&#8217; Guild; and twenty-seven other organizations.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>For example, in the Senate debate over ratifying the Treaty, Senator Connally (a US delegate) &#8220;listed the numerous independent groups backing the agreement, and mentioned opinion polls in favor of the U.N. Charter&#8221; (Schlesinger (2003), p. 290).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>Ehrhardt (2020), p. 33.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>Schlesinger (2003), p.
84.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>Schlesinger (2003), p. 71.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-13" href="#footnote-anchor-13" class="footnote-number" contenteditable="false" target="_self">13</a><div class="footnote-content"><p>Schlesinger (2003), p. 84.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-14" href="#footnote-anchor-14" class="footnote-number" contenteditable="false" target="_self">14</a><div class="footnote-content"><p>Schlesinger (2003), p. 84.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-15" href="#footnote-anchor-15" class="footnote-number" contenteditable="false" target="_self">15</a><div class="footnote-content"><p>Schlesinger (2003), p. 162.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-16" href="#footnote-anchor-16" class="footnote-number" contenteditable="false" target="_self">16</a><div class="footnote-content"><p>Schlesinger (2003), p. 56.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-17" href="#footnote-anchor-17" class="footnote-number" contenteditable="false" target="_self">17</a><div class="footnote-content"><p>Hull (US Secretary of State) observed in his memoirs that &#8220;all the essential points in the tentative draft&#8221; that he had originally handed to the Russians and the British before the conference &#8220;were incorporated in the draft now accepted by the conference.&#8221; A US source on Dumbarton Oaks stated: &#8220;neither the British, the Russians, nor the Chinese seemed to take the preparatory work very seriously. Each of the governments sent Roosevelt some general thoughts on a global body, but, except for some lengthy British notations titled &#8220;Future World Organization,&#8221; nothing of serious consequence.&#8221; As a result, the Pasvolsky proposal, &#8220;which was by far the most complete and detailed of the three, became&#8212;albeit unofficially&#8212;the basic frame of reference for building a plan of world organization.&#8221; Schlesinger (2003), p. 65.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-18" href="#footnote-anchor-18" class="footnote-number" contenteditable="false" target="_self">18</a><div class="footnote-content"><p>Schlesinger (2003), p. 57.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-19" href="#footnote-anchor-19" class="footnote-number" contenteditable="false" target="_self">19</a><div class="footnote-content"><p>Ehrhardt (2020), p. 13, citing E. J. Hughes.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-20" href="#footnote-anchor-20" class="footnote-number" contenteditable="false" target="_self">20</a><div class="footnote-content"><p>In some respects, but not others. 
<a href="https://en.wikipedia.org/wiki/Woodrow_Wilson_and_race">https://en.wikipedia.org/wiki/Woodrow_Wilson_and_race</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-21" href="#footnote-anchor-21" class="footnote-number" contenteditable="false" target="_self">21</a><div class="footnote-content"><p>Gladwynn Jebb (a key UK delegate): &#8220;The League system...was about as perfect as the human mind could derive. The only trouble about it was that it wouldn't work. The reason why it wouldn't work was in the first place because the existing Great Powers could not agree as among themselves on certain essential things. And until we do get agreement between the World Powers on these essential things no international machine however perfect will ever work.&#8221; (Ehrhardt (2020), p. 196).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-22" href="#footnote-anchor-22" class="footnote-number" contenteditable="false" target="_self">22</a><div class="footnote-content"><p>Schlesinger (2003), chapter 7.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[Angel-on-the-shoulder AI tools]]></title><description><![CDATA[See the full article on Forethought&#8217;s website.]]></description><link>https://newsletter.forethought.org/p/angel-on-the-shoulder-ai-tools</link><guid isPermaLink="false">https://newsletter.forethought.org/p/angel-on-the-shoulder-ai-tools</guid><dc:creator><![CDATA[Owen Cotton-Barratt]]></dc:creator><pubDate>Mon, 09 Feb 2026 10:17:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!5qrg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3183b9b8-f140-4bf4-a857-2aa8a022828e_2172x1476.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>See the full article <a href="https://www.forethought.org/research/design-sketches-angels-on-the-shoulder">on Forethought&#8217;s website</a>.</em></p><p>We&#8217;ve recently published a set of design sketches for technological analogues to &#8216;angels-on-the-shoulder&#8217;: customized tools that leverage near-term AI systems to help people better navigate their environments and handle tricky situations in ways they&#8217;ll feel good about later.</p><p>We think that these tools could be quite important:</p><ul><li><p>In general, we expect angels-on-the-shoulder to mean more endorsed decisions, and fewer unforced errors.</p></li><li><p>In the context of the <a href="https://strangecities.substack.com/p/the-choice-transition">transition to more advanced AI systems</a> that we&#8217;re faced with, this could be a huge deal. 
We think that people who are better informed, more situationally aware, more in touch with their own values, and less prone to obvious errors are more likely to handle the coming decades well.</p></li></ul><p>We&#8217;re excited for people to build tools that help this to happen, and hope that our design sketches will make this area more concrete, and inspire people to get started.</p><p>The (overly-)specific technologies we sketch out are:</p><ul><li><p><strong><a href="https://www.forethought.org/research/design-sketches-angels-on-the-shoulder#aligned-recommender-systems">Aligned recommender systems</a></strong> &#8212; Most people consume content recommended to them by algorithms trained not to drive short-term engagement, but to meet long-term user endorsement and considered values</p></li><li><p><strong><a href="https://www.forethought.org/research/design-sketches-angels-on-the-shoulder#personalised-learning-systems">Personalised learning systems</a></strong> &#8212; When people want to learn about (or keep up-to-date on) a topic or area of work, they can get a personalised &#8220;curriculum&#8221; (that&#8217;s high quality, adapted to their preferences, and built around gaps in their knowledge) integrated into their routines, so learning is effective and feels effortless</p></li><li><p><strong><a href="https://www.forethought.org/research/design-sketches-angels-on-the-shoulder#deep-briefing">Deep briefing</a></strong> &#8212; Anyone facing a decision can quickly get a summary of the key considerations and tradeoffs (in whichever format works best for them), as would be compiled by an expert high-context assistant, with the ability to double-click on the parts they most want to know more about</p></li><li><p><strong><a href="https://www.forethought.org/research/design-sketches-angels-on-the-shoulder#reflection-scaffolding">Reflection scaffolding</a></strong> &#8212; People thinking through situations they experience as tricky, or who want to better understand themselves or pursue personal growth, can do so with the aid of an expert system, which, as an infinitely-patient, always-available Socratic coach, will read what may be important for the person in their choice of words or tone of voice, ask probing questions, and push back in the places where that would be helpful</p></li><li><p><strong><a href="https://www.forethought.org/research/design-sketches-angels-on-the-shoulder#guardian-angels">Guardian angels</a></strong> &#8212; Many people use systems that flag when they might be about to do something they could seriously regret, and help them think through what they endorse and want to go for (as an expert coach might)</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!5qrg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3183b9b8-f140-4bf4-a857-2aa8a022828e_2172x1476.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!5qrg!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3183b9b8-f140-4bf4-a857-2aa8a022828e_2172x1476.png 424w, 
https://substackcdn.com/image/fetch/$s_!5qrg!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3183b9b8-f140-4bf4-a857-2aa8a022828e_2172x1476.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!5qrg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3183b9b8-f140-4bf4-a857-2aa8a022828e_2172x1476.png" width="1456" height="989" class="sizing-normal" alt="Hand-drawn concept board titled &#8220;Angels-on-the-shoulder,&#8221; with interface mockups for reflection scaffolding, deep briefings, aligned recommenders, personalized learning, and guardian angels, showing AI tools that support better real-time decisions." title="Hand-drawn concept board titled &#8220;Angels-on-the-shoulder,&#8221; with interface mockups for reflection scaffolding, deep briefings, aligned recommenders, personalized learning, and guardian angels, showing AI tools that support better real-time decisions."></picture></div></a></figure></div><p>If you have ideas for how to implement these technologies, issues we may not have spotted, or visions for other tools in this space, we&#8217;d love to hear them.</p><p><em>See the full article <a href="https://www.forethought.org/research/design-sketches-angels-on-the-shoulder">on Forethought&#8217;s website</a>.</em></p>]]></content:encoded></item></channel></rss>