Discussion about this post

NoraA

Thanks for writing this - I certainly agree with the thrust!

In terms of what transitions might be particularly important, a plausible candidate I've been thinking a lot about recently (and that I'd be interested in your reactions on) is "agentic transactioning": roughly, the shift toward AI agents negotiating, contracting, and exchanging on behalf of humans (as well as with each other).

("agentic transactioning" probably isn't the best term, but maybe if we all squint a bit for now, it can still be a useful starting ground for discussion.)

If we handle this poorly, the lion's share of economic activity could shift to agent-to-agent transactions with humans falling out of the loop entirely—failing to benefit, failing to oversee. It could undermine privacy, contribute to the erosion of societal checks & balances, and ultimately threaten the rule of law itself (e.g. if we fail to effectively prevent or sanction agents colluding, committing fraud, or engaging in other harmful behaviours).

In better worlds, humans could gain significantly, both individually and collectively: we'd be able to internalize externalities far more efficiently, both positive (appropriate investment in health, progress, flourishing, etc.) and negative (underinvestment in risk reduction, collective goods, ...).

To be clear, I don't think this is just about the economy. I think it bears directly on what is arguably the core question of political philosophy: how do we live alongside each other well, given heterogeneous preferences and interests, and given that we share one physical universe (which means we can "step on each other's toes", but also achieve things collectively we couldn't achieve alone)? IMO, to a first approximation, ~all institutions (political, economic, cultural) can usefully be interpreted as attempts at a (partial) answer to that question.

For example, we centralise some power in 'the Sovereign' largely because we've lacked the civilisational fabric to reliably achieve good outcomes without it (e.g. managing tragedy-of-the-commons-like dynamics). But the right combination of 'intelligence on tap', channelled through the right infrastructure, could represent a step change in society's capacity to organise itself—including the possibility of decentralising some of that power while still maintaining robust mutual accountability. (Just to avoid confusion, this is not trying to be an argument for the abolition of the state.)

In terms of what needs doing here: this is obviously a big bucket. It centrally includes epistemics, but also requires, for example, upgrading our 'root of trust' by hardening hardware & software infrastructure, building strategy-proof deliberation & coordination infrastructure, etc. The AI alignment field has made some IMO encouraging progress towards training genuinely quite ~friendly (though certainly not infallible) AI that can act as 'delegate' agents. This is a necessary, but not sufficient, piece of the puzzle. It also critically requires a bunch of shared infrastructure—building which seems potentially extremely high-leverage to me!

Phil Bell

Thanks, I found the point about sequencing both useful and thought-provoking. I've been exploring the work you and others are doing on AI for human reasoning, and this nicely articulates the potentially multiplicative impact of an epistemic shield of this kind.

