Discussion about this post

NoraA

Thanks for writing this - I certainly agree with the thrust!

In terms of what transitions might be particularly important, a plausible candidate I've been thinking a lot about recently (and that I'd be interested in your reactions on) is "agentic transactioning": roughly, the shift toward AI agents negotiating, contracting, and exchanging on behalf of humans (as well as with each other).

("agentic transactioning" probably isn't the best term, but maybe if we all squint a bit for now, it can still be a useful starting ground for discussion.)

If we handle this poorly, the lion's share of economic activity could shift to agent-to-agent transactions, with humans falling out of the loop entirely: failing to benefit, failing to oversee. It could undermine privacy, contribute to the erosion of societal checks and balances, and ultimately threaten the rule of law itself (e.g. if we fail to effectively prevent or sanction agents colluding, committing fraud, or engaging in other harmful behaviours).

In better worlds, humans could gain significantly, both individually and collectively: we'd be able to internalize externalities far more efficiently, both positive (enabling appropriate investment in health, progress, flourishing, etc.) and negative (correcting underinvestment in risk reduction, collective goods, ...).
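To make "internalizing externalities" a bit more concrete, here's a toy sketch (mine, not from the post; all numbers are hypothetical): a deal that looks profitable to the transacting parties, but destroys value overall once the harm imposed on third parties is priced in. Delegate agents that negotiate with such costs included could systematically strike better bargains than we manage today.

```python
# Toy illustration of internalizing a negative externality.
# All values are hypothetical.
def deal_surplus(private_value: float, price: float,
                 externality_cost: float, internalize: bool) -> float:
    """Surplus a delegate agent attributes to a purchase; if
    `internalize` is True, third-party harm is priced in."""
    cost = externality_cost if internalize else 0.0
    return private_value - price - cost

# Ignoring third parties, the deal looks worth doing (+2)...
print(deal_surplus(10.0, 8.0, externality_cost=5.0, internalize=False))  # 2.0
# ...but it destroys value overall once the externality counts (-3).
print(deal_surplus(10.0, 8.0, externality_cost=5.0, internalize=True))   # -3.0
```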

To be clear, I don't think this is just about the economy. I think it bears directly on what is arguably the core question of political philosophy: how do we live alongside each other well, given heterogeneous preferences and interests, and given that we share one physical universe (which means we can "step on each other's toes", but also achieve things collectively we couldn't achieve alone)? IMO, to a first approximation, ~all institutions (political, economic, cultural) can usefully be interpreted as (partial) answers to that question.

For example, we centralise some power in 'the Sovereign' largely because we've lacked the civilisational fabric to reliably achieve good outcomes without it (e.g. managing tragedy-of-the-commons dynamics). But the right combination of 'intelligence on tap', channelled through the right infrastructure, could represent a step change in society's capacity to organise itself, including the possibility of decentralising some of that power while still maintaining robust mutual accountability. (Just to avoid confusion, this is not an argument for the abolition of the state.)

In terms of what needs doing here: this is obviously a big bucket. It centrally includes epistemics, but it also requires, for example, upgrading our 'root of trust' by hardening hardware and software infrastructure, building strategy-proof deliberation and coordination infrastructure, etc. The AI alignment field has made some (IMO encouraging) progress towards training genuinely quite ~friendly (though certainly not infallible) AI that can act as 'delegate' agents. This is a necessary but not sufficient piece of the puzzle. It also critically requires a bunch of shared infrastructure, and building that seems potentially extremely high-leverage to me!
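As one concrete illustration of the kind of shared infrastructure I have in mind, here's a minimal Python sketch (mine; the class names are invented, and stdlib HMAC stands in for signing backed by a real hardware root of trust): a hash-chained audit log of agent-to-agent transactions that human principals, or overseer agents, can later verify hasn't been altered or had entries inserted.

```python
# Hypothetical sketch: an auditable agent-to-agent transaction log.
# TransactionRecord and AuditLog are invented names; stdlib HMAC stands
# in for signing backed by a real hardware root of trust.
import hashlib
import hmac
import json
import time
from dataclasses import dataclass, field


@dataclass
class TransactionRecord:
    buyer_agent: str    # agent transacting on behalf of a human principal
    seller_agent: str
    terms: dict         # the negotiated contract terms
    timestamp: float = field(default_factory=time.time)

    def canonical_bytes(self) -> bytes:
        # Deterministic serialization so signer and verifier hash the same bytes.
        return json.dumps(
            {
                "buyer": self.buyer_agent,
                "seller": self.seller_agent,
                "terms": self.terms,
                "ts": self.timestamp,
            },
            sort_keys=True,
        ).encode()


class AuditLog:
    """Append-only, hash-chained log: each entry's tag commits to all
    prior entries, so humans (or overseer agents) can detect any
    insertion, deletion, or alteration after the fact."""

    def __init__(self, key: bytes):
        self._key = key                  # stand-in for a hardware-held secret
        self._entries: list[dict] = []
        self._head = b"genesis"

    def append(self, record: TransactionRecord) -> None:
        payload = self._head + record.canonical_bytes()
        tag = hmac.new(self._key, payload, hashlib.sha256).hexdigest()
        self._entries.append({"record": record, "tag": tag})
        self._head = tag.encode()

    def verify(self) -> bool:
        head = b"genesis"
        for entry in self._entries:
            payload = head + entry["record"].canonical_bytes()
            expected = hmac.new(self._key, payload, hashlib.sha256).hexdigest()
            if not hmac.compare_digest(entry["tag"], expected):
                return False
            head = expected.encode()
        return True


# Usage: two delegate agents strike a deal; the chained log supports audit.
log = AuditLog(key=b"demo-key")
log.append(TransactionRecord("alice-delegate", "bob-delegate",
                             {"good": "compute-hours", "qty": 10}))
print(log.verify())  # True
```

The design choice that matters here is the hash chain: each entry's tag commits to everything before it, so tampering anywhere invalidates verification from that point onward.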

Neural Foundry

Compelling framework on transformation sequencing. The comparative advantage argument for working on earlier transitions is underappreciated in AI safety discourse. I've noticed similar patterns in how organizational change unfolds, where initial shifts determine future option spaces more than we typically model. The epistemics-first path particularly intrigues me as precursor infrastructure.

3 more comments...
