When will AI transform the physical world?
A podcast conversation with Tom Davidson and Will MacAskill
Tom Davidson and Will MacAskill are both researchers at Forethought.
They discuss:
What is the industrial explosion?
Why the case for recursive self-improvement is stronger for physical industry than for software
How fast the physical economy could grow, the case for weekly doubling times, and limits from natural resources
Three phases of the industrial explosion: AI-directed human labour → autonomous replicators → atomically precise manufacturing
Why authoritarian regimes might have a structural advantage in the industrial explosion — and whether they could lose the race to AGI, but win the race to industrial dominance
Could a leading country outgrow the entire world to achieve decisive dominance?
Why doubling your rival's GDP could mean 30 years of tech advantage — enough for military dominance
Why does the industrial explosion get ~1% of the attention of the intelligence explosion?
Here’s a link to the full transcript.
ForeCast is Forethought’s interview podcast. You can see all our episodes here.

Compelling breakdown of the physical economy transformation timeline. The distinction between software recursion and industrial recursion is especially sharp given how differently capital constraints work in each. In manufacturing, the bottleneck has always been tooling and supply chains rather than pure design iteration. The point about authoritarian regimes having structural advantages during the industrial explosion is worth unpacking more, since coordination at scale could indeed trump early AGI leads if deployment lags infrastructure.
Admittedly I haven't even finished the episode, but I think the discussion of whether AGI/ASI will make price signals less important for organizing an economy mostly misses an important conceptual distinction, namely between:
1) a price-based free market functionally doing computation; and
2) prices accurately eliciting information not only about the "external world" but also about the values, preferences, and internal states of actors in the economy.
I think that, to a first approximation, ASI will be able to do (1) pretty well and (2) less well, plausibly not well at all (depending on how extreme/'sci-fi' ASI really gets).
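To make (1) concrete, here's a toy sketch: once the supply and demand curves are *known* (the curves below are made up for illustration), finding the market-clearing price is a purely computational task, the kind of iterative adjustment an ASI would do trivially.

```python
# Toy tatonnement: iteratively adjust price until supply meets demand.
# The demand and supply curves are illustrative assumptions, not anything
# from the episode; the point is only that, given the curves, finding the
# clearing price is pure computation.

def demand(price: float) -> float:
    """Quantity buyers want at a given price (assumed, downward-sloping)."""
    return max(0.0, 100.0 - 2.0 * price)

def supply(price: float) -> float:
    """Quantity sellers offer at a given price (assumed, upward-sloping)."""
    return 3.0 * price

def clearing_price(p: float = 1.0, lr: float = 0.1, tol: float = 1e-6) -> float:
    """Raise price under excess demand, lower it under excess supply."""
    for _ in range(10_000):
        excess = demand(p) - supply(p)
        if abs(excess) < tol:
            break
        p += lr * excess
    return p

print(round(clearing_price(), 4))  # -> 20.0, where 100 - 2p == 3p
```

The computational step is trivial; everything interesting is hidden in where `demand` and `supply` come from, which is exactly what (2) is about.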
The hard part (potentially hard even for an ASI) is *eliciting* certain types of information from other actors (humans, robots, AI systems, or companies).
Now, some information currently elicited via prices is the kind of thing an ASI can gather or discover itself, like how much it takes in overall resources to make a table or chair. The carpenter has lots of information relevant to that question that I do not, but plausibly an ASI can model the world well enough to avoid having to deal with the carpenter as an intermediary, sure.
But other types of information, namely the preferences, willingness to pay, and supply and demand curves of individuals (again, not necessarily human), seem categorically different. An ASI doesn't just have to understand carpentry really well (which it can); it has to be able to model *me* from the outside.
Maybe this is possible. Maybe there will be extreme boundary violations (physically inspecting my brain, for example, and putting it back together) or less invasive technology that allows for extremely accurate modeling of other agents. But it's not *obvious* to me that the following two things are both true:
1) doing this sort of thing very, very accurately is physically possible at all; and
2) conditional on (1), doing it very, very accurately doesn't decrease the productive capacity of the robots/humans/companies whose preferences and supply and demand curves need elicitation.
Like, maybe ASI can reconstruct my exact willingness to drive Uber, write Substack posts, edit podcasts, and dig trenches at various price points, but only after killing me. Not super helpful, then!
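By contrast, the way markets elicit this information today is through incentives rather than inspection. A second-price auction is the textbook case: truthful bidding is a dominant strategy, so the mechanism gets your private willingness to pay out of you without having to model you at all. (The bidders and values below are made up.)

```python
# Toy second-price (Vickrey) auction: the mechanism *elicits* private
# willingness to pay, because bidding your true value is a dominant
# strategy. Bidders and values are illustrative; assumes >= 2 bidders.

def second_price_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Winner is the highest bidder; they pay the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price_paid = ranked[1][1]
    return winner, price_paid

bids = {"alice": 120.0, "bob": 95.0, "carol": 80.0}
print(second_price_auction(bids))  # -> ('alice', 95.0)
```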
I think the one big way around these feasibility concerns is the case where you have a single coherent, unified, agentic system, or perfectly aligned sets of agentic systems with no misaligned subparts (e.g. employees in a company), working together. In that case, sure, eliciting info is a straightforward communication and algorithmic problem.
But I don't think that Will and Tom necessarily expect this degree of unification and alignment (correct me if I'm wrong, of course).