Compelling breakdown of the physical-economy transformation timeline. The distinction between software recursion and industrial recursion is especially sharp given how differently capital constraints work. In manufacturing, the bottleneck has always been tooling and supply chains rather than pure design iteration. The point about authoritarian regimes having structural advantages during the industrial explosion is worth unpacking more, since coordination at scale could indeed trump early AGI leads if deployment lags infrastructure.
Admittedly I haven't even finished the episode, but I think the discussion of whether AGI/ASI will make price signals less important for organizing an economy is mostly missing an important conceptual distinction, namely between:
1) a price-based free market functionally doing computation; and
2) prices accurately eliciting information about not only the "external world" but also about the values, preferences, and internal states of actors in the economy
I think that, to a first approximation, ASI will be able to do (1) pretty well and (2) less well, plausibly not well at all (depending on how extreme/'sci-fi' ASI really gets)
The hard part (potentially hard even for an ASI) is *eliciting* certain types of information from other actors (humans or robots or AI systems or companies)
Now, some information currently elicited via prices is the kind of thing ASI can gather or discover itself, like how much it takes in terms of overall resources to make a table or chair. The carpenter has lots of information relevant to that question that I do not, but plausibly ASI can model the world well enough to avoid having to deal with the carpenter as an intermediary, sure.
But other types of information, namely the preferences, willingness to pay, and supply and demand curves of individuals (again, not necessarily human), seem categorically different; ASI doesn't just have to understand carpentry really well, which it can - it has to be able to model *me* from the outside.
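To make the "modeling me from the outside" point concrete, here's a toy sketch (all names and numbers made up): an agent's reservation wage is private internal state, and the only way an outside observer - market or planner - learns it is by posting offers and watching accept/reject decisions, which is roughly the work prices do for free.

```python
# Toy elicitation sketch: the agent's reservation wage is private.
# An outside observer can only learn it through interaction.

class Driver:
    def __init__(self, reservation_wage):
        self._reservation = reservation_wage  # private internal state

    def accepts(self, offered_wage):
        # The observer sees only this yes/no behavior, not the number.
        return offered_wage >= self._reservation

def elicit_reservation(agent, lo=0.0, hi=100.0, tol=0.01):
    # Binary search over offers: each probe reveals one bit
    # about the hidden preference.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if agent.accepts(mid):
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

driver = Driver(reservation_wage=37.5)
print(round(elicit_reservation(driver), 1))  # ≈ 37.5
```

The point of the sketch is just that the number comes out of repeated interaction with the agent, not from inspecting the world around them; that's the elicitation problem in (2).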
Maybe this is possible - maybe there will be extreme boundary violations (physically inspecting my brain for example and putting it back together) or less invasive technology that allows for extremely accurate modeling of other agents, but it's not *obvious* to me that the following two things are both true:
1) doing this sort of thing very very accurately is physically possible at all
2) conditional on (1), doing this sort of thing very very accurately doesn't decrease the productive capacity of the robots/humans/companies whose preferences and supply and demand curves need elicitation.
Like maybe ASI can reconstruct my exact willingness to drive for Uber, write Substack posts, edit podcasts, and dig trenches at various price points, but only after killing me; not super helpful, then!
I think the one big way around these feasibility concerns is the case where you have a single coherent, unified, agentic system, or perfectly aligned sets of agentic systems with no misaligned subparts (e.g. employees in a company), working together. In that case, sure, eliciting info is a straightforward communication and algorithmic problem.
But I don't think that Will and Tom necessarily expect this degree of unification and alignment (correct me if wrong of course)
Great comment - thanks!
> 1) price-based free market functionally doing computation; and 2) prices accurately eliciting information about not only the "external world" but also about the values, preferences, and internal states of actors in the economy
Can you say more about what you mean by (1)?
Lol I actually didn't have an answer ready to go and it took me a minute but the intuition is like:
You can in principle know the facts of the matter about what everyone would do in an arbitrary situation: everyone has some function (maybe probabilistic or random in some sense, but still a function) mapping (world, including decisions of other actors) -> (my actions). You can then model a market economy as doing two things, conceptually in order:
- (First, but no. 2 above): documenting what every actor's function is (e.g. supply and demand curves for everything); and
- (Second, but no. 1 above): figuring out what happens when actors' functions, real-world conditions, and maybe the ex post results of randomness are thrown at each other
This "figuring out"^ is what I mean by (1).
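The two-step framing above can be sketched in code (a toy illustration with made-up linear curves): the `demand` and `supply` functions stand in for the already-elicited information in (2), and the equilibrium search is the "figuring out" in (1).

```python
# Step (2) is assumed solved: the aggregate demand and supply
# functions below are *given*. Step (1) is the computation:
# finding the price where they cross.

def demand(price):
    # Total quantity buyers would purchase at this price (made up)
    return max(0.0, 100.0 - 2.0 * price)

def supply(price):
    # Total quantity sellers would offer at this price (made up)
    return 3.0 * price

def clearing_price(lo=0.0, hi=100.0, tol=1e-6):
    # Bisection: excess demand falls as price rises, so
    # binary-search for the price where it crosses zero.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if demand(mid) > supply(mid):
            lo = mid  # excess demand -> price must rise
        else:
            hi = mid
    return (lo + hi) / 2

print(round(clearing_price(), 2))  # 20.0, since 100 - 2p = 3p at p = 20
```

Even with the functions handed to it for free, the program still has to *run* to find the answer; that gap between having the functions and knowing their joint outcome is the (1)/(2) split.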
Like I could in principle have perfect introspection/access to my own revealed preferences *and* perfect knowledge about everyone else's functions *and* perfect knowledge about what the weather will be and whether there will be an earthquake and so on, *and* be very smart *and* have access to arbitrary compute, and if you ask me "ok so are you going to be driving for Uber tonight at 7pm" my literal answer is "idk, I'd have to run the numbers/model things out and I'll tell you after I do that"
And I think this conceptual and literal gap between knowing all the low-level facts there are to know and knowing the outcome of those facts reflects the distinction between (1) and (2)
Also a sort of different but also-true answer is: "(1) means whatever Will is getting at when he says e.g., 'Once you've got superintelligence, you can more easily transmit information across this very large and complex economy, especially in a case where all of the AIs are working for the same boss in an autocracy.'"