Discussion about this post

Neural Foundry

Compelling breakdown of the physical economy transformation timeline. The distinction between software recursion and industrial recursion is especially sharp given how differently capital constraints work in each. In manufacturing sectors, the bottleneck has always been tooling and supply chains rather than pure design iteration. That point about authoritarian regimes having structural advantages during the industrial explosion is worth unpacking more, since coordination at scale could indeed trump early AGI leads if deployment lags infrastructure.

Aaron Bergman

Admittedly I haven't even finished the episode, but I think the discussion of whether AGI/ASI will make price signals less important for organizing an economy mostly misses an important conceptual distinction, namely between:

1) a price-based free market functionally doing computation; and

2) prices accurately eliciting information not only about the "external world" but also about the values, preferences, and internal states of actors in the economy

I think, to a first approximation, ASI will be able to do (1) pretty well and (2) less well, and plausibly not well at all (depending on how extreme/'sci-fi' ASI really gets)
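To make the (1)/(2) split concrete, here's a toy sketch of my own (not from the episode): the market's *computational* role can be replicated by a simple tâtonnement loop that adjusts price until supply meets demand. The `demand` and `supply` functions here are hypothetical stand-ins, and the whole point is that they encode agents' private preferences; running the loop is the easy part (1), while obtaining those functions at all is the hard elicitation part (2).

```python
# Toy illustration: an ASI (or planner) replicating the market's
# *computation* via tatonnement price adjustment. The catch is that
# demand() and supply() are handed to us here, but in reality they
# encode private preferences that would need to be elicited first.

def demand(p):
    # Hypothetical aggregate demand curve (private willingness to pay).
    return max(0.0, 100 - 2 * p)

def supply(p):
    # Hypothetical aggregate supply curve (private willingness to sell).
    return 3 * p

def tatonnement(p=1.0, step=0.01, tol=1e-6, max_iter=100_000):
    """Raise price under excess demand, lower it under excess supply."""
    for _ in range(max_iter):
        excess = demand(p) - supply(p)
        if abs(excess) < tol:
            break
        p += step * excess
    return p

p_star = tatonnement()  # converges to the clearing price: 100 - 2p = 3p, so p = 20
```

The loop is trivial; everything interesting was smuggled in through the two curve definitions, which is exactly the information point (2) says prices currently elicit.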

The hard part (potentially hard even for an ASI) is *eliciting* certain types of information from other actors (humans or robots or AI systems or companies)

Now some information currently elicited via prices is the kind of thing that ASI can itself gather or discover, like how much it takes in terms of overall resources to make a table or chair; the carpenter has lots of information relevant to that question that I do not, but plausibly ASI can model the world well enough to avoid having to deal with the carpenter as an intermediary, sure.

But other types of information, namely the preferences, willingness to pay, and supply and demand curves of individuals (again, not necessarily human), seem categorically different; ASI doesn't just have to understand carpentry really well (which it can), it has to be able to model *me* from the outside.

Maybe this is possible - maybe there will be extreme boundary violations (for example, physically inspecting my brain and putting it back together) or less invasive technology that allows for extremely accurate modeling of other agents - but it's not *obvious* to me that the following two things are both true:

1) doing this sort of thing very very accurately is physically possible at all

2) conditional on (1), doing this sort of thing very very accurately doesn't decrease the productive capacity of the robots/humans/companies whose preferences and supply and demand curves need elicitation.

Like maybe ASI can reconstruct my exact willingness to drive Uber and to write Substack posts and to edit podcasts and to dig trenches at various price points but only after killing me; not super helpful, then!

I think the one big way around these concerns about feasibility is in the case where you have a single coherent, unified, agentic system or perfectly aligned sets of agentic systems with no misaligned subparts (eg employees in a company) working together. In that case, sure, eliciting info is a straightforward communication and algorithmic task/problem.

But I don't think that Will and Tom necessarily expect this degree of unification and alignment (correct me if wrong of course)
