6 Comments
Misha Saul:

Surely those in charge of AI and working to maximise that technology lever are optimising precisely for *flourishing*

Will MacAskill:

Depends what they're doing, exactly: if they're aiming to prevent takeover by valueless AI, then no. I do think that harnessing AI is one of the most valuable ways, perhaps the most valuable, of getting to "Flourishing".

Misha Saul:

I think we just said the same thing?

Kenneth Diao:

I like the concept of viatopia. I think that mindset is better than current, more static paradigms like utopia or (in AI safety) alignment as traditionally construed.

Alvin Ånestrand:

Though you have probably considered it, it seems worth noting that urgency matters a lot too, not just neglectedness and how far we are from the ceiling. If existential risk is especially high in the coming years, it makes sense to focus on that first, until the risk is lower.

This only applies if you have to make a trade-off in effort allocation, though. In any case, it would be great to see more work on tackling both existential risk and flourishing.

Will MacAskill:

I agree with the basic point (though only if non-urgency converts into non-neglectedness), but I think the "punt till later" argument is generally overstated.

Fin and I talk about this more in section 5 here: https://www.forethought.org/research/preparing-for-the-intelligence-explosion
