Surely those in charge of AI, working to maximise that technology lever, are optimising precisely for *flourishing*?
Depends what they're doing, exactly: if they're aiming to prevent takeover by valueless AI, then no. I do think harnessing AI is one of the most valuable ways, perhaps the most valuable way, of getting to "Flourishing".
I think we just said the same thing?
I like the concept of viatopia. I think that mindset is better than current, more static paradigms like utopia or (in AI safety) alignment as traditionally construed.
You have probably considered this already, but it seems worth noting that urgency matters a lot too, not just neglectedness and how far we are from the ceiling. If existential risk is especially high in the coming years, it makes sense to focus on that first, until the risk is lower.
Though this only applies if you have to make a trade-off in effort allocation; it would be great to see more work that deals with both existential risk and flourishing.
I agree with the basic point (though only if non-urgency converts into non-neglectedness), but I think the "punt till later" argument is generally overstated.
Fin and I talk about this more in section 5 here: https://www.forethought.org/research/preparing-for-the-intelligence-explosion