Surely those in charge of AI, working to maximise that technological lever, are optimising precisely for *flourishing*?
It depends what they're doing, exactly - if they're aiming to prevent takeover by valueless AI, then no. I do think harnessing AI is one of the most valuable ways, if not the most valuable way, of getting to "Flourishing".
I think we just said the same thing?
"The tractability of better futures work is much less clear; if the argument falls down, it falls down here. But I think we should at least try to find out how tractable the best interventions in this area are. A decade ago, work on AI safety and biorisk mitigation looked incredibly intractable. But concerted effort *made* the areas tractable."
Has it really? Do you really think we are functionally 'safer' right now? Or that somewhere in the undefined decades ahead there is an inflection point that the work has pointed at?
I commend looking at "flourishing" as something other than surviving, but most of this seems continually averse, maybe even allergic, to looking at what is happening right now in the world. I could agree that more people are "talking about things in the space of wondering how to make futures better", but have the last five years really brought things to a more tenable reality?
"So, I don’t talk about some obvious reasons for wanting to prevent near-term catastrophes - like, not wanting yourself and all your loved ones to die. But I’m not saying that those aren’t important moral reasons."
I get it, I just find it to be... skipping to the Star Trek future because it's kind of easier to work with and less messy. Or maybe even somewhat misleading, in implying that there aren't actual catastrophes happening right now.
I think so
I like the concept of viatopia. I think that mindset is better than current, more static paradigms like utopia or (in AI safety) alignment as traditionally construed.
Though you have probably considered it, it seems worth noting that urgency matters a lot too, not just neglectedness and how far we are from the ceiling. If existential risk is especially high in the coming years, it makes sense to focus on that first, until the risk is lower.
Though this only applies if you have to make a trade-off in effort allocation; it would be great to see more work that deals with both existential risk and flourishing.
I agree with the basic point (though only if non-urgency converts into non-neglectedness), but I think the "punt till later" argument is generally overstated.
Fin and I talk about this more in section 5 here: https://www.forethought.org/research/preparing-for-the-intelligence-explosion