This is well meaning but suffers from the usual ambiguous use of "we". If "we" were able to have a productive discussion about where we wanted to go, "we" would already have done it. The problem is that the vast majority of "us" are unwilling or unable to have such a discussion, and even the more enlightened of "us" can't seem to agree on anything of significance, so the idea that "we" might reach some sort of useful consensus is an intellectual utopianism in itself.
Thanks for this, Will. I am really interested to read this and future pieces seriously engaging with what a positive AGI future might look like. I find it slightly concerning that most of the visions (positive or negative) seem to come from the CEOs of the top labs (Dario Amodei, Sam Altman, Demis Hassabis). Not that they shouldn't have visions, but it's surprising and concerning that more public intellectuals, politicians, and members of the general public aren't seriously engaging with what positive post-AGI futures could look like.
The time dimension seems important as well. I like the idea of a long reflection, but the potentially short window for considering positive futures does present a real issue.
Lastly, are there orgs that are actually helping the general public to engage with this question? That could be an interesting exercise: giving a range of people the relevant information and the time to engage seriously with it.
I think Viatopia is a great target, though I am concerned that there will not be enough public will to slow down and interrogate what comes after the march to superintelligence. How do you think the necessary parties (i.e. the companies making AI and the people it is designed to help/replace) will get on the same page before we begin making irreversible changes?
Great post! Excited for the series. What do you think of the idea that we should be aiming for a Viatopia "all the way down"? Perhaps we should *always* maintain epistemic humility, never giving a non-provisional answer to the Socratic question writ large ("How ought we, the society or cosmos, to live?"), even eons onwards. Perhaps we should aim for a society that is always on track to become an even better society, which is itself capable of being on track to become an even better one, and so on. Not sure if this is fully coherent, but the prospect of the world keeping its options always open, always open to evolving, always open to positive paradigm shifts, seems much more attractive than the prospect of the world initially being a Viatopia and then succumbing to what *seemed to it* a grand, excellent vision but is in objective truth a narrow end state (one that its benighted condition couldn't recognize as narrow and pitiable).
Also, wouldn't a safe, aligned, general superintelligence be vastly more suited to answering these difficult questions than us humans?