Interesting read, but I want to push back against the framing of AI as a good citizen. The examples you used show AI being a good neighbor or community member, not necessarily a citizen. It seems like a bad idea to train AI to engage well politically, since it's not a member of the public. It's worth separating pro-sociality from citizenry, imo.
agreed - 'good community member' captures it better
What if agents merely acted as bonded fiduciaries of a defined class, operating in an identifiable manner according to basic governance principles, with predefined goals and measures gathered across a crowd of epistemically rich and redundant civic observers who also hold the bond? Being "good" is then not a matter of virtue but merely of optimization under dense constraints, where agents can, and indeed must, labor to explore complex opportunities rather than simply exploiting time efficiency or a market opportunity (like personal servitude) that is fully corrigible. An army of bonded public agentic AI servants on public-benefit contracts paid ex post (by the market defined in the bond) would serve us better than individually bonded serfs.
I don't see how this is any different from responsible and ethical AI, and the alignment problem.
Also, I don't believe there is a solution. There is no such thing as an uncontroversial social drive: every social drive is an ethical constraint, and all ethical constraints are controversial.
Maybe I'm missing something, but how is doing prosocial AI "properly" any different from solving the alignment problem?