Discussion about this post

Joe
Aug 9 (edited)

If moral realism is true, should we still expect the blackmailing scenarios to be bad in expectation? If so, this seems to assume people will be badly calibrated about the confidence in their moral views; but wouldn't poor calibration fade as intelligence increases?

Also, at the end when you mention "expected value of 1%", what is this in reference to (i.e. what would be 100%)?

Houston Wood

I wonder about the assumption that our post-AGI Earth will host one unitary AGI. Why won’t there be multiple AGIs with different characteristics and goals? If there are, clashes over values will be much more complex than is considered here.
