Discussion about this post

Nick Hounsome:

The main problem with Maxipok is that it assumes we know how to reduce existential risk. This is clearly false, as shown, for example, by the two alternative approaches to reducing AI risk:

1) Try to stop or slow AI development.

2) Press on with AI development in the hope that we solve alignment before the bad guys create a badly aligned AI.

If we don't know which of these to pursue, most other discussion becomes meaningless.

Suppose we focus on dealing with climate change: maybe the optimal way to do that is to press on with AI development. Who knows?
