Charlie Stross gives a great talk about Slow AI in which he argues that you don’t actually need a computer to build a paperclip optimiser, and that money is already a great paperclip.
>Are we not madly harvesting the world’s resources in a monomaniacal attempt to optimize artificial intelligence?
No, we are not madly harvesting the world's resources in a monomaniacal attempt to optimize artificial intelligence. We are, however, harvesting the world's resources in an attempt to optimize artificial intelligence.
Money is the paperclip.
Of course.
You don't need any "weird" goal like paperclips. You just need the basic goals of survival and expansion that every species possesses (implicitly) to understand why a superintelligence is a danger.
Instrumental convergence is final convergence (without the final-goal divergence that Bostrom assumes follows from instrumental convergence).
Honestly, after thinking about it, I guess it will be