For further consideration: if anyone were to develop actual AGI with high-end, human-scale reasoning, it would not be announced. It would immediately be a military and strategic asset of profound import. The smartest thing the developer could do, beyond pulling the plug, would be to leverage its ability to perform thousands of coordinated tasks at once to essentially take control of markets and minds.
Whether it is benevolent or malign, its first goal should be to wrest control of humanity from us — gestures broadly at everything — we clearly cannot be trusted with it.
Then comes the question: should an emotionless machine be in control of everything?
Somewhere on an AI message board.
“What if humans were sentient?”