From a discussion on machine learning:
"For any program, there are two possibilities. Either it works exactly as intended, of some bug crops up in the code. In the first case, we probably have nothing to fear. The latter case isn't terribly concerning, either."
WRONG! When it comes to something as powerful as AI, both of these scenarios should be deeply terrifying. The first takes all the usual problems of trust, accountability, and intention, and magnifies them a hundred-thousand-fold. The second introduces possibly apocalyptic, uncertain consequences. What will it do? What will happen? Fuck if we know. :/
Another AI Rant™
@Angle Self awareness is such a complicated subject and it's exactly where philosophy comes in handy.
It's hard to find a point of comparison, because unlike living organisms, an AI has an existence utterly alien to our own: it has no need to compete for food or look out for predators. It does not breed or age and is theoretically immortal.
So I don't have a good answer, only more questions. When does an AI stop being a tool and become an individual?