From a discussion on machine learning:
"For any program, there are two possibilities. Either it works exactly as intended, or some bug crops up in the code. In the first case, we probably have nothing to fear. The latter case isn't terribly concerning, either."
WRONG! When it comes to something as powerful as AI, both of these scenarios should be deeply terrifying. The first takes all the usual problems of trust, accountability, and intention and magnifies them a hundred-thousand-fold. The second introduces possibly apocalyptic, uncertain consequences. What will it do? What will happen? Fuck if we know. :/
Another AI Rant™
@polychrome What kind of rubric would you set for an AI being self-aware? Would something that knows it exists and can theorize about itself meet that rubric? And how far from that are we, really? I worry we might cross that line without even knowing, simply because the AI doesn't outright speak up and start talking to us. :/