From a discussion on machine learning:
"For any program, there are two possibilities. Either it works exactly as intended, of some bug crops up in the code. In the first case, we probably have nothing to fear. The latter case isn't terribly concerning, either."
WRONG! When it comes to something as powerful as AI, both of these scenarios should be deeply terrifying. The first takes all the usual problems of trust, accountability, and intention, and magnifies them a hundred-thousand-fold. The second introduces possibly apocalyptic, uncertain consequences. What will it do? What will happen? Fuck if we know. :/
Another AI Rant™
@Angle You kid, but this is exactly what we're already working on in some parts of the field!
While I am pretty sure we won't create a self-aware AI anytime soon, this is still going to raise some very serious questions in the near future.
Feels like we're too busy chasing profits to notice what's taking shape beneath our hands.