From a discussion on machine learning:
"For any program, there are two possibilities. Either it works exactly as intended, or some bug crops up in the code. In the first case, we probably have nothing to fear. The latter case isn't terribly concerning, either."
WRONG! When it comes to something as powerful as AI, both of these scenarios should be deeply terrifying. The first takes all the usual problems of trust, accountability, and intention, and magnifies them a hundred thousand fold. The second introduces possibly apocalyptic, uncertain consequences. What will it do? What will happen? Fuck if we know. :/
Another AI Rant™
@polychrome Maybe we should use machine learning for our debugging tools. What could go wrong with that? :V