From a discussion on machine learning:
"For any program, there are two possibilities. Either it works exactly as intended, or some bug crops up in the code. In the first case, we probably have nothing to fear. The latter case isn't terribly concerning, either."
WRONG! When it comes to something as powerful as AI, both of these scenarios should be deeply terrifying. The first takes all the usual problems of trust, accountability, and intention, and magnifies them a hundred thousand fold. The second introduces possibly apocalyptic, uncertain consequences. What will it do? What will happen? Fuck if we know. :/
Another AI Rant™
@Angle i don't even agree with the premise as it relates to ML. we can't say with any certainty that the program does exactly as expected, because not even the people who "program" these systems know what's going on inside or why it made the decision it did.
the entire statement is absurd, including the conclusion.