AI researchers, 1988: Hey neural nets are dumb but they could maybe do a thing
Venture capitalists, 1988: haha lol nope you're stupid and drunk we're cutting you off
AI researchers, 2018: Hey neural nets are still dumb but they could maybe do an old thing again, only now our computers and databases are a million times bigger and faster
Venture capitalists, 2018: YOU ARE A GENERATION OF PURE NEVER BEFORE IMAGINED GENIUSES BIRTHED DIRECTLY FROM THE GODS TAKE ALL OUR MONEY AND MORE
AI researchers, 2019: So um turns out neural nets actually do have limitations, who could ever have imagined
Venture capitalists, 2019: SELL SELL SELL OMG SELL ALL THE TECH STOCKS BURN IT ALL DOWN WE'RE INTO KNITTING NOW SPREAD THE WORD THE NEW COOL THING IS KNITTING
@natecull looool
I'm a gofai fan.
@pnathan I feel like Good Old-Fashioned AI has two strong things in its favour:
1. It's transparent and can explain itself. Because it's rules created by humans, it's parseable *by* humans. Self-trained neural nets? lol nope. A black box that just does stuff.
2. It tends toward smallness, not bigness. Small databases search faster and can be customised to the user. But a big ol' neural net classifies things better the more users' data gets shoved into it.
Neither of these is value-neutral.
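(The "explains itself" point is easy to see in code. A minimal, hypothetical sketch of a GOFAI-style rule engine — rule names and labels are made up for illustration — where the explanation is literally just the human-written rule that fired:)

```python
# Toy GOFAI-style classifier: hand-written rules, checked in order.
# Because the rules are authored by humans, the "explanation" is
# simply a report of which rule matched — no black box involved.

RULES = [
    ("has_feathers", "bird"),
    ("has_fur", "mammal"),
    ("has_scales", "reptile"),
]

def classify(features):
    """Return (label, explanation); explanation names the rule that fired."""
    for condition, label in RULES:
        if condition in features:
            return label, f"matched rule: {condition} -> {label}"
    return "unknown", "no rule matched"

label, why = classify({"has_fur", "four_legs"})
print(label, "|", why)  # mammal | matched rule: has_fur -> mammal
```

(A neural net trained to do the same job could only offer you a pile of weights by way of explanation.)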
@natecull @pnathan Hmm. Any examples of GOFAI being used for stuff? It sounds interesting. And it'd be interesting to combine the two approaches - maybe a GOFAI core that uses neural net modules? :/
Though on the other hand, I'd be wary about doing anything too useful with A.I., as I don't think our civilization is very well prepared to handle it. :/