AI researchers, 1988: Hey neural nets are dumb but they could maybe do a thing
Venture capitalists, 1988: haha lol nope you're stupid and drunk, we're cutting you off
AI researchers, 2018: Hey neural nets are still dumb but they could maybe do an old thing again, only now our computers and databases are a million times bigger and faster
Venture capitalists, 2018: YOU ARE A GENERATION OF PURE NEVER BEFORE IMAGINED GENIUSES BIRTHED DIRECTLY FROM THE GODS TAKE ALL OUR MONEY AND MORE
AI researchers, 2019: So um turns out neural nets actually do have limitations, who could ever have imagined
Venture capitalists, 2019: SELL SELL SELL OMG SELL ALL THE TECH STOCKS BURN IT ALL DOWN WE'RE INTO KNITTING NOW SPREAD THE WORD THE NEW COOL THING IS KNITTING
@natecull looool
I'm a gofai fan.
@natecull @pnathan Hmm. Any examples of GOFAI being used for stuff? It sounds interesting. And it'd be interesting to combine the two approaches - maybe a GOFAI core that uses neural net modules? :/
Though on the other hand, I'd be wary about doing anything too useful with A.I., as I don't think our civilization is very well prepared to handle it. :/
I think the Semantic Web people are the only ones really doing what Symbolic AI was back in the 1980s
But also if you're doing Prolog or Kanren (e.g. core.logic, or any of its million variants) you probably are doing 'AI'
also if you're running any kind of Lisp or Scheme, or even just have a data structure that uses lists (including JSON)
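(For the curious: the Prolog/Kanren family mostly boils down to unification plus search. Here's a toy unifier sketch — Python standing in for Lisp, with variables spelled "?x"; names and conventions are invented for illustration, this is nobody's actual library.)

```python
def walk(term, subst):
    """Chase a variable through the substitution until it's ground or unbound."""
    while isinstance(term, str) and term.startswith("?") and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst):
    """Return an extended substitution if a and b unify, else None."""
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith("?"):   # a is an unbound variable
        return {**subst, a: b}
    if isinstance(b, str) and b.startswith("?"):   # b is an unbound variable
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):                     # unify element-wise
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None                                    # clash: no unifier

# Unifying (likes ?x lisp) with (likes alice ?y) binds both variables.
s = unify(("likes", "?x", "lisp"), ("likes", "alice", "?y"), {})
```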
It was never a particularly well defined term, is maybe part of the problem.
But Peter Norvig's book is now free!
@natecull @Angle so "doing AI" is definitely more than lists and lisp.
Lisp in particular was motivated by symbolic reasoning and then self-referential reasoning - changing the terms on the fly.
Broadly a lot of gofai reduces down to a search problem, which then turns it into a "how to define states and moves" problem.
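(That "states and moves" reduction can be sketched concretely — a toy example, not anything from production: breadth-first search over the classic two-jug puzzle, where the domain is entirely captured by a start state, a goal test, and a move generator.)

```python
from collections import deque

def solve(start, is_goal, moves):
    """Generic GOFAI-style search: define states and moves, then search."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path                      # shortest move sequence (BFS)
        for name, nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

# Toy domain: a 4L jug and a 3L jug; reach exactly 2L in the 4L jug.
def jug_moves(state):
    a, b = state                             # litres in the 4L and 3L jugs
    yield "fill A", (4, b)
    yield "fill B", (a, 3)
    yield "empty A", (0, b)
    yield "empty B", (a, 0)
    pour = min(a, 3 - b)                     # pour A into B
    yield "A->B", (a - pour, b + pour)
    pour = min(b, 4 - a)                     # pour B into A
    yield "B->A", (a + pour, b - pour)

plan = solve((0, 0), lambda s: s[0] == 2, jug_moves)
```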
@pnathan @Angle I think that self-referentiality part is what I see as missing from the modern C-and-Unix-derived desktop model.
You have to severely 'switch modes' to modify code, e.g. boot up a whole IDE/compiler toolchain, whereas a modern Lisp-inspired OS would let you make the UI equivalent of cheap anonymous closures.
@natecull @Angle welllllllllllllllllllllllll.
That's largely Microsoft's fault. In a Deeply Unix world, your CLI and your editor are your IDE. Things start breaking down with multiple languages and .so versions etc.
Now it's true that a Lispy world is amazing. I would advise doing a year of Common Lisp work in a fully powered up Emacs install to start grasping it.
@pnathan @Angle it's not just Microsoft though. The Linux and Android ecosystems - at least the GNOME/KDE desktop app side - do not really lend themselves to the user getting into the code.
Like I've got the code to Firefox and Evolution and LibreOffice... but modifying stuff at the object level is still not really feasible.
@natecull @Angle Mmm.... ehhh.... large codebases require becoming an expert. That's the bar you have to jump over. Miserable for big codebases. I remember looking at Dreamwidth's 200,000 lines of Perl once and nearly falling over screaming.
It's not hard to knock together a basic KDE app if you have the source code.
the other wrinkle is windowing systems fundamentally fight the CLI Unix approach, which works pretty well. I'm told Symbolics was great. Never VM'd it tho'.
Do not even try Chromium then!
That's exactly what I mean when I say that #FreeSoftware and #OpenSource are completely different things: https://medium.com/@giacomo_59737/what-i-wish-i-knew-before-contributing-to-open-source-dd63acd20696
But, actually I would not put #Firefox and #Chromium in the same set.
Part of the problem is #architectural: the platforms we are using today are a broken stack of patches over patches, each fixing the problems created by the previous ones.
And at the very bottom, #hardware issues that no longer exist.
@natecull @Angle n.b. lisp systems, iiuc, had issues with multiple processes and no security at the kernel level. lisp has no idea of shared libraries.
that said.
the fading of lisp and prolog from the computer world is one of the great errors of the computer world and a genuine tragedy.
richard gabriel has a lot of (bitter) things to say here.
@sydneyfalk @natecull @Angle Haven't followed Perl 6 in ages. Its development process seemed a bit haywire. I suspect Wall is turds as a project manager.
TBH I think any dynamic language should be asking itself *very hard* why it isn't built on Common Lisp as a substrate, at least. I mean, binary compilation out of the gate... :-S
@sydneyfalk @natecull @Angle cool! I'll eyeball it. Last I looked it was pre-release!
What about games?
When I was young, "the quality of the AI" was one of those things that was used to differentiate (and market) video games.
Actually, I've just realized that I know nothing about the #AI technologies used back then.
Do you know what AI techniques were cutting edge in the #game industry 10/15 years ago? And what about now?
I wonder if the most hyped #ML and AI techniques have a role in the game industry today...
@Shamar @Angle @natecull iirc, it was A* search and neural networks.
your big-hype techniques today don't train very fast. If you can deploy a pre-trained model that's relevant to a game, it's probably most useful in highly complex games for making decisions. Not sure what the payoff would be, but, meh.
I imagine your AAA RTS AIs these days are pretty substantial tho'.
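(For reference, the A* mentioned above in minimal grid form — an illustrative sketch with a Manhattan-distance heuristic, not lifted from any engine; the grid and coordinates are invented for the example.)

```python
import heapq

def astar(grid, start, goal):
    """A* over a grid: 0 = walkable, 1 = wall. Returns the cell path."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start, [start])]              # (f, g, pos, path)
    best = {start: 0}                                        # cheapest g seen per cell
    while open_heap:
        f, g, pos, path = heapq.heappop(open_heap)
        if pos == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = pos[0] + dr, pos[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                if g + 1 < best.get((r, c), float("inf")):
                    best[(r, c)] = g + 1
                    heapq.heappush(
                        open_heap,
                        (g + 1 + h((r, c)), g + 1, (r, c), path + [(r, c)]))
    return None                                              # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],      # wall across most of the middle row
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```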
@Angle @natecull iiuc, the best in class systems are neural networks for edge recognition and gofai style work for the higher level decisions.
AI tends to be very weak as a standalone system, but powerful as an auxiliary. E.g., Google search. Great tool. Bad overlord. Or image recognition.
In practice, the symptom-recognition systems that the local docs use as a diagnostic aid are gofai-based.
@Angle @pnathan
well, it's basically just 'make a database and search it', so....
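(A toy sketch of that "make a database and search it" shape — all rules and symptom names here are invented for illustration, obviously not medical advice.)

```python
# Toy rule base: each rule maps a required symptom set to a candidate condition.
RULES = [
    ({"fever", "cough", "fatigue"}, "flu-like illness"),
    ({"sneezing", "runny nose"}, "common cold"),
    ({"headache", "light sensitivity"}, "migraine"),
]

def candidates(symptoms):
    """Rank conditions by how much of each rule's symptom set is present."""
    symptoms = set(symptoms)
    hits = [(len(req & symptoms) / len(req), cond)
            for req, cond in RULES
            if req & symptoms]               # keep rules with any overlap
    return [cond for score, cond in sorted(hits, reverse=True)]

result = candidates(["fever", "cough", "fatigue", "sneezing"])
```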