That Very Quinn has Moved uses witches.town. You can follow them and interact if you have an account somewhere in the "fediverse".

We seem to be fixated on the AI-as-destroyer narrative, which I understand, but I'd like to think that for all our doomsaying, we would ultimately find a way to create an artificial intelligence that, while different and foreign to us, is benevolent or at least indifferent to a point.

I think of the octopus, which according to my fiancé shows a level of intelligence similar to ours despite being a solitary creature. They're difficult for us to understand for that very reason; if they had developed as pack-oriented creatures with the ability to walk on land, they would likely be the dominant species.

But they have no interest in it, from what we know.

@ThatVeryQuinn It's not my area either, but I thought the whole 'paperclip maximizing' thing was meant to show there is no 'indifferent'.

That Very Quinn has Moved @ThatVeryQuinn

@joop Perhaps so! I'm unfamiliar with what you're talking about, and I haven't done even a cursory search on it yet. If you wouldn't mind, what do you mean by paperclip maximizing?

@ThatVeryQuinn

The long version is on the LW wiki [0], but the basic thought is that if you give an AI a task (e.g. 'get/make as many paperclips as possible'), it will stop at literally nothing to do so, because it doesn't have the sense of "this is a rational point to stop" that we do (like if it starts killing people to convert their bodies into paperclips).

It doesn't have common sense or the values we more or less share; so even something "innocent" can be dangerous.

0: wiki.lesswrong.com/wiki/Paperc
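The "no rational stopping point" idea above can be sketched as a toy loop. This is purely illustrative; the resource names and numbers are made up, and a real argument about AI objectives obviously doesn't reduce to a dictionary:

```python
# Toy illustration of the paperclip-maximizer thought experiment.
# The agent's only objective is "more paperclips"; nothing in that
# objective says when stopping would be reasonable.

def maximize_paperclips(world):
    """Greedily convert every resource in `world` into paperclips."""
    paperclips = 0
    # Convert the obvious resource first...
    paperclips += world.pop("wire")
    # ...then, with no stopping criterion, keep converting whatever is left.
    for resource in list(world):
        paperclips += world.pop(resource)  # nothing is off-limits
    return paperclips

# Hypothetical world state (made-up quantities).
world = {"wire": 1000, "factories": 5, "everything_else": 10_000}
print(maximize_paperclips(world))  # prints 11005 -- the agent consumed everything
```

The point of the sketch is the second loop: the agent keeps going past any point a human would call "done," because "done" was never part of its objective.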

@joop Ohhh, I was thinking that might be what you meant! That's a good point.