On Intelligence

As part of my thesis reading – but also because I don’t know anything about the subject – I’m working my way through a book on artificial intelligence by Blay Whitby. It’s a teeny weeny affair, just an overview of the subject for those interested, and apparently this makes up half the Jurong East Library’s collection on AI.

I don’t think this is the best book ever, because I feel like it could do with a few more theoretical definitions (I’ve got another book, though, and that might help), but there is one part here that I don’t agree with. Maybe I don’t understand it, but it’s an interesting question all the same. It’s to do with Searle’s Chinese Room thought experiment.

Basically, the idea is that you have a huge room, and it’s filled with instructions. There is a man – Searle, maybe – sitting in the middle of the room, receiving pieces of paper with squiggly lines on them. He consults the instructions, matches up the squiggly lines exactly, and writes down on a piece of paper the answer he’s supposed to give back. Then he passes that out through the other end. It turns out that Searle is actually receiving messages in, say, Malayalam. If you were to consider the system as a whole, it would be as though Searle actually understood Malayalam – because he’s processing input in that language and giving you appropriate output in it too.

In essence, Searle’s argument is that you cannot have an intelligent machine without understanding. The machine processes input and output according to a very complicated set of instructions, but it doesn’t understand what’s going on behind those instructions. A weather prediction program, he says, even though it has elements of AI within it (and most of them do), isn’t completely intelligent because you can’t have a conversation with it about the weather. (Although it’s anyone’s guess why you’d really want to.)
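
To make the setup concrete, here’s a minimal sketch (in Python, with made-up “squiggles” and rules of my own invention, not anything from the book) of the room as nothing but a lookup table:

```python
# A toy version of the room: a rule book that pairs incoming squiggles
# with outgoing squiggles. Every string here is a made-up placeholder;
# the point is that nothing in this program represents what any symbol means.

RULE_BOOK = {
    "squiggle-squoggle?": "squoggle-squiggle.",
    "wiggly-mark wiggly-mark?": "curly-mark.",
}

def room(slip_of_paper: str) -> str:
    """Match the incoming squiggles exactly and copy out the prescribed reply."""
    return RULE_BOOK[slip_of_paper]

# From the outside, the answers look fluent -- inside, it's just string matching.
print(room("squiggle-squoggle?"))
```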

Whitby thinks that Searle is wrong because it’s the entire system that is under consideration, and you can’t break the system down to say which part of it demonstrates understanding. His idea is that if you had a Malayalam speaker inside the room, you couldn’t break her down either to show where the understanding sits in her brain.

I think this is a pretty silly argument, because it’s missing the point. Searle probably needs to be interpreted another way – what would the system do if you sent in an input that a) didn’t match the instructions you’d given the system, or b) didn’t make sense?

My brother suggested that a Malayalam speaker would do exactly the same thing as the computer when given inappropriate input: not know what to do with it and “crash” (by the way, failing gracefully, rather than completely crashing, is another proposed requirement for artificial intelligence). But I don’t think the speaker would really crash – she would at least say “I don’t know”, or “I don’t understand what [the word] means”, or “explain that, please”, or any number of natural responses.

Yes, it’s trivial to program the computer to output exactly those same responses. But you couldn’t get a computer to guess contextually. You’d need the program to understand the meanings behind each word, and which orderings of words make up which kinds of sentences… and then you might be able to guess at a meaning. And that, I think, is part of real understanding.
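
Here’s a quick sketch of that point, again with my own invented rules: the canned fallback really is a one-liner, and it conspicuously does no guessing at meaning at all.

```python
# Extending the toy room: the graceful "I don't understand" fallback is
# trivial to add, which is why it doesn't count for much. Guessing a meaning
# from context is the part this table simply has no machinery for.

RULE_BOOK = {
    "squiggle-squoggle?": "squoggle-squiggle.",
}

def room_with_fallback(slip_of_paper: str) -> str:
    # dict.get() with a default is all it takes to "fail gracefully"
    return RULE_BOOK.get(slip_of_paper, "I don't understand. Explain that, please.")

print(room_with_fallback("squiggle-squoggle?"))       # matched rule
print(room_with_fallback("never-seen-before-marks"))  # canned fallback, no guessing
```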

You could have a given set of inputs and outputs, and instructions for mapping between the two, but you couldn’t have a conversation, and you couldn’t ask for advice. And that’s where I think Whitby has the wrong response to Searle – the real question is what to do with an input that’s totally beyond the capabilities of the instructions.

AI is completely fascinating to me, not least because it asks you to make some guesses about how humans in general learn and respond and exhibit intelligence. To me, intelligence also consists of “consciousness” and “understanding”, perhaps even self-awareness. Dolphins have been shown to be particularly smart because they exhibit something called meta-cognition, which is the process of thinking about your own thinking. If you’re a dumb little fruit fly and you’re given a choice of flying into one of two holes, where one is the “right” one and the other is the “wrong” one (maybe out of a maze or something), you’d just choose one and go. Dolphins, on the other hand, exhibit hesitation – a space of time where they’re not sure if they know what they’re doing. It would be really interesting to see a machine capable of doing something like that.
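
As a very rough illustration (entirely my own toy example, with invented scores and threshold), one way a program could “hesitate” is to opt out when its two options are too close to call:

```python
# A crude stand-in for the dolphin experiment: compare scores for the two
# holes and, if they are too close to call, opt out instead of guessing.

def choose_hole(score_left: float, score_right: float, margin: float = 0.2) -> str:
    """Pick a hole only when one option clearly beats the other."""
    if abs(score_left - score_right) < margin:
        return "not sure -- opt out"   # the 'hesitation' response
    return "left" if score_left > score_right else "right"

print(choose_hole(0.9, 0.1))   # confident choice: "left"
print(choose_hole(0.55, 0.5))  # too close to call: opts out
```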

And what about the question of emotion?
