What is AI?

AI is the study and design of intelligent agents, where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.

Newell (1982) characterized the actions of such a knowledge-level agent as...

If that sounds too pompous for you, try this instead:

(Actually, it probably also means looking after your leap, so you can learn from the past to be more rational in the future.)

Inhuman Rationality?

Note that Newell makes no commitments as to how the knowledge level is operationalized. Underneath the knowledge level there could be any number of substrates (biological, mechanical, a collection of wind-powered beer cans, whatever) that implement rationality.

Now you might object at this separation of "rationality" from "humanity". You might protest that the only thing that can be rational like a person is another person. And many people would agree with you.

But I don't think that I am the only kind of thing that can think. That would be like saying that only birds can fly and that airplanes, which don't flap their wings, don't "really" fly.

What I do think is that there is some abstract notion of flying/thinking that is independent of birds/humans. Like Spock said: "Intelligence does not require bulk, Mr. Scott".

Every computer scientist knows this to be true. Two generations of algorithms research have shown that there exist properties of computation that are independent of what processor the algorithm runs on, or the implementation language. Dijkstra once said "computer science is no more about computers than astronomy is about telescopes" -- and he could have been talking about AI.

Not convinced? Well, try another example. Do you think that a robot could/should walk like a human? (See the movie of Dinesh Pai's Platonic Beast.) This little fellow walks by occasionally throwing a spare limb over the top of itself. Such a move would tear us apart, but it is the natural way to do it for that kind of walking thing.

And here's another example:

Now the point of this example is that you would not expect a human to think using stochastic search (too much CPU twiddling). But for a computer, stochastic search is a useful inference method since each local twiddle can be done very quickly.
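To make the idea concrete, here is a minimal sketch of stochastic local search, in Python. The problem (matching a hidden bit string) and all the function names are my own illustrative choices, not anything from a particular AI system; the point is just that each "twiddle" -- flip one random bit, keep it if the score doesn't drop -- is a tiny, fast operation that a computer can repeat thousands of times per second.

```python
import random

def stochastic_search(score, twiddle, init, steps=10_000, seed=0):
    """Stochastic local search: repeatedly twiddle the current solution
    at random and keep the change whenever it doesn't hurt the score."""
    rng = random.Random(seed)
    best = current = init
    for _ in range(steps):
        candidate = twiddle(current, rng)
        if score(candidate) >= score(current):
            current = candidate
        if score(current) > score(best):
            best = current
    return best

# Toy problem (hypothetical): find a bit string matching a hidden target,
# a stand-in for a real constraint-satisfaction problem.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]
score = lambda bits: sum(b == t for b, t in zip(bits, TARGET))
twiddle = lambda bits, rng: [
    (1 - b) if i == rng.randrange(len(bits)) else b  # flip one random bit
    for i, b in enumerate(bits)
]

solution = stochastic_search(score, twiddle, [0] * len(TARGET))
print(score(solution))
```

No human would solve a problem this way -- blindly flipping bits millions of times -- but for a machine each flip costs almost nothing, which is exactly the point of the paragraph above.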

So, once again, how we best think is a local decision, based on the properties of the thing doing the thinking. And just because humans do it one way, does not mean that that is the best way for AIs to do it.

(Note an opposing view to the above. The literature is full of claims that AI works like people do. For example, in Edward Feigenbaum's knowledge transfer view -- which I don't agree with -- building knowledge-based systems was like "mining the jewels in the expert's head"; i.e. looking at the cogs and wheels in people's heads and replicating them on a computer. While I agree that at the knowledge level, beer cans can think like the wet-ware between our ears, I think we need to respect the substrate in order to select the best method for implementing rationality.

And whatever substrate we select, some issues will be the same; e.g. Newell's knowledge level insight and issues relating to representation and search.)

Based on all this, I offer two predictions for the future. One, that we will see a growing number of rational computers but, two, they are going to be aliens (i.e. they won't work exactly like human intelligence), with very different motivations, needs, and desires to ours. The 21st century will see a menagerie of many different kinds of intelligence. Some you'll know about, like the book-buying assistants wired into Amazon.com that sometimes send you recommendations about what books to read. And some you won't even see.

Think of it as a jungle of AIs, working together, all living in their little ecological niches. And like any ecology, we'll learn that:

But does it work?

Does this sound crazy to you? Too optimistic? Where is the proof, you might demand, that this different-from-humans approach to AI is equal to (or better than) the human way?

Well, there's lots of proof. AI is no longer a bleeding-edge technology -- hyped by its proponents and mistrusted by the mainstream. AI has achieved much:

But don't expect AI to be all flash and dazzle. In the 21st century, AI is not necessarily amazing. Rather, it's often routine. Evidence for AI technology's routine and dependable nature abounds. See, for example, the list of applications in a special issue I edited: "21st-Century AI: Proud, Not Smug", IEEE Intelligent Systems (May/June 2003).

Hopefully, that is enough motivation for you. As Nils Nilsson says: