This subject is about inhuman AI: all the tricks that computers can use to be smart, whether or not humans use them too.

Just to give the humans a little more equality in this subject, today we're going to talk about humans and AI. The field of cognitive science is devoted to discovering more about human intelligence using insights from a range of other areas: neuro-physiology, linguistics, philosophy, and mathematics.

Brief notes on each of these follow.

Neuro-physiology

Human brain cells are very different to computer chips. In your brain there is no central processor and no separate memory store: just a vast network of nerve cells (neurons) wired to each other at synapses.

A nerve cell can have up to 1,000 dendritic branches, making connections with tens of thousands of other cells. Each of the 10^11 (one hundred billion) neurons has, on average, 7,000 connections to other neurons.

It has been estimated that the brain of a three-year-old child has about 10^16 synapses (10 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10^15 to 5 x 10^15 synapses (1 to 5 quadrillion).
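
To get a feel for these numbers, here is a quick back-of-the-envelope check (a rough sketch only; the figures above are themselves estimates): multiplying the neuron count by the average connections per neuron lands near the adult synapse estimates.

    # Rough arithmetic using the estimates quoted above (all approximate).
    neurons = 10 ** 11          # roughly one hundred billion neurons
    avg_connections = 7_000    # average connections per neuron

    synapses = neurons * avg_connections
    # ~7e14: just under the lower adult estimate of 10^15 synapses.
    print(f"~{synapses:.1e} synapses")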

Just to say the obvious- that's a BIG network.

Neuro-physiology is a very active field. The latest generation of MRI scanners allows detailed, real-time monitoring of human brain activity while subjects are performing cognitive tasks.

This field shows great promise but, as yet, researchers are still working on locomotion, pain perception, and vision; they have yet to rise to the level of model-based reasoning.

The field of neural networks originally began as an experiment in exploiting massive repetition of a single simple structure, running in parallel, to achieve cognition. As the field evolved, it turned more into curve fitting over non-linear functions (and the tools used to achieve that fit have become less and less likely to have a biological correlate).
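
To make the "curve fitting" point concrete, here is a minimal sketch (in Python rather than LISP, and nothing in it is biological): a single hidden layer of tanh units, fitted by gradient descent to a non-linear target curve.

    # A minimal sketch: one hidden layer of tanh units, fitted by gradient
    # descent to a non-linear target curve. Requires numpy.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-3, 3, 200).reshape(-1, 1)
    y = np.sin(x)                                 # the non-linear curve to fit

    W1 = rng.normal(0, 0.5, (1, 20)); b1 = np.zeros(20)   # hidden layer
    W2 = rng.normal(0, 0.5, (20, 1)); b2 = np.zeros(1)    # linear output

    lr = 0.05
    for step in range(5000):
        h = np.tanh(x @ W1 + b1)                  # forward pass
        pred = h @ W2 + b2
        err = pred - y
        # backward pass: hand-coded gradients of mean squared error
        dW2 = h.T @ err / len(x);  db2 = err.mean(0)
        dh  = err @ W2.T * (1 - h ** 2)
        dW1 = x.T @ dh / len(x);   db1 = dh.mean(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    print("final mean squared error:", float((err ** 2).mean()))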

For another example of AI research, initially inspired by a biological metaphor, see genetic algorithms.
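
For the flavour of that metaphor, here is a minimal genetic algorithm sketch on a made-up toy problem (evolve a bit string of all ones); the population size, mutation rate, and fitness function are arbitrary choices for illustration.

    # A minimal genetic algorithm on a toy problem: evolve a bit string of all
    # ones ("one-max"). Truncation selection, one-point crossover, bit-flip mutation.
    import random

    GENES, POP, GENERATIONS = 40, 60, 200

    def fitness(bits):
        return sum(bits)                         # toy fitness: count the ones

    pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
    for gen in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == GENES:             # perfect individual found
            break
        parents = pop[:POP // 2]                 # keep the fitter half
        children = []
        while len(children) < POP - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, GENES)     # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(GENES):               # small chance of mutation per bit
                if random.random() < 0.01:
                    child[i] = 1 - child[i]
            children.append(child)
        pop = parents + children

    best = max(pop, key=fitness)
    print("best fitness after", gen + 1, "generations:", fitness(best), "/", GENES)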

Linguistics

Noam Chomsky is one of the towering figures of the 20th century. He's a linguist and a political commentator. Every few years he disappears, then re-emerges with a new book that redefines everything.

A lot of computer science parsing theory comes from Chomsky's theory of language grammars.
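
As a small illustration of that connection, here is a toy context-free grammar for arithmetic expressions and a recursive-descent recogniser for it; the grammar and the helper names are invented for this sketch, not taken from any particular parsing tool.

    # A toy context-free grammar and a recursive-descent recogniser for it.
    import re

    def tokenize(text):
        return re.findall(r"\d+|[()+\-*/]", text)

    def parse(tokens):
        pos = 0

        def peek():
            return tokens[pos] if pos < len(tokens) else None

        def eat(expected):
            nonlocal pos
            assert peek() == expected, f"expected {expected!r}, got {peek()!r}"
            pos += 1

        def expr():                      # expr -> term (('+' | '-') term)*
            term()
            while peek() in ("+", "-"):
                eat(peek())
                term()

        def term():                      # term -> factor (('*' | '/') factor)*
            factor()
            while peek() in ("*", "/"):
                eat(peek())
                factor()

        def factor():                    # factor -> NUMBER | '(' expr ')'
            nonlocal pos
            if peek() == "(":
                eat("(")
                expr()
                eat(")")
            else:
                assert peek() is not None and peek().isdigit(), f"unexpected {peek()!r}"
                pos += 1

        expr()
        assert pos == len(tokens), "trailing tokens"
        return True

    print(parse(tokenize("(1 + 2) * 3")))   # True: the string is in the language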

In AI circles, Chomsky is most famous for his argument that we don't learn language. Rather, we are born with a universal grammar and, as a child grows up, all they are doing is filling in some empty slots with the particulars of the local dialect.

This must be so, argues Chomsky: otherwise language acquisition would be impossible. Children acquire language quickly from surprisingly few, and often noisy, examples (the "poverty of the stimulus" argument).

The implications are staggering. Somewhere in the wet-ware of the brain there is something like the grammars we process in computer science. At its most optimistic, this also means that grammar-based languages (like LISP, etc) have what it takes to reproduce human cognition.

But is there really a "language" of thought? Or is it just chemicals sloshing around the dendrites (under the hood) that we interpret as language?

Well, there is evidence of some model-based manipulation by our wet-ware. In the classic mental rotation experiments (Shepard and Metzler, 1971), it was shown that the time required to check whether one object was a rotation of another was linearly proportional to the angle of rotation. It is as if some brain box is reaching out to a sculpture of the thing we are looking at, then turning it around at some fixed rate.
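
That linear relationship can be written as a simple formula: response time = baseline + angle / rotation rate. The constants in the sketch below are made up for illustration, not fitted to the experimental data.

    # If the brain "turns a mental model at a fixed rate", response time should
    # grow linearly with the rotation angle. Constants are made up for illustration.
    baseline_s = 1.0            # encode-and-respond time, in seconds
    rate_deg_per_s = 60.0       # hypothetical fixed mental rotation rate

    def predicted_response_time(angle_deg):
        return baseline_s + angle_deg / rate_deg_per_s

    for angle in (0, 60, 120, 180):
        print(angle, "degrees ->", predicted_response_time(angle), "seconds")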

Anyway, if you ask a philosopher, "is it really neurons, or are there symbolic models in between our ears?", they might answer "who cares?". Whatever stance works best is the right one.

Philosophy: Part 1 (we love AI)

Daniel Dennett asks a simple question: try to beat a chess-playing program. What are you going to do? Dennett distinguishes the stances you could take towards it: the physical stance (predict its behaviour from its circuitry and physics), the design stance (predict it from what it was built to do), and the intentional stance (predict it by treating it as a rational agent with beliefs and goals).

Which is the right stance? The answer is: it depends. What do you want to do? Stop being short-circuited by a loose wire? You want the physical stance. Beat the program at chess? You want the intentional stance.

Bottom line: a computer is not just "a machine". It is a mix of things, some of which are best treated like any other intelligence.

Don't believe me? Well, pawn to king four.

(By the way, for a good introduction to AI and philosophy, see The Mind's I.)

Philosophy: Part 2 (AI? You crazy?)

I think therefore I am. I don't think therefore...

There used to be a savage critique by certain philosophers along the lines that AI was impossible. For example, John Searle is a smart guy. His 1969 text Speech Acts: An Essay in the Philosophy of Language is listed as one of the most cited works of the 20th century.

In one of the most famous critiques of early AI, Searle invented the Chinese Room: an ELIZA-like AI that used simple pattern look-ups to react to user utterances. Searle argued that this was nonsense: such a system could never be said to be "really" intelligent.
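
To see how shallow such pattern look-up is, here is a minimal ELIZA-style sketch; the patterns and canned replies are invented, but the point is how little is going on inside.

    # A minimal ELIZA-style responder: match the input against a few patterns
    # and echo back a canned reply. No model, no memory, no inference.
    import re

    RULES = [
        (r"i am (.*)",        "Why do you say you are {0}?"),
        (r"i feel (.*)",      "How long have you felt {0}?"),
        (r"(.*) mother(.*)",  "Tell me more about your family."),
        (r"(.*)",             "Please go on."),        # catch-all
    ]

    def respond(utterance):
        text = utterance.lower().strip(" .!?")
        for pattern, reply in RULES:
            match = re.fullmatch(pattern, text)
            if match:
                return reply.format(*match.groups())

    print(respond("I am worried about the Chinese Room"))
    print(respond("My mother called today"))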

Looking back on it all, 27 years later, the whole debate seems wrong-headed. Of course ELIZA was the wrong model for intelligence: no internal model that is refined during interaction, no background knowledge, no inference, no set of beliefs/desires/goals, etc.

Searle's argument amounts to "not enough; do more". And we did. Searle's kind of rhetoric (that AI will never work) fails in the face of AI's many successes.


Mathematics

Godel's Incompleteness Theorem

There is some mathematical support for Searle's pessimism. In 1930, the philosophical world was shaken to its foundations by a mathematical paper which proved that any consistent formal system powerful enough to express ordinary arithmetic must contain true statements that can never be proved within that system.

That is, formal systems have fundamental limits.

So Godel's theorem gives us an absolute limit to what can be achieved by "formal systems" (i.e. the kinds of things we can write with a LISP program).

Godel's theorem might be used to argue against the "logical school" of AI: if formal logics are so limited, then maybe we should ignore them and try other procedural / functional representations instead.

Cook and NP-Complete

Godel's theorem is somewhat arcane. He showed that some things cannot be proved, but he did not say which things those are.

Enter Steve Cook. In 1971, he showed that a commonly studied problem (boolean satisfiability) is NP-complete: a very large class of problems can be translated into it, no known algorithm solves it in less than exponential time in the worst case, and a fast solution to it would yield a fast solution to them all.

An army of algorithms researchers has followed Cook's lead, and now there are vast catalogues of commonly studied problems for which no known solution is both fast (less than exponential time) and complete (guaranteed to find a solution whenever one exists).
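
Here is a minimal sketch of why these problems hurt: the obvious complete algorithm for boolean satisfiability just tries every assignment, and the number of assignments doubles with each extra variable (2^n for n variables). The clause encoding below is one common convention, chosen here for illustration.

    # Brute-force boolean satisfiability. A formula is a list of clauses; each
    # clause is a list of literals (positive or negative variable numbers).
    # Complete, but it may examine all 2^n assignments for n variables.
    from itertools import product

    def satisfiable(n_vars, clauses):
        for bits in product([False, True], repeat=n_vars):    # 2^n candidates
            def literal_true(lit):
                value = bits[abs(lit) - 1]
                return value if lit > 0 else not value
            if all(any(literal_true(l) for l in clause) for clause in clauses):
                return bits                                   # a satisfying assignment
        return None                                           # provably unsatisfiable

    # (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
    print(satisfiable(3, [[1, -2], [2, 3], [-1, -3]]))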

Relax

All the above talk about limits to AI is wrong-headed. Instead of talking about the limits to AI, how do we explain its competency despite the above issues?

Just because a representational system like LISP is limited does not mean that it is useless. I don't know the length of my 1000th hair above my right ear, but I can still buy a house, write programs, balance my check book, etc. So Godel's theorem does not make me want to junk my LISP compiler and go off into procedural neural-net land.

And maybe my LISP interpreter can't implement a complete and fast solution to a problem, but we can get pretty close. And (using stochastic search) we can do it real quickly.
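
Here is a minimal sketch of that "pretty close, pretty quickly" idea: a WalkSAT-flavoured stochastic local search for satisfiability. It is incomplete (it can fail to find an assignment that exists), the flip limit and noise parameter are arbitrary, and it is written in Python rather than LISP just for brevity.

    # WalkSAT-flavoured stochastic search: start from a random assignment and keep
    # flipping a variable from some unsatisfied clause. Incomplete, but often fast.
    import random

    def clause_satisfied(clause, assign):
        return any(assign[abs(l)] == (l > 0) for l in clause)

    def walksat(n_vars, clauses, max_flips=10_000, noise=0.5):
        assign = {v: random.choice([True, False]) for v in range(1, n_vars + 1)}

        def broken_after_flip(v):
            # how many clauses would be unsatisfied if variable v were flipped?
            assign[v] = not assign[v]
            broken = sum(not clause_satisfied(c, assign) for c in clauses)
            assign[v] = not assign[v]
            return broken

        for _ in range(max_flips):
            unsat = [c for c in clauses if not clause_satisfied(c, assign)]
            if not unsat:
                return assign                        # found a satisfying assignment
            clause = random.choice(unsat)
            if random.random() < noise:
                var = abs(random.choice(clause))     # random walk step
            else:                                    # greedy step: least damage
                var = min((abs(l) for l in clause), key=broken_after_flip)
            assign[var] = not assign[var]
        return None    # gave up; this says nothing about whether a solution exists

    print(walksat(3, [[1, -2], [2, 3], [-1, -3]]))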

But please sleep easy tonight. And keep typing away at LISP.