The Early Days of a Better Nation

Friday, January 04, 2008



An AI skeptic writes

My friend George Berger emailed this comment, which I reproduce here with his permission.
You have done the SF world a great service by posting that article about AI. I hope that many people will read it, since it precisely echoes my thoughts on the subject. I've had similar views since I started watching the "development" of AI in 1977. It's an example of what Lakatos called a "degenerating research program."

I became interested in technical psychology in 1969, when a colleague told me that neurons work "by FM, not AM." I started reading physiology and psychology and have not yet stopped. As a teacher of mathematical logic I had to learn the foundations of computation, which lie quite deep indeed. All this gave me the tools for evaluating the claims of the AI people since my first exposure to them while working at the Technische Hogeschool Delft, here in the Netherlands.

The claims and their intellectual back-up, first in programming and now in neuroscience, never convinced me. I saw and see no conclusive reason to assimilate the brain with any kind of computer or connectionist device. Briefly, there are too many disanalogies (and other difficulties). My skepticism was not well-received in some philosophical quarters. It was dismaying to see how many of my colleagues adopted various philosophical notions (they're no more than that) directly or indirectly based on the "computer metaphor." Most of these people were unequipped to understand the necessary logic and maths, so their dogmatism amazed me.

The scientific literature I have read was all too often equally dogmatic. The popular stuff contained all sorts of unfounded, optimistic claims, all of which turned out to be mere hype. In this way I've seen one project, idea, and theory after the other fail. I've seen little else. We are no closer to creating "Strong AI" than we were when the field was started in the 50s. A good number of neuroscientists are at least as sceptical as I am.

To me the issues are largely empirical. I am equally unconvinced by supposedly principled arguments against strong AI (e.g. those of Searle, other analytical philosophers, and some phenomenologists). Only technical developments will decide this, unless someone comes up with a convincing in-principle argument against Strong AI. So I hope that your post will start more people thinking. I won't decry for one picosecond the many fine SF stories that are predicated on the success of the AI programs. I read them with pleasure. All I can say is that my scientific training and reading gives me no reason to accept the claims of the AI proponents. I can go on and on about this, with documentation, but I won't do so now. Do keep up the good work.

44 Comments:

Sigh. From my viewpoint, your original AI critic doesn't know enough math (and he hasn't heard of genetic "algorithms", which suggests to me he doesn't understand the problem.) Broadly, if Church-Turing holds, AI can be achieved with something like current technology, though it may take a very long time, and if not, not without new technology and theories, perhaps based in quantum computing.

And does Church-Turing hold? Well, in one corner, we have Alan Turing. In the other, we have Roger Penrose. I'm not going to be winning arguments on mathematical philosophy with either man (and besides, Turing is dead). So we wait for new insights. But...I'd take the failure, so far, of the AI project as a sign that our understanding is wrong. We've had 50 years of work and, really, we are no closer than when we began. The project, understand, has been worthwhile; many of the greats of the field have addressed it, and many valuable algorithms and a great deal of technology have been discovered thereby. And yet the thing itself eludes us, which to me suggests that our basic hypothesis is flawed, in the same way that the failure to account for observed phenomena indicated that the "luminiferous ether" hypothesis was flawed.
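Since genetic "algorithms" have come up, a minimal sketch may help readers who haven't met them: score a population of candidate bit-strings, keep the fittest, recombine and mutate them, and repeat. The target string, population size, and truncation selection below are all invented for illustration; no claim is made about any real AI system.

```python
# Minimal, illustrative genetic algorithm: evolve bit-strings toward an
# arbitrary target. All numbers and the target are invented for the sketch.
import random

TARGET = [1] * 20                     # illustrative goal: a string of ones
POP_SIZE, GENERATIONS, MUTATION_RATE = 50, 200, 0.02

def fitness(genome):
    # Count positions that match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome):
    # Flip each bit independently with small probability.
    return [g ^ 1 if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[: POP_SIZE // 2]          # truncation selection
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP_SIZE - len(parents))
    ]
print(generation, fitness(max(population, key=fitness)))
```

The search usually converges in a few dozen generations on a toy problem like this; the open question in the thread is whether such procedures, however effective on narrow problems, add up to anything one would call intelligence.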

As a materialist, I can't be an AI "skeptic". I can be skeptical that current work in the AI field will be relevant to developing actual AI (and I am), and I can believe that we'll get there either a long way in the future, or else serendipitously, and I do. But since the brain (or some larger chunk of the human organism) *is* a machine that produces a mind, I can't very well doubt that it's possible.

To me the issues are largely empirical.

I agree that the issues are empirical. But the empirical evidence is that AI research is making progress. For example, recently an algorithm achieved human-level performance on multiple choice analogy questions from the Scholastic Aptitude Test, by statistical analysis of a large quantity of text:

http://arxiv.org/abs/cs.CL/0608100
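The paper linked above derives its relational statistics from a large corpus. The sketch below is not that method, only a much simpler stand-in (invented three-dimensional word vectors and cosine similarity) meant to make the multiple-choice analogy task concrete.

```python
# Toy multiple-choice analogy solver: "kitten is to cat as puppy is to ?"
# The vectors are invented; a real system would estimate them from text.
import math

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(x * x for x in v))
    return dot / norm if norm else 0.0

def relation(a, b):
    # Represent the relation A:B as the offset between the two vectors.
    return [y - x for x, y in zip(a, b)]

vec = {"kitten": [0.9, 0.1, 0.2], "cat": [0.9, 0.8, 0.2],
       "puppy": [0.1, 0.1, 0.9], "dog": [0.1, 0.8, 0.9],
       "bone":  [0.0, 0.2, 0.5]}

stem = relation(vec["kitten"], vec["cat"])
choices = ["dog", "bone"]
answer = max(choices, key=lambda w: cosine(stem, relation(vec["puppy"], vec[w])))
print(answer)   # -> dog
```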

DD-B, I agree with you, and I guess George would too.

None of which stops me writing novels where human-level AI exists a lot sooner than I expect it to in the real world.

Sigh. Materialism can be valid and AI still impossible. In any event, is faith in materialism any more rational than any other faith?

Caw!

And does Church-Turing hold? Well, in one corner, we have Alan Turing. In the other, we have Roger Penrose.

(1) There is no evidence that the brain uses quantum computation. (2) Quantum computers are computationally equivalent to Turing machines (ignoring speed); that is, they compute the same class of functions. (3) There is no evidence that quantum computation can solve NP-complete problems in polynomial time. (4) I do not believe that Penrose has ever denied the Church-Turing thesis. (5) It is not clear that the Church-Turing thesis is relevant, since one could accept the Church-Turing thesis and deny that it has any implications for AI.

"Quantum computers are computationally equivalent to Turing machines (ignoring speed); that is, they compute the same class of functions."

Has that been proven? I remember results that pointed in the other direction.

Penrose argues, basically, that humans solve problems which can't be solved by Turing machines; he offers quantum computation in the central nervous system as a possible mechanism. Maybe he's right, maybe not. He's not alone, however, even among mathematicians.

Has that been proven? I remember results that pointed in the other direction.

"Quantum computers are not known to be able to solve NP-complete problems in polynomial time."

http://www.scottaaronson.com/blog/

"There is a common misconception that quantum computers can solve NP-complete problems in polynomial time. That is not known to be true, and is generally suspected to be false."

http://tinyurl.com/3yublg

Penrose argues, basically, that humans solve problems which can't be solved by Turing machines...

The Church-Turing thesis is that every function that would naturally be regarded as computable can be computed by a Turing machine. It is not clear that computing a function is the same as solving a problem.

Raven, is 'your original AI critic' George Berger, or the author of the article I linked to? Because George Berger is well pissed at being told he doesn't know enough math ...

Another recent success for AI: "A sports utility vehicle with a mind of its own was declared the winner of DARPA's urban robot car race on Sunday. It travelled autonomously through traffic for six hours and 60 miles (100 kilometres) around a ghost town in California, US, to scoop the prize."

http://tinyurl.com/26d8lh

I'll reply to comment 1. As Ken suggests, I agree with comment 3.
As to 1, I agree with its second paragraph. Of course we need new insights, but they just might supply principled arguments AGAINST Strong AI! Who knows?
Its first paragraph demands a polemical response. I DO know the required maths, having taught most of them. I learned the remainder from several standard texts. The required maths are mathematical logic through Gödel's incompleteness theorem and the theory of recursive functions through the upshot function. This mouthful suggests that the COMMENTATOR doesn't know his/her maths! For by Church's Thesis ANY kind of algorithm is covered by this math. Hence the reference to "genetic" algorithms is irrelevant and quite misleading.

My apologies, Dr. Berger. With all due respect to your erudition I don't think you have the uncomfortable intimacy with this subject which I have developed.

"Of course we need new insights, but they just might supply principled arguments AGAINST Strong AI!" Yes. I agree.

Peter, well, so the computational abilities of quantum computers are still up in the air. It's still an open question, then. That's rather less than your first claim.

"It is not clear that computing a function is the same as solving a problem." Well, yes. But if computing a function is not solving a problem then AI on any Turing machine equivalent is doomed to failure, not so?

In any event, Dr. Berger, again my apologies.

Thanks Raven, it's no problem. My competences are in analytical philosophy, philosophy of science, some maths, and logic. That's sufficient for understanding the AI issues. As you say, I have no "intimacy" with the nuts and bolts of AI.

Peter, well, so the computational abilities of quantum computers are still up in the air. It's still an open question, then. That's rather less than your first claim.

My claim was: Quantum computers are computationally equivalent to Turing machines (ignoring speed); that is, they compute the same class of functions. Here is support for this claim:

"Although quantum computers may be faster than classical computers, those described above can't solve any problems that classical computers can't solve, given enough time and memory (albeit possibly an amount that could never practically be brought to bear). A Turing machine can simulate these quantum computers, so such a quantum computer could never solve an undecidable problem like the halting problem. The existence of "standard" quantum computers does not disprove the Church-Turing thesis."

http://tinyurl.com/3yublg

But if computing a function is not solving a problem then AI on any Turing machine equivalent is doomed to failure, not so?

Earlier, you wrote, "if Church-Turing holds, AI can be achieved." The Church-Turing thesis is that every function that would naturally be regarded as computable can be computed by a Turing machine. Therefore you are assuming that, if every function that would naturally be regarded as computable can be computed by a Turing machine, then AI can be achieved. Personally, I don't have a problem with this assumption, but somebody who doubts that AI is possible is likely to also doubt this assumption. Therefore you are assuming what you are trying to prove. You are trying to change a debate about AI into a debate about the Church-Turing thesis. But the people who doubt AI is possible are the very people who would not accept your attempt to shift the topic of the debate. And the people who already believe AI is possible are not going to find anything useful in discussing the Church-Turing thesis, when the topic is really AI.

I'm a physicalist (I don't like the term "materialist", which seems to carry some baggage from the pre-Einsteinian view of physical reality), and I believe that in the long run, some purely physical model for cognition, choice, and even qualia will be arrived at. But I'm not a believer in algorithmic AI, which I regard as an obsolete scientific hypothesis—The Raven's comparison to luminiferous ether is along the right lines, though I tend to think of phlogiston. That is, I don't believe that a Turing machine or a von Neumann computer is a good model for how the human brain processes information.

Among other things, we know how long it takes a nerve impulse to travel along a neuron, and we know how long it takes a human brain to go from sensory input to perceptual identification. That doesn't allow even a thousand sequential steps. It's just not believable that an algorithm with no more than a thousand steps could do anything as sophisticated as, say, face recognition. We also know that neurons are not binary on/off switches; what they send to other neurons is not a "Yes/No" signal, but a sequence of pulses with some frequency. I'm sure anyone who has better than my amateur knowledge of neuroscience can list a bunch of other differences.
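The arithmetic behind that timing constraint is worth making explicit. The figures below are rough, commonly cited magnitudes (a few milliseconds per neural step, a few hundred milliseconds for recognition), not data from the comment itself:

```python
# Rough magnitudes only: how many strictly serial neural steps fit
# between sensory input and perceptual identification?
step_ms = 5.0              # assumed cost of one neuron-to-neuron step
recognition_ms = 500.0     # assumed generous bound for identification
print(recognition_ms / step_ms)   # -> 100.0: nowhere near a thousand steps
```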

That doesn't say, of course, that a very, very fast computer couldn't run a simulation model of whatever physical processes in the brain generate cognition and choice, by brute-force, step-by-step number crunching—though it would take a tremendous amount of data storage to model an entire brain, a massive speedup to do it in real time, and a much more detailed understanding of how the brain actually works to make the simulation accurate. But even supposing that we will jump past all those difficulties, the fact that an algorithmic process could simulate the brain's cognitive activities would not show that those activities are algorithmic, any more than the fact that differential equations can describe the orbit of a satellite, and that an algorithmic process can solve differential equations, shows that an orbiting satellite is either solving differential equations or carrying out algorithms. "A is a model of X" does not imply "X is a form of A."

So when I read science fiction in which algorithmically based systems are capable of human-equivalent cognition, choice, and self-awareness, I view them with a certain nostalgia, as I do those Heinlein juveniles where Mars and Venus have oxygen in their atmospheres, or as I view the role-playing game Space 1889, in which the luminiferous ether is real and provides a way to travel between the planets.

Um, Peter, "...those described above..." It's not like we know all designs for quantum computers yet. This is still a very new field.

"somebody who doubts that AI is possible is likely to also doubt this assumption." An excellent point, and indeed--if I understand him correctly--William H. Stoddard (forgive me if I have omitted a title you deserve) has expressed exactly that doubt. But I am not trying to turn the debate into one of Church-Turing; I tripped over one of my own assumptions.

Computers can compose music [1], make paintings [2], play poker [3], drive cars [4], and solve analogies [5]. AI research is making progress. Yes, it's harder than we thought, but it's happening. Even machine translation is becoming a useful tool [6]. Let's forget the stale philosophical debate and look at what's actually going on in the field.

[1] http://tinyurl.com/2pgg8x
[2] http://tinyurl.com/2sjkak
[3] http://tinyurl.com/lb92
[4] http://tinyurl.com/26d8lh
[5] http://tinyurl.com/2meyo4
[6] http://tinyurl.com/lrfzb

William H. Stoddard: remember the parallelism. A thousand steps of serial computation (limited in certain ways--no matrices, for instance) could probably not do face recognition. But a thousand steps of massively parallel computation perhaps could.
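To make that contrast concrete, here is a toy calculation with invented sizes, claiming nothing about real neural circuits: one "step" of a massively parallel layer, in which a thousand units each combine a thousand inputs, amounts to a million operations if executed serially.

```python
# One conceptual "parallel step": 1000 units each weigh 1000 inputs.
# Run serially (as here) it costs ~a million multiply-adds; in a
# massively parallel substrate it is a single time-step.
import random

inputs = [random.random() for _ in range(1000)]
weights = [[random.random() for _ in range(1000)] for _ in range(1000)]
outputs = [sum(w * x for w, x in zip(row, inputs)) for row in weights]
print(len(outputs))   # 1000 activations from one "parallel" step
```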

"the fact that an algorithmic process could simulate [...] solving differential equations or carrying out algorithms." True, true. On the other hand, how could you tell the difference? With a satellite, the physicality of the satellite makes the difference. But thought is immaterial to begin with. The computer you describe could, provided one could provide it with some sort of body, pass the Turing test handily--it would behave just like any intelligence. And if you are a physicalist, I can't really see that the method of achieving the behavior matters. Or am I missing something?

BTW, my human amanuensis has seen demonstrations of both neural network and genetic "algorithms". He thinks they are uncanny, and, despite all oversimplifications, do show some of the behaviors he expects of what is called cognition. Can such things, in total, be assembled into an intelligence? He thinks not; rather he expects that, as with many previous promising approaches, ultimately a dead end will be reached. But of course, he could be wrong.

Caw! Caw! Caw!

And computers can create patentable inventions:

http://tinyurl.com/lxc3u
http://tinyurl.com/3alptb

But thought is immaterial to begin with.

This is a key assumption that I reject. As a physicalist, I believe that thought is a physical process, taking place in the brain, involving changing electrochemical potentials at neural membranes, release of various neurotransmitters, and other processes that a neurophysiologist could explain in far more detail than I can. This process is complex enough, and depends on such microscale events, that our current technology is not able to track or measure it—though we are developing toward that ability; I've seen, for example, images of neural activity in a monkey's visual cortex that clearly show a topological equivalent of the image the monkey was looking at (I believe this was in one of Paul Churchland's books). But any claim that thought or feeling or choice is "immaterial" is at best assuming what it needs to prove. I believe the contrary, and I believe that any model of human cognition that does not take into account what we know about the actual material processes within the human brain is assuming away the actual difficulties of the question.

As to massive parallelism, of course that's what's going on in the brain. I would take that as a reason for believing that the brain is not a Turing machine, because Turing machines are not parallel at all; they have a single read/write head going step by step along a single tape. I imagine you can, in theory, program a Turing machine to emulate a massively parallel system by using different parts of its tape to track the activities of different parallel processors, and progressing through them all, one by one, cycle after cycle—but again, the fact that system A (a Turing machine) can emulate system X (a massively parallel computer) does not mean that system X is an example of system A.
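The emulation just described can be sketched directly; the Processor class below is a stand-in for a parallel unit, not a real Turing machine. The point of the sketch is the multiplication of steps: each "parallel" cycle costs the serial machine as many steps as there are units.

```python
# Serial round-robin emulation of N parallel processors: 100 parallel
# cycles cost the serial machine 100 * N steps. Illustrative only.
class Processor:
    def __init__(self):
        self.work_done = 0
    def step(self):
        self.work_done += 1        # one unit of local work per cycle

units = [Processor() for _ in range(1000)]
serial_steps = 0
for cycle in range(100):           # 100 "parallel" cycles
    for unit in units:             # visited one by one, serially
        unit.step()
        serial_steps += 1
print(units[0].work_done, serial_steps)   # -> 100 100000
```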

A note on this question of whether human cognition transcends the limits of the Church-Turing thesis. In my reading of the history of the Turing machine concept, it appears that Turing did not originally propose the "Turing machine" as a scheme for a machine that could actually be built. He proposed it as a kind of "thought experiment" to explore the limitations of logical proof. A Turing machine was in fact an idealized model of a human logician, as conceived in the formalist program of early 20th century philosophy of mathematics; it was capable of solving all those problems and only those problems that a mathematician could solve by using strictly valid proofs and theorems.

Now, if you are interested in the broader topic of human cognition rather than in the specific topic of human logical proofs, there may be human cognitive abilities that exceed a Turing machine's capabilities. But I'm not persuaded that the theorems a human being can prove are different from those a Turing machine can prove.

(Caveat: I am avoiding the whole issue of whether formalist philosophy of mathematics is valid. But in any case, its main competitors, intuitionism and its descendant constructivism, seem more inclined to accept a narrower set of proofs as meaningful than are accepted in formalist mathematics—so a constructivist mathematician might admit to being able to prove fewer, not more, things than a Turing machine could prove. In particular, constructivism seems to reject proofs based on applying the Law of Excluded Middle to transfinite sets.)

Wm. Stoddard: velocity is physical. But is it material?

"Turing machines are not parallel at all"

A non-deterministic Turing machine is infinitely parallel. It turns out that anything a non-deterministic Turing machine can compute can be computed by a deterministic Turing machine; as mathematical abstractions they are equivalent. In practice, of course, we cannot build a Turing machine. Turing machines have an infinite number of states; any computer we can build necessarily has a finite number of states, though in practice these cannot be enumerated.

"But I'm not persuaded that the theorems a human being can prove are different from those a Turing machine can prove."

Nor am I. But a number of very knowledgeable mathematicians are so persuaded and, if it turns out to be true, it means that no current computer can ever even simulate intelligence. So it's a very important question in AI.

Peter, AI research is starting to resemble the alchemical quest to turn lead into gold. In 50 years of research, despite numerous announcements--and genuine useful solutions to other problems--still no gold.

I've spent two months, recently, head down in the history of those interesting failures. We're still there, unfortunately. David Cope's music generator is an aid to a human. The poker-playing programs cannot read faces. And so on. The general form of these successes is that some problem-solver which works in a limited domain is devised, and the claim is made that it is "intelligent". But it never is successfully generalized, and it ends up being used--if at all--as an aid to human intelligence.

One correction to Raven. A Turing Machine doesn't have an infinite number of states. At any time it can be in any one of its FINITE number of (internal) states. It has a tape with a potentially INFINITE number of squares on which it can operate. Any real computer has the former and lacks the latter. A computer's internal design must be that of some UNIVERSAL Turing machine. All this is clearly and simply described by the maths I mentioned yesterday.
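George's description translates directly into a toy simulator, a minimal sketch assuming nothing beyond the textbook definition: the transition table (and hence the set of internal states) is finite, while the tape, represented here by a dictionary, can grow without bound. The three-rule example machine is invented for illustration.

```python
# Minimal Turing machine: FINITE state/transition table, potentially
# INFINITE tape (a dict whose unset cells read as blank 0s).
def run(transitions, state="start", halt="halt"):
    tape, head = {}, 0
    while state != halt:
        symbol = tape.get(head, 0)                    # blank cells read as 0
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return tape

# Toy machine: write three 1s, moving right, then halt.
rules = {("start", 0): (1, "R", "s1"),
         ("s1", 0):    (1, "R", "s2"),
         ("s2", 0):    (1, "R", "halt")}
print(sorted(run(rules).items()))   # -> [(0, 1), (1, 1), (2, 1)]
```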

I am delighted to see that Mr. Stoddard and The Raven have raised and discussed the connections between foundations of math and AI skepticism. The bottom line is, I think, this: If an epistemology and metaphysics of cognition can't account for our knowledge and practice of math, then they're no good. It seems (as The Raven points out) that some excellent mathematicians (Gödel!) reject what we'd now call Strong AI on precisely this ground. These are clearly critical issues. Among other things they give importance to the classical disputes between constructivists, formalists, and Platonists. I hope this adds to Mr. Stoddard's remarks and to The Raven's justified doubts and hesitations.

A non-deterministic Turing machine is infinitely parallel. It turns out that anything a non-deterministic Turing machine can compute can be computed by a deterministic Turing machine; as mathematical abstractions they are equivalent.

I don't see that a nondeterministic Turing machine is parallel at all. It's still one tape head moving up and down one tape and reading and writing one symbol at a time. It's not a substantial number of different heads moving along different tapes and somehow sending information back and forth. That looks serial to me. The fact that it may come up with different outcomes on different runs from the same initial tape and starting point doesn't mean that any one run is other than serial.

Of course, perhaps "parallel" is being used in a different sense here. But in that case, any argument that applies it to a physically massively parallel system such as the human brain seems to be purely verbal.

And "as mathematical abstractions they are equivalent" seems to beg the question. For a deterministic Turing machine to emulate a nondeterministic Turing machine it presumably has to go through some fairly complicated internal process, involving multiple steps to do what the nondeterministic machine does in one step, and perhaps storing additional data on a longer tape to do the emulated randomizing. But when we deal with the brain, we are talking about a system that only has time for dozens or hundreds of steps between input and output. So multiplying the length of the emulation process on a universal Turing machine just makes that machine a worse model of the brain.

There are old jokes about the mathematician who turns to the study of animal husbandry, and begins his first paper with "Assume a spherical cow." The spherical cow strikes me as the height of realism compared to algorithmic AI. I'm confident that in time we will learn to build artificially intelligent entities; I don't think those entities will achieve their abilities by carrying out an algorithm. Except in the trivial sense that an algorithm for simulating the evolving states of a brain would be an algorithm—but the "intelligence" there would come from the data that defined the brain being simulated, not from the algorithm that processed those data.

I tend to think that the great majority of the problem with AI as a field of research stems from the axiomatic assumption that some entity or property of intelligence exists.

I don't think one does.

Which is at least a viewpoint which has little difficulty with the idea that constructing something that doesn't exist is going to be very difficult.

(Which is not to say that I don't think it's possible to build a machine to emulate the abilities of the human brain; that's pretty obviously the case.)

I'm going to hold my hands up here and admit that I'm lost in some of the technicalities of the discussion. It's been a LONG time since I played seriously with electronics and physics.

However, I am a psychologist, and although AI and cognition aren't my area of expertise per se, my understanding of the issues from a psychological viewpoint suggests that the main barrier to AI is that we don't actually understand how we ourselves function.

I suppose that it depends on what we mean by AI. Do we mean an artificial intelligence that is self-aware and creative, or do we mean an artificial intelligence that gives the impression of self-awareness and creativity?

We can see the former in most of the great apes; we ourselves develop it at around the age of 2 1/2 years. One of the main psychological issues is how we become self-aware, and without that knowledge I'm not convinced we can artificially create self-awareness in AIs.

If we however are looking at creating interfaces that are easy to use I suppose we are in a way approaching that goal.

An interesting set of info on the cognitive approach to AI can be found here: http://www.aaai.org/AITopics/html/cogsci.html#lehrer

OK, back to the marking.

Thanks Mr. Turney,
The approach to cognition via metaphor is interesting. I've read a bit of it. As a philosopher let me point out that its intellectual roots are in Heidegger's Being And Time, as filtered through Merleau-Ponty's Phenomenology Of Perception. Two books I plan to RE-read, hopefully with more comprehension. I have no fixed position on the nature of the mind, although I'd certainly prefer a form of physicalism that has room for sensations. That's not an inconsistent position.

Hi George,
I am also a philosopher (at least, my PhD is in philosophy, in the analytical tradition, with an emphasis on formal logic, philosophy of mind, philosophy of science, and philosophy of mathematics), although I have spent the last 20 years doing research in AI (applied philosophy?). With regard to sensations (qualia), I accept the reasoning of the philosopher David Cole.

Good to hear about your background, Mr. Turney. I guess you have read about mine, mentioned above. My PhD thesis was in the Philosophy of Physics but I've been interested in the mind since 1969. See my first comment.

I'm not sure I understand what is going on with Cole's argument; it seems too abbreviated to be figured out easily. But as a physicalist, I'm inclined toward Nicholas Humphrey's theory of qualia, which is that where perceptions are states of sensory neurons that are attributed by the brain to the world, sensations are states of sensory neurons that are attributed by the brain to the body, and associated with physiological states of the body. Redness goes with a slight rise in body temperature and a slight dilation of the blood vessels; sweetness with a slight release of saliva; coldness with a tendency to erect the hairs and shiver; pain, well, we know what pain goes with, all too well, which is why it's the prototype for the category of qualia.

So the reason that qualia are private is not that they are part of the mind, and share the privacy of the mind; it's that they are part of the body and share the uniqueness of each person's embodiment in one body.

Assuming that we could make a conscious AI, I can easily see that it could have perceptions; but it's not so clear that it could have sensations, unless we gave it a body with a physiology. It might be an entity that only had perceptions, which sounds like a peculiarly detached state.

I'm not a professional philosopher, by the way; I'm a philosophy hobbyist.

Here is more on Cole's argument:

Functionalism and inverted spectra
Thought and qualia

His argument is an attack on the Inverted Spectrum argument. The basic idea is that you could actually turn this thought experiment into reality, by giving a person some video goggles that inverted the spectrum. Although this experiment has not actually been done, empirical evidence from psychology strongly suggests that the person would eventually have a kind of mental "flip", in which the inverted colours seem natural again. This supports the functionalist position in philosophy of mind.

Hi Peter,
The inverted spectrum and related arguments make my head spin. They are too hard for me and too impalpable. For example, I tried to read S. Shoemaker's two early articles long ago, but got lost. Not my cup of tea. I prefer the crispness of logic, logical reconstructions, and, say, Frege on logic. How about you?

Hi George,
I'm happy to discuss this with you, but we're wandering a fair bit from the topic of the blog post, and I suspect that our host (Ken MacLeod) would prefer that we switch over to email.

Hi Peter,
You're right. I just wrote down your e-address. Mine is alicebesch@wanadoo.nl. I live in Amsterdam and will try to get back to you late this evening. If not then, then definitely tomorrow.
This reply is more-or-less on topic, since I must now go to the monthly SF-Cafe of a Dutch SF organization.

I'm sorry to have had to drop out of this for a few days, though perhaps it was just as well--winter term started & I was too busy. A few final remarks:

"I tend to think that the great majority of the problem with AI as a field of research stems from the axiomatic assumption that some entity or property of intelligence exists."

Perhaps it does not, but we still cannot get computers to do many cognitive tasks which humans do. We've had 50 years of trying and it's reasonable to ask why.

Peter Turney objects that AI has turned to emulating biological systems--evolution and neurology. The early approaches did not rely on emulating biological systems and were failures; for this reason the emulation of biological systems has become a popular approach. It is also hoped that we might understand what is missing from our approaches by studying biological systems. Perhaps this is so, but we have not so far succeeded.

"A Turing Machine doesn't have an infinite number of states. At any time it can be in any one of its FINITE number of (internal) states."

You are right--my mistake. I was running mathematical and software engineering terminology together. In software engineering, memory is not, of course, actually infinite, and for some purposes it is useful to call the contents of memory a computer's "state". I had not known that Gödel was such an uncompromising Platonist--thank you for the information.

"I don't see that a non-deterministic Turing machine is parallel at all."

"Non-deterministic" in this type of mathematics means, specifically, a machine that explores all alternative paths until a solution is reached. And, in fact, non-deterministic and deterministic Turing machines compute the same set of functions, though deterministic machines can take many more steps to do so. It is noteworthy that this is very similar to the idea of superposed states and wave-function collapse in quantum mechanics.

I am not a philosopher in any modern sense at all, as I am sure is evident here; I just keep getting ambushed by this problem and have had to learn about it in self-defense.

Ken, thank you for your patience and for hosting this unexpectedly long and contentious discussion.

It's been a pleasure to have you all here. I don't often get to see, let alone host, such a high-level and civilised discussion.

I don't know what to say... If we're trying to achieve the level of intelligence displayed in this discussion with AI work, we REALLY have our work cut out for us.

Peter Turney objects that AI has turned to emulating biological systems--evolution and neurology.

This is not what I said. You are confusing methods with goals. The goal of AI research is to study and create algorithms that perform well on tasks that would usually be said to require intelligence. AI researchers are using many different methods to reach this goal. One method is to look to biology for inspiration. I have absolutely no objection to this method; it is a good method, but it is not the only method. My point was that progress in AI should be measured in terms of the goal of AI. Looking to biology is a method, not a goal. Progress in AI should not be measured in terms of the similarity between AI algorithms and biological mechanisms.

The early approaches did not rely on emulating biological systems and were failures; for this reason the emulation of biological systems has become a popular approach.

This claim is not historically accurate. AI researchers have looked to biology (and elsewhere) for inspiration since the early days of AI. For example, Marvin Minsky built a neural network machine in 1951.

Even magnificent failures shed new light upon the darkest of subjects, perhaps objects too. On the journey to A.I. the investigations into brain/mind problems will reveal better and better understanding of thoughtful beings.

Although I don't know about shifting spectra, there is a well-known effect of using prism glasses that invert the image projected into the eye. After a relatively short period of time the individual has a perceptual switch and the brain accounts for the inverted image. The fun is when you remove the prisms, since the individual has to readjust to the non-manipulated world image.

One of the interesting things about the neural network in the brain is that, to an extent, it has a large amount of redundancy and is self-correcting. I'd imagine that any AI would also have to have some element of this inbuilt.

Somebody earlier noted that it would be difficult to create an AI when we have difficulty in defining intelligence in the first place. After all, intelligence is just what intelligence tests test!
