The Early Days of a Better Nation
Ken MacLeod's comments. “If these are the early days of a better nation, there must be hope, and a hope of peace is as good as any, and far better than a hollow hoarding greed or the dry lies of an aweless god.”—Graydon Saunders Contact: kenneth dot m dot macleod at gmail dot com Blog-related emails may be quoted unless you ask otherwise.
Friday, January 04, 2008
You have done the SF world a great service by posting that article about AI. I hope that many people will read it, since it precisely echoes my thoughts on the subject. I've had similar views since I started watching the "development" of AI in 1977. It's an example of what Lakatos called a "degenerating research program."

44 Comments:

As a materialist, I can't be an AI "skeptic". I can be skeptical that current work in the AI field will be relevant to developing actual AI (and I am), and I can believe that we'll get there either a long way in the future, or else serendipitously, and I do. But since the brain (or some larger chunk of the human organism) *is* a machine that produces a mind, I can't very well doubt that it's possible.
To me the issues are largely empirical.
DD-B, I agree with you, and I guess George would too.
Sigh. Materialism can be valid and AI still impossible. In any event, is faith in materialism any more rational than any other faith?
"Quantum computers are computationally equivalent to Turing machines (ignoring speed); that is, they compute the same class of functions."
Has that been proven? I remember results that pointed in the other direction.

Raven, is 'your original AI critic' George Berger, or the author of the article I linked to? Because George Berger is well pissed at being told he doesn't know enough math ...
Another recent success for AI: "A sports utility vehicle with a mind of its own was declared the winner of DARPA's urban robot car race on Sunday. It travelled autonomously through traffic for six hours and 60 miles (100 kilometres) around a ghost town in California, US, to scoop the prize."
I'll reply to comment 1. As Ken suggests, I agree with comment 3.
My apologies, Dr. Berger. With all due respect to your erudition, I don't think you have the uncomfortable intimacy with this subject which I have developed.

Thanks Raven, it's no problem. My competences are in analytical philosophy, philosophy of science, some maths, and logic. That's sufficient for understanding the AI issues. As you say, I have no "intimacy" with the nuts and bolts of AI.
Peter, well, so the computational abilities of quantum computers are still up in the air. It's still an open question, then. That's rather less than your first claim.
I'm a physicalist (I don't like the term "materialist", which seems to carry some baggage from the pre-Einsteinian view of physical reality), and I believe that in the long run, some purely physical model for cognition, choice, and even qualia will be arrived at. But I'm not a believer in algorithmic AI, which I regard as an obsolete scientific hypothesis—The Raven's comparison to luminiferous ether is along the right lines, though I tend to think of phlogiston. That is, I don't believe that a Turing machine or a von Neumann computer is a good model for how the human brain processes information.
Um, Peter, "...those described above..." It's not like we know all designs for quantum computers yet. This is still a very new field.
Computers can compose music [1], make paintings [2], play poker [3], drive cars [4], and solve analogies [5]. AI research is making progress. Yes, it's harder than we thought, but it's happening. Even machine translation is becoming a useful tool [6]. Let's forget the stale philosophical debate and look at what's actually going on in the field.
William H. Stoddard: remember the parallelism. A thousand steps of serial computation (limited in certain ways--no matrices, for instance) could probably not do face recognition. But a thousand steps of massively parallel computation perhaps could.
But thought is immaterial to begin with.
A note on this question of whether human cognition transcends the limits of the Church-Turing thesis. In my reading of the history of the Turing machine concept, it appears that Turing did not originally propose the "Turing machine" as a scheme for a machine that could actually be built. He proposed it as a kind of "thought experiment" to explore the limitations of logical proof. A Turing machine was in fact an idealized model of a human logician, as conceived in the formalist program of early 20th century philosophy of mathematics; it was capable of solving all those problems and only those problems that a mathematician could solve by using strictly valid proofs and theorems.
Wm. Stoddard: velocity is physical. But is it material?
Peter, AI research is starting to resemble the alchemical quest to turn lead into gold. In 50 years of research, despite numerous announcements--and genuine useful solutions to other problems--still no gold.

One correction to Raven. A Turing Machine doesn't have an infinite number of states. At any time it can be in any one of its FINITE number of (internal) states. It has a tape with a potentially INFINITE number of squares on which it can operate. Any real computer has the former and lacks the latter. A computer's internal design must be that of some UNIVERSAL Turing machine. All this is clearly and simply described by the maths I mentioned yesterday.

I am delighted to see that Mr. Stoddard and The Raven have raised and discussed the connections between foundations of math and AI skepticism. The bottom line is, I think, this: If an epistemology and metaphysics of cognition can't account for our knowledge and practice of math, then they're no good. It seems (as The Raven points out) that some excellent mathematicians (Gödel!) reject what we'd now call Strong AI on precisely this ground. These are clearly critical issues. Among other things they give importance to the classical disputes between constructivists, formalists, and Platonists. I hope this adds to Mr. Stoddard's remarks and to The Raven's justified doubts and hesitations.

Mathematics is based on metaphor and analogy, and we are making progress in computational modeling of analogy and metaphor.
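[Editorial aside: the finite-states/unbounded-tape distinction above can be made concrete with a minimal sketch. Everything here is illustrative--the machine, names, and example are invented for this note, not drawn from the thread.]

```python
# A minimal Turing machine sketch: a FINITE set of internal states,
# but a tape that can grow without bound (modelled here as a dict).

def run_tm(transitions, start, accept, tape_input, max_steps=10_000):
    """transitions: (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left), 0 (stay), or +1 (right)."""
    tape = {i: s for i, s in enumerate(tape_input)}
    state, head = start, 0
    for _ in range(max_steps):
        if state == accept:
            return True, tape
        key = (state, tape.get(head, '_'))  # '_' is the blank symbol
        if key not in transitions:
            return False, tape  # halt: no applicable rule
        state, write, move = transitions[key]
        tape[head] = write
        head += move
    return False, tape  # step budget exhausted

# Example machine: scan right, flipping every 1 to 0, until a blank is hit.
flip = {
    ('scan', '1'): ('scan', '0', +1),
    ('scan', '0'): ('scan', '0', +1),
    ('scan', '_'): ('done', '_', 0),
}
ok, tape = run_tm(flip, 'scan', 'done', '1101')
# ok is True; tape squares 0..3 now read 0000
```

Note the asymmetry Berger points out: the transition table (the machine's "internal design") is finite and fixed, while the tape dict can acquire new squares indefinitely; any physical computer has the first property but only approximates the second.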
A non-deterministic Turing machine is infinitely parallel. It turns out that anything a non-deterministic Turing machine can compute can be computed by a deterministic Turing machine; as mathematical abstractions they are equivalent.
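[Editorial aside: the equivalence just stated can be illustrated with a toy sketch--all names here are hypothetical. A nondeterministic machine "guesses" an accepting branch in one step; a deterministic one computes the same function by enumerating every possible guess, at a possibly enormous cost in time.]

```python
from itertools import chain, combinations

def nondeterministic_accepts(nums, target):
    """Does some subset of nums sum to target?
    Conceptually a nondeterministic machine guesses the subset and checks it;
    deterministically we simulate the guess by trying all 2**n subsets."""
    subsets = chain.from_iterable(
        combinations(nums, r) for r in range(len(nums) + 1))
    return any(sum(s) == target for s in subsets)

print(nondeterministic_accepts([3, 7, 1, 8], 11))  # True: 3 + 8 = 11
print(nondeterministic_accepts([3, 7, 1, 8], 6))   # False: no subset sums to 6
```

Same class of computable functions, as the comment says; the equivalence silently ignores speed, which is exactly where the famous P versus NP question lives.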
I tend to think that the great majority of the problem with AI as a field of research stems from the axiomatic assumption that some entity or property of intelligence exists.
I'm going to hold my hands up here and admit that I'm lost in some of the technicalities of the discussion. It's been a LONG time since I played seriously with electronics and physics. I had a lot to say about this, so I put it in my blog.
Thanks Mr. Turney,
Hi George,

Good to hear about your background, Mr. Turney. I guess you have read about mine, mentioned above. My PhD thesis was in the Philosophy of Physics but I've been interested in the mind since 1969. See my first comment.
I'm not sure I understand what is going on with Cole's argument; it seems too abbreviated to be figured out easily. But as a physicalist, I incline toward Nicholas Humphrey's theory of qualia, which is that where perceptions are states of sensory neurons that are attributed by the brain to the world, sensations are states of sensory neurons that are attributed by the brain to the body, and associated with physiological states of the body. Redness goes with a slight rise in body temperature and a slight dilation of the blood vessels; sweetness with a slight release of saliva; coldness with a tendency to erect the hairs and shiver; pain, well, we know what pain goes with, all too well, which is why it's the prototype for the category of qualia.
Here is more on Cole's argument:
Hi Peter,
Hi George,
Hi Peter,
I'm sorry to have had to drop out of this for a few days, though perhaps it was just as well--winter term started & I was too busy. A few final remarks:

It's been a pleasure to have you all here. I don't often get to see, let alone host, such a high-level and civilised discussion.

I don't know what to say... If we're trying to achieve the level of intelligence displayed in this discussion with AI work, we REALLY have our work cut out for us.
Peter Turney objects that AI has turned to emulating biological systems--evolution and neurology. Even magnificent failures shed new light upon the darkest of subjects, perhaps objects too. On the journey to A.I. the investigations into brain/mind problems will reveal better and better understanding of thoughtful beings.
Although I don't know about shifting spectrum, there is a well-known effect of wearing prism glasses that invert the image projected onto the eye. After a relatively short period the individual's perception switches and the brain compensates for the inverted image. The fun comes when you remove the prisms, since the individual has to readjust to the unmanipulated image of the world.
Sigh. From my viewpoint, your original AI critic doesn't know enough math (and he hasn't heard of genetic "algorithms", which suggests to me he doesn't understand the problem.) Broadly, if Church-Turing holds, AI can be achieved with something like current technology, though it may take a very long time, and if not, not without new technology and theories, perhaps based in quantum computing.
And does Church-Turing hold? Well, in one corner, we have Alan Turing. In the other, we have Roger Penrose. I'm not going to be winning arguments on mathematical philosophy with either man (and besides, Turing is dead). So we wait for new insights. But...I'd take the failure, so far, of the AI project as a sign that our understanding is wrong. We've had 50 years of work and, really, we are no closer than when we began. The project, understand, has been worthwhile; many of the greats of the field have addressed it, and many valuable algorithms and a great deal of technology have been discovered thereby. And yet the thing itself eludes us, which to me suggests that our basic hypothesis is flawed, in the same way that the failure to account for observed phenomena indicated that the "luminiferous ether" hypothesis was flawed.
By randolph, at Friday, January 04, 2008 1:20:00 pm
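[Editorial aside: randolph mentions genetic algorithms in passing. For readers who haven't met them, here is a toy sketch--entirely invented for this note, with made-up parameters--that evolves bit-strings toward all-ones by mutation and selection.]

```python
import random

def evolve(length=20, pop_size=30, generations=200, seed=0):
    """Toy genetic algorithm: keep the fitter half of the population each
    generation, refill with mutated copies, stop on a perfect individual."""
    rng = random.Random(seed)

    def fitness(genome):
        return sum(genome)  # count of 1-bits

    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == length:
            break  # perfect genome found
        survivors = pop[: pop_size // 2]
        # Each survivor spawns one child with a 5% per-bit mutation rate.
        children = [[bit ^ (rng.random() < 0.05) for bit in parent]
                    for parent in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(sum(best), "ones out of 20")
```

The point relevant to the thread: nothing in the loop "understands" the problem; selection pressure alone does the work, which is why critics who equate AI with hand-written symbolic rules miss this whole family of methods.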