Both strong artificial intelligence (here in the sense of sentient/conscious AI) and faster-than-light travel (FTL) are dreams of science fiction and of (some parts of) humanity. But what is the difference between the two?
Proposing FTL today is pretty much a lost cause (although there are interesting ideas such as the Alcubierre drive – at least it’s still a viable option for sci-fi literature). The problem is that Einstein’s special theory of relativity (SRT) postulates that nothing can go faster than light (more precisely: no information can be transmitted faster than light; spaceships and their passengers are of course matter organized in certain ways, i.e. information, so they cannot go faster than light either).
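To make that postulate a bit more concrete, here is a minimal textbook-level sketch (standard special relativity, nothing specific to this post): the energy of an object of mass $m$ moving at speed $v$ is

$$E(v) = \gamma(v)\,mc^2, \qquad \gamma(v) = \frac{1}{\sqrt{1 - v^2/c^2}},$$

and since the Lorentz factor $\gamma$ diverges as $v \to c$ (and becomes imaginary for $v > c$), accelerating any lump of matter (spaceship, passenger, or any other carrier of information) up to or beyond the speed of light is not merely hard; within the theory it is simply undefined.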
To date, we have no observations that contradict relativity; on the contrary, SRT and GRT are highly successful and thoroughly corroborated. To put it another way, observing an FTL object would be, theoretically speaking, quite… a surprise.
The situation is very different for strong AI: first of all, there is no theory whatsoever that predicts such a thing to be impossible. We don’t yet know exactly how this thing called consciousness arises in the brain, but it is a very active subject of research.
But there is something more important: whereas we have never observed FTL objects, we observe strong “I”s (intelligences) every day: your fellow humans, yourself, etc.
We are conscious, and we live in this physical world, made of the same physical stuff as everything else. Our consciousness is an organizational property of the matter we are made of, not something magical tacked on as an afterthought. What a wonderful insight: we know by simple observation of our surroundings that physical matter configurations can become conscious! Easy to see, yes? But, as Aristotle said: “Just as bats’ eyes are to daylight, so is the mind blind to that which is most obvious of all.”
Strong AI is not a problem in the sense of “could it possibly exist?”; it is evidently just an engineering problem (albeit a complex one). Maybe we will need molecular biology to solve it (meaning that AI would only run on proteins and not on silicon, which is more in line with materialism than with computationalism; but it is engineering nonetheless). We just have to find out how matter has to interact (in a sufficiently reentrant way) to build an AI.
After all, the distinction between artificial and natural is pretty thin anyway. Ants are natural. Their nests are natural. Humans are evolved, so they are natural. Why, then, should we not call their artifacts natural as well? It is only a philosophical word quibble. The natural/artifact distinction is sometimes interesting: when we stumble upon an artifact on an alien world, this is astounding not because an “artifact” is something “supernatural”, totally out of this world, but because an artifact suggests an artificer – an intelligence, an agent, which made it – a natural being of some sophistication. The intelligence that made the thing would be quite a natural inhabitant of its environment. Never let the artificial/natural distinction confuse you!
So, to get back on topic: what opponents of strong AI would actually have to claim is that we will never (in 1000 years? 1 million years? our descendants on different planets in 5 billion years?) be able to engineer a conscious artifact, for mysterious reasons that would go against everything we know about this world. That claim seems completely ludicrous to me. And if it does not sound ludicrous to you, go look in a mirror! (You are conscious, you are made of matter, and you are not going faster than light.) Opponents of strong AI are fighting the same losing battle the vitalists fought in the 19th and 20th centuries.
I found this amusing quote by Francis Crick in the Wikipedia article on vitalism: “And so to those of you who may be vitalists I would make this prophecy: what everyone believed yesterday, and you believe today, only cranks will believe tomorrow.”
Similar issues are raised in this blog post, which motivated me to publish this (now slightly reworked) draft (which otherwise would have slumbered for many months on my hard drive before being polished enough to publish).
A commenter on that blog (Accelerating Future) says this:
Michael Bishop (2003). Dancing with pixies: Strong artificial intelligence and panpsychism
This paper argues against computationalism by showing it implies panpsychism.
Here it should be emphasized that in fact every realist monist naturalism (and who’s a dualist nowadays? nobody, for good reasons!) implies panpsychism – see this paper by Galen Strawson. (If you want more of this stuff, there’s also a book.) So the problem (if it is one) lies somewhere else, and certainly not in positing strong AI or computationalism. The problem can be solved by a radical (“radical” in the sense of “going to the root”) monism, but more on that later, because it’s a topic of its own.