Lots of papers available over here. All those French (pseudo) intellectuals should look to guys like him (and Fodor, and Quine, and ...) for how real philosophy (as opposed to the mere striking of fashionable poses) ought to be done: one needn't be obscure to be profound.
[Via Internet Commentator.]
please Christ no. If this was an archive of Searle, Davidson, Putnam or someone, then you'd merely be misrepresenting French philosophy, but to be a publicist for Dennett is just horrendous.
Dennett's life's work is the striking of "fashionable poses" (for a nebulous version of "strong AI", against "consciousness", for "memes", against "ethics"), and he's never once come up with an argument for any of them[1]. He's even boasted that he wasn't going to in the introduction of one of his books with that guff about "intuition pumps". Take a look at the main arguments of Dennett's career:
1. The human mind is a computer, because if it wasn't it would have to be a magical non-physical thing. All those people who claim it's not as simple as that are lying and probably believe in God.
2. You might think you have subjective experiences, but you don't, not really. What, are you claiming that God gave you them?
3. Dawkins was talking sense about memes, no really.
4. Evolutionary psychology is the totality of what it means to be moral, but don't expect me to prove this.
I defy anyone to seriously claim these four weren't fair enough summaries. The man's a far more annoying (because more arrogant) charlatan than Lacan, and that's saying something. One needn't be obscure to be profound, but sounding like a newspaper columnist is no guarantee that one's arguments are any better than those of a newspaper columnist.
[1] this is hyperbole, probably.
Posted by: dsquared | September 16, 2005 at 01:59 PM
"1. The human mind is a computer, because if it wasn't it would have to be a magical non-physical thing."
But this is true almost by definition: the laws of the universe as we know them are such that the universe can be rigorously thought of as a Turing Machine, and therefore each individual brain within said universe must be conceivable in such terms.
"2. You might think you have subjective experiences but you don't not really. What, are you claiming that God gave you them?"
Again true pretty much by definition. Your subjective sensory experiences seem real enough to you, but at the end of the day, all they consist of are chemical reactions and electrical flashes between synapses; outside of this, what else is there but other-worldly notions of ghosts in machines?
"3. Dawkins was talking sense about memes, no really."
If this is wrong, it isn't obvious to me. Care to actually demonstrate that Dennett is off his rocker here?
"4. Evolutionary psychology is the totality of what it means to be moral, but don't expect me to prove this."
I don't think much of the state of the subject as it currently is, but it's clear that any explanation of morality, which is a universal human phenomenon, must be rooted somehow in our evolutionary history, and indeed our fellow anthropoid primates do seem to display recognizable rudiments of moral sentiments which are familiar in ourselves.
"sounding like a newspaper columnist is no guarantee that one's arguments are any better than those of a newspaper columnist."
True enough, but I don't see that Dennett quite qualifies here.
Posted by: Abiola Lapite | September 16, 2005 at 02:22 PM
FWIW, I thought Dennett's first book on free will (Elbow Room) was quite excellent, in the sense described by Abiola -- very clear and insightful, not obscure at all.
Posted by: Barbar | September 16, 2005 at 03:01 PM
Abiola, they're all defensible positions, it's just that the specific philosopher Daniel Dennett doesn't bother arguing for any of them, which puts him well below people like Derrida and Foucault who do bother to argue for their conclusions.
Your Turing machine argument looks very circular, by the way. Nothing can be interpreted as a Turing machine /tout court/. Anything can be thought of as a Turing machine if you have a definition of what you are going to count as input and what you are going to count as output. For both of your examples (the universe as a whole and human minds), I don't see how you're going to do this in a non-question-begging way.
Posted by: dsquared | September 16, 2005 at 03:20 PM
[[1] this is hyperbole, probably.]
Hyperbole = incorrect now, is it? There isn't anyone to beat Dennett when it comes to supporting arguments. In any case, since when is attempting to build a theory of consciousness "against" consciousness?
[I defy anyone to seriously claim these four weren't fair enough summaries.]
Ok, I'll bite: these are ludicrous caricatures. I'll even go so far as to suggest that you haven't read any Dennett book but instead have merely read the reactions of others.
[1. The human mind is a computer, because if it wasn't it would have to be a magical non-physical thing. All those people who claim it's not as simple as that are lying and probably believe in God.]
This is a ridiculous characterisation of the materialist position. All that Dennett claims is that there isn't anything mysterious and ineffable about the mind. This is eminently defensible. It is the unsupported, circular assertions of dualists with their zombies and qualia which are indefensible.
[2. You might think you have subjective experiences, but you don't, not really. What, are you claiming that God gave you them?]
No, again. The point is that there's no such thing as first person science. You can't derive anything merely from introspection. The assertion that "subjective experiences" have some sort of intrinsic character which is inaccessible to a third person is just that: an assertion, a folk concept.
[3. Dawkins was talking sense about memes, no really.]
I don't see what's so objectionable. The meme model is a fairly robust way of looking at how ideas spread.
[4. Evolutionary psychology is the totality of what it means to be moral, but don't expect me to prove this.]
I don't know where this comes from, I don't recognise what position of his you are caricaturing. Maybe you are confusing him with someone like Pinker.
Perhaps you could at least attempt one knockdown argument against any Dennett position, seeing as it's apparently obvious the man is a charlatan.
Posted by: Frank McGahon | September 16, 2005 at 03:30 PM
I've read all of them, several times, having written review essays. I was once a big Dennett fan.
[All that Dennett claims is that there isn't anything mysterious and ineffable about the mind]
No, he specifically claims (in "Consciousness Explained") that the mind can be modelled as a Turing machine and suggests a software architecture with which to do it. Likewise, he ends up (also in CE and in "Quining Qualia") denying that there are qualia and asserting that the experience of seeing something is a Turing machine state. This is where he runs out of (non)arguments and sets up exactly the straw opponent that you're raising here.
[The point is that there's no such thing as first person science]
Lousy phil of sci, because there's no such thing as a third person observation. Dennett never comes up with an argument to back this slogan, or to explain why observing that pain hurts is different from observing cannonballs falling from a tower.
[The meme model is a fairly robust way of looking at how ideas spread.]
No it isn't, because 1. the way in which ideas spread is not usefully analogous to the way in which DNA sequences spread and 2. neither Dawkins nor Dennett has ever given individuation criteria for memes. It's the old Quine slogan "no entity without identity" - if you're going to claim that there are Xs, then a minimum commitment is that you have to say what it is for an X to be the same X as another X or to be a different X from another X.
[I don't know where this comes from]
Second half of "Darwin's Dangerous Idea", plus the book "Freedom Evolves" (which last one even diehard fans admitted was a stinker).
[Perhaps you could at least attempt one knockdown argument against any Dennett position]
As I mentioned above, the man in general puts forward defensible views without advancing arguments, so no, that isn't possible. I think that the Searlean argument against Turing-machine models of the mind that I mentioned to Abiola is a very serious problem indeed for the general school on consciousness of which Dennett is a part.
Posted by: dsquared | September 16, 2005 at 04:08 PM
"I think that the Searlean argument against Turing-machine models of the mind that I mentioned to Abiola is a very serious problem indeed for the general school on consciousness of which Dennett is a part."
Are you referring to the Chinese room argument? Because I for one have never been convinced by it.
Posted by: Andrew | September 16, 2005 at 04:20 PM
"Your Turing machine argument looks very circular, by the way."
It isn't: algorithms are *generalizations* of deterministic physical laws (which is precisely why simulations are even possible).
"Anything can be thought of as a Turing machine if you have a definition of what you are going to count as input and what you are going to count as output. For both of your examples (the universe as a whole and human minds), I don't see how you're going to do this in a non-question-begging way."
For the universe, the inputs are the initial conditions which govern it, and the outputs are its state evolution. For human minds, the input is the sensory data and the output the behavior.
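To make "initial conditions in, state evolution out" concrete, here is a toy sketch (Python, and purely an illustration of mine rather than anything from Dennett or Searle): a deterministic law iterated step by step from an initial state is all a simulation is.

    # Toy 'universe': a cannonball falling under a deterministic law.
    # Initial conditions go in; the state evolution comes out.
    def simulate(height, velocity, g=9.81, dt=0.01):
        trajectory = [(0.0, height)]
        t = 0.0
        while height > 0.0:
            velocity -= g * dt       # the 'law'
            height += velocity * dt  # the state update
            t += dt
            trajectory.append((t, height))
        return trajectory

    states = simulate(height=100.0, velocity=0.0)
    print(len(states), states[-1])   # number of steps and the final state

Nothing changes structurally if the law is vastly more complicated; the simulation is still just a rule plus an initial state.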
"I think that the Searlean argument against Turing-machine models of the mind that I mentioned to Abiola is a very serious problem indeed for the general school on consciousness of which Dennett is a part."
Searle's "Chinese room" argument is egregiously flawed, as it would imply that neurons had to be homunculi in our heads. It is perfectly sensible to speak of consciousness as an emergent phenomenon arising from individually unconscious and even inanimate activities - if a Chinese room of the sort Searle posits were indeed possible, I would be completely willing to grant it the label "sentient" even if I knew that underneath it all was just a bunch of individuals shuffling symbols according to rules. I don't subscribe to vitalism.
Posted by: Abiola Lapite | September 16, 2005 at 04:24 PM
[No, he specifically claims (in "Consciousness Explained") that the mind can be modelled as a Turing machine and suggests a software architecture with which to do it.]
That's quite a different thing to saying that the mind *is* a computer. In any case this model is considerably more robust than your implicit dualism.
[Likewise, he ends up (also in CE and in "Quining Qualia") denying that there are qualia and asserting that the experience of seeing something is a Turing machine state. This is where he runs out of (non)arguments and sets up exactly the straw opponent that you're raising here.]
This is nonsense. His whole point is that the notion of "qualia" is wrong, so of course he'd deny there are qualia. He produces a number of well-reasoned arguments with references to actual empirical phenomena to explain why the concept is redundant. And your counter-argument is that, what, qualia exist?
[No it isn't, because 1. the way in which ideas spread is not usefully analogous to the way in which DNA sequences spread and 2. neither Dawkins nor Dennett has ever given individuation criteria for memes. It's the old Quine slogan "no entity without identity" - if you're going to claim that there are Xs, then a minimum commitment is that you have to say what it is for an X to be the same X as another X or to be a different X from another X.]
It's a simple analogy. It's rather pedantic to suggest that a meme ought to track a gene in every aspect for the model to hold up. Memes couldn't behave exactly like genes; for one thing, Lamarckism and Lysenkoism are permitted by "memetic evolution".
[Second half of "Darwin's Dangerous Idea", plus the book "Freedom Evolves"]
There are precious few *normative* recommendations in either, so I don't see how you can draw any conclusion about ethics from Dennett's *positive* theories of how we evolved and how free will evolved.
[(which last one even diehard fans admitted was a stinker).]
Well this one didn't.
[As I mentioned above, the man in general puts forward defensible views without advancing arguments, so no this was not possible]
Neat trick, that. Isn't that precisely what you (incorrectly) accuse Dennett of?
For instance, what precisely do you find objectionable about his characterisation (there is no place where it "all comes together"; our "conscious experience" is pretty much the stuff we commit to memory. For example, there's not much difference between "automatically" driving to work and "consciously" driving to work, at the time, but in the latter case the drive is, for now, committed to memory) of consciousness?
Posted by: Frank McGahon | September 16, 2005 at 04:37 PM
Here's the short argument against "naive memes", which is practically all you hear anyone ever talk about.
Let's suppose that concepts or ideas are expressions of memes, and the memes replicate themselves mostly-reliably as the ideas spread through a population, in tight analogy with genes in population genetics. This, then, lets you use evolutionary game theory to talk about the spread of memes, and then you can look at the replicator dynamics and so on, to build Markov models to show how the distribution of concepts will change over time in the population.
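(Concretely, that machinery amounts to something like the following toy sketch of the discrete replicator dynamic; the three 'meme' variants and their fitness numbers are made up purely for illustration, and Python is just a convenient notation.)

    def replicator_step(freqs, fitness):
        """One generation of the discrete replicator dynamic:
        x_i <- x_i * f_i / (population-average fitness)."""
        avg = sum(x * f for x, f in zip(freqs, fitness))
        return [x * f / avg for x, f in zip(freqs, fitness)]

    # Three competing 'meme' variants with made-up constant fitnesses.
    freqs = [0.6, 0.3, 0.1]
    fitness = [1.0, 1.2, 1.5]
    for _ in range(50):
        freqs = replicator_step(freqs, fitness)
    print([round(x, 3) for x in freqs])   # the fittest variant ends up dominating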
However, it's been known for almost half a century that Markov learning models are not powerful enough to describe language acquisition. Roughly speaking, a Markov model determines a probabilistic finite automaton, and context-free grammars (which are the absolute minimum you need to describe the syntax of a natural language) can't be modelled with FSMs -- you need a pushdown automaton, or something else with a memory.
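To see the gap concretely, here is a toy sketch of my own (Python, using the textbook language a^n b^n as a stand-in for centre-embedded syntax): recognizing it needs an unbounded counter, which is exactly the memory a finite-state machine by definition lacks.

    def accepts_anbn(s):
        """Recognize a^n b^n (n >= 1) with one unbounded counter,
        i.e. the minimal 'stack' that no finite-state machine has."""
        i = 0
        count = 0
        while i < len(s) and s[i] == 'a':   # 'push' for each a
            count += 1
            i += 1
        while i < len(s) and s[i] == 'b':   # 'pop' for each b
            count -= 1
            i += 1
        # accept iff the input is exhausted, pushes match pops, and n >= 1
        return i == len(s) and count == 0 and i > 0

    print(accepts_anbn("aaabbb"), accepts_anbn("aabbb"))   # True False

A fixed finite-state table can only remember boundedly many a's, which is the pumping-lemma point in miniature.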
So, "naive memes" are totally inadequate to describe even the spread of the syntax of a language. Ideas, which are more complicated still (eg, deciding whether a fact in propositional logic is true is NP-complete), are even further beyond the descriptive power of naive memetics.
I suppose it's possible you could do some radical surgery to try and rescue the concept of memes, since it's obvious that ideas do spread throughout the human population, but it seems to me that the necessary changes would basically mean abandoning the analogy with population genetics, which is where memes draw their persuasive power from.
Posted by: Neel Krishnaswami | September 16, 2005 at 04:47 PM
As a historical aside, this Markov model argument was first deployed by Chomsky in the 1950s. At the time, behaviorist methods were the rage in linguistics, and his demonstration that they don't work pretty much shattered that orthodoxy.
The really funny thing is that exactly the same thing happened to the new orthodoxy he established back in the 70s, when Richard Montague and Barbara Partee brought in the lambda calculus and computation to move beyond syntax to start talking formally about meaning.
Posted by: Neel Krishnaswami | September 16, 2005 at 04:54 PM
[For human minds, the input is the sensory data and the output the behavior]
that's what I meant by "non-question-begging". There isn't a way of dividing the physical entities of the universe into "sensory data", "behaviour" and "neither" which doesn't make reference to exactly the kind of statements this model ought to be explaining.
[your implicit dualism]
Frank, this is an argumentative habit you've learned from Dennett and it's both unattractive and bound to lead to error.
[For instance, what precisely do you find objectionable about his characterisation {...} of consciousness?]
That it doesn't have a theory of subjective experience, that it doesn't help us get any closer to a theory of reference and that it deals with Chalmers' zombie-argument in a really hand-waving way.
Posted by: dsquared | September 16, 2005 at 05:00 PM
" There isn't a way of dividing the physical entities of the universe into "sensory data""
Oh yes there is: they're just those aspects of the universe which initiate sensations in your eyes, ears, nose, tongue, skin, etc. The only photons that matter are the ones that hit your skin or retina, the only massive (non-massless) particles the ones that you can sense, etc. This is all perfectly straightforward, and industrial robots can already replicate a fair subset of such functionality.
As for "behavior", what's so difficult about that? As dumb as they are, already existing robots *do* have behavioral repertoirs which reflect the sensory information they receive, and yet they're governed by rules of the sort you're suggesting shouldn't be able to divide up the world in a certain manner - completely deterministic software programs running on tape-limited Turing machines.
Posted by: Abiola Lapite | September 16, 2005 at 05:14 PM
Nope. Lots of molecules hit our skin every day; only some of them "produce sensations". Similarly, our bodies (and those of robots) do lots of things, only some of which are behaviour. To take a trivial example, operating a robot raises the ambient temperature of the room it is in, but this isn't behaviour. The point is (and it is an argument from Searle which is unrelated to the Chinese Room one; I swear some people act like they think "Chinese Room" was the guy's surname) that the output of a Turing machine can only be considered to have any content at all (even to *be* symbols, let alone to be *meaningful* symbols) in so far as it is interpreted by something which is not merely a Turing machine (in Searle's view, something with something like a human brain, which has the biological property that it allows a first person perspective).
I appreciate that if you're a thoroughgoing Dennettite you might be willing to say that a thermostat is conscious and that there is no real qualitative difference between what your robots do and the "behaviour" of an automobile when you press the accelerator. But if you want to escape this reductio ad absurdum (and even Dennett often looks like he wants to, hence all the stuff about S*O*A*R architectures and so on), then you need something over and above the mechanical connection; you need the symbols on the Turing machine tape to have causal power because of their *symbolic* role rather than their physical role. And Searle's (in my view entirely convincing) argument is that this ain't going to happen, because outside of a context in which there is someone to interpret them as symbols, they don't *have* a symbolic role.
Otherwise, all Dennett appears to be left with is an amazingly weak hand-waving suggestion that intentionality "emerges" in some utterly mysterious way when a system gets complicated enough and all we know about the way in which it emerges is that it corresponds exactly to his theories (or on a really good day, to both his and those of Dawkins).
Posted by: dsquared | September 16, 2005 at 08:39 PM
"Nope. Lots of molecules hit our skin every day; only some of them "produce sensations". Similarly, our bodies (and those of robots) do lots of things, only some of which are behaviour. To take a trivial example, while operating a robot raises the ambient temparature of the room it is in, but this isn't behaviour."
No one said it was; for one thing, it isn't an action initiated by the robot in order to pursue some end, any more than is your raising the temperature of whichever room you happen to be in by radiating body heat. The bottom line: robots can most certainly be driven to act in response to sensory feedback in ways any reasonable person would call "behavior": a surface rover which is programmed to prefer a particular elevation and temperature and is then placed on a strange landscape will exhibit just that, so your claims about the shortcomings of Turing machines hold no water.
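(For what it's worth, the sense-act loop involved is utterly mundane; a toy Python sketch along these lines, with a made-up rover and made-up preferences, is all "acting in response to sensory feedback" needs to mean.)

    def rover_step(sensed_elevation, sensed_temp,
                   preferred_elevation=120.0, preferred_temp=15.0):
        """A deterministic rule mapping sensor readings to an action:
        behaviour driven by sensory input, nothing more mysterious."""
        if sensed_temp > preferred_temp:
            return "seek shade"
        if sensed_elevation < preferred_elevation:
            return "climb"
        if sensed_elevation > preferred_elevation:
            return "descend"
        return "hold position"

    print(rover_step(sensed_elevation=90.0, sensed_temp=12.0))   # -> climb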
"in Searle's view, something with something like a human brain, which has the biological property that it allows a first person perspective"
And what exactly is so special about biological objects that only they should be capable of "a first person perspective", an "élan vital" or a pineal gland? Why is it that one set of creatures which arise from detailed programmes (DNA) is granted the right to possess consciousness, while another set happens not to be? The following is most definitely a programme, and we're already rapidly deciphering its instructions.
http://www.ornl.gov/sci/techresources/Human_Genome/home.shtml
"I appreciate that if you're a thoroughgoing Denettite you might be willing to say that a thermostat is conscious and that there is no real qualitative difference between what your robots do and the "behaviour" of an automobile when you press the accelerator."
Are viruses and bacteria conscious? Could they be, even in theory? All you're really indicating here is that some things are too simple to display what one could call "consciousness", not that there's something special about flesh which enables only creatures made from it to be capable of awareness.
"all Dennett appears to be left with is an amazingly weak hand-waving suggestion that intensionality "emerges" in some utterly mysterious way when a system gets complicated enough"
I don't see what is "amazingly weak" about this: every single human being on this planet is made up mostly of carbon, oxygen, hydrogen, iron and a few other elements arranged in an incredibly complex form, but if you assembled these raw ingredients in amounts equivalent to what goes into a person, you wouldn't expect it to speak to you. That he doesn't claim to know the details of how consciousness emerges from inanimate matter is to his credit, not the opposite, as anyone can make grandiose claims to knowledge. If you're going to condemn thinkers for not giving complete instructions as to how their ideas are translated into reality, you'd better be ready to write off the likes of Charles Darwin and Gregor Mendel as "hand wavers" as well.
Posted by: Abiola Lapite | September 16, 2005 at 09:23 PM
[any reasonable person would call]
Yup, that's the problem: try to find a way of deciding what's behaviour and what isn't without bringing a person (reasonable or otherwise) into the definition.
Posted by: dsquared | September 16, 2005 at 10:09 PM
btw, there is no vitalism or chauvinism in Searle's theory; he explicitly says that anything could in principle be made to be conscious and have intentional states, but that nothing which does so, does so by virtue of instantiating a particular Turing machine.
Posted by: dsquared | September 16, 2005 at 10:11 PM
dsquared is right. Dennett never explained away the question of qualia, at least in CE. He merely showed that some forms of qualia experience don't have some of the properties we usually associate with them, and that he *hopes*, given the current trend in cognitive science, they will eventually be totally redundant in explaining consciousness; but he never showed they don't exist qua qualia.
This was Chalmers' major point of emphasis in his attack on Dennett in his book The Conscious Mind.
Posted by: JuJuby | September 16, 2005 at 11:10 PM
D-squared writes:
>I think that the Searlean argument against Turing-machine models of the mind that I mentioned to Abiola is a very serious problem indeed for the general school on consciousness of which Dennett is a part.
D-Squared,
Have you read Frank Tipler's criticism of Searle's basic premise: that a human hand-simulating a program that could pass the Turing Test is a physical impossibility of the same order as jumping to the moon?
Other than that, I am critical of the mind-as-computer model too, but I subscribe to Karl Popper's arguments in this regard.
- Daniel
PS: I like your work elsewhere.
Posted by: Daniel Barnes | September 18, 2005 at 06:13 AM
Daniel: thanks. I'm not referring to the Chinese Room argument here; Searle had a number of other arguments against Turing-Machine functionalism and IMO the Chinese Room is the weakest one.
Popper is actually a Cartesian of a sort on this one, isn't he? I must confess that I didn't read Popper & Eccles (mainly because I was a hardcore Dennettite when it came out) but it didn't get terribly good reviews.
Posted by: dsquared | September 18, 2005 at 08:26 PM
D-squared writes:
>Popper is actually a Cartesian of a sort on this one isn't he?
Popper is even worse than a dualist - he is a trialist!
Are you familiar with his '3 Worlds' hypothesis? It's been slagged off by all and sundry since he first launched it in the '60s.
Which can only mean one thing, naturally - that it is a really interesting and potentially fruitful theory. If you want the short version (Popper and Eccles is excellent, but very long), the wikipedia has Popper's "Three Worlds" lecture in PDF form under 'Popperian Cosmology'. Worth a look. Be interested to know what you make of it. (The same page also has a daft "critical assessment" of Popper by the Ayn Rand fan Nicholas Dykes, but we won't hold that against the wiki.)
Posted by: Daniel Barnes | September 18, 2005 at 09:43 PM
I remember reading about the 3 worlds theory when the reviews came out and thinking it looked not entirely unlike a few good bits in Derrida and Irigaray. Thanks for the heads-up on the lecture; I'll give it a look.
Posted by: dsquared | September 18, 2005 at 09:48 PM