Intuition Pumps and Other Tools for Thinking
- by Daniel C. Dennett
W.W. Norton & Company, 2013
512 pages, hardback
Review by Jim Walker
An intuition pump is a kind of thought experiment or short story that provokes an intuition such as, "Yes, of course, it has to be so!" Although philosophers have used thought experiments for centuries, no one had given this particular kind of intuition-provoking thought experiment a name. Dennett coined the term "intuition pump," and by naming it he allows people to "grasp" the idea (just as all nouns do) and pass it around for others to use, like a tool. Dennett quotes Bo Dahlbom: "You can't do much carpentry with your bare hands and you can't do much thinking with your bare brain." In this respect an intuition pump works like a lever for the mind. Just as the spread of words and ideas through culture has increased our intelligence over time, thought experiments and intuition pumps have increased human intelligence even more. This could be partly why intelligence test scores have risen in many parts of the world since the 1930s (see the Flynn effect).
Not all intuition pumps increase our intelligence, however. Some, in fact, might make us dumber. Creationist theories, for example, may work as inspiring explanations for religious people, but do they really explain anything? A poorly designed thinking tool can lead people to believe in essences, ghosts, and gods. Such tools do nothing useful except deceive people into believing in things that aren't there. Some look like good intuition pumps on the surface, but on further examination they unintentionally mislead and can create confusion. For example, Dennett goes to great lengths to expose the problems with "Mary the Color Scientist" (also known as the knowledge argument) and the Zombie Hunch. But well-designed intuition pumps do useful things, such as explaining complex ideas through easy-to-understand stories.
A good intuition pump explains, or provides a stage on which an explanation can be created and tested. Plato, Galileo, and Einstein, for example, used intuition pumps to help explain their ideas, as have most philosophers. Most of the intuition pump tools in this book are Dennett's own inventions, and he divides them into categories such as:
A Dozen General Thinking Tools
Tools for Thinking About Meaning or Content
Tools for Thinking About Evolution
Tools for Thinking About Consciousness
Tools for Thinking About Free Will
I have read most of Dennett's books, but I spent more time on this book than on all of them combined. Dennett does not present anything new here; in fact, the book is a sort of anthology of thinking tools from his past works. The 77 short chapters provide convenient stopping points from which to contemplate his past ideas. I had to go back to his other books and re-read his fuller explanations to get a better understanding of his viewpoints. In spite of this, I confess that I am still confused about several claims he makes.
I do understand (or at least I think I do) Dennett's explanations of general consciousness as composed of a cascade of homunculi, cranes vs. skyhooks, competence without comprehension, why the zombie hunch doesn't work, his excellent theory of free will, and how evolution built the brain up from genes to subroutines. But some of Dennett's explanations elude me. Below I give brief examples of some of the problems I see:
1. One of Dennett's most controversial topics involves the concept of qualia, a term that attempts to describe the first-person subjective experience of sensations (e.g., the color of things, the taste of wine, pain, pleasure). The chapter titled "The Curse of the Cauliflower" comes from a longer philosophical piece written in 1988 called "Quining Qualia" (I encourage readers to read this longer version to fully understand Dennett's explanation). In short, Dennett wishes to overthrow the philosophical concept of qualia. Dennett does not intend to dismiss sensations but rather the philosophical attempts to define qualia, which, according to Dennett, have been "vague or equivocal" and are so "thoroughly confused" that it is "far better, tactically, to declare that there are simply no qualia at all." According to philosophers, qualia are: 1. ineffable, 2. intrinsic, 3. private, and 4. directly apprehensible ways things seem to one. The problem comes from attempting to understand consciousness from these perspectives. Dennett claims that this can only confuse, because in a sense qualia are illusions.
Note: the image below does not appear in Dennett's book or any of his articles, but in an attempt to show how qualia can mislead, look closely at the image:
The qualia you experience of the image above clearly show two panels with different shades of gray. Right? The problem is that if you make such a judgment, you are misled, because in actuality the two shades of gray are identical. To see this, simply place your finger across the horizon and you will suddenly "see" that the two panels are the same shade of gray. Of course, your second view of the image could also mislead you, so the only way to resolve the truth about this image is to use an external measuring device (a third-person scientific viewpoint) to measure the gray values. It turns out that the panels are exactly the same shade. According to Dennett, all ineffable sensations (qualia) are not what people think they are, even when it appears obvious to them. Even when you know that you're looking at an illusion, the illusion doesn't go away.
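The "external measuring device" here can be as simple as reading raw pixel values instead of trusting the eye. Below is a minimal sketch of that third-person check; the image grid and coordinates are fabricated for illustration (a real check would load the actual image file, e.g. with an image library), but the point stands: the measurement, not the visual impression, settles the question.

```python
# Sketch of a third-person "measurement" of the two-panel illusion.
# The image is faked as a grid of grayscale values (0-255); only the
# narrow band at the "horizon" (rows 4-5) carries the gradient that
# makes the two panels *look* different.
BASE_GRAY = 128
image = [[BASE_GRAY] * 10 for _ in range(10)]
for col in range(10):
    image[4][col] = BASE_GRAY + 30   # light edge of the horizon band
    image[5][col] = BASE_GRAY - 30   # dark edge of the horizon band

top_sample = image[1][5]      # a pixel well inside the top panel
bottom_sample = image[8][5]   # a pixel well inside the bottom panel

# The instrument reports what the eye denies: the panels are identical.
print(top_sample, bottom_sample, top_sample == bottom_sample)
```

Covering the horizon band with a finger is just a crude way of doing what the comparison above does exactly: removing the gradient that drives the mistaken judgment.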
Although I understand Dennett's point about the problems of qualia (and I have no reason to disagree with him), I do not fully understand why Dennett would want to declare that there are no qualia at all. At first blush, this kind of declaration makes it look like Dennett denies sensations. Although I understand why Dennett would want to overthrow the philosophical use of qualia, it seems to me that a softer sense of "qualia" is still useful, in the same way that the word "magic" is still useful to describe magic tricks. If qualia are illusions, then why not use "qualia" to describe the tricks the brain uses to fool us into thinking in these ways? Even if scientists one day fully understand all brain functions and we learn that the mind is completely materialistic, the brain will continue to produce the illusions of qualia, and it is this that also needs explanation, in the same way that magic tricks need explaining even though there is no real magic there. The definitions and meanings of words often change; why not for the word "qualia"? And if not qualia, it would be necessary to invent a new word to sum up the sensations and emotions that the brain produces, even if they are illusions, no?
2. Dennett writes: "What brains are made of is kazillions of molecular pieces that interact according to the strict laws of chemistry and physics, responding to shapes and forces; brains, in other words, are in fact only syntactic engines."
Taking this statement at face value, any chemical reaction, such as fire, is syntactic, which, in my view, seems ridiculous, but I don't think Dennett means it at this level. I suspect he means that chemistry and physics allowed neurological structures to evolve, and that this is where syntax is created. If this is what he means, I agree with it in principle, especially in regard to intelligence, but even here I'm not convinced that brains are only syntactic. What about raw feelings and emotions? Of course it would require syntax to express feelings and emotions, but what theory describes feelings and emotions as built up from only syntactic engines? We have gone a long way toward understanding intelligence as syntactic. After all, we have created computer systems that have beaten the best humans at chess (Deep Blue) and at question-answering games such as Jeopardy! (Watson). But has anyone come close to creating a syntactic system that produces raw feels?
3. In Dennett's chapter on heterophenomenology he writes: "I defend the claim that there is a straightforward, conservative extension of objective science that handsomely covers all the ground of human consciousness, doing justice to all the data without ever having to abandon the rules and constraints of the experimental methods that have worked so well in the rest of science. This third-person methodology, heterophenomenology (phenomenology of an other not oneself), is the sound way to take the first-person point of view as seriously as it can legitimately be taken."
If Dennett means that the understanding of human consciousness uses third-person data along with the scientist's first-person data, I understand and agree with his position. By first-person data, I don't mean just first-person data from a subject's point of view (after all, this is part of the methodology of heterophenomenology) but rather first-person data from the scientist's viewpoint. For example, in his criticism of the Mary the Color Scientist thought experiment, Dennett provides an alternate explanation in which Mary is not surprised upon seeing color for the first time when her captors try to trick her by showing her a blue banana. The problem here is that Mary in this thought experiment uses her first-person data to verify her color theory, which appears to conflict with strict heterophenomenology. Without that last step of seeing color for the first time, Mary would have only a hypothesis, not knowledge about color. If, however, heterophenomenology allows first-person data to verify third-person data, then I agree with Dennett. But Dennett seems to think that knowledge without experience can allow one to know about the experience of color. Dennett has Mary saying: "You have to remember that I know everything--absolutely everything--that could ever be known about the physical causes and effects of color vision."
I can understand how a scientist can know all about the effects of color, in the same way that a race car designer who has never driven can design a successful race car, or a person can design a chess computer to beat the best human chess player in the world without ever having played the game. I can understand how brain scientists might know enough about brains to create artificial brains that feel and think and see colors, but how does that explain how knowledge alone can yield an understanding of the experience of a sensation? So in this respect, I am still confused about how a scientist, even one who knows everything about color, can understand the experience of color without ever having a first-person experience of it. Could an unconscious but highly intelligent Watson-like computer, using pure heterophenomenology, understand the experience of color, or pain, or pleasure?
Despite my confusion or misunderstanding of a few of Dennett's points, this book not only explains many of the deep problems in philosophy but also encourages the reader to think about them through a class of thought experiments called intuition pumps. I suspect that this book will make people want to read more of Dennett's books, and this is a good thing.
A few quotes from the book:
There is no such thing as philosophy-free science, just science that has been conducted without any consideration of its underlying philosophical assumptions.
Asking the wrong questions risks setting any inquiry off on the wrong foot. Whenever that happens, this is a job for philosophers!
Ray Jackendoff and I have argued that we must drop the almost always tacit assumption that consciousness is the "highest" or "most central" of all mental phenomena, and I have argued that thinking of consciousness as a special medium (rather like the ether) into which contents get transduced or translated is a widespread and unexamined habit of thought that should be broken.
If understanding comes in degrees, as this example shows, then belief, which depends on understanding, must come in degrees as well...
Before there can be comprehension, there has to be competence without comprehension.
Physics will always trump meaning.
...meaning is always relative to a context of function, and there need be no original intentionality beyond the intentionality of our (sorta) selfish genes, which derive their sorta intentionality from the functional context of evolution by natural selection, instead of from an Intelligent Designer playing the role of our rich client ordering a giant robot.
Darwin's idea of evolution by natural selection is, in my opinion, the single best idea that anybody has ever had, because in a single bold stroke it unites meaning with matter, two aspects of reality that appear to be worlds apart.
Numerals are human inventions, numbers are not.
You are a walking ecosystem, and while some of the visitors are unwanted (the fungi that cause athlete's foot, or the bacteria that lead to bad breath or crowd around any infection), others are so essential that if you succeeded in evicting all the trespassers, you would die.
"Oh, so you're working on an evolutionary theory of religion. What good do you think religions provide? They must be good for something, since apparently every human culture has religion in some form or other." Well, every human culture has the common cold too. What is it good for? It's good for itself.
Consciousness is more like fame than television: fame in the brain, cerebral celebrity, a way in which some contents came to be more influential and memorable than the competition.
Of course if you define qualia as intrinsic properties of experiences considered in isolation from all their causes and effects, and logically independent of all dispositional properties, then qualia are logically guaranteed to elude all functional analysis.
If the day arrives when all the demonstrable features of consciousness are explained, all the acknowledged intellectual debts are paid, and we plainly see that something big is missing (it should stick out like a sore thumb at some point, if it is really important), those with the unshakable hunch will get to say they told us so.
If, to harken back to Wilfrid Sellars once again, qualia are what make life worth living, then qualia may not be the "experiential basis" for our ability to recognize colors from day to day, to discriminate colors, to name them.
Some of us, myself included, think the Hard Problem is a figment of Chalmers's imagination, but others--surprisingly many--have the conviction that there is or would be a real difference between a conscious person and a perfect zombie and that this is important.
I have tried for years to show that however tempting the intuition may be, it must be abandoned. I am quite sure that the tempting idea that there is a Hard Problem is simply a mistake, but I cannot prove this.
...whereas we used to think (before Turing) that human competence had to flow from comprehension (that mysterious fount of intelligence), we now appreciate that comprehension itself is an effect created (bubbling up) from a host of competences piled on competences.
Then what might the self be? I propose that it is the same kind of thing as a center of gravity, an abstraction that is, in spite of its abstractness, tightly coupled to the physical world.
It is not so much that we, using our brains, spin our yarns, as that our brains, using yarns, spin us.
I defend the claim that there is a straightforward, conservative extension of objective science that handsomely covers all the ground of human consciousness, doing justice to all the data without ever having to abandon the rules and constraints of the experimental methods that have worked so well in the rest of science. This third-person methodology, heterophenomenology (phenomenology of an other not oneself), is the sound way to take the first-person point of view as seriously as it can legitimately be taken.
The folk ideology of color is, let's face it, bonkers; color just isn't what most people think it is, but that doesn't mean that the manifest world doesn't really have any colors; it means that colors--real colors--are quite different from what most folks think they are.
Is compatibilism too good to be true? I think not; I think it is true, and we can soundly and roundly dismiss the alarmists, at the same time reforming and revising our understanding of what underwrites our concept of moral responsibility. But that is a task for the future, and it would be the work of many hands. So far as I can see, it is both the most difficult and most important philosophical problem confronting us today.