Consciousness: More like Fame than Television


February 26, 1995
Daniel C. Dennett
Center for Cognitive Studies
Tufts University
Medford, MA 02155
[In the four years since Consciousness Explained appeared, I have given many lectures on the topic to a wide variety of audiences, and gradually developed an improved series of illustrations and explanations of the hard-to-grasp theory presented in the book. This paper presents the relatively stable result of all those valuable encounters.]

Saul Steinberg's marvelous New Yorker cover from October 8, 1969 (see Figure 1), provides the best picture of human consciousness I have encountered. Not just words, but colors and shapes succeed each other in delicious association. Even the genius of Steinberg can't quite render aromas, tickles and sounds on the cover of a magazine, but at least he suggests the likelihood of their inclusion. And the whole rendering is made possible by his exploitation of a familiar cartoonists' convention, the thought balloon or thought bubble. Calling this a convention underplays its naturalness; I doubt that children ever need to have the convention explained to them--it's quite wonderfully obvious what it depicts, metaphorically: the stream of conscious contents in the mind of the man looking at the painting in the museum. This powerful and natural metaphor provides a nice setting of the problem of consciousness: If this picture gives us the metaphorical truth, what is the literal truth? How can an account of what happens in the man's brain ever do justice to the familiar--indeed intimate--facts we recognize in this metaphorical rendering?

Consciousness appears to us to consist of contentful items, arranged in a sequence, the so-called "stream of consciousness," in which each item in turn bursts quite suddenly into consciousness and thereby enters memory, perhaps only briefly to be remembered, and then forgotten. I think that hidden in this comfortable and largely innocent picture of consciousness is a deep and seductive mistake. I intend to expose and elucidate that mistake, and describe an alternative vision.

Descartes is the most salient source of the error I wish to combat. He was the first to appreciate the possibility of what we now call a "reflex arc"--an ultimately mechanical transaction in which input is appropriately reacted to without the need for any consciousness at all. In Figure 2 the boy's leg is "automatically" withdrawn from the heat of the flame

as a result of a relatively simple and direct chain of causation leading from the tugging, in effect, of nerves in the foot to the release of "animal spirits"--cerebrospinal fluid--which inflates the muscles in the leg, pulling it out of harm's way. The details were all wrong, of course, even if the idea of a reflex is fine in itself. A less innocent byproduct of Descartes' promulgation of the reflex, however, is the sharp distinction it suggests between purely unconscious (and mechanical) input-output arcs and those somehow fancier arcs of causation that leap higher in the brain, passing through the special medium of consciousness. For Descartes, this variety of arc led to--and then from--the pineal gland, a sort of fax machine to and from the soul [Figure 3].

Today we have almost all abandoned both the details and the metaphysics of Descartes' dualistic vision, but, I claim, we have not discarded quite enough. We are still enthralled by the idea of there being a special medium, not of ectoplasm or other dualistic mystery-stuff but of brain-stuff, entry into which marks off conscious events from unconscious ones. We might call this medium the Medium, since whatever enters my Medium is something to me, is something that contributes to "what it is like to be me" in a way that events in, say, my kidney or stomach (or spinal cord) do not directly contribute. This all too natural way of thinking is my target, and in order to destroy its lure, I must pose an alternative metaphor. But first let's consider how very attractive--well nigh irresistible--it is.

To begin, I want you to recall an occasion on which you have seen fireworks. Perhaps as a child you were startled to realize that a distant flash and a somewhat later boom were caused by the same explosion in the sky. Let's call that the fireworks effect. No doubt some adult explained to you that the reason you had that conscious experience was that the light, traveling much faster than the sound, arrived at you before the sound. You, the observer, are located at a point in space, and when light and sound (and aromas and heat and so forth) reach that point you become conscious of them. This introduces the idea that there is a sort of finish line somewhere in your brain; crossing this line marks the onset of consciousness of any item or content.

Such a crossing is called transduction by biologists and engineers, and can be usefully contrasted to such events as reflection and refraction, in which a signal "turns a corner" without changing media. [Figure 4] We seem to be given an instance of the special corner-turning known as conscious observation in such phenomena [Figure 5] as a human subject saying "red light" when shown a flashing red light. But although such verbal report is the canonical mark of conscious apprehension, it is not without problems, especially when we try to generalize from it, and deal with the obvious consideration that such verbal "report" is generally held to be neither sufficient (what if the subject has been saying "red light, red light, red light" for several minutes?) nor necessary (the subject can remain silent but conscious) for passage through the imagined medium of consciousness. For instance, suppose the subject's response to a sudden onset of red light is to jam his foot on the brake pedal of the car he is driving. [Figure 6] Can such a response not be rather like a Cartesian reflex reaction--with consciousness of the red light coming along independently and later (if ever)? What is the evidentiary status of such responses to red light as an eyeblink or a galvanic skin response? There is no body of theory or even informal consensus among researchers about the probative value of such reactions.

The lens of the eye is not the finish line--for that is mere refraction, not transduction. The retina is also not the finish line, obviously--since it is fully engaged in any "unconscious" or "reflex" response to a visual stimulus. What happens if we start marching into the interior of the brain in search of the crucial finish line? Consider the diagram from Frisby (1979) [Figure 7], showing the fate of the various portions of the retinal image as they move from the retina to their curiously distorted registration in V1, the visual area at the back of the brain. Even though damage to V1 does indeed produce such phenomena as blindsight, arrival at V1 cannot mark the onset of consciousness for a stimulus, since even if this is necessary for conscious visual experience--a contentious claim in need of careful unpacking--it is hardly sufficient. (We should be careful to dismiss a compelling but fallacious ground for rejecting V1 as the seat of consciousness, however: the fact, made vivid in Frisby's diagram, that the image registered in V1 is so weirdly distorted, inverted, partitioned. What strikes our eyes as alien and unfamiliar is simply irrelevant, since no similarly structured "mind's eyes" gaze on V1.)

Figure 8 is a famous diagram by David Van Essen (Felleman and Van Essen, 1991), showing the various different visual areas--"retinotopic maps"--in the brain of the macaque monkey, a close relative of ours that figures prominently in vision research. Each of the different regions is specialized, some concentrating on color, others on shape, still others on motion, or location, and so forth. V1, the first such region in the cortex to receive signals from the eyes, isn't the seat of consciousness, and none of these other, more specialized, regions is a plausible candidate precisely because of their specialization--their lack of information on so many of the topics our conscious experience is concerned with. So the temptation is to look farther inboard, nearer "the center," for the place in the brain where conscious experience happens. But if we succumb to this temptation we make a crucial mistake, as is revealed by DeYoe and Van Essen's (1988) diagram [Figure 9] of the connections between some of these separate visual areas. Notice that there are even more "downward" or "outbound" pathways than "inbound" ones. In fact, we've already reached the end of the line, where "inbound" and "outbound" lose their meaning. The habit of asking "Has the signal arrived at the conscious observer yet?"--a habit that works perfectly when we're dealing with macroscopic phenomena such as lightning and thunder reaching the eyes and ears--must be broken at this point, and replaced--forcibly, if necessary!--with some alternative habits of thought.

It takes some real exertion to change habits here, because the bad habit is so familiar and appealing. Consider Figure 10. This is a deliberate parody, but it differs only in emphasis from models that have been published in textbooks and that surreptitiously haunt the imaginations and warp the thinking of just about everybody. I call this model the Cartesian Theater, the place in the brain where the consciousness happens. Endnote 1 Whenever it is explicitly exposed, as in this figure, we all laugh--we all know that this vision of how the mind works is hopelessly wrong. The hard question is: with what can we replace it? To answer that question we should first see if we can say exactly what is wrong with this vision. It is not that the homunculi watching the screen have white coats on, or that there are two of them there, or that film, not video, is the medium of transportation. The diagram is more deeply and abstractly wrong. Its central mistake is in supposing that the work of consciousness is a distinct sort of work, different from the work done by the merely unconscious information-processing modules in the brain; work done by a distinct faculty, a salient "add-on" that might in principle be "subtracted," leaving a cognitively competent but entirely unconscious zombie. The first step towards any satisfactory replacement of the Cartesian Theater lies in the recognition that

the work done by the homunculus in the Cartesian Theater must be distributed in both space and time within the brain.

And to make the implications of this distribution vivid, we need an opposing metaphor, something that we can cling to in our attempts to avoid the powerful attractions of the Cartesian Theater. Andy Warhol provides just what we need:

In the future, everybody will be famous for fifteen minutes.

What Warhol nicely captured in this remark was a reductio ad absurdum of a certain (imaginary) concept of fame. Would that be fame? Has Warhol described a logically possible world? If we pause to think about it more carefully than usual, we see that something has been stretched beyond the breaking point. It is true, no doubt, that thanks to the mass media, fame can be conferred on an anonymous citizen almost instantaneously (Rodney King comes to mind), and thanks to the fickleness of public attention, can evaporate almost as fast, but Warhol's rhetorical exaggeration of this fact carries us into the absurdity of Wonderland. We have yet to see an instance of someone being famous for just fifteen minutes, and in fact we never will. Let some citizen be viewed for fifteen minutes or less by hundreds of millions of people, and then--unlike Rodney King--be utterly forgotten. To call that fame would be to misuse the term (ah yes, an "ordinary language" move, and a good one, if used with discretion). If that is not obvious, then let me raise the ante: could a person be famous for five seconds (not merely attended-to-by-millions of eyes but famous)? There are in fact hundreds if not thousands of people who every day pass through the state of being viewed, for a few seconds, by millions of people. Consider the evening news, presenting a story about the approval of a new drug. An utterly anonymous doctor is seen (by millions) plunging a hypodermic into the arm of an utterly anonymous patient--that's being on television, but it isn't fame! Endnote 2

I propose as the antidote to the Cartesian Theater model the claim that consciousness is a species of mental fame. Almost literally. Those contents are conscious that persevere, that monopolize resources long enough to achieve certain typical and "symptomatic" effects--on memory, on the control of behavior and so forth. Not every content can be famous, for in such a competition there must be more losers than winners. And instantaneous fame is a disguised contradiction in terms. Endnote 3

Being "in consciousness" is more like being famous than like being on television, in at least the following regards. Television is a specific medium; fame isn't. The "time of transduction" can be very precise for television, but not for fame. Fame is a relative/competitive phenomenon; television isn't. (Some people can be famous only if others, who lose the competition, remain in oblivion.) And then consider the curious American institution, the Hall of Fame. There's a Baseball Hall of Fame, a Football Hall of Fame, and for all I know, a Candlepin Bowling Hall of Fame. But as many inductees into such edifices must have realized at the time, if you're already famous, then being inducted into the Hall of Fame is a mere formality, acknowledging the undeniable; and if you're not already famous, being inducted into a Hall of Fame doesn't really make you famous. No "quantum leap" or momentous transition in phase space or "catastrophe" occurs when you cross the finish line and enter the Fame module--unless of course that event is famous on its own hook, because of your own current fame or the current fame of the institution.

But the idea of such a finish line, such a threshold of consciousness, is still almost overpoweringly attractive. Consider it, for instance, in its guise as the front door of memory. Michael Lockwood (1992) has put it this way:

"Consciousness is the leading edge of perceptual memory."

This is an idea which strikes many people--not least Lockwood himself--as so obvious as to need no serious discussion, but if I am right, it is one more version of exactly what we must resolutely deny. The idea is certainly appealing. If we think about the fireworks, we can see just how appealing it is. Imagine watching "in slow motion" as a little girl experiences the fireworks effect. We see the light start to spread from the explosion (at the speed of light, of course), and soon, when it hits the retina, you say, "Well, she's not conscious of it yet--not quite yet!" After all, merely arriving at her retina isn't enough. You watch as the neural signal from the retina slowly travels up the optic nerve to a relay in the lateral geniculate nucleus, and then on to area V1 in the cortex, and you say, "Well, still not conscious yet." It is tempting to suppose that somewhere slightly deeper, and at a time slightly later, something special must happen. At that instant--and not before then--the little girl becomes conscious of the light. Then at some much later instant the sound arrives at her ear and works its way slowly from the eardrum on up through the brain, until it too arrives at the imagined finish line at some still later time.

Consider the diagram [Figure 11] inspired by Lockwood's remark about the leading edge. We can read the diagram from left to right, following the sequence in time of events involved in conscious experience of external events. (It does not matter that the event experienced be an external, perceived event rather than an internal, introspected event, but for simplicity I will concentrate on perceptual cases, and more particularly on visual perception.) Light from events reaches the eye, and this is followed by processing in various parts of the brain. This, one deems, is all unconscious processing. Or perhaps we should call it preconscious processing. It is in the medium of neuronal activity in various tracts--we need not be more specific for our general purposes. This processing takes place over a brief interval of time, and then . . . and then, and then . . . and then finally the message passes into the theater of consciousness. At this point your brain transduces what has been merely unconscious brain activity into some special sort of conscious activity which happens at the leading edge of memory, as you can see in the diagram. In other words, an event in your life enters memory after being processed; it enters through the front door of memory--that's when the consciousness happens.

One of the virtues of Figure 11, apparently, is that it does justice to a list of truisms:

  1. Light must strike the retina before processing can begin.
  2. Processing must happen before consciousness can happen. (We're not directly aware of the light that falls on our retinas.)
  3. Consciousness must happen before you can remember an experience. (That, I would suppose, is a tautology.)
  4. A memory must be laid down before you can report it.

So this model seems at first simply to illustrate some undeniable truisms about the nature of consciousness. But now consider a simple phenomenon that exposes the difficulty with this model: meta-contrast, a visual illusion that has been much studied by experimental psychologists. If you were a subject in a meta-contrast experiment, you would sit watching a screen, on which shapes would be very briefly flashed. Suppose, for instance, that a colored disk is flashed briefly in the center of the screen. You would have no difficulty at all seeing the disk--this is not a "subliminal perception" experiment. You would be able to say "a blue disk / a red disk / a green disk," with never an error. The flash is long enough and bright enough so that anybody can see it. However, if this disk stimulus were then to be followed swiftly with the flash of a slightly larger ring, "surrounding" the place where the disk just was, what you would see--or report--would depend on how long the delay was before the second stimulus, the ring, was flashed.

If the interval of time between the two stimuli is made very small--a few milliseconds, a few thousandths of a second--a remarkable thing happens. All you see (well, all you say that you see) is just the second stimulus. You don't see the disk at all, you see only the ring. It is a stunning effect. When researchers first started theorizing about this, they were tempted to tell a story along the lines illustrated in Figure 12: The disk arrives at the eyeball first, of course, and it starts getting processed on its way up, up, up through the nervous system, and the ring arrives at the eyeball slightly later. Somehow the later ring overtakes the disk! It intercepts it, and ambushes it on its way to the theater of consciousness, so that the only thing that enters the theater is just the second stimulus. Theorists who thought this way then wondered how the ring message came to be accelerated through the system. How could it "catch up" and "pass" or "intercept" the disk message?

Before we try to answer that question, notice that there's another story that could be told about what happens in meta-contrast: the disk does make it all the way to the theater all right; it has its brief moment in the limelight as it crosses the stage; it's only afterwards, in memory, that there's dirty work done. The tampering happens after the disk has come in the front door of memory. The memory of the second stimulus, the ring, erases the memory of the conscious experience of the disk, as shown in Figure 13.

These are apparently two different theories of the same phenomenon. I want to give them simple, memorable names, because the contrast they exhibit arises again and again. Figure 13, in which the dirty work happens after the presentation event in consciousness, depicts a contamination of memory, so I'll call it the Orwellian theory, recalling George Orwell's novel 1984. You will recall, in the Ministry of Truth, the evil historians who rewrote the state archives after the fact, concealing from all future investigators what really happened. So Figure 13 is the Orwellian theory of meta-contrast, which diagnoses it as a hallucination of memory; you simply fail to remember something that you really experienced, and remember something different instead.

The earlier theory exhibited in Figure 12, which says that the dirty work happens before consciousness and is then accurately recorded thereafter, I call Stalinesque, because it reminds us of Stalin's "show trials" in the 30s, in which elaborately staged counterfeit events were presented, and then accurately recorded for the archives. There was no tampering with the archives; the tampering came before the show.

So now we seem to have a question for scientists to answer: Is the truth about meta-contrast Stalinesque or Orwellian? But notice that the only difference between the two hypotheses is whether the dirty work is taken to happen before or after the postulated "leading edge." Both theories agree that the processing of the second stimulus has interfered with what would have been the normal processing of the first stimulus. But was it "pre-conscious" tampering or "post-conscious" tampering? It perhaps seems to you that this question must have an answer, even if we can't yet--or ever--determine what it is. If so, you are succumbing to an illusion yourself, for this conviction of yours is simply an artifact of the model we have used in Figures 11-13. There is nothing necessary about that particular model of consciousness.

Consider a different model, shown in Figure 14. I trust you can see that I've simply turned most of the first model on its side. In this model we have "processing" and "memory" continuing along in time simultaneously, in parallel, and the "leading edge" has simply disappeared. Look what happens when we superimpose our question about meta-contrast on the new diagram. There may be some real uncertainty or ignorance about just when and where in the brain the interference happens between the effects of the first stimulus and the effects of the second stimulus. Eventually, we can resolve this ignorance by further scientific investigation, and in the meantime we can represent all the possible alternatives by sliding the diagram of the interference from left to right across the diagram. Does the interference happen relatively early, as shown in Figure 15,

or relatively late, as in Figure 16? We can imagine that future neuroscientific discoveries will locate the interference wherever in time you like. But this will not answer the question: Orwellian or Stalinesque?--because the defining feature distinguishing the two apparent possibilities is no longer in the model. There is no finish line!

Let me elaborate on this alternative model. According to Figure 17, your visual system decomposes its tasks into separate transductions--separate corner turnings--which determine various visual properties in different places in your brain. Shape, color, motion, location are fixed in different places and at different times, for even a single event, such as the flashing of a colored disk. These properties, once transduced, are then available for influencing later transductions, later bindings, revisions, erasures. Your perceptual judgments evolve gradually, but since they continually replace their predecessors, your brain normally keeps no record of before and after, and hence you are unable to detect this revision process--though its traces can be uncovered by subtle experiments. In the case of meta-contrast, if you are shown sometimes a single stimulus--the ring--and sometimes both stimuli, you will claim in each case to have seen just a single stimulus, the ring, but if you are required to guess each time whether it was preceded by a disk, your guesses will be substantially better than chance, which shows that some residual effects are still in your brain.

You may well wonder what the horizontal line dividing processing from memory signifies in Figure 17. What work is that line doing? The answer is: it isn't doing any work; it is, in fact, simply a vestigial trace of the bad model in Figure 11. On the alternative model, Figure 18, there isn't any real boundary in time or space separating processing from memory. Even transient effects on the retina can be quite properly considered a sort of memory effect, a trace laid down that can modulate or inform or misinform subsequent cognitive activity. Now this really should not be so surprising, for a common theme routinely alluded to in discussions of memory and perception is that each involves processes that evolve their constructions in time, revising, embellishing, dissolving, changing. The mistake lies in supposing that in addition to these editing processes, there is a privileged process that amounts to the "official" presentation of a canonical version (rather like the frames of a film being illuminated in turn by a sort of Cartesian cinema projector).

This alternative model, which I call the Multiple Drafts Model, fits the neuroscientific facts better, much better, than the model in Figure 11, the Cartesian Theater. In the case of meta-contrast, for instance, Figure 19 shows what happens if you are shown just a single stimulus, the disk or "first" stimulus: The first thing that your brain decides is simply that something has happened--you don't yet know what. If you give the brain enough time, it will go on to determine that what happened was, let's say, on the left, and that it was a circle, and then that there was some blue, and finally these contents get bound together to create the discriminated content: there was a blue disk. What is going to be the future of that bound-together blue-disk content? It may almost immediately deteriorate and have no more effects at all. Or if the green ring doesn't come along, it may not only hang around, but be recapitulated, as shown in Figure 20. Each time that happened, this would further consolidate its fame so that even years later you would remember that blue disk. But if the green ring comes along, as shown in Figure 21, it cuts short the career of the blue disk, co-opting the shape that would have been--or had just been--bound to the blue to produce blue disk, enlisting it in the cause of helping to define the inner boundary of the green ring, leaving the blue and its outer boundary to sink into swift oblivion.

It may be useful to triangulate my utterly abstract and non-detailed model with some more specific claims that have recently been defended in the neuroscientific literature. According to the model of Larry Squire and Stuart Zola-Morgan (1991), in order for long(ish)-term memory of a perceptual event to be distributed in the cortex after it has been processed by the cortex, it must, in effect, bounce through the hippocampus and then back onto the cortex (for a brief description, see Flanagan, 1992, pp. 18-19). A somewhat different role for the hippocampus in the crucial processing underlying consciousness is offered by Gray (forthcoming, BBS, plus see my commentary). Suppose, in any case, that the hippocampus is playing a very important role in securing the fame of the events that we retrospectively categorize as conscious. Then my Orwell/Stalin claim amounts to this: there is no good reason to mark the onset of fame (as opposed to "mere influence") before the hippocampal boost or after the hippocampal boost. One theorist's (e.g. Benjamin Libet's) "rising time to consciousness" (Libet, 1993) is another theorist's "curing time for memory consolidation." Many of the effects we deem to anchor our pretheoretical concept of consciousness are accomplishable without hippocampal help, and many others are not. One could try to argue for placing the "onset of consciousness" early, at the time, say, at which "binding" has put the color with the shape (permitting the initiation of a response to a blue disk, for instance) even though this memory of a seen blue disk might immediately fade, lacking the hippocampal booster. Or one could try to argue that all responses, no matter how sophisticated, to the "bound" features of such stimuli are mediated by merely fancy unconscious processes, reserving the honor of consciousness for contents that, thanks to hippocampal boosting, hang around long enough to be reported. Or better yet, one could see that this invited argument is not about anything real--any work-to-be-done--over and above the work done that both theories already agree about.

The temporal freedom provided by the Multiple Drafts Model permits us to explain other initially puzzling, even apparently paradoxical, phenomena, and I will briefly present one example. For almost a hundred years, psychologists have studied phenomena of apparent motion, known as phi phenomena. We are all familiar with phi phenomena; they are the basis for motion pictures and television. The rapid succession of stationary shapes slightly displaced creates the illusion of motion. In the simplest cases (which are the best cases for psychological research), single spots of colored light are the stimuli. If a little red light is flashed on a screen in front of you, and then another little red light is flashed on the screen slightly to one side or the other, you will see what appears to be a single moving spot of red light.

The philosopher Nelson Goodman once asked the psychologist Paul Kolers what happens if the lights are different colors (Goodman, 1978). For instance, what if you flash a red light, and then you flash a green light? Will there be apparent motion? Kolers and von Grünau (1976) ran the appropriate experiments, and the answer is: yes, there is motion. Now you may well wonder: what happens to the color of the "single" light that you see? It starts off red and then there is an abrupt mid-trajectory change to green. But this is an illusory trajectory, of course, not a real trajectory, and this presents a puzzle, illustrated in Figure 22. Your brain cannot create the content of a mid-trajectory color change--it cannot create frames C and D in the metaphorical diagram--until it has received and analyzed the second stimulus (as represented by frame B in the diagram). It has to "know" that there's a second light, and it has to know where it is and what color it is, before it can start creating the illusion that we observe in this case. A Stalinesque way to "solve" this problem would be to suppose that there's something like a "delay loop" in the brain: that A and B arrive in sequence at some antechamber, some editing studio, somewhere between the eyeball and the theater of consciousness. And in that studio, during that brief delay but after B has arrived and been recognized, frames C and D are rapidly constructed or confabulated, and inserted into the film that is then sent up to the theater for viewing in a Stalinesque show trial.

But apparently there's another, Orwellian, theory which also could explain the illusion. According to it, you're conscious of A, and then you're conscious of B, and then your memory plays a trick on you. Frames C and D are spuriously inserted in memory by the Orwellian historians after the fact (of consciousness). Almost immediately you seem to remember having seen motion occurring between A and B, but this illusion, represented by frames C and D, is simply a contamination of memory.

Now which theory might be the truth? Once again, the Multiple Drafts Model declares that neither one is the truth. The truth is that the brain is quite capable of putting retrospective content elements into its narrative stream. It can decide there's a circle on the left and it's red, and there's a circle on the right and it's green, and that there must have been a change in between them, as shown in Figure 23. Then this natural but mistaken conclusion is "pre-dated": it is given a "postmark" which places it at an earlier time in the sequence in your own stream of consciousness. Now this is an idea that many people find extremely hard to accept because it suggests to them that there must be some sort of backwards causation in time, or the "projection" backwards in time of a later event. For instance, the Oxford physicist Roger Penrose, in his recent book The Emperor's New Mind (1989), suggests that we have to have a revolution in physics in order to explain these effects.

What such phenomena actually show is indeed that the subjective sequence of conscious experience does not always line up with the objective sequence of the events in your brain that determine your subjective experience. Graphically, experienced time can have backwards kinks in it when we map it onto objective time, as shown in Figure 24. The order in which events seem to happen to you in your stream of consciousness is not the same as the order of the events occurring in your brain which are the very vehicles of those contents in your experience.

I want to show you that this idea is not as strange and revolutionary as it may first appear. It is composed, you might say, of two familiar facts that don't in themselves give us the metaphysical heebie-jeebies. The first is the simple dissociation we tolerate--without even noticing, usually--between the temporal properties of a sentence we hear, and the temporal properties of the events the sentence informs us about. Consider hearing the following sentence:

Tom arrived at the party after Bill did.

When you hear this (or read it from left to right) you learn of Tom's arrival before you learn of Bill's arrival, but what you learn is that Bill arrived before Tom. Our language lets us do that, without requiring us to stop the universe or to squeeze any time-travel into its cracks. But don't we have to use the sentence to render a little scene in our minds in which first Tom and then Bill makes an appearance? No, we don't. Understanding doesn't require play-acting of that sort, and our brain doesn't have to do any such rendering in order to understand a subjective sequence in perception, either. Supposing otherwise would be analogous to thinking that the diagram in Figure 24 needs to be completed by looping the narrative string around and running it film-wise through a projector of sorts somewhere in the brain [Figure 25]. It is precisely that extra presentation-process that the Multiple Drafts Model eliminates from our thinking.

If the loss seems hard to bear, we can perhaps find some solace in reflecting on the second familiar fact: periscopes [Figure 26]. The adjustment we need to make in our thinking about the representation of time is one we are already quite comfortable with when it is applied to space. When you look through a periscope, you experience a rather striking effect: the light bounces off the mirrors into your eyes, and this has the effect of shifting your point of view, almost miraculously, up to where the top mirror is. That is where your eyes seem to be; indeed, that is where you seem to be when you use a periscope. The actual events in your brain that accomplish vision are happening down in the brain, but where you seem to be is translated up in space by a couple of mirrors which preserve the content of vision as it would be in the higher location. This does not involve any mysterious projection in space of some ghostly or immaterial eye or mind; it is merely a logical projection. (Surely you wouldn't be foolish enough to start examining the space behind the top mirror, looking for turbulence or "force fields" or other ghostly goings-on at the spatial location of the apparent eye--or I.) This phenomenon involves a projection that is embedded in the content of your vision, not a property of the vehicles of that content. In an exactly parallel way, phenomena such as the phi phenomenon (and other, more complicated phenomena discussed in Dennett 1991 and Dennett and Kinsbourne, 1992) show that the brain can create what we might call temporal periscopes, curious occasions when time itself is apparently bent by the way the brain deals with the events falling on it.

This has some rather striking implications. What we learn from the periscope is that the idea of here--the observer's spatial location--is fixed by the content, not by the physical location of the neural events that are its vehicles. It is also true, I am claiming, that the subjective sense of now--the observer's temporal location--is fixed by the content of those brain events, not by their temporal locations. That is, the temporal sequence of subjective experience is not fixed by the sequence in which the relevant events actually happen in the brain, but by the sequence that they represent. In other words, for the same reason that subjective location is not to be equated with some location of transduction, temporal location is not to be equated with some time of transduction. Notice that the apparent location of your eye when you use a periscope (as shown in Figure 26) is not due to a special transduction event. Reflection in a mirror is not transduction at all--there is no change in medium. The transduction of the light actually happens in your eye, and it is followed by later transductions and other operations in the brain, but the apparent or subjective location of the observer--of you--is determined by the content (not by the vehicle), which is fixed by the structure of the light at that point. By the same token, subjective timing--subjective sequence and subjective simultaneity, which constitute the order in which your stream of consciousness unfolds--is not actually determined by the order of the contentful events that occur in your brain, but rather by the content: by the sense that your brain makes of all of those contents.

One last little story will illustrate my point. Figure 27 is in fact an early diagram of the brain by Vesalius. Right in the middle, marked "L", is the pineal gland. But I want to make opportunistic use of his diagram to illustrate my main point on a different scale of space and time. Let's pretend that this is a map of the Earth. "L" can stand for London. "G" can stand for Ghent. What is known in American history as the War of 1812 was fought between the British and the Americans, and on Christmas Eve, 1814, in Ghent, the two opposing nations signed a peace treaty. The news of that signing thereupon began to travel out around the globe in all directions at a rather slow pace. It arrived in London, no doubt, within a few hours, at most a day, after the signing of the treaty in Ghent. It arrived in New Orleans too late to prevent a battle, the notorious Battle of New Orleans, which was fought two weeks after the treaty was signed. Over a thousand British troops were killed.

Now suppose we were to ask the following somewhat bizarre question: when did the British Empire learn about the signing of the truce? The ambassador in Ghent learned about the signing of the truce instantaneously; he watched his own hand sign the treaty. The members of Parliament, and the King, and the other officials in London learned it some time later. The poor commander of the British forces near New Orleans learned it only too late, alas, several weeks after the event. Suppose we knew to the day, to the minute, to the second, when each element, each agent, of the British Empire learned of the signing of the truce. This still wouldn't tell us when "the British Empire" learned of the signing of the truce, because no one of those agents counts as the place where the British Empire resides.

You might be tempted to say this is false: what matters is when the King learns. As Louis XIV said, "L'état, c'est moi!" But in this instance, the King was George III, and it really didn't make much difference when he learned things! He was not really in charge. So the best we can do, in answering the question about just when the British Empire learned of the signing of the truce, is to say something along the lines of "late 1814 to early 1815." In exactly the same way, since you are not located in any one place in your brain, but are rather distributed throughout your brain--since Descartes was wrong about there being a point in the brain "where it all comes together"--if you ask yourself the question, "When did I become conscious of some particular event?" that question can have only a vague answer, not a precise answer. It could have a precise answer only if we could locate you at some point in your brain (only if there were a special Medium in your brain). Since the transmission of information around the brain is relatively slow, the dating of events in consciousness--the dating for you--has to be smeared over maybe as much as 200 msec, a fifth of a second.

And so, in conclusion, we see that the time of becoming conscious cannot be precisely defined, and it follows that although consciousness is, as tradition would insist, a sort of door into memory, it is an entry without a clear threshold. There isn't any such moment as the instant of onset of consciousness. Some are inclined to interpret this shocking conclusion as a denial of the very existence of consciousness, but that is because they are still clinging to the forlorn model. Consciousness, real consciousness, is simply not like television at all; it is more like fame.


Dennett, D., 1991, Consciousness Explained, Boston: Little Brown.

Dennett, D., and Kinsbourne, M., 1992, "Time and the Observer: the Where and When of Consciousness in the Brain," Behavioral and Brain Sciences, 15, pp. 183-247.

DeYoe and Van Essen, 1988, "Concurrent processing streams in monkey visual cortex," TINS, 11, no. 5, pp. 219-226.

Felleman and Van Essen, 1991, "Distributed Hierarchical Processing in the Primate Cerebral Cortex," Cerebral Cortex, 1, no. 1, pp. 1-47.

Flanagan, Owen, 1992, Consciousness Reconsidered, Cambridge, MA: MIT Press.

Frisby, John P., 1979, Seeing: illusion, brain and mind, Oxford, UK: Oxford University Press.

Goodman, Nelson, 1978, Ways of Worldmaking, Hassocks, Sussex: Harvester.

Kolers, P. A., and von Grünau, M., 1976, "Shape and Color in Apparent Motion," Vision Research, 16, pp. 329-55.

Libet, B., 1993, "The neural time factor in conscious and unconscious events" (and exchange with Dennett), in Experimental and Theoretical Studies of Consciousness, London: Ciba Foundation.

Lockwood, M., 1993, "Dennett's Mind," Inquiry, 36, pp. 59-72.

Penrose, R., 1989, The Emperor's New Mind, Oxford: Oxford University Press.

Sherrington, C. S., 1934, The Brain and its Mechanism, Cambridge: Cambridge University Press.

Squire, L., and Zola-Morgan, S., 1991, "The Medial Temporal Lobe Memory System," Science, 253, pp. 1380-1386.


1. C. S. Sherrington was a great neuroscientist earlier in the century, and one could hardly improve on his expression (1934) of the Cartesian view: "The mental action lies buried in the brain, and in that part most deeply recessed from outside world that is furthest from input and output."

2. Several philosophers have risen to the bait of my rhetorical question and offered counterexamples to my implied claim about the duration of fame. Here is how somebody could be famous for 15 seconds: He goes on international TV, introduces himself as the person who is about to destroy our planet, and thereupon does so. Oh, they got me! But notice that this example actually works in my favor. It draws attention to the importance of the normal sequelae: the only way to be famous for less than a longish time is to destroy the whole world in which your fame would otherwise reverberate. And if anybody wanted to cavil about whether that was really fame, we could note how the question could be resolved in an extension of the thought experiment. Suppose our antihero presses the button and nothing much happens. The world survives, and in it we either observe the normal sequelae of fame or we don't. In the latter case, we would conclude, retrospectively, that our candidate's bid for fame had simply failed, in spite of his widely broadcast image.

3. Those philosophers who see me as underestimating the power of future research in neuroscience when I claim that no further discoveries from that quarter could establish that there was indeed a heretofore undreamt-of variety of evanescent--but genuine--consciousness might ask themselves if I similarly undervalue the research potential of sociology when I proclaim that it is inconceivable that sociologists could discover that Andy Warhol's prediction had come true. This could only make sense, I submit, to someone who is still covertly attached to the idea that consciousness (or fame) is the sort of semi-mysterious property that might be discovered to be present by the tell-tale ticking of the phenomenometer or famometer (patent pending!).