A Cognitive Theory of Consciousness
Bernard J. Baars

Preface and Chapter 1
Full text (PDF and ascii) available at Baars's homepage.
Bernard J. Baars
The Wright Institute
2728 Durant Ave
Berkeley, CA 94704
Published by Cambridge University Press, 1988-1998. Electronic version published by author (© B.J. Baars, 1998). Individual copies may be made for educational purposes. Please notify author of copies made electronically, bbaars@wrightinst.edu. Republication not permitted without permission.

This book is gratefully dedicated to the pioneers in cognitive science, who made it possible.
We shall not cease from exploration and the end of all our exploring will be to come back to the place from which we came and know it for the first time. --- T.S. Eliot

Table of Contents


Part I. Introduction

Chapter 1. What is to be explained? Some preliminaries.

We focus on the issue of conscious experience as such by comparing pairs of similar events that seem to differ only in that one event is conscious while the other is not. There are many such minimally contrastive pairs of well-established facts. Different models emerge depending on which set of contrasts we emphasize. Global Workspace (GW) theory captures most of the evidence in a single coherent framework.
Part II. The basic model

Chapter 2. Model 1: Conscious representations are internally consistent and globally distributed.

In which we develop the basic theoretical metaphor of a global workspace (GW) operating in a distributed system of specialized processors. A first-approximation model based on these ideas fits a sizable subset of the evidence.
Chapter 3. The neural basis of conscious experience.
The Global Workspace metaphor has a natural neural interpretation in the Extended Reticular-Thalamic Activating System (ERTAS) of the brain. Parts of the frontal and parietal cortex seem to control access to this system.
Part III. The fundamental role of context.

Chapter 4. Model 2: Unconscious contexts shape conscious experiences.

In which we contrast the objects of conscious experience with numerous unconscious contextual systems that are needed to shape, define and evoke them.
Chapter 5. Model 3: Conscious experience is informative --- it always demands some degree of adaptation.
Repeated events tend to fade from consciousness, yet they continue to be processed unconsciously. To be conscious, an event must be novel or significant; it must apparently trigger widespread adaptive processing in the nervous system. One result of this view is an interpretation of learning as a change in the context of experience that alters the way the learned material is experienced. Numerous examples are presented.
Part IV. Goals and voluntary control

Chapter 6. Model 4: Goal contexts, spontaneous problem-solving, and the stream of consciousness.

Intentions can be treated as largely unconscious goal structures which use conscious goal images to recruit effectors and subgoals to accomplish their goals. This suggests ways in which conscious experience works to solve problems in learning, perception, thinking, and action.
Chapter 7. Model 5: Volition as ideomotor control of thought and action.
William James' ideomotor theory can handle a number of puzzling questions about voluntary control. The Global Workspace model can incorporate James' theory very comfortably; it implies that volition always involves conscious goal images that are tacitly edited by multiple unconscious criteria. Abstract concepts may be controlled by similar goal images, which may be conscious only fleetingly.
Part V. Attention, self, and conscious self-monitoring

Chapter 8. Model 6: Attention as control of access to consciousness.

Common sense makes a useful distinction between conscious experience as a subjectively passive state, versus attention as the active control of access to consciousness. GW theory easily absorbs this distinction.
Chapter 9. Model 7: Self as the dominant context of experience and action.
We can adapt the method of minimal contrasts from previous chapters to give more clarity and empirical precision to the notion of self. It appears that "self" can be treated as the enduring context of experience, one that serves to organize and stabilize experiences across many different local contexts. The "self-concept" can then be viewed as a control system that makes use of consciousness to monitor, evaluate, and control the self-system.

Part VI. Consciousness is functional.
Chapter 10. The functions of consciousness.
Contrary to some, we find that conscious experience serves a multitude of vital functions in the nervous system.
Part VII. Conclusion

Chapter 11. A summary and some future directions.

We review the flow of arguments in this book, and attempt to distill the necessary conditions for conscious experience that have emerged so far. Many phenomena remain to be explained. We sketch some ways in which GW theory may be able to accommodate them.
I. Glossary of theoretical terms.
II. Index of tables and figures.
Subject Index.
References and Author Index.


Conscious experience is notoriously the great, confusing, and contentious nub of psychological science. We are all conscious beings, but consciousness is not something we can observe directly, other than in ourselves, and then only in retrospect. Yet as scientists we aim to gather objective knowledge even about subjectivity itself. Can that be done? This book will sketch one approach, and no doubt the reader will come to his or her own judgment of its inadequacies. Of one thing, however, we can be very sure: that we cannot pursue scientific psychology and hope to avoid the problem for very long.

Indeed, historically psychologists have neither addressed nor evaded consciousness successfully, and two major psychological metatheories, introspectionism and behaviorism, have come to grief on the horns of this dilemma. Having perhaps gained some wisdom from these failures, most scientific psychologists now subscribe to a third metatheory for psychology, the cognitive approach (Baars, 1986a). Whether cognitive psychology will succeed where others have not depends in part on its success in understanding conscious experience: not just because "it is there," but because consciousness, if it is of any scientific interest at all, must play a major functional role in the human nervous system.

The first obstacle in dealing with consciousness as a serious scientific issue comes in trying to make sense of the tangled thicket of conflicting ideas, opinions, facts, prejudices, insights, misunderstandings, fundamental truths and fundamental falsehoods that surrounds the topic. Natsoulas (197x) counts at least seven major definitions of the word "consciousness" in English. One topic alone, the mind-body issue, has a relevant literature extending from the Upanishads to the latest philosophical journals -- four thousand years of serious thought. We can only nod respectfully to the vast philosophical literature and go our own way. In doing so we do not discount the importance of philosophical questions. But one time-honored strategy in science is to side-step philosophical issues for a time by focusing on empirically decidable ones, in the hope that eventually, new scientific insights may cast some light on the perennial philosophical concerns.

How are we to discover empirical evidence about consciousness? What is a theory of consciousness a theory of? Nineteenth-century psychologists like Wilhelm Wundt and William James believed that consciousness was the fundamental constitutive problem for psychology, but they had remarkably little to say about it as such. Freud and the psychodynamic tradition have much to say about unconscious motivation, but conscious experience is taken largely for granted. Behaviorists tended to discourage any serious consideration of consciousness in the first half of this century; and even cognitive psychologists have studiously avoided it until the last few years.

In truth, the facts of consciousness are all around us, ready to be studied. Practically all psychological findings involve conscious experience. Modern psychologists find themselves in much the position of Moliere's Bourgeois Gentleman, who hires a scholar to make him as sophisticated as he is wealthy. Among other absurdities, the scholar tries to teach the bourgeois the difference between prose and poetry, pointing out that the gentleman has been speaking prose all his life. This unsuspected talent fills the bourgeois gentleman with astonished pride -- speaking prose, and without even knowing it! In just this way, some psychologists will be surprised to realize that they have been studying consciousness all of their professional lives. The physicalistic philosophy of most psychologists has tended to disguise this fundamental fact, and our usual emphasis on sober empirical detail makes us feel more secure with less glamorous questions. But a psychologist can no more evade consciousness than a physicist can side-step gravity.

Even if the reader is willing to grant this much, it may still be unclear how to approach and define the issue empirically. Here, as elsewhere, we borrow a leaf from William James' book. In The Principles of Psychology (1890) James suggests a way of focusing on the issue of consciousness as such, by contrasting comparable conscious and unconscious events. James himself was hindered in carrying out this program because he believed that psychology should not deal with unconscious processes as such; unconscious events, he thought, were physiological. In contrast, our current cognitive metatheory suggests that we can indeed talk psychologically about both conscious and unconscious processes, if we can infer the properties of both on the basis of public evidence. In cognitive psychology, conscious and unconscious events have the same status as any other scientific constructs. A wealth of information has now accumulated based on this reasoning, clearing the way for us to consider comparable conscious and unconscious events side by side. We call the resulting method contrastive analysis, a term borrowed from linguistics, where it is used to determine the perceived similarities and differences between classes of speech sounds. One can think of contrastive analysis as an experiment with consciousness as the independent variable and everything else held as constant as possible.

The results of this method are very satisfying. Contrastive analysis makes it possible, for example, to take Pavlov's findings about the Orienting Response (OR), the massive wave of activity that affects all parts of the nervous system when we encounter a novel situation. We can contrast our conscious experience of a stimulus that elicits an OR to our unconscious representation of the same stimulus after the OR has become habituated due to repetition of the stimulus (Sokolov, 1963; see Chapters 1 and 5). Now we can ask: what is the difference between the conscious and the unconscious representation of this stimulus? After all, the physical stimulus is the same, the inferred stimulus representation is the same, and the organism itself is still much the same: but in the first case the stimulus is conscious, while in the second it is not. In this way we focus on the differential implications of conscious experience in otherwise very similar circumstances. It makes not a bit of difference that Pavlov was a devout physicalist, who felt that a scientific treatment of conscious experience was impossible. In time-honored scientific fashion, good data outlast the orientation of the investigators who collected them. While a number of investigators have discussed contrasts like this, there has been a very unfortunate tendency to focus on the most difficult and problematic cases, rather than the simplest and most revealing ones. For instance, there has been extensive debate about subliminal perception and "blindsight," the kind of brain damage in which people can identify visual stimuli without a sense of being conscious of them. These are important phenomena, but they are methodologically and conceptually very difficult and controversial. They are very poor sources of evidence at this stage in our understanding. Trying to tackle the most difficult phenomena first is simply destructive of the normal process of science.
It leads to confusion and controversy, rather than clarity. When Newton began the modern study of light, he did not begin with the confusing question of wave-particle duality, but with a simple prism and a ray of sunlight. Only by studying simple clear cases first can we begin to build the solid framework within which more complex and debatable questions can be understood. We will adopt this standard scientific strategy here. First we consider the clear contrasts between comparable conscious and unconscious events. Only then will we use the resulting framework to generate ideas about the very difficult boundary questions.

One could easily generate dozens of tables of contrasts, listing hundreds of facts about comparable conscious and unconscious phenomena (see Baars, 1986b). In Chapter 1 we survey some of the contrastive pairs of facts that invite such an analysis. However, in our theoretical development, starting in Chapter 2, we prefer to present only a few simplified tables, summarizing many observations in a few statements. Others might like to arrange the data differently, to suggest different theoretical consequences. The reader may find it interesting to build a model as we go along, based on the contrastive facts laid out throughout the book.

The use of cumulative empirical constraints.

While a great deal of research must still be done to resolve numerous specific issues, many useful things can already be said about the picture as a whole. Integrative theory can be based on "cumulative constraints." This is rather different from the usual method of inquiry in psychology, which involves a careful investigation of precise local evidence. Let me illustrate the difference.

Suppose we are given four hints about an unknown word.

      1. It is something to eat.
      2. One a day keeps the doctor away.
      3. It is as American as Mom's unspecified pie.
      4. It grows in an orchard.
One way to proceed is to take each hint in isolation, and investigate it carefully. For "growing in an orchard," we may survey orchards to define the probability of peaches, pears, plums, cherries and apples. That is a local, increasingly precise approach. Another approach is to accept that by itself each hint may only partly constrain the answer, and to use the set of hints as a whole to support the best guess. After all, there are many things to eat. The doctor could be kept away by a daily aspirin, or by bubonic plague, or by regular exercise. Mom could bake blueberry pie. And many fruits grow in an orchard. But "growing in an orchard" plus "one a day keeps the doctor away" eliminates bubonic plague and regular exercise. Each hint is locally incomplete. But taken together, the combination of locally incomplete facts helps to support a single, highly probable answer for the whole puzzle.
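The logic of the puzzle can be caricatured as simple set intersection. The sketch below is illustrative only and not part of the text: each candidate set is invented, and each is deliberately incomplete on its own, just as each hint is.

```python
# Each hint alone admits many candidates; intersecting the candidate sets
# leaves a single answer. All candidate sets here are invented examples.

hints = {
    "something to eat":                {"apple", "pear", "plum", "blueberry pie"},
    "one a day keeps the doctor away": {"apple", "aspirin", "regular exercise"},
    "as American as Mom's pie":        {"apple", "blueberry pie", "cherry"},
    "grows in an orchard":             {"apple", "pear", "plum", "peach", "cherry"},
}

# No single hint decides the answer, but their intersection does.
answer = set.intersection(*hints.values())
print(answer)   # {'apple'}
```

Note that every individual set contains more than one candidate; only the combination of locally incomplete constraints singles out the answer.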

Scientific psychologists are trained to perform local, increasingly precise investigations. This has the advantage of producing more and more accurate information, though sometimes about smaller and smaller pieces of the puzzle. Alternatively, we could use all the local sources of evidence together, to constrain global hypotheses. Of course, global models should make novel local predictions. But sometimes we can develop a compelling global picture, even if some of the local evidence is still missing.

The two methods are complementary. In this book we will largely pursue the second, global method.

A suggestion to the reader.

This book is in the nature of a scouting expedition, exploring a territory that is not exactly unknown, but at least uncharted by modern psychologists. After a self-imposed absence of many decades the psychological community seems poised to explore this territory once again. In that process it will no doubt probe both the evidence and the theoretical issues in great detail. This work aims to produce a preliminary map to the territory. We try here to cover as much ground as possible, in reasonable detail, to make explicit our current knowledge, and to define gaps therein.

There are two ways to read this book. First, you can take it at face value, as a theory of conscious experience. This entails some work. Though I have tried very hard to make the theory as clear and understandable as possible, the job of understanding each hypothesis, the evidence pro and con, and its relation to the rest of the theory will take some effort. An easier way is to take the theory as one way of organizing what we know today about conscious experience -- a vast amount of evidence. (I believe this book considers nearly all the major cognitive and neuroscientific findings about conscious and unconscious processes.) Rather than testing each hypothesis, the theory can be taken as a convenient "as if" framework for understanding this great literature.

The second approach is easier than the first, and may be better for students or for the general reader. Graduate students, professional psychologists, and others with a deeper commitment to the issues will no doubt wish to scrutinize the theory with greater care. The Glossary and Guide to Theoretical Claims at the end of the book defines each major concept formally and relates it to the theory as a whole; this may be helpful to those who wish to examine the theory in more detail.

A brief guide to the book.

This book sketches the outlines of a theory of conscious experience. Although it may seem complicated in detail, the basic ideas are very simple and can be stated in a paragraph or two. In essence, we develop only a single theoretical metaphor: a publicity metaphor of consciousness, suggesting that there is a "global workspace" system underlying conscious experience. The global workspace is the publicity organ of the nervous system; its contents, which correspond roughly to conscious experience, are distributed widely throughout the system. This makes sense if we think of the brain as a vast collection of specialized automatic processors, some of them nested and organized within other processors. Processors can compete or cooperate to gain access to the global workspace underlying consciousness, enabling them to send global messages to any other interested systems. Any conscious experience emerges from cooperation and competition between many different input processors. One consequence of this is that a global message must be internally consistent, or else it would degrade very rapidly due to internal competition between its components (2.0). Further, conscious experience requires that the receiving systems be adapting to, matching, or acting to achieve whatever is conveyed in the conscious global message (5.0). Another way of stating this is to say that any conscious message must be globally informative. But any adaptation to an informative message takes place within a stable but unconscious context.
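The workspace metaphor just described can be caricatured in a few lines of code. This is a minimal illustrative sketch, not the book's model: the processor names, activation values, and simple winner-take-all competition rule are all invented for illustration.

```python
# A toy sketch of the global-workspace metaphor: specialized processors
# bid for access to a single limited-capacity workspace, and the winning
# message is then broadcast globally to all the other processors.

class Processor:
    """A specialized unconscious processor that can receive global broadcasts."""
    def __init__(self, name):
        self.name = name
        self.received = []          # global messages seen so far

def broadcast(processors, bids):
    """bids maps processor name -> (activation, message). The most active
    bidder wins the workspace; its message goes to every other processor."""
    winner = max(bids, key=lambda name: bids[name][0])
    message = bids[winner][1]
    for p in processors:
        if p.name != winner:
            p.received.append(message)
    return winner, message

processors = [Processor(n) for n in ("vision", "audition", "memory", "motor")]
bids = {"vision": (0.9, "novel object in view"),
        "audition": (0.3, "steady background hum")}
winner, message = broadcast(processors, bids)
# "vision" wins the competition; all other processors now share its message
```

The two properties the sketch isolates, competition among input processors for access and global distribution of the winning content, are the ones developed in detail in Chapters 1 and 2.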

Contexts are relatively enduring structures that are unconscious, but that can evoke and be evoked by conscious events (4.0). Conscious contents and unconscious contexts interweave to create a "stream of consciousness" (6.0). The interplay between them is useful in solving a great variety of problems, in which the conscious component is used to access novel sources of information, while unconscious contexts and processors deal with routine details that need not be conscious. Voluntary control of action can be treated as a special kind of problem-solving, with both conscious and unconscious components (7.0). And if we take one plausible meaning of "self" as the dominant, enduring context of many conscious experiences, we may also say that conscious experience provides information to the self-as-context (9.0). This framework seems to unify the great bulk of empirical evidence in a reasonable way. There are other ways to think about conscious experience, but these can be seen to follow from the extended publicity metaphor. Properties like selectivity, limited capacity, self-consciousness, the ability to report conscious contents, knowledge of the world, reflective consciousness; consciousness as the domain of rationality; consciousness as the "glue" for combining different perceptual features, as the domain of error-correction and trouble-shooting, as a tool for learning; and the relationship between consciousness and novelty, voluntary control, and self --- all these points are consistent with, and appear to follow from the present framework. The reader can do a quick preview of the entire theory by perusing all the theoretical figures listed in the Index of Tables and Figures. The global workspace metaphor results in a remarkable simplification of the evidence presented in the conscious-unconscious contrasts. This great simplification provides one cause for confidence in the theory.
Further, a number of specific, testable predictions are generated throughout the book. The ultimate fate of the theory depends of course on the success or failure of those predictions.

Where we cannot suggest plausible answers, we will try at least to ask the right questions. We do this throughout by marking theoretical choice-points whenever we are forced to choose between equally plausible hypotheses. At these points reasonable people may well disagree. In each case we state arguments for and against the course we ultimately take, with some ideas for testing the alternatives. For example, in Chapter 2 we suggest that perception and imagery -- so-called "qualitative" conscious contents -- play a special role as global input that is broadcast very widely. While there is evidence consistent with this proposal, it is not conclusive; therefore we mark a "theoretical choice-point," to indicate a special need for further evidence. It is still useful to explore the implications of this idea, and we do so with the proviso that further facts may force a retreat to a previous decision point. No theory at this stage can expect to be definitive. But we do not treat theory here as a once-and-for-all description of reality. Theories are tools for thinking, and like other tools, they tend sooner or later to be surpassed.

The need to understand conscious experience.

Imagine the enterprise of scientific psychology as a great effort to solve a jig-saw puzzle as big as a football field. Several communities of researchers have been working for decades on the job of finding the missing pieces in the puzzle, and in recent years many gaps have been filled. However, one central missing piece -- the issue of conscious experience -- has been thought to be so difficult that many researchers have sensibly avoided that part of the puzzle. Yet the gap left by this great central piece has not gone away, and surrounding it are numerous issues that cannot be solved until it is addressed. If that is a reasonable analogy, it follows that the more pieces of the jig-saw puzzle we discover, the more the remaining uncertainties will tend to cluster about the great central gap where the missing piece must fit. The more we learn while continuing to circumvent conscious experience, the more it will be true that the remaining unanswered questions require an understanding of consciousness for their solution.

Certainly not everyone will agree with our method, conclusions, theoretical metaphor, or ways of stating the evidence. Good theory thrives on reasoned dissent, and the ideas developed in this book will no doubt change in the face of new evidence and further thought. We can hope to focus and define the issues in a way that is empirically responsible, and to help scotch the notion that conscious experience is something psychology can safely avoid or disregard. No scientific effort comes with a guarantee of success. But if, as the history suggests, we must choose in psychology between trying to understand conscious experience and trying to avoid it, we can in our view but try to understand.


Explicit development of this theory began in 1978. Since then a number of psychologists and neuroscientists have provided valuable input, both encouraging and critical. Among these are Donald A. Norman, David Galin, George Mandler, Michael Wapner, Benjamin Libet, Anthony Marcel, James Reason, Donald G. MacKay, Donald E. Broadbent, Paul Rozin, Richard Davidson, Ray Jackendoff, Wallace Chafe, Thomas Natsoulas, Peter S. White, Matthew Erdelyi, Arthur Reber, Jerome L. Singer, Theodore Melnechuk, Stephen Grossberg, Mardi J. Horowitz, David Spiegel, James Greeno, Jonathan Cohen, and Diane Kramer. I am especially grateful to Donald Norman, David Galin, and Mardi J. Horowitz for their open-minded and encouraging attitude, which was at times sorely needed.

I am grateful for support received as a Cognitive Science Fellow at the University of California, San Diego, funded by the Alfred P. Sloan Foundation, in 1979-80; and for a Visiting Scientist appointment in 1985-6 at the Program for Conscious and Unconscious Mental Processes, Langley Porter Neuropsychiatric Institute, University of California, San Francisco, supported by the John D. and Catherine T. MacArthur Foundation, and directed by Mardi J. Horowitz. The MacArthur Foundation is to be commended for its thoughtful and historically significant decision to support research on conscious and unconscious functions. Finally, the Wright Institute and its President, Peter Dybwad, were extremely helpful in the final stages of this work. The editorial board of Cambridge University Press showed rare intellectual courage in accepting this book for its distinguished list at a time when the theory was largely unknown. I think that is admirable, and I trust that the result justifies their confidence.

Bernard J. Baars
The Wright Institute
Berkeley, California
January, 1987

Chapter One
What is to be explained? Some preliminaries

The study ... of the distribution of consciousness shows it to be exactly such as we might expect in an organ added for the sake of steering a nervous system grown too complex to regulate itself. -- William James (1890)

1.0 Introduction.

1.1 Some history and a look ahead.
1.11 The rejection of conscious experience: Behaviorism and the positivist philosophy of science.
1.12 Empirical evidence about conscious experience: clear cases and fuzzy cases.
1.13 Modern theoretical languages are neutral with respect to consciousness.

1.2 What is to be explained? A first definition of the topic.

1.21 Objective criteria for conscious experience.
1.22 Contrastive analysis to focus on conscious experience as such.
1.23 Using multiple contrasts to constrain theory.
1.24 Examples of the method: perception and imagery.
1.25 Are abstract concepts conscious?
1.26 Some possible difficulties with this approach.
1.27 ... but is it really consciousness?

1.3 Some attempts to understand conscious experience.

1.31 Four common hypotheses.
1.32 Current models.
1.34 Limited capacity: Selective attention, dual tasks, and short term memory.
1.35 The Mind's Eye.
1.36 Cognitive architectures: distributed systems with limited capacity channels.
1.37 The Global Workspace (GW) approach attempts to combine all viable metaphors into a single theory.
1.4 Unconscious specialized processors: A gathering consensus.
1.41 There are many unconscious representations.
1.42 There are many unconscious specialized processors.
1.43 Neurophysiological evidence.
1.44 Psychological evidence.
1.45 General properties of specialized processors.
1.5 Some common themes in this book.
1.51 The role of unconscious specialists.
1.52 Conscious experience reflects the operation of an underlying limited-capacity system.
1.53 Every conscious event is shaped by enduring unconscious systems which we will call "contexts".
1.54 Conscious percepts and images are different from conscious concepts.
1.55 Are there fleeting "conscious" events that are difficult to report, but that have observable effects?
1.6 The course of theory development in this book.

1.0 Introduction

Chances are that not many hours ago, you, the reader, woke up from what we trust was a good night's sleep. Almost certainly you experienced the act of waking up as a discrete beginning of something new, something richly detailed, recallable and reportable, something that was not happening even a few minutes before. In the same way we remember going to sleep as an end to our ability to experience and describe the world. The world this morning seemed different from last night -- the sun was out, the weather had changed, one's body felt more rested. Hours must have passed, things must have happened without our knowledge. "We were not conscious," we say, as if that explains it.

At this moment you can probably bring to mind an image of this morning's breakfast. It is a conscious image -- we can experience again, though fleetingly, the color of the orange juice, the smell of hot coffee, the taste and texture of corn flakes. Where were those images just before we made them conscious? "They were unconscious", we say, or "in memory", as if that explains it.

At this instant you, the reader, are surely conscious of some aspects of the act of reading -- the color and texture of this page, and perhaps the inner sound of these words. Further, you can become conscious of certain beliefs -- a belief in the existence of mathematics, for example -- although beliefs do not consist of sensory qualities in the same way that orange juice has taste, or the way a mental image of corn flakes recreates the experience of a certain crunchy texture. In contrast to your conscious experiences, you are probably not conscious of the feeling of your chair in this instant; nor of a certain background taste in your mouth; of that monotonous background noise; of the sound of music or talking in the background; of the complex syntactic processes needed to understand this phrase; of your intentions regarding a friend; of the multiple meanings of ambiguous words, as in this case; of your eye movements; of the complex vestibular processes that are keeping you oriented to gravity; of your ability to drive a car. Even though you are not currently conscious of them, there is a great deal of evidence to support the idea that all of these unconscious events are being represented and actively processed in your nervous system.

The fact that we can predict all these things with considerable confidence indicates that conscious experience is something knowable, at least in its boundaries. But what does it mean that at this moment this event is likely to be conscious, and that one unconscious? What role does the distinction between conscious and unconscious events play in the running of the nervous system? That is the central question explored in this book. Asking the question this way allows us to use the very large empirical literature on these matters, to constrain theory with numerous reliable facts. A small set of ideas can explain many of these facts. These ideas are consistent both with modern cognitive theory, and also with many traditional notions about consciousness. We briefly review some of these traditional ideas now.

1.1 Some history and a look ahead.

Consciousness seems so obvious in its daily manifestations, yet so puzzling on closer examination. In several millennia of recorded human thought it has been viewed in many different ways.

Consciousness has had its ups and downs with a vengeance, especially in the last hundred years. Even today, more sense and more nonsense is spoken of consciousness, probably, than of any other aspect of human functioning. The great problem we face here is how to tip the balance in favor of sense, and against nonsense.

In thinking about conscious experience we are entering a stream of ideas that goes back to the earliest known writings. Any complete account of human thought about human experience must include the great technical literatures of Vedanta Hinduism, Buddhism, and Taoism; but it must also include European philosophy from Plato to Jean-Paul Sartre, as well as the various strands of mystical thought in the West. Indeed, the history of ideas in all developed cultures is closely intertwined with ideas of perception, knowledge, memory, imagination, and the like, all involving conscious experience in different ways. We cannot trace this fascinating story here in detail. Our main purpose is not to interpret the great historical literature, but to develop a theory that will simplify our understanding of conscious experience, just as any good theory simplifies its subject matter. But we will very briefly set the historical context.

When scientific psychology began in the 19th century it was intensely preoccupied with consciousness. By contrast, the 20th century so far has been remarkable for its rejection of the whole topic as "unscientific". Some psychologists in this century have even argued that conscious experience does not exist, a view that has never been seriously held before, in the long history of human thought. Nevertheless, many of these same radical skeptics have uncovered evidence that is directly relevant to the understanding of conscious experience. Though their findings are often described in ways that avoid the word "consciousness," their evidence stands, no matter what we call it. We shall find this evidence very useful.

Usually when we wish to study something -- a rock, a chemical reaction, or the actions of a friend -- we begin with simple observation. But conscious experience is difficult to observe in a straightforward way. We cannot observe someone else's experience directly, nor can we study our own experience in the way we might study a rock or a plant. One great problem seems to be this: Conscious experience is hard to study because we cannot easily stand outside of it, to observe the effects of its presence and absence. But generally in science, we gain knowledge about any event by comparing its presence and absence; that is after all what the experimental method is about. If we try to vary the degree of our own consciousness -- between waking, drowsiness, and sleep, for example -- we immediately lose our ability to observe. How do you observe the coming and going of your own consciousness? It seems futile, like a dog chasing its own tail. There is a vicious circle in attempting to observe conscious experience, one that hobbles the whole history of scientific attempts to understand consciousness.

The difficulty in studying unconscious processes is even more obvious -- by definition, we cannot directly observe them at all. Unconscious processes can only be inferred, based on our own experience and on observation of others. Throughout recorded history, individual thinkers have held that much more goes on unconsciously than common sense would have us believe, but this realization did not catch on very widely until the middle of the 19th century, and then only in the face of much resistance (Ellenberger, 1970). Acknowledging the power of unconscious processes means giving up some of our sense of control over ourselves, a difficult thing to do for many people.

In sum, throughout recorded history it has been remarkably difficult for philosophers and scientists to study and talk sensibly about either conscious or unconscious events. Even as scientific psychology was being founded in the 19th century, psychologists became caught up in these difficulties. Such early luminaries as Wilhelm Wundt and William James defined psychology as the quest for the understanding of conscious experience. William James, the preeminent American psychologist of the 19th century, is still an extraordinary source of insight into conscious functioning, and we will quote him throughout this book. But James must be treated with great caution, because of his strong philosophical preconceptions. He insisted, for example, that all psychological facts must ultimately be reduced to conscious experiences. For James, conscious experience, one of the most puzzling phenomena in psychology, was to be the foundation for a scientific psychology. But building on a foundation that is itself puzzling and badly understood is a recipe for futility -- it undermines the scientific enterprise from the start (Baars, 1986a).

James raised a further problem by getting hopelessly entangled in the great foundation problem of psychology, the mind/body problem, which Schopenhauer called "die Weltknoten" -- the "world-knot" (ref. p. in James). At various points in his classic Principles of Psychology (1890) James tried to reduce all phenomena to conscious experiences (mentalism), while at others he tried to relate them to brain processes (physicalism); this dual reduction led him to mind/body dualism, much against his will. Conflicting commitments created endless paradoxes for James. In some of his last writings (1904), he even suggests that "consciousness" should be dispensed with altogether, though momentary conscious experiences must be retained! And he insistently denied the psychological reality of unconscious processes. These different claims are so incompatible with each other as to rule out a clear and simple foundation for psychological science. Thus many psychologists found James to be a great source of confusion, for all his undoubted greatness. And James himself felt confused. By 1893(?) he was writing in despair, "The real in psychics seems to "correspond" to the unreal in physics, and vice versa; and we are sorely perplexed" (p. 460).

Toward the end of the 19th century other scientific thinkers -- notably Pierre Janet and Sigmund Freud -- began to infer unconscious processes quite freely, based on observable events such as post-hypnotic suggestion, conversion hysteria, multiple personality, slips of the tongue, motivated forgetting, and the like. Freud's insights have achieved extraordinary cultural influence (Ellenberger, 1970; Erdelyi, 1985). Indeed the art, literature, and philosophy of our time are utterly incomprehensible without his ideas and those of his opponents like Jung and Adler. But Freud had curiously little impact on scientific psychology, in part because his demonstrations of unconscious influences could not be brought easily into the laboratory -- his evidence was too complex, too rich, too idiosyncratic and evanescent for the infant science of psychology to digest.

1.11 The rejection of conscious experience: Behaviorism and the positivist philosophy of science.

The controversy and confusion surrounding consciousness helped lead to the behavioristic revolution, starting about 1913. Behaviorism utterly denied that conscious experience was a legitimate scientific subject, but it promised at least a consistent physicalistic basis on which psychology could build. For some radical behaviorists the existence of consciousness was a paradox, an epiphenomenon, or even a threat to a scientific psychology: "Consciousness", wrote John Watson in 1925, "is nothing but the soul of theology" (p. 3; viz., Baars, 1986a). Watson's behaviorism quickly achieved remarkable popularity. In various forms this philosophy of science held a dominant position in American universities until very recently.

But physicalistic psychology was not limited to America. Similar philosophies became dominant in other countries, under different labels. In Russia, Pavlov and Bekhterev espoused a physicalistic psychophysiology, and in England and parts of the European continent, the positivist philosophy of science had much the same impact. Thus at the beginning of the 20th century many psychologists rejected consciousness as a viable topic for psychology. Naturally they rejected unconscious processes as well -- if one cannot speak of conscious phenomena, one cannot recognize unconscious ones either.

The conventional view is that 19th century psychology was rejected by behaviorists and others because it was unreliable and subjectivist, because it was mired in fruitless controversy, and because it was unscientific. However, modern historical research has cast doubt on this view in all respects (Blumenthal, 1979, 1984; Danziger, 1979; Baars, 1986a). It now appears that psychologists like Wilhelm Wundt used objective measures most of the time, and employed introspection only rarely. Even a cursory reading of James' great text (1890) indicates how many "modern" empirical phenomena he knew. Numerous important and reliable effects were discovered in the 19th century, and many of these have been rediscovered since the passing of behaviorism: basic phenomena like selective attention, the capacity limits of short term memory, mental imagery, context effects in comprehension, and the like. Major controversies occurred, as they do today, but primarily about two topics which we must also address in this book: (1) the evidence for imageless thought, indicating that much "intelligent" processing goes on unconsciously (e.g. Woodworth, 1915), and (2) the question whether there is such a thing as a conscious command in the control of action (James, 1890/1980, p. ; Baars, 1986b; viz., Ch. 7). But these were important, substantive controversies, not mere metaphysical argumentation. They were perhaps unsolvable at the time because of conceptual difficulties faced by the late 19th century, some of which have been resolved today. These include the difficulties encountered by William James with unconscious processes and mentalistic reductionism.

As for introspection itself -- reports of conscious experience, sometimes by trained observers -- it is used almost universally in contemporary psychology, in studies of perception, imagery, attention, memory, explicit problem-solving, and the like (e.g. Stevens, 1966; Kosslyn, 1980; Ericsson & Simon, 1984). No doubt methodological improvements have been made, but the basic technique of asking subjects, "What did you just perceive, think, or remember?" is extremely widespread. We do not call it "introspection," and we often avoid thinking that subjects in experiments answer our questions by consulting their own experience. But surely our subjects themselves think of their task in that way, as we can learn simply by asking them. They may be closer to the truth in that respect than many experimenters who are asking the questions.

In rejecting consciousness as well as the whole psychology of common sense, behaviorists were supported by many philosophers of science. Indeed, philosophers often tried to dictate what was to be genuine psychology and what was not. Ludwig Wittgenstein, in his various phases of development, inveighed against "mentalistic language" -- the language of psychological common sense -- as "a general disease of thinking" (Malcolm, 1967). In his later work he argued against the possibility of a "private language" -- i.e., that people can really know themselves in any way. His fellow philosopher Gilbert Ryle presented very influential arguments against inferred mental entities, which he ridiculed as "ghosts in the machine" and "homunculi." Ryle believed that all mentalistic inferences involved a mixing of incompatible categories, and that their use led to an infinite regress (1949).

From a modern psychological point of view, the problem is that these philosophers made strong empirical claims that are more properly left to science. Whether people can reliably report their own mental processes is an empirical question. Whether inferred mental entities like "consciousness," "thinking" and "feeling" are scientifically useful is a decision that should be left to psychological theory. In fact, there is now extensive evidence that mental images can be reported in very reliable and revealing ways (Cooper & Shepard; Kosslyn; others). Other mental events, like intentions, may be more difficult to report, as we shall see below (6.0, 7.0, 9.0). Similarly, a vast amount of research and theory over the past twenty years indicates that inferred mental entities can be scientifically very useful, as long as they are anchored in specific operational definitions and expressed in explicit theory (e.g. Neisser, 1967; Anderson, 1983; Miller & Johnson-Laird, 1976). Sometimes mentalistic inferences are indeed flawed and circular, as Ryle argued so strongly. But not always. The job is to make scientific inferences properly. If we were to avoid all inference we would lose the power of theory, an indispensable tool in the development of science.

In one way, however, philosophies of science like behaviorism may have advanced the issue -- namely by insisting that all psychological entities could be viewed "from the outside," as objects in a single physical universe of discourse. For some psychologists consciousness could now be treated as a natural phenomenon (to be sure, with a subjective aspect), but basically like any other event in the world. In this light the most significant observations about consciousness may be found in remarks by two well-known psychologists of the time -- Clark Hull, a neobehaviorist, and Edwin G. Boring, an operationist and the preeminent historian of the period. In 1937 Hull wrote that:

"... to recognize the existence of a phenomenon (i.e. consciousness) is not the same as insisting upon its basic, i.e. logical, priority. Instead of furnishing a means for the solution of problems, consciousness appears to be itself a problem needing solution." (p. 855)

And Boring some years later (1953) summarized his own thinking about introspection by saying that:

"Operational logic, in my opinion ... shows that human consciousness is an inferred construct, a capacity as inferential as any of the other psychological realities, and that literally immediate observation, the introspection that cannot lie, does not exist. All observation is a process that takes time and is subject to error in the course of its occurrence."

This is how we view conscious experience in this book: as a theoretical construct that can often be inferred from reliable evidence; and as a basic problem needing solution. Within the behavioristic framework it was difficult to build theory, because of resistance to inferred, unobservable constructs. Today, the new cognitive metatheory has overcome this reluctance. The cognitive metatheory encourages psychologists to go beyond raw observations, to infer explanatory entities if the evidence for them is compelling (Baars, 1986a). This is not such a mysterious process -- it is what human beings are always doing in trying to understand their world. No one has ever publicly observed a wish, a feeling of love or hate, or even a pain in the belly. These are all inferred constructs, which we find useful to understand other people's actions, and sometimes even our own.

It cannot be overemphasized that such inferences are not unique to psychology. All sciences make inferences that go beyond the observables. The atom was a highly inferential entity in the first century of its existence; so was the gene; so was the vastness of geological time, a necessary assumption for Darwinian theory; and other scientific constructs too numerous to list here. Cognitive psychology applies this commonsensical epistemology in a way that is more explicit and testable than it is in everyday life. In this way, scientific psychologists have once again begun to speak of meaning, thought, imagery, attention, memory, and recently, conscious and unconscious processes -- all inferred concepts that have been tested in careful experiments and stated in increasingly adequate theories.

Our view here is that both conscious and unconscious processes involve inferences from publicly observable data. Thus conscious and unconscious events reside in the same domain of discourse -- the domain of inferred psychological events. From this perspective William James was wrong to insist that all psychological events must be reduced to conscious experiences, and behaviorists were equally wrong to insist that we cannot talk about consciousness at all. Once we accept a framework in which we simply try to understand the factors underlying the observations in exactly the way geologists try to understand rocks -- that is to say, by making plausible and testable inferences about the underlying causes -- the way becomes much clearer.

Today we may be ready to think about conscious experience without the presuppositional obstacles that have hobbled our predecessors (e.g. Posner, 1978; Mandler, 1975ab; Shallice, 1972). If that is true, we are living at a unique moment in the history of human thought. We may have a better chance to understand human conscious experience now than ever before. Note again -- this is not because we are wiser or harder-working than our predecessors, or even because we have more evidence at our disposal. We may simply be less encumbered by restrictive assumptions that stand in the way of understanding. Many scientific advances occur simply when obstructive assumptions are cleared away (Chapter 5). Such "release from fixedness" is noteworthy in the work of Copernicus and Galileo, Darwin, Freud, and Einstein. While we do not compare our work with theirs, the fact remains that progress can often be made simply by giving up certain presupposed blind spots.

1.12 Empirical evidence about conscious experience: clear cases and fuzzy cases

There are many clear cases of conscious experience. The reader may be conscious of this page, of images of breakfast, and the like. These clear cases are used universally in psychological research. When we ask a subject in a perception experiment to discriminate between two sounds, or to report on a perceptual illusion, we are asking about his or her conscious experience. Commonsensically this is obvious, and it is clearly what experimental subjects believe. But scientific psychologists rarely acknowledge this universal belief. For example, there is remarkably little discussion of the conscious aspect of perception in the research literature. The multi-volume Handbook of Perception has only one index reference to consciousness, and that one is purely historical (Carterette & Friedman, 19xx). Nevertheless, reports about the subjects' experiences are used with great reliability and accuracy in psychological research.

In addition to so many clear cases, there are many fuzzy cases where it may be quite difficult to decide whether some psychological event is conscious or not. There may be fleeting "flashes" of conscious experience that are difficult to report, as William James believed. There are peripheral "fringe" experiences that may occur while we are focused on something else. Early psychologists reported that abstract concepts have fleeting conscious images associated with them (Woodworth, 1915), and indeed the writings of highly creative people like Mozart and Einstein express this idea. Such examples are much more difficult to verify as conscious than the clear cases discussed above.

The zero-point problem.

This kind of uncertainty sometimes leads to seemingly endless controversy. For example, there is much debate about whether subliminal perceptual input is conscious or not (Marcel, 1983ab, Cheesman & Merikle, 1984; Holender, 1986). Likewise there is great argument about the evidence for "blind sight", where patients with occipital damage can name objects which they claim not to experience (Weiskrantz, 1980; Natsoulas, 1982a; Holender, 1986). It is regrettable that so much current thinking about consciousness revolves around this "zero-point problem," which may be methodologically quite beyond us today. Progress in most scientific research comes from first looking at the easy, obvious cases. Only later, using knowledge gained from the clear cases, can one resolve the truly difficult questions. Newton first used prisms to analyze light; only later was his analysis extended to difficult cases like color filters and the wave-particle issue. If Newton had begun with these difficult cases, he would never have made his discoveries about light. In science, as in law, hard cases make bad law.

In this book we will make an effort to build on clear cases of conscious and unconscious processes. We will try to circumvent the "zero point problem" as much as possible (e.g. 5.7). We use a "high criterion" for consciousness: We want people to report a conscious experience that is independently verifiable. Ordinary conscious perception obviously fits this definition, but it also includes such things as the conscious aspects of mental images, when these can be verified independently. On the unconscious side, we also set a high criterion: unconscious processes must be inferable on the basis of strong, reliable evidence, and they must not be voluntarily reportable even under the optimum conditions (Ericsson & Simon, 1984). Syntactic processing provides a strong example of such a clearly unconscious event. Even professional linguists who study syntax every working day do not claim to have conscious access to their own syntactic processes.


Insert Figure 1.12 about here.


Between these clear cases of conscious and unconscious events there is a vast range of intermediate cases (Figure 1.12). In this book we start with clear cases of conscious and unconscious events, seek a plausible theory to explain them, and then use this theoretical scaffolding to decide some of the fuzzier cases. But we will start simply.

We began this chapter with some claims about the reader's own experience. The reader is momentarily conscious of most words in the act of reading, but at the same time competing streams of potentially conscious information are likely to be unconscious (or barely conscious); syntactic processes are unconscious; most conceptual presuppositions are unconscious (Chapter 4); habituated stimuli are unconscious; imageable memories, as of this book's cover, can be momentarily conscious, but are currently unconscious; and so on. These inferences are supported by a great deal of solid, reliable evidence. Such clear cases suggest that we can indeed speak truthfully about some conscious and unconscious events.

1.13 Modern theoretical languages are neutral with respect to conscious experience.

Current theories speak of information processing, representation, adaptation, transformation, storage, retrieval, activation, and the like, without assuming that these are necessarily conscious events. This may seem obvious today, but it is actually a painfully achieved historic insight into the right way to do psychological theory (Baars, 1986a). William James, as noted above, felt strongly that all psychological events must be reducible to conscious experiences, while the behaviorists denied the relevance of either consciousness or unconsciousness. Either position makes it impossible to compare similar conscious and unconscious events, and to ask the question, "Precisely what is the difference between them?" Because it is neutral with respect to conscious experience, the language of information processing gives us the freedom to talk about inferred mental processes as either conscious or unconscious. This is a giant step toward clarity on the issues.

1.2 What is to be explained? A first definition of the topic.

What is a theory of consciousness a theory of? In the first instance, as far as we are concerned, it is a theory of the nature of experience. The reader's private experience of this word, his or her mental image of yesterday's breakfast, or the feeling of a toothache -- these are all contents of consciousness. These experiences are all perceptual and imaginal. (In this book we will use the word "imaginal" to mean internally generated quasi-perceptual experiences, including visual and auditory images, inner speech, bodily feelings, and the like.)

For present purposes we will also speak of abstract but immediately expressible concepts as conscious -- including our currently expressible beliefs, intentions, meanings, knowledge, and expectations. Notice that these abstract concepts are experienced differently from perceptual and imaginal events (Natsoulas, 1978a; Baars, 1986b, and throughout this book). Abstract concepts do not have the same rich, clear, consistent qualities that we find in the visual experience of this book: no color, texture, warmth, size, location, clear beginning and ending, etc. Perceptual and imaginal experiences are characterized by such qualities. Conceptual events are not. In contrast to qualitative conscious experiences we will sometimes refer to abstract conceptual events in terms of conscious access.

This issue is closely related to the question of focal vs. peripheral consciousness. The reader right now is conscious of these words. But much ancillary information is immediately available, as if it exists vaguely in some periphery of awareness. Some of it is in short-term memory and can be immediately brought to mind (1.x). Some of it is in the sensory periphery, like a kind of background noise. And some of it may consist of ideas that are always readily available, such as one's ability to stand up and walk to the next room. Again, it is probably better to think about peripheral events in terms of immediate conscious access, rather than prototypical conscious experience.

Common sense calls both qualitative experiences and non- qualitative concepts conscious. Similarly, common sense may call both focal and peripheral events conscious. For the time being we will follow this usage if the events in question meet our operational criteria, discussed below. A complete theory must explain both the similarities and differences between these reports. Later in this book we will also explore the notion of conscious control, as a plausible way of thinking about volition (7.0).

In reality, of course, every task people engage in involves all three elements: conscious experience, access, and control. Ultimately we cannot understand the role of consciousness if we do not explore all three. However, one can make the case that conscious qualitative experience is fundamental to the understanding of the other aspects and uses of consciousness. Thus in this book we first address the puzzle of conscious experience (Chapters 2 and 3), then explore conscious access (Chapters 4 and 5), proceed to conscious control (Chapters 6 and 7), and finally consider the integrated functioning of all three elements (Chapters 8, 9 and 10).

The first order of business, then, is to find a usable objective criterion for the existence of a conscious event. When would any reasonable person agree that someone just had some experience? What is reliable objective evidence that a person just saw a banana, felt a sharp toothache, remembered the beauty of a flower, or experienced a new insight into the nature of conscious experience?

1.21 Objective criteria: Gaining access to the phenomena

In the course of this book we will often appeal to the reader's personal experience, but only for the sake of illustration. From a scientific point of view, all evidence can be stated in entirely objective terms. We can define a useful (though not perfect) objective criterion for conscious events. There may be arguments against this first operational definition, but it marks out a clear domain which almost everyone would consider conscious. Within this domain we can proceed with theory construction, and then consider more difficult cases.

For now, we will consider people to be conscious of an event if (1) they can say immediately afterwards that they were conscious of it and (2) we can independently verify the accuracy of their report. If people tell us that they experience a banana when we present them with a banana but not with an apple, we are satisfied to suppose that they are indeed conscious of the banana. Accurate, immediate consciousness report is in fact the most commonly used criterion today. It is exactly what we obtain already in so many psychological experiments.

It is important not to confuse a useful operational definition with the reality of conscious experience. Surely many claimed experiences are not conveniently verifiable -- dreams, idiosyncratic images, subtle feelings, etc. But this is not necessary for our purpose, since we can rely upon the many thousands of experiences of all kinds that can indeed be verified. In the usual scientific fashion, we are deliberately setting a high criterion for our observations. We prefer to risk the error of doubting the existence of a conscious experience when it is actually there, rather than the opposite error of assuming its existence when it is not there.

For example, in the well-known experiment by Sperling (1960), subjects are shown a 3x3 grid of letters or numbers for a fraction of a second. Observers typically claim that they can see all the letters, but they can only recall three or four of them. Thus they pass the "consciousness report" criterion suggested above, but they fail by the accuracy criterion. However, it is troubling that subjects -- and experimenters serving as subjects -- continue to insist that they are momentarily conscious of all the elements in the array. Sperling brilliantly found a way for observers to reveal their knowledge objectively, by asking them after the exposure to report any randomly cued letter. Under these circumstances people can accurately report any arbitrary letter, suggesting that they do indeed have fleeting access to all of them. Since the response cue is only given after the physical information has disappeared, it is clear that the correct information must have come from memory, and not from the physical display. Now we can be quite confident that subjects in the Sperling experiment do have momentary conscious access to all the elements in the visual display. Both the accuracy and the "consciousness report" criteria are satisfied.
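The logic of Sperling's partial-report method can be illustrated with a toy simulation. Everything numeric here is an assumption made for illustration (a nine-item display, a transfer limit of about four items), not Sperling's actual data; the point is only that cueing a subset after display offset reveals brief access to the whole display:

```python
import random

GRID = 3   # 3x3 display, as described in the text
SPAN = 4   # assumed whole-report limit: roughly four items can be reported

def run_trial(partial_report: bool) -> float:
    """One simulated trial; returns the fraction of probed items reported."""
    letters = random.sample("ABCDEFGHJKLMNPRSTVXZ", GRID * GRID)
    rows = [letters[i * GRID:(i + 1) * GRID] for i in range(GRID)]

    if partial_report:
        # A row is cued at random AFTER display offset. One row (3 items)
        # fits within the transfer limit, so any cued row is reported fully,
        # implying that every row was briefly available.
        cued = random.randrange(GRID)
        reported = rows[cued][:SPAN]
        return len(reported) / GRID
    # Whole report: only SPAN of the nine items survive transfer.
    reported = letters[:SPAN]
    return len(reported) / (GRID * GRID)

def mean_accuracy(partial: bool, trials: int = 1000) -> float:
    return sum(run_trial(partial) for _ in range(trials)) / trials

print(mean_accuracy(partial=False))  # whole report: 4 of 9 items
print(mean_accuracy(partial=True))   # partial report: the full cued row
```

On this deliberately idealized model, whole report is capped near four items while partial report is perfect for any cued row -- the pattern Sperling took as evidence for fleeting access to the entire display.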


Insert Fig. 1.21 about here.


The Sperling experiment serves as a reminder that conscious events may decay in a few hundred milliseconds, so that immediate report is often essential (Ericsson & Simon, 1984). Sometimes even very recent events can be hard to recall -- very fleeting ones for example, or novel stimuli that cannot be "chunked" into a single experience, or stimuli that are followed by distraction or surprise. Indeed, the very act of retrieving and reporting recent material may interfere with accurate recall. But in general, recent events make for the best consciousness reports.

There are many ways to verify the accuracy of report. In perception, psychophysics, and memory experiments, we can check the stimulus directly. Studies of mental imagery typically look for internal consistency. For example, the well-known experiments by Shepard and Cooper (1973) show that in rotating mental images, the time of rotation is a highly predictable linear function of the degree of rotation. This very precise result helps validate the subjects' claim that they are indeed representing the rotating image mentally. Studies of explicit problem solving typically look for accuracy of results, subgoals, timing, and characteristic errors (Ericsson & Simon, 1984). And so on. Notice by the way that accuracy does not guarantee consciousness by itself. Aspects of mental rotation may not be conscious, for instance. Likewise, reports of a conscious experience do not guarantee that it has actually occurred. There is much evidence that people sometimes manufacture memories, images, perceptual experiences, and intentions that are demonstrably false (e.g., Nisbett & Wilson, 1977). This is why we set the criterion of both the report of a conscious experience and accuracy.
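As a sketch of that internal-consistency check, the least-squares fit below recovers a "rotation rate" from reaction times. The times are invented, perfectly linear values chosen only to show the shape of the analysis, not data from Shepard and Cooper; real data would show scatter and an r-squared somewhat below 1:

```python
# Hypothetical reaction times (ms) at several rotation angles (degrees).
angles = [0, 60, 120, 180, 240, 300]
rts = [520, 840, 1160, 1480, 1800, 2120]  # invented illustrative values

n = len(angles)
mean_x = sum(angles) / n
mean_y = sum(rts) / n

# Ordinary least-squares fit of RT = a + b * angle.
b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(angles, rts))
     / sum((x - mean_x) ** 2 for x in angles))
a = mean_y - b * mean_x

# Coefficient of determination: values near 1.0 mean a constant
# "mental rotation rate" accounts for the reports very tightly.
ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(angles, rts))
ss_tot = sum((y - mean_y) ** 2 for y in rts)
r2 = 1 - ss_res / ss_tot

print(f"intercept {a:.0f} ms, slope {b:.2f} ms/degree, r^2 {r2:.3f}")
```

A tight linear fit of this kind is what licenses the inference that subjects really are transforming an image at a roughly constant rate; as the text notes, accuracy alone would not establish that the rotation process itself is conscious.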

Notice that saying "I just experienced a banana" is a metacognitive act -- it is a report about a previous mental event. Consciousness no doubt exists even without this kind of metacognition -- it surely continues if we do not report it afterwards, even to ourselves. In states of deep absorption in a novel or a film, or in hypnosis, people may not be able to reflect on their experiences without disrupting the absorbed state (7.x), but they are quite conscious all the same. This suggests that there may be more direct ways of assessing conscious experience than the operational definition we advance here. In fact, as we discover more evidence that correlates with this definition, better operational criteria will no doubt emerge. If we find that people who are conscious by the "accurate report" criterion also have excellent recognition memory for the experience, we may "bootstrap" upward, and "accurate recognition memory" may then supersede accurate report. Or someone might discover a neural event that correlates infallibly with conscious experience, defined by accurate consciousness report; the neural event may also work when people cannot report their experience. Over time, as confidence grows in this measure, it may begin to supersede the current definition. But for now, "accurate, immediate consciousness report" is still the most obviously valid criterion.

Our first operational definition extends beyond perceptual events to purely mental images, bodily feelings, inner speech, and the like, when people can give accurate reports of having been conscious of such events. These kinds of conscious events are often called "qualitative conscious contents," because they have qualities like color, weight, taste, location in space and time, etc. In addition to qualitative conscious events, people talk about other mental contents as "conscious" if they are immediately available and expressible. Thus people can give accurate reports about their current beliefs, ideas, intentions, and expectations, but these things do not have qualities like taste or texture or color. Ideas like democracy or mathematics, a belief in another person's good intentions, and the like -- these events are non-qualitative or abstract. Nevertheless, they can in principle satisfy our operational definition, and certainly in the common meaning of "consciousness" we speak often of our conscious beliefs, ideas, and intentions. The relationship between qualitative and non-qualitative conscious contents will be a running theme in this book. Chapter 7 suggests a resolution of this problem.

Note that accurate, immediate consciousness report takes for granted a whole cognitive apparatus that any complete theory must explain. For example, it presupposes the ability to act voluntarily; this is closely related to conscious experience (see Chapter 7). Further, any theory must eventually give a principled account of the operational definitions that led to it in the first place. In the beginning we can choose measures simply because they seem plausible and useful. But eventually, in the spiraling interplay of measure and theory, we must also explain them.

1.22 Contrastive analysis to focus on conscious experiences as such.

We will focus on the notion of consciousness as such by contrasting pairs of similar events, where one is conscious but the other is not. The reader's conscious image of this morning's breakfast can be contrasted with the same information when it was still in memory, and unconscious. What is the difference between conscious and unconscious representations of the same thing? Similarly, what is the difference between the reader's experience of his or her chair immediately after sitting down, and the current habituated representation of the feeling of the chair? What is the difference between the meaning conveyed by this sentence, and the same meaning in memory, and therefore not currently available? Or between currently accessible ideas and the presupposed knowledge that is necessary to understand those ideas, but which is not currently available? All these cases involve contrasts between closely comparable conscious and unconscious events.

These contrasts are like experiments, in the sense that we vary one thing -- conscious experience of or access to the event -- and try to hold everything else constant. And indeed many experiments of this type have been published. In studies on selective attention, on subliminal perception, and on automaticity, similar conscious and unconscious events are routinely compared (e.g. MacKay, 1973; Libet, 1978; Marcel, 1983a; Sokolov, 1963; Shiffrin & Schneider, 1977). If contrastive analysis is just like doing an experiment, what is the difference between it and any perceptual experiment? It lies only in what is being compared. In perceptual experiments we might compare a 20 decibel sound to a 30 decibel sound, both of them conscious events. But in contrastive analysis, we compare two mental representations: the representation of a 30 decibel sound before habituation, when it is conscious, and the representation of the same sound after habituation, when it is unconscious (1.xx, Sokolov, 1963). Contrastive analysis allows us to observe the difference between the presence and absence of conscious experiences "from the outside." We can do this through reliable inferences from observed behavior to some inferred mental event, which may be inferable even when the subject's experience of it is lost.

1.23 Using multiple contrasts to constrain theory.

This book is concerned with "cumulative constraints" on conscious experience (Posner, 1982). As we noted in the Preface, we can look to multiple domains of evidence, so that strengths in one domain may compensate for weaknesses in another. A great deal of empirical work is required before the hypotheses advanced in this book can be considered solid. But the power of theory is precisely to make inferences about the unknown, based on what is known. As Broadbent (1958) has noted,

"The proper road for progress ... is to set up theories whch are not at first detailed, although they are capable of disproof. As research advances the theory will become continually more detailed, until one reaches the stage at which further advance is made by giving exact values ... previously left unspecified in equations whose general form was known." (Quoted by Posner, 1982, p. 168)

Our approach in this book is integrative and global rather than local. We will also find a strong convergence between the "system architecture" suggested in this book and other current cognitive theories, even though the evidence we consider is quite different (e.g., Anderson; Newell; Norman & Shallice; Reason). This is encouraging.

1.24 Some examples of the method: perception and imagery.

Perception as conscious stimulus representation.

Perception is surely the most richly detailed domain of conscious experience. In perceptual research we are always asking people what they experience, or how one experience compares to another. And we always check the accuracy of those reports. Thus research in perception and psychophysics almost always fits the criterion of "accurate report of consciousness." Someone might argue that perceptual illusions are by definition inaccurate, so that the study of illusions seems to be an exception to the rule (viz. Gregory, 1966). But in fact, even perceptual illusions fit our operational definition of conscious experience: that definition is concerned after all with accurate report with respect to the subject's experience, not with whether the experience itself matches the external world. We cannot check the accuracy of reported illusions by reference to the external world, but other validity checks are routinely used in the laboratory. Perceptual illusions are highly predictable and stable across subjects. If someone were to claim an utterly bizarre illusory experience that was not shared by any other observer, that fact would be instantly recognized. For such an idiosyncratic illusory experience we would indeed be in trouble with our operational definition. Fortunately, there are so many examples of highly reliable perceptual reports that we can simply ignore the fuzzy borderline issues, and focus on the clear cases.

Now we can apply a contrastive analysis to perceptual events. We can treat perception as input representation (e.g. Rock, 1982; Lindsay & Norman, 1977; Marr, 1982), and contrast perceptual representations to stimulus representations that are not conscious. Table 1.24a shows these contrasts. There is evidence suggesting that "unattended" streams of information are processed and represented even though they are not conscious (e.g. MacKay, 1973; but see Holender, 1986). Further, habituated perceptual events -- those to which we have become accustomed -- apparently continue to be represented in the nervous system (Sokolov, 1963; see section 1.xx). There is evidence that perceptual events are processed for some time before they become conscious, so that there are apparently unconscious input representations (Libet, 1978; Neisser, 1967). Then there are numerous ambiguities in perception, which involve two ways of structuring the same stimulus. Of these two interpretations, only one is conscious at a time, though there is evidence that the other is also represented (e.g. Swinney, 1979; Tanenhaus, Carlson & Seidenberg, 1985). There is evidence, though somewhat controversial, that visual information that is centrally masked so that it cannot be experienced directly, continues to be represented and processed (Marcel, 1983a; Holender, 1986; Cheesman & Merikle, 1984). And finally, there are many contextual representations and processes that shape a perceptual interpretation, but which are not themselves conscious (see 4.0).

Any theory of the conscious component of perception must somehow explain all of these contrasts. The problem is therefore very strongly bounded. One cannot simply make up a theory to explain one of the contrasts and expect it to explain the others.

Table 1.24a

Contrastive Evidence in Perception
Conscious Events                      Comparable Unconscious Events
1. Perceived stimuli                  1. Processing of stimuli lacking in
                                         intensity or duration; centrally
                                         masked stimuli.
                                      2. Pre-perceptual processing.
                                      3. Habituated or automatic stimulus
                                         processing.
                                      4. Unaccessed meanings of ambiguous
                                         stimuli.
                                      5. Contextual constraints on the
                                         interpretation of percepts.
                                      6. Unattended streams of perceptual
                                         input.

Several psychologists have suggested that perception has a special relationship to consciousness (Wundt, 1912; Freud, 198x; Skinner, 1974; Merleau-Ponty, 1964). This is a theme we will encounter throughout this book. A rough comparison of major input, output, and intermediate systems suggests that consciousness is closely allied with the input side of the nervous system. While perceptual processes are obviously not conscious in detail, the outcome of perception is a very rich domain of information to which we seem to have exquisitely detailed conscious access. By comparison, imagery seems less richly conscious, as are inner speech, bodily feelings, and the like. Action control seems even less conscious -- indeed, many observers have argued that the most obviously conscious components of action consist of feedback from actions performed, and anticipatory images of actions planned. But of course, action feedback is itself perceptual, and imagery is quasi-perceptual (see 1.25 and Chapter 7). The conscious components of action and imagery resemble conscious perception.

Likewise, thought and memory seem to involve fewer conscious details than perception. Even in short term memory we are only conscious of the item that is currently being rehearsed, not of the others; and the conscious rehearsed item in short term memory often has a quasi-perceptual quality. We are clearly not conscious of information in long term memory or in the semantic, abstract component of memory. In thinking and problem-solving we encounter phenomena like incubation to remind us that the details of problem solving are often carried out unconsciously (Chapter 6). Again, the most obviously conscious components in thinking and memory involve imagery or inner speech -- and these resemble perceptual events. The thoughts that come to mind after incubation often have a perceptual or imaginal quality (John-Steiner, 1986). In sum, when we compare input events (perception and imagery) with output (action) and mediating events (thought and memory), it is the input that seems most clearly conscious in its details. This kind of comparison is very rough indeed, but it does suggest that perception has a special relationship to consciousness (viz., 1.54).

Imagery: Conscious experience of internal events.

We can be conscious of images in all sensory modalities, especially vision; of inner speech; and of feelings associated with emotion, anticipatory pleasure, and anticipatory pain. These experiences differ from perception in that they are internally generated. There are now a number of techniques for assessing imagined events that can meet our operational definition of conscious experience, though the imagery literature has been more concerned with accuracy of the imagery reports than with asking whether or not the image was conscious. For example, a famous series of experiments by Shepard and Cooper (?) shows that people can rotate mental images, and that the time needed for rotation is a linear function of the number of degrees of rotation. This very precise result has been taken as evidence for the accuracy and reliability of mental images. But it is not obvious that subjects in this task are continuously conscious of the image. It is possible that in mentally rotating a chair, we are conscious of the chair at 0, 90, and 180 degrees, and less conscious at other points along the circle (Table 1.2x).

Table 1.24b

Contrastive Evidence in Imagery (*)
Conscious Events                      Comparable Unconscious Events
1. Images retrieved and generated     1. Unretrieved images in memory.
   in all modalities.
2. New visual images.                 2. Automatized visual images.
3. Automatic images that encounter
   some unexpected difficulty.
4. Inner speech: currently rehearsed  4. Currently unrehearsed words in
   words in Short Term Memory.           Short Term Memory.
                                      5. Automatized inner speech?

(*) "Images" are broadly defined here to include all quasi-perceptual events occurring in the absence of external stimulation, including inner speech and emotional feelings.

Assessing the consciousness of mental images.

Fortunately researchers in imagery have begun to address the issue of consciousness more directly. Pani (1982) solicited consciousness reports in a verifiable mental imagery task. His results are very systematic, and consistent with historical views of imagery. Pani's subjects were asked to memorize several visual shapes (Figure 1.xx), which were arbitrary, so that previous learning would not be a factor. As shown in Figure 1.24, the test

Insert Figure 1.24 about here.

shapes were designed along a similarity dimension, so that any two adjacent shapes would be relatively similar, while more distant shapes were correspondingly different. Now Pani asked his subjects to perform a discrimination task: They were to keep one shape in mind, and select which of two stimulus figures came closest to the one they had in mind. By making the two visual figures more or less similar to each other, he was also able to vary the difficulty of the task. The more similar the two stimuli were, the more difficult the discrimination.

Imagery reports were collected as a function of practice and difficulty, and the results were quite clear-cut: The more practice, the less subjects were conscious of the mental figure. Indeed, consciousness of the imaged figure dropped very predictably with practice, even over 18 trials, with a correlation of -.90. When the discrimination was made more difficult, the mental image tended to return to consciousness.

Pani's is in many ways a prototype experiment, one we will return to several times. It shows several important things. First, it suggests that even though the mental representation of the figure becomes less consciously available with practice, it continues to be used in the task. Discrimination accuracy did not drop off with practice, even though conscious access did. This result invites a contrastive analysis: after all, some sort of mental representation of the target image continues to exist, whether conscious or not; what is the difference between the conscious image and the unconscious representation? Note also the rapid recovery of the conscious image when difficulty increased. In Chapter 5 we will argue that both fading and recovery of the conscious image can be explained in terms of novelty, informativeness, and predictability. The more predictable the mental representation becomes, the more likely it is to fade from consciousness; the more novel, informative, and difficult it is, the more likely it is to be conscious.

The importance of inner speech.

Inner speech is one of the most important modes of experience. Most of us go around the world talking to ourselves, though we may be reluctant to do so out loud. We may be so accustomed to the inner voice that we are no longer aware of its existence "metacognitively," leading to the paradox of people asking themselves, "What inner voice?" But experiments on inner speech show its existence quite objectively and reliably (e.g., Klapp, Greim, & Marshburn, 1981). For several decades Singer and his colleagues have studied inner speech simply by asking people to talk out loud, which they are surprisingly willing to do (e.g. Pope and Singer, 1978). There is good evidence from this work that the inner voice maintains a running commentary about our experiences, feelings, and relationships with others; it comments on past events and helps to make plans for the future (Klinger, 1971). Clinical researchers have trained children to talk to themselves in order to control impulsive behavior (Meichenbaum & Goodman, 1971), and there are many hundreds of experiments in the cognitive literature on verbal Short Term Memory, which is roughly the domain in which we rehearse telephone numbers, consider different ideas, and talk to ourselves generally (e.g. Baddeley, 1976). Thus we actually know a great deal about inner speech, even though much of the evidence may be listed under other headings.

Short Term Memory is the domain of rehearsable, usually verbal memory. It has been known since Wundt that people can keep in immediate memory only 7 or so unrelated words, numbers, and even short phrases. If rehearsal is blocked, this number drops to three or four (Peterson & Peterson, 1959). It is quite clear that we are not conscious of everything in conventional Short Term Memory. In rehearsing a telephone number we are qualitatively conscious only of the currently rehearsed item, not of all seven numbers, although all seven are readily available. STM raises not just the issue of conscious experience, but also of voluntary control. We can ask people to rehearse numbers voluntarily, or we can interfere with rehearsal by asking them to do some competing, voluntary task, like counting backward by threes from 100 (Peterson & Peterson, 1959). A complete account of short-term memory must also include this voluntary control component (see Chapter 8).

There is considerable speculation that inner speech may become automatic with practice. Some clinical researchers suggest that people who are depressed may have rehearsed depressive ideation to the point of automaticity, so that they have lost the ability to control the self-denigrating thoughts (e.g., Beck, 1976). While this idea is plausible, I know of no studies that support it directly. This is a significant gap in the scientific literature. An experiment analogous to Pani's work on visual imagery may be able to provide the missing evidence.

1.25 Are abstract concepts conscious?

Philosophers have noted for many centuries that we are conscious of the perceptual world in ways that differ from our awareness of concepts. Perception has qualities like color, taste, and texture. Concepts like "democracy" or "mathematics" do not. And yet, ordinary language is full of expressions like "I am conscious of his dilemma," "I consciously decided to commit murder" and the like. Abstract beliefs, knowledge, intentions, decisions, and the like, are said to be conscious at times. And certainly our operational definition would allow this: If someone claims to be conscious of a belief in mathematics, and we can verify the accuracy of this claim somehow, it would indeed fit the definition of an "accurate report of being conscious of something." But can we really say that people are conscious of a belief that has no experienced qualities like size, shape, color, or location in time and space?

We will suppose that it is meaningful to be conscious of some abstract concept, although the nature of the relationship between qualitative and non-qualitative experiences will be a theme throughout the book (1.xx). We can point to a number of contrastive facts about our consciousness of abstract concepts. For example, the reader is probably not conscious right now of the existence of democracy, but if we were to ask whether democracy exists, this abstract fact would probably become consciously available. That is, we can contrast occasions when a concept is in memory but not "conscious" with the times when it is available "consciously." Further, there are reasons to believe that access to abstract concepts becomes less conscious with practice and predictability, just as images become less conscious with practice (5.xx). Thus consciousness of abstract concepts seems to behave much like the conscious experience of percepts and images. We will speak of conscious experience of percepts and images, and conscious access to abstract concepts, intentions, beliefs, and the like. Chapter 7 will suggest a solution to the problem of the relationship between qualitative experiences and non-qualitative conscious access.

In sum, we can find several contrasts between matched conscious and unconscious events in the realms of perception, imagery, and even abstract concepts. These are only a few examples of the contrastive analysis method (see Baars, 1986b, for more examples); in the remainder of the book we will perform several others.

Thus we gain a great deal of mileage from contrastive analysis in this book.

1.26 Some possible difficulties with this approach

The logic of contrastive analysis is much like the experimental method, and some of the same arguments can be raised against it. In an experiment, if A seems to be a necessary condition for B, we can always question whether A does not disguise some other factor C. This question can be raised about all of the contrasts: What if the contrasts are not minimal? What if something else is involved? What if automatic skills are unconscious because they are coded in a different, procedural format, which cannot be read consciously (Anderson, 1983)? What if subliminal stimulation is unconscious not because the stimulus has low energy, but because the duration of the resulting neural activity is too short? These are all possibilities. In the best of all possible worlds we would run experiments to test all the alternative hypotheses. For the time being, we will rely mainly on the extensive evidence that is already known, and try to account for it with the smallest set of principles that work. But any explanation is open to revision.

1.27 ... but is it really consciousness?

A skeptical reader may well agree with much of what we have said so far, but still wonder whether we are truly describing conscious experience, or whether, instead, we can only deal with incidental phenomena associated with it. Of course, in a scientific framework one cannot expect to produce some ultimate, incorrigible understanding of "the thing itself." Rather, one can aim for an incremental advance in knowledge. No matter how much we learn about conscious experience, there may always be some irreducible core of "residual subjectivity" (Natsoulas, 1978b). In this connection it is worth reminding ourselves that physicists are still working toward a deeper understanding of gravity, a centerpiece of physical science for almost four hundred years. Yet early developments in the theory of gravity were fundamental, and provided the first necessary steps on the road to current theory. We can work toward a reasonable theory, but not an ultimate one.

These considerations temper the quest for better understanding. And yet, scientific theories in general claim to approach the "thing itself," at least more so than competing theories. Physics does claim to understand and explain the planetary system, and biology really does seem to be gaining a genuine understanding of the mechanism of inheritance. These topics, too, were considered shocking and controversial in their time. Generally in science, if it looks like a rabbit, acts like a rabbit, and tastes like a rabbit, we are invited to presume that it is indeed a rabbit. Similarly, if something fits all the empirical constraints one can find on conscious experience, it is likely to be as close to it as we can get at this time. Of course, any claim that the current theory deals with conscious experience as such depends on the reliability, validity, and completeness of the evidence.

It is customary in cognitive psychology to avoid this debate through the use of scientific euphemisms like "attention," "perception," "exposure to the stimulus," "verbal report," "strategic control" and the like. These terms have their uses, but they also tend to disguise the real questions. "Strategic control" is a good way to refer to the loss of voluntary control over automatic skills (Shiffrin & Schneider, 1977; Schneider & Shiffrin, 1977). But using this term skirts the question of the connection between conscious experience and voluntary, "conscious" control. Once we label things in terms of conscious experience, this question can no longer be evaded (see Chapter 7). In this book we will find it helpful to call things by their usual names, because that tends to bring up the major issues more directly. None of the current crop of euphemisms for conscious experience conveys precisely what we mean by "conscious experience," either in life, or in this book.

1.3 Some attempts to understand conscious experience.

There is now once more a rising tide of scientific interest in conscious experience. G.A. Miller (1986) has called consciousness one of the three major "constitutive" problems of psychology -- the problems that define psychology as a discipline. It therefore makes sense to take another look at existing efforts to understand the topic. We will briefly review some common explanatory metaphors, explore some current models, and finally sketch the themes that will be developed further in this book. Again, the reader should not become discouraged by the apparent complexity and divergence of the evidence -- the rest of this book aims to capture it all in terms of a few basic ideas.

1.31 Four common hypotheses.

The Activation Hypothesis.
One common suggestion is that consciousness involves activation of elements in memory, which reach consciousness once they cross some activation threshold. We will call this the Activation Hypothesis; it is a current favorite, because many of today's cognitive theories use the concept of activation for reasons of their own. The Activation Hypothesis was stated as early as 1824 by Johann Herbart. In a very modern vein, he wrote:

"As it is customary to speak of an entry of the ideas into consciousness, so I call threshold of consciousness that boundary which an idea appears to cross as it passes from the totally inhibited state into some ... degree of actual (conscious) ideation. ... As we may speak of the intensification and weakening of ideas, so I refer to an idea as below the threshold if it lacks the strength to satisfy those conditions. ... it may be more or less far below the threshold, according as it lacks more or less of the strength which would have to be added to it in order for it to reach the threshold. Likewise, an idea is above the threshold insofar as it has reached a certain degree of actual (conscious) ideation." (Herbart, 1824/1961, p. 40. Italics in the original.)

Studies of perception, imagery, and memory all provide some evidence for this idea. Low-intensity stimuli in a normal surround do not become conscious. When two stimuli both evoke the same association, it is more likely to become conscious than when only one stimulus evokes the association (Anderson, 19xx). And so on. Numerous phenomena involving consciousness can be explained naturally with the idea of an activation threshold. In recent years a number of models have been proposed involving "spreading activation", which are in spirit not far removed from Herbart's thoughts. These models view knowledge as a network of related elements, whether they be phonemes, words, or abstract concepts.
Information can spread from node to node; the degree of involvement of any element is indicated by an activation number that is assigned to each node. These models are very effective, providing a flexible and powerful theoretical language for psychology. They have been applied to modeling language, visual perception, word perception, imagery, memory retrieval, speech production, and the like (see Rumelhart, McClelland, and the PDP Group, 1986). However, in these models the strength of activation is not interpreted as the likelihood of the activated material becoming conscious. Several theorists have made tentative suggestions that consciousness may in fact involve high-level activation. This is attractive in some ways, and indeed the model we propose in Chapter 2 may be stated in terms of activation (2.33). But we will sound the following note of caution about the use of activation alone to represent access to consciousness.
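The threshold idea can be made concrete in a few lines of code. The sketch below is a minimal illustration only, not any published model: the node names, connection weights, and threshold value are all invented for the example. It shows Herbart's claim in miniature, and also the Anderson-style result cited above, in which an association evoked by two stimuli crosses the threshold while the same association evoked by one stimulus alone does not.

```python
# Minimal sketch of Herbart-style threshold activation in a spreading-
# activation network. All names, weights, and the threshold are invented
# for illustration; this is not any published model.

THRESHOLD = 1.0  # activation needed to reach "consciousness"

# Associative links: node -> {neighbor: connection strength}
links = {
    "banana": {"fruit": 0.6, "yellow": 0.5},
    "lemon":  {"fruit": 0.4, "yellow": 0.7},
    "fruit":  {},
    "yellow": {},
}

def spread(sources, steps=1):
    """Spread activation outward from fully activated source nodes."""
    activation = {node: 0.0 for node in links}
    for node in sources:
        activation[node] = 1.0
    for _ in range(steps):
        incoming = {node: 0.0 for node in links}
        for node, level in activation.items():
            for neighbor, weight in links[node].items():
                incoming[neighbor] += level * weight
        for node in links:
            activation[node] += incoming[node]
    return activation

def conscious(activation):
    """Elements at or above threshold are candidates for consciousness."""
    return {node for node, level in activation.items() if level >= THRESHOLD}

# One stimulus leaves the shared association "fruit" below threshold
# (0.6 < 1.0); two converging stimuli push it over (0.6 + 0.4 >= 1.0).
one_stimulus = conscious(spread(["banana"]))
two_stimuli = conscious(spread(["banana", "lemon"]))
```

On this toy scheme, `one_stimulus` contains only "banana", while `two_stimuli` includes "fruit" as well, because the two sources sum their contributions at the shared node.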
The trouble with unaided activation.
Activation by itself is not sufficient to produce a conscious experience. This is shown especially by phenomena like habituation and automatization of conscious experience when an event is presented over and over again. We will call these phenomena Redundancy Effects. They are quite important in this book (Chapter 5). Redundancy Effects show that we generally lose consciousness of repeated and predictable events. This applies to perceived stimuli, but also to repeated mental images, to any practiced, predictable skill, and even to predictable components of meaning (see Chapter 5). Later in this chapter we will give arguments to the effect that Redundancy Effects involve not merely decay of activation, but an active learning process (1.41; 5.0).

In general, if we are to accept that conscious experience corresponds to activation above some threshold, as Herbart's Activation Hypothesis suggests, we must also accept the paradoxical idea that too much activation, lasting too long, can lead to a loss of conscious experience. Perhaps activation first rises and then declines? But then one would have to explain how a well-learned automatic skill can have low activation and still be readily available and very efficient! In learning to ride a bicycle, we lose consciousness of the details of riding even as we gain efficiency and availability of the skill. Hence activation cannot be used to explain both consciousness, and efficiency and availability. If activation is used to explain consciousness, then something else is needed to account for availability and efficiency. One is caught on the horns of a dilemma: either consciousness and activation are the same, in which case activation cannot be used to explain the efficiency and availability of automatic (unconscious) skills, or activation and consciousness are different, in which case activation cannot be the only necessary condition for conscious experience. Later in this book we interpret Redundancy Effects as evidence that conscious experience must always be informative as well as highly activated -- i.e., it involves a process that works to reduce uncertainty about the input (5.00). We are conscious of some event only as long as its uncertainty is not completely resolved. This view breaks the circularity of the unaided Activation Hypothesis, by adding another necessary condition. We will use activation in this book as one way to model the chances of an event becoming conscious. But activation is only a necessary, not a sufficient, condition of consciousness (2.33).

The Novelty Hypothesis.
The role suggested above for informative stimulation is not entirely new. It follows from another stream of thought about conscious experience. This trend, which we can call the Novelty Hypothesis, claims that consciousness is focused on mismatch, novelty, or "anti-habit" (Berlyne, 1960; Straight, 1977; Sokolov, 1963). Of course novelty is closely connected with the concept of information, and in Chapter 5 we suggest that the mathematical definition of information may be adapted to create a modern version of the Novelty Hypothesis (Shannon & Weaver, 1949).
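One way such an adaptation might run (a sketch only, not the version developed in Chapter 5): in Shannon's framework, the information carried by an event x with probability p(x) is its surprisal,

```latex
% Surprisal: the information (in bits) carried by an event x
% that occurs with probability p(x).
I(x) = -\log_2 p(x)
```

so that a fully predictable event (p(x) = 1) carries no information at all, while rare or novel events carry a great deal. On this reading, Redundancy Effects follow naturally: as repetition and practice drive p(x) toward 1, an event's informativeness, and with it its claim on consciousness, falls toward zero.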
The Tip-of-the-Iceberg Hypothesis.
Another long tradition looks at consciousness as the tip of the psychological iceberg. "Tip of the Iceberg" Hypotheses emphasize that conscious experience emerges from a great mass of unconscious events (Ellenberger, 1970). In modern cognitive work conscious experience is closely associated with limited capacity mechanisms (see 1.x), which represent the tip of a very large and complex iceberg of unconscious memories and mechanisms. In a different tradition, Freud's censorship metaphor attempts to explain the fact that conscious experience is only the tip of a great motivational iceberg (Erdelyi, 1985).

Curiously enough, few researchers seem to ask why our conscious capacity is so limited. The limitations are quite surprising, compared to the extraordinary size, capacity, and evolutionary sophistication of the nervous system. Some psychologists suppose that there must be a physiological reason for conscious limited capacity, but of course this begs the question of its functional role. Even physiological mechanisms evolve for functional reasons. We suggest an answer to this puzzle in Chapter 2.

The Theater Hypothesis.
A fourth popular metaphor may be called the "searchlight" or Theater Hypothesis. This idea is sometimes called "the screen of consciousness." An early version may be found in Plato's classic Allegory of the Cave. Plato compared ordinary perception to the plight of bound prisoners in a cave, who can see only the cave wall, on which are projected the shadows of people moving about in front of a fire. The people projecting the shadows are themselves invisible; they cannot be seen directly. We humans, according to Plato, are like those prisoners -- we only see the shadows of reality. Modern versions of the Theater Hypothesis may be found in Lindsay & Norman (, p. x), Jung (?), Crick (?) -- and throughout this book. It has been beautifully articulated by the French historian and philosopher Hippolyte Taine (1828-1893): "One can therefore compare the mind of a man to a theatre of indefinite depth whose apron is very narrow but whose stage becomes larger away from the apron. On this lighted apron there is room for one actor only. He enters, gestures for a moment, and leaves; another arrives, then another, and so on ... Among the scenery and on the far-off stage or even before the lights of the apron, unknown evolutions take place incessantly among this crowd of actors of every kind, to furnish the stars who pass before our eyes one by one, as in a magic lantern." (18xx/Ellenberger?, p.) Taine managed to combine several significant features in his theater image. First, he includes the observation that we are conscious of only one "thing" at a time, as if different mental contents drive each other from consciousness. Second, he incorporates the Tip-of-the-Iceberg Hypothesis, the idea that at any moment much more is going on than we can know. And third, his metaphor includes the rather ominous feeling that unknown events going on behind the scenes are in control of whatever happens on our subjective stage (cf. Chapters 4 and 5).

The Theater Hypothesis can easily incorporate an Activation Hypothesis: we can simply require that "actors" must have a certain amount of activation in order to appear in the limelight. Indeed, the theory developed in this book is a modern version of the Theater Hypothesis, attempting to include all of the partial metaphors in a single coherent model. Some psychologists speak of consciousness in terms of a "searchlight" metaphor, a variant of the Theater Hypothesis. It compares conscious experience to a spotlight playing over elements in the nervous system (Lindsay & Norman, 1977; Crick, 1985). One can make a spotlight go wherever it is wanted, but a theater director can also control whatever will appear on stage. The two metaphors are very similar, though the searchlight emphasizes control processes (see Chapter 8).

The common sense.
One version of the Theater Metaphor has had great influence in Western and Eastern thought; that is the notion of a "common sense," a domain in which all the special senses meet and share information. The original meaning of "common sense" is not the horse-sense we are all born with to keep us from the clutches of used-car salesmen and politicians. Rather, "common sense", according to Aristotle (who introduced the term in Western philosophy) is a general sense modality that mediates between the five special senses. His arguments in favor of the common sense have a distinctly modern, cognitive flavor. They are as follows:
1. "The five senses of popular psychology are each of them a special sense -- visual only, or auditory only or tactual only, and so on. As the organs for each of them are distinct and separate it seems remarkable that the visible, auditory, tactual, and other sense qualities of an object should be localized in one and the same object. Hence the postulation of a "common" sense in addition to the "special" senses in order to account for the synthesis in question."
2. "Again, there are some things apprehended in sense perception which are not peculiar to any one of the special senses but are common to two or more of them ---- such are, for instance, motion, rest, number, size, shape. It seemed therefore reasonable to Aristotle to assume a common sense for the apprehension of "common sensibles"... ."
3. "Once more, the different special sense- impressions are frequently compared and commonly differentiated. This likewise seemed to be the function of a common sense capable of comparing the reports of the several special senses ..."
And finally,
4. Aristotle "... also credited the common sense with the function of memory, imagination, and even awareness of the fact that we are having sense-experiences" (Encyclopedia Britannica, 1957, p. 128) (Italics added).
Thus the common sense is somehow associated with consciousness, and with introspective capabilities that tell us something about what we are conscious of. There is a remarkable resemblance between Aristotle's conclusions and the arguments made in Chapters 2 and 3 of this book. Interestingly, the notion of a common sense also appears in classical Eastern psychology about the time of Aristotle (ref. Syntopicon, Vol. I). Each of the four hypotheses can be developed into a modern model. All four have some truth, and in a way, our job in this book is to find a viable and testable mix of these metaphors.

1.32 Contemporary ideas.

There are currently a few psychological models with implications for attention and consciousness, but most current thinking is stated as single hypotheses, with no specified relationship to other hypotheses. For example, Mandler (1984) suggests that conscious experience often involves "trouble-shooting" and interruption of ongoing processes (see Chapters 7 and 10). Posner and his co-workers have provided evidence for a number of specific properties of conscious experience, without working out an overarching theoretical position (e.g. Posner, 1982). The single-hypothesis approach has pros and cons. Single hypotheses can remain viable when models fall apart. On the other hand, model-building incorporates more information, and comes closer to the ultimate goal of understanding many properties of consciousness at the same time in a coherent way. We need both. In this book we focus on theory construction, referring to single hypotheses wherever appropriate.

1.33 Limited capacity: Selective attention, competing tasks, and immediate memory.

The brain is such an enormous, complex, and sophisticated organ that the narrow limits on conscious and voluntary capacity should come as a great surprise. Cognitive psychologists rely on three sources of evidence about this "central limited capacity".

First, in selective attention experiments subjects are asked to monitor a demanding stream of information, such as a stream of reasonably difficult speech, or a visual display of a fast-moving basketball game. Under these conditions people are largely unconscious of alternative streams of information presented at the same time, even to the same sensory organ. Similarly, in absorbed states of mind, when one is deeply involved with a single train of information, alternative events are excluded from consciousness (8.0).

Second, in dual-task paradigms people are made to do two things at the same time, such as reacting as quickly as possible to a momentary visual signal while beginning to say a sentence. In general, performance in each of the two tasks degrades as a result of competition. The more predictable, automatic, and unconscious a task becomes, the less it will degrade, and the less it will interfere with the other task as well.

Third, immediate memory is quite limited and fleeting. It includes sensory memories (notably the visual and auditory sensory stores) which can be consciously experienced. Sensory memories decay rapidly, and are limited to relatively few separate stimuli (e.g. Sperling, 1960). Immediate memory also includes Short Term Memory, which is essentially the capacity to retain unrelated, rehearsable items of information longer than the immediate sensory stores allow.

Let us explore these facts in more detail.

Selective attention: people can be conscious of only one densely coherent stream of events at a time.

The first return to consciousness in modern times can be credited to Donald E. Broadbent, who adapted a simple and instructive experimental technique for the purpose, and suggested a basic theoretical metaphor to explain it (Broadbent, 1958; Cherry, 1953). Broadbent and his colleagues asked subjects to "shadow" a stream of speech -- to repeat immediately what they heard, even while continuing to listen for the next word -- something that people can learn to do quite well (Moray, 1959). Rapid shadowing is a demanding task, and if one stream of speech is fed into one ear, it is not possible to experience much more than a vague vocal quality in the other ear. At the time, this seemed to indicate that human beings can fully process only one channel of information at a time. The role of attention, therefore, seemed to be to select and simplify the multiplicity of messages coming through the senses (Broadbent, 1958; James, 1890). Attention was a filter; it saved processing capacity for the important things. In spite of empirical difficulties, the concept of "attention as a selective filter" has been the dominant theoretical metaphor for the past thirty years.

However, it quickly became clear that information in the unattended "channel" was indeed processed enough to be identified. Thus Moray (1959) showed that the subject's name in the unattended channel would break through to the conscious channel. Obviously this could not happen if the name were not first identified and distinguished from other alternatives, indicating that stimulus identification could happen unconsciously. MacKay (1973) and others showed that ambiguous words in the conscious channel were influenced by disambiguating information on the unconscious side. In a conscious sentence like, "They were standing near the bank ...", the word "river" in the unconscious ear would lead subjects to interpret the word "bank" as "river bank", while the unconscious word "money" would shift the interpretation to "financial bank." Finally, it became clear that the ears were really not channels at all: if one switched two streams of speech back and forth rapidly between the two ears, people were perfectly able to shadow one stream of speech, in spite of the fact that it was heard in two different locations (Grey and Wedderburn, 19xx). The important thing was apparently the internal coherence of the conscious stream of speech, not the ear in which it was heard (4.xx).

Attempts were made to cope with these problems by suggesting that filtering took place rather late in the processing of input (Treisman, 1964, 1969). Speech was filtered not at the level of sound, but of meaning. However, even this interpretation encountered problems when the meaning of the unconscious speech was found to influence the interpretation of the conscious message, suggesting that under some circumstances even the meaning of unattended input is processed (MacKay, 19xx). Norman (1968) has emphasized the importance of semantic selectivity in determining what is to become conscious, and Kahneman (1973) has pointed out that selective attention is also influenced by long-term habits of mind, or Enduring Dispositions, as well as by Momentary Intentions. Thus the filter model became enormously enriched with semantic, intentional, and dispositional factors. All these factors are indeed relevant to the issues of consciousness and attention, and yet it is not clear that they helped to resolve fundamental difficulties in the filter metaphor. The purpose of filtering is to save processing capacity (Broadbent, 19xx). If information is processed in the unattended channel as much as in the attended channel, filtering no longer has any purpose, and we are left in a quandary. We can call this the "filter paradox" (Wapner, 1986, p. ). But what, then, is the function of something becoming conscious? In this book we argue that consciousness involves the internal distribution of information. Apparently both conscious and unconscious stimuli are analyzed quite completely by automatic systems. But once unattended inputs are analyzed, they are not broadcast throughout the nervous system. Conscious stimuli, on the other hand, are made available throughout, so that many different knowledge sources can be brought to bear upon the input.
This creates an opportunity for novel contextual influences, which can help shape and interpret the incoming information in new ways (Norman & Bobrow, 19xx). In this way the nervous system can learn to cope with truly novel information, and develop innovative adaptations and responses (5.xx).

Thus consciousness involves a kind of filter -- not an input filter, but a distribution filter. The nervous system seems to work like a society equipped with a television broadcasting station. The station takes in information from all the wire services, from foreign newspapers, radio, and from its own correspondents. It analyzes all this information quite completely, but does not broadcast all of it to the society as a whole. Therefore the various resources of the society cannot be focused on all the incoming information, but only on whatever is broadcast through the television station. From inside the society it seems as if external information is totally filtered out, although in fact it was analyzed quite thoroughly by automatic systems. Consciousness thus gives access to internal unconscious resources (Navon & Gopher, 19xx; Gazzaniga, 19xx).
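The distribution-filter idea can be sketched in a few lines of code. This is only an illustrative toy, not part of the original text: all names are hypothetical, and the point is simply that every input is analyzed automatically, while only the selected (conscious) input is distributed to all specialists.

```python
# Toy sketch of a "distribution filter": every stimulus is analyzed,
# but only the selected one is broadcast to all specialist processors.
# All names here are hypothetical illustrations.

class Specialist:
    def __init__(self, name):
        self.name = name
        self.received = []

    def receive(self, message):
        # Called only for the broadcast (conscious) input.
        self.received.append(message)

def analyze(stimulus):
    # Automatic analysis: runs on every input, attended or not.
    return f"analyzed({stimulus})"

def distribute(stimuli, selected, specialists):
    analyses = {s: analyze(s) for s in stimuli}   # all inputs analyzed
    for sp in specialists:                        # only one is broadcast
        sp.receive(analyses[selected])
    return analyses

specialists = [Specialist("syntax"), Specialist("memory"), Specialist("action")]
analyses = distribute(["attended speech", "unattended speech"],
                      "attended speech", specialists)

# Both streams were analyzed, but specialists saw only the broadcast one.
print(len(analyses))            # 2
print(specialists[0].received)  # ['analyzed(attended speech)']
```

From inside any specialist, the unattended stream appears "filtered out," even though it was fully analyzed -- which is the filter paradox restated.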

Dual-task paradigms: any conscious or voluntary event competes with any other.
There is a large experimental literature on interference between two tasks (e.g. Posner, 197x). In general, the findings from this literature may be summarized by three statements:

Specific interference: Similar tasks tend to interfere with each other, presumably because they use the same specific processing resources. (Brooks in Norman, 1976) We encounter limits in some specialized capacity when we do two tasks that both involve speech production, visual processes, and the like, or perhaps when the two tasks make use of closely connected cortical centers (Kinsbourne & Hicks, 19xx).

Non-specific interference: Even tasks that are very different interfere with each other when they are conscious or under voluntary control. When these tasks become automatic and unconscious with practice, they cease to interfere with each other (Shiffrin, Dumais, & Schneider, 19xx).

Competing tasks that take up limited capacity tend to become automatic and unconscious with practice. As they do so, they stop competing.
Because there is such a close relationship between consciousness and limited capacity, we can sometimes use the dual-task situation to test hypotheses about conscious experience. Later in this book we will offer a theoretical interpretation of this kind of interference, and suggest some experiments to help decide cases where "accurate consciousness reports" may prove to be a less than reliable guide. The existence of nonspecific interference does not argue for consciousness as such, of course. It provides evidence for a central limited capacity that underlies consciousness. In general we can say that conscious experiences take up central limited capacity, but that there are capacity-limiting events that are not reported as conscious (e.g., Chapters 6 and 7; also veiled conscious events in Shiffrin & Schneider).
Immediate memory is fleeting, and limited to a small number of unrelated items.
Another important source of evidence for a relationship between consciousness and a narrow capacity bottle-neck is the study of immediate memory. We have already discussed the work of Sperling (1960), who showed that we can have momentary access to a visual matrix of numbers or letters. This has been interpreted as evidence for a momentary sensory memory, and evidence for similar sensory memories has been found in hearing and touch. Sensory memories can be conscious, though they need not be. For instance, we can be very preoccupied with reading and have someone say something that we do not consciously hear. For a few seconds afterwards, we can go back in memory and recall what was said, even though we were not conscious of it in detail at the time (Norman, 1976). It seems that even the vocal quality of the speech can be recalled, indicating that we have access to auditory sensory memory, not merely to the higher-level components.

The best-known component of immediate memory is called Short-Term Memory (STM). This is the rehearsable, usually verbal component of immediate memory -- the domain in which we rehearse new words and telephone numbers. There is a remarkably small limit to the number of unrelated words, numbers, objects, or rating categories that can be kept in Short Term Memory (Miller, 1956; Newell & Simon). With rehearsal, we can recall about 7 plus or minus 2 items; without rehearsal, between 3 and 4. This is a fantastically small number for a system as large and sophisticated as the human brain; an inexpensive calculator can store several times as many numbers. Further, STM is limited in duration as well, to perhaps 10 seconds without rehearsal (Simon, 19xx).

Short Term Memory is a most peculiar memory, because while it is limited in size, the "size" of each item can be indefinitely large. For example, one can keep the following unrelated items in STM: consciousness, quantum physics, mother, Europe, modern art, love, self. Each of these items stands for a world of information -- but it is highly organized information. That is, the relationship between two properties of "mother" is likely to be closer than the relationship between "mother" and "modern art". This is one aspect of chunking, the fact that information that can be organized can be treated as a single item in Short Term Memory. For another example, consider the series: 677124910091660129417891. It far exceeds our Short Term Memory capacity, being 24 units long. But we need only read it backwards to discover that the series is really only six chunks long, since it contains the well-known years 1987, 1492, 1066, 1900, 1942, and 1776. Chunking greatly expands the utility of Short Term Memory. This serves to emphasize that STM is always measured using a novel, unintegrated series of items. As soon as the items become permanently memorized, or when we discover a single principle that can generate the whole string, all the items begin to behave like a single one.
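The chunking arithmetic in the digit-series example can be checked mechanically. This snippet (ours, not the author's) simply reverses the series and splits it into four-digit chunks:

```python
# The 24-digit series from the text, verified programmatically: read
# backwards, it parses into six familiar four-digit years -- six chunks
# rather than 24 unrelated items.
series = "677124910091660129417891"
reversed_series = series[::-1]
chunks = [reversed_series[i:i + 4] for i in range(0, len(reversed_series), 4)]
print(chunks)   # ['1987', '1492', '1066', '1900', '1942', '1776']
```

Twenty-four digits collapse into six chunks only because Long Term Memory already knows these years, which is the dependence of STM on LTM argued below.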

All this suggests that STM depends fundamentally on Long Term Memory (LTM) -- the great storehouse of information that can be recalled or recognized. The fact that 1066 was the year of the Norman invasion of England is stored in LTM, and part of this existing memory must somehow become available to tell us that 1066 can be treated as a single, integrated chunk. Not surprisingly, several authors have argued that Short Term Memory may be nothing but the currently activated, separate components of Long Term Memory (Atkinson & Juola, 19xx).

Short Term Memory is not the same as consciousness. We are only conscious of currently rehearsed STM items, not of the ones that are currently "in the background". Indeed, the unrehearsed items in current STM are comparable to peripheral events in the sensory field. They are readily available to focal consciousness, but they are not experienced as focal. Nevertheless, conscious experience and STM are somehow closely related. It is useful to treat consciousness as a kind of momentary working memory in some respects (Chapter 2). STM then becomes a slightly larger current memory store, one that holds information a bit longer than consciousness does, with more separate items.

Note also that STM involves voluntary rehearsal, inner speech, and some knowledge of our own cognitive capacities (metacognition). That is to say, STM is not something primitive, but a highly sophisticated function that develops throughout childhood (Pascual-Leone, 19xx). We argue later in this book that voluntary control itself requires an understanding of conscious experience, so that voluntary rehearsal in STM first requires an understanding of conscious experience. Thus STM cannot be used to explain conscious experience; perhaps it must be the other way around. In a later chapter (8.00) we will suggest that all of these functions can be understood in terms of systems that interact with conscious experience. In conclusion, Short Term Memory is not the same as consciousness, although the two co-occur. It involves conscious experience, voluntary control over rehearsal and retrieval, the ability to exercise some metacognitive knowledge and control, and, in the case of chunking, a rather sophisticated long-term storage and retrieval system. STM is by no means simple. We will find it useful to build on a conception of conscious experience, develop from it some notions of voluntary control (7.0) and metacognition (8.0), and ultimately make an attempt to deal with some aspects of Short Term Memory.

We have briefly reviewed the three major sources of evidence for limited capacity associated with conscious experience: the evidence for narrow limitations in selective attention, competing tasks, and immediate memory. It consistently shows an intimate connection between conscious experience, limited capacity processes, and voluntary control. There can be little doubt that the mechanisms associated with conscious experience are remarkably small in capacity, especially compared to the enormous size and sophistication of the unconscious parts of the nervous system.

1.34 The Mind's Eye and conscious experience. In recent years our knowledge of mental imagery has grown by leaps and bounds. Not so long ago, "mental imagery" was widely thought to be unscientific, relatively unimportant, or at least beyond the reach of current scientific method (Baars, in press). But in little more than a decade we have gained a great amount of solid and reliable information about mental imagery (Paivio, 19xx; Cooper & Shepard, 19xx; Kosslyn, 19xx).

In general there is a remarkable resemblance between the domain of mental imagery and ordinary visual perception -- between the Mind's Eye and the Body's Eye (Finke, 19xx; Kosslyn & Shwartz, 19xx). The visual field is a horizontal oval, as anyone can verify by simply fixating at one point in space, and moving one's hands inward from the sides to the fixation point. Coming from the right and left sides, the hands become visible at perhaps 65 degrees from the fixation point, long before the hands can be seen when they are moving inward vertically, from above and below. The same kind of experiment can be done mentally with the eyes closed, and yields similar results (Finke, 19xx). Likewise, in the Mind's Eye we lose resolution with distance. We can see an elephant from thirty paces, but to see a fly crawling along the elephant's ear, we must "zoom in" mentally to get a better mental look. As we do so, we can no longer see the elephant as a whole, but only part of its ear. There are many other clever experiments that suggest other similarities between vision and visual imagery (see Kosslyn & Shwartz, 19xx).

The best current theory of mental imagery suggests that the "Mind's Eye" is a domain of representation much like a working memory, with specifiable format, organization, and content (Kosslyn & Shwartz, 19xx). Notice also that we can exercise some voluntary control over mental images -- we can learn to rotate them, zoom in and out of a scene, change colors, etc. Mental imagery cannot be the same as conscious experience, but it is certainly a major mode of consciousness.

1.35 Perceptual feature integration and attentional access to information-processing resources. Two more current ideas deserve discussion before we can go on. They are, first, the idea that the function of consciousness is to "glue" together separable perceptual features (Treisman & Gelade, 1980) and second, that consciousness or attention creates access to information-processing resources in the nervous system (Navon & Gopher, 1979). If we combine these ideas with the previous conceptions of attention and immediate memory, we come very close to the theoretical approach advanced in this book.

In an elegant series of experiments Treisman and her co- workers have provided evidence for the existence of separable features in vision. Treisman & Gelade (1980) showed that separable components of large, colored letters add linearly to search times. That is, to detect that something is red takes a short time; to detect that it is a red letter S takes a bit longer. Similarly, Sagi and Julesz (1985) found that people can detect the location of a few stray vertical lines in an array of horizontal lines very quickly; however, to tell whether these lines were vertical or horizontal, more time was needed. The more features were added, the more time was needed. They interpreted this to mean that integration of separable visual features takes up limited capacity. One problem with this idea is that a rich visual scene may have many thousands of separable visual features, and it is quite unlikely that all of them are processed serially. Watching a football team playing in a stadium full of cheering fans must involve large numbers of features, which surely cannot all be scanned serially, one after another. Focusing on a single, conspicuous feature, such as deciding which team is wearing the red uniforms, does seem to be a serial process.
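The contrast between flat "pop-out" detection and linearly increasing conjunction search can be expressed as a toy model. The parameter values below are invented for illustration; only the shape of the two functions reflects the findings described above.

```python
# Toy model of Treisman-style search times (parameters are assumptions,
# not data): detecting a single feature is roughly independent of display
# size, while conjunction search grows linearly, as if each item were
# checked serially.

def popout_time(n_items, base=0.4):
    # Parallel feature detection: flat in display size.
    return base

def conjunction_time(n_items, base=0.4, per_item=0.05):
    # Serial feature integration: linear in display size.
    return base + per_item * n_items

for n in (4, 16, 64):
    print(n, popout_time(n), round(conjunction_time(n), 2))
```

The worry raised in the text is visible in the model: with thousands of features, the linear term would make everyday scene perception implausibly slow, so serial integration can only be part of the story.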

Nevertheless there is something fundamentally important about the findings of Treisman and her co-workers. In almost any rich visual scene we may be doing a partial search. In a grocery store we search for a particular package, in a crowd we may look for a friendly face, or in a dictionary for a word. Eye-movements are highly functional, scanning the parts of a scene that are most informative and personally relevant (Yarbus, 1967). This searching component may generally be serial, while the automatic, predictable components of a scene may be integrated either very quickly or in parallel; both serial and parallel processes work together to create our visual experience.

A very different approach is advocated by Navon & Gopher (1979), who treat limited capacity as a resource-allocation problem, much like a problem in economics. The idea that attention or consciousness involves access to processing resources is very powerful, and is a major aspect of the theory advanced in this book. Notice that most of the processing resources in the nervous system are unconscious, so that we have the remarkable situation of conscious events being used to gain access to unconscious processing resources. To put it slightly differently: a narrow, limited-capacity system seems to be involved in communicating with a simply enormous marketplace of processing resources.

How can these apparently different views be accommodated in a single coherent theory? After all, Treisman and her colleagues find evidence for conscious perception as an integrative capacity, while Navon & Gopher (?) argue for this same system as a widely diverging access system. We resolve this tension by speaking of a "global-workspace architecture," in which conscious events are very limited, but are broadcast system-wide, so that we have both a narrow, convergent bottle-neck and a widely diverging processing capacity (2.xx). The specialized processors in this view mobilize around centrally broadcast messages, so that the processing resources "select themselves." The situation is much like a television broadcasting station that may call for volunteers in an emergency; the volunteers are self-selected, though one may be able to recruit more of them by broadcasting more messages in the limited-capacity medium.
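The "self-selecting volunteers" idea can be made concrete in a small sketch. Everything here is hypothetical (the specialists and their matching rules are invented for illustration): the point is that the broadcaster does not address any processor in particular, and relevance is judged locally by each receiver.

```python
# Sketch of self-selecting processors: a broadcast message reaches every
# specialist, and each decides by its own criteria whether to volunteer.
# Specialist names and matching rules are invented illustrations.

specialists = {
    "face-recognizer": lambda msg: "face" in msg,
    "word-recognizer": lambda msg: "word" in msg,
    "motor-planner":   lambda msg: "reach" in msg,
}

def broadcast(message):
    # Everyone sees the message; only relevant specialists respond.
    return [name for name, relevant in specialists.items() if relevant(message)]

print(broadcast("unfamiliar face in the crowd"))   # ['face-recognizer']
```

Note that the narrow bottle-neck (one message at a time) and the wide divergence (every specialist receives it) coexist naturally in this arrangement, which is the resolution proposed above.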

Models of all these phenomena have much in common. Selective attention, feature integration, immediate memory, and access to resources all suggest the existence of some sort of domain of integration related to consciousness, perhaps a "working memory" that can be worked on by both voluntary and involuntary operators. All the models involve limited capacity, and in recent years, there has been increasing emphasis on the fact that access to the limited-capacity system also gives one access to a great number of mental resources that are otherwise inaccessible (Baars, 1983; see Chapter Two). In the next chapter, we propose a model that combines the most useful features of all these proposals.

The most recent models propose an overall architecture for the nervous system that incorporates these properties, as we see next.

1.36 Cognitive architectures: distributed systems with limited capacity channels. A recent class of psychological models treats the cognitive system as a society of modules, each with its own special capabilities (Minsky & Papert, 198x; Rumelhart, McClelland and the PDP Group, 1986). These distributed systems suppose that much of the problem-solving ability of the society resides not in its "government," but in its individual members. Limited capacity is sometimes taken to reflect a "working memory" in such a system (e.g. Anderson, 1983), or in any case some sort of bottle-neck that forces the individual modules to compete or cooperate for access (Baars, 1983; Norman & Shallice, 1980; Reason, 1985). In this book we work out one model of this kind.

Distributed models require a change in our usual way of thinking about human beings. We normally think of ourselves as guided by an executive "self"; intuitively we believe that "we" have control over ourselves. But distributed systems are strongly decentralized -- it is the specialized components that often decide by their own internal criteria what they will do. This is comparable perhaps to a market economy, in which thousands of individual transactions take place without government intervention although the marketplace as a whole interacts with global governmental influences. Distributed collections of specialized processors seem to have some distinct virtues (e.g. Greene, 1972; Gelfand et al, 1971; Rumelhart, McClelland, and the PDP Group, 1986). A decentralized system does not rule out executive control, just as the existence of market forces in the economy does not rule out a role for government (9.0). But it limits the control of executives, and creates possibilities for a mutual flow of control between executives and subordinate elements. Details of processing are generally handled by specialized members of the processing society. The Global Workspace model developed in this book is a distributed society of specialists that is equipped with a working memory, called a global workspace, whose contents can be broadcast to the system as a whole. The whole ensemble is much like a human community equipped with a television station. Routine interactions can take place without the television station, but novel ones, which require the cooperation of many specialists, must be broadcast through the global workspace. Thus novel events demand more access to the limited-capacity global workspace (5.0).

Notice that the recent theories propose an architecture for the whole cognitive system. In that sense they are more ambitious than the early models of short-term memory and selective attention. Perhaps the best-known architectural model today is Anderson's ACT*, which grew out of earlier work on semantic networks as models of knowledge, and on production systems to model limited capacity mechanisms (Anderson, 1983). But similar architectures have been proposed by others. In these models, conscious experience is often rather vaguely associated with limited-capacity mechanisms or working memory. Most of the architectural models do not suggest a functional reason for the rather astonishing fact of limited capacity. But explicit, running models of cognitive architectures do exist. That means we can go ahead in this book and discuss the issues without worrying too much about the formal specifics, which can be handled once the outlines of the theory are clear. This is not unusual in the natural sciences, where qualitative theory often precedes quantitative or formal theory (viz., Einstein, 1949). Indeed, Darwinian theory was purely qualitative in its first century of existence, and yet it revealed important things about the organization of life. Fortunately, we now have a number of computational formalisms that can be used to make the current theory more explicit and testable when that becomes appropriate.

1.37 The Global Workspace (GW) approach attempts to combine all viable metaphors into a single theory. The model we pursue in this book suggests that conscious experience involves a global workspace, a central information exchange that allows many different specialized processors to interact. Processors that gain access to the global workspace can broadcast a message to the entire system. This is one kind of cognitive architecture, one that allows us to combine many useful metaphors, empirical findings, and traditional insights regarding consciousness into a single framework. The word "global", in this context, simply refers to information that is usable across many different subsystems of a larger system. It is the need to provide global information to potentially any subsystem that makes conscious experience different from the many specialized local processors in the nervous system. Global Workspace (GW) theory attempts to integrate a great deal of evidence, some of which has been known for many years, in a single conceptual framework. Figure 1.37 shows the similarity between the three main constructs of GW theory -- the global workspace, specialized processors, and contexts -- and ideas proposed elsewhere. There is a clear similarity, although not an exact equivalence. Precision and coherence are the aims of the current theory; complete novelty may be less important.

Insert Figure 1.37 about here.

So much for some ways of thinking about consciousness. One cannot think properly about conscious experience without some clear conception of unconscious events -- the other side of the same coin. We turn to this issue now.

1.4 Unconscious specialized processors: A gathering consensus.

Unconscious events are treated in this book as the functioning of specialized systems. The roots of this view can be found in the everyday observation that as we gain some skill or knowledge, it tends to become less and less conscious in its details. Our most proficient skills are generally the least conscious. We will first explore the properties of unconscious representations; then see how representations are involved in unconscious information processing; this in turn leads to the notion of specialized unconscious processors.

1.41 There are many unconscious representations. A representation is a theoretical object that bears an abstract resemblance to something outside of itself. In somewhat different terms, there is an abstract match or isomorphism between the representation and the thing that is represented. Human knowledge can be naturally viewed as a way of representing the world and ourselves. Instead of operating upon the world directly, we can try our ideas out on a representation of some part of the world, to predict its behavior. An architect's blueprint is a representation of a building, so that one can investigate the effects of adding another story by calculating load factors on the structural supports shown in the blueprint. We can think of knowledge, percepts, images, plans, intentions, and memories as representations. Everyday psychology can be translated into these terms in a natural way. Some psychologists prefer to speak of adaptation rather than representation (Grossberg, 1982). This approach has a long and honorable history with a somewhat different philosophical bent (e.g. Piaget, 1973). In practice, adaptation and representation are quite similar. Here we will use the term "representation" with the understanding that representations share many properties with adaptive systems. What is the adequate evidence for the existence of a mental representation? In psychology we often infer that human beings have mentally represented an object if they can correctly detect matches and mismatches to the object at a later time. All psychological tasks involve some kind of selective matching of representations, conscious or not.

Recognition memory.
Recognition memory provides one major class of cases in which people can spot matches and mismatches of previous events with impressive accuracy. In recognition studies subjects are given a series of pictures or sounds, and later are shown similar stimuli to see if they can tell old from new items. People are extremely good in this kind of task, often correctly recognizing more than 90% out of many hundreds of items a week or more afterwards (e.g. Shepard, 1967). There are indeed cases where recognition memory appears to fail, especially when the old and new stimuli are very similar. Nevertheless, even here it makes sense to suppose that the task involves a memory representation of the stimulus; the representation is just not completely accurate, it may be abstract, or it may be selectively stored and retrieved.

This brings us to the first, rather obvious class of unconscious representations. What happens to our memories of last week's stimuli before we see them again in a recognition test? According to the argument made above, we must be representing those memories somehow, otherwise we could not successfully detect matches and mismatches. The simplest supposition is that memories continue to be represented unconsciously. The remarkable accuracy of recognition memory indicates that human beings have a prodigious capacity for storing the things we experience, without effort. But of course most stored memories cannot be recalled at will.

Memory psychologists make a distinction between experiential, autobiographical memories (episodic) and our memory for abstract rules (semantic) (Tulving, 1972). The reader is not conscious of the syntactic rules that are working right now to determine that the word "word" is being used as a noun rather than a verb. However, we do become conscious of events that match or mismatch those rules. Sentences that violate very subtle syntactic regularities are spotted instantly. Further, the evidence is good that people given artificial strings of symbols infer the underlying rules with remarkable facility, but without knowing consciously what those rules are (Franks & Bransford, 1971; Posner, 1982; Reber & Allen, 1978).

Thus the case of abstract rules shows that a great deal of knowledge involves abstract representations, which are known to be representations because they fit the match/mismatch criterion. Matches and mismatches are accurately "recognized," though people are not conscious of the syntactic representations themselves. There is a third class of unconscious stimulus representations, namely the representation of those predictable stimuli to which we are currently habituated. This example requires a little exposition.

Sokolov and the mental model of the habituated stimulus.
A formal argument for unconscious stimulus representations has been given by the Russian physiologist Y.N. Sokolov (1963), working in the tradition of research on the Pavlovian "Orienting Response." The Orienting Response (OR) is a set of physiological changes that take place when an animal detects a new event. Any animal will orient its eyes, ears, and nose toward the new event, and at the same time a widespread set of changes takes place in its body: changes in heart-rate and breathing, in pupillary size, electrical skin conductivity, brain electrical activity, and in dilation and contraction of different blood vessels. We now know that a massive wave of activation goes throughout the brain about 300 milliseconds after a novel event (?). Altogether this set of responses to novelty defines an Orienting Response. If the novel stimulus is repeated regularly over a period of time, the OR will gradually disappear -- it habituates. Subjectively we lose awareness of the repeated, predictable stimulus. Suppose the animal has habituated to a repeated one-second noise pulse, with two seconds of silence between noise bursts (see Figure 1.41). Now we reduce the length of the silent period between the pulses, and suddenly the animal will orient again. We can increase or decrease the loudness of the stimulus, change its location in space, its pitch or spectral distribution, or other characteristics like the rate of onset or offset. In each case, the change in stimulation will cause the animal to orient again to the stimulus, even after complete habituation of orienting. That is, the animal detects any kind of novelty. But how can the nervous system do this? Sokolov suggests that it can only do this as a result of some comparison process between the original stimulus and the new stimulus. (Indeed, "novelty" by definition involves a comparison of new to old.)
But of course the original stimulus is long gone by the time the novel stimulus is given, so it is not available for comparison. Hence, Sokolov suggests, the nervous system must retain some model of the stimulus to which it has habituated. And since a change in any parameter of the stimulus will evoke a new OR, it follows that the stimulus representation must contain all parameters of the stimulus.

Insert Figure 1.41 about here.
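Sokolov's comparator argument can be made concrete with a small sketch. The parameter names below are illustrative, not Sokolov's: the point is that the nervous system retains a model of all parameters of the habituated stimulus, and any mismatch -- including a softer stimulus or a missing one -- evokes a new Orienting Response. A mere "fatigue" or raised-threshold mechanism could not behave this way, since it would respond only to more energetic input.

```python
# Sketch of Sokolov's comparator model of habituation: the system stores
# a model of ALL parameters of the repeated stimulus; ANY mismatch,
# even a softer or missing stimulus, evokes a new Orienting Response (OR).
# Parameter names and values are illustrative only.

def orienting_response(stored_model, stimulus):
    """Return True if the incoming stimulus should trigger orienting."""
    if stimulus is None:
        return True  # even a MISSING expected stimulus is a mismatch
    # Compare every stored parameter against the incoming stimulus.
    return any(stored_model[p] != stimulus.get(p) for p in stored_model)

# The habituated stimulus: a 1-second, 70 dB noise burst every 2 seconds.
model = {"loudness_db": 70, "duration_s": 1.0, "interval_s": 2.0, "pitch_hz": 1000}

same = dict(model)
softer = dict(model, loudness_db=50)       # LESS energy, still orients
shorter_gap = dict(model, interval_s=1.0)  # a timing change orients too

print(orienting_response(model, same))     # False: fully habituated
print(orienting_response(model, softer))   # True: information change
print(orienting_response(model, None))     # True: a missing burst orients
```

The design choice that carries the argument is the comparison on information rather than energy: any detectable change, upward or downward, breaks the match with the stored model.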

It is interesting to consider neurophysiological evidence about stimulus habituation from E.R. John's work with Event-Related Potentials (see 2.xx). Prior to habituation, John and his co-workers have found, activity related to a repeated visual stimulus can be found throughout the brain. But once habituation takes place, it can only be found in the visual system. In our terms, the habituated stimulus appears to be processed, perhaps much as before, but it is not distributed globally (2.x). This finding is quite consistent with Sokolov's arguments. The fact that people become unconscious of a repetitive or predictable stimulus does not mean that the stimulus has disappeared; on the contrary, it continues to be processed in the appropriate input system.

Although Sokolov's arguments have been widely accepted in neurophysiology, in cognitive psychology they are not as well-known as one might expect. This is curious, because the cognitive literature is generally quite receptive to compelling inferences based on well-established evidence. Many psychologists still consider habituation as a purely physiological effect without important psychological implications -- perhaps due to "fatiguing" of feature detectors (e.g. Eimas & Corbitt, 1973) -- in any case, as something non-functional. But Sokolov's argument suggests that the decline in orienting to redundant stimuli is something very functional for the nervous system.

In fact, Sokolov anticipated a "fatigue" explanation of habituation, and provided an interesting argument against it (Sokolov, 1963). Suppose there is some neural mechanism that is triggered by a repeated stimulus, such as the white noise burst described above. Now suppose that this mechanism -- which might be a single neuron or a small network of neurons -- declines over time in its ability to detect the stimulus, for reasons that have no functional role. Perhaps toxic metabolic by-products accumulate and prevent the "noise burst detector" from functioning properly, or perhaps some neurotransmitter becomes depleted. In any case, some "fatigue" affects the detector. If that were true, we might expect habituation of awareness, and that is in fact observed. But the pattern of dishabituation should be different from Sokolov's findings. A new Orienting Response might occur after habituation, but only if the stimulus were stronger in some way than the original stimulus -- if the noise were louder or longer or more frequent. That is, the depleted and unresponsive detector might be triggered again by a greater stimulus. In fact, we find that a louder, longer, or more frequent stimulus does elicit an OR -- but so does a softer, shorter, or less frequent noise burst. Indeed, an OR even occurs to a missing noise burst, which is the absence of an expected physical event! Thus release from habituation is not dependent upon the energy of the stimulus: it is dependent upon a change in information, not a change in energy as such (5.0). It follows that "fatigue" is not a plausible explanation of the universal fact of habituation of awareness under repeated or predictable stimulation. In support of this argument, recent work shows that the absence of an expected event triggers a great amount of activity in the cortical evoked potential (Donchin, McCarthy, Kutas, & Ritter, 1978). 
This argument can be generalized to another possible alternative explanation, a "general threshold" hypothesis. Suppose we deal with a repeated auditory stimulus by simply turning up our auditory threshold, much like the "filter" of early selective attention theory (Broadbent, 1958). This hypothesis would account for habituation and for dishabituation to more energetic input; but again, it would fail to explain why we become conscious again of a novel stimulus which is less energetic than the old stimulus.

We have noted that cognitive psychologists are generally willing to infer a mental representation whenever they find that people can retain some past event over time, as evidenced by their ability to accurately spot matches and mismatches with the past event. This is how we infer the existence of memories -- mental representations of past events -- based upon the impressive ability people show in recognition tasks. Formally, Sokolov's argument is exactly the same: that is, it involves a kind of recognition memory. People or animals are exposed to a repeated stimulus, habituate, and respond accurately to matches and mismatches of the past event. But here we infer an unconscious kind of "recognition" process, rather than the recognition of a conscious stimulus. Sokolov's argument has great significance for cognitive approaches to learning; indeed, one may say that the loss of consciousness of a predictable event is the signal that the event has been learned completely (5.0). Habituation of awareness is not just an accidental by-product of learning. It is something essential, connected at the very core to the acquisition of new information. And since learning and adaptation are perhaps the most basic functions of the nervous system, the connection between consciousness, habituation, and learning is fundamental indeed (see Chapter 5).

The three classes of unconscious stimulus representations we have discussed -- stored episodic memories, linguistic knowledge, and habituated stimulus representations -- illustrate the main claim of this section, that there are indeed unconscious mental representations. There may be more than just these, of course. The next step suggests that there are many unconscious processes and even processors as well.

1.42 There are many unconscious specialized processors.

A process involves changes in a representation. In mental addition, we may be aware of two numbers and then perform the mental process of adding them. A processor can be defined as a relatively unitary, organized collection of processes that work together in the service of a particular function. A crucial claim in this book is that the nervous system contains many specialized processors that operate largely unconsciously.

One can think of these processors as specialized skills that have become highly practiced, automatic, and unconscious. Automatic skills are described as being "unavoidable, without capacity limitations, without awareness, without intention, with high efficiency, and with resistance to modification" (LaBerge, 1981). These are all properties of unconscious specialized processors, as we will see below.

1.43 Neurophysiological evidence.

The neural evidence for specialized processors is extensive. Perhaps most obvious is the well-established fact that many small collections of neurons in the brain have very specific functions. Indeed, much of the cerebral cortex -- the great wrinkled mantle of tissue that completely covers the older brain in humans -- is a mosaic of tiny specialized areas, each subserving a specific function (Mountcastle, 1982; Szentagotai & Arbib, 1975; Rozin, 1976). (See Chapter 3). These range from the sensory and motor projection areas, to speech production and comprehension, to spatial analysis, planning and emotional control, face recognition, and the like. Below the cortical mantle are nestled other specialties, including control of eye movements, sleep and waking, short-term memory, homeostatic control of blood chemistry, hormonal control of reproductive, metabolic and immune functions, pleasure centers and pain pathways, centers involved in balance and posture, breathing, fine motor control, and many more. Some of these specialized neural centers have relatively few neurons; others have many millions.

There is a remarkable contrast between the narrowness of limited-capacity processes and the great size of the nervous system -- most of which operates unconsciously, of course. The cerebral cortex alone has an estimated 55,000,000,000 neurons (Mountcastle, 1982), each one with about 10,000 dendritic connections to other neurons. Each neuron fires an average of 40 and a maximum of 1,000 pulses per second. By comparison, conscious reaction time is very slow: 100 milliseconds at best, or 100 times slower than the fastest firing rate of a neuron. An obvious question is: why does such a huge and apparently sophisticated biocomputer have such a limited conscious and voluntary capacity? (See 2.x, 3.00) Not all parts of the brain have specific assignments. For instance, the function of the cortical "association areas" is difficult to pinpoint. Most functions do not have discrete boundaries, and may be distributed widely through the cortex. Further, there is below the cortex a large non-specific system, which we will discuss in detail in Chapter 3.

1.44 Psychological evidence.

Psychologists have discovered evidence for specialized functional systems as well. Two sources of evidence are especially revealing: (a) the development of automaticity in any practiced task, and (b) the study of errors in perception, memory, speech, action, language, and knowledge. Both sources of evidence show something of interest to us.

1. The development of automaticity with practice. Any highly practiced and automatic skill tends to become "modular" -- unconscious, separate from other skills, and free from voluntary control (LaBerge, 1980, 1981; Posner & Snyder, 1975; Shiffrin & Schneider, 1977). And any complex skill seems to combine many semi-autonomous specialized units. In the case of reading, we have specialized components like letter and word identification, eye-movement control, letter-to-phoneme mapping, and the various levels of linguistic analysis such as the mental lexicon, syntax, and semantics. All these components involve highly sophisticated, complex, practiced, automatic, and hence unconscious specialized functions (ref).

Much research on automaticity involves perceptual tasks, which we will not discuss at this point. The reason to avoid perceptual automaticity is that perceptual tasks by definition involve access to consciousness (LaBerge, 1981; Neisser, 1967). Thus they tend to confuse the issue of unconscious specialized systems. Instead, we will focus on the role of automatic processes in memory, language, thought, and action. Perceptual automaticity will be discussed in Chapter 8, in the context of access-control to consciousness.

The best-known early experiment on automatic memory scanning is by Sternberg (1963), who presented subjects with small sets of numbers to hold in memory. Thus, they would be told to keep in memory the set "3,7,6," or "8,5,2,9,1,3." Next, a number was presented that was or was not part of the set, and Sternberg measured the time needed to decide whether the test stimulus belonged to the memory set. This task becomes automatic quite quickly, so that people are no longer aware of comparing every item in memory to the test item. Further, the time needed to scan a single item is much shorter than conscious reaction time, suggesting again that memory scanning is automatic and unconscious. The big surprise was that reaction time to the test item did not depend on the position of the item in the set of numbers; rather, it depended only on the size of the whole memory set. Thus, if a subject were given the set "8, 5, 2, 9, 1, 3," and the test stimulus were "5," reaction time would be no shorter than when the test stimulus were the last number "3." This seemed most peculiar. In a 1964 conference N.S. Sutherland called it "extremely puzzling. On the face of it, it seems a crazy system; having found a match, why does the subject not stop his search and give the positive response?" (Sutherland, 1967).

Indeed it is rather crazy, if we assume that the subject is consciously comparing each number in memory with the test stimulus. Having found the right answer, it seems silly to continue searching. But it is not so unreasonable if the comparison process runs off automatically, without conscious monitoring or voluntary control (Shiffrin & Schneider, 1977). If the subject has available an unconscious automatic processor to do the job, and if this processor does not compete with other conscious or voluntary processes, little is lost by letting it run on by itself. More recent work by Shiffrin and Schneider (Shiffrin & Schneider, 1977; Schneider & Shiffrin, 1977) confirms that voluntary (controlled) search does not run on by itself. It terminates when the answer is found.
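The contrast between exhaustive automatic scanning and self-terminating controlled search can be sketched as follows. The timing constants below are illustrative only, not Sternberg's estimates: the point is that in the exhaustive version, simulated reaction time depends only on set size, never on the position of the matching item.

```python
# Sketch of Sternberg-style memory scanning. Exhaustive (automatic)
# search compares every item regardless of where the match occurs, so
# simulated reaction time depends only on set size. Self-terminating
# (controlled) search stops at the match. Constants are illustrative.

SCAN_MS = 38   # per-item comparison time (illustrative)
BASE_MS = 400  # encoding plus response time (illustrative)

def exhaustive_rt(memory_set, probe):
    comparisons = len(memory_set)  # the scan always runs to completion
    return BASE_MS + SCAN_MS * comparisons

def self_terminating_rt(memory_set, probe):
    comparisons = 0
    for item in memory_set:
        comparisons += 1
        if item == probe:          # stop as soon as a match is found
            break
    return BASE_MS + SCAN_MS * comparisons

memory_set = [8, 5, 2, 9, 1, 3]
# Exhaustive: identical RT whether the probe is early (5) or last (3).
print(exhaustive_rt(memory_set, 5), exhaustive_rt(memory_set, 3))             # 628 628
# Self-terminating: an early probe would be faster -- which is NOT observed.
print(self_terminating_rt(memory_set, 5), self_terminating_rt(memory_set, 3)) # 476 628
```

The observed data match the exhaustive pattern, which is Sutherland's puzzle: the flat serial-position curve only makes sense if the scan is an automatic process that runs to completion without conscious monitoring.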

The automatic search process generally does not compete with other processes (Shiffrin, Dumais, & Schneider, 1981). It is unconscious, involuntary, and specialized. It develops with practice, provided that the task is consistent and predictable. Further, and of great importance, separable components of automatic tasks often begin to behave as single units. That is, specialized functions seem to be carried out by "modular" automatic systems (see below). There is some question whether this is always true, but it seems to be true for most automatic processes (Treisman & Gelade, 1980).

Memory search is not the only process that has these properties. Much the same points have been made for the process of lexical access. In reading this sentence, the reader is using many specialized skills, among them the ability to translate strings of letters into meanings. This mapping between letter strings and meaning is called lexical access, though "lexico-semantic access" might be a more accurate term. A good deal of evidence has accumulated indicating that lexical access involves an autonomous processing module (Swinney, 1979; Tanenhaus, Carlson & Seidenberg, 1985). A typical experiment in this literature has the following format. Subjects listen to a sentence fragment ending in an ambiguous word, such as

(1) They all rose ...
The word "rose" can be either a verb or noun, but in this sentence context it must be a verb. How long will it take for this fact to influence the interpretation of the next word? To test this, one of two words is presented, either "flower" or "stood". Subjects are asked to decide quickly whether the word is a real English word or not. If the subjects make use of the sentence context in their lexical decision task, the verb "rose" should speed decisions for "stood," because the two words are similar in meaning and syntax; if the context is not used, there should be no time difference between the verb "stood" and the noun "flower". Several investigators have found that for the first few hundred milliseconds, the sentence context has no influence at all (Swinney, 1979; Tanenhaus, Carlson, & Seidenberg, 1985). Thus it seems as if lexical access is autonomous and context-free for a few hundred milliseconds. After this period, prior context does influence the choice of interpretation.

Lexical access seems to involve a specialized unconscious system that is not influenced by other processes. This system, which has presumably developed over many years of practice, seems to be "modular" (Tanenhaus, Carlson, & Seidenberg, 1985). It looks like another example of a highly specialized, unconscious processor that is separate both from voluntary control and from other unconscious specialists. Similar evidence has been found for the modularity of other components of reading, such as syntax, letter-to-phoneme mapping, and eye-movement control.

Notice how unconsciousness and proficiency tend to go together. Specialized unconscious processors can be thought of as highly practiced and automatic skills. New skills are acquired only when existing skills do not work, and we tend to adapt existing skills to new tasks. Thus we usually have a coalition of processors, with mostly old subunits and some new components.

Automaticity often seems to be reversible. We have already discussed the finding by Pani that practiced images, which disappear from consciousness when the task is easy, become conscious again when it is made more difficult (1.xx). Probably subjects in Pani's imagery task could also use voluntary control to make the conscious images reappear. Although this is not widely investigated, informal demonstrations suggest that many automatized skills can become conscious again when they encounter some unpredictable obstacle. Consider the example of reading upside-down. It is very likely that normal reading, which is mostly automatic and unconscious, involves letter identification and the use of surrounding context to infer the identity of letters. When we read a sentence upside-down, this is exactly what begins to happen consciously. For example:
Bob the big bad newspaper boy did not quite quit popping the upside-down cork on the beer bottle.
This sentence was designed to have as many b's, d's, q's, and p's as possible, to create ambiguities that would be hard to resolve, and which therefore might need to be made conscious. In "newspaper" the reader may have used the syllable "news" to determine that the vertical stalks with circles were p's rather than b's, while the similar shape in "quite" may have been identified by the fact that q's in English are invariably followed by u's. This use of surrounding context is quite typical of the automatic reading process as well (Posner, 1982). It is well established, for example, that letters in a real-word context are recognized faster and more accurately than letters in a non-word context (Rumelhart & McClelland, 1982). The existence of de-automatization is one reason to believe that consciousness may be involved in debugging automatic processes that run into difficulties (Mandler, 1975; see 10.x).

We turn now to another source of evidence for specialized unconscious processors, coming from the study of errors in perception, action, memory, and thought.

2. Perceptual errors as evidence for specialized modules.
As we suggested above, perception is surely the premier domain of conscious experience (?). Nothing else can come close to it in richness of experience and accessibility. Ancient thinkers in Greece and India already argued for the five classical senses as separate systems that are integrated in some common domain of interaction. This is well illustrated by binocular interaction -- "cooperation and competition" between visual input to the two eyes. Binocular interaction has been studied by psychologists for more than a century. Under normal conditions the slightly different perspectives from the two eyes fuse experientially, so that one sees a single scene in depth. This phenomenon led to the invention of the stereoscope, in which two separate slides, showing slightly offset images of the same scene, are presented to each eye. With increased disparity, the viewer is conscious of a very strong, almost surrealistic sense of depth, as if one could simply reach out and grasp the image. In the last century this dramatic effect made the stereoscope a popular parlor entertainment. But when the images in the two visual fields are incompatible, the two perspectives begin to compete, and one or the other must dominate. When they differ in time, space, or color, we get binocular rivalry rather than binocular "cooperation"; fusion fails, and one image drives the other from consciousness. It is natural to think of all this in terms of cooperation or competition between two separable visual systems. Numerous other phenomena behave in this way, so that one can say generally that any two simultaneous stimuli can interact so as to fuse into a single experienced event; however, if the stimuli are too disparate in location, time of presentation, or quality, they will compete against each other for access to consciousness (Marks, 19xx).

This analysis seems to emphasize the decomposability of perception. Historically there have been two contending views of perception: one that emphasized decomposability, and one that stressed the integrated nature of normal perception (Kohler, 19xx; Mandler, 1975). The Gestalt psychologists were fervent advocates of the view that perception is not just the sum of its parts. In fact, these two conceptions need not be at odds. Modern theories involve both separate feature detection and integration (e.g. Rock, 1982; Rumelhart & McClelland, 1984). This book is based on the premise that perception and other conscious events are indeed decomposable, and that one major function of the system underlying consciousness is to unify these components into a single, coherent, integrated experience (Mandler, 1975; Treisman & Gelade, 1982). Thus, as we pursue the issue of decomposable features here, we are by no means excluding the well-established Gestalt phenomena.

Clear evidence has emerged in recent decades for "feature detectors" in perception. The phonemes of English can be described by a small number of perceptual features, such as voicing, place, and manner. Thus the phonemes /b, d, g/ are called "voiced," while /p, t, k/ are "unvoiced." These are essentially perceptual features -- they are not derived from analyzing the physical signal, but from studies of the experience of the speakers of the language. Linguists discover phonemes and their features by asking native speakers to contrast pairs of otherwise similar words, like "tore/door", "pad/bad", etc. At the acoustical and motor level these words differ in many thousands of ways, but at the level of phonemes there is a dramatic reduction for any language to an average of 25 - 30 phonemes; these in turn can be reduced to less than ten different feature dimensions.

A detailed study of sound and motor control in fluent speech shows that each feature is very complex, extremely variable between speakers, occasions, and linguistic contexts, and difficult to separate from other features (Jenkins ref; Liberman,). For example, the /t/ in "tore" is pronounced quite differently from the /t/ in "motor" or in "rot". Yet English speakers consider these different sounds to belong to the same perceptual event. Confusions between phonemes in perception and short-term memory follow the features, so that "t's" are confused with "d's" far more often than they are confused with "l's" (Miller, 19xx). The complexity of phonemes below the level of perception implies that the neural detectors for these elements are not single neurons, but rather complex "processors" -- populations of specialized neurons, which ultimately trigger a few abstract phonetic feature detectors.
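The feature analysis above can be illustrated with a toy feature table. The coding below is a simplified sketch, not a full phonology: the prediction it captures is that phonemes sharing more features are confused more often, so /t/ and /d/ (differing only in voicing) are closer than /t/ and /l/.

```python
# Toy sketch of phonemes as bundles of distinctive features. Phonemes
# that share more features should be confused more often in perception
# and short-term memory. The feature coding is simplified for illustration.

FEATURES = {
    "t": {"voiced": False, "place": "alveolar", "manner": "stop"},
    "d": {"voiced": True,  "place": "alveolar", "manner": "stop"},
    "l": {"voiced": True,  "place": "alveolar", "manner": "lateral"},
    "p": {"voiced": False, "place": "labial",   "manner": "stop"},
    "b": {"voiced": True,  "place": "labial",   "manner": "stop"},
}

def feature_distance(a, b):
    """Number of feature dimensions on which two phonemes differ."""
    fa, fb = FEATURES[a], FEATURES[b]
    return sum(fa[k] != fb[k] for k in fa)

print(feature_distance("t", "d"))  # 1: differ only in voicing
print(feature_distance("t", "l"))  # 2: differ in voicing and manner
print(feature_distance("t", "p"))  # 1: differ only in place
```

On this account the observed confusion pattern -- "t" confused with "d" far more often than with "l" -- falls out of the smaller feature distance.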

Neurons that seem to act as feature detectors have been discovered in the visual system. The most famous work along these lines is by Hubel & Wiesel (1959), who found visual neurons in the cortex that are exclusively sensitive to line orientation, to a light center in a dark surround, or to a dark center in a light surround. There are alternative ways to interpret this neurophysiological evidence, but the most widely accepted interpretation is that these neurons are feature detectors.

One argument against this approach is that features are demonstrably context-sensitive. For example, letters in the context of a word are easier to recognize than letters in a nonsense string (Rumelhart & McClelland, 1982). There are many demonstrations of this kind, showing that contextual information helps in detecting features at all levels and in all sensory systems (see Chapters 4 and 5). Thus features do not function in isolation. However, recent models of word perception combine features with contextual sensitivity, so that again, the ability to separate components and the ability to synthesize them are compatible with each other.

Some fascinating recent work shows that even "simple" visual percepts involve integration of different component systems. Treisman & Gelade (1980) give a number of empirical arguments for visual features, including the existence of perceptual errors in which features are switched. When people see rapid presentations of colored letters, they mistakenly switch colors between different letters (Treisman & Schmidt, 1982). In a very similar situation, Sagi & Julesz (1985) have shown that the location and orientation of short lines are often interchanged. Analogous phenomena have been found in the auditory system. All these facts suggest that perception can be viewed as the product of numerous highly specialized systems, interacting with each other to create an integrated conscious experience. Under some conditions this interaction seems to take up central limited capacity, a capacity that is closely associated with attention and conscious experience (see Chapter 2). For our purposes there are two cardinal facts to take into account: first, perceptual events result from decomposable specialized systems, or modules; and second, these systems interact in such a way that "the whole is different from the sum of its parts" (Kohler, 19xx).
One can point to several cases where such components seem to compete or cooperate for access to central limited capacity. These points can be generalized from perception to other psychological tasks, as we shall see next.

3. Performance errors as evidence for specialized modules.
Slips are errors that we make in spite of knowing better. They are different from the mistakes that we make from ignorance. If we make a spoonerism, such as the Reverend Spooner's famous slip "our queer old dean" instead of "our dear old queen", the mistake is not due to ignorance -- the correct information is available, but it fails to influence the act of speaking in time to make a difference. Thus slips of speech and action inherently involve a dissociation between what we do and what we know (Baars, 1985 and in press). This is one reason to believe that slips always involve separable specialized processors. Slips of speech and action generally show a pattern of decomposition along natural fault lines. Errors in speech almost always involve units like phonemes, words, stress patterns, or syntactic constituents -- the standard units of language (Fromkin, 1973, 1980; Baars, in press). We do not splutter randomly in making these errors. This is another reason to think that actions are made up of these units, which sometimes fall apart along the natural lines of cleavage.

Action errors suggest the same sort of thing. For instance, many spontaneous action errors collected by Reason (1984) involve the insertion, deletion, or exchange of coherent subunits of an action. Consider the following examples:

(1) "I went into my room intending to fetch a book. I took off my rings, looked in the mirror and came out again -- without the book." (Deletion error.)
(2) "As I approached the turnstile on my way out of the library, I pulled out my wallet as if to pay -- although no money was required." (Insertion error.)

(3)"During a morning in which there had been several knocks at my office door, the phone rang. I picked up the receiver and bellowed 'Come in' at it." (Insertion error.)

(4)"Instead of opening a tin of Kit-E-Kat, I opened and offered my cat a tin of rice pudding." (Component exchange -- a "behavioral spoonerism".)

(5)"In a hurried effort to finish the housework and have a bath, I put the plants meant for the lounge in the bedroom, and my underwear in the window of the lounge." (Component exchange.)

In all five errors, action components are inserted, deleted, and exchanged in a smooth, normal, seemingly volitional fashion. This suggests that normal action may be organized in terms of such subunits -- i.e., actions may be made up of modular parts. Reason (1984) calls these modules the "action schemata," which, he writes, "can be independently activated, and behave in an energetic and highly competitive fashion to try to grab a piece of the action." That is to say, action schemata seem to compete for the privilege of participating in an action, to the point where they sometimes enter into the wrong context, as in errors (2) - (5) above. This claim is consistent with a widespread conviction that the detailed control of action is decentralized or "distributed", so that much of the control problem is handled by local processes (Arbib, 1982; Greene, 1972; Gelfand, Gurfinkel, Fomin, & Tsetlin, 1971; Baars, 1980b, 1983). It is also consistent with findings about the autonomy of highly practiced skills that have become automatized and largely unconscious (above). Normal actions, of course, combine many such highly practiced skills into a single, purposeful whole.
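Reason's picture of schemata that "behave in an energetic and highly competitive fashion" can be sketched in a few lines of code. Everything here -- the schema names, triggers, and activation numbers -- is an illustrative assumption, not part of Reason's account:

```python
# Minimal sketch: action schemata compete for "a piece of the action";
# the schema with the highest activation fires, whether or not it
# matches the current context.

class Schema:
    def __init__(self, name, trigger, activation=0.0):
        self.name = name
        self.trigger = trigger      # the context that normally evokes it
        self.activation = activation

def select_schema(schemas, context):
    """Boost schemas matching the context, then let the strongest win."""
    for s in schemas:
        if s.trigger == context:
            s.activation += 1.0
    return max(schemas, key=lambda s: s.activation)

schemas = [
    Schema("answer-phone", trigger="phone-rings"),
    # Repeated knocks at the door have left this schema highly primed:
    Schema("say-come-in", trigger="knock", activation=1.5),
]

# The phone rings, but the over-primed schema captures the action,
# roughly reproducing error (3) above.
winner = select_schema(schemas, context="phone-rings")
print(winner.name)  # say-come-in
```

The point of the sketch is only that decentralized competition, with no central controller inspecting each step, is enough to produce context-intrusion errors of the kind Reason collected.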
4. Specialized modules in language processing.
It is widely believed that understanding a spoken sentence involves a series of structural levels of analysis. Analysis proceeds from acoustic representations of the sound of the speaker's voice to a more abstract string of phonetic symbols; these symbol strings specify words and morphemes, which can in turn be coded in syntactic terms to represent the subject, predicate, and object of the sentence; this information is then interpreted in the context of a complex representation of meaning, which permits inferences about the intentions of the speaker in saying the sentence (Figure 1.44). In recent years much progress has been made in understanding and simulating such fast, symbolic, intelligent, rule-based systems. Visual processing has been subjected to a similar analysis (e.g. Marr, 1982). In general, the dominant approach to human language and visual processing today involves a series of specialized modules, whose internal workings are to some extent isolated from the outside. Each level of analysis is very complex indeed. We have already considered lexical access, which involves all of the words in one's recognition vocabulary (perhaps 50,000 words for many people), plus the semantic relationships between them.

Insert Figure 1.44 about here.

While the specialized levels are separable, they often need to work together in decoding a sentence, and not necessarily in a rigid, unvarying order. When a syntactic processor runs into an ambiguity it cannot resolve, it must be able to call upon the semantic processor for information (Winograd, 1972; Reddy and Newell, 1974). If we are given the ambiguous sentence "old men and women are delightful," we must use our best guess about the speaker's meaning to decide whether "old (men and women) are delightful" or "(old men) and women are delightful". Empirical evidence for this kind of cooperative interaction between different specialized systems has been found by Marslen-Wilson & Welsh (1979).

Thus the different specialized levels have a kind of separate existence; and yet they must be able to cooperate in analyzing some sentences as if they were one large, coherent system. This seems to be a general characteristic of specialized modules, that they can be decomposed and recomposed with great flexibility, depending on the task and context. Thus there may be different configurations of the linguistic "hierarchy" for speech analysis, speech production, linguistic matching tasks, etc.
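The kind of cooperative interaction described above can be sketched as a toy program in which a syntactic module consults a semantic module only when it faces an ambiguity it cannot resolve on its own. The candidate parses and plausibility scores are invented for illustration:

```python
# Sketch of cooperation between specialized modules (cf. Winograd, 1972):
# syntax produces candidate parses; semantics is consulted only on ties.

def semantic_processor(parse):
    """Rate how plausible a candidate interpretation is (toy scores)."""
    plausibility = {
        "old (men and women) are delightful": 0.8,
        "(old men) and women are delightful": 0.4,
    }
    return plausibility[parse]

def syntactic_processor(sentence):
    """Return candidate parses; here both readings are legal."""
    if sentence == "old men and women are delightful":
        return ["old (men and women) are delightful",
                "(old men) and women are delightful"]
    return [sentence]

def understand(sentence):
    parses = syntactic_processor(sentence)
    if len(parses) == 1:
        return parses[0]          # syntax alone suffices
    # Ambiguity: call on the semantic specialist to break the tie.
    return max(parses, key=semantic_processor)

print(understand("old men and women are delightful"))
```

Note that the modules keep their separate existence -- each could be replaced or retrained alone -- yet for this sentence they behave as one large, coherent system.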

We are certainly not conscious of such rapid and complex processes. In a reasonable sense of the word, each of these specialized rule-systems must be intelligent: it appears to be fast, efficient, complex, independent, symbolic, and functional. These are all aspects of what we usually call intelligence.

5. Other sources of evidence for specialized processors.

Memory: dissociation of access.

There are many examples of dissociated access in memory. Perhaps the most obvious is the "tip-of-the-tongue" phenomenon, in which a word that is readily available most of the time is frustratingly out of reach. There is some evidence that current states of mind like mood act to bias access to mood-relevant information and make it difficult to reach irrelevant material (Bower & Cohen, 1982). These differences can become extreme in hypnotic or post-traumatic amnesias, which do not involve a total loss of the original information, but a loss of voluntary access to it (Jacoby & Witherspoon, 1982). Under some conditions these dissociated memories can be recovered. Indeed, most of our memory may consist of isolated islands of material.

One of the most interesting aspects of dissociation is the way in which automatic skills and islands of knowledge become unavailable to voluntary recall. Consider: in typing, which finger is used to type the letter "g"? Most people must consult their fingers to find out the answer, even if they have performed the action thousands of times; and indeed, in beginning to type, they may have known it quite voluntarily. As we gain automaticity in some skill, we also lose access to it in voluntary recall. Thus Langer & Imber (1979) found that after only a few trials of a letter-coding task, subjects reported a loss of consciousness of the task. Thereafter they could no longer report the number of steps in the task, and lost the ability to monitor their own effectiveness (see Chapter 7).
Dissociation of knowledge.
Finally, there is good evidence that knowledge is often fragmented. Cognitive scientists studying everyday knowledge have been surprised by the extent to which scientific reasoning by even very advanced students is lost when the same students are asked to explain everyday phenomena (?). This is well illustrated with a little puzzle presented by Hutchins (?). Every educated person "knows" that the earth turns on its axis and goes around the sun during the year. Now suppose there is a man standing on top of a mountain at dawn, pointing at the sun just as it peeks above the Eastern horizon. He stays rooted to the spot all day, and points again at the sun as night falls, just as it is about to go down in the West. Obviously we can draw one line from the man to the sun at dawn, and another from the man to the sun at sundown. Where do the two lines intersect? Most people, including scientifically sophisticated people, seem to think the two lines intersect in the man, who has been standing on the same spot on the mountain all day. This answer is wrong -- he has changed position, along with the mountain and the earth as a whole -- he has moved even while standing still. It is the sun that has stayed in roughly the same position while the earth turned, so that the two lines intersect in the sun only.

The fact that so many people cannot solve this little puzzle indicates that we have two schemata for thinking about the relations between the sun and the earth. When confronted with an educated question, we claim, certainly, that the earth turns around its axis during the day. But when we take an earth- centered perspective we see the sun "traveling through the sky" during the day, and revert to a pre-Copernican theory. In this commonsense theory, the sun "rises" in the morning and "goes down" in the evening. There is nothing wrong with this perspective, of course. It serves us quite well most of the time. We only run into trouble when the two stories contradict each other, as they do in the little puzzle. There is much more evidence of this kind that knowledge is actually quite fragmented, and that we switch smoothly between different schemas when it suits our purposes to do so. (Notice, by the way, that the contradictions between the two accounts may cause us to make the problem conscious; without such contradictions we seem to go blithely along with several different schemas.)

In sum, there is evidence for separate functional units from neurophysiology, especially from the study of brain damage; and in psychology, from studies of the acquisition of automaticity of any practiced skill, of perception, imagery, memory, action, language, and knowledge representation. All these sources of evidence suggest there are indeed many intelligent, unconscious processors in the nervous system.

1.45 General properties of specialized processors.

Having established the existence of specialized unconscious processors, we shall have very little to say about their inner workings. There is now a vast scientific literature about specialized processes in vision, language, memory, and motor control, which has made major strides in working out these details (see the references cited above). In this book we cannot do justice to even one kind of unconscious specialist, and we will not try. Rather, we treat specialists here as the "bricks" for building an architecture of the nervous system, concentrating on the role of conscious experience in this architecture. Of course, we must specify in general what these bricks are like.

We can illustrate many elements that specialists have in common using the example of action schemata. Action schemata seem to be unitary at any one time. It makes sense to think that a complex action schema can often be called on as a whole to perform its function. In the act of leaping on a bicycle we cannot wait to gather the separate components of spatial orientation, control of the hands and feet, balance, and vision. Instead, we seem to call in an instant on a single "bicycle riding schema", one that will organize and unify all the components of bicycle riding. However, in getting off the bicycle it makes sense to decompose the bicycle-riding schema, so that parts of it become available for use in standing, walking, and running. These other kinds of locomotion also require general skills like spatial orientation, motor control, balance, and vision. It makes sense to adapt general skills for use in a variety of similar actions. Further, if something goes wrong while we are riding the bicycle -- if we lose a piece of the left pedal -- we must be able to decompose the action-as-a-whole, in order to find the part of the bicycle riding skill that must be altered to fix the problem.

Evidently we need two abilities that seem at odds with each other: the ability to call on complex functions in a unitary way, and also the ability to decompose and reorganize the same functions when the task or context changes. The first property we will call functional unity, and the second, variable composition. We will list these and other general properties next.

1. Functional unity.
At any one time a coalition of processors that act in the service of some particular goal will tend to act as a single processor. That is, the coalition will have cohesion internally and autonomy or dissociation with respect to external constraints. This is sometimes called a high internal bandwidth of communication, and a low external bandwidth. Specialists are sometimes said to be hierarchically organized internally, though we will prefer the term recursive (2.xx). These are defining properties of modularity.
2. Distributed nature of the overall system.
If the nervous system can be thought to consist of a large number of specialized processors, the details of processing are obviously not handled by some central control system, but by the specialists themselves.
3. Variable composition.
Specialized processors are like Chinese puzzle boxes: they are structured recursively, so that a processor may consist of a coalition of processors, which in turn may also be a member of a larger set of processors that can act as a single chunk. We should not expect to define a processor independently of task and context, though some tasks may be so common that they need generalized, relatively invariant processors.
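The "Chinese puzzle box" structure can be sketched as a composite: a processor is either an atomic specialist or a coalition that runs as a single unit, and the same part can be recomposed into different coalitions. The skill names are illustrative assumptions, echoing the bicycle example above:

```python
# Sketch of variable composition: a recursive (composite) processor.

class Processor:
    def __init__(self, name, parts=None):
        self.name = name
        self.parts = parts or []   # empty for an atomic specialist

    def run(self):
        """A coalition runs by running its parts; it acts as one unit."""
        if not self.parts:
            return [self.name]
        return [step for p in self.parts for step in p.run()]

balance = Processor("balance")
steering = Processor("steering")
vision = Processor("vision")

# Called on as a whole when leaping onto the bicycle...
bicycle_riding = Processor("bicycle-riding", [balance, steering, vision])
print(bicycle_riding.run())  # ['balance', 'steering', 'vision']

# ...and decomposed when dismounting, so that a part can be reused
# in a different coalition:
walking = Processor("walking", [balance, vision])
print(walking.run())  # ['balance', 'vision']
```

Because coalitions and atoms share one interface, a caller cannot tell whether it is invoking a single specialist or a whole hierarchy -- which is the sense in which a coalition "acts as a single processor."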
4. Limited adaptability.
Within narrow limits, specialized processors can adapt to novel input. One of the costs of specialization is that a syntax processor cannot do much with vision, and a motor processor is stumped when given a problem in arithmetic. But all processors must be able to change their parameters, and to dissociate and re-form into new processing coalitions (that then begin to behave as single processors) when conditions call for adaptation. We see this sort of reorganization when the visual field is experimentally rotated or transformed in dramatic ways, when motor control is transformed by shifting from driving an automobile with a manual transmission to one with automatic transmission, or when a brain damaged patient learns to achieve his goals by the use of new neuronal pathways (e.g.). At a simpler level we see adaptation of specialized processors when a syllable like /ba/ is repeated over and over again, and the distinctive-feature boundary between /ba/ and /pa/ shifts as a result (?).

These points illustrate that processors may in part be mismatch-driven. That is to say, they must be able to adapt whenever the predictions they make about the world are violated, and it is even possible that many processors remain essentially passive unless such violations occur (see Chapter 5). We could speak of these processors as being mismatch-addressable.
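A mismatch-addressable processor can be sketched as follows; the tolerance and adjustment rate are assumed numbers chosen only for illustration:

```python
# Sketch of a "mismatch-addressable" processor: it stays passive while
# its prediction about the world holds, and becomes active (and adapts)
# only when the prediction is violated.

class MismatchProcessor:
    def __init__(self, prediction, tolerance=0.1, rate=0.5):
        self.prediction = prediction
        self.tolerance = tolerance
        self.rate = rate
        self.activations = 0       # how often the processor has "woken up"

    def observe(self, value):
        error = abs(value - self.prediction)
        if error <= self.tolerance:
            return False           # prediction confirmed: stay passive
        # Prediction violated: wake up and adjust toward the input.
        self.activations += 1
        self.prediction += self.rate * (value - self.prediction)
        return True

p = MismatchProcessor(prediction=1.0)
p.observe(1.05)   # within tolerance: the processor never stirs
p.observe(2.0)    # violation: the processor activates and adapts
print(p.activations, p.prediction)  # 1 1.5
```

On this picture most processors would remain quiet most of the time, coming into play only when the world departs from their expectations (see Chapter 5).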

5. Goal-addressability.
While processors such as action schemata are unconscious and automatic, they appear to act in the service of goals that are sometimes consciously accessible. Indeed, action schemata can be labeled most naturally by the goal or subgoal which they appear to subserve. Error (1) above is a failure of a goal that may be called "fetch book". Error (2) is an inappropriate execution of the goal "pull out wallet". And so on. Each of these actions could be described in many different ways -- in terms of physical movements, in terms of muscle groups, etc. But such descriptions would not capture the error very well. Only a description of the error in terms of goals met and goals unachieved reveals the fact that an error is an error. Thus, action schemata appear to be goal-addressable, though the goals are not necessarily conscious in detail. The fact that with biofeedback training one can gain voluntary control over essentially any population of neurons suggests that other functional processors are also goal-addressable (x.x).
6. The unconscious and involuntary nature of specialized processors.
Control of specialized functions is rarely accessible to conscious introspection. Try wiggling your little finger. What is conscious about this? The answer seems to be, "remarkably little". We may have some kinesthetic feedback sensation; some sense of the moment of onset of the action; perhaps a fleeting image of the goal a moment before the action occurs. But there is no clear sense of commanding the act, no clear planning process, certainly no awareness of the details of action. Wiggling a finger seems simple enough, but its details are not conscious the way perceptual events are, such as the sight of a pencil or the sound of a spoken word. Few people know where the muscles that move the little finger are located (they are not in the hand, but in the forearm). But that does not keep us from wiggling our fingers at will. No normal speaker of English has conscious knowledge of the movements of the jaw, tongue, velum, glottis, and vocal cords that are needed to shape a single spoken syllable. It is remarkable how well we get along without retrievable conscious knowledge of our own routine actions. Greene (1972) calls this property executive ignorance, and maintains that it is true of many distributed control systems (see Chapter 7).
We can sum up all these points by saying that specialists are functionally unified or modular. That means that detailed processing in the overall system is widely decentralized or distributed. Each module may be variably composed and decomposed, depending on the guiding goals and contexts. Specialized processors may be able to adapt to novel input, but only within narrow limits. Adaptation implies that specialized processors are sensitive to mismatches between their predictions and reality, that they are, in a sense, "mismatch-addressable". At least some specialists are also goal-addressable; perhaps all of them can be trained to be goal-directed with biofeedback training. We are not conscious of the details of specialized processors, suggesting that executive control processes are relatively ignorant of specialized systems.

1.5 Some common themes in this book.

The remainder of this book will be easier to understand if the reader is alert to the following themes.

1.51 Conscious experience reflects an underlying limited-capacity system.

Conscious events always load non-specific limited capacity, but not all limited-capacity events can be experienced consciously. There seem to be events that compete with clearly conscious ones for limited capacity, but which are not reportable in the way the reader's experience of these words is reportable. It appears therefore that conscious experience may be one "operating mode" of an underlying limited-capacity system; and that is indeed a reasonable way to interpret the Global Workspace architecture which we will develop in the next chapter. The question then is, "in addition to loading limited-capacity, what are the necessary conditions for conscious experience?" We will suggest several in the course of this book, and summarize them in the final chapter (11.x).
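As a preview of the Global Workspace architecture developed in Chapter 2, the "operating mode" idea can be sketched as a toy program: many specialists, a limited-capacity workspace that holds only one message at a time, and a broadcast of the winning message to all specialists. All names and strength values are placeholders, not part of the theory itself:

```python
# Toy sketch of a global workspace: competing inputs, one winner,
# and a global broadcast to every subscribed specialist.

class Workspace:
    def __init__(self):
        self.specialists = []

    def subscribe(self, fn):
        self.specialists.append(fn)

    def compete(self, messages):
        """Limited capacity: only the strongest message is broadcast."""
        winner = max(messages, key=lambda m: m[1])   # (content, strength)
        return [fn(winner[0]) for fn in self.specialists]

gw = Workspace()
gw.subscribe(lambda msg: f"speech system heard: {msg}")
gw.subscribe(lambda msg: f"memory system stored: {msg}")

# Two inputs compete for the workspace; only one reaches every
# specialist, modeling the limited-capacity bottleneck.
replies = gw.compete([("phone ringing", 0.9), ("clock ticking", 0.2)])
print(replies)
```

In this caricature, "loading limited capacity" corresponds to entering the competition at all, while "becoming conscious" corresponds to winning it and being broadcast -- which is one way to read the claim that not all limited-capacity events are experienced consciously.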

1.52 Every conscious event is shaped by a number of enduring unconscious systems which we will call "contexts".

This fundamental issue runs throughout this book. We treat a context as a relatively enduring system that shapes conscious experience, access, and control, without itself becoming conscious. The range of such contextual influences is simply enormous. In knowing the visual world, we routinely assume that light shines from above. As a result, when we encounter an ambiguous scene, such as a photograph of moon craters, we tend to interpret the craters as bumps rather than hollows when the sun's rays come from the bottom of the photo (Rock, 1983). The assumed direction of the light is unconscious of course, but it profoundly influences our conscious visual experience. There are many cases like this in language perception and production, in thinking, memory access, action control, and the like. The contrasts between unconscious systems that influence conscious events and the conscious experiences themselves provide demanding constraints on any theory of conscious experience.

Theoretically, we will treat contexts as coalitions of unconscious specialized processors that are already committed to a certain way of processing their information, and which have ready access to the Global Workspace. Thus they can compete against, or cooperate with, incoming global messages. There is no arbitrariness to the ready global access which contexts are presumed to have. Privileged access to the Global Workspace simply results from a history of cooperation and competition with other contexts, culminating in a hierarchy of contexts that dominates normal access to the Global Workspace (x.x).

We may sometimes want to treat "context" not as a thing but as a relationship. We may want to say that the assumption that light comes from above "is contextual with respect to" the perception of concavity in photographs of the moon's craters (Rock, 1983), or that a certain implicit moral framework "is contextual with respect to" one's feelings of self-esteem. In some models context is a process or a relational event -- part of the functioning of a network that may never be stated explicitly (Rumelhart & McClelland, 1984). In our approach we want to have contexts "stand out", so that we can talk about them, and symbolize them in conceptual diagrams. There is no need to become fixated on whether context is a thing or a relationship. In either case contextual information is something unconscious that profoundly shapes whatever becomes conscious (4.0, 5.0).

1.53 Conscious percepts and images are qualitative events, while consciously accessible intentions, expectations, and concepts are non-qualitative contents.

As we indicated above (1.25), people report qualitative conscious experiences of percepts, mental images, feelings, and the like. In general, we can call these perceptual or imaginal. Qualitative events have experienced qualia like warmth, color, taste, size, discrete temporal beginnings and endings, and location in space. There is a class of representations that is not experienced like percepts or images, but which we will consider to be "conscious" when they can be accurately reported. Currently available beliefs, expectations, and intentions -- in general, conceptual knowledge -- provide no consistent qualitative experience (Natsoulas, 1982). Yet qualitative and non-qualitative conscious events have much in common, so that it is useful to talk about both as "conscious". But how do we explain the difference? Concepts, as opposed to percepts and images, allow us to get away from the limits of the perceptual here-and-now, and even from the imaginable here-and-now, into abstract domains of representation. Conceptual processes commonly make use of imagined events, but they are not the same as the images and inner speech that they may produce. Images are concrete, but concepts, being abstract, can represent the general case of some set of events. However, abstraction does not tell the whole story, because we can have expectations and "set" effects even with respect to concrete stimuli (e.g. Bruner & Potter, 195x; 4.0). Yet these expectations are not experienced as mental images. The opposition between qualitative and non-qualitative "conscious" events will provide a theme that will weave throughout the following chapters. Finally in Chapter 7 we will suggest an answer to this puzzle, which any complete theory of consciousness must somehow address.

Both qualitative perceptual/imaginal events and non- qualitative "conceptual" events will be treated as conscious in this book. The important thing is to respect both similarities and differences as we go along, and ultimately to explain these as best we can (4.0, 6.0, 7.0).

1.54 Is there a lingua franca, a trade language of the mind?

If different processors have their own codes, is there a common code understood by all? Does any particular code have privileged status? Fodor (1979) has suggested that there must be a lingua mentis, as it was called in medieval philosophy, a language of the mind. Further, at least one mental language must be a lingua franca, a trade language like Swahili, or English in many parts of the world. Processors with specialized local codes face a translation trade-off that is not unlike the one we find in international affairs. The United Nations delegate from the Fiji Islands can listen in the General Assembly to Chinese, Russian, French or English versions of a speech; but none of these may be his or her speaking language. Translation is a chore, and a burden on other processes. Yet a failure to take on this chore presents the risk of failing to understand and communicate accurately to other specialized domains. This metaphor may not be far-fetched. Any system with local codes and global concerns faces such a trade-off.

We suggest later in this book that the special role of "qualitative" conscious contents -- perception and imagination -- may have something to do with this matter. In Chapter 2 we argue that conscious contents are broadcast very widely in the nervous system. This is one criterion for a lingua franca. Further, some conscious events are known to penetrate to otherwise inaccessible neural functions. For example, it was long believed that autonomic functions were quite independent from conscious control. One simply could not change heart-rate, peristalsis, perspiration, and sexual arousal at will. But in the last decade two ways to gain conscious access to autonomic functions have been discovered. First, autonomic functions can be controlled by biofeedback training, at least temporarily. Biofeedback always involves conscious perceptual feedback from the autonomic event. Second and even more interesting, these functions can be controlled by emotionally evocative mental images -- visual, auditory, and somatic -- which are, of course, also qualitative conscious events. We can increase heart-rate simply by vividly imagining a fearful, sexually arousing, anger-inducing, or effortful event, and decrease it by imagining something peaceful, soothing, and supportive. The vividness of the mental image -- its conscious, qualitative availability -- seems to be a factor in gaining access to otherwise isolated parts of the nervous system.

Both of these phenomena provide support for the notion that conscious qualitative percepts and images are involved in a mental lingua franca. We suggest later in this book that all percepts and images convey spatio-temporal information, which is known to be processed by many different brain structures (refs; 3.x). Perceived and imagined events always reside in some mental place and time, so that the corresponding neural event must encode spatial and temporal information (Kosslyn, 1980). A spatio-temporal code may provide one lingua franca for the nervous system. Finally, we will suggest that even abstract concepts may evoke fleeting mental images (7.xx).

1.55 Are there fleeting "conscious" events that are difficult to report, but which have observable effects?

William James waged a vigorous war against the psychological unconscious, in part because he believed that there are rapid "conscious" events which we simply do not remember, and which in retrospect we believe to be unconscious. There is indeed good evidence that we retrospectively underestimate our awareness of most events (Pope & Singer, 1978). We know from the Sperling phenomenon (1.x) that people can have fleeting access to many details in visual memory which they cannot retrieve a fraction of a second later. Further, there are important theoretical reasons to suppose that people may indeed have rapid, hard-to-recall conscious "flashes," which have indirect observable effects (7.0). But making this notion testable is a problem.

There are other sources of support for the idea of fleeting conscious events. In the "tip-of-the-tongue" phenomenon people often report a fleeting conscious image of the missing word, "going by too quickly to grasp." Often we feel sure that the momentary image was the missing word, and indeed, if people in such a state are presented with the correct word, they can recognize it very quickly and distinguish it from incorrect words, suggesting that the fleeting conscious "flash" was indeed accurate (Brown & McNeill, 1966). Any expert who is asked a novel question can briefly review a great deal of information that is not entirely conscious, but that can be made conscious at will, to answer the question. Thus a chess master can give a quick, fairly accurate answer to the question, "Did you ever see this configuration of chess pieces before?" (Newell & Simon, 1972) Some of this quick review process may involve semi-conscious images. And in the process of understanding an imageable sentence, we sometimes experience a fleeting mental image, flashing rapidly across the Mind's Eye like a darting swallow silhouetted against the early morning sky -- just to illustrate the point.

One anecdotal source of information about conscious "flashes" comes from highly creative people who have taken the trouble to pay attention to their own fleeting mental processes. Albert Einstein was much interested in this topic, and discussed it often with his friend Max Wertheimer, the Gestalt psychologist. In reply to an inquiry Einstein reported: "The words or the language, as they are written or spoken, do not seem to play any role in my mechanism of thought. The psychical entities which seem to serve as elements in thought are certain signs and more or less clear images which can be "voluntarily" reproduced and combined. ... this vague ... combinatory play seems to be the essential feature in productive thought. ... (These elements) are, in my case, of visual and some of muscular type. 
Conventional words or other signs have to be sought for laboriously only in a secondary stage, when the ... associative play is sufficiently established and can be reproduced at will. (But the initial stage is purely) visual and motor ... " (Ghiselin, 1952; p. 43; italics added).

About the turn of the century many psychologists tried to investigate the fleeting images that seem to accompany abstract thought. As Woodworth and Schlossberg (1954) recall:

"When O's (Observers) were asked what mental images they had (while solving a simple problem) their reports showed much disagreement, as we should expect from the great individual differences found in the study of imagery ... Some reported visual images, some auditory, some kinesthetic, some verbal. Some reported vivid images, some mostly vague and scrappy ones. Some insisted that at the moment of a clear flash of thought they had no true images at all but only an awareness of some relationship or other "object" in (a) broad sense. Many psychologists would not accept testimony of this kind, which they said must be due to imperfect introspection. So arose the 'imageless-thought' controversy which raged for some years and ended in a stalemate."

The possibility of fleeting conscious flashes raises difficult but important questions. Such events, if they exist, may not strictly meet our operational criterion of accurate, verifiable reports of experienced events. We may be able to test their existence indirectly with dual-task measures, to record momentary loading of limited capacity. And we may be able to show clear conscious flashes appearing and disappearing under well-defined circumstances. Pani's work (1982; 1.x) shows that with practice, mental images tend to become unconscious, even though the information in those images continues to be used to perform a matching task. Further, the images again become conscious and reportable when the task is made more difficult. Perhaps there is an intermediate stage where the images become more and more fleeting, but are still momentarily conscious. People who are trained to notice such fleeting events may be able to report their existence more easily than those who ignore them -- but how can we test the accuracy of their reports?
The evidence for fleeting glimpses of inner speech is weaker than the evidence for automatic images. Some clinical techniques that are based on the recovery of automatic thoughts are quite effective in treating clinical depression and anxiety (Beck, 1976). It is hard to prove, however, that the thoughts patients seem to recover to explain sudden irrational sadness or anxiety are in fact the true, underlying automatic thoughts. Perhaps patients make them up to rationalize their experience, to make it seem more understandable and controllable. In principle, however, it is possible to run an experiment much like Pani's (1982) to test the existence of automatic, fleetingly conscious thoughts. In the remainder of this book we work to build a solid theoretical structure that strongly implies the existence of such fleeting "conscious" events. We consequently predict their existence, pending the development of better tools for assessing them (7.0).

Should we call such quick flashes, if they exist, "conscious"? Some would argue that this is totally improper, and perhaps it is (B. Libet, personal communication). A better term might be "rapid, potentially conscious, limited-capacity-loading events." Ultimately, of course, the label matters less than the idea itself and its measurable consequences. This issue seems at first to complicate matters, but later in this book it will help to solve several interesting puzzles (7.0).

1.6 A summary and a look ahead.

We have sketched an approach to the problem of understanding conscious experience. The basic method is to gather firmly established contrasts between comparable conscious and unconscious processes, and to use them to constrain theory. As we do this we shall find that the basic metaphors used traditionally to describe the various aspects of conscious experience -- the Activation Metaphor, the Tip-of-the-Iceberg Metaphor, the Novelty Hypothesis, and the Theater Metaphor -- are still very useful. All of the traditional metaphors contain some truth. The whole truth may include all of them, and more.

In the next chapter we develop the evidence for our first-approximation theory, Model 1 of the Global Workspace theory. After considering its neurophysiological implications in Chapter 3, we discover a need to add an explicit role for unconscious contexts in the shaping and direction of conscious experience (Chapters 4 and 5, Models 2 and 3). The discussion of unconscious guiding contexts leads to a natural theoretical conception of goal-directed activity and voluntary control (Chapters 6 and 7, Models 4 and 5), and finally to an integrated conception of attention, reflective consciousness, and self (Chapters 8 and 9). Chapter 10 sums up the adaptive functions of conscious experience, and the last chapter provides a short review of the entire book.
