Jerry Fodor
The Mind Doesn't Work That Way:
The Scope and Limits of Computational Psychology

Cambridge, MA: MIT Press, July 2000

Introduction

Over the years, I’ve written a number of books in praise of the Computational Theory of Mind (CTM often hereinafter). It is, in my view, far the best theory of cognition that we’ve got; indeed, the only one we’ve got that’s worth the bother of a serious discussion. There are facts about the mind that it accounts for and that we would be utterly at a loss to explain without it; and its central idea—that intentional processes are syntactic operations defined on mental representations—is strikingly elegant. There is, in short, every reason to suppose that the Computational Theory is part of the truth about cognition. 1

But it hadn’t occurred to me that anyone could suppose that it’s a very large part of the truth; still less that it’s within miles of being the whole story about how the mind works. (Practitioners of Artificial Intelligence have sometimes said things that suggest they harbor such convictions. But, even by its own account, AI was generally supposed to be about engineering, not about science; and certainly not about philosophy.) So, then, when I wrote books about what a fine thing CTM is, I generally made it a point to include a section saying that I certainly don’t suppose that it could comprise more than a fragment of a full and satisfactory cognitive psychology; and that the most interesting—certainly the hardest—problems about thinking are unlikely to be much illuminated by any kind of computational theory we are now able to imagine. I guess I sort of took it for granted that even we ardent admirers of computational psychology were more or less agreed on that.

I am now, however, disabused of taking that for granted. A couple of years ago, The London Review asked me to write about two new books, each of which summarized and commended a theory that is increasingly influential in cognitive science: Steven Pinker’s How the Mind Works and Henry Plotkin’s Evolution in Mind. These books suggest, in quite similar terms, how one might combine CTM with a comprehensive psychological nativism and with biological principles borrowed from a Neo-Darwinist account of evolution. Pinker and Plotkin’s view appears to be that the resulting synthesis, even if it doesn’t quite constitute a general map of the cognitive mind, is pretty much the whole story about large areas of Manhattan, the Bronx and Staten Island. I thought both books admirable and authoritative in many respects; but, though I’m a committed—not to say fanatical—nativist myself, I wasn’t entirely happy with either, and I said so in my review. 2

For one thing, although they accurately set out a network of doctrines about the cognitive mind that many nativists hold, neither book made as explicit as I thought it might have how the various strands fit together. For a second thing, though neither book spends a lot of time on the alternatives, the Pinker/Plotkin view is by no means the only kind of current cognitive science that’s friendly to the idea that lots of knowledge is innate. Indeed, Noam Chomsky, who is surely as close to personifying the nativist revival as anybody can get, is nevertheless quite out of sympathy with much of what Pinker and Plotkin endorse. Readers who are new to the cognitive science game may well find this puzzling, but I hope to make it clear as we go along what the disagreement is about. Third, both books insist on a connection between nativism about cognition and a neo-Darwinist, adaptationist account of how the cognitive mind evolved. That struck me as neither convincingly argued in the texts nor particularly plausible in its own right. Finally, I was, and remain, perplexed by an attitude of ebullient optimism that’s particularly characteristic of Pinker’s book. As just remarked, I would have thought that the last forty or fifty years have demonstrated pretty clearly that there are aspects of higher mental processes into which the current armamentarium of computational models, theories and experimental techniques offers vanishingly little insight. And I would have thought that all of this is common knowledge in the trade. How, in light of it, could anybody manage to be so relentlessly cheerful?

So, it occurred to me to write a book of my own. I had it in mind to pick up some old threads in passing; in particular, I wanted to extend a discussion of the modularity (or otherwise) of cognitive architecture that I’d first embarked upon a million years or so ago in Fodor (1983). But the book I thought I’d write would be mostly about the status of computational nativism in cognitive science. And it would be much shorter, and much more jaundiced, than either Pinker’s or Plotkin’s. The shortness would be mostly because, unlike them, I wasn’t going to write an introductory text, or to review the empirical cognitive science literature, or even to argue in much detail for the account of the field I would propose. I’d be satisfied just to articulate a geography of the issues that’s quite different from the map that Pinker and Plotkin have on offer. The jaundice would be mostly in the conclusion, which was to be this: Computational nativism is clearly the best theory of the cognitive mind that anyone has thought of so far (vastly better than, for example, the associationistic empiricism that is the main alternative); and there may indeed be aspects of cognition about which computational nativism has got the story more or less right. But it’s nonetheless quite plausible that computational nativism is, in large part, not true.

In the fullness of time, I embarked upon that project, but the more I wrote, the unhappier I became. I’d started off intending to take CTM more or less for granted as the background theory and to concentrate on issues about nativism and adaptationism. But in the event, that turned out not to be feasible; perhaps unsurprisingly, what one says about any of these matters depends very much on what one thinks about the others. So be it.

There are many claims about nativism, and about adaptationism, in the book I ended up with. But part of the context for discussing them is an attempt to get clearer on what’s right, and what’s wrong, about the idea that the mind is a computer. 3

The cognitive science that started fifty years or so ago more or less explicitly 4 had as its defining project to examine the theory—largely owing to Turing—that cognitive mental processes are operations defined on syntactically structured mental representations that are much like sentences. 5 The proposal was to use the hypothesis that mental representations are language-like to explain certain pervasive and characteristic properties of cognitive states and processes; for example, that the former are productive and systematic, and that the latter are, by and large, truth preserving. Roughly, the systematicity and productivity of thought were supposed to trace back to the compositionality of mental representations, which in turn depends on the constituent structure of their syntax. The tendency of mental processes to preserve truth was to be explained by the hypothesis that they are computations, where, by stipulation, a computation is a causal process that is syntactically driven. 6

I think that the attempt to explain the productivity and systematicity of mental states by appealing to the compositionality of mental representations has been something like an unmitigated success; 7 in my view, it amply vindicates the postulation of a language of thought. That, however, is a twice-told tale, and I won’t dwell on it in the discussion that follows. By contrast, it seems to me that the attempt to reduce thought to computation has had a decidedly mixed career. It’s a consolation, however, that there is much to be learned both from its successes and from its failures. Over the last forty years or so, we’ve been putting questions about cognitive processes to Nature, and Nature has been replying with interpretable indications of the scope and limits of the computational theory of the cognitive mind. The resultant pattern is broadly intelligible; so, at least, I am going to claim. Before the discussion gets seriously under way, however, I want to sketch a brief overview for purposes of orientation.

Here, in a nutshell, is what I think Nature has been trying to tell us about the scope and limits of the computational model: It’s been pretty clear since Freud that our pretheoretical, ‘folk’ taxonomy of mental states conflates two quite different natural kinds: the intrinsically intentional ones, of which beliefs, desires and the like are paradigms; 8 and the intrinsically conscious ones, of which sensations, feelings and the like are paradigms. 9, 10 Likewise, I claim, a main result of the attempt to fit the facts of human cognition to the Classical Turing account of computation is that we need a comparably fundamental dichotomy among mental processes; viz., between the ones that are local and the ones that aren’t. There is (I continue to claim) a characteristic cluster of properties that typical examples of local mental processes reliably share with one another but not with typical instances of global ones. 11 Three of these features are most pertinent to our purposes: Local mental processes appear to accommodate pretty well to Turing’s theory that thinking is computation; they appear to be largely modular; and much of their architecture, and of what they know about their proprietary domains of application, appears to be innately specified.

By contrast, what we’ve found out about global cognition is mainly that it is different from the local kind in all three of these respects; and that, because it is, we deeply do not understand it. Since the mental processes thus afflicted with globality apparently include some of the ones that are most characteristic of human cognition, I’m on balance not inclined to celebrate how much we have so far learned about how our minds work. The bottom line will be that the current situation in cognitive science is light years from being satisfactory. Perhaps somebody will fix it eventually; but not, I should think, in the foreseeable future, and not with the tools that we currently have in hand. As he so often does, Eeyore catches the mood exactly: "‘It’s snowing still,’ said Eeyore ‘. . . And freezing. . . . However,’ he said, brightening up a little, ‘we haven’t had an earthquake lately.’"

This, then, is the itinerary. In chapter 1, I set out some of the main ideas that are currently in play in nativistic discussions of cognition. In particular, I want to distinguish the synthesis of nativism, computational psychology and (neo)Darwinism that Pinker and Plotkin both endorse from Chomsky’s story about innateness. Chomskian nativism and this New Synthesis 13 are, in some respects, quite compatible. But as we’ll see, they are also in some respects quite different; and even when they endorse the same slogans, it’s often far from clear that they mean the same things by them. For example, Chomskian nativists and computational nativists both view themselves as inheriting the tradition of philosophical rationalism, but they do so for rather different reasons. Chomsky’s account (so I’ll suggest) is primarily responsive to questions about the sources and uses of knowledge, and so continues the tradition of rationalist epistemology. Computational nativism, by contrast, is primarily about the nature of mental processes (like thinking, for example) and so continues the tradition of rationalist psychology.

I expect that much of what I’ll have to say in the first chapter will be familiar to old hands, and I’d skip it if I could. However, standard accounts of New Synthesis cognitive psychology (including, notably, both Pinker’s and Plotkin’s) often hardly mention what seems to me to be overwhelmingly its determining feature; viz., its commitment to Turing’s syntactic account of mental processes. Leaving that out simplifies the exposition, to be sure; but it’s Hamlet without the Prince. I propose to put the Prince back even though, here as in the play, doing so makes no end of trouble for everyone concerned. Much of this book will be about how the idea that cognitive processes are syntactic shapes the New Synthesis story; and why I doubt that the syntactic theory of mental processes could be anything like the whole truth about cognition, and what we’re left with if it’s not.

The second chapter will discuss what I take to be the limitations of the syntactic account of the mental, and chapter 3 will consider some ways in which computational nativists have tried, in my view not successfully, to evade these limitations. In chapter 4, the currently fashionable ‘massive modularity thesis’ will emerge as one such failed way out. The last chapter concerns the connection of all of this to issues about psychological Darwinism.

It will become clear, as the exposition proceeds, that I think some version of Chomskian nativism will probably turn out to be true and that the current version of New Synthesis nativism probably won’t. I suspect that the basic perplexity of the New Synthesis is that the syntactic/computational theory of thought that it depends on is likely to hold for cognitive processes in general only if the architecture of the mind is mostly modular; which, however, there is good reason to suppose that it isn’t. On the other hand, a tenable cognitive psychology does urgently need some theory of mental processes or other, and Chomsky rather clearly doesn’t have one. So if computational nativism is radically untenable, Chomskian nativism is radically incomplete. Ah, well, nobody ever said that understanding the cognitive mind was going to be easy.

Notes

1. This is not to claim that CTM is any of the truth about consciousness, not even when the cognition is conscious. There are diehard fans of CTM who think it is; but I’m not of their ranks.

2. Reprinted in Fodor (1998b).

3. Much of the specifically philosophical discussion of this issue has been about whether minds are ‘Turing equivalent’ (that is, whether there is anything that minds can do that Turing machines can’t). By contrast, the question cognitive scientists care most about, and the one that CTM is committed on, is whether the architecture of (human) cognition is interestingly like the architecture of Turing’s kind of computer. It will be a preoccupation in what follows that the answer to the second could be ‘no’ or ‘only in part’ even if the answer to the first turns out to be ‘yes’.

4. Arguably less rather than more, however; getting clear on the nature of the project took considerable time and effort. Particularly striking in retrospect was the widespread failure to distinguish the computational program in psychology from the functionalist program in metaphysics; the latter being, approximately, the idea that mental properties have functional essences. (For an instance where the two are run together, see Fodor (1968).) It’s only the first with which the present volume is concerned.

5. Turing’s theory was thus a variant of the Representational Theories of Mind that had been familiar for centuries in the British Empiricist tradition. What RTMs have in common is the idea that mind-world relations (or mind-proposition relations if that’s your preference) are mediated by mental particulars that exhibit both semantical and causal properties. ("Ideas" in Hume’s terminology; "concepts" and "mental representations" in the vocabulary of thoroughly modern cognitive psychologists). From this point of view, Turing’s suggestion that the mental particulars in question are syntactically organized was crucial; it opened the possibility of treating their causal interactions as computational rather than associative. More on that in later chapters.

6. For a lucid introduction to this research program, and to many of the philosophical issues it raises, see Rey (1997).

7. For some of the relations between issues about the productivity, systematicity and compositionality of thought, and the thesis that mental representations have syntactic structure, see Fodor and Pylyshyn (1988); Fodor and McLaughlin (1998) and Fodor (1998b).

8. One of the unintended, but gratifying, implications of recognizing the compositionality of mental representations is that it places severe constraints on psychological theories of concepts. Among the ones ruled out are several that might otherwise have seemed tempting. I call that progress. (For discussion, see Fodor 1998a; Fodor and Lepore 1999.)

9. For the largely nonphilosophical purposes of the present volume, I’ll be mostly uncommitted as to the ‘criteria of intentionality’ (i.e., as to what it is, exactly, that makes a state intentional). Suffice it that they all have satisfaction conditions of one sort or another, and are thus susceptible of semantic evaluation.

10. It is rather an embarrassment for cognitive science that any intentional mental states are conscious. ‘Why aren’t they all unconscious if so many of them are?’ is a question that our cognitive science seems to raise but not to answer. Since, however, I haven’t the slightest idea what the right answer is, I propose to ignore it.

11. Mental processes are also classifiable as conscious or unconscious, of course. But I’m assuming that this is derivative; an (un)conscious mental process is just a causal sequence of (un)conscious mental states. (If I’m wrong about that, so be it. Nothing in what follows will depend on it.)

12. There is even a suggestion of a scintilla of evidence that they may be mediated by distinct, dissociable psychological mechanisms. See Happe 19xx.

13. In what follows, I’ll often write ‘The New Synthesis’ with caps as a short way of referring to the galaxy of views that computational nativists like Pinker and Plotkin share. By stipulation, the New Synthesis consists of the three doctrines just enumerated in the text together with the claim that the cognitive mind is ‘massively modular’.

 
