DysFunctionalism: How Putnam Touted and then Turned Against his Theory of Mind, Clearing the Way for Transcendentalism
by Michael Hartman
In the mid-1960s, Hilary Putnam wrote a series of articles that laid much of the groundwork for the theory of mind that came to be known as functionalism.1 Putnam's arguments seemed tremendously convincing and, largely on their strength, functionalism rose to dominance in spite of its counter-intuitive implications. Decades later, however, Putnam realised that functionalism was plagued by problems and rejected it, although the theory continues to enjoy widespread support among philosophers of mind. Since supporting functionalism entails accepting its tremendous problems, it makes little sense to be a functionalist. While functionalism may remain intriguing, Thomas Nagel's transcendentalism offers a far better approach to the mind. Though transcendentalism is somewhat radical and seems to lead to irreducible mental privacy, its implications are far less objectionable than functionalism's. And while some contemporary philosophers have tried to modify functionalism to minimise its problems, their efforts show that functionalism, in any form, has fundamental difficulties.
According to Blackwell's A Companion to Philosophy of Mind, functionalism
is "the doctrine that what makes a mental state the type of state it is...is
the functional relations it bears to the subject's perceptual stimuli, behavioural
responses, and other mental states" (317). In 1964, Hilary Putnam was not yet
a complete functionalist but was demonstrating some of the philosophical currents
that converged to create the theory, including iconoclastic disdain for earlier
theories of mind. He expressed his pre-functionalist spirit in a talk to the
American Psychological Association, which was later published as "Robots: Machines
or Artificially Created Life?" and in which Putnam suggested that a robot could
be "a 'model' for any psychological theory that is true of human beings" (390).
His argument began with the stipulation that we might create robots that could
exhibit any human behaviour, including "religion" (387), "inductive reasoning
and theory construction" (387), confusion about "their own internal constitution"
(388), and even "philosophy" (388). Throughout the article, Putnam rebutted
attempts to show that humans were conscious in some special sense in which such
robots obviously were not. For example, Putnam tried to convince his readers
that these robots could "'have the 'sensation' of red'" since we might engage
with the robot in as complex a discussion of a stop sign's redness as we might
with a human. Furthermore, Putnam claimed that for a group of these complex
robots "the Mind-Body problem...must arise" (388). It soon became clear
that Putnam was comfortable comparing robots to humans because he believed that
human psychology was defined by functional relationships, relationships that
might also be demonstrated by robots (391). In other words, Putnam believed
and tried to demonstrate that, having constructed robots that were psychologically
(functionally) isomorphic to humans, there would be no good reason not to call
these robots conscious. He recognised that this was counter-intuitive, but attempted
to stop readers from scoffing by showing that a similar "counter-intuitive feeling"
might arise if his robots were to build "ROBOTS - i.e., second-order robots"
(405). Putnam plausibly held that there would be a tendency on the part of the
original robots to regard their creations as "mere robots" lacking "'consciousness'"
(405). Therefore, said Putnam, since we see that the robots' distinction would
be foolish and baseless, we humans should not be like the robots and groundlessly
assume that we somehow represent the extent of consciousness. Putnam concluded
that the question of attributing consciousness to robots "calls for a decision
and not for a discovery" (407). He hoped, however, that we would decide in favour
of robot consciousness since the alternative would entail "'discrimination'
based on the 'softness' or 'hardness' of the body parts" of an "'organism',"
something that seemed to Putnam to be "as silly as discriminatory treatment
of humans on the basis of skin color" (407).2
Over the course of several decades, however, Putnam gradually realised that his arguments for robot consciousness were flawed. First, he retreated from his earlier position that "the question whether any automaton was conscious was not really a question of fact but called for a 'decision' on our part" (Guttenplan 507), since he now saw that this logic led to the erroneous conclusion that "light is not identical with radiation (of such-and-such wavelengths)" merely because the definition "does not follow analytically" (508). Initially, though, Putnam was not troubled by this because he was arguing towards the functionalist idea that "a robot with the same program as a human would ipso facto be conscious" (Guttenplan 507). In "The Mental Life of Some Machines" he clearly stated the functionalist principle for the first time, writing that "to know for certain that a human being has a particular belief, or preference, or whatever, involves knowing...about the functional organization of the human being" (Putnam 424). In this article, Putnam demonstrated his belief that functional relationships, not physiology, determined psychology by suggesting that human minds were equivalent to potential Turing Machines. To garner support for his new theory, Putnam pointed out its advantages over other theories of mind. Behaviourism, for example, was untenable because it could not account for psychological pathology (as in his example of hiding pain) (420-421), and materialism attempted to address psychology in the wrong language (416-417) and rested on unsafe assumptions about the laws of science (416). Functionalism, on the other hand, seemed to face no major problems.
After having fully developed functionalism, however, Putnam began to see that the theory was fundamentally flawed. Earlier, Putnam had voiced his concern that "the difficulty with the notion of psychological isomorphism is that it presupposes the notion of something's being a functional or psychological description" (509). That is, functionalism might appear to confirm itself while resting on an unsound foundation. He now focused on these assumptions and thereby came to doubt that his comparisons between humans and Turing Machines made sense, since "Turing Machine states don't have the right sorts of properties" (510) to describe human psychology. This posed a problem because it would be meaningless to say that pain was a functional relationship if there were no model for such a relationship. In place of Turing Machines, Putnam began to refer to an ideal psychological theory, which would serve as a universal model of mental states and their relationships. Then, however, in his autobiographical entry in A Companion to the Philosophy of Mind, Putnam expressed his crucial realisation that psychology's purpose was not to create anything like the "ideal psychological theory" that functionalism required, which would have to be "rich enough to [among other things] describe the beliefs of a believer of any possible religion" (512). Without hope for a good functional model of the mind, Putnam's theory was ruined. Functionalism, he now saw, amounted to little more than very "vague" (512) speculation.
Putnam's self-critique reveals a few of functionalism's significant problems, but there are others. One is that functionalism cannot explain intentional attitudes much better than behaviourism can: I may secretly want the Conservatives to win yet exhibit the functional responses of a Labour supporter, including voting for and publicly promoting Labour because of social pressure to do so. Another problem, perhaps even more damning, is that functionalism seems pathetically unable to explain qualia: we seem to experience, in a stop sign, a sensation of red, as opposed to green, and this experience has nothing to do with the sign's functional tendency to make us step on the brakes. The conventional functionalist must claim that any mental difference will create a functional difference, which conflicts strongly with our intuition. In response to such problems, there arose a new position, which Stephen L. White calls "a compromise between functionalist and physicalist intuitions" and to which he applies the label "physicalist-functionalism"3 (Block 695). While this view is initially compelling, White eventually judges that "physicalist-functionalism is untenable in any form" (695). His critique of physicalist-functionalism is quite lengthy and I cannot reproduce it here, but he makes several devastating points, including his indictment that physicalist-functionalism inherits the grave problems of physicalism, among them the inability "to draw the line between relevant and irrelevant physical differences" (710). In addition, White points out that physicalist-functionalism forces us, with little justification, to deny our intuitive feeling that psychology may be at least somewhat independent of a being's physical constitution.
White offers the example of Martians who "are functionally equivalent to us," e.g., they scream and run away when needlessly pricked with needles, but who "have a neurophysiology which differs radically from ours" (701). The physicalist-functionalist must say that "the Martian term 'pain' does not refer to [our term] pain" (701) but to something else entirely.4 White concludes the article by endorsing basic functionalism, not for positive reasons of its own but only because the physicalist-functionalist alternative is too problematic. I cannot support White in this move, though, since functionalism itself has been shown to be too problematic. In addition to Putnam's self-critique and my criticisms in this paper (all put forward by many others), philosophers such as Goldman, to whom White refers, have destroyed much of functionalism's original foundation.5 White admits that functionalism "fails to capture all of our intuitions relating to qualia" (712) and seems to be on the right path when he writes that the problems with functionalism "point beyond physicalist-functionalism to transcendentalism" (712), but he dismisses transcendentalism as too radical.
Indeed, transcendentalism seems quite radical from White's description that, for the transcendentalist, "pain is a first person fact, irreducible to any third person facts" (697). However, a closer examination of the theory as defended by Nagel in his article "What Is It Like to Be a Bat?" reveals it to be a wonderfully coherent theory that preserves our most important beliefs regarding the philosophy of mind. Consciousness, for Nagel, is largely irreducible because, for any organism, it is what "it is like to be that organism" (519). In other words, my consciousness is what it is like to be me, the "subjective character of [my] experience" (519). Nagel sees functionalists, physicalists, and behaviourists, among others, as promoting unsound "reduction" of consciousness (519). For example, he insists not only that functionalism cannot adequately explain consciousness but that it is actually "logically compatible with its absence" (519). He also thinks that basic functionalism cannot address important questions such as whether what looks red to me looks green to you. And while physicalist-functionalism tries to solve these problems, it is, as we have seen, deeply problematic itself.
To elucidate his position, Nagel offers the example of a bat. Bats, like us, have experiences, but their experiences "present a range of activity and a sensory apparatus so different from ours" (520) that we cannot fully understand them. Nagel believes that because we have different types of experiences, I cannot really know what it is like to be a bat; my attempts to do so only point me towards "what it would be like for me to behave as a bat behaves" (520). This gap of understanding rests not on factual ignorance but on first-person subjectivity, which is intractable because "if one removed the viewpoint of the bat" (523) there would remain nothing to what it is like to be a bat. Furthermore, according to Nagel, the problem of privacy6 exists not only in "exotic cases," such as between a bat and me, but "between one person and another" (521). This is where the real radicalness of Nagel's theory lies. He does not want to suggest that we cannot understand anyone else at all, and he is perfectly comfortable with the fact that one will often "take up a point of view other than one's own" (522). But he believes that this can only occur through "extrapolation from our own case" (521) combined with empathy for our subjects, and that it is never completely accurate. To show this, Nagel points out that, in imagining another's experience, the "more different from oneself the other experiencer is, the less success one can expect with this enterprise" (522). To understand another's experiences accurately would involve perfectly transferring that person's subjective phenomenological experiences into our own subjective consciousness, which is impossible. Functionalism cannot address the importance of subjective experience and, therefore, fails.
Nagel's arguments against functionalism seem very strong. Against such criticism, Arnold Zuboff tries, in his paper "What is a Mind?", to rebuild functionalism from the ground up. Zuboff presents what he calls a "replacement argument for functionalism," claiming to find unanswerable proof of the theory in a thought experiment in which "a chunk of your brain was to be replaced by a wire and transistor gadget that, as we shall stipulate, will keep the same causal relationships with the rest of the brain that the replaced chunk had" (183). But in his argument, Zuboff makes unsafe reductive assumptions about the mind, an error about which Putnam and Nagel had warned. This problem becomes evident when Zuboff presents the extreme case of "a replacement by tiny gadgets of all the neurons in the brain" (187). He believes that if each gadget preserved the functional role of its neuron, then we could not identify any loss to the mind and functionalism would be vindicated. However, Zuboff unacceptably ignores consciousness. His argument commits him to the view that a robot, or computer program, running a functional model of our brain would be the same as us. But when we consider the robot or computer "screaming in pain" versus us screaming in pain, I think it becomes clear that there is a difference. Zuboff's replacement gadgets might keep our bodies wincing when stung by a bee, but they would sacrifice the experience of pain and, ultimately, consciousness itself.
The key to functionalism's long-running success, I believe, is that its arguments were relatively sound; as Putnam pointed out, they simply rested on assumptions that were finally recognised to be false. Nagel's explanation of consciousness allows us to deny functionalism but, admittedly, not without paying a price. I can confidently say that robots are not conscious in the same way that I am, but I also seem forced to say that neither is anyone else. In fact, Nagel's theory forces us to sacrifice many apparently solid identifications between individuals' experiences. In their place, Nagel proposes that we try to develop a new "objective phenomenology not dependent on empathy or imagination" (525), but this seems to me to be a fool's dream, especially when he expresses his hope that we might explain vision to those blind from birth. In this sense, transcendentalism may not seem very appealing.7 But I think that accuracy and honesty should count for a lot in judging a theory and, if we truly consider it, I do not think that transcendentalism tramples our intuitions in the way that other theories of mind such as functionalism do. The words of Putnam suffice as an elegy for functionalism: "I am struck by the way in which key elements of functionalism...operate within a seventeenth-century picture of the mind. I believe that a very different way of looking at the problems is possible" (513). Transcendentalism provides this new way, which destroys functionalism along with its problems and stands solidly on its own.