Suit brought by Paul M. Churchland and Gerald M. Edelman as legal attorneys for Materialism, Georges Rey defending Functionalism.
Suit first filed February 1981; amended 1992
Testimony for the defense completed 1997
Suit brought before Judge John Schmidt, People’s Court of the Mind on the 24th day of September in the Year 1998.
This Judgment Issued by the Court on the 16th day of October in the
Year 1998.
Summary of Judgment, Table of Contents
Part 1. Information about the Parties: Functionalism and Materialism
Part 2. Introduction to the case
Part 3. Judge’s summary of the argument from Paul Churchland
Part 4. Judge’s comments on the argument from Paul Churchland
Part 5. Judge’s summary of the argument from Gerald Edelman
Part 6. Judge’s comments on the argument from Gerald Edelman
Part 7. Judge’s summary of the argument from Georges Rey
Part 8. Judge’s comments on the argument from Georges Rey
Part 9. Judge’s Decree
Part 1. Information about the Parties: Functionalism and Materialism.
Definitions submitted to the court by Georges Rey.
Functionalism: an approach to the analysis of mental phenomena that looks to relations of mental phenomena both among themselves, and to inputs and outputs.
Mental: a term still being defined by empirical means within psychology, but particular emphasis is given by Functionalism to mental states that involve intentionality, such as mental states that are concerned with one’s beliefs or desires.
CRTT (Computational/Representational Theory of Thought): the view that propositional attitudes consist in an agent’s bearing corresponding computational relations to sentences encoded in the brain.
Propositional Attitude: any mental state that involves a relation to either a sentence or a proposition that expresses the meaning of a sentence.
Materialism: the view that all that exists is matter in motion. With respect to theories, the view that the phenomena that are to be the subjects of theories should be understandable in terms of physics.
Eliminativism: eliminativists about the mind deny the existence (in the materialistic sense) of mental phenomena.
From these definitions, it is clear that Functionalism is concerned
with constructing theories of mind such as CRTT that attempt to utilize
mental states described by psychology as the fundamental elements of the
theory. This approach to mind contrasts to that usually taken by Materialists.
Thus, Churchland submits that: “the basic unit of cognition is not a sententially
expressible state such as ‘X believes that P’, rather it is the vector
of activation levels across a large population of neurons.” It comes as
no surprise that Functionalists and Materialists construct different kinds
of theories. Rey is concerned with “formal” computations performed on Language
of Thought (LOT) sentences while Churchland is concerned with parallel
distributed processing or “neurocomputation” at the level of neuronal networks.
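To make this contrast concrete for the record, I offer the following toy sketch of my own devising (neither Party submitted it, and every name in it is hypothetical). It contrasts a CRTT-style store of sentence-like representations, manipulated by a purely syntactic rule, with a Churchland-style vector of activation levels:

    # Toy illustration only: a CRTT-flavored "belief box" of sentence-like
    # representations versus a Churchland-flavored vector of activation levels.
    # Every name here is hypothetical; neither Party endorses this sketch.

    # CRTT flavor: a propositional attitude modeled as a computational relation
    # (here, set membership plus one crude syntactic inference rule) to encoded
    # sentences in a toy Language of Thought.
    belief_box = {("rains",), ("rains", "implies", "wet_streets")}

    def infer(beliefs):
        """Apply one purely syntactic rule: from P and (P implies Q), add Q."""
        new = set(beliefs)
        for b in beliefs:
            if len(b) == 3 and b[1] == "implies" and (b[0],) in beliefs:
                new.add((b[2],))
        return new

    print(infer(belief_box))   # the agent comes to "believe" wet_streets

    # Churchland flavor: the basic unit of cognition is a vector of activation
    # levels across a (here, absurdly small) population of simulated neurons.
    activation_vector = [0.02, 0.91, 0.13, 0.77, 0.40]
    print(activation_vector)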
Definitions submitted to the court by Paul Churchland.
Propositional Attitudes: beliefs, desires, etc.
Eliminative Materialism: the view that common sense conceptions
of psychological phenomena provide a theory of mind, but it is a theory
that is defective and will be replaced by a new theory of mind based on
neuroscience (Materialism).
Definition submitted to the court by Gerald
Edelman.
Functionalism: the view that the workings of the brain result
from implementation of algorithms. The details of the mechanical instantiation(s)
of such algorithms are not important for a functional analysis of mind.
What is ultimately important for understanding behavior are the algorithms.
I will use Edelman’s theory of mind (the extended TNGS that is described in The Remembered Present) as an example of a materialistic theory that can be contrasted with Rey’s CRTT. Edelman claims (page 213, The Remembered Present), “neural states definitely underlie intentional states ... but the direct ascription of mentalistic terms to the operations of the brain itself rather than to a person can only lead to confusion”. Edelman’s theory of language and consciousness is built upon his theory of how neural networks can account for categorization, memory and learning. For Edelman, the emphasis is on the mechanism by which semantics can enter into a biological brain, how a brain contains true understanding of its environment. Edelman’s view is that for the most part, syntax emerges from semantics. According to Edelman, the brain is a network of neuronal networks and not a Turing machine. “There is no case to be made for ‘machine functionalism’ in such a system, and there is no need for such a case.” However, Edelman admits that neuroscience is providing us with a theory of mind that is in many ways unexpected and “must always appear strange” to our commonsense view of mind (page 269, The Remembered Present). Edelman suggests that the materialistic view of mind (however powerful and complete as a physical theory) is built upon a foundation of human culture and is largely unable to alter the basic elements of human culture. Thus Edelman seems to repudiate Churchland’s suggestion that after neuroscience constructs a complete materialistic theory of mind, folk psychology will be replaced by a new way of understanding everyday human behavior. Edelman, in his condemnation of Functionalism, is most concerned that Functionalism not be allowed to distract us from the pursuit of a reductionistic and materialistic neuroscience, and only secondarily is Edelman in agreement with Churchland’s main claim, that Functionalistic theories of mind are doomed to be poor theories. Edelman claims that Functionalism involves errors in reasoning that threaten to undermine cognitive science (see page 229 of Bright Air, Brilliant Fire).
So much for the differences between the Parties; what of the similarities?
Both Functionalists and Materialists are trying to improve human understanding
of the human mind. It is for us to judge if this common purpose can overcome
the differences in methodological approach used by the two Parties.
Part 2. Introduction to the case
Since I was brought into this case at such a late date and since this
dispute has lingered for so long there were some special technical difficulties
that had to be addressed. First, given the great lapse in time between
the initial complaint (1981) and the completion of testimony (1997) it
is possible that the dispute between Materialism and Functionalism might
have resolved itself during the past 17 years. Two points of evidence indicate
that such a spontaneous resolution has not occurred:
1) The 1992 presentation of Materialism’s case by Gerald Edelman
indicates that the rift between Materialism and Functionalism was bitter
and unrelenting for at least 10 consecutive years and extended from the
extreme of Materialism’s dealings with connectionism to the opposite extreme
within the field of biology.
2) The 1997 testimony of Georges Rey indicates that the conflict
between Materialism and Functionalism continued unabated into 1997. Georges
Rey’s argument was largely an attempt to contest the request for divorce
by pleading that Functionalism is deserving of a role in the Science of
Mind. However, this presentation by Rey was also clearly tangled up with
a self-conscious attempt to counter claims continuing to be brought against
Functionalism by Materialism.
Thus, in my judgment, the conflict between Materialism and Functionalism continues to this day. However, there has been some clarification of issues and progress towards accommodation between the Parties. Unfortunately, much of this accommodation consists of the two Parties ignoring each other. A constructive resolution of this dispute would seem to call for finding ways to increase communication between the two Parties.
Second, is the evidence which I have been provided adequate for deciding this case? Given practical restrictions, I have been somewhat limited in my ability to collect additional information relevant to this case. However, I have read Paul Churchland’s 1995 book, all of Gerald Edelman’s Neural Darwinism books, and of course, Georges Rey’s 1997 book. I feel that this breadth of background information supports the contention that the information submitted to me for this case is adequate for me to adjudicate this dispute.
This divorce case, while having its unique features, also displays many of the expected features of scientific disputes. Foremost of concern is the fact of poor communication between the Parties of the dispute. Thus, while both Materialism and Functionalism have assured me that their first priority is commitment to the task of improving human understanding of the human mind, each of these two Parties often fails to accept that the other is pulling its weight in this group effort. I have witnessed one Party of this dispute putting words into the mouth of the other that are vehemently rejected. The mutual misunderstanding between the two Parties has not escaped my attention. As is often the case once a dispute reaches the Court, it is amplified for the attempted benefit of each Party. I have tried to sort through the evidence in an attempt to identify the true content of the dispute and not be distracted by the emotional excesses of either Party. I have also sought counsel from others who know the two Parties of this dispute. Dan Dennett, Francis Crick, and Douglas Hofstadter have been particularly helpful in sorting through the claims and counter claims of Materialism and Functionalism.
In brief, Materialism, in requesting a divorce, claims that in order for us to understand the human mind, it is most important to follow the standard reductionistic and materialistic strategy that has solved many problems, such as the question of the difference between living and non-living things. Materialists thus seek to reveal how the physical processes (involving “lower order” brain components like cells and molecules) inside brains produce a mind. Functionalism is an attempt to understand the mind in terms of the “higher order” properties that are studied by psychology and which are often accessible to us through subjective experience (properties such as beliefs and desires). Materialism has claimed that Functionalism is not an acceptable component of the Science of Mind. Churchland argued that Functionalism can only produce a poor theory of mind and so Functionalism should be removed from the Science of Mind, leaving neuroscience free to construct a good theory of mind. Edelman’s presentation added another argument in favor of divorce. Edelman argued in support of one of Churchland’s claims, that the nature of Functionalism makes it immune to reductionism, preventing it from being joined into the unified whole of all real sciences. Edelman claimed that this should be seen as sufficient basis for removing Functionalism from the Science of Mind.
Arguing against divorce, Rey presented his view that the question of
Functionalism’s ability to produce a good theory of mind is a matter for
empirical evaluation. Neither Materialism nor Functionalism can provide
a wholly adequate theory of mind. Materialism and Functionalism provide
us with two alternative research programs that attempt to provide the basis
for a good theory of mind. Thus, Churchland’s claim that Functionalism
is worse than Materialism is only his opinion, not a fact. Rey also claims
that Functionalism does not take it upon itself to explain how things like
beliefs can be reduced to brain processes. Furthermore, neither that
aspect of Functionalism nor the difficulty of reducing high-level phenomena
like beliefs to low-level physical processes in the brain can be taken
as justification for Edelman’s belief that a Functionalistic theory of
mind cannot be reduced to a Materialistic theory of mind. Thus, Edelman’s
claim is also only his opinion and cannot justify his request that Functionalism
should be removed from the Science of Mind.
Part 3. Judge’s summary of the case presented by Paul Churchland.
Churchland’s argument centered on the idea that functionalistic explanations of mind such as Folk Psychology and CRTT are genuine empirical theories and so should be judged by the “standard” criteria that are used to evaluate scientific theories. However, it is not clear that there are non-contentious means to evaluate the relative merits of theories, particularly incipient theories such as CRTT and TNGS (Edelman’s Theory of Neuronal Group Selection).
Churchland made a comment to the effect that (page 71 of his argument) it must be held a major mystery why it has taken until the last half of the twentieth century for philosophers to realize that folk psychology is a theory. In my opinion, this wisecrack was intended to portray Philosophers as incompetents in dealing with theories and was part of his attempt to describe Functionalists as incompetents in constructing a good theory of mind. However, my reading of the evidence is that this “mystery” can be accounted for by the fact that any time a philosopher attempts to formulate a theory that is not solidly rooted in Materialism, the Materialists launch a withering attack against that theory. It is the Materialists themselves who have driven philosophers out of the business of constructing theories. Thus, I rule that Churchland’s wisecrack be stricken from the record, and hereby acknowledge that this wisecrack earned Churchland one demerit for counter-productive activities.
Churchland also argued that Folk Psychology (FP) can be viewed as being the same type of theory as is found in the physical sciences. Since Rey and other Functionalists describe themselves as relying on the science of psychology, it struck me as odd that Churchland would claim that Functionalists produce theories like those found in the physical sciences. This line of thought reminded me that Ernst Mayr has attempted to describe fundamental differences between the types of theories that are found in biology and the types of theories that are found in the physical sciences. SEE COMMENT #1, BELOW. I thus view Churchland’s analysis of FP as a theory to be suspect and likely to be based on the questionable relevance of his analogy between “numerical attitudes” and “propositional attitudes”.
Churchland argued that both he and Functionalists believe that the Theory of FP is irreducible to neurobiological processes. While there is evidence that some Functionalists have made this claim, Rey did not make this claim in his CRTT. In particular, Rey argues that Functionalism “can be tied to the non-mental world in a variety of ways (page 8)”, “functionalism permits a level of psychological explanation that does not deny the physical basis of mind (page 178)”, and “it is unlikely that psychology is autonomous of physiology (page 181)”.
A specific example that well encapsulates Churchland’s mind set is the following words that he puts into the mouth of Functionalism: “whatever tidying up FP may require, it cannot be displaced by any materialistic theory of the physical processes in brains, since it is the abstract functional features of internal mental states that makes a person.” Churchland’s thinking is slightly off the mark in trying to pin this target on CRTT. Rey might claim that CRTT seeks to describe the changes in a person’s mental states or he might say that CRTT is trying to define an algorithm that would allow us to make sense of the sequence of mental states that a person has. Rey does not deny a role for physical brain processes in making the mental states. Thus, I suspect that Rey would re-word Churchland’s statement in the following way: “We are engaged in a process of tidying up FP using the tools of psychology and logic, and we are hypothesizing that the core heuristics of FP will never need to be displaced from their roles as useful algorithms for understanding human behavior even when neuroscience gets around to describing all of the physical details of brain function that make mental states possible in humans because the algorithmic descriptions of mental states are useful rules of thumb by which we can make sense of human behavior.”
Churchland further argued that he believes that functionalistic theories of mind cannot be reduced to physical processes because functionalistic theories are doomed to be wrong and must simply be displaced by a correct theory of mind that will be produced by neuroscience. Thus, the issue becomes centered on Churchland’s claim that functionalistic theories of mind are doomed to be failures.
What evidence does Churchland provide to support this claim? Churchland pointed out the fact that psychological theories do not explain brain processes like learning. I judge this to be an irrelevant fact since Functionalism does not attempt to explain such mechanistic details of brains. Churchland complained that it is an error to ascribe causal powers to theoretical entities (like beliefs). However, this contention is contingent on the theory also denying that its entities (like beliefs) can be reduced to physical elements, a claim that Churchland attributes to Functionalism but that Rey does not defend. Therefore, I rule this particular charge against Functionalism to be irrelevant.
Churchland’s argument relied heavily on analogies between FP and historical
examples of failed theories such as alchemy. Churchland’s argument is basically
that theory in alchemy was just mumbo-jumbo and hand waving and there was
never any objective evidence to support it. Churchland claimed that the
same is true of FP. I find this argument to be misleading, misdirecting,
and ineffective. There are two sources of evidence that Functionalists
rely upon:
1) the results of psychology, and
2) the results from subjective experience of our own thoughts.
Churchland argued that evidence from these sources has so far produced
only an incomplete and error-filled theory of mind. However, Rey admits
this and claims that Functionalism is a new science still trying to build
a better theory of mind. I rule that Functionalism does include procedures
for removing errors from and improving its theories such as CRTT. Thus,
Churchland’s claim that Functionalism is incapable of producing a good
theory of mind is based upon dubious claims. In addition, it is not hard
to think of reasons why Functionalistic theories of mind might be “good”
even if they are “wrong”. SEE COMMENT #2, BELOW
Part 4. Judge’s comments on the presentation given by Paul Churchland.
COMMENT #1. It is easy to portray FP and CRTT as something other than the type of theory Churchland claimed them to be. For example, FP could be a theory like Lamarck’s theory of evolution and CRTT could be analogous to Darwin’s theory of evolution. Lamarck’s theory of evolution included both “good” features and “bad” features. A good feature was the idea that species could change. This was a radical concept in Lamarck’s time and it was important that the idea be discussed so that people like Wallace and Darwin could have it available to include in their theories. CRTT is an attempt to eliminate the “bad” features of FP and incorporate new “good” ideas from sources like scientific psychology. The argument could be made that Darwin’s theory of evolution was “bad” because it had nothing to say about the cellular and molecular processes that are required for the reproduction of organisms and transmission of genes from generation to generation. However, Darwin’s theory was eventually “reduced” to molecular processes in this century and it is hailed as one of the great intellectual achievements in the history of theory construction. While CRTT may still be an incipient and imperfect theory, Churchland does not provide us with convincing evidence to show that Functionalism as a research program will fail to construct a useful theory of mind, in the same sense that Darwin’s theory was a useful and satisfying theory for explaining the diversity of life on Earth. Darwin’s theory was useful even if it gave biologists no idea how to “reduce” such a theory to the mechanistic details of cells and molecules.
A lot of blood has been spilt within Science over the issue of just what a proper scientific theory is.
Type 1 Theories. Theory in physics is very general in the sense that it tries to take as its domain that group of phenomena that includes all the basic physical processes in the universe. Here the goal is to go to the most fundamental (irreducible to something more basic) level of understanding physical reality. Type 1 theories involve descriptions of the properties of the elemental components of physical reality (atoms, quarks, superstrings, p-branes, etc) and the universal laws which govern the interactions of those elemental components. When it comes to dealing with the causes of physical processes in the universe, Type 1 theories often go no further than to say, "Well, that's just the way it is, that's how the universe works." Many physicists have gone as far as to assume that Type 1 theories are the only scientific theories, a narrow-minded view which the biologist Ernst Mayr has effectively debunked, even if most physical scientists still do not know it. We can call such Type 1 theories physical theories, in that the goal is to explain aspects of physical reality in terms of the interactions of fundamental physical elements of physical reality.
Type 2 Theories. These are often games or artificial formal systems created by people in order to explore them either for their own sake or in an attempt to find an algorithmic theory of a physical process. On rare occasions, while a mathematician or scientist is trying to play this kind of game a startling discovery is made ... this ain't no game! This is how the world really is! Non-Euclidean geometry is the classical example. Ancient mathematicians were led to Euclidean geometry because its "algorithm" for producing theorems seemed to produce true statements that could usefully be applied to the real world for surveying land and building physical objects. Non-Euclidean geometries seemed like nothing more than pie-in-the-sky "what if?" games for mathematicians, but then suddenly some aspects of physical reality were encountered (like deep gravity wells) that made more sense in terms of Non-Euclidean geometry. Most often, however, algorithmic theories have no relationship to any physical reality independent of their man-made algorithms and their instantiations. For algorithmic theories, there need be no attempt to reduce the elements of the algorithm to a physical theory, although human intuition and hope may suggest that it is theoretically possible to reduce an algorithmic theory to physical theory. A classic example of an algorithmic theory is Darwin’s theory of biological evolution.
In the case of Darwin’s theory, the group of phenomena to be explained included all of the diverse types of living organisms on Earth. Darwin’s theory was heuristic and algorithmic; there was no known way of explaining evolution in terms of physical processes inside organisms. Today we know that biological diversity can be theoretically explained in terms of the fact that all organisms contain molecules which encode genetic instructions in their structure, these molecules have variations in their structure which result in variations in the macroscopic properties of the organisms, and these variations in the macroscopic properties of organisms can influence which of the genetic molecules get passed on to future generations. The cause of biological diversity is thus a rather complex theoretical issue, involving mixtures of "random" structural changes in molecules (mutations), additional “random” outcomes called “genetic drift”, and many complex macroscopic events in the biosphere that we often vaguely refer to as "competition for survival". This theory of evolution is eminently satisfying to biologists even though it is not predictive in the sense of being able to predict exactly which organisms will survive or arise through evolution. This theory is algorithmic in the sense that it includes an algorithm for generating biological diversity which has been instantiated on Earth. Although many scientists (often physical scientists, not biologists) who are uncomfortable with this type of theory have argued that there is no theoretical basis for the idea of "progress" in the diversification of living organisms, one of the implications of the algorithm of natural selection is that simple types of organisms will evolve before more complex types of organisms. Thus, natural selection “explains” in a general way why small brained fish evolved before large brained primates. The theory of evolution by natural selection is now a reductive theory in that it explains how the diversity of life can be accounted for by physical processes that can theoretically be reduced to their Type 1 foundations. Biologists have no need to introduce any new elemental physical components or laws. The satisfying conclusion is reached: biology can be explained in terms of chemistry which in turn can be explained in terms of physics. This platitude uses the word "explain" in a special way. Biologists never understand biology by trying to express biological phenomena in terms of a quantum mechanical wave equation, but we can theoretically assume that such an explanation is possible. Human understanding of complex biological processes comes from the identification of heuristic algorithms which both make sense to people and reflect the many constraints imposed by the physical nature of biological systems. It is a fact that Darwin was able to recognize and clearly describe the heuristic idea of evolution by natural selection long before anyone had even a glimmer of understanding of the chemical mechanisms that make it work. The question must be asked, can a Type 2 theory of mind be found? We should certainly be prepared to face the fact that theoretical analysis of a volume of gas will be child's play compared to analysis of a complex adaptive system like a brain. It might only be difficult to provide the mechanistic details of a reductionist theory of mind, not impossible or unnecessary to do so.
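To illustrate the sense in which Darwin’s theory is algorithmic rather than mechanistic, here is a minimal sketch of my own (the genomes, mutation rate, and fitness function are all made up for illustration); the point is that variation, inheritance, and selection can be stated without any mention of molecules or cells:

    import random

    # Toy sketch of evolution by natural selection as a substrate-neutral
    # algorithm: variation, inheritance, and differential reproduction.
    # The "genomes", mutation rate, and fitness function are invented solely
    # for illustration; nothing here models real molecular biology.

    def fitness(genome):
        # A hypothetical environment that favors genomes whose values sum near 10.
        return -abs(sum(genome) - 10)

    def mutate(genome, rate=0.2):
        # Inheritance with occasional random variation.
        return [g + random.uniform(-1, 1) if random.random() < rate else g
                for g in genome]

    population = [[random.uniform(0, 5) for _ in range(4)] for _ in range(20)]

    for generation in range(50):
        # Selection: the fitter half survives and reproduces with variation.
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]
        population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

    print(round(sum(max(population, key=fitness)), 2))   # drifts toward 10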
It should also be clear that the human ability to think algorithmically about a complex process does not mean that we have to ascribe causal powers to the players in our heuristic models. In biology, it is very common that people discover useful heuristic theories long before the mechanistic details are filled in which provide the causal foundations for the theory. This was certainly the case for evolution, where the filling in continues today.
It is an amusing fact that after centuries of physical scientists condemning biologists for pretending to do science by constructing Type 2 theories and using them in the absence of any understanding of how to reduce them to Type 1 theories, physicists are now being forced to become Type 2 theorists. Physics has become a search for an algorithm for making the universe as we know it. Such an algorithm might go something like this: Start with nothing. A random quantum fluctuation creates a God Particle which decays into a "cloud" of p-branes which assemble into a "big bang" which "cools" into a rapidly expanding universe containing mostly hydrogen (nuclear forces). The hydrogen collects (gravity) into stars which manufacture additional chemical elements. Some stars explode, spreading a diversity of chemical elements around inside galaxies. Later generations of stars and planets incorporate these chemical elements. One such planet was Earth, where the diversity of chemical elements available, the convenient distance from a smallish star, and other "random" factors allowed for the organization of chemical elements into complex molecules and living structures. Four billion years of biological evolution then resulted in multicellular organisms, nervous systems, and eventually human brains that could deal with complex social processes like human language use. Ten thousand years of human civilization resulted in science and the construction of an algorithmic understanding of the universe and our place in it. In physics, it is very common that physicists discover the mechanistic details which are the heart of a Type 1 theory and then, secondarily, a heuristic popularization of the theory is produced by which the average person can make sense of the universe. Physicists feel comfortable with this sequence of events. But current attempts to construct a “theory of everything” (TOE) are just a game by which theoreticians are making up algorithms for how to create a universe and then calling upon experimentalists to find the evidence that will define the values for the variables in the algorithms that pertain to our universe, a particular instantiation of the algorithm. It is very amusing to see physicists reduced to complaining that the required data (such as the key conditions which prevailed at the start of our universe) may not be available. They may never get the satisfaction of being able to reduce their algorithmic theories to physical theories. Ain’t that a shame? Not really.
What is the point of Science? Science is a human effort to allow human
understanding. Science often, but not always, leads to ways by which humans
can increase their control over their environment. Can algorithmic theories
play a role in Science? Only a self-absorbed physical scientist would
think otherwise. Algorithmic theories are particularly important
for human understanding because human minds cannot deal well with all of
the minute details of complex physical theories. Human brains have evolved
as devices for the production of algorithmic descriptions of complex physical
processes, such as the brain processes that construct minds. We would
be foolish and inhuman if we were to attempt to ignore our human talent
for producing heuristic models of minds. Just such attempts have been made
by constructing taboos within science that force scientists to avoid incorporating
subjective experience into theories of mind. Even if this prohibition were
useful for every other scientific effort in theory construction, it should
be obvious that the Science of Mind is the exception to this rule. Many
people just throw up their hands in exasperation and say, “How can we attempt
to explain subjective experience by making use of subjective experience?
That would be circular and unproductive.” Doug Hofstadter has described
how to deal with such tangled hierarchies. We need to be sophisticated
about how we deal with the various descriptive levels of complex systems
like brains, but we can do it. The first step is being willing to deal
with more than one level of description at the same time. Too many scientists
only want to deal with their particular sub-discipline and ignore all others.
Comment #2. How likely is it that FP, CRTT and all functionalistic theories of mind will be just plain wrong? And there is a closely related question: even if these theories are poor theories, can they still serve useful purposes in our attempt to better understand the mind? The first question largely depends on how good the “data” are that Functionalists have, the types of phenomena that they take as the subject for their theories and the tools that they have to probe those phenomena. Does anyone know just how much psychology can tell us about minds, particularly if we view psychology as being continuous with all of biology, including the most ardent reductionistic Materialists? Does anyone know how powerful our subjective knowledge can be for guiding us towards a heuristic or algorithmic theory of mind? No. This is what Functionalists are trying to determine. Given this fundamental ignorance on our part, can we really afford to eliminate Functionalism from the Science of Mind? What if the Functionalists are correct? What if they can do what Darwin did, work in the absence of knowledge of mechanistic physical details and yet still construct a powerful and revolutionary theory?
While Churchland presented the case that emphasizes the failures of FP, there is the corresponding case that emphasizes the strengths of FP. What if human brains have evolved to be mechanisms that are very good at constructing functional theories of mind? What if this is the basic human survival strategy? What if the human brain is a device that comes to each human pre-programmed with some tricks that make it easy for us to construct hypotheses about and mental models of human minds? We would be fools to throw away the chance to incorporate the knowledge obtainable from this source, no matter how faulty it might be in comparison to objectively obtained knowledge about the physical processes that occur inside brains.
And even if Functionalism fails to produce as accurate a theory of mind
as does materialistic neuroscience, does this mean we do not need a heuristic
theory of mind? Part of the Functionalistic research program is an effort
to “purify” the linguistic terms that we commonly use to describe human
behavior. Is this not a worthy enterprise in itself? If Functionalists
like Rey are in the position where they could play the role of trying to
translate the results from scientific studies of the mind into the language
that is familiar to all people, is this not a contribution to human understanding
of the mind? What would be our fate if we exiled Functionalists from the
Science of Mind and left things to the Materialists? Churchland promises
us a “theory” of the mind that will be incomprehensible to us. Think about
quantum mechanics and relativity. How successful have scientists been in
translating these theories into terms that the average person can understand?
What if neuroscience and Churchland deliver us such a situation with respect
to the mind? Churchland suggests that people will simply “learn a new way
of communicating” that will allow them to then share the “explanation”
of mind provided by neuroscience. This seems highly unlikely. In my judgment,
it is much more likely that there will simply be an elite class of scientists
who are able to understand the neurobiological theory of mind and they
will be entirely inept at communicating what they know to the rest of
humanity. Prudence dictates that the Science of Mind utilize Functionalism
as a way to maintain communication between humanity at large and neuroscientists.
Part 5. Judge’s summary of the Argument given by Gerald Edelman.
In the postscript of Bright Air, Brilliant Fire Gerald Edelman deals with “Digital Computers: The False Analog”. Edelman condemns Functionalism because “logical operations carried out on a computer does not constitute thinking”. However, we must ask, “Does Functionalism really seek to describe what ‘constitutes’ thinking, or does Functionalism seek to construct a heuristic theory that describes how certain high level processes that we are subjectively aware of in our thoughts (like beliefs and desires) are combined systematically in order to account for familiar patterns of human behavior?” It is a “red flag” that Edelman starts his attack on Functionalism by mentioning his own personal goal, which is to understand how physical brain components (neural networks, cells, molecules) “constitute thinking”. Functionalism explicitly takes as its goal the attempt to understand higher order aspects of minds that can be understood independent of a detailed understanding of the physical brain processes that are required for biological minds. Thus, Edelman seems from the start to be directing his attack at an aspect of Functionalism which is a self-acknowledged deficiency of theories like CRTT.
Starting from the definition of Functionalism that Edelman provides (see Part 1, above), he lists a series of corollaries to that definition:
1) according to Functionalism either the brain is a digital computer or the digital computer is an adequate model of a machine that can do what the brain does.
2) Functionalism includes the hypothesis that cognitive functions are carried out by the manipulation of symbols according to rules.
3) Functionalism proposes that the relevant level of description for both mind and brain is the level of symbolic representations and of algorithms, not the level of detailed physical mechanisms.
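Corollary 3, the claim that the relevant level of description is the algorithm rather than its mechanical instantiation, can be illustrated with a small sketch of my own devising (the functions here are hypothetical toys, not anything Edelman or Rey offered):

    # Toy illustration of "multiple realizability": three different instantiations
    # of one and the same function (addition of small non-negative integers).
    # A functional analysis cares only that the input-output relations agree,
    # not how each realization works internally.

    def add_arithmetic(a, b):
        return a + b                       # realized by built-in arithmetic

    def add_by_counting(a, b):
        total = a
        for _ in range(b):                 # realized by repeated increment
            total += 1
        return total

    ADD_TABLE = {(a, b): a + b for a in range(10) for b in range(10)}

    def add_by_lookup(a, b):
        return ADD_TABLE[(a, b)]           # realized by a stored lookup table

    # All three agree on every input in their shared domain.
    assert all(add_arithmetic(a, b) == add_by_counting(a, b) == add_by_lookup(a, b)
               for a in range(10) for b in range(10))
    print("functionally equivalent despite different instantiations")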
It is striking that Edelman’s description of Functionalism so heavily depends on the role of the brain. It is clear that Edelman, as a neurobiologist, always has brain on his mind. Edelman’s books are full of descriptions of brain processes and how to relate them to mind. In contrast, the works of Functionalists such as Rey’s Contemporary Philosophy of Mind explicitly try to avoid dealing with the details of brains. How does Edelman’s definition of Functionalism match to Rey’s definition of Functionalism? The match is poor. BUT, what happens if we remove the word “brain” from Edelman’s definition of Functionalism or substitute “mind” for “brain” in that definition and Edelman’s corollaries? If we remove “brain” and focus on “mind”, suddenly what Edelman is saying begins to sound like what Rey claims Functionalism is really about.
This issue of the two words, “mind” and “brain”, is not a trivial one. For Materialists, the goal is often to reduce mind to brain. For Functionalists, the goal is often to ignore the details of the brain and yet still say something useful about the mind. The main issue before this court is whether these two different goals can be reconciled within a Science of Mind, or whether there is something horribly unscientific about the Functionalistic approach.
Beyond the fact that Functionalism intends to deal with mind function rather than brain mechanisms, the other key difference is what is meant by “to deal with”. Edelman, in his TNGS, deals with the reductionistic program of identifying brain mechanisms that account for mental functions. Rey, in his CRTT, deals with the construction of a useful heuristic description of the function of human minds. Edelman argued that this key difference between the approaches to mind taken by Functionalists and Materialists implies that any theory of mind built upon syntactical computations does not deal with the key issue of mind, which is semantics. According to Edelman, the only way to correctly deal with semantics is to explore the mechanistic details of brain processes which account for how biological brains come to contain true understandings of their environment. Edelman argued that the closest Functionalists come to dealing with semantics is to make a mistaken assumption that there is some “natural” and logical relationship between the symbols of their theories (terms in LOT) and physical objects in the environment. Edelman argued that since this is not true, theories constructed by Functionalists have nothing useful to say about semantics (the most important issue to be dealt with) and so are hopeless or vacuous. According to Edelman, the only way to correctly deal with semantics is to understand the physical brain processes that allow meaning to be encoded in human brains. Thus, Edelman’s argument against Functionalism is really an argument against Functionalism’s ability to deal with Edelman’s research program. This raises the important issue of why Edelman was unable to address the ability of Functionalism to deal with the research goal that is the self-proclaimed goal of Functionalism. SEE COMMENT #1, below.
Taking the rest of Edelman’s argument at face value (even though it may be irrelevant to what Functionalism really is trying to accomplish), we find that Edelman is concerned with keeping people from being led down a mistaken path by Functionalism. “The more vocal practitioners of cognitive psychology have unknowingly subjected themselves to a swindle.” Edelman worries that “the uninitiated reader” is at risk of running into the mistaken yet “dangerously seductive” views of Functionalism. What is this “swindle”? It is how Edelman thinks that Functionalism accounts for semantics. According to Edelman, Functionalists assume that semantics comes from “fixed relationships between abstract symbols and objects in the environment.” He argues that this is part of “objectivism” which he defines as the view that there are naturally occurring categories in the world. (Edelman’s “objectivism” has nothing to do with Randian Objectivism.) This common human belief is rooted in the human tendency to believe in the existence of Platonic Ideal Forms which can be viewed as the defining “algorithms” for real world objects in that real world objects are just “imperfect” physical instantiations of the “perfect” Ideal Form. This leads to a common assumption that there are logical relations between things that exist objectively in the world and that these logical relations are what provide us with a “natural” basis for semantics in formal systems. Edelman claims that in order for a theory like CRTT to “work”, mentalese requires an accurate and unambiguous link between terms in LOT and objects in the external world. It is clear that “work” is used in Edelman’s sense of providing a mechanistic account for mental states in terms of brain processes. Edelman does not consider the possibility that CRTT could “work” as a heuristic theory of mind independent of physical mechanical details. Edelman thinks that Functionalists assume objectivism as their “mechanism” to account for semantics. Rey discusses several means by which semantics can be “imported” into CRTT, including use of neural network models to provide a mechanism for capturing semantics within a mind. Thus, while some Functionalists may rely on “objectivism”, not all do, so Edelman’s argument against Functionalism fails.
In fact, Edelman seems to fit into a psycho-physiologico-teleo-functionalistic category. Edelman is not an eliminativist. Edelman feels that “ordinary folk psychology” elements like beliefs and desires can be reduced to physiological brain functions. Edelman’s TNGS is an attempt to show how neural processes can account for mental states like belief. Thus, if we just accept that Edelman was wrong about all Functionalists relying on a faulty theory of semantics (objectivism), then would it be possible that Functionalism could be a valid approach to the mind, even though it does not attempt to incorporate Edelman’s preferred theory of semantics? SEE COMMENT # 2, below.
Edelman has complaints against Chomsky that are relevant to Functionalism. Functionalists like Rey make a major case for the importance of Chomsky’s idea of a Language Acquisition Device (LAD). Rey seems to be excited about Chomsky’s ideas both as a source of arguments against eliminativists and as a source of justification for the syntactical emphasis of CRTT. Edelman dislikes Chomsky because Edelman thinks that Chomsky’s ideas have been used as motivation for Functionalists to assume that “language is independent of the rest of cognition” or that the LAD can construct human language syntactic competencies “automatically” and independent of semantics. While I agree with Edelman that studies of human language acquisition (as well as more recent studies of bonobos) show that a cognitive understanding of one’s environment forms first then language use is built on the foundation of non-linguistic cognition, I do not think that Chomsky or Rey have ever argued that “language is independent of the rest of cognition”. Edelman argues that the collection of algorithms (rules for manipulating symbols) in a transformational grammar or a theory like CRTT has no practical relationship to the physical brain processes that occur in brains. Clearly, we need to know what “practical relationship” means to Edelman in order to understand this claim. For Edelman, “practical relationship” concerns his goal of finding the physical brain processes that underlie high order mental states and processes (like language use). Edelman favors the idea of trying to develop “cognitive grammars” that incorporate the details of how brains learn real meaning about the environment and place that process (like the horse) before the issue of any formal analysis of linguistic symbols (which for Edelman, is “the cart” to be placed behind “the horse” of learning mechanisms). Since Edelman’s goal is not the goal of Functionalism, we must abandon Edelman’s self-centered analysis of the inadequacies of Functionalism and attempt to determine if the methods used by Functionalists like Rey can accomplish something useful within the Science of Mind. SEE COMMENT #2, below.
Edelman’s condemnation of Functionalism seems to involve the common
fear that Functionalism is a form of dualism in that the “independence”
of theories like CRTT from physical brain mechanisms seems to involve some
claim for non-reducibility of the elements of the theory to physical processes.
Edelman argued that Functionalists produce theories that are so detached
from physical reality (abstract) that they will never be able to be united
with neuroscientific views of mind. Since Rey’s version of Functionalism
insists that theories like CRTT can be anchored to the physical realities
of biological brains, we cannot take Edelman’s fears of creeping dualism
as the basis for an automatic condemnation of Functionalism. We must judge
Functionalism on its own terms and from the broad perspective of a multi-disciplinary
Science of Mind.
Part 6. Judge’s comments on the Argument given by Gerald Edelman.
Comment #1. If the real goal of Functionalism is to construct heuristic theories of the mind rather than reductionistic theories of the type favored by Edelman, why did Edelman fail to recognize this fact? As a Materialist, Edelman is very committed to reductionistic theories. It is interesting that on the same page where Edelman condemns Functionalism for providing nothing useful in our search to make a meaningful theory of mind, he proclaims Darwin to be a great theoretician because Darwin formulated the heuristic law of evolution by natural selection. What if Functionalists can construct a heuristic algorithm that explains human behavior in terms of computations acting on symbolic representations? Would this not be an important contribution to any meaningful theory of mind?
It is interesting that Edelman accepts that psychology is an integral component of the human attempt to construct a good theory of Mind. However, it seems clear that Edelman is only interested in psychology as a tool for identifying higher order mental states that must be accounted for by the reductionistic theories of neurobiologists. Does Edelman totally fail to comprehend the possibility of high level heuristic theories of the mind? It is interesting that Edelman’s theory of mind has been almost universally ignored by even his fellow biologists who either quibble with the details of the theory (such as the presumed dominant role of selectionistic processes in neural networks rather than instructional processes) or more importantly, who fail to understand Edelman’s theory of how neural networks can account for memory and learning and higher brain (mind) functions. If we take Edelman’s TNGS as being typical of the types of theories that we have to expect will continue to be produced by neuroscience in the next century, then we have to admit that it is questionable to what extent they will contribute to our over-all goal of improving human understanding of the mind. If even other biologists cannot understand such theories, what hope is there for philosophers or the average man on the street?
It seems safe to say that the average working biologist has very poor communication skills. Biologists are seldom called upon to communicate their ideas to non-scientists, and when they try to communicate to non-experts, scientists usually botch the job. The problem is often a matter of focus. Working scientists are sucked up tight to the details of particular experimental issues and they have a hard time stepping back and dealing with the big picture. Scientists forget how much they rely on specialized jargon. It is almost impossible for scientists to translate their work into everyday language that can be understood by everyone. Additionally, theories of mind have their own special problem: complexity. Some scientific theories may involve a tricky counter-intuitive fact or two, but they are fairly simple and do not take much time to explain. In contrast, human minds are not simple, and any complete mechanistic theory of how a brain makes a mind is going to be tricky and long. The Science of Mind is going to have to confront this problem of complexity. If our goal is to make useful and meaningful theories of the mind, we are going to have to have heuristic short-cut versions that cut to the chase and do not get bogged down in all the details. This is always how humans deal with a flood of information. Cut the crap and get to the heart of the matter. Does Edelman understand this? Apparently not. If he did, he would have admitted the possibility that Functionalism can help provide the Science of Mind with the kind of Get-The-Big-Picture-And-Ignore-The-Fussy-Details theory of mind that is needed.
Comment #2. Is it possible to make a theory of mind by putting the “cart” of syntactic computation in front of the “horse” of the mechanism of semantics? The core of Edelman’s attack on Functionalism seems to be that we must understand semantics first, by way of neuroscience. Is this view just Edelman’s error?
It is clear that Rey would be unhappy if CRTT was just a meaningless exercise in symbol manipulation according to formal rules. Edelman seems to claim that the only way that a theory of mind can deal with meaning is by means of the physical brain processes that allow biological brains to contain meaningful understandings of the environment. Is this claim true? Is there some alternative source of meaning that can save the day for Functionalism? Answering this question will be a major concern for Parts 7 and 8 (see below), but independent of Rey’s Chapter 9 in Contemporary Philosophy of Mind, is there a way of dealing with this issue in general?
It is a fact of human nature that we are able to construct heuristic
theories of human behavior. Is there some fundamental reason why such theories
should be expressed in a form that relies on syntactical computation while
being able to ignore the details of how semantics can be accounted for?
I am trying to reason along the lines of Doug Hofstadter’s analysis of hierarchical
systems as presented in his book, G.E.B. It is the nature of hierarchical
systems that descriptions of such systems at high levels of organization
can be given in the form of a concise code. The use of such a code may
or may not include an algorithm for decoding the details that are embedded
at the lower levels of the hierarchy. We can use a “medical code” for matching
treatments to symptoms without understanding the mechanisms of the body
or the mechanisms of the actions of the drugs used to treat illnesses.
Codes, algorithms, and descriptions of system behavior lend themselves
naturally to formal descriptions and computational methodologies. Even
if the underlying physical processes at a low level of description of a
system are not well suited to computational algorithms, the abstract coded
description at a higher level may be. There are many familiar examples
of this in science. It is impossible to use digital computers to calculate
the behavior of all of the atoms in the atmosphere. It is possible to use
heuristic and algorithmic approximations of atmospheric dynamics to make
predictions about tomorrow’s weather. Most of us can glance at a weather
map showing today’s conditions and our brains will construct a heuristic
projection of the weather for the next day. It is true that heuristic
algorithms leave out details that are included in mechanistically realistic
algorithms. It may be computationally impossible to calculate the trajectory
of a space craft using a complete model of every object in the solar system
and the full power of general relativity. However, some tricks for estimation
(such as just looking at the most important 2-body interactions and use
of non-relativistic mechanics) will allow you to calculate a reasonable
rocket launch and course corrections. Ideally, the intuitions of people
who have plotted many trajectories will also allow rule of thumb solutions
that will confirm that the more detailed calculations performed by the
stupid computer were correctly coded and interpreted. Thus, dealing with
many real world problems can involve three levels of analysis: 1) the most
detailed mechanistic account possible, 2) an algorithmic approximation,
and 3) a human guess or intuition. Each of us has our own intuitions
about how human minds work; Functionalism stands ready to attempt to find
an algorithm of human behavior that could be implemented in a kind of Turing
machine or digital computer model, and neurobiologists are concerned with
discovering the details of the physical brain mechanisms that produce minds.
Are not all of these useful for understanding Mind?
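The trajectory example can be made concrete. The following sketch (my own illustration, using the standard two-body, non-relativistic Hohmann-transfer approximation with rounded constants) is a level-2 calculation in the above sense: it ignores every body except the one being orbited and ignores relativity, yet it yields a serviceable estimate:

    import math

    # A "level 2" calculation in the sense described above: a Hohmann transfer
    # between two circular Earth orbits, treated as a pure two-body,
    # non-relativistic problem. Constants are rounded; a "level 1" account
    # would model every perturbing body, relativity, and the vehicle itself.

    MU_EARTH = 3.986e14        # Earth's gravitational parameter, m^3/s^2
    r1 = 6_678_000.0           # radius of a roughly 300 km circular orbit, m
    r2 = 42_164_000.0          # radius of a geostationary orbit, m

    # Velocity change to enter the elliptical transfer orbit at r1.
    dv1 = math.sqrt(MU_EARTH / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)
    # Velocity change to circularize at r2.
    dv2 = math.sqrt(MU_EARTH / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))

    print(f"estimated delta-v: {dv1 + dv2:.0f} m/s")   # roughly 3.9 km/s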
Part 7. Judge’s summary of the argument presented by Georges Rey.
Much of Rey’s book is a dance around the role of subjective experience in the Science of Mind. There are some things that our social group defines as being taboo. Within the social group of scientists, it has been taboo to construct theories of physical reality that include subjective experiences. This has been a powerful methodological approach for dealing with many aspects of both physical and biological science. In general, if you do not need to introduce subjective experience into a scientific theory, you are safest to leave it out. However, what happens when we are dealing with the brain, the physical system that allows us to have subjective experiences?
There are several possible attitudes we can hold concerning the role
of subjective experience in the Science of Mind:
1) Deny any need to be concerned with subjective experience (Eliminativism)
2) try to explain Mind as fully as possible independent of dealing
with the issue of the subjective
3) take subjective experience as a key aspect of Mind that we need
to explain mechanistically, in terms of physical brain processes (Edelman,
TNGS)
4) accept subjective experience as a source of data with which we can
construct a theory of Mind (Rey, Functionalism)
Since nobody really knows how to construct a complete theory of Mind, it is probably good news that there are people willing to hold all four of these attitudes. Most working scientists hold attitude #2, and so have little interest in those who hold attitude #4. Churchland voiced the complaint against attitude #4 that is common to those who hold attitude #1. Edelman voiced the complaint against attitude #4 that is common to those who hold attitude #3.
Rey’s argument included a major emphasis on arguments against eliminativism. There are two main sources of “data” that can be taken as justifying “mental states” as key elements of a theory of mind: subjective experience and psychological descriptions of behavior. The role of subjective experience within science is still mainly taboo, so Rey presents himself as a psychofunctionalist, mainly defending Functionalism based on arguments that fit comfortably within a standard psychological framework and have roots in analytical behaviorism. At the start of his Chapter 5, Rey points out the fact that introspection is limited: it does not by itself reveal everything we want to know about mental states. What is the correct conclusion from this fact? Should we abandon introspection? Even if non-subjective methods of psychology can produce descriptions of mental phenomena, does that mean we should ignore subjective experience as a complementary source of information?
I must wonder if Rey weakened his case by leaving out so much that could have been said in favor of the idea that subjective experience itself is a rich source of information about the Mind. Rey’s version of CRTT is an outline of a theory of thought based on a high level algorithmic description of the dynamics of mental states. There is no question that human brains have evolved so as to be able to make humans conscious of the high level algorithmic description of the dynamics of mental states that we all use in order to predict the behavior of others and to plan our own future actions. If our subjective experience is constructed from just the kind of information we need to construct an algorithmic theory of mind, then we would be fools to ignore subjective experience as a source of data. It does not matter if all physical theories of non-living things and most theories of non-brain-related biological things can better be constructed by removing everything subjective. Theory of mind is different. It is true that subjective experience is often a difficult source of data because of irreproducibility and other practical problems. But this only means that we must take care to identify these problems and deal with the limitations that are inherent in the use of subjective experience as a source of information about minds. A major revolution is now occurring in neuroscience with respect to subjective experience. Non-invasive brain monitoring methods are now being devised which make it possible to match objective measurement of physical brain processes to specific subjective states. We find nothing unusual about being able to feel contractions in our muscles while at the same time a doctor can watch or otherwise monitor those contractions. It is now becoming possible for us to introspectively monitor our subjective mental states while at the same time external observers can watch the activity of the neurons that are involved in producing those subjective states. Just as a heart attack patient would be a fool to think that his subjective experience of the pain of a heart attack was the best description of the heart attack, we would be foolish to expect that our subjective experiences are the be-all and end-all of mental states. Our conscious states float on a great sea of unconscious neuronal activity. But it would also be insane for doctors to ignore patient reports of pain. Doctors start with subjective reports of pain and then look with greater precision for objective views of the damaged tissue. Subjective experience of mental states will come to be used in the same way for construction of theories of Mind. If done correctly, inclusion of subjective experience in the Science of Mind will be powerful and important.
Rey’s main argument for Functionalism is based on psychological-level descriptions of mental activity, while the Materialists are mainly concerned with lower-level descriptions. Given this fundamental clash of emphasis, it is unclear why Rey did not include a more careful account of levels of description of Mind. SEE COMMENT #1, below. It is also too bad that Rey is so focused on eliminativism that he pays no attention to arguments from people like Edelman who are made uneasy by Functionalism even though they are not eliminativists. It is one thing to justify (as Rey does) including in his theory of thought the elemental descriptions of behavior that are common to psychology; it is another to justify declining to take a reductionistic approach towards those elements. By simply saying, “CRTT does not take it upon itself to do reductionism,” you get an easy out, but you do not take the time to say what needs to be said to ease the concerns of people like Edelman, who suspect creeping dualism whenever reductionism is avoided. It is odd that it takes until halfway through his book for Rey to make some fairly wishy-washy statements about the relationship of high-level functionalist theories to low-level reductionistic theories of mind. This is a contentious issue that should be dealt with right up front, and in clear language. SEE COMMENT #2, below.
What about the details of Rey’s presentation of CRTT? Any formal system risks being opaque to the uninitiated reader. Hofstadter’s G.E.B. shows how to give a clear introduction to formal systems; in contrast to the job of presentation done by Hofstadter, Rey’s page 210 is a travesty of exposition. The COG “program” is not only a “toy”, it has the full frustration level of a battery-powered toy with no batteries. There is no shortage of such formal systems that have been proposed as solutions to problems in AI research. To drop one such formal syntactic system on the reader with the whispered aside, “thought will plausibly involve reference to both syntax and semantics,” is bizarre. It would be like Alan Greenspan saying, “We think there is a proper way to calculate the desired value of the prime interest rate,” presenting no such calculation, and instead rolling a pair of dice each month to set the rate. It is often said that you can lead a horse to water but you may not be able to get it to drink; it is also true that on a hot day, a horse led to water will usually want to drink. Rey leads us to CRTT and then does not allow us to drink. Darwin did not present his algorithmic theory of evolution in an abstract way and then walk away; his book was full of dozens of examples of how the theory could deal with specific cases of evolution. Why does Rey avoid doing the same for his theory?
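To make concrete the kind of worked example being asked for here, consider a minimal sketch, written in Python, of the bare CRTT idea that a propositional attitude is a computational relation an agent bears to a sentence in a language of thought. This is emphatically not Rey’s COG-L: the “boxes”, the predicate names, and the single practical-reasoning rule below are hypothetical inventions of this court, offered only to show how small a cup of water would have sufficed.

    from dataclasses import dataclass, field

    # LOT sentences are represented here as nested tuples, e.g.
    # ("leads_to", "open_fridge", "have_food") stands for
    # "opening the fridge leads to having food".

    @dataclass
    class Agent:
        beliefs: set = field(default_factory=set)     # the "belief box"
        desires: set = field(default_factory=set)     # the "desire box"
        intentions: set = field(default_factory=set)  # filled in by computation

        def believes(self, sentence):
            # "X believes that P" = the sentence P stands in the belief relation to X.
            return sentence in self.beliefs

        def deliberate(self):
            # One toy computational rule defined over syntax alone:
            #   desire(G) and believe(("leads_to", A, G))  =>  intend(A)
            for goal in self.desires:
                for belief in self.beliefs:
                    if belief[0] == "leads_to" and belief[2] == goal:
                        self.intentions.add(belief[1])

    agent = Agent()
    agent.desires.add("have_food")
    agent.beliefs.add(("leads_to", "open_fridge", "have_food"))
    agent.deliberate()
    print(agent.intentions)   # prints {'open_fridge'}

Even a toy like this lets the reader check, step by step, that the computations really do run on syntax alone; that is precisely the kind of drink Rey never offers.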
The fact that CRTT is presented in two parts, C and then R, allows Rey to try to wriggle out of filling our cups of expectation. Rey first says,
“here is how to do the computations if we had a theory of semantics” then
he can say “here are several possible ways of providing the semantics”
and then he can walk away. A real theory must put the elements of the theory
together into a functional whole. If it turns out to be the case that
the meanings of mental representations are to a first approximation obtainable
by “internalist” means and to a second degree of approximation obtainable
by “tele-semantics”, then Rey is admitting that FP will grow into a high-level description of mind that has to incorporate the results of neuroscience.
It seems clear that Rey dreams that CRTT can be a useful theory without this fill-in-the-blanks process having to involve neuroscience. Does he ever say how this might work? Only in a very vague way. Rey seems to have
some faith that a complete science of psychology will provide us with a
complete set of (O), (A), and (D) COG-L sentences and computation rules.
Rey never discusses the fact that these are just the sorts of rules that top-down AI research has sought for 50 years without much success.
Rey seems unaware of the fact that this failure of AI research is what
has driven so many AI researchers to the kind of bottom-up methods used
by connectionists. Rey devotes page after page to arguments against RCON
with just a quiet aside to mention that CRTT and LCON should work together.
Why not emphasize the positive conclusion that CRTT needs something like LCON in order to deal with the problem of semantics? (A toy sketch of this division of labor follows this paragraph.) Rey
is so far removed from even thinking about this issue that he does not
even know about the work of biologists like Edelman who share Rey’s view
that RCON is inadequate for explaining human thought. What if Rey knew
that Edelman has proposed just the type of neural network mechanism that
is needed to avoid the inadequacies of RCON and yet allow importation of
semantics into CRTT? It is ironic that while those who attack Functionalism have blind spots which Rey tries to deal with, Rey has blind spots of his own. Miscommunication is a two-way street.
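By way of a second, purely hypothetical illustration (it is neither Edelman’s proposed mechanism nor any specific proposal of Rey’s), the division of labor just suggested might be sketched as follows: a low-level, connectionist-style categorizer grounds atomic symbols in “sensory” feature vectors, and the symbols it emits are then handed to a purely syntactic CRTT-style rule. The prototype vectors, symbol names, and rule are all invented for the example.

    import math

    # "Neural" part: a stand-in for a trained network's categorization,
    # assigning an atomic symbol to a feature vector by distance to prototypes.
    PROTOTYPES = {
        "apple":  (1.0, 0.2, 0.1),
        "banana": (0.2, 1.0, 0.1),
    }

    def categorize(features):
        def distance(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return min(PROTOTYPES, key=lambda symbol: distance(PROTOTYPES[symbol], features))

    # Symbolic part: a purely syntactic CRTT-style rule over sentences built
    # from the symbols the "network" supplies.
    def form_beliefs(percept_features, belief_box):
        symbol = categorize(percept_features)       # semantics grounded from below
        belief_box.add(("present", symbol))         # syntax manipulated from above
        if ("edible", symbol) in belief_box:
            belief_box.add(("can_eat", symbol))
        return belief_box

    beliefs = {("edible", "apple")}
    print(form_beliefs((0.9, 0.3, 0.2), beliefs))
    # the belief box now also contains ('present', 'apple') and ('can_eat', 'apple')

Nothing in this sketch settles the debate over semantics, but it shows the shape a CRTT-plus-LCON partnership could take.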
Part 8. Judge’s comments on the argument presented by Georges Rey.
Comment #1. The argument between reductionistic Materialists and Functionalists is often centered on the difference in the levels of description being sought (Functionalism is concerned with high-level descriptions; the network folks are concerned with lower-level mechanistic issues). So why does Rey do such a poor job of dealing with this difference? The classic description of hierarchical levels of description of the mind was given by Hofstadter in his 1979 book, G.E.B. The analysis of mind in terms of a hierarchy of organizational levels has even been adopted by dualists (see Stairway to the Mind by Alwyn Scott). Rey seems unaware of the importance of thinking about mind in terms of different organizational levels, or at least uninterested in it. Why? Is it really possible to try to be Darwin in this age of molecular biology? Maybe. But is it wise? Does such an attitude invite misunderstanding within the Science of Mind? I cannot escape the conviction that Rey thinks CRTT can be something more than "just" a descriptive, heuristic, algorithmic description of mind. I think he should be satisfied with the prospect that Functionalism could provide the mental equivalent of Darwinism. He cannot expect scientists to become card-carrying Functionalists; they have other fish to fry.
Comment #2. If, in the end, Rey wants to make
Functionalism a part of Materialism then he has to become better at building
bridges to his fellow materialists. It makes sense for Rey to emphasize
his interests in linking CRTT to the science of Psychology, but this is
only a first step. E. O. Wilson, in his book Consilience, has made explicit the challenge to unite all of science into a continuous structure
from physics to chemistry to biology to sociology. Many scientists have
learned to ignore philosophers because of a sense that philosophers want
to stand outside of the circle of science. Rey states that he is working
in the spirit of science, so he needs to work to overcome the prejudice
of scientists against philosophers. He has made a good start towards this
end, but he can do better.
I am sympathetic to both Edelman’s approach to mind and the objectives
of CRTT. While I think Edelman’s approach is the more powerful, I also feel that Rey’s approach is valid, and that useful work is being accomplished both by Functionalists like Rey and by neurobiologists like Edelman. Both Edelman and Rey are in error about some of what they have included
(and excluded!) in Neural Darwinism and CRTT, but at least, unlike some
others who are trying to understand mind, they do not seem to be confused.
If biologists and philosophers are going to be able to cooperate in dealing
with the challenge of developing a human understanding of thought and our
own minds, we need to be able to find common ground and work together.
I hope both functionalism and reductionistic materialism can improve and
grow together and eventually provide us with the understanding of mind
that we seek. Thus, the request to remove Functionalism from the Science
of Mind is not granted. In the future, Functionalism will be able to exist
within the Science of Mind as part of the effort to provide a materialistic
understanding of mind.
References
1. Georges Rey (1997) Contemporary Philosophy of Mind ISBN 0-631-19069-4.
2. Daniel C. Dennett (1991) Consciousness Explained ISBN 0-316-18066-1.
3. Paul M. Churchland (1995) The Engine of Reason, the Seat of the Soul ISBN 0-262-03224-4.
4. Gerald M. Edelman (1987) Neural Darwinism ISBN 0-465-04934-6
5. Gerald M. Edelman (1992) Bright Air, Brilliant Fire ISBN 0-464-00764-3
6. Sue Savage-Rumbaugh and Roger Lewin (1994) Kanzi: The Ape at the Brink of the Human Mind.
7. Edward O. Wilson (1998) Consilience: The Unity of Knowledge.
8. Gerald M. Edelman (1989) The Remembered Present ISBN 0-465-06910-X
9. Francis H. C. Crick (1994) The Astonishing Hypothesis ISBN 0-684-19431-7
10. Ernst Mayr (1982) The Growth of Biological Thought ISBN 0-674-36446-5.
11. Paul M. Churchland (1981) "Eliminative Materialism and the Propositional Attitudes" Journal of Philosophy LXXVIII: 67-90.