Alwyn Scott (1995) presents a useful summary of mind/brain philosophizing, listing a good sample of key thoughts on the subject from William James, George Santayana, B. F. Skinner, Donald Hebb, Erwin Schrödinger, Alan Turing, Eugene Wigner, Roger Sperry, Karl Popper, John Eccles, Roger Penrose, Francis Crick, Christof Koch, Daniel Dennett, John Searle, Erich Harth, Henry Stapp, and David Chalmers. But make no mistake: Scott has carefully selected this collection of folks, who are mostly mind/brain dualists and mostly physical scientists. These choices were made to support and justify Scott's own philosophical position, which is clearly rooted in his own mathematical and physical-science biases.
Chapter 4 (The Nerve Afire) and chapter 5 (Is There a Computer in Your Head?) are where Scott's own scientific experience in electrophysiology speaks loudest.
Scott's journey up the "great chain of causality" from physics to chemistry to biology to anthropology (with the [nearly] obligatory artificial intelligence and computer brain modeling side step) is well formulated, and its clarity is almost an improvement on Douglas Hofstadter's handling of the hierarchy in Gödel, Escher, Bach (1979).
I say "almost" because Scott and I seem to disagree over what I think is the most important thing to be said about the relationship between levels in a hierarchy. It may be difficult to predict the behavior of a complex system at a higher level of a hierarchy from the simpler behavior of that system's components at a lower level (a specific example: predicting the chemical properties of a complex molecule from a quantum mechanical description of the molecule). But this does not mean we should lose heart and seek mystical explanations for the complex behavior. It just means that we need to take our time, make sure that we identify the key links in the chain of causality by which the complex system's components determine the system's behavior, and then make do with the best model we can devise within the limits of the computational complexity of the problem.
What should we do if, as is common in science, our best reductionistic model does not allow us to predict very much about the behavior of a complex system? I think we must then attempt to ascertain to what extent our model's poor performance is due to our failure to completely understand the system under study (back to the laboratory!) and to what extent our troubles are due to unmanageable computational complexity or other theoretically unsolvable difficulties (back to the chalkboard!). So much for the prediction problem.
What about the issue of determinism? Are there situations where, even without being able to completely predict the behavior of a system, we can still have confidence (as reductionistic materialists) that we are dealing with a deterministic system? I would say yes; it seems that here is where Scott would start to talk about emergence. A major problem here is semantic: there are two different ways in which the word "determinism" is used within science. Many physicists define "deterministic" to mean predictable. This is their discipline-specific definition. The broader definition of "deterministic" is less restrictive and only implies that there are rules by which the past leads to the future, even if humans cannot use those rules to predict the future.
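To make that broader sense of "deterministic" concrete (this is my illustration, not Scott's): the logistic map is a textbook example of a system governed by a fixed, exact rule that is nonetheless unpredictable in practice. A minimal sketch:

```python
# Deterministic but unpredictable: the logistic map x -> r*x*(1-x).
# Every step follows a fixed rule with no randomness (determinism in
# the broad sense), yet two nearly identical starting points soon
# diverge completely, so long-range prediction fails in practice.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 for the given number of steps."""
    x = x0
    traj = [x]
    for _ in range(steps):
        x = r * x * (1.0 - x)
        traj.append(x)
    return traj

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)  # perturb the tenth decimal place

# The rule is identical and exact, but the trajectories separate.
print(max(abs(u - v) for u, v in zip(a, b)))
```

The past leads to the future by a rule we can write down in one line, yet a perturbation in the tenth decimal place of the starting point eventually produces a completely different trajectory; determinism survives even where prediction fails.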
This sets up a key issue of metaphysics and ontology: many mathematicians and physicists feel that if they cannot prove a theorem that seems to be true, or calculate from quantum mechanics the observable properties of a molecule, then we are dealing with an emergent property either of the universe at large or of the human mind in particular. Physicists and mathematicians want to reserve "determinism" for predictable systems and use the word "emergence" to label things that they cannot predict. Why are physicists unwilling to use the labels "predictable" and "unpredictable" rather than "deterministic" and "emergent"? The only reason I have found is that they, by introspection, feel that they have free will, and so they want to be able to say that humans are not deterministic systems.
In his book Elbow Room, Dan Dennett gives solid philosophical arguments as to why everyone, including physicists, should be able to live with the more direct statement "people are unpredictable". But this is the great divide that goes back to Plato and Aristotle. People whose primary mind-set was established by early and extensive exposure to biology (like Aristotle) are very comfortable dealing with unpredictability, complexity, and confusing populations of non-identical components that interact over long periods of time as adaptive systems. People whose major intellectual development is tied up with mathematics and physical science (like Plato) are most comfortable with simple problems in physical science that can be solved by standard mathematics. As Scott says, "linear phenomena have long been favored by scientists", and of course he means physical scientists. We can be thankful that some physicists, such as Murray Gell-Mann, have begun the task of orienting physics towards the types of complex problems that abound in biology, and doing so in a clear-headed way that avoids the physicist's reflexive turns toward over-simplification (as practiced by Penrose and other quantum consciousness folks) and essentialistic thinking. I think that use of the term "emergent" is dangerous and distracting, a branch of thought growing out of Platonic thinking.
The specific chemistry example mentioned by Scott concerns the difficulty of calculating the properties of water molecules from the Schrödinger equation. Scott says that since nobody can solve the Schrödinger equation for the water molecule, the chemical properties of water, such as its dielectric constant, are emergent properties. Does this label "emergent" tell us anything? As far as I can tell it is just a convenient, one-word label that means, "we cannot predict the chemical properties of water from quantum mechanics." The danger of throwing around such a handy label is that it becomes a Platonic Ideal, a glowing concept of an unbreachable boundary, a slippery slope of thought that turns hard calculations into impenetrable mysteries. I prefer to just say, "As far as we know, the chemical properties of molecules are determined by the physics of their constituent atoms and the complex interactions of those atoms, but we cannot do the complex calculations required to actually calculate the chemical properties, so we hire chemists to experimentally measure the chemical properties, and then we get on with the rest of our lives."
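The computational wall here is easy to make vivid, at least as a back-of-the-envelope sketch (my illustration with toy orbital counts, not numbers from Scott's book). In an exact "full configuration interaction" treatment of a molecule, the number of many-electron basis states grows combinatorially with the size of the one-electron basis, which is why exact solutions are hopeless long before anyone suspects new physics:

```python
# Why exact quantum chemistry is computationally out of reach:
# in a full configuration-interaction (FCI) calculation, the number of
# many-electron basis states (Slater determinants) grows combinatorially
# with basis size. Orbital counts below are illustrative toy values.
from math import comb

def fci_determinants(n_orbitals, n_electrons):
    """Count of determinants for n_electrons placed in n_orbitals
    (a simplified spinless count, just to show the growth rate)."""
    return comb(n_orbitals, n_electrons)

# Water has 10 electrons; watch the count explode as the basis grows.
for n_orb in (10, 20, 40, 80):
    print(n_orb, fci_determinants(n_orb, 10))
```

Even this crude count climbs past a trillion states by 80 orbitals, with the hard calculation still untouched. Nothing mystical happens on the way up; the arithmetic just outruns us.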
Can we admit that the gap between physics and chemistry is a problem for scientists, not a problem in how the universe is constructed? As a useful heuristic, we can imagine that the universe effortlessly "calculates" the chemical properties of every water molecule, even though we cannot do so. Alternatively, we can honestly admit to ourselves that our human concern about the "water molecule" is only a convenient way for human brains to artificially chunk parts of the universe into compressible units; maybe the deterministic rules of the universe say nothing about the "essences of water molecules" (the chemistry of water). Instead of admitting the limits of knowledge that are due to the nature of human brains and their intrinsic limitations, the physical scientist or the mathematician begins to use essentialistic thinking, and the great boundaries have to be erected. By page 169 of Scott's book, the Platonic Ideals swarm like flies. The Hodgkin-Huxley equations and Schrödinger's equation are listed as two excellent yet "unrelated" theories. They are Platonic Ideal Forms, forever isolated by the impossibility of a mathematician deriving one equation from the other. Scott describes the essential differences between the two theories, such as how the solutions of these two types of equations differ in their symmetries with respect to time. In recognizing such essentially isolated levels of description of the reality of the electrical properties of axons (or the analogous differences between "mind-stuff" and "nerve-stuff"), Scott asks if he must be called a dualist, and concludes that if so, then so be it.
And, yes, this is the heart of dualism: ride your essentialistic thinking to its obvious conclusion. If you assume (because you cannot mathematically calculate the relationship between two things) that two things are essentially different, then you have to honestly say that you believe they are essentially different, that you feel there is a "nonreducible" dualism. You have now made a clear ontological statement about the nature of reality: mind and brain are essentially different, you cannot reduce mind to brain, there are "emergent" properties of mind that cannot be predicted by our best model of brain activity. In this way the Platonic thinker will claim to have solved the problem and is free to ignore the possibility that the real problem is only an epistemological one. Maybe the brain is just so complex that we cannot hope to completely calculate the properties of mind from a detailed model of the components of a brain. Maybe the universe has no problem creating mind from brain (just as the universe has no problem making water molecules have the "correct" chemical properties); maybe the human inability to do so is just a statement about an understandable human limitation.
The biggest problem with Stairway to the Mind is that Scott's mathematician's bias shows through. Scott can't entirely escape the lure of Platonic essentialism and the dream that quantum uncertainty might explain the "mystery" of the mind. Scott is a dualist. He tries to excuse himself by calling himself an "emergent dualist".
Scott's obsession with equations pervades his book. One example of his mathematical overkill:
"It may be that the present laws of physics are sufficient to understand the nature of mind. In this case it would be required to find an equation that would relate consciousness to the underlying physical world."
We must assume that Scott's attitude about life is the same. Would he say:
It may be that the present laws of physics are sufficient to understand the nature of life. In this case it would be required to find an equation that would relate vitality to the underlying physical world.
Clearly, we can understand that life is made possible by chemical reactions, even if we cannot write an equation that captures the Platonic essence of life. Plato may lament our mathematical failure, but the rest of humanity gets on just fine, sure that life does not need any magic beyond that of standard physics.
Frankly, I am left wondering if we have a problem in semantics here. What is the difference between saying:
1) Life can be explained in terms of chemistry and physics
and
2) Life is an emergent property of complex collections of molecules
?
Here is a line from Scott: "I suggest consciousness is an emergent phenomenon, one born of many discrete events fusing together as a single experience." The only thing that bothers me about that sentence is the word "emergent". I would be perfectly happy with that sentence if I could change it to: "I suggest consciousness is a complex phenomenon, one born of many discrete events fusing together as a single experience." Does "complex" mean the same thing to me as "emergent" does to Scott? Is it just semantics? For me, saying "emergent" amounts to throwing up your hands in surrender, while saying "complex" is a call to battle, an invitation back into the laboratory.
For some folks, "materialism" and "reductionism" are words to be used in the same way Ronald Reagan would use the words "communism" and "atheism". If Scott just wants me to say that Mind is an emergent property of the brain, OKAY! I can do that. It's just words. Whether I say it that way or say I want to EXPLAIN mind in terms of brain, I am still left with the same task: I need to study the details of brain and behavior. It seems that Scott's approach of saying MIND is an emergent property allows folks to avoid the hard work of figuring out the biology of the brain. If MIND is just an emergent property of complex brain tissue, then people like Scott are tempted to just list MIND as a fundamental property of the universe and feel they are done. I guess that is fine for philosophizing mathematicians, but it is really a cheat.
I guess the worst thing about Scott's book is when he does things like characterize Francis Crick as a dualist. It's like calling a Democrat a "liberal" or a socialist a "communist". It is a fun game to play if you can catch people who are hiding under the wrong label, but it is a dangerous game to play if you are wrong. In the current explosion of mind research, there is going to have to be a shake-out of terminology. If we keep talking, we may eventually be able to agree on the meanings of the words we use, but only if we keep talking about objective experimental results. Here is the challenge for philosophers of mind: replacing our biases as physical scientists and biologists with an empirical foundation built on a new type of science that successfully brings physical methods to the study of complex biological systems.
Bottom line: this is a great book. Read it. But keep a clear head concerning the difference between the science of mind and the philosophy of mind.