POSTSCRIPT TO ARTIFICIAL INTELLIGENCE.
Reflections on John Searle’s Rediscovery of the Mind
As I said in the original article, my stance on the issue of computer intelligence is undecided. Searle, however, is not undecided. He affirms that human intelligence, or more specifically human consciousness, is a subjective phenomenon, and he further asserts that this phenomenon is not accessible to reductionist methodology. Searle makes a very persuasive argument, but I cannot go as far as he does. Searle wants us to believe that consciousness, by its subjective nature, is unassailable by this method. His objection, though he claims it is not the Kantian argument concerning the a priori nature of consciousness, is just that. He goes to great lengths to show that the very subjectivity of consciousness makes it impossible to reduce one's own consciousness to something else in terms of its constructs. In plainer terms: consciousness is irreducible. Why? Because in reducing it, we lose the subjective sense of self; it vanishes as we deconstruct it into its constituents. At present, he admits, and I concur, we cannot even perform this reduction on the human mind. But supposing we could, what then?
How does Searle come to this conclusion? First, he distinguishes two philosophical senses of consciousness: the epistemic and the ontological. By epistemic, he means our knowledge of our conscious states; by ontological, he means our being in such conscious states. In the former, we have the ability to treat consciousness as a phenomenon that can be studied through causative inputs and effectual outputs, a study that allows us to understand its processes. In the latter, we have ourselves as beings, which necessarily exist in a conscious state at any given moment and cannot be analyzed in an objective, third-person fashion. We cannot do so, Searle asserts, because we ourselves are the beings to which we are trying to apply the analysis. The attempt is doomed because to be obstructs to know. If this objection sounds similar to Goedel's Incompleteness paradox I described above, you're right: it is. It is also very similar to the argument Kant made in his Critique of Pure Reason.

Here is where we get to the heart of Searle's debate with AI scientists. His dispute reaches beyond AI proper to scientists working throughout the specialized field of cognitive science, but I am focusing on the AI researchers in this article. He tells us they subscribe to a form of materialism known as reductionism: they believe there must be a way to explain the complex machinery of our minds in terms that are understandable via some form of analysis. In the case of AI, that form is functionalism, sometimes called computationalism. He does not agree. He believes that the subjective nature of consciousness makes it unassailable by this method, or any other for that matter. He explicitly states what he knows is a cardinal sin to scientific theorists: not all things are knowable! He goes further and asserts that this is common sense and should not alarm or disquiet anyone within the cognitive science discipline.
I have great respect for John Searle and agree with his conclusion that the Turing Test is flawed and insufficient grounds for establishing consciousness in an AI. However, I must point out that he is on uncertain logical ground with his objection to computational AI, and I could not disagree more with the claim that an AI is theoretically unattainable, a claim resting on his stubborn belief that the human mind cannot be understood via computational methods. My intuition is that he is not as certain or convinced of his own views as he suggests. He may well suspect that the bold assertion that consciousness is irreducible to computational functionalism could be disproved. Note this passage from the book as evidence:
“Furthermore, when I speak of the irreducibility of consciousness, I am speaking of its irreducibility according to standard patterns of reduction. No one can rule out a priori the possibility of a major intellectual revolution that would give us a new—and at present unimaginable—conception of reduction, according to which consciousness would be reducible.”
A-ha, so there is still hope for us materialists and the functional perspective yet.
The problem with Searle's argument (though it may be cogent and compelling) is that it doesn't play fair. It rules out every possible way to inspect and verify it. If consciousness is irreducible because the subjectivity of consciousness is lost when we attempt the reduction, then consciousness is unknowable by reductionist methodology. But notice that the irreducibility principle is not the conclusion of a deductive argument but the assertion of a common-sense notion, according to Searle. It is put forth as the result of a thought experiment in which he assumes that subjective consciousness is indivisible, because dividing it would contradict the very nature of being subjective. Again, this is assumed, not proved. It has the semblance of the paradox in Goedel's Incompleteness Theorem, but lacks the deductive structure. Goedel came to his startling conclusion from a set of valid axioms; Searle bases his argument on assuming the conclusion.
But let's not dwell on this issue. What about the opposite side of the coin? How does human consciousness come into being? Here, Searle again makes a flawed attempt to debunk a materialist argument. It was actually Hegel who first laid the groundwork for what modern cognitive and computer scientists call the 'emergence principle of consciousness'. Hegel's dialectical process states explicitly that quantity and quality affect each other reciprocally: changes in quantity lead to commensurate changes in quality, and vice versa. It is this principle that modern scientists invoke when they claim that the human brain's myriad subcellular entities, cells, tissues, organs, and subsystems, working in concert, produce an organized, self-aware system we call consciousness, a phenomenon qualitatively different from any of its constituents. Searle attacks this notion too.
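The emergence principle invoked here, simple quantitative interactions producing a qualitatively new whole, is often illustrated outside neuroscience with cellular automata. Below is a minimal sketch in Python using Conway's Game of Life; this is my own illustration, not an example drawn from Searle or Hegel. Each cell obeys trivial local rules, yet a "glider," a coherent pattern that travels across the grid, emerges: a quality possessed by no individual cell.

```python
from collections import Counter

def step(live):
    """Advance one generation. `live` is a set of (x, y) live cells."""
    # Count, for every cell adjacent to a live cell, how many live neighbors it has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic glider: after 4 generations it reappears shifted by (1, 1).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
shifted = {(x + 1, y + 1) for (x, y) in glider}
print(state == shifted)  # True: the pattern persists as a moving whole
```

The point of the sketch is the one the emergence argument trades on: nothing in the per-cell rule mentions "gliders" at all, yet the moving pattern is a stable, describable entity at a higher level of organization.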
TO BE CONTINUED