An Epistemological Nightmare
Raymond M. Smullyan, 1982
Scene 1
Frank is in the office of an eye doctor. The doctor holds up a book and asks
"What color is it?" Frank answers, "Red." The doctor says, "Aha, just as I
thought! Your whole color mechanism has gone out of kilter. But fortunately your
condition is curable, and I will have you in perfect shape in a couple of
weeks."
 Scene 2
(A few weeks later.) Frank is in a laboratory in the home of an experimental
epistemologist. (You will soon find out what that means!) The epistemologist
holds up a book and also asks, "What color is this book?" Now, Frank was earlier
dismissed by the eye doctor as "cured." However, he is by now of a very
analytical and cautious temperament, and will not make any statement that can
possibly be refuted. So Frank answers, "It seems red to me." 
 Epistemologist:     Wrong! 
 Frank:     I don't think you heard what I said. I 
merely said that it seems red to me.
 Epistemologist:     I heard you, and you were wrong. 
 Frank:     Let me get this clear; did you mean that 
I was wrong that this book is red, or that I was wrong that it seems red to me? 
 Epistemologist:     I obviously couldn't have meant
that you were wrong in that it is red, since you did not say that it is red. All 
you said was that it seems red to you, and it is this statement which is wrong. 
 Frank:     But you can't say that the statement "It 
seems red to me" is wrong. 
 Epistemologist:     If I can't say it, how come I 
did? 
 Frank:     I mean you can't mean it. 
 Epistemologist:     Why not? 
 Frank:     But surely I know what color the book
seems to me! 
 Epistemologist:     Again you are wrong. 
 Frank:     But nobody knows better than I how things 
seem to me. 
 Epistemologist:     I am sorry, but again you are
wrong. 
 Frank:     But who knows better than I? 
 Epistemologist:     I do.
 Frank:     But how could you have access to my
private mental states?
 Epistemologist:     Private mental states!
Metaphysical hogwash! Look, I am a practical epistemologist. Metaphysical
problems about "mind" versus "matter" arise only from epistemological 
confusions. Epistemology is the true foundation of philosophy. But the trouble
with all past epistemologists is that they have been using wholly theoretical 
methods, and much of their discussion degenerates into mere word games. While
other epistemologists have been solemnly arguing such questions as whether a man 
can be wrong when he asserts that he believes such and such, I have discovered
how to settle such questions experimentally. 
 Frank:     How could you possibly decide such things
empirically? 
 Epistemologist:     By reading a person's thoughts
directly. 
 Frank:     You mean you are telepathic?
 Epistemologist:     Of course not. I simply did the 
one obvious thing which should be done, viz. I have constructed a brain-reading
machine--known technically as a cerebroscope--that is operative right now in 
this room and is scanning every nerve cell in your brain. I thus can read your
every sensation and thought, and it is a simple objective truth that this book 
does not seem red to you.
 Frank (thoroughly subdued):     Goodness gracious, I 
really could have sworn that the book seemed red to me; it sure seems that it
seems red to me!
 Epistemologist:     I'm sorry, but you are wrong
again.
 Frank:     Really? It doesn't even seem that it
seems red to me? It sure seems like it seems like it seems red to me!
 Epistemologist:     Wrong again! And no matter how 
many times you reiterate the phrase "it seems like" and follow it by "the book 
is red" you will be wrong. 
 Frank:     This is fantastic! Suppose instead of the 
phrase "it seems like" I would say "I believe that." So let us start again at 
ground level. I retract the statement "It seems red to me" and instead I assert 
"I believe that this book is red." Is this statement true or false? 
 Epistemologist:     Just a moment while I scan the 
dials of the brain-reading machine--no, the statement is false. 
 Frank:     And what about "I believe that I believe 
that the book is red"? 
 Epistemologist (consulting his dials):     Also 
false. And again, no matter how many times you iterate "I believe," all these 
belief sentences are false. 
 Frank:     Well, this has been a most enlightening 
experience. However, you must admit that it is a little hard on me to realize 
that I am entertaining infinitely many erroneous beliefs! 
 Epistemologist:     Why do you say that your beliefs 
are erroneous? 
 Frank:     But you have been telling me this all the 
while! 
 Epistemologist:     I most certainly have not! 
 Frank:     Good God, I was prepared to admit all my 
errors, and now you tell me that my beliefs are not errors; what are you trying 
to do, drive me crazy? 
 Epistemologist:     Hey, take it easy! Please try to 
recall: When did I say or imply that any of your beliefs are erroneous? 
 Frank:     Just simply recall the infinite sequence 
of sentences: (1) I believe this book is red; (2) I believe that I believe this 
book is red; and so forth. You told me that every one of those statements is 
false. 
 Epistemologist:     True. 
 Frank:     Then how can you consistently maintain 
that my beliefs in all these false statements are not erroneous? 
 Epistemologist:     Because, as I told you, you 
don't believe any of them. 
 Frank:     I think I see, yet I am not absolutely 
sure. 
 Epistemologist:     Look, let me put it another way. 
Don't you see that the very falsity of each of the statements that you assert 
saves you from an erroneous belief in the preceding one? The first statement is, 
as I told you, false. Very well! Now the second statement is simply to the 
effect that you believe the first statement. If the second statement were true, 
then you would believe the first statement, and hence your belief about the 
first statement would indeed be in error. But fortunately the second statement 
is false, hence you don't really believe the first statement, so your belief in 
the first statement is not in error. Thus the falsity of the second statement 
implies you do not have an erroneous belief about the first; the falsity of the
third likewise saves you from an erroneous belief about the second, etc. 
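(The epistemologist's argument can be put schematically. The notation is only a convenient gloss: write p for "this book is red," B(q) for "Frank believes q," and S1, S2, ... for the sequence of belief sentences.)

\[
S_1 = B(p), \qquad S_{n+1} = B(S_n).
\]
\[
\text{An erroneous belief in } S_n \text{ requires } B(S_n)\wedge\neg S_n,\ \text{that is, } S_{n+1}\wedge\neg S_n;\ \text{since every } S_{n+1} \text{ is false, the conjunction fails for every } n.
\]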
 Frank:     Now I see perfectly! So none of my 
beliefs were erroneous, only the statements were erroneous. 
 Epistemologist:     Exactly. 
 Frank:     Most remarkable! Incidentally, what color 
is the book really? 
 Epistemologist:     It is red. 
 Frank:     What! 
 Epistemologist:     Exactly! Of course the book is 
red. What's the matter with you, don't you have eyes? 
 Frank:     But didn't I in effect keep saying that 
the book is red all along? 
 Epistemologist:     Of course not! You kept saying 
it seems red to you, it seems like it seems red to you, you believe it is red, 
you believe that you believe it is red, and so forth. Not once did you say that 
it is red. When I originally asked you "What color is the book?" if you had 
simply answered "red," this whole painful discussion would have been avoided. 
 Scene 3
Frank comes back several months later to the home of the epistemologist. 
 Epistemologist:     How delightful to see you! 
Please sit down. 
 Frank (seated):     I have been thinking of our last 
discussion, and there is much I wish to clear up. To begin with, I discovered an 
inconsistency in some of the things you said. 
 Epistemologist:     Delightful! I love 
inconsistencies. Pray tell! 
 Frank:     Well, you claimed that although my belief 
sentences were false, I did not have any actual beliefs that were false. If you
had not admitted that the book actually is red, you would have been consistent. 
But your very admission that the book is red leads to an inconsistency.
 Epistemologist:     How so? 
 Frank:     Look, as you correctly pointed out, in 
each of my belief sentences "I believe it is red," "I believe that I believe it 
is red," the falsity of each one other than the first saves me from an erroneous 
belief in the preceding one. However, you neglected to take into consideration
the first sentence itself. The falsity of the first sentence "I believe it is 
red," in conjunction with the fact that it is red, does imply that I do have a 
false belief. 
 Epistemologist:     I don't see why. 
 Frank:     It is obvious! Since the sentence "I 
believe it is red" is false, then I in fact believe it is not red, and since it 
really is red, then I do have a false belief. So there! 
 Epistemologist (disappointed):     I am sorry, but 
your proof obviously fails. Of course the falsity of the statement that you believe
it is red implies that you don't believe it is red. But this does not mean that 
you believe it is not red! 
 Frank:     But obviously I know that it either is 
red or it isn't, so if I don't believe it is, then I must believe that it isn't.
 Epistemologist:     Not at all. I believe that 
either Jupiter has life or it doesn't. But I neither believe that it does, nor 
do I believe that it doesn't. I have no evidence one way or the other. 
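(In the same notation, the epistemologist's point is that failing to believe a proposition is weaker than believing its negation.)

\[
\neg B(p) \not\Rightarrow B(\neg p): \quad \text{one may have both } \neg B(q) \text{ and } \neg B(\neg q), \text{ as with } q = \text{"Jupiter has life."}
\]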
 Frank:     Oh well, I guess you are right. But let 
us come to more important matters. I honestly find it impossible that I can be 
in error concerning my own beliefs. 
 Epistemologist:     Must we go through this again? I 
have already patiently explained to you that you (in the sense of your beliefs, 
not your statements) are not in error. 
 Frank:     Oh, all right then, I simply do not 
believe that even the statements are in error. Yes, according to the machine 
they are in error, but why should I trust the machine? 
 Epistemologist:     Whoever said you should trust 
the machine? 
 Frank:     Well, should I trust the machine? 
 Epistemologist:     That question involving the word 
"should" is out of my domain. However, if you like, I can refer you to a 
colleague who is an excellent moralist--he may be able to answer this for you. 
 Frank:     Oh come on now, I obviously didn't mean 
"should" in a moralistic sense. I simply meant "Do I have any evidence that this 
machine is reliable?" 
 Epistemologist:     Well, do you? 
 Frank:     Don't ask me! What I mean is should you 
trust the machine? 
 Epistemologist:     Should I trust it? I have no 
idea, and I couldn't care less what I should do. 
 Frank:     Oh, your moralistic hangup again. I mean, 
do you have evidence that the machine is reliable? 
 Epistemologist:     Well of course! 
 Frank:     Then let's get down to brass tacks. What 
is your evidence? 
 Epistemologist:     You hardly can expect that I can 
answer this for you in an hour, a day, or a week. If you wish to study this 
machine with me, we can do so, but I assure you this is a matter of several 
years. At the end of that time, however, you would certainly not have the 
slightest doubts about the reliability of the machine. 
 Frank:     Well, possibly I could believe that it is 
reliable in the sense that its measurements are accurate, but then I would doubt 
that what it actually measures is very significant. It seems that all it 
measures is one's physiological states and activities. 
 Epistemologist:     But of course, what else would 
you expect it to measure? 
 Frank:     I doubt that it measures my psychological 
states, my actual beliefs. 
 Epistemologist:     Are we back to that again? The 
machine does measure those physiological states and processes that you call 
psychological states, beliefs, sensations, and so forth.
 Frank:     At this point I am becoming convinced 
that our entire difference is purely semantical. All right, I will grant that 
your machine does correctly measure beliefs in your sense of the word "belief," 
but I don't believe that it has any possibility of measuring beliefs in my sense 
of the word "believe." In other words I claim that our entire deadlock is simply 
due to the fact that you and I mean different things by the word "belief." 
 Epistemologist:     Fortunately, the correctness of 
your claim can be decided experimentally. It so happens that I now have two 
brain-reading machines in my office, so I now direct one to your brain to find 
out what you mean by "believe" and now I direct the other to my own brain to 
find out what I mean by "believe," and now I shall compare the two readings. 
Nope, I'm sorry, but it turns out that we mean exactly the same thing by the 
word "believe." 
 Frank:     Oh, hang your machine! Do you believe we 
mean the same thing by the word "believe"? 
 Epistemologist:     Do I believe it? Just a moment 
while I check with the machine. Yes, it turns out I do believe it. 
 Frank:     My goodness, do you mean to say that you 
can't even tell me what you believe without consulting the machine? 
 Epistemologist:     Of course not. 
 Frank:     But most people when asked what they 
believe simply tell you. Why do you, in order to find out your beliefs, go 
through the fantastically roundabout process of directing a thought-reading 
machine to your own brain and then finding out what you believe on the basis of 
the machine readings? 
 Epistemologist:     What other scientific, objective 
way is there of finding out what I believe? 
 Frank:     Oh, come now, why don't you just ask 
yourself? 
 Epistemologist (sadly):     It doesn't work. 
Whenever I ask myself what I believe, I never get any answer! 
 Frank:     Well, why don't you just state what you 
believe? 
 Epistemologist:     How can I state what I believe 
before I know what I believe? 
 Frank:     Oh, to hell with your knowledge of what 
you believe; surely you have some idea or belief as to what you believe, don't 
you? 
 Epistemologist:     Of course I have such a belief. 
But how do I find out what this belief is? 
 Frank:     I am afraid we are getting into another 
infinite regress. Look, at this point I am honestly beginning to wonder whether 
you may be going crazy. 
 Epistemologist:     Let me consult the machine. Yes, 
it turns out that I may be going crazy. 
 Frank:     Good God, man, doesn't this frighten you? 
 Epistemologist:     Let me check! Yes, it turns out
that it does frighten me. 
 Frank:     Oh please, can't you forget this damned 
machine and just tell me whether you are frightened or not? 
 Epistemologist:     I just told you that I am. 
However, I only learned of this from the machine. 
 Frank:     I can see that it is utterly hopeless to 
wean you away from the machine. Very well, then, let us play along with the 
machine some more. Why don't you ask the machine whether your sanity can be 
saved? 
 Epistemologist:     Good idea! Yes, it turns out 
that it can be saved. 
 Frank:     And how can it be saved? 
 Epistemologist:     I don't know, I haven't asked 
the machine. 
 Frank:     Well, for God's sake, ask it! 
 Epistemologist:     Good idea. It turns out that... 
 Frank:     It turns out what? 
 Epistemologist:     It turns out that... 
 Frank:     Come on now, it turns out what? 
 Epistemologist:     This is the most fantastic thing 
I have ever come across! According to the machine the best thing I can do is to 
cease to trust the machine! 
 Frank:     Good! What will you do about it? 
 Epistemologist:     How do I know what I will do 
about it? I can't read the future.
 Frank:     I mean, what do you presently intend to 
do about it? 
 Epistemologist:     Good question, let me consult 
the machine. According to the machine, my current intentions are in complete 
conflict. And I can see why! I am caught in a terrible paradox! If the machine 
is trustworthy, then I had better accept its suggestion to distrust it. But if I 
distrust it, then I also distrust its suggestion to distrust it, so I am really 
in a total quandary. 
 Frank:     Look, I know of someone who I think might 
be really of help in this problem. I'll leave you for a while to consult him. Au 
revoir! 
 Scene 4
(Later in the day at a psychiatrist's office.) 
 Frank:     Doctor, I am terribly worried about a 
friend of mine. He calls himself an "experimental epistemologist." 
 Doctor:     Oh, the experimental epistemologist. 
There is only one in the world. I know him well! 
 Frank:     That is a relief. But do you realize that 
he has constructed a mind-reading device that he now directs to his own brain, 
and whenever one asks him what he thinks, believes, feels, is afraid of, and so 
on, he has to consult the machine first before answering? Don't you think this 
is pretty serious? 
 Doctor:     Not as serious as it might seem. My
prognosis for him is actually quite good. 
 Frank:     Well, if you are a friend of his, 
couldn't you sort of keep an eye on him? 
 Doctor:     I do see him quite frequently, and I do 
observe him a good deal. However, I don't think he can be helped by so-called
"psychiatric treatment." His problem is an unusual one, the sort that has to 
work itself out. And I believe it will. 
 Frank:     Well, I hope your optimism is justified. 
At any rate I sure think I need some help at this point! 
 Doctor:     How so? 
 Frank:     My experiences with the epistemologist 
have been thoroughly unnerving! At this point I wonder if I may be going crazy; 
I can't even have confidence in how things appear to me. I think maybe you could 
be helpful here. 
 Doctor:     I would be happy to but cannot for a 
while. For the next three months I am unbelievably overloaded with work. After 
that, unfortunately, I must go on a three-month vacation. So in six months come 
back and we can talk this over. 
 Scene 5
(Same office, six months later.) 
 Doctor:     Before we go into your problems, you 
will be happy to hear that your friend the epistemologist is now completely 
recovered. 
 Frank:     Marvelous, how did it happen? 
 Doctor:     Almost, as it were, by a stroke of 
fate--and yet his very mental activities were, so to speak, part of the "fate." 
What happened was this: For months after you last saw him, he went around 
worrying "should I trust the machine, shouldn't I trust the machine, should I, 
shouldn't I, should I, shouldn't I." (He decided to use the word "should" in 
your empirical sense.) He got nowhere! So he then decided to "formalize" the 
whole argument. He reviewed his study of symbolic logic, took the axioms of 
first-order logic, and added as nonlogical axioms certain relevant facts about 
the machine. Of course the resulting system was inconsistent--he formally proved 
that he should trust the machine if and only if he shouldn't, and hence that he 
both should and should not trust the machine. Now, as you may know, in a system 
based on classical logic (which is the logic he used), if one can prove so much 
as a single contradictory proposition, then one can prove any proposition, hence 
the whole system breaks down. So he decided to use a logic weaker than classical 
logic--a logic close to what is known as "minimal logic"--in which the proof of 
one contradiction does not necessarily entail the proof of every proposition. 
However, this system turned out too weak to decide the question of whether or 
not he should trust the machine. Then he had the following bright idea. Why not 
use classical logic in his system even though the resulting system is 
inconsistent? Is an inconsistent system necessarily useless? Not at all! Even 
though given any proposition, there exists a proof that it is true and another 
proof that it is false, it may be the case that for any such pair of proofs, one 
of them is simply more psychologically convincing than the other, so simply pick 
the proof you actually believe! Theoretically the idea turned out very well--the
actual system he obtained really did have the property that given any such pair 
of proofs, one of them was always psychologically far more convincing than the 
other. Better yet, given any pair of contradictory propositions, all proofs of 
one were more convincing than any proof of the other. Indeed, anyone except the 
epistemologist could have used the system to decide whether the machine could be 
trusted. But with the epistemologist, what happened was this: He obtained one 
proof that he should trust the machine and another proof that he should not. 
Which proof was more convincing to him, which proof did he really "believe"? The 
only way he could find out was to consult the machine! But he realized that this 
would be begging the question, since his consulting the machine would be a tacit 
admission that he did in fact trust the machine. So he still remained in a 
quandary. 
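(The formal deadlock the doctor describes can be sketched roughly as follows, writing T for "he should trust the machine"; the particular nonlogical axioms about the machine are left unspecified.)

\[
\text{axioms} \vdash T \leftrightarrow \neg T, \qquad T \leftrightarrow \neg T \vdash_{\mathrm{classical}} T \wedge \neg T, \qquad T \wedge \neg T \vdash_{\mathrm{classical}} Q \ \text{for every proposition } Q,
\]
\[
\text{whereas a near-minimal logic drops the last step (ex falso quodlibet), so one contradiction no longer proves everything; but that weakened system settled neither } T \text{ nor } \neg T.
\]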
 Frank:     So how did he get out of it? 
 Doctor:     Well, here is where fate kindly 
interceded. Due to his absolute absorption in the theory of this problem, which 
consumed just about his every waking hour, he became for the first time in his life
experimentally negligent. As a result, quite unknown to him, a few minor units 
of his machine blew out! Then, for the first time, the machine started giving 
contradictory information--not merely subtle paradoxes, but blatant 
contradictions. In particular, the machine one day claimed that the 
epistemologist believed a certain proposition and a few days later claimed he 
did not believe that proposition. And to add insult to injury, the machine 
claimed that he had not changed his belief in the last few days. This was enough 
to simply make him totally distrust the machine. Now he is fit as a fiddle. 
 Frank:     This is certainly the most amazing thing 
I have ever heard! I guess the machine was really dangerous and unreliable all 
along. 
 Doctor:     Oh, not at all; the machine used to be 
excellent before the epistemologist's experimental carelessness put it out of 
whack. 
 Frank:     Well, surely when I knew it, it couldn't 
have been very reliable. 
 Doctor:     Not so, Frank, and this brings us to 
your problem. I know about your entire conversation with the epistemologist--it 
was all tape-recorded. 
 Frank:     Then surely you realize the machine could 
not have been right when it denied that I believed the book was red. 
 Doctor:     Why not? 
 Frank:     Good God, do I have to go through all 
this nightmare again? I can understand that a person can be wrong if he claims 
that a certain physical object has a certain property, but have you ever known a 
single case in which a person was mistaken when he claimed to have or not to have a
certain sensation? 
 Doctor:     Why, certainly! I once knew a Christian 
Scientist who had a raging toothache; he was frantically groaning and moaning 
all over the place. When asked whether a dentist might not cure him, he replied 
that there was nothing to be cured. Then he was asked, "But do you not feel 
pain?" He replied, "No, I do not feel pain; nobody feels pain, there is no such
thing as pain, pain is only an illusion." So here is a case of a man who claimed 
not to feel pain, yet everyone present knew perfectly well that he did feel 
pain. I certainly don't believe he was lying, he was just simply mistaken. 
 Frank:     Well, all right, in a case like that. But 
how can one be mistaken if one asserts his belief about the color of a book? 
 Doctor:     I can assure you that without access to 
any machine, if I asked someone, "What color is this book?" and he answered, "I
believe it is red," I would be very doubtful that he really believed it. It 
seems to me that if he really believed it, he would answer, "It is red" and not 
"I believe it is red" or "It seems red to me." The very timidity of his response 
would be indicative of his doubts. 
 Frank:     But why on earth should I have doubted 
that it was red? 
 Doctor:     You should know that better than I. Let 
us see now, have you ever in the past had reason to doubt the accuracy of your 
sense perception? 
 Frank:     Why, yes. A few weeks before visiting the 
epistemologist, I suffered from an eye disease, which did make me see colors 
falsely. But I was cured before my visit. 
 Doctor:     Oh, so no wonder you doubted it was red! 
True enough, your eyes perceived the correct color of the book, but your earlier 
experience lingered in your mind and made it impossible for you to really 
believe it was red. So the machine was right! 
 Frank:     Well, all right, but then why did I doubt 
that I believed it was true? 
 Doctor:     Because you didn't believe it was true, 
and unconsciously you were smart enough to realize the fact. Besides, when one 
starts doubting one's own sense perceptions, the doubt spreads like an infection 
to higher and higher levels of abstraction until finally the whole belief system 
becomes one doubting mass of insecurity. I bet that if you went to the 
epistemologist's office now, and if the machine were repaired, and you now 
claimed that you believe the book is red, the machine would concur. 
 No, Frank, the machine is--or, rather, was--a good one. The epistemologist 
learned much from it, but misused it when he applied it to his own brain. He 
really should have known better than to create such an unstable situation. The 
combination of his brain and the machine each scrutinizing and influencing the 
behavior of the other led to serious problems in feedback. Finally the whole 
system went into a cybernetic wobble. Something was bound to give sooner or 
later. Fortunately, it was the machine. 
 Frank:     I see. One last question, though. How 
could the machine be trustworthy when it claimed to be untrustworthy? 
 Doctor:     The machine never claimed to be 
untrustworthy, it only claimed that the epistemologist would be better off not 
trusting it. And the machine was right.  