----------
From: Rob Bass <rhbass@gmail.com>
----------
> From: George
>
> [There is] the distinction between what one believes is correct, and the standards of proof by which one believes it correct. Do the standards of proof themselves have to be correct ones to be used to prove correctness? If so, what standards does one use to prove them?
Sure, the standards have to be correct, but it doesn't follow that we have to have a proof of their correctness in order for them to be correct (just as it doesn't follow from the fact that the world is round that anybody knows or has a proof that the world is round).
That may be too compressed and I'm going to expand on it at some length below, but as a preliminary, consider this. Suppose that, with regard to some fact, I have sufficient evidence and argument available to me that, if I apply the correct standards of reasoning to that evidence and argument, I will be entitled to or justified in the claim that I know it. Suppose I do apply those standards and, thus, that I know it. Now, whether I have applied the correct standards is a further fact and one that I may not know. My knowing that p is consistent with my not knowing that I know that p. (I may not even believe that I know that p! Even beyond that, I might know that p and believe that I do not know that p--in which case, of course, that second belief would be false!)
Here, we're heading in the direction of some deep questions in epistemology. In particular, if we are to have knowledge or good reasons for beliefs, must everything--including standards of reasoning--be justified? If you say yes, then you've got, ultimately, only two options. You can end up as some kind of foundationalist who seeks (or claims to have found) self-evident or self-justifying truths to base knowledge on, or you can end up as a skeptic who won't accept any knowledge claims because they all depend on something that hasn't itself been justified. I think the skeptic wins that argument.
I don't think, however, that that's the end of the story. Note that both the foundationalist and the skeptic are agreeing that everything has to be justified. But if everything has to be justified, what about that? What justifies it? Is there any alternative to saying that everything has to be justified? I think there is. Roughly, by the time we are able to consider the skeptic's doubts, we already have lots of beliefs, including beliefs about what's true and what the correct standards of reasoning are. None of them, I think, are sacrosanct in the foundationalist sense, but the fact that they're not guaranteed to be right is no reason at all for thinking they're wrong. The answer to the question, "Where do you start?" is: you start where you are. Any particular belief may be called into question--and, if it is (and if you are rational), you consider any criticisms in the light of other beliefs that are not, for the time being, giving you trouble. This might sound problematic if the skeptic were able to raise his questions without relying on other standards and beliefs. But he is not. He has to rely on your adherence to rational standards even to raise doubts. In principle, no member of the system of your beliefs, including beliefs about rational standards, is immune to revision (though some may never need revising), but at any given time, the need for revision has to be mediated through other beliefs and standards that you accept.
As I wrote somewhere else:
In my view, foundationalism and skepticism feed on one another. Foundationalists (correctly) wish to defend human knowledge. Skeptics (correctly) point out that the proposed foundations aren't certain or self-evident or indubitable or whatever.
But consider the skeptic's doubts: Almost everything that people do or can do (with the possible exception of some automatic bodily functions) is also something they can imagine doing, fantasize doing, pretend to be doing, or feign doing. The reason the skeptic consistently wins the running debate with the foundationalist is that he has had a monopoly on doubt. What if we open up doubting to fair competition? What if we doubt his doubts? Are the skeptic's doubts real--or is he just pretending? If he's just pretending, then the right response might be to pretend that we have an answer--and pay him no more attention.
Once the foundationalist model of some privileged class of foundational truths is seen to be wrong, skepticism goes with it. Any particular claim can be challenged--but only if there's good reason to challenge it. And once the skeptic admits that he can tell the difference between a good reason and one not so good, he's no longer a skeptic.
----------
> From: George
>
> The universal primacy--I don't think it's an exaggeration to call it "absolutism"--of logic is as evident in human action as it ever was before Gödel wrote. That's because the validity of logic does not depend on truth (in the sense of correspondence with reality) at all. Whether a proposition is logical is a completely distinct issue from whether it is true.
Why do you want to distinguish between logic and truth? Of course, truths of logic aren't all truths, but I don't see that that means they're not true at all. What is "a proposition [that] is logical" if you don't mean that it's a logical truth? I think I'd rather say that logic is true no matter what else is. Suppose it's a logical truth that an indicative sentence and its denial are not both true. Look around you. Do you ever find a case in which a sentence and its denial are both true? Of course not. So, in what sense does "a sentence and its denial are not both true" not correspond to reality? Isn't it instead that it "corresponds" to everything?
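To make the point vivid, here is a minimal sketch in Lean 4 (the formalism and the theorem name are my own choices, purely for illustration): non-contradiction is provable for an arbitrary proposition p, with no assumption about what p says or how the world happens to be--which is the sense in which it "corresponds" to everything.

    -- Illustrative sketch: non-contradiction holds for any proposition p.
    theorem no_contradiction (p : Prop) : ¬(p ∧ ¬p) :=
      fun h => h.2 h.1  -- a proof of p and a proof of ¬p together yield False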
Here's an attempt to say a little more about how I look at logic. We can start with a characterization from Popper: a system of logic is a set of rules for transmitting truth from premises to conclusions and for retransmitting falsehood from conclusions to premises. In other words, if the premises are true (and the rules have been followed), then the conclusion has to be true as well. If the conclusion is false (and the rules have been followed), then at least one of the premises must be false. (This is just a starting point, and Popper was mistaken if he thought it was a full characterization. A full characterization would need to say something about systems with more truth-values than "true" and "false," or that employ probability-metrics or degrees of confirmation, and so on.) That, so to speak, sets an ideal for a system of logic.
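As a toy illustration of the two directions in that characterization (again a sketch in Lean 4, with theorem names of my own invention, not anything from Popper): truth runs forward along a valid inference step, and falsehood runs backward along the same step.

    -- Truth transmitted forward: a valid step p → q plus a true premise p
    -- yields a true conclusion q (modus ponens).
    theorem transmits_truth (p q : Prop) (step : p → q) (hp : p) : q :=
      step hp

    -- Falsehood retransmitted backward: the same step p → q plus a false
    -- conclusion (¬q) shows the premise must be false too (modus tollens).
    theorem retransmits_falsehood (p q : Prop) (step : p → q) (hq : ¬q) : ¬p :=
      fun hp => hq (step hp)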
The question is: how do we bring the ideal down to earth? Where do our logical systems come from? I don't think we start from intuition except perhaps in the blandest and most innocuous sense--namely, with things that seem obvious for which we do not at present see a further reason. Where we start, both in the life of an individual and in the history of the species, is with inferences or arguments, with taking one thing as a reason for something else. Plainly, both in the case of each person individually and in the case of societies historically, we did that before we ever formulated or considered any principles of logic.
But when, on this level, we reflect a bit on the arguments and inferences we make and hear others making, we find that some seem better than others; one of the obvious things about arguments is that they're not all equally good. We attempt to construct systems of rules that will both formalize our practice and discipline it, that will give us the "intuitively" right answers in the easy cases and help us to find right answers where they aren't intuitively obvious. They help us to understand what good arguments have in common as well as to see where defective arguments (which may also seem good) go wrong. These are systems of logic. None of them, I think, are founded on absolutely self-evident or incontrovertible principles. They are, one and all, open to development and improvement. (Some, though, may not need to be improved; it's possible we've gotten something right.)
Now, this perspective does depend on a kind of assumption, but it is not the assumption that any particular logical principle is a certain starting-point. Rather, it is the assumption that we are not hopelessly bad at reasoning. If we were hopelessly bad, then there'd be no more reason to accept what seems obviously right than to reject it, nor would there be any reason to try to improve our logical systems. If you like the term, you might call this our basic cognitive act of faith. But if it's an act of faith, it's one to which we really have no alternative: If it's a mistake, all bets are off--including the bets of anyone tempted to deny it. For even to make clear what is being denied, one has to rely on argument and inference. The doubter would be making an argument that there are no good arguments, providing a reason for not paying attention to reasons.
When people first hear of multiple systems of logic, I suspect they often feel dizzy or disoriented, as if the ground were shifting under their feet. But actually, there's considerable convergence among different systems, and such disagreement as there is is usually either at the margins, where we are currently trying to extend the reach of our formal systems, or over the best or most perspicuous way to formalize facts and relations admitted on all sides. In short, there is a substantial consensus. Familiar principles like modus ponens or non-contradiction in their ordinary applications are in no danger whatever. But I think it's a mistake to say that the validity of logic "comes from human agreement or consensus." If that were really so, we could have no good reason to change what we already agree upon. There just isn't the sharp distinction between what is true or accepted because we agree on it and what we agree upon, if we do, because it's true.
Rob
rhbass@gmail.com
http://oocities.com/amosapient