Standards of rationality are a lot more complicated than a handful of truisms like "don't contradict yourself." There are questions of what counts as evidence; how much evidence, and of what quality, is needed to establish a claim, make it reasonable to believe, or overturn a contrary belief; how much weight should be accorded to testimony; how beliefs should be updated on the basis of new evidence and argument; what principles of reasoning should be accepted; and lots more.
One area where the problem shows up is in reasoning about probabilities. Most people are pretty bad at it. (There's a vast literature in psychology documenting human irrationality in dealing with probabilities.) And it's not just that they make mistakes or are careless. If that were the explanation, you'd expect a random distribution of errors around the correct answers. Instead, the answers people give are systematically biased. We come to the same wrong conclusions, which is to say we're operating with mistaken principles of probabilistic reasoning. People who (eventually) get to be good at probabilistic reasoning generally have to go through a process of beating their intuitions into submission.
As an example, are you familiar with the Monty Hall Problem? Imagine you're a guest on a game show like "Let's Make a Deal." The host is about to offer you a choice of three doors behind which prizes may lie. Once he hears your answer, but before he opens the door you've chosen, he's going to give you an additional piece of information and ask if you want to change your selection. He will do this regardless of what you choose, and there are no tricks like sliding platforms behind the doors to shift the prizes around.
Here's the choice he offers you. He tells you that there's a new car behind one of the doors and a goat behind each of the other two. (Assume you'd rather have a car than a goat.) You make your choice, and he opens one of the doors you did not pick and shows you a goat. Then he asks you if you'd like to switch from the door you picked to the other closed door. Should you do it? If so, why? If not, why not?
The first time people come across it, almost everyone (me included) gets the answer wrong. I've presented it to a lot of people, some of them exceptionally bright, and not one who had not previously come across the problem got it right. It's interesting that not only do most people get it wrong, they are passionately sure that they've gotten it right and that the correct answer is wrong! (You can find the right answer, with a bit of explanation, here. But, if you haven't come across the problem before, don't look before you've tried for yourself.)
That’s something of a digression, but the general point is that no one has any guarantee that his standards of rationality are exactly correct. There’s an overwhelming likelihood that they’re not. That’s why a revision of one’s standards of rationality can be an improvement.
Of course, you couldn’t reasonably revise some standard you accepted unless there were some other standards you were relying upon to do it, but that is, at least often, readily available. For example, people who don’t buy the fairly simple mathematical reasoning that leads to the right choice in the Monty Hall problem can set up experiments or simulations to test whether their reasoning is right. You test your probabilistic reasoning in terms of something else that, for the time being, is not in question.
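The kind of simulation just described can be sketched in a few lines of Python. (The function names and the trial count here are my own choices, not anything from the original discussion; the point is just that the experiment is easy to run for yourself.)

```python
import random

def play(switch, rng):
    """Simulate one round of the Monty Hall game.

    The car is placed uniformly at random; the contestant picks a door;
    the host opens a different door that hides a goat; the contestant
    either switches to the remaining closed door or stays put.
    Returns True if the contestant ends up with the car.
    """
    doors = (0, 1, 2)
    car = rng.randrange(3)
    pick = rng.randrange(3)
    # The host opens a door that is neither the contestant's pick nor the car.
    opened = next(d for d in doors if d != pick and d != car)
    if switch:
        # Switch to the one door that is still closed and wasn't picked.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

def win_rate(switch, trials=100_000, seed=0):
    """Estimate the probability of winning under a fixed strategy."""
    rng = random.Random(seed)
    return sum(play(switch, rng) for _ in range(trials)) / trials

if __name__ == "__main__":
    print(f"stay:   {win_rate(switch=False):.3f}")
    print(f"switch: {win_rate(switch=True):.3f}")
```

Run it and the stay strategy hovers near one third while switching hovers near two thirds, which is exactly the result people find so hard to believe until they see it with their own eyes.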
More generally, we adjust our principles of reasoning in light of what we take to be true and unproblematic; we adjust what we think true in light of what we take to be correct and unproblematic principles of reasoning. This may be a bit uncomfortable because we have no guarantee that our adjustments are going in the right direction, but, for beings like us, who have no guarantee that we're perfectly rational to begin with, it's the only reasonable (in a fairly commonsensical sense) option.
Rob
_____
Rob Bass
rhbass@gmail.com