2021-11-14

Reconsidering Rationality, part 1


Consider this question – a test of your rationality. Suppose that:
  • The prevalence of a certain type of cancer – cancer X – is 1%.
  • The sensitivity of a test for cancer X is 90% – meaning the true-positive rate is 90%. In other words, for 90% of people who have cancer X and take that test, the test comes back positive.
  • The false-positive rate is 9% – meaning 9% of people who don’t have cancer X nonetheless test positive.
Audrey takes the test, and it comes back positive. What’s the chance she has cancer X?

You probably want to say there’s a 90% chance Audrey has cancer – after all, the test was positive, and the test was 90% accurate. But let’s break it down. Suppose we administer the test to 10,000 randomly selected people.
1% of them have cancer X – so 100 people have it, and 9,900 don’t. The test has a 90% true-positive rate, so 90% of those 100 people test positive: 90 true-positive results. The other 10 of the 100 who have the disease get a false negative. So we have 90 in the true-positive cell and 10 in the false-negative cell.

What about the other 9,900 people – the 99% who don’t have the disease? There’s a 9% false-positive rate, and 9% of 9,900 is 891 people who test positive for cancer X even though they don’t have it. That’s 891 false-positive test results. The remaining 9,009 of the 9,900 who don’t have cancer X get a true-negative result.

So here’s our breakdown:

Out of 10,000 people taking the test, we get:
True-positives: 90.
True-negatives: 9,009.
False-positives: 891.
False-negatives: 10.

The total number of positive test results will be 90 plus 891 – or 981.

Out of 981 positive test results, 90 of them – 9.2% – are true-positive, and the remaining 90.8% are false-positive.

Audrey’s positive test result means she has a 9.2% chance of having cancer X.
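
If you’d rather let a computer do the counting, here is a minimal Python sketch of the same arithmetic. The variable names are my own; the numbers are just the ones given above.

population = 10_000
prevalence = 0.01          # 1% of people have cancer X (the base rate)
sensitivity = 0.90         # true-positive rate
false_positive_rate = 0.09

has_cancer = population * prevalence                 # 100 people
no_cancer = population - has_cancer                  # 9,900 people

true_positives = has_cancer * sensitivity            # 90
false_negatives = has_cancer - true_positives        # 10
false_positives = no_cancer * false_positive_rate    # 891
true_negatives = no_cancer - false_positives         # 9,009

all_positives = true_positives + false_positives     # 981
print(f"Chance of cancer X given a positive test: {true_positives / all_positives:.1%}")
# prints: Chance of cancer X given a positive test: 9.2%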

Where did our intuition go wrong? Why did we think there was a 90% chance she had the cancer, when in fact there was a greater than 90% chance she didn’t have it?

We tend to forget about the base rate: that’s the 1% of the population that has the cancer. And why shouldn’t we? The true-positive rate is 90%, whether the base rate is 1% or 50%.

But, of course, the base rate matters a lot, because there’s a false-positive rate of 9% – and when the base rate is only 1%, the false positives (9% of the 9,900 who don’t have the cancer: 891 people) far outnumber the true positives (90% of the 100 who do: 90 people) – making Audrey much more likely, despite her positive test result, not to have cancer X. That’s good news for Audrey, but maybe not such good news for our human rational capacity.
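
To see just how much the base rate drives the answer, the same calculation can be written as a single formula – what statisticians call Bayes’ rule – and evaluated at different base rates. Here’s a small Python sketch (again with names of my own choosing):

def chance_given_positive(base_rate, sensitivity=0.90, false_positive_rate=0.09):
    """Probability of cancer X given a positive test, by Bayes' rule."""
    true_pos = sensitivity * base_rate
    false_pos = false_positive_rate * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

print(f"{chance_given_positive(0.01):.1%}")   # base rate 1%:  about 9.2%
print(f"{chance_given_positive(0.50):.1%}")   # base rate 50%: about 90.9%

Same test, same accuracy – only the base rate changes, and the answer swings from about 9% to about 91%.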

When I first came across that hypothetical scenario, my intuition was – as maybe yours was – that Audrey had a 90% chance of having the disease. Or maybe an 81% chance – taking the 90% true-positive rate and subtracting the 9% false-positive rate – which is completely wrong. But, heck, I’m a UU minister – before that, a humanities major. Surely actual doctors know their way around epidemiological statistical inference much better than I do?

Some of them do. “The most popular answer from a sample of doctors given these numbers ranged from 80 to 90 percent” (Pinker). As Steven Pinker sums it up:
“That’s right, the professionals whom we entrust with our lives flub the basic task of interpreting a medical test, and not by a little bit. They think there’s almost a 90 percent chance she has cancer, whereas in reality there’s a 90 percent chance she doesn’t.”
Maybe we could use a little more rationality.

I used to be a big fan of rationality – which doesn’t mean that I actually was very rational, only that I thought I was (mostly) – and I thought it was a good thing to be. I was a debater in high school – which is all about making reasoned arguments.

And then I became a philosophy professor – which is all about making reasoned arguments. “A philosopher,” one of my own professors had said, “is someone who can make the best possible argument for any position, no matter how wrong-headed.”

Along the way, I would sometimes hear that I was in my head all the time, that I might be more whole if I were in touch with my body and emotions. Usually this came from people other than my fellow philosophy academics.

Then I started preparing for the ministry and went to divinity school, and I heard it a lot more. Can’t just be all in your head. As I felt my way along toward what was, as near as I could discern, greater spiritual wholeness, something that Scottish philosopher David Hume had said helped me grasp the proper and reduced place of reason. Hume said:
“Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.”
There was no distinction, it seemed, between rationality and rationalization. The role of reason was to concoct after-the-fact rationales for what other parts of the brain had already decided to do anyway. That’s what the studies showed.

Benjamin Libet’s experiments in the mid-1980s showed that the brain’s preparatory motor activity begins several hundred milliseconds before we become conscious of deciding to move. We have already begun the action before the apparatus of conscious decision-making comes online. In Michael Gazzaniga’s experiments with split-brain patients, he flashed the word "walk" in a part of the visual field seen only by the right hemisphere. It’s the left hemisphere that processes language consciously, so subjects were not conscious of seeing the word. Yet many of them would stand and walk away. When asked why they were getting up, subjects had no problem giving a reason. "I’m going to get something to drink," they might say. Our inner interpreter module is good at making up explanations, but not at knowing it has done so.

Then in 2011, an essay by Hugo Mercier and Dan Sperber argued that our vaunted human rationality is more for social bonding than for discerning truth. Titled "Why Do Humans Reason?", the essay noted that if reason had evolved to discern truth or make better decisions, then natural selection would have weeded out confirmation bias – that is, "the tendency to search for, interpret, favor, and recall information in a way that confirms one's preexisting beliefs or hypotheses, while giving disproportionately less consideration to alternative possibilities."

Confirmation bias is a huge distortion – an enormous obstacle to adopting the belief that best fits all the available evidence. Why do we have brains built this way? Why didn’t natural selection weed this out? Good question! And Mercier and Sperber’s suggested answer seemed to make sense. Confirmation bias exists because forming beliefs that fit the evidence is not the purpose of human reasoning. Forming social bonds is the purpose of reasoning. Human thinking is fundamentally relational because, for our ancestors going back millions of years, survival had more to do with strong relationships and social bonds of support than it did with reaching conclusions that fit the evidence. Competition between groups placed a premium on group solidarity, and group solidarity was reinforced by sharing an ideology – a characteristic pattern of reasoning.

Homo sapiens has been on the planet for about 300,000 years. The genus Homo has been around for between 2.5 and 3 million years. But the scientific method has been around for less than 400 years. Clearly, coming up with a story that really fits best with all the evidence that has been or could be gathered is a low priority for brains like ours. But having a story that we share with our tribe-mates is a high priority. So confirmation bias is actually quite useful: it attunes us to look for evidence that will help us fit in with the people with whom we most need to fit in.

Rationality, then, is nothing but after-the-fact rationalization made up to explain something we did without really knowing why – and we offer these explanations to each other as a form of social bonding rather than a way of discerning truth.

You may be noticing a certain irony here. My path toward a decreased valuation of rationality was itself highly rational – I looked at empirical studies. One of the key traits of a rational person is a willingness to change their mind in the face of evidence – which, looking back, is just what I did.

But now I’m reconsidering rationality – maybe it’s actually possible. And a good thing. Maybe it’s not just shared rationalizations for social bonding, as fun as that may be.

This reconsideration started last August (2021) with a New Yorker article, “Why Is It So Hard to Be Rational?”, and then I followed up by reading Steven Pinker’s new book, Rationality: What It Is, Why It Seems Scarce, and Why It Matters. I'll get into that in part 2.

