People often seem confused that others ignore strong and obvious evidence. How can they not see that so-and-so is a liar, or that such-and-such policy doesn't work? Can't they see that personality flaw? How could they be so in denial?
Often, these epistemic failures are blamed on tribalism (e.g., "beliefs are not about truth, they are just ways we show what side we're on") or on personal failures (e.g., "He is incapable of taking criticism"). But there may well be a more fundamental and more useful explanation than these.
So, why do our human brains so often reject relevant evidence? Wouldn't it be "better" if we just incorporated all evidence rather than rejecting it?
What does "better" mean?
The core of the issue is that seeking truth is just one way of obtaining benefit from our beliefs. Our brains use several different approaches when evaluating evidence. Below, we take a look at three core things that our brains might seek to maximize – each of which is "better" in a different sense, and each of which dominates some of the time:
1️⃣ The accuracy of our beliefs (e.g., via approximate Bayesian updating)
It intuitively makes sense that our brains are wired to take evidence into account properly (i.e., in an accuracy-maximizing way), since seeing the world accurately would have helped our ancestors survive. If your ancestors thought there was no tiger by the watering hole when there actually was, they were prone to being attacked by tigers. If they thought a berry was safe to eat when it wasn't, they were prone to being poisoned. Accuracy provides a real survival advantage. Some research in cognitive science and philosophy supports the “Bayesian brain” theory: the idea that brains are hypothesis-testing mechanisms that update their internal models of the world using evidence from sensory data, in order to maximize the accuracy of their predictions.
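As a concrete illustration of what such updating looks like, here is a minimal sketch of Bayes' rule applied to the tiger example above. This is a toy model only: the probabilities are invented for illustration, and nothing here is a claim about how brains actually implement the computation.

```python
# Toy illustration of Bayesian updating: revising the belief that a tiger
# is near the watering hole after hearing a rustle in the grass.
# All probabilities are invented for illustration.

def bayes_update(prior: float, p_evidence_given_h: float, p_evidence_given_not_h: float) -> float:
    """Return P(hypothesis | evidence) via Bayes' rule."""
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

prior_tiger = 0.05            # base rate: tigers are rarely at the watering hole
p_rustle_if_tiger = 0.80      # a tiger usually makes the grass rustle
p_rustle_if_no_tiger = 0.10   # rustling sometimes happens anyway (wind, birds)

posterior = bayes_update(prior_tiger, p_rustle_if_tiger, p_rustle_if_no_tiger)
print(f"P(tiger | rustle) = {posterior:.2f}")  # ~0.30: still unlikely, but much more likely than before
```

The point is simply that an accuracy-maximizing system moves its belief in proportion to how much more likely the evidence is under one hypothesis than the other.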
2️⃣ The pleasantness of our beliefs (by accepting evidence for things it feels good to believe and rejecting or re-interpreting evidence for things it feels bad or painful to believe)
It also intuitively makes sense that our brains are motivated by pleasantness, aiming to seek pleasure and avoid pain. Our dopamine system plays a key role in processing rewards and reinforcing behaviors that lead to more of them. These reward signals evolved to aid the propagation of our genes, which is a common explanation for why safe, calorically dense foods are tasty and why sex is pleasurable. This is also the basis of operant conditioning, a theory dating back to the 1930s, widely attributed to psychologist B.F. Skinner. When we get a reward, we (and non-human animals) tend to do the behavior preceding the reward more often, and when we are punished, we do the behavior less often. Our thoughts and beliefs themselves can provide reward and punishment. It feels good to believe some things, and bad to believe others. This can come into conflict with seeing the world as it is.
3️⃣ The long-term benefit of our beliefs (by accepting the evidence if our brain predicts it will be useful for us to believe, and rejecting it if it is predicted to be costly to believe)
The third thing our brains seek to maximize is long-term benefit. By what mechanism does our brain take into account the future benefit (or cost) of a belief? Given that (as mentioned above) our brains evaluate evidence with the aims of having both accurate and pleasant beliefs, it makes sense that our evidence-gathering process would also be mediated by the positive and negative emotions generated when we predict how believing certain evidence will affect us in the future.
If, for instance, you're considering accepting evidence that would cause your family to disown you (for example, because you have evidence that your father did something terrible, but your family is likely to side with your father), your brain is likely to predict the social isolation implied by accepting the evidence. When you're considering this evidence, you may immediately experience a lot of anxiety. As soon as you find a way to dismiss or ignore the evidence, this anxiety goes away, rewarding you for not believing it. As we'll see later, there are cases where these two forces (pleasantness and long-term benefit) even point in opposite directions, because a belief feels pleasant but has negative long-term consequences (or vice-versa).
Conflicting objectives
Often, there is no tension between the three things that our brains are (implicitly) trying to maximize (accuracy, pleasantness, and future benefit). For instance, suppose you received a credible email saying you were accepted into an educational program you applied for.
Believing that you were actually accepted (as opposed to disbelieving it or questioning it) is likely to be the most accurate belief, the most pleasant (or pain-avoidant) belief, and the most (long-term) beneficial belief.
But these three approaches sometimes point in different directions. In such cases, it matters which we prioritize. Consider these situations where one of the objectives tugs in the opposite direction to the other two:
Imagine you're on your way to a wedding and worried that you're running late. If you get evidence that you've taken a wrong turn, accepting it will maximize both accuracy and long-term benefit. Still, doing so would immediately feel bad, since you'd realize you're going to be even later than you thought.
If someone pays you a compliment but there is significant evidence that they weren't being sincere, it may feel good to believe the compliment is genuine, and it may be helpful to believe the compliment (e.g., by slightly improving your low self-esteem), but rejecting the evidence of insincerity is not truth-seeking.
If you are in a very happy long-term monogamous relationship, but you secretly have a small crush on someone else, and you get some evidence that this person has a crush on you too, it is accuracy-seeking and may be pleasurable to accept this evidence - but it may be to your long-term detriment to do so (both you and your partner may be better off if you don't think this other person has feelings for you).
Sometimes, just one of these objectives may be enough to dominate the others:
Accuracy winning: Someone gets strong evidence from their doctor that they have an incurable genetic disorder that may negatively impact them in many years (when they are much older). It feels bad to hear that, and they don't anticipate benefits from knowing, yet they believe the doctor.
Long-term benefit winning: You get evidence that you might be excelling in your math class, but you know yourself well enough to know that if you truly believed that you're excelling, you'd start to slack off and ultimately do poorly - so, you reject the idea that you're doing well, despite the fact it would be pleasurable (and accurate) to believe it.
Pleasantness (or avoidance of pain) winning: Someone has a serious flaw, and they could improve it if they worked at it, but it's so painful for them to hear it that they reject all evidence that they have this flaw.
But when our brains are being tugged in different directions, one has to win out in the end! What determines this? One hypothesis is that when these forces aren't all pointed in the same direction, it is the strength of each of them that determines which direction wins out – that is, whether you accept the evidence or reject it:
The stronger the evidence is, the stronger the pull towards accepting it (on accuracy grounds). For instance, it's easy to dismiss mild evidence that a close friend secretly hates you, but if you overhear a conversation in which they say they hate you, that's hard to dismiss, even if it's extremely painful to believe.
You're more likely to reject (or distort) evidence in proportion to how painful it would be to believe it (or how pleasurable it would be not to believe it).
The more your brain expects to avoid negative long-term consequences (or expects to get long-term benefits) from rejecting or distorting the evidence, the more likely you are to reject (or distort) that evidence.
There may be individual differences here. For example, some people intrinsically value truth-seeking, and are therefore more likely to prioritize the accuracy of their beliefs. On the other hand, some may be particularly prone to emotional avoidance, which may cause them to disproportionately seek out pleasant beliefs.
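One way to picture this "strength of forces" hypothesis is as a weighted sum of pulls toward accepting or rejecting a piece of evidence. The sketch below is purely illustrative: the function, weights, and numbers are our own inventions rather than a model of actual neural mechanisms, and the weights stand in for the individual differences just mentioned.

```python
# Toy sketch of the "strength of forces" hypothesis: whether evidence is
# accepted depends on the net pull of accuracy, pleasantness, and predicted
# long-term benefit. Structure and numbers are purely illustrative.

from dataclasses import dataclass

@dataclass
class Pulls:
    evidence_strength: float       # 0..1, how strong the evidence is
    pleasantness_of_belief: float  # -1..1, how good or bad it feels to accept it
    long_term_benefit: float       # -1..1, predicted future payoff of accepting it

def accepts_evidence(p: Pulls, w_accuracy: float = 1.0,
                     w_pleasure: float = 1.0, w_benefit: float = 1.0) -> bool:
    """Accept the evidence if the weighted pulls toward accepting it outweigh
    the pulls toward rejecting it. The weights stand in for individual
    differences (e.g., someone who intrinsically values truth-seeking would
    have a higher w_accuracy; someone prone to emotional avoidance, a higher
    w_pleasure)."""
    net = (w_accuracy * p.evidence_strength
           + w_pleasure * p.pleasantness_of_belief
           + w_benefit * p.long_term_benefit)
    return net > 0

# Overhearing a friend say they hate you: strong evidence, painful to accept,
# but knowing the truth may help you a little in the long run.
overheard = Pulls(evidence_strength=0.9, pleasantness_of_belief=-0.7, long_term_benefit=0.3)
print(accepts_evidence(overheard))                  # True: the evidence is too strong to dismiss
print(accepts_evidence(overheard, w_pleasure=3.0))  # False: for a highly avoidant person, pain wins
```

The only point of the sketch is that the same piece of evidence can be accepted or rejected depending on how strong each pull is and how heavily a given person weights it.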
What does this all mean?
So, if our brains take different approaches to collecting evidence, and if the approach that ultimately wins out depends on the individual and on the situation – what implications does all this have?
Well, it may help to explain why people so often seem not to be truth-seeking on topics like politics, even when they get strong evidence that their position is wrong: their brains may predict a long-term cost from accepting the evidence. For instance, if your friends are highly political and strongly support one politician, but you get evidence that the politician is a liar, your brain may accurately predict that coming to believe the politician is not a good leader would put you at odds with your political friends, with real social consequences. So you may dismiss that evidence ("it's just lies from the opposing side") or re-interpret it ("well, all politicians lie, so it doesn't mean anything"). Research has pointed to the influence of social pressure on political beliefs, and to the use of motivated reasoning to maintain them, which is a key driver of polarization.
Another phenomenon these ideas may help explain is that people who care a lot about the well-being of animals, and who agree that animals suffer in factory farms, often jump to rationalizations when it's pointed out that their food purchasing behavior is likely causing additional animals to suffer in factory farms. It is instantly painful to believe this, and their brain may be predicting that the belief comes at a substantial future cost to them. So it isn't surprising that they would either dismiss this information (e.g., "I don't believe factory farms hurt the animals") or rationalize away their behavior to make it okay (e.g., "I need to eat meat"). Psychological research has identified the “meat paradox,” in which people love and care about animals but continue to eat them. One mechanism for reconciling these two things is to diminish the moral status of animals, and so reduce guilt about consuming them. People may also systematically underrate the mental capacities of the animals they consume.
At first, it may seem bizarre that our human brains so often reject valid evidence. But it makes sense if we see the brain as optimizing for more than just accuracy: it is also influenced by pleasure and pain, and by predicted long-term benefits and costs. Sometimes our brains prioritize accuracy. Other times, considerations of long-term benefit or cost win. Still other times, pleasantness wins. Paying attention to each of these forces in your own mind can help you better understand your own decision-making process.
When you find yourself rejecting information, it can be useful to reflect on why. Try asking yourself: “Am I rejecting this information out of a desire for more accurate beliefs, or because rejecting it will feel pleasant or come with longer-term benefits?” Whatever the answer is, you can then ask yourself whether that motivation aligns with your values and goals.
By recognizing when pain avoidance or concern about future consequences is driving your resistance to certain information, you can make more conscious choices about whether to challenge these automatic responses and engage more deeply with uncomfortable truths.