Why Conspiracy Theories Work

Why Do We Believe?

These days, it seems that many people simply can't be convinced. Their beliefs can't be changed, their minds can't be altered. They are utterly convinced they are right, and nothing can sway them from that conviction. Why does this happen? It turns out that we can show that if someone holds certain types of beliefs, no amount of contrary evidence can change their mind. Furthermore, in certain cases, contrary data will actually convince them that they are more right, instead of causing them to question their beliefs. In particular, if a person believes in a theory that perfectly explains a situation, like a conspiracy theory, evidence that seems to show something else will actually make them more sure of their conspiracy.1

This post is the first of two parts. In this part, I will discuss the concepts without math, so it should be accessible to everyone; if you don't understand something, the problem is me, not you. I will be using a Bayesian perspective, but don't worry if you're not familiar with it: I will explain the concepts as we go. Bayesian statistics does a remarkably good job of describing how we form beliefs, so we can use it to show what's going on, and the concepts are surprisingly simple. In Part 2, I will work through the math of the examples and give a slightly deeper introduction to Bayesian statistics.

TLDR: Using Bayesian statistics, we can show why people who hold beliefs too strongly will ignore all facts and evidence.

Prior Belief

Let’s start with a simple thought experiment: how does the human brain decide what to believe? Your friend Gary is an average human. He doesn’t know everything, but he thinks he knows more than he does. He happens to be passionate about bowling. He is an average bowler, but he’s always looking for ways to improve. One day he is watching his favorite bowling YouTube channel, and he sees a video that claims to reveal the key to bowling: the perfect technique.

“AAAHA,” thinks Gary. “This is the way! I can use this technique to throw a strike every time! I’m going to be the Sultan of Strikes.”

Excited, he goes to the bowling alley, and starts throwing. Normally, he throws a strike about 1/10 of the time. But on this day, his first throw is a strike. Eureka, it works! He is now even more sure that he has found the key to eternal bowling stardom.

You, as his friend, are with him at the bowling alley, but are a little more skeptical. Gary didn’t just become the Ace of the Alley overnight, right? So let’s ask: how many strikes will Gary have to throw to convince you that he has found bowling bliss?

Moderate skepticism

The answer depends on your initial skepticism. In other words, what is your prior belief? Let’s say you are moderately skeptical, and you think there’s only a 10% chance that Gary is right. You might then want to see 3 strikes in a row before making up your mind. Recall that Gary normally throws a strike only 1/10 of the time, so 3 strikes in a row would have a probability of 1/1000 if nothing had changed. That’s pretty unlikely, and perhaps enough to overcome your moderate skepticism.

For the math, see Part 2.
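If you'd like a preview, here is a minimal Python sketch of the update, assuming (as in the story) that average Gary strikes 1/10 of the time and that his claimed new technique means a strike every time:

```python
# Minimal sketch of a Bayesian update for Gary's claim.
# Assumed model: "improved Gary" always strikes (p = 1.0),
# "average Gary" strikes 10% of the time (p = 0.1).

def posterior(prior, n_strikes, p_avg=0.1, p_improved=1.0):
    """Probability that Gary really improved, after n_strikes strikes in a row."""
    like_improved = p_improved ** n_strikes  # P(data | improved)
    like_avg = p_avg ** n_strikes            # P(data | still average)
    numer = prior * like_improved
    return numer / (numer + (1 - prior) * like_avg)

print(posterior(prior=0.10, n_strikes=3))  # ~0.991: enough to change your mind
```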

Extreme skepticism

On the other hand, maybe you are super skeptical and think there’s a 1 in a million chance he’s telling the truth. You might then need to see 10 strikes in a row, which would only happen 1 in ten billion times if Gary had not improved. In Bayesian statistics, this initial belief is called a prior. The prior encodes your previous knowledge: given your life experience so far, how likely is it that Gary is correct?
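Here is the same update written in terms of odds, a sketch using the numbers from the text:

```python
# Same update, written as odds: posterior odds = prior odds * Bayes factor.
prior = 1e-6                    # 1-in-a-million chance Gary is right
bayes_factor = 1.0 / 0.1**10    # 10 strikes: certain if improved, 1-in-ten-billion if not
posterior_odds = (prior / (1 - prior)) * bayes_factor
print(posterior_odds / (1 + posterior_odds))  # ~0.9999: even extreme skepticism gives way
```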

Priors

There’s strong evidence that this Bayesian way of thinking accurately describes how our brains work. When we enter a situation, we have some initial belief or skepticism. Then we update that belief with the data we see in the world, forming a new belief. We apply that new belief in the next situation we encounter, and so on.
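That updating loop is easy to see in a sketch: each throw updates your belief, and the updated belief becomes the prior for the next throw (same assumed numbers as before):

```python
# Sequential updating: yesterday's posterior becomes today's prior.
# Assumed numbers: 10% initial belief, strike rates of 0.1 (average) and 1.0 (improved).
belief = 0.10
for throw in range(3):                              # three strikes in a row
    numer = belief * 1.0                            # P(strike | improved)
    belief = numer / (numer + (1 - belief) * 0.1)   # P(strike | average)
    print(f"after strike {throw + 1}: belief = {belief:.3f}")
# Prints 0.526, 0.917, 0.991 -- the same answer as updating on all three throws at once.
```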

The prior is a key difference between Bayesian and frequentist statistics. The Bayesian approach lets you take your knowledge into account: Gary isn’t a very good bowler, and you are skeptical that he could have improved that much overnight. In contrast, classical, frequentist statistics treats each situation in isolation, without using your prior knowledge. The frequentist model doesn’t match how we live our lives: how would we ever make a decision if we couldn’t draw on our previous knowledge and experience? I’d argue that Bayesian statistics is the right way to view statistics in most situations, including in science. I’ll discuss this important point in a future post.

We will go over these examples in more detail in Part 2, and quantify how much evidence you should need to see to be convinced, given your prior.

Beliefs that never change

So, what would happen if your prior belief were 0? It turns out that a prior of 0 (you believe there’s a 0% chance of something being true) is a really interesting case, because it means you are never going to change your mind. You have a fixed belief. Gary could throw 100 strikes in a row, and you wouldn’t be convinced. This is intuitively obvious: if your mind is made up and you have no doubts at all, nothing is going to change it. Evidence is irrelevant.

Similarly, a prior of 1 (a 100% chance, or infinite odds) would mean that you believe without a shred of doubt that Gary is right. So, what would happen if Gary then did not throw a strike? If you truly have a prior of 1, the data is irrelevant, and you would still believe that he is the Ace of the Alley. To account for a miss, people might start inventing all sorts of alternate hypotheses: the pins are glued to the floor, Gary is just having an off day, and so on. Or maybe you would simply refuse to believe that you actually saw the Great Gary miss. This is analogous to any number of conspiracy theories. For instance, flat-earthers claim that NASA photos of Earth from space are faked, and will invent new, convoluted explanations for any other seemingly contradictory data.

Priors of 0 and 1 are both dangerous places to be, because you are ignoring the evidence in front of you. People do this in gambling, the stock market, politics, and elsewhere all the time. My plea to you is: don’t be there. Always be willing to update your belief based on the evidence in front of you, no matter how weird or painful that update might be.

For the math, see Part 2.
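As a preview, here is a tiny sketch showing that priors of 0 and 1 are fixed points of the update: no evidence, no matter how lopsided (the likelihood numbers below are deliberately extreme), can move them.

```python
# Priors of 0 and 1 are fixed points of Bayes' rule.
def update(prior, like_h, like_not_h):
    """One Bayesian update: P(H | data) given P(data | H) and P(data | not H)."""
    numer = prior * like_h
    return numer / (numer + (1 - prior) * like_not_h)

print(update(0.0, like_h=1.0, like_not_h=1e-100))  # 0.0 -- overwhelming evidence, no movement
print(update(1.0, like_h=1e-100, like_not_h=1.0))  # 1.0 -- overwhelming evidence, no movement
```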

Conspiracy theories

We’ve considered the case where you simply don’t believe your friend (prior belief of 0), and how having such a fixed belief can be dangerous. But how does this relate to conspiracy theories? It turns out that what you believe initially, and how well that belief explains the data, can prevent you from seeing alternatives.

Sometimes the theories on the table are insufficient, and we have to come up with an alternative. Recall that we were comparing two theories: A) Gary is still Garden-variety Gary, and B) Gary is now the Bowling Boss. Now, let’s say Gary throws 10 strikes in a row. If Gary were still an average bowler, 10 strikes would be astronomically unlikely (1 in ten billion, to be exact). So the Bowling Boss theory overwhelmingly explains the data compared to the ordinary Gary theory. You consider yourself a reasonable person, so it seems you now have to discard the ordinary Gary theory. However, common sense dictates that there is no way Gary is now the Bey of Bowling. So it’s time to consider a third hypothesis. Maybe Gary is… gasp… cheating!

Now we have a fascinating situation, because we have two theories that perfectly explain the data. The first theory is that Gary is the Sultan of Strikes, and only throws strikes. Since he has only thrown strikes so far, the theory explains the data perfectly. Our second theory is that he is cheating. Of course, if he’s cheating, he can produce any result he wants, so the cheating theory also explains the data perfectly. Since both theories explain the data equally well, they cancel out, and all you are left with is your prior belief. Thus, if you initially believed Gary was cheating more than you believed he became the Ace of the Alley, then more strikes are not going to convince you otherwise. I’ll show this mathematically in Part 2.
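Here’s what that cancellation looks like in a sketch (the 2-to-1 prior odds are made up for illustration):

```python
# When two theories explain the data equally well, only the prior survives.
prior_odds = 2.0                 # assumed: "cheating" twice as likely as "Sultan of Strikes"
n_strikes = 10
like_sultan = 1.0 ** n_strikes   # P(10 strikes | only throws strikes) = 1
like_cheat = 1.0                 # P(10 strikes | cheating) = 1, cheating fits anything
posterior_odds = prior_odds * (like_cheat / like_sultan)
print(posterior_odds)  # 2.0 -- exactly the prior odds, no matter how many strikes he throws
```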

At this point you might find yourself thinking that the more strikes he throws, the MORE convinced you would be that he’s cheating. It turns out Bayesian statistics can explain that too.

The data is a conspiracy

Let’s change the numbers a bit, and say that Gary knows he’s not perfect, so he only claims to be able to throw strikes 90% of the time. He then throws 9 strikes out of 10, perfectly in line with his claim. However, you still think he’s cheating. This setup gives us a really interesting result, because more data will now actually convince you, not that he’s the Sultan of Strikes, but that he’s cheating!

Let’s say Gary throws another 9 strikes and 1 miss. We now have 18 strikes and 2 misses, which is overwhelming evidence in favor of his theory when compared with the ordinary Gary theory. However, we still have some leftover doubt: since we were skeptical at the beginning, no amount of evidence will ever make us completely sure. The cheating theory, on the other hand, explains the data perfectly. There’s no doubt there. It turns out that as you gather more data, the cheating theory keeps pulling ahead: every throw fits the cheating theory at least as well as Gary’s theory, so the balance tilts further toward cheating. Thus, more strikes will actually make you more sure that he is cheating. Think about it intuitively: if you suspected Gary of cheating, you’d probably be pretty convinced he’s hoodwinking you after 10 throws, and even more sure after 20. The evidence would not convince you that Gary is telling the truth, even though the data appears to support his claim. For a more rigorous treatment, see the example in Part 2.
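Here is a rough sketch of the effect, comparing the likelihood of the exact sequence of throws under Gary’s 90% claim with the cheating theory (which fits anything):

```python
# More data makes the cheating theory look better, not worse.
# Assumed model: Gary claims a 90% strike rate; cheating explains any outcome.
def cheating_vs_claim(strikes, misses, p_claim=0.9):
    """Bayes factor favoring cheating over Gary's 90% claim."""
    like_claim = p_claim**strikes * (1 - p_claim)**misses  # P(this sequence | 90% claim)
    like_cheat = 1.0                                       # P(this sequence | cheating)
    return like_cheat / like_claim

print(cheating_vs_claim(9, 1))    # ~26: cheating already favored after 10 throws
print(cheating_vs_claim(18, 2))   # ~666: twice the data, and cheating looks far stronger
```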

How did this happen? It turns out that we are comparing apples to oranges. We have two hypotheses: one that is probability-based (Gary is now Lord of the Lane), and one that always explains the data perfectly (Gary is cheating). We are then trying to use probabilistic data to weigh both theories, but that data only constrains one of them. From Gary’s point of view, if he’s trying to convince you that he is no longer average, throwing more strikes makes sense: the more strikes he throws, the more likely it is that he has vastly improved, and the less likely it is that he is still average. However, if you actually think he’s cheating, more strikes will harm his case, because they will never convince you that he’s not cheating. He needs to convince you that he’s being truthful through some other means.

The takeaway here is: when trying to convince someone, make sure you understand whether their belief depends on data or not. If their belief does not depend on evidence, be very careful about presenting additional evidence to try to refute it. Instead, try to help them examine the actual belief that you are trying to change, and the assumptions that underlie it.

As a real-world example, let’s think about a conspiracy theory. If someone believes in a conspiracy theory, they often use it to construct elaborate schemes to explain the world. In other words, they use their theory to perfectly explain what happens, and they are perfectly convinced that it is true. This is a combination of both of our dangerous situations. Their prior belief in their own theory is 1, and in other theories is 0, so they will ignore all evidence as irrelevant. And if you do somehow get them to consider evidence, and that evidence is probability-based, it will actually convince them that they are more correct! You aren’t going to convince this person with data. We’ve all been in this situation, where we try to present the facts, and the facts get ignored.

So, how do you convince a conspiracist that they are wrong? You have to challenge the beliefs, assumptions, and feelings that underlie their theory. Questions like “why do you believe that,” “what assumptions are you making,” and “what would convince you” can be good starting points to get a person to think outside their box. Once you get them to actually start thinking, rather than reacting, they might be more open to considering your points.

Conclusion

In this post, we learned about prior beliefs, theories that do and do not depend on data, and when data become irrelevant to an argument. I hope this post will help you recognize situations where you or others hold unshakeable beliefs. In my humble opinion, there are far too many of those beliefs in our world. Our beliefs about most things in our lives are pretty fixed: relationships, how the world works, morality, religion, and politics, to name a few. Many of those beliefs may be based on false or misleading assumptions. I think the world would be a happier, saner place if we relied a little less on our preconceptions, and a little more on our observations of reality. So, if you enjoyed this post, try examining some of your own beliefs, and ask yourself: am I really willing to change them based on what I observe?

Continue to Part 2.

Footnotes

  1. This post was inspired by an example in Bayesian Statistics the Fun Way by Will Kurt. I highly recommend this book if you want a gentle but more in-depth introduction to Bayesian statistics.