
The Power of Echo Chambers

   

In a political culture divided not simply by ideology but by basic judgments about reality, it should be no wonder that echo chambers–homogeneous clusters, in the research parlance–are commonplace.

A couple of papers were brought to my attention, both written by (more or less) the same group of Italian researchers. They are:

  * [Debunking in a World of Tribes](http://arxiv.org/pdf/1510.04267v1.pdf)
  * [The spreading of misinformation online](http://www.pnas.org/content/113/3/554.full.pdf)

I will quote from both as needed. They have similar premises and conclusions, which can be summed up like this: an individual’s worldview is reinforced primarily through confirmation bias. Information that fits the existing worldview is more likely to be kept, while information that runs counter to it is more likely to be discarded. I say “more likely” rather than something more definitive because, of course, people can and do change their minds when confronted with the right input at the right time under the right circumstances, so it would be misleading to generalize and say that no one ever changes their mind.

The studies evaluated two types of social media posts: conspiracy theories and scientific articles. The researchers intended to evaluate how people who read conspiracy theory or science-related pages interact with those posts. Are the posts shared? How quickly and widely? How persistent is a particular post in terms of its continued circulation? And do efforts to debunk conspiracy theories gain any traction within audiences who consume such theories?

First, it helps to describe what distinguishes conspiracy theories from scientific posts. From the misinformation study:

On the one hand, conspiracy theories simplify causation, reduce the complexity of reality, and are formulated in a way that is able to tolerate a certain level of uncertainty (19–21). On the other hand, scientific information disseminates scientific advances and exhibits the process of scientific thinking. Notice that we do not focus on the quality of the information but rather on the possibility of verification. Indeed, the main difference between the two is content verifiability. The generators of scientific information and their data, methods, and outcomes are readily identifiable and available. The origins of conspiracy theories are often unknown and their content is strongly disengaged from mainstream society and sharply divergent from recommended practices (22), e.g., the belief that vaccines cause autism.

Concise descriptions that illustrate the key differences between them, I think.

What was found with regard to the spread of information within these two distinct types of communities is that posts spread at essentially the same rate in both, peaking about two hours after initial posting, then sharply trailing off and virtually disappearing in less than three days. At the least, it’s a good indicator of how transient social media posts are–most links reach their fastest rate of circulation in a matter of hours, and almost completely stop being circulated after a few days. Of course, that only applies to social media sharing, and says nothing about how likely those pages are to be reached through search engine results in the future. (In fact, I would love to know what proportion of a given URL’s lifetime hits come from inbound search engine clicks vs. social media shares. There’s got to be a paper in that.)
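To make that lifetime pattern concrete, here is a minimal toy sketch, not taken from either paper: a made-up sharing-rate curve whose `peak_hours` parameter is simply an assumption chosen to mimic the “peaks within hours, gone within days” shape described above.

```python
import math

def share_rate(hours, peak_hours=2.0):
    """Toy sharing-rate curve (illustrative only, not from the papers).

    Ramps up quickly, peaks at `peak_hours`, then decays exponentially,
    roughly mimicking the lifetime pattern described in the post.
    """
    if hours <= 0:
        return 0.0
    x = hours / peak_hours
    return x * math.exp(1 - x)  # maximum of 1.0 at hours == peak_hours

# Sampling the curve shows the shape: a fast peak, near zero within a few days.
for h in (0.5, 1, 2, 6, 12, 24, 48, 72):
    print(f"{h:5.1f} h  relative sharing rate = {share_rate(h):.4f}")
```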

The second major component measured in the misinformation paper involved cascade dynamics. There is substantial research into the behaviors of various networks, and one of the features shared by many types of networks is the possibility of a cascade. If you aren’t familiar with cascades in this context (I wasn’t, myself), the idea is straightforward: a cascade occurs when nodes in a network interact in such a way that a particular influence on the network grows uncontrollably. This sounds overly abstract, but I can give you an everyday example of a cascade: a viral social media post. It is rare that a post goes viral intentionally–or rather, plenty of people may wish their posts to go viral, but few actually pull it off. But cascades are powerful, as they tend to have reach far beyond their original intended audience, and tend to leave an outsize impression on the part of the network in which they originated.
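To make the idea of a cascade concrete, here is a minimal sketch assuming a simple independent-cascade-style toy model on a random graph. This is my own illustration, not the papers’ methodology (they measure cascades empirically): every node that shares a post gives each of its neighbors one chance to share it as well.

```python
import random

def simulate_cascade(n_nodes=500, avg_degree=8, share_prob=0.15, seed=0):
    """Toy independent-cascade model (illustrative only, not from the papers).

    A post starts at one random node; every node that shares it gives each
    neighbor a single chance (share_prob) to share it too. Returns the
    number of nodes the post ultimately reaches.
    """
    rng = random.Random(seed)

    # Build a simple Erdos-Renyi-style random graph as adjacency lists.
    neighbors = [[] for _ in range(n_nodes)]
    p_edge = avg_degree / n_nodes
    for u in range(n_nodes):
        for v in range(u + 1, n_nodes):
            if rng.random() < p_edge:
                neighbors[u].append(v)
                neighbors[v].append(u)

    # Spread the post breadth-first from a single random starting node.
    start = rng.randrange(n_nodes)
    shared = {start}
    frontier = [start]
    while frontier:
        next_frontier = []
        for node in frontier:
            for nb in neighbors[node]:
                if nb not in shared and rng.random() < share_prob:
                    shared.add(nb)
                    next_frontier.append(nb)
        frontier = next_frontier
    return len(shared)

# Below roughly 1 / avg_degree, most posts fizzle; above it, some "go viral."
for p in (0.05, 0.10, 0.15, 0.25):
    sizes = sorted(simulate_cascade(share_prob=p, seed=s) for s in range(10))
    print(f"share_prob={p:.2f}  cascade sizes: {sizes}")
```

The point of the toy model is only the qualitative behavior: below a critical sharing probability a post fizzles out quickly, while above it a single post can reach far beyond its original audience, which is the dynamic the misinformation paper measures for real posts.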

As for cascades, science posts don’t produce much of a cascade effect, while conspiracy posts do. Science posts are initially treated by their original audience in much the same way as conspiracy posts, but they stop being shared soon after their original posting. Conspiracy posts, on the other hand, just keep on spreading, albeit more slowly.

The gist on misinformation:

Users tend to aggregate in communities of interest, which causes reinforcement and fosters confirmation bias, segregation, and polarization. This comes at the expense of the quality of the information and leads to proliferation of biased narratives fomented by unsubstantiated rumors, mistrust, and paranoia.

It is noted that this transpires essentially the same way in both conspiracy-oriented and science-oriented communities, and only the cascade effects differ substantially. In other words, science posts come and go, but once a conspiracy theory begins to spread, it is nearly impossible to get rid of–it will keep being shared indefinitely.

But what about the other study? What about debunking? Does it work?

In a word: no.

In two words: hell no.

What was the study trying to accomplish?

The aim of this work is to test the effectiveness of debunking campaigns in online social media. To do this, we start our analysis by statistically characterizing users’ attention patterns with respect to unverified content and we use scientific news as a control. We then measure the effects of interaction by usual consumers of unsubstantiated claims with information aimed at correcting these false rumors – i.e., debunking posts.

Sounds reasonable enough. “Usual consumers of unsubstantiated claims” is an amusing longhand for “conspiracy theorists,” I’ll admit.

One of the interesting results, right off the bat, is that while science and conspiracy posts attract social media “likes” at more or less the same levels (relative to the number of users engaged), conspiracy posts get a lot more comments. One could speculate as to the reasons for this. My first suspicion would be that conspiracy theories easily invite users to debate, propose alternate explanations, or hurl invective at one another, while science-related posts likely generate a lot of nodding along and thus approval gestures in the form of likes, but not a lot in the way of commentary.

Even so, the researchers noted that the interaction patterns are very similar:

In summary, contents related to distinct narratives aggregate users into different communities. However, users attention patterns are similar in both communities in terms of interaction and attention to posts. Such a symmetry resembles the idea of echo chambers and filter bubbles – i.e., users interact only with information that conforms with their system of beliefs and ignore other perspectives and opposing information.

But here comes the fun part: what happens when you present material that debunks the prevailing narratives of an echo chamber?

A few interesting things happen. First of all, like and comment rates spike. When it comes to conspiracy debunking in particular, attitudes are highly polarized. Based on this study, a general sample of social media users is, on average, 18% in favor of a debunking post, 3% against, with an overwhelming 79% expressing a neutral attitude. Conspiracy theorists, on the other hand, are neutral only 52% of the time, favorable toward the debunking 12% of the time, but against it 36% of the time. This is obviously a huge difference. Perhaps even more interesting, it was found that conspiracy-oriented users not exposed to debunking material were actually more likely to eventually stop interacting with conspiracy-oriented posts. That is to say, being forced to confront evidence contrary to your worldview makes you less likely to give up that worldview than if you had been left to figure it out on your own.

From the paper’s conclusions:

Our findings shows that very few users of the conspiracy echo-chamber interact with debunking posts and that, when they do, their interaction often leads to increasing interest in conspiracy-like content. When users are confronted with new and untrusted opposing sources online, the interaction leads them to further commit to their own echo chamber. Users tend to aggregate in communities of interests which causes reinforcement and fosters confirmation bias, segregation, polarization, and partisan debates. This comes at the expense of the quality of the information and leads to proliferation of biased narratives fomented by false rumors, mistrust, and paranoia. Conspiracy related contents become popular because they tend to reduce the complexity of reality and convey general paranoia on specific objects and are more affordable by users. On our perspective the diffusion of bogus content is someway related to the increasing mistrust of people with respect to institutions, to the increasing level of functional illiteracy – i.e., the inability to understand information correctly– affecting western countries as well as the combined effect of confirmation bias at work on a enormous basin of information where the quality is poor.

For some time, it has been my practice not to bother debunking unsupported claims for the people who believe them. My suspicion was that it rarely worked, and at least according to the research here, that is true. However, it is still possible that observers of such exchanges, who may as yet be undecided on the matter at hand, could be influenced by the conduct and presentation of the conspiracist and the debunker. This means that, if you wish to address someone ensconced in such an echo-chamber narrative, it is likely preferable to frame your comments to sway uncommitted observers, who at least might be persuaded, rather than the conspiracist, who is bound only to become more entrenched.

For what it’s worth, the authors offer little in the way of solutions. Information content isn’t the issue–neither is gullibility. Rather, individuals resist changing their beliefs because those beliefs are tightly integrated with their identity. To have a belief attacked is to have one’s identity and sense of self attacked. This appears to be more pervasive among conspiracy-minded individuals, who may consider their consumption of conspiracy theories an important part of who they are. Attacking those beliefs head-on, then, is simply counterproductive.

If I were to suggest any particular course of action meant to influence conspiracy-minded individuals directly, it might be to start with more innocuous information, gradually construct an alternative worldview from a non-threatening premise, and work inward from there. Aggression and blunt discrediting of false narratives cause defensive reactions, but it seems at least possible that friendlier, gentler overtures that slowly chip away at a conspiracist mindset would be more effective in the long run.

Of course, I can imagine a conspiracy fanatic reading this and seeing nothing but plans to turn them into yet one more of the unwashed sheeple masses.

Photo by jacilluch