
MISINFORMATION AND ITS CORRECTION

Sunday, 7th of December 2014

An excellent review of the published literature on this subject.

 

Excerpts appear below. The full text is at http://psi.sagepub.com/content/13/3/106.full.pdf+html?ijkey=FNCpLYuivUOHE&keytype=ref&siteid=sppsi

 


• Consider what gaps in people's mental event models are created by debunking, and fill them with an alternative explanation.

• Use repeated retractions to reduce the influence of misinformation, but note that the risk of a backfire effect increases when the original misinformation is repeated in retractions and thereby rendered more familiar.

 

• To avoid making people more familiar with misinformation (and thus risking a familiarity backfire effect), emphasize the facts you wish to communicate rather than the myth.

 

• Provide an explicit warning before mentioning a myth, to ensure that people are cognitively on guard and less likely to be influenced by the misinformation.

 

• Ensure that your material is simple and brief. Use clear language and graphs where appropriate. If the myth is simpler and more compelling than your debunking, it will be cognitively more attractive, and you will risk an overkill backfire effect.

 

• Consider whether your content may be threatening to the worldview and values of your audience. If so, you risk a worldview backfire effect, which is strongest among those with firmly held beliefs. The most receptive people will be those who are not strongly fixed in their views.

 

• If you must present evidence that is threatening to the audience's worldview, you may be able to reduce the worldview backfire effect by presenting your content in a worldview-affirming manner (e.g., by focusing on opportunities and potential benefits rather than risks and threats) and/or by encouraging self-affirmation.

• You can also circumvent the role of the audience's worldview by focusing on behavioral techniques, such as the design of choice architectures, rather than overt debiasing.

Future Directions

Our survey of the literature has enabled us to provide a range of recommendations and draw some reasonably strong conclusions. However, our survey has also identified a range of issues about which relatively little is known, and which deserve future research attention. We wish to highlight three such issues in particular—namely, the roles played by emotion, individual differences (e.g., race or culture), and social networks in misinformation effects.

Concerning emotion, we have discussed how misinformation effects arise independently of the emotiveness of the information (Ecker, Lewandowsky, & Apai, 2011). But we have also noted that the likelihood that people will pass on information depends strongly on the likelihood of its eliciting an emotional response in the recipient, rather than on its truth value (e.g., K. Peters et al., 2009), which means that the emotiveness of misinformation may have an indirect effect on the degree to which it spreads (and persists). Moreover, the effects of worldview that we reviewed earlier in this article provide an obvious departure point for future work on the link between emotion and misinformation effects, because challenges to people's worldviews tend to elicit highly emotional defense mechanisms (cf. E. M. Peters, Burraston, & Mertz, 2004).

Concerning individual differences, research has already touched on how responses to the same information differ depending on people's personal worldviews or ideology (Ecker et al., 2012; Kahan, 2010), but remarkably little is known about the effects of other individual-difference variables. Intelligence, memory capacity, memory-updating abilities, and tolerance for ambiguity are just a few factors that could potentially mediate misinformation effects.

Finally, concerning social networks, we have already pointed to the literature on the creation of cyber-ghettos (e.g., T. J. Johnson et al., 2009), but considerable research remains to be done to develop a full understanding of the processes of (mis-)information dissemination through complex social networks (cf. Eirinaki, Monga, & Sundaram, 2012; Scanfeld, Scanfeld, & Larson, 2010; Young, 2011) and of the ways in which these social networks facilitate the persistence of misinformation in selected segments of society.

 

Concluding Remarks: Psychosocial, Ethical, and Practical Implications

We conclude by discussing how misinformation effects can be reconciled with the notion of human rationality, before addressing some limitations and ethical considerations surrounding debiasing and pointing to an alternative behavioral approach for counteracting the effects of misinformation.

Thus far, we have reviewed copious evidence about people's inability to update their memories in light of corrective information and have shown how worldview can override fact and corrections can backfire. One might be tempted to conclude from those findings that people are somehow characteristically irrational, or cognitively "insufficient." We caution against that conclusion. Jern, Chang, and Kemp (2009) presented a model of belief polarization (which, as we noted earlier, is related to the continued influence of misinformation) that was instantiated within a Bayesian network. A Bayesian network captures causal relations among a set of variables: In a psychological context, it can capture the role of hidden psychological variables—for example, during belief updating. Instead of assuming that people consider the likelihood that a hypothesis is true only in light of the information presented, a Bayesian network accounts for the fact that people may rely on other "hidden" variables, such as the degree to which they trust an information source (e.g., peer-reviewed literature). Jern et al. (2009) showed that when these hidden variables are taken into account, Bayesian networks can capture behavior that at first glance might appear irrational—such as behavior in line with the backfire effects reviewed earlier. Although this research can only be considered suggestive at present, people's rejection of corrective information may arguably represent a normatively rational integration of prior biases with new information.
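
As a minimal illustration of the hidden-variable idea (this is a sketch, not Jern et al.'s actual model, and all probabilities are assumed purely for illustration), the Python code below jointly updates belief in a claim and trust in a source after the source denies the claim. Under these assumptions, a strongly committed believer updates little and instead loses trust in the source, while an assumed "adversarial source" likelihood can even produce a backfire-like increase in belief after a correction.

# Minimal illustrative sketch of the hidden-variable idea (not Jern et al.'s model).
# A reader jointly updates belief in a claim and trust in a source after the
# source denies the claim. All probabilities below are assumed for illustration.

def posterior_after_denial(p_claim, p_trust, lik):
    """Return (P(claim | denial), P(trust | denial)) via a joint Bayesian update.

    lik[(claim, trust)] = P(source denies the claim | claim, trust),
    with claim/trust coded as 1 (true/trusted) or 0 (false/distrusted).
    """
    joint = {}
    for claim in (0, 1):
        for trust in (0, 1):
            prior = (p_claim if claim else 1 - p_claim) * \
                    (p_trust if trust else 1 - p_trust)
            joint[(claim, trust)] = prior * lik[(claim, trust)]
    evidence = sum(joint.values())
    p_claim_post = (joint[(1, 0)] + joint[(1, 1)]) / evidence
    p_trust_post = (joint[(0, 1)] + joint[(1, 1)]) / evidence
    return p_claim_post, p_trust_post

# A trusted source mostly denies false claims; an untrusted source is assumed
# to deny claims half the time regardless of their truth.
LIK_NEUTRAL = {(1, 1): 0.1, (0, 1): 0.9, (1, 0): 0.5, (0, 0): 0.5}
# An untrusted source assumed to be adversarial: it mostly attacks true claims.
LIK_ADVERSARIAL = {(1, 1): 0.1, (0, 1): 0.9, (1, 0): 0.8, (0, 0): 0.2}

# The same correction moves a weak believer a lot but barely moves a strong
# believer, who instead lowers trust in the source.
print(posterior_after_denial(0.5, 0.5, LIK_NEUTRAL))      # ~ (0.30, 0.50)
print(posterior_after_denial(0.9, 0.5, LIK_NEUTRAL))      # ~ (0.79, 0.26)
# With low prior trust and the adversarial-source assumption, the denial can
# raise belief in the claim above its prior: a "rational" backfire effect.
print(posterior_after_denial(0.9, 0.3, LIK_ADVERSARIAL))  # ~ (0.93, 0.09)

In this toy setting, discounting or even rejecting a correction follows directly from Bayes' rule once prior trust in the source is low, which is the sense in which such behavior can be described as normatively rational.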

Concerning the limitations of debiasing, there are several ethical and practical issues to consider. First, the application of any debiasing technique raises important ethical questions:

While it is in the public interest to ensure that the population is well-informed, debiasing techniques can equally be used to further misinform people. Correcting misinformation is cognitively indistinguishable from misinforming people to replace their preexisting correct beliefs. It follows that it is important for the general public to have a basic understanding of misinformation effects: Widespread awareness of the fact that people may "throw mud" because they know it will "stick" is an important aspect of developing a healthy sense of public skepticism that will contribute to a well-informed populace.

Second, there are situations in which applying debiasing strategies is not advisable for reasons of efficiency. In our discussion of the worldview backfire effect, we argued that debiasing will be more effective for people who do not hold strong beliefs concerning the misinformation: In people who strongly believe in a piece of misinformation for ideological reasons, a retraction can in fact do more harm than good by ironically strengthening the misbelief. In such cases, particularly when the debiasing cannot be framed in a worldview-congruent manner, debiasing may not be a good strategy.

An alternative approach for dealing with pervasive misinformation is thus to ignore the misinformation altogether and seek more direct behavioral interventions. Behavioral economists have developed "nudging" techniques that can encourage people to make certain decisions over others, without preventing them from making a free choice (e.g., Thaler & Sunstein, 2008). For example, it no longer matters whether people are misinformed about climate science if, in response to "nudges" such as tax credits, they adopt ecologically friendly behaviors such as driving low-emission vehicles.

Despite suggestions that even these nudges can be rendered ineffective by people's worldviews (Costa & Kahn, 2010; Lapinski, Rimal, DeVries, & Lee, 2007), this approach has considerable promise.

 

Unlike debiasing techniques, behavioral interventions involve the explicit design of choice architectures to facilitate a desired outcome. For example, it has been shown that organ-donation rates in countries in which people have to "opt in" by explicitly stating their willingness to donate hover around 15–20%, compared to over 90% in countries in which people must "opt out" (E. J. Johnson & Goldstein, 2003). The fact that the design process for such choice architectures can be entirely transparent and subject to public and legislative scrutiny lessens any potential ethical concerns.

A further advantage of the nudging approach is that its effects are not tied to a specific delivery vehicle, which may fail to reach the target audience. Thus, whereas debiasing requires that the target audience receive the corrective information—a potentially daunting obstacle—the design of choice architectures automatically reaches any person who is making a relevant choice.

 

We therefore see three situations in which nudging seems particularly applicable. First, when behavior changes need to occur quickly and across entire populations in order to prevent negative consequences, nudging may be the strategy of choice (cf. the Montreal Protocol to rapidly phase out CFCs to protect the ozone layer; e.g., Gareau, 2010). Second, as discussed in the previous section, nudging may offer an alternative to debiasing when ideology is likely to prevent the success of debiasing strategies. Finally, nudging may be the only viable option in situations that involve organized efforts to deliberately misinform people—that is, when the dissemination of misinformation is programmatic (a case we reviewed at the outset of this article, using the examples of misinformation about tobacco smoke and climate change).

 

In this context, the persistence with which vested interests can pursue misinformation is notable: After decades of denying the link between smoking and lung cancer, the tobacco industry's hired experts have opened a new line of testimony by arguing in court that even after the U.S. Surgeon General's conclusion in 1964 that tobacco was a major cause of death and injury, there was still "room for responsible disagreement" (Proctor, 2004). Arguably, this position is intended to replace one set of well-orchestrated misinformation—that tobacco does not kill—with another convenient myth—that the tobacco industry did not know it. Spreading doubts by referring to the uncertainty of scientific conclusions—whether about smoking, climate change, or GM foods—is a very popular strategy for misinforming the populace (Oreskes & Conway, 2010).

 

For laypeople, the magnitude of uncertainty does not matter much as long as it is believed to be meaningful. In addition to investigating the cognitive mechanisms of misinformation effects, researchers interested in misinformation would be well advised to monitor such sociopolitical developments in order to better understand why certain misinformation can gain traction and persist in society.
