Disinformation is a form of socially constructed ignorance, so let's look at the actors who disseminate it and their motives.
A number of recent studies suggest that “Fake News,” or disinformation, constitutes a small subset of the information shared on online social networking platforms (e.g., Broniatowski et al., 2022; Grinberg et al., 2019; Guess et al., 2019). Yet this seemingly small subset generates great concern and has fed the “folk” idea that we are over-exposed to Fake News and need to do something about it now. In response, a range of solutions has emerged in recent years.
On the one hand, the production and circulation of misleading content are tackled through the content moderation policies of large platforms or through new regulatory proposals to expand the transparency and accountability of digital platforms (such as the Chronology of EU’s actions against disinformation and the Platform Accountability and Transparency Act).
On the other hand, the reception of misleading content is being addressed through increased media literacy and the rise of fact-checking as a new journalistic genre with the ambition of bringing the truth to people (see also the International Fact-Checking Network). Both approaches presume that individuals are naive and ignorant, and need to be educated to improve their discernment capabilities when navigating online.
In this short note, I argue that focusing on the diffusion of and exposure to misleading content obscures many facets that could temper the apocalyptic discourse about disinformation. More specifically, ideas from ignorance studies and my current research on Twitter indicate the importance of de-centering attention from the truthfulness of statements and the medium used to circulate them (digital technology) and re-centering it on the study of individuals as actors with a wide spectrum of motives, worldviews, influence, and means of action and expression.
Disinformation studies has developed into a new, independent field with its own features, alongside the emergence of online platforms. The role of digital technology and the credibility of content created or shared online both occupy a prominent place in the literature. More specifically, much attention is given to measuring the truthfulness of content (see, for example, the Iffy Quotient for English-language news by Resnick et al. (2018) and the articles that cite it). Others focus on how online social networks shape the consumption of misleading content, namely the research on filter bubbles, echo chambers, and the diffusion of misinformation (e.g., Allcott et al., 2019; Flaxman et al., 2016; Pennycook et al., 2021; Vosoughi et al., 2018). Yet taking knowledge as a subject of study is not new. In this short note, instead of proposing new avenues for what comes after disinformation studies, I take a step back and revisit studies of ignorance, a sub-field of Science and Technology Studies. This large body of research considers ignorance to be the product of social construction rather than a simple lack of knowledge. The objective is to reappraise some of its findings, and especially its methodology, to bring new perspectives into the study of disinformation.
Ideas borrowed from this literature include the political foundations of ignorance, the institutional and structural mechanisms of ignorance, and uncomfortable knowledge (see Barbier et al. (2021) for an excellent literature review). These ideas invite us to focus more on the characteristics, interactions, and motives of individuals rather than simply considering a homogeneous population of internet users who disseminate disinformation. Ignorance studies scholarship also provides a methodology for considering the whole ecosystem around the “Fake News phenomenon,” including specific regulations to tackle disinformation, the funding of fact-checking activities, and research.
Furthermore, the idea that uncomfortable knowledge can sometimes be excluded so as not to threaten the ability of institutions to pursue their goals (Dedieu & Jouzel, 2015; Rayner, 2012) may be very relevant to the study of disinformation. It can help in analyzing the rise of alternative discourses in response to ongoing events, especially in situations where institutional decisions are made while the scientific knowledge is still being produced. Finally, studying disinformation as one kind of socially constructed ignorance can address one main limitation of the ignorance studies literature: its focus on a small range of themes, such as health and environmental issues.
Let me provide two concrete examples that illustrate the link between disinformation studies and ignorance studies, even though they address different themes. The first example is a classic one in ignorance studies: glyphosate, the active ingredient in the widely commercialized herbicide Roundup. Glyphosate is a chemical agent that is “probably carcinogenic to humans” (group 2A) according to the International Agency for Research on Cancer. It remains present in commercialized goods because the scientific evidence is, at present, not conclusive about whether it is dangerous for humans. For decades, NGOs, groups such as the ‘Ban Glyphosate’ European citizens’ initiative, investigative journalists, researchers, and whistleblowers have demanded that it be regulated (Wylie et al., 2017).
The second example has also become a classic, this time linked to disinformation: COVID-19 vaccines. Inaccurate claims about vaccines have circulated online and were identified as such by fact-checking organizations. To date, there are two competing hypotheses about whether exposure to misinformation leads to substantial changes in behavior and decision making (Greene and Murphy, 2021; Valensise et al., 2021). Yet policies specific to COVID-19 misinformation were rapidly implemented by Twitter, Facebook, and governments to limit online exposure to inaccurate claims. The two situations are very different, yet they share two similarities: in both, scientific research still needs to be conducted (for disinformation, one main limitation is the availability of pertinent and complete data), and in both, a group of actors considers that there is a high risk that calls for attention.
Today the mere existence of false or inaccurate information on online platforms mobilizes public opinion and decision makers, yet doubt or non-knowledge related to scientific issues (e.g., health, chemicals) that could impact large populations has long existed. It is unclear whether this discrepancy in regulatory response is linked to who disseminates the information (a few internet users versus experts) or to the situations of confusion and uncertainty that social media bring to our attention (Girel, 2017; Gross and McGoey, 2015; Proctor and Schiebinger, 2008).
The aforementioned idea emerged from my current investigation of the online Twitter discourse on climate. The research takes climate as a case study because it is a topic prone to misinformation and misperceptions in spite of the existing scientific consensus around human-caused global warming (Cook et al., 2016). The initial motivation was to ask whether there are “metrics” that correlate with group membership, when considering Twitter accounts of influential individuals categorized according to differences in their (a priori) stance on climate change: Scientists, Activists, and Delayers. “Delayers” are individuals who appear in the Desmog Climate Disinformation Database and hold a Twitter account with more than 10k followers. The quantitative investigation of websites (sources) shared in tweets by members of the three groups reveals a very large discrepancy between Scientists and Delayers in the sharing of high-credibility websites (taking a truth-centered approach, with credibility ratings drawn from the Media Bias/Fact Check website).
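This kind of group-level comparison can be sketched in a few lines. The groups, domains, and credibility labels below are illustrative stand-ins for the actual account categories and external credibility ratings, not the real data:

```python
from collections import defaultdict
from urllib.parse import urlparse

# Hypothetical credibility labels keyed by domain (a stand-in for
# ratings from an external source such as Media Bias/Fact Check).
CREDIBILITY = {
    "nature.com": "high",
    "ipcc.ch": "high",
    "exampleblog.net": "low",
}

# Toy sample of (group, shared URL) pairs standing in for collected tweets.
tweets = [
    ("Scientists", "https://www.nature.com/articles/x"),
    ("Scientists", "https://www.ipcc.ch/report/ar6/"),
    ("Delayers", "https://exampleblog.net/post/1"),
    ("Delayers", "https://www.nature.com/articles/y"),
]

def domain(url: str) -> str:
    """Extract the host from a URL, stripping a leading 'www.'."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def high_credibility_share(rows):
    """Per group, the fraction of shared links rated 'high' credibility."""
    totals, high = defaultdict(int), defaultdict(int)
    for group, url in rows:
        totals[group] += 1
        if CREDIBILITY.get(domain(url)) == "high":
            high[group] += 1
    return {g: high[g] / totals[g] for g in totals}

print(high_credibility_share(tweets))
# → {'Scientists': 1.0, 'Delayers': 0.5}
```

In the actual study, the rating table would be populated from the external credibility source and the tweet list from collected Twitter data; the aggregation logic stays the same.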
However, beyond quantitative exercises, a qualitative look at certain tweets by Delayers shows that many claims, sentences, and arguments square with claims that appeared in early advertising campaigns about “repositioning global warming as theory (not fact),” “emphasizing the uncertainty,” or “economic scaremongering.” Geoffrey Supran and Naomi Oreskes surveyed the “forgotten oil ads that told us climate change was nothing” in an article published in The Guardian (November 2021). This observation invites us to de-center attention from the truthfulness of claims and the medium used and to re-center it on the study of actors. In other words, the study of online disinformation about climate might be more pertinent if conducted jointly with the study of the offline discourse on climate, examining the characteristics of the actors who disseminate inaccurate information, how and why they do it, and what impact it has.
Allcott, Hunt, Matthew Gentzkow, and Chuan Yu. “Trends in the Diffusion of Misinformation on Social Media.” Research & Politics 6, no. 2 (2019).
Barbier, Laura, Soraya Boudia, Maël Goumri, and Justyna Moizard-Lanvin. “Ignorance. Widening the Focus.” Revue d’anthropologie des connaissances 15, no. 15-4 (2021).
Broniatowski, David A., Daniel Kerchner, Fouzia Farooq, Xiaolei Huang, Amelia M. Jamison, Mark Dredze, Sandra Crouse Quinn, and John W. Ayers. “Twitter and Facebook Posts About COVID-19 Are Less Likely to Spread Misinformation Compared to Other Health Topics.” PloS one 17, no. 1 (2022).
Cook, John, Naomi Oreskes, Peter T. Doran, William RL Anderegg, Bart Verheggen, Ed W. Maibach, J. Stuart Carlton et al. “Consensus on Consensus: A Synthesis of Consensus Estimates on Human-Caused Global Warming.” Environmental Research Letters 11, no. 4 (2016).
Dedieu, François, and Jean-Noël Jouzel. “Comment Ignorer Ce Que L’on Sait? La Domestication Des Savoirs Inconfortables Sur Les Intoxications Des Agriculteurs Par Les Pesticides.” Revue Française de Sociologie (2015): 105-133.
Flaxman, Seth, Sharad Goel, and Justin M. Rao. “Filter Bubbles, Echo Chambers, and Online News Consumption.” Public Opinion Quarterly 80, no. S1 (2016): 298-320.
Girel, Mathias. Science et Territoires de L’Ignorance (2017).
Greene, Ciara M., and Gillian Murphy. “Quantifying the Effects of Fake News on Behavior: Evidence from a Study of COVID-19 Misinformation.” Journal of Experimental Psychology: Applied 27, no. 4 (2021): 773.
Grinberg, Nir, Kenneth Joseph, Lisa Friedland, Briony Swire-Thompson, and David Lazer. “Fake News on Twitter During the 2016 US Presidential Election.” Science 363, no. 6425 (2019): 374-378.
Gross, Matthias, and Linsey McGoey, eds. Routledge International Handbook of Ignorance Studies. Routledge, 2015.
Guess, Andrew, Jonathan Nagler, and Joshua Tucker. “Less Than You Think: Prevalence and Predictors of Fake News Dissemination on Facebook.” Science Advances 5, no. 1 (2019).
Pennycook, Gordon, Ziv Epstein, Mohsen Mosleh, Antonio A. Arechar, Dean Eckles, and David G. Rand. “Shifting Attention to Accuracy Can Reduce Misinformation Online.” Nature 592, no. 7855 (2021).
Proctor, Robert N., and Londa Schiebinger. Agnotology: The Making and Unmaking of Ignorance (2008).
Rayner, Steve. “Uncomfortable Knowledge: The Social Construction of Ignorance in Science and Environmental Policy Discourses.” Economy and Society 41, no. 1 (2012): 107-125.
Resnick, Paul, Aviv Ovadya, and Garlin Gilchrist. “Iffy Quotient: A Platform Health Metric for Misinformation.” Center for Social Media Responsibility 17 (2018).
Shabayek, Shaden, Emmanuel Vincent, and Héloïse Théro. “Digital Platforms’ Governance: Missing Data & Information to Monitor, Audit & Investigate Platforms’ Misinformation Interventions.” Policy Brief Series, De Facto Observatory of Information (2022).
Valensise, Carlo M., Matteo Cinelli, Matthieu Nadini, Alessandro Galeazzi, Antonio Peruzzi, Gabriele Etta, Fabiana Zollo, Andrea Baronchelli, and Walter Quattrociocchi. “Lack of Evidence for Correlation Between COVID-19 Infodemic and Vaccine Acceptance.” arXiv preprint (2021).
Vosoughi, Soroush, Deb Roy, and Sinan Aral. “The Spread of True and False News Online.” Science 359, no. 6380 (2018).
Wylie, Sara, Elisabeth Wilder, Lourdes Vera, Deborah Thomas, and Megan McLaughlin. “Materializing Exposure: Developing an Indexical Method to Visualize Health Hazards Related to Fossil Fuel Extraction.” Engaging Science, Technology, and Society 3 (2017): 426.
There is strong evidence that glyphosate can cause cancer in humans, but it is not yet conclusive, and the substance has been classified in group 2A since March 2015. Glyphosate is approved in the EU until 15 December 2022.