
Don't Look Up: Watch Your (Two) Step with a Bottom-Up Approach to Disinformation Research

Digital disinformation is changing society, but the societal 3 C's (class, community, and context) are changing disinformation, too.

Published on Jan 23, 2023

In the study of digital politics, a new object of fascination emerges every few years. This usually follows the news cycle, such as claims that social media have sparked social movements or destroyed elections: Twitter Revolutions! Cambridge Analytica! The newest moving target—Fake News!—has brought with it a cascade of scholarship.[1] But one central problem plagues research on disinformation diffusion, as it has in previous domains—a hyper-focus on data that’s flashy, shiny, and digitally visible.

This is a methodological weakness that ends up skewing findings. Copernicus faced the same problem 500 years ago when he discovered that the earth was not the center of the solar system.[2] Likewise, online disinformation is not the center of politics or even its most central issue, but rather is part of a broad ecosystem of information distribution. In this environment, people make daily choices—perhaps even surprising ones that are yet to be more fully uncovered—about what to read, what to share (or not), and what to believe. But by only looking at Big Data and hashtag-centric analyses, we lose sight not only of these practices at a local level but also of the broader societal structures that may shape them. We have also focused so heavily on publicly available articles and posts that we have lost sight of the backchannel ways, both online and offline, in which people distribute information.

Relatedly, most existing scholarship on disinformation skews toward top-down powerful players: platforms (like Twitter), politicians (like Putin), or policies (like the GDPR). This extant research ends up shoving aside the broader array of everyday bottom-up active media practices and mechanisms of sharing—or not sharing. In other words, selecting on the dependent variable of disinformation hotspots and only looking at producers’ “successful” disinformation campaigns obscures the ability to detect variation among users’ class, community, and context. And it is these 3 C’s that we need to tackle in the next generation of disinformation research.

First, we need a much more nuanced look at social class than the trope that believers of disinformation are just uneducated idiots. While people from lower socioeconomic backgrounds are more likely to believe fake news,[3] we know much less about how it spreads rapidly within lower social classes. At the same time, conservatives, who tend to come from higher socioeconomic backgrounds than progressives, are more likely to share and spread disinformation.[4] While we know about various Twitter communities, the preponderance of research on platforms like Twitter provides little demographic information about specific users, particularly on social class, and such platforms make it impossible to tell what populations are absent from them. We know from the digital inequality literature that there are persistent class-based challenges, not just in terms of accessing the internet, but also having the skills, savvy, and literacy to produce, navigate, and evaluate information.[5] Still, most class-based digital inequality research focuses on education and income levels, rather than class power relations, which could help explain the steadfast belief in falsehoods about those in power. Studying these incongruities and intersections could reveal the less visible ways that inequalities operate with disinformation diffusion.

Next is community. Often overlooked are local institutional and civil society connections. The presumption is that disinformation is orchestrated from above and that users are individual consumers, rather than situated in local communities. Some research, especially computational network analysis, tends to consider disinformation as part of a grand political theater in which central actors become puppet-masters of duped marionettes—social media users unwittingly fooled into believing fake news. These top-down approaches suggest that the audience for disinformation is passive, rather than composed of active participants in the generation and recirculation of news and information.

Decades of communication research have disproven the idea that information flows unidirectionally or that people lack agency in believing the messages to which they are exposed, as if following orders like a robot. This injection model of communication was first challenged by the two-step flow theory (Katz and Lazarsfeld 1955),[6] and a rich literature in audience and reception studies has further criticized this unidirectional media effects framework for not giving people the agency that they clearly have and use. Building on prior calls to bring back active audience theories,[7] we need to go beyond decontextualized network nodes and instead center civil society connections and the role of opinion leaders in the flow of dis/information.

By studying the role of local community leaders and local institutions, such as unions and churches, we can better see what the key levers and dams are in the pipelines of news and information. Local opinion leaders, embedded in civil society groups, may be key to trusting, or mistrusting, mainstream news. More research is needed on how local, bottom-up institutions promote or counteract digital disinformation in particular, rather than simply identifying online network leaders. Simply put, network leaders and community organizations may dissuade, not just promote, digital disinformation.

Finally, geographic context matters. We have certainly seen news on Covid junk science cross international borders, but not all information practices are equal. With some exceptions, we know more about fake news in so-called Western countries but less about practices in countries where disinformation rates are lower, such as France, or where they are on the rise, such as Brazil.[8] A deep dive into other countries and a global comparison across them would elucidate how disinformation flows vary across contexts. In the same vein, a growing body of literature has shown that people in rural areas are more likely to share fake news than those living in cities.[9] People in rural areas have also been more supportive of new (and authoritarian) leaders and far-right populism.[10] Based on my current research, one hypothesis is that rural citizens are more likely to distrust those in power, whether the government or media, because the elite power centers are based in urban areas and do not represent their views. Understanding these regional contexts would also include mapping local communication tools, whether online or offline. While the accessibility of one or two sources of public American platform data across countries is useful for comparative purposes, social media’s everyday use varies by geography. Therefore, cross-country analyses would be enriched by multi-method, especially qualitative, approaches. Moreover, we need to widen data collection from public platforms, or even semi-private ones like WhatsApp, to include contextually sensitive offline information-sharing practices.

Yet this call for a bottom-up approach to the study of disinformation is not simply a plea for qualitative or offline methods. We are—or at least should be—beyond this binary. Instead, we need a broader approach than current research allows. Consumers and sharers of fake news are not individuals floating in space, detached from class, community, and contextual cleavages. We need better answers to pressing questions. For instance, one unsolved puzzle is that a much larger number of people believe disinformation than consume it.[11] Integrating the 3 C’s into our research, and recognizing that consumers are more active than we may think, could help bridge this divide between seeing and believing.


More importantly, we cannot simply jump to the latest digital fetish and abandon disinformation itself. The rise of authoritarianism and the disinformation horse it rides is not going away. Yet disinformation, like activism or elections, has been transformed over the last few decades, not simply because of new digital tools but because of the political movements, economic structures, and social institutions that are using them. Switching the causal arrows is key. We must understand not simply how digital disinformation is changing society, but how the societal 3 C’s are changing disinformation.

To see how dis/information is recirculating and not recirculating, I am not suggesting we simply take the trite New York Times approach and go into a diner in a coal mining town to talk to random people who “look” working class and who may be from a different community or context than the researcher. Instead, we need to tackle the bottom-up systematically in our research design, questions, and methods. Such a multi-method and interdisciplinary approach would center users in their local context, communities, and class relations, rather than simply starting with a firehose of digital data.

References

Allen, J., B. Howland, M. Mobius, D. Rothschild, and D. J. Watts. 2020. “Evaluating the Fake News Problem at the Scale of the Information Ecosystem.” Science Advances 6(14). https://www.science.org/doi/10.1126/sciadv.aay3539

Fletcher, Richard, Alessio Cornia, Lucas Graves, and Rasmus Kleis Nielsen. 2018. “Measuring the Reach of ‘Fake News’ and Online Disinformation in Europe.” Reuters Institute. https://reutersinstitute.politics.ox.ac.uk/our-research/measuring-reach-fake-news-and-online-disinformation-europe

Guess, Andrew, Jonathan Nagler, and Joshua Tucker. 2019. “Less than You Think: Prevalence and Predictors of Fake News Dissemination on Facebook.” Science Advances 5(1):1–9. https://www.science.org/doi/10.1126/sciadv.aau4586

Katz, Elihu and Paul F. Lazarsfeld. 1955. Personal Influence: The Part Played by People in the Flow of Mass Communications. Glencoe, IL: Free Press.

Marwick, Alice. 2018. “Why do People Share Fake News? A Sociotechnical Model of Media Effects.” Georgetown Law Technology Review 2: 474.

McAdam, Doug and Hilary Boudet. 2012. Putting Social Movements in Their Place: Explaining Opposition to Energy Projects in the United States, 2000-2005. New York: Cambridge University Press.

Schradie, Jen. 2011. “The Digital Production Gap: The Digital Divide and Web 2.0 Collide.” Poetics 39(2):145–68. https://doi.org/10.1016/j.poetic.2011.02.003

Stockemer, Daniel. 2017. “The Success of Radical Right-Wing Parties in Western European Regions: New Challenging Findings.” Journal of Contemporary European Studies 25(1). https://doi.org/10.1080/14782804.2016.1198691

Tsfati, Yariv, H. G. Boomgaarden, J. Strömbäck, R. Vliegenthart, A. Damstra, and E. Lindgren. 2020. “Causes and Consequences of Mainstream Media Dissemination of Fake News: Literature Review and Synthesis.” Annals of the International Communication Association 44(2):157–73. https://doi.org/10.1080/23808985.2020.1759443

Ward, Jeremy K., et al. 2020. “The French Public's Attitudes to a Future COVID-19 Vaccine: The Politicization of a Public Health Issue.” Social Science & Medicine 265. https://pubmed.ncbi.nlm.nih.gov/33038683/
