
What Is Disinformation to Democracy?

Epistemological instability poses a challenge for contemporary democracies. But it also opens democratic possibilities.

Published on Jan 23, 2023

That disinformation is an impediment to democracy has become a truism among many scholars, journalists, and even certain political parties.

Indeed, disinformation has emerged as an (in some quarters the) explanation for the ongoing wave of right-wing populism that has plagued the globe since 2015 or so, from Brexit to the rise and rule of Narendra Modi, Donald Trump, Rodrigo Duterte, and Jair Bolsonaro, among others.

These anti-democratic impulses have long histories of their own, and their contemporary salience is overdetermined and certainly not reducible to matters of factual inaccuracy or to social media’s ability to spread mis- and disinformation rapidly at a global scale. Instead of positing disinformation as a cause of democratic decline, we contend that the emergence of disinformation as a scholarly and political problematic reintroduces democratic potential in at least two respects.

First, framing disinformation as a civic infrastructure problem creates new possibilities for publicly engaged fights over the control and regulation of digital platforms, opening pathways for democratizing the design of communications technology. Second, concerns over disinformation are a diagnostic of deep political disagreement over the foundations of truth and facticity, disagreements that cannot be resolved through design choices alone.

Defeating disinformation, if that is our goal, requires engaging in, not avoiding, political conflict over facts, their meaning, and the platforms through which they circulate.

Photo by Kayla Velasquez on Unsplash

In an article in Harper’s last year, Joe Bernstein characterized the rise of Disinformation Studies (“Big Disinfo,” in his phrasing) as a bid for elite institutions to assert greater control over digital communication spaces through an “unofficial partnership between Big Tech, corporate media, elite universities, and cash-rich foundations.” This was a provocative and productive salvo that has ignited intense debate, though it also provided a frame that risks stripping from view messy political contests and competing visions surrounding the problem of disinformation.

On the one hand, framing disinformation as a systemic problem of today’s digital media has rekindled an earlier paradigm of communication studies. Like their interwar counterparts who researched the efficacy of propaganda and how to mitigate it, Disinformation Studies scholars have refocused the field’s attention on media’s efficacy at persuasion and ideological cultivation (Bauer and Nadler, 2021).

On the other hand, the disinformation problematic has been embraced, though somewhat reluctantly, by tech industry actors. As Bernstein notes, social media companies have an interest in promoting a narrative that attributes great power to their platforms in terms of persuasion and indoctrination: the same characteristics that make their platforms “bad for democracy” make them good for advertisers seeking reassurance that their digital ad dollars are well spent. By foregrounding the role of bad actors circulating disinformation on their platforms, social media companies have been able to externalize the blame for their myriad ill effects.

Bernstein attributes this strong effects narrative to “Big Disinfo,” but media platforms (from print, to television, to digital) have long emphasized their efficacy at shaping the minds of their users in presentations to investors, even while their claims to the public have construed advertising as a mere convenience (Crain, 2021). Paul Lazarsfeld, a leading proponent of the “limited effects” theory, was notoriously two-faced in this regard. For some audiences he claimed that media effects were significantly mitigated by social networks while to others he framed the same data as evidence of media’s persuasive power (Pooley, 2006).

For those of us who recall the late Todd Gitlin’s crucial 1978 Theory & Society essay on the “dominant paradigm” in media sociology, Bernstein’s argument rings familiar. As Gitlin noted, scholars like Elihu Katz and Lazarsfeld used the putative “hypodermic needle” theory of strong media effects as a foil against which to establish what became a “limited effects” paradigm in mass communication research. This was the paradigm that pushed propaganda studies to the margins of the field until their recent re-emergence as Disinformation Studies.

Bernstein seems to be suggesting a similar move—a step away from “Big Disinfo” by tempering our belief in disinformation’s power at steering public opinion.

But is a return to limited effects really a desirable path beyond disinformation?

We contend that focusing primarily on the search for measurable effects of discrete exposures to disinformation overlooks something more significant. The disinformation panic itself represents an unprecedented pushback against the “technocratic populism” promised by tech companies and social media platforms since the mid-2000s. Technocratic populism thrived during a period when tech companies could claim that their role in the public sphere was one of empowering ordinary people while toppling gatekeepers and an elite’s monopolistic hold over traditional media. This kind of populist rhetoric was reflected in mission statements like Twitter’s “To give everyone the power to create and share ideas and information instantly, without barriers.”

The technocratic side of this vision, however, was that all the major decisions about how to design and govern online spaces were best left to the “neutral” expertise of tech companies. As recent research has starkly shown, such decisions have proven to be of no small consequence; this includes control over the design of algorithms, moderation policies, the architecture enabling digital surveillance, and much more (Gillespie, 2018; Noble, 2018; Zuboff, 2019). If mid-20th century journalism’s claim to cultural authority rested on a faith that journalists’ professional discretion offered the best means to decide what news was fit to enter public discussion, social media companies have constructed a different legitimation myth. Theirs promises that their own expertise merely optimizes an infrastructure that allows a bottom-up, spontaneous public discourse to flourish.

There have always been challenges to these Silicon Valley visions of technocratic populism, but they have greatly intensified alongside concerns regarding disinformation. From data breach scandals to battles over hate speech moderation to authoritarian campaigns leveraging social media for manipulative purposes, it has become widely acknowledged that online infrastructure is neither merely technical nor apolitical. Questions of social value are clearly at stake in matters of how online worlds are designed and who gets to make those decisions.

The concerns so poignantly articulated by Bernstein represent legitimate challenges to significant tendencies within scholarly and journalistic discourses on disinformation. They point to serious dilemmas that advocates must grapple with in pushing for immediate reforms in social media. Yet researchers and critics concerned about these tendencies should not lose sight of the larger story. The perceived crisis around disinformation—and the broader techlash it has spurred—represents an eruption of popular discontent over the communication structures brought to us by Big Tech. This is an opportunity to channel public energies against the once-unquestionable assumption that design decisions for the digital public sphere should be left to commercial forces.

In short, the techlash stirred by “Big Disinfo” presents us with new opportunities for extending political contestation into spaces that have thus far avoided serious governmental regulation or public scrutiny (e.g., social media platform design). The disinformation panic presents a fleeting chance to democratize these spaces.

Photo by Greg Bulla on Unsplash

Foundations, historically wary of picking sides in partisan fights, have gladly funded seemingly non-partisan and apolitical “solutions” to a post-truth predicament. Yet that predicament, as Benkler and his colleagues have shown, is rooted in partisan and ideological asymmetries in our digital public sphere (Benkler et al., 2018). While liberatory movements like #OccupyWallStreet, #BlackLivesMatter, and #MeToo have managed to leverage existing omnibus platforms like Twitter, Facebook, Instagram, and TikTok, these power asymmetries, and the technocratic solutions that fail to account for them, favor the political right (Schradie, 2019).

The disinformation problematic is also indicative of a historical conjuncture where hegemonic conceptions of reality are themselves open to contestation. Public trust in journalistic and academic expertise has been on the decline for decades, a phenomenon that right-wing movements around the world have both exacerbated and exploited. As Robert Mejia, Kay Beckermann, and Curtis Sullivan (2018) have argued, the popularity of the “post-truth” explanation for the global resurgence of right-wing populism in the past decade is rooted in nostalgia for a pre-post-truth era of presumed epistemological stability that was, in fact, enabled by racist hegemony.

The notion that basic facts are subject to political contestation comes as little surprise to those experiencing and challenging long-standing systems of repression. These communities have long seen the basic facts of their lived experiences, and even existence, called into question in the interest of justifying or maintaining white supremacist, colonial, and other hierarchical orders (Kuo and Marwick, 2021).

Rather than see our contemporary epistemological instability as an impediment to democracy, activists and scholars would be wise to recognize the stakes of this opening. Dreams of an end to disinformation are underwritten by a desire for epistemological stability. But as Antonio Gramsci or Stuart Hall might tell us, moments of epistemological instability are moments of both danger and opportunity.

What might Disinformation Studies look like if it acknowledges and even embraces the inevitability of political conflict, of contrasting worldviews, of hegemonic struggle for the terms and facts with which we make sense of and establish a collective sense of reality? What might Disinformation Studies look like if it explicitly sets its sights on understanding and undermining the anti-democratic impulses of Big Tech and right-wing movements?

These are the questions Disinformation Studies scholars ought to ponder as we consider the field’s future.

References

Bauer, A.J., and Anthony Nadler. (2021). Propaganda analysis revisited. Harvard Kennedy School (HKS) Misinformation Review. https://misinforeview.hks.harvard.edu/article/propaganda-analysis-revisited/

Benkler, Yochai, Robert Faris, and Hal Roberts. (2018). Network propaganda: Manipulation, disinformation, and radicalization in American politics. New York: Oxford University Press.

Bernstein, Joe. (2021). Bad news: Selling the story of disinformation. Harper’s (September): 25-31. https://harpers.org/archive/2021/09/bad-news-selling-the-story-of-disinformation/

Crain, Matthew. (2021). Profit over privacy: How surveillance advertising conquered the internet. Minneapolis: University of Minnesota Press.

Gillespie, Tarleton. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. New Haven: Yale University Press.

Gitlin, Todd. (1978). Media sociology: The dominant paradigm. Theory & Society 6(2): 205-253. https://www.jstor.org/stable/657009

Kuo, Rachel, and Alice Marwick. (2021). Critical disinformation studies: History, power, and politics. Harvard Kennedy School (HKS) Misinformation Review. https://misinforeview.hks.harvard.edu/article/critical-disinformation-studies-history-power-and-politics/

Mejia, Robert, Kay Beckermann, and Curtis Sullivan. (2018). White lies: A racial history of the (post)truth. Communication and Critical/Cultural Studies, 15(2): 109-126. https://doi.org/10.1080/14791420.2018.1456668

Noble, Safiya. (2018). Algorithms of oppression: How search engines reinforce racism. New York: NYU Press.

Pooley, Jefferson. (2006). Fifteen pages that shook the field: Personal Influence, Edward Shils, and the remembered history of mass communication research. The Annals of the American Academy of Political and Social Science, 608(November): 1-27. https://doi.org/10.1177/0002716206292460

Schradie, Jen. (2019). The revolution that wasn’t: How digital activism favors conservatives. Cambridge: Harvard University Press.        

Zuboff, Shoshana. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. New York: Public Affairs.

                       
