Offering insight on how to compete with coordinated deceptive information campaigns - created in collaboration with UNC Center for Information, Technology, and Public Life
In our current online world, civic information—important information needed to participate in democracy—is too often drowned out by viral falsehoods, including conspiracy theories.
Often, this is not an accident. Carefully orchestrated campaigns exploit social media tools, like algorithmic amplification and micro-targeting, to manipulate users and the information environment. These campaigns leverage platforms’ core product design to promote narratives, sell products, persuade users, and even provoke users to act for political, economic, or social purposes.
As a result, today’s civic leaders must play a more active role in the amplification of fact-based information. As Dr. Anthony Fauci said, “we've gotta be out there — scientists and the general public and those who understand the facts — talking about true and correct information.”[1]
To be sure, the playing field is not even. Social media platform tools are better suited for campaigns seeking to manipulate and agitate users than to empower and inform. Platforms and regulators must get involved to fix the design flaws that allow false and misleading information to flourish in the first place.[2] Policymakers should update and enforce civil and human rights laws for the online environment, compel radical transparency, update consumer protection rules, insist that industry make a high-level commitment to democratic design, and create civic information infrastructure through a new PBS of the Internet. In the absence of such policy reform, amplifiers of civic information may never be able to beat out the well-resourced, well-networked groups that intentionally spread falsehoods. Nonetheless, there are strategies for helping civic information compete.
This handbook aims to:
Educate civic information providers about coordinated deceptive campaigns
…including how they build their audiences, seed compelling narratives, amplify their messages, and activate their followers, as well as why false narratives take hold, and who the primary actors and targeted audiences are.
Serve as a resource on how to flood the zone with trustworthy civic information
…namely, how civic information providers can repurpose the tactics used by coordinated deceptive campaigns in transparent, empowering ways and protect themselves and their message online.
This handbook will function as a media literacy tool, giving readers the skills and opportunity to consider who is behind networked information campaigns and how they spread their messages.
Its focus is limited to how information spreads on social media, but modern networked information campaigns work across an entire ecosystem of on- and offline tactics. Information campaigns use radio, mail, email, print media, television, and face-to-face communication.[3]
Definitions and terminology
An array of terms are applied to viral falsehoods, including fake news, misinformation, disinformation, malinformation, propaganda, and, in the national security context, information operations, hybrid threats, and hack and leak. Mis-, dis- and malinformation, as defined by Claire Wardle and Hossein Derakhshan, are three of the most prevalent and important terms today:
Misinformation – “Information that is false, but not created with the intention of causing harm.”[4]
Disinformation – “Information that is false and deliberately created to harm a person, social group, organization, or country.”[5]
Malinformation – “Information that is based in reality, used to inflict harm on a person, organization, or country.”[6] Examples include leaks, harassment, and hate speech.
While it is important to distinguish between the intentional and unintentional spread of falsehoods, discussions around mis- and disinformation tend to center the veracity of a specific narrative or piece of content. What is missing, oftentimes, is a focus on how false or misleading narratives are deployed by deceptive actors to accomplish a strategic goal. In addition to “what is true?”, we need to ask the question: who benefits? For that reason, in this handbook we often use the terms “networked information” or “coordinated deceptive” campaigns:
Networked information campaigns – a combination of grassroots efforts and a central organizer who frames issues, coordinates energies, and sets goals to spread any type of information – civic or false.
Coordinated deceptive campaigns – a subset of networked information campaigns that spread false or misleading information.
II. The Model: Online Networked Information Campaigns
The most prominent online conspiracy theories rarely go viral on their own. Successful ones reach their target audiences by deploying a series of tactics that exploit the platforms’ design loopholes, step by step, using all the tools and infrastructure social media has to offer.
Successful networked information campaigns begin by building massive digital audiences, often through large public accounts and partisan, opinion-oriented outlets. Campaigners then seed narratives that appeal to their targeted audiences and flood the zone with their message. Finally, successful networked information campaigns activate their audiences through opportunities to become involved and refine their messages based on signals from their audiences.
Although coordinated deceptive campaigns deploy these tactics in ways that are inherently manipulative, similar tactics can be used by providers of civic information to share factual information that enables people to engage more fully in their communities. For example, whereas manipulated and misattributed content is deceptive by its very definition, influencers, hashtags, and message testing can be used for civic information campaigns.
Building the pipeline
The first step in coordinated deceptive campaigns involves building up an ongoing audience, or “pipeline.” The biggest networked, manipulative campaigns are successful because they put in the effort to build audiences using outlets, public accounts, channels, and influencers across platforms. Each of these grows followers using distinct tactics:
Public accounts: Disguise their intentions or identities through innocuous content to build their audiences
Deceptive information campaigns grow audiences for public accounts – such as Facebook pages and Twitter, Instagram, and TikTok profiles – by disguising their intentions and identities. They run non-political, attractive content like cute cats on their accounts only to later leverage their followers for political and economic gain.
Some coordinated deceptive campaigns pay to grow their audiences, either through advertisements or by paying other accounts to cross-post their content.[7] Platforms also reward accounts for posting engaging content, showing users content from accounts they do not follow if someone they follow comments or reshares posts.[8]
Coordinated deceptive campaigns may operate profiles that appear superficially independent but are in fact centrally coordinated. For example, The Daily Wire posted its highest-engagement article from the second half of 2021 a total of 51 times across 17 different Facebook pages.[11]
Outlets: Mimic news outlets while acting in manipulative ways
Deceptive outlets, which we have called ‘trojan horse outlets’, use the trappings of independent journalism while eschewing journalistic standards of transparency to spread misleading narratives.[13] News rating service NewsGuard evaluates whether outlets meet basic journalistic standards,[14] and many of the highest-engagement online outlets are rated as failing key criteria.[15]
Outlets are effective in part because social media sharing presents all content in similar formats, which strips news stories of signals of journalistic integrity. From an article’s URL and thumbnail alone, it is unclear whether the news site has separate sections for news and opinion, a masthead, or bylines and datelines.
Video channels: Build audiences for videos across social media platforms
Videos can be particularly useful for deceptive information campaigns. Most social platforms do not highlight what videos are trending, making it harder for fact-checkers to intervene, and they are not easily searchable in the way that text or even photos are. Streaming services such as Twitch are especially difficult to moderate. YouTube channels use tools such as the subscribe button and recommendations to grow their audiences, though YouTube videos typically go viral thanks to amplification of the link on sites like Facebook and Twitter.[17]
Influencers: Build audiences around their individual personas
Influencers often have public profiles across several platforms and may be influential in offline spaces as well as online. They develop intimate, one-sided relationships with their audiences by sharing personal stories, posting selfies, and giving followers broad access to their lives, “mak[ing] sure the audience identifies with them, much in the way a friend would.”[18] Followers are more likely to accept information as true if it is shared by friends and family.
Influencers build their audience by activating their existing supporters, giving them opportunities to participate in the organizing work of raising the visibility of the influencer’s account. “Follow-for-follow” campaigns are an example of this, in which influencers cross-promote one another’s accounts to their own followers.[19]
Influencers make money by selling or promoting products, soliciting donations, taking part in social media profit-sharing partnership programs, and creating content for others (see “Targeting through paid influencer promotion”).
Community forums: Assemble like-minded users and facilitate mobilization using platform tools
Public and private community forums such as Facebook groups, Reddit threads, Twitter Communities, and WhatsApp groups are a key vector of recruitment for extremist movements. The most active political groups on Facebook have been rife with hate, bullying, harassment and misinformation, and they grew large quickly by leveraging platform tools.[21]
Facebook recommends groups to users based on their interests and helps owners find new members. Facebook tools allow users to build invitation lists from those who are members of similar groups or express related interests and to automatically invite them, which has helped networked information campaigns in the past. In 2021, Facebook stopped recommending health and political groups and slowed the growth of newly created groups.[22]
One crucial medium for Stop the Steal organizing was the Reddit copycat website thedonald.win. The site got its start as a subreddit, r/The_Donald, where it amassed 790,000 subscribers before Reddit banned it in mid-2020. Much of this original audience migrated to thedonald.win, where users organized violent January 6th protests.[25]
Targeted ads: Use data about users to persuade them to become part of the pipeline
On social media platforms, ad buyers can target users based on demographic information, demonstrated interest in certain topics, or even with lookalike audiences.[26] This allows for microtargeting and siloed conversations.[27]
Off social media platforms, advertisers collect or purchase data through web tracking (cookies), location tracking, and behavioral signals (clicks, impressions, or engagement with content).
Bringing the narrative to life
Coordinated deceptive campaigns push a specific narrative, for example, that vaccines are dangerous.[28] To do so, they deploy real-world stories or controversies, cherry-pick headlines or grains of truth, elevate anecdotes without context as “evidence,” and cater their message to narrow identity groups. Campaigns take advantage of what people are already inclined to believe and test out various messages to see what sticks. The following tactics are common ways coordinated deceptive campaigns bring a narrative to life:
Kernel of truth: A cherry-picked detail from a reputable source—such as an article from a responsible journalism outlet, a court filing, a personal anecdote, or a leak—presented with insufficient or misleading context
Kernels of truth can lack important context and obscure the big picture. These kinds of details sometimes originate from individuals and gain traction through bottom-up grassroots amplification. Other times, a central figure strips information of its context to mislead. In either case, coordinated deceptive campaigns prime their audiences with a broad narrative that a kernel of truth, including the audience’s lived experience, can then support.[29]
Manipulated and misattributed content: Images, audio, and other content that is deceptively altered or shared in a misleading manner
Hashtags: Viral slogans that place individual posts within a broader context and connect them to other posts about the same topic
Hashtags can be instrumental in gaming the algorithm: they give followers a common vocabulary, which can help elevate content on a platform’s trending list.
Memes: Memorable pieces of visual content, often humorous, that are easy to produce and spread
Memes can be effective ways to spread false information because they are easily shareable, memorable, and can convey complex ideas quickly and simply. Because they are often humorous or emotionally appealing, they are more likely to be shared and spread, even if the information they contain is false or misleading.[33]
Keyword stuffing and data voids: Search engine manipulation tactics that game search result rankings
Keyword stuffing: Adding popular keywords to unrelated websites to promote content in search engine rankings. This elevated ranking can create the illusion that a site reflects the general consensus and is supported by the scientific community or independent journalism.[35]
Exploiting data voids: Data voids, a concept coined by Michael Golebiewski and danah boyd, describe obscure terms which, when entered into a search engine, return deeply problematic, few, or no results.[36] Data voids can lead searchers to sites filled with false information because those sites rank highly in search results in the absence of high-quality, trusted sites using the search terms. Users can stumble upon data voids or be directed to them by malicious actors who know there will be no counter-content.[37]
Identity appeals: Campaigns morph their messages to appeal to specific identities
Message testing: Testing content variations to optimize effectiveness and gauge audience susceptibility
Campaigns publish multiple versions of the same content, allowing them to determine what spreads fastest online. This tactic, used by coordinated deceptive campaigns and mainstream outlets alike, allows actors to test and refine the effectiveness of messaging.[40]
Flooding the zone
Once brought to life, these narratives get distributed through the pipeline to reach large audiences. Social media virality can have a compounding effect; on many platforms, algorithms boost content they recognize as popular, adding it to the timelines of users who may not follow any pages or belong to any groups within the pipeline. Each of the following tactics amplifies a narrative so that it floods users’ online experience:
Cross-posting: Posting content across many accounts and platforms
Coordinated deceptive campaigns often post content across multiple pages, accounts, and social media platforms to game the algorithm and create the appearance of broad, grassroots support.
Influencers also cross-post each other’s content on topics of mutual interest.
Activating bots: Coordinating automated accounts to deceive users or manipulate algorithms
Bots are often used to artificially amplify a message, game a trending algorithm, or boost engagement metrics.[43]
A study of June 2017 Twitter bot activity found that bots produced approximately one-quarter of all original tweets referencing climate change on a typical day. In the weeks before and after the United States announced its withdrawal from the Paris climate agreement, 40% of the most active accounts posting about climate change were bots, although they comprised only 15% of the most influential accounts.[44]
Hashtag adoption: Shaping the narrative and coordinating efforts
Online activists use a common set of hashtags or keywords. When these hashtags become popular enough, many social media algorithms will promote the topic in their targeted audience’s feeds or display it on a list of trending topics. Narratives with effective hashtags allow supporters to help the story trend and inflate its perceived popularity.
Evading automated content moderation: Replacing specific keywords or phrases that automated content moderation tools are likely to flag
Many social media sites rely on automated content moderation tools to flag the use of specific words or phrases as a first step in removing or reducing the spread of false content. To evade these tools, influencers use “Algospeak”—code words, turns of phrase, or emojis—to prevent their posts from being removed or downranked.[46]
Targeting audiences: Information operations match identity appeals to specific subgroups
Race plays a substantial role in the targeting of coordinated deceptive content. For example, an analysis of 5.2 million tweets from the Russian-funded Internet Research Agency troll farm found that presenting as a Black activist was the most effective predictor of disinformation engagement.[48] Targeting by race, ethnicity, or national origin preys on what Mark Kumleben, Samuel Woolley, and Katie Joseff have termed “structural disinformation,” or “systemic issues related to the broader information environment, born out of long-term efforts to control minority groups’ access to and understanding of country’s electoral and media systems.”[49]
Targeting through paid microtargeted ads: Paid targeting of users based on their interests
Targeting tools allow advertisers to send different ads to people based on their personalized profiles.
Targeting through paid influencer promotion: Leveraging influencers’ trust among their audiences
Activation
Campaigns give audiences actions to take. These actions are ends in themselves; they also strengthen audience enthusiasm and loyalty and bolster the pipeline for future narrative campaigns. Typical forms of activation include:
Call to action – subscribe and engage: Give audiences ways to join the distribution pipeline
Users follow and subscribe or provide data (via data trackers and by filling in their information). In doing so, they become part of the distribution pipeline for future narrative campaigns.
Call to action – build community: Invite others to follow, join groups, or subscribe
Campaigns offer supporters ways to stay involved and facilitate future organizing or activism.
Mobilize: Organize digital grassroots troops
Coordinated deceptive campaigns use social media, including closed networks such as Facebook groups, messaging services, and alternative platforms, to mobilize supporters to engage in on- and offline activism. Offline, campaigns may encourage their audiences to attend a protest or event or to vote. Offline mobilization can, in turn, feed back into online networked information campaigns when photos and videos of in-person events are posted online.
III. Actors and Goals
Who are the actors behind coordinated deceptive campaigns?
Foreign adversaries: State actors, including China, Iran, and Russia, set up fake social media accounts and newsrooms and use existing state media sites to spread false information. However, domestic coordinated deceptive campaigns now often dwarf foreign-operated ones.[56]
Scammers: Cybercriminals use falsehoods as bait in scams and phishing schemes.[57]
Profiteers: Deceptive tactics are used to sell products, subscriptions, and tickets to events.[58]
Political candidates and campaigns: Candidates and political elites promote false claims that support their policy positions and engage their supporters.[59]
Activists: People with sincere beliefs in conspiracies and false narratives work to promote those campaigns.[60]
Industry: Companies spread false and misleading information where doing so supports their business. For example, the oil industry has spread disinformation about climate change.[61]
Who are the targeted audiences?
Campaigns often tap into their audience’s beliefs, cultures, anxieties, and identities. For example, writing about “black on white crime” preys on white fear.
General public: Some campaigns target as broad an audience as possible in the hopes that pushback – such as fact-check replies – will boost the original content.[62]
Social identity groups: Some campaigns target social identities, such as race, ethnicity, religion, gender, or sexual orientation. These identities are used to unite members of the identity group and divide them from others.[63]
Conspiratorial thinkers: New narratives may build on existing conspiracy beliefs (like QAnon or medical pseudoscience) to gain audiences.[64]
Elites: Media manipulators target political elites, institutions, and influencers to reach larger audiences in a practice called “trading up the chain.”[65]
Political subgroups: Scammers and profiteers rely on political passions to promote their content.[66]
Activists: Like political beliefs, activism in one area (like anti-lockdown participation) can be channeled by other campaigns (like anti-vaccination falsehoods).
Specific regions: Election coordinated deceptive campaigns in particular may focus on specific states or cities to suppress the vote or spread falsehoods about candidates.[67]
Coordinated deceptive campaigns often target more than one category at once, such as conservative elites, older Black voters, or anti-vaccine mothers in California. The targeting often maps onto institutional, intersectional inequities.
Bringing all this together, understanding a networked information campaign requires answering:
What are the goals?
Who are the targeted audiences?
What tactics are employed?
IV. The Case Study of Texas Wind Turbines
Activists, industry, and political candidates worked together during the February 2021 Texas winter storms to target conservatives, anti-renewable energy activists, and Texans with the false narrative that frozen wind turbines were responsible for widespread power outages. Some wind turbines indeed went offline due to the storm, but two-thirds of the state’s shortfall in power generation came from failures at gas and coal power plants. Despite this, the false claim about wind turbines soon went viral on social media, accruing millions of views and interactions, and top Texas officials and members of Congress parroted the claim. The false narrative deflected blame from the systemic causes of the power outages – an independent electric grid that made it difficult to import electricity, a failure to winterize power sources, and a failure to address the causes and effects of climate change – and onto renewable energies.[68]
Building the pipeline
Outlets: A collection of outlets – including Breitbart, Daily Wire, Texas Scorecard, Western Journal, and Fox News – built large audiences across social media while eschewing journalistic standards between 2016 and late 2020.[69]
Public accounts: The outlets operated their own Facebook pages, and pro-fossil fuel pages such as “Friends of Coal” amplified the narrative. The Western Journal cross-posts its content across more than a dozen Facebook pages.
Video channels and influencers: Influencers across platforms with millions of combined followers and subscribers shared content about the outages and created new videos and posts.
Community forums: Political and issue-specific interest groups created Facebook groups to share articles and memes.
Targeted ads: Deceptive outlets had been building up their audiences through ads in the months preceding the power outages.
Bringing the narrative to life
Misattributed media: In the early stage of the crisis, social media posts featured a 2014 image from Sweden of a helicopter spraying hot water on wind turbines to falsely blame green energy for the energy shortfalls in Texas.
Meme: As it was reshared, the image of the wind turbine became a meme, making fun of renewable energies and those who advocate for them.
Kernel of truth: The Austin American-Statesman, a reputable local newspaper, published an article about the frozen wind turbines on February 14 and provided context on the scope of the outages. However, deceptive outlets selectively cited the American-Statesman article to feed their false narrative that green energy was the main culprit for the outages.
Message testing: The Western Journal tested different captions for the same article across its affiliated Facebook pages.
Flooding the zone
Cross-posting: The official Daily Wire Facebook page and three of the site’s top influencers shared the article, often pointing their readers to the tweet with the misattributed image of a helicopter spraying wind turbines.
Targeting through paid microtargeted ads: Texas Scorecard ran a Facebook advertising campaign using frozen wind turbines as a hook.
Activation
Call to action – subscribe and engage: The Lone Star Standard, one of 1,300+ sites run by political operatives but designed to look like local, independent journalism, used the power outages to target Facebook users in Texas and grow its following. This larger audience could be targeted in future narrative campaigns.[70]
V. Why Does This Work?
Psychological and sociological drivers of mis- and disinformation
Successful coordinated deceptive campaigns take advantage of humans’ psychological and sociological vulnerabilities. These biases and predispositions make us more likely to form or accept false views and create barriers to knowledge revision, even after the false view has been corrected.
We rely on two distinct systems of thinking. System 1 is intuitive and quick, which means that it relies on mental shortcuts, while System 2 is deliberative and analytical.[71]
System 1 shortcuts can be important and useful. Academic literature suggests that humans do not have the capacity to process everything and so we must rely on mental shortcuts (accuracy-effort trade-off theory) and that mental shortcuts can be particularly helpful for decision-making in specific situations with high uncertainty and redundancy (ecological rationality theory).[72] Coordinated deceptive campaigns take advantage of System 1 shortcuts, meaning that anyone can be vulnerable to a well-targeted message.
Illusory-truth effect: The illusory-truth effect describes the tendency to perceive repeated information as more truthful than new information.[74] Repetition can increase people’s perceptions of the truthfulness of false statements, even when they know that such statements are false.[75] Even trained Facebook content moderators embraced fringe views after repeated exposure to the videos and memes they were supposed to moderate.[76]
Barriers to belief revision: False information continues to influence people’s thinking even after they receive a correction and accept it as true. This influence persists even among those who can recall the correction.[77]
Appeals to emotion: Heightened emotionality, whether the emotions are positive or negative, predicts increased belief in fake news and decreased truth discernment.
System 2 explanations for why people believe mis- and disinformation and are likely to share it online focus on the social benefit of sharing the content. As researcher Alice Marwick notes, “people do not share fake news stories solely to spread factual information, nor because they are ‘duped’ by powerful partisan media. Their worldviews are shaped by their social positions and their deep beliefs, which are often both partisan and polarized. Problematic information is often simply one step further on a continuum with mainstream partisan news or even well-known politicians.”[80] Examples of System 2 explanations include:
Collective storytelling: Stories use connections in the human experience to make sense of the world.[81] Storytelling can link facts and events together in distorted but appealing narratives.
Identity signaling: People sometimes share a story because it is a useful way to build or reinforce an identity, which takes precedence over the truth value of the story.
Collective identity: People share stories as a way of building collective identity among the group engaging with a specific narrative. Coordinated deceptive campaigns aim to reinforce cultural identity and create discord.[82]
VI. Building Civic Infrastructure
Many of the tools used to spread deceptive narratives can be repurposed in transparent, empowering ways to boost civic information and build more trust in fact-based information. Civic information providers must play an active role in using digital platforms to amplify content to targeted audiences.
Joining and leading civic information campaigns can look a great deal like participating in a deceptive narrative campaign. The same tactics used to build a pipeline, develop a narrative, flood the zone, and activate followers can be used to promote community-strengthening, trustworthy information.
First, build your pipeline by engaging with other community leaders and groups. Seek out collaborations and help train other trusted voices to use these social media tools.
As you get to know your audience, develop narratives and draft new content on topics where you have the expertise or expert partners to do so.
Flood the zone, re-sharing and amplifying quality information from your own sources and other trusted accounts.
Activate your audience with opportunities to be part of the distribution of information.
Building the pipeline
Take stock of existing online accounts and sites and create new ones where gaps exist.
Build on- and offline relationships with journalists, government officials, and other sources of accurate information that you can amplify. Develop influence.
Map out networks, communities, advocates, and methods for collaboration. What other partners could join in amplifying civic information?
Create organic online groups, public accounts, video channels, community forums, outlets, and pages where they are helpful and needed.
Cross-pollinate ideas and messages about key civic issues that are relevant across different groups.
Connect with local influencers to have them share key messages, either for free or through a disclosed paid partnership.
Bringing the narrative to life
Translate civic information into compelling narratives using emotion, identity, and trusted messengers.
Define your target audiences. Who already trusts you? What shared identities can you use to build communities and establish trust?
Identify topics of interest, emerging trends, narratives that are starting to be seeded, and stories you want to share.
Be cognizant of your biases. Understand your personal biases and how they may impact the messages and messengers you find trustworthy.
Prime positive identities: where deceptive campaigns might draw on racialized fears, you can develop positive collective identities around a shared purpose. For example, you can prime a shared national or regional identity, race, ethnicity, gender, or a social or professional group.[84]
Share resources that pre-bunk the tactics used by coordinated deceptive campaigns.[85] Pre-bunking entails teaching audiences about the common characteristics seen across false narratives such as emotional language, scapegoating, or false comparisons between unrelated items, and showing them the critical thinking skills that will help them resist disinformation.[86]
Message testing: Try out different narratives that make factual information emotionally compelling and shareable.
Flooding the zone
Amplify content to your targeted audiences.
Cross-posting: post frequently and post across social media channels, via email, and on message boards. The repetition of accurate messages is necessary for engendering long-lasting resistance to disinformation.[89]
Amplify the posts from trusted communities and networks – get those algorithms working!
Ask trusted messengers, including social media influencers, to promote your civic information campaigns.
Create a hashtag to facilitate surfacing content and coordinating messages.
Target particular audiences by purchasing ads or paying to boost content, disclosing who paid for the ad.
Familiarize yourself with platform content moderation policies and practices. Marginalized social media users face disproportionate content moderation and removal, especially when discussing topics such as race, sexual orientation, gender identity, disability rights, or sexual assault.[90] Consider using “Algospeak” code words or eschewing specific terms but be cautious to avoid creating additional confusion for those unfamiliar with the code word.[91]
Activation
Give ways for followers to stay involved.
Provide opportunities for people to join your distribution pipeline, such as social media links or an email list sign-up.
Invite your personal network to join the distribution pipelines.
Give your audience actionable items, such as polls, petitions, and volunteer sign-ups to expand your reach.
Ask your pipeline to amplify factual posts.
Track your effectiveness as you share your own content and refine messaging according to engagement metrics and audience feedback.
Participate in or host opportunities to engage offline. With permission, use photos and videos from the events to increase online engagement.
Amplifying civic information is a team sport. Subject matter experts, journalists, content creators, organizers, grassroots groups, and passionate individuals all play a vital role in ensuring civic information reaches targeted audiences.[95] They will be more effective if they network together.
VII. The Case Study of Ukraine’s Social Media Mastery
The Ukrainian effort to counter Russian propaganda and showcase the horrors of the war has been a master class in using the engine of social media to promote information.
Building the pipeline
Ukrainian President Volodymyr Zelenskyy, a former television star, and other government officials have embraced social media to share their messages. Civil society organizations such as Promote Ukraine, a nongovernmental media hub based in Brussels, post information online about the war, translating reports from the ground into English and holding news conferences to amplify stories. Ordinary Ukrainians describe their firsthand experiences of the war on social media.
Bringing the narrative to life
Ukraine’s social media strategy has embraced the principle of “show don’t tell.” Zelenskyy’s accounts exhibit his own personal bravery by filming himself in his office and the streets of Kyiv. Citizens have posted photos and videos of the war’s destruction.[96] Some of this content is crucial evidence of Russian war crimes.
Flooding the zone
Ukrainian officials have paraded an increasingly tired and hoarse President Zelenskyy before world leaders and maintained a steady stream of videos of high-level officials pre-bunking Russian disinformation attempts.
Additionally, the Ukrainian people have taken to using Telegram, YouTube, TikTok, and even Twitter – which is not a popular platform within Ukraine – to disseminate information and updates on military and diplomatic success.
Activation
To mobilize support, Ukraine’s online campaigns have offered calls to action, asking for volunteers to join a decentralized “IT Army” in the cyberwar against Russia, volunteer fighters, cryptocurrency donations, political support for foreign aid, and medical materials.[97]
VIII. Conclusion
Civic information providers must take an active role in the amplification of the fact-based information that is critical to democracy, public health, and the environment. The tactics outlined above form a media literacy guide to understanding how coordinated deceptive campaigns disseminate their messages, who the actors are, whom they target, and why they are successful. This guide will serve civic leaders as they combat viral falsehoods and can function as a template for distributing their own information online.
Civic information providers can and should play a more active role online, but platforms and regulators must be involved in order to fix the design flaws that allow false and misleading information to flourish in the first place. The debate should move beyond the focus on “disinformation” and the accuracy of individual pieces of content. The current, after-the-fact platform whack-a-mole content moderation strategy is ineffective and gives rise to concerns about censorship. Instead, platforms and regulators should take the norms and laws developed offline over many decades for consumer protection, civil rights, media, elections, and national security and renew them for the online world.[98]
Appendix: Building Resilience Against Online Attacks
Trusted validators sometimes become targets of conspiracies or harassment. Below are practical steps on how to protect yourself and what to do if you are being harassed.
Online harassment maps onto existing power structures, privilege, and erasure. Women and people who are LGBTQ+, non-white, immigrants, disabled, or members of an ethnic or religious minority group are frequently targeted by attacks – especially if they belong to more than one marginalized group.
Protect yourself
Secure your accounts
Implement two-factor authentication and use unique and strong passwords on your credit card, cell phone provider, utilities, bank, and social media accounts. Conduct regular data backups.
Adjust personal social media account settings to the most private settings, remove addresses or specific locations from accounts, and avoid discussing personal information that could be used against you. Where possible, create separate professional accounts for social media.
Manage online footprints
Be aware of your digital footprint through services like DeleteMe or by searching for yourself on Google or DuckDuckGo.
Install a secure Virtual Private Network (VPN) to privatize your network traffic and ensure that attackers cannot find you using your IP address.
Strengthen community
Connect with others in your online network. Have a plan to speak out and support one another in the event of online harassment.
Identify a person who can monitor your accounts if you are being harassed, have been doxed, or are experiencing a similar privacy-based emergency. This could be a colleague, friend, or family member.
Tell your family and friends about the risks of online harassment. Online attackers often target family and friends, so it is important that they learn how to reduce outside access to your social media accounts.
Have a plan
Carry out a risk assessment; different activities have different online risks and certain issues are likely to attract more online abuse than others. Once you have identified the potential attackers, become familiar with the actors and their tactics.[99]
Become familiar with the Terms of Service of the platforms you use and learn how to file a takedown request if your information gets posted.
Leaders of organizations should develop policies to protect their employees, such as a ban on giving out personal phone numbers and addresses.
Continue to educate yourself on ways in which you may be targeted. This is particularly important if you identify as a member of a historically marginalized group. What are the ways your community has been targeted in the past? In what ways may that show up today? Who is your support system to reach out to if you face online harassment?
What to do if you are being harassed
Understand and document abuse
Try to determine who is behind the attack and what their motives are. Many emails can be tied to a real person using an IP address. Understand that not all accounts attacking you are real people – some may be automated accounts or people paid to harass others online.
Create a system for documenting abuse, especially anything that you feel is especially threatening and could lead to a physical attack. Documentation should include screenshots of the offensive message or image, the date, time, and name or handle of the harasser, and the date and time of any instance where the abuse is reported to a social media platform.
Consider compiling an incident log and timeline on an encrypted word processing platform.[100]
Prioritize your physical and mental well-being
Disengage from online attacks. Avoid responding to trolls, since a response is often what they want and can worsen the situation. Consider blocking or muting accounts that are causing problems and disabling replies to your posts.
Consider going offline. This could include locking down all accounts for a period of time, particularly based on your tolerance for risk and harassment, or asking a friend, colleague, or family member to monitor your accounts while you are offline.
Lean on your community for support and healing. This is important for all groups, but particularly historically marginalized individuals experiencing attacks on their livelihood online. Make space and time to connect with your community as you disengage from harmful online behavior.
Alert institutions
Tell your credit card companies, cell phone provider, utilities provider, and bank that you are a target of online attacks.[101]
If you are concerned about physical attacks, contact security or the police, or seek support and safety within your community.
Acknowledgements
Thank you to Yael Eisenstat for advising on this project. Yael is currently Vice President at the Anti-Defamation League, where she heads the Center for Technology and Society. Thank you to the University of North Carolina Center for Information, Technology, and Public Life (CITAP), including Heesoo Jang, Kathryn Peters, and Daniel Kreiss, for their insights and feedback.
[15] For example, in the first three quarters of 2022, six of the top twenty outlets for Facebook interactions failed the NewsGuard standard of “gathers and presents information responsibly,” which requires that an outlet reference multiple sources and not egregiously distort or misrepresent information. Original GMF Digital research, using social media interaction data from NewsWhip.
[19] For example, “Trumptrains” were a way to mass-amplify messages and build up followers for pro-Trump Twitter users. Users posted lists of Twitter handles, emojis, and usually a meme or GIF, and the “train cars” operated as follow-for-follow networks. The result was explosive follower growth for everyone involved. Erin Gallagher, “Trump Trains,” Medium, September 15, 2019; Karen Kornbluh and Ellen P. Goodman, “Safeguarding Digital Democracy: Digital Innovation and Democracy Initiative Roadmap,” German Marshall Fund of the United States, March 2020, 21.
[25] Select Committee to Investigate the January 6th Attack on the United States Capitol, “Final Report,” December 22, 2022, 527. Example post from thedonald.win.
[26] Some social media companies, such as Meta, have banned the targeting of users by categories such as health conditions, race, political causes, specific sexual orientations, and religion. However, advertisers have been able to continue using proxies such as “Gospel music,” “Hispanic culture,” and “Anime movies.” This demographic targeting can facilitate racial discrimination in employment, housing, and credit card opportunities. Jon Keegan, “Facebook Got Rid of Racial Ad Categories. Or Did It?,” The Markup, July 9, 2021.
[47] Ben Collins and Brandy Zadrozny, “Anti-vaccine groups changing into ‘dance parties’ on Facebook to avoid detection,” NBC News, July 21, 2021; Image source: Al Tompkins, “How anti-vaxxers avoid being detected by Facebook,” Poynter, July 26, 2021.
[56] Less than 1% of reports of election misinformation submitted to the Election Integrity Partnership, a non-partisan coalition of researchers tracking efforts to delegitimize the 2020 election, related to foreign interference. Center for an Informed Public, Digital Forensic Research Lab, Graphika, & Stanford Internet Observatory, “The Long Fuse: Misinformation and the 2020 Election,” Stanford Digital Repository: Election Integrity Partnership. v1.3.0, 2021.
[94] Colorado Informed, “Voting Made Easy,” 2022. The German Marshall Fund partnered with The Colorado Forum and COLab to identify ways to promote civic information in the leadup to the 2022 elections.