What we’re doing and why it matters
The DNC is committed to harnessing its resources to protect the democratic process in the U.S., but we can’t do it alone. Digital disinformation is a whole-of-society problem, and it will take a whole-of-society approach — involving social media companies, government, the media, campaigns, and the general public — to begin to address the challenges. In order to encourage the necessary action, the DNC is rolling out a series of recommendations on how each can do its part to confront online disinformation.
Why it matters
In 2016, Russian intelligence services deployed “active measures” online to attack the American presidential election – using bots, troll farms, fake accounts, disinformation, propaganda, and a massive “hack and dump” operation to divide Americans and support Trump. Foreign and domestic bad actors took advantage of loose social media policies and political polarization to spread false news.
In addition, the 2020 election saw right-wing domestic trolls and Republican campaigns employ Russian tactics – engaging in elaborate disinformation, harassment, doxing, and voter suppression efforts ultimately culminating in the Big Lie and the 1/6 insurrection. Social media remains rife with misinformation and malicious actors, many determined to target Democratic candidates.
Recognizing this new reality, the DNC has established a team of analysts devoted to the task of identifying and countering bad actors targeting Democratic campaigns online. The team is aiding campaigns, state parties, and party committees to combat online disinformation and social media manipulation and is acquiring tools and devoting human resources to detect social media threats, educate party stakeholders, and develop strategies to help counter attacks.
What we’re doing
The DNC has established a counter-misinformation unit within the DNC Technology team that serves as a knowledge base and intelligence unit aiding campaigns, state parties and party committees to combat online misinformation and social media manipulation. The team is engaging civil society, law enforcement, and social media platforms to identify, monitor, and disrupt these efforts. We are building and purchasing tools and devoting human resources to monitoring social media platforms, educating the Democratic ecosystem on the issue, working with campaigns to counter these tactics, and providing recommendations to interested parties on what they can do to help.
On the detection front, we’ve adopted a hybrid approach – purchasing commercial tools where available and building our own tools where we see a gap in the market. Existing commercial tools we’ve acquired allow us to detect trending content on social media and respond in real time. We’re also building a custom in-house tool, expanding on a tool the DCCC successfully used last cycle, to provide us with a detailed view of social media traffic and alert us to platform manipulation related to our candidates. The team has already used these tools and human intelligence to prompt takedowns or demotions of thousands of misleading or inauthentic posts on platforms controlled by all three major social media companies.
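As a simple illustration of the kind of alerting this sort of monitoring relies on, the sketch below flags hours in which candidate-mention volume spikes far above its recent baseline. The window size, threshold, and counts are illustrative assumptions, not a description of the DNC’s or any vendor’s actual tooling.

```python
# Minimal sketch of a volume-spike alert on candidate mentions.
# Assumes mention counts have already been collected per hour (e.g., from
# a platform API or a commercial monitoring feed); the window size and
# z-score threshold are illustrative, not actual operational parameters.
from statistics import mean, stdev

def spike_alerts(hourly_counts, window=24, z_threshold=3.0):
    """Flag hours where mention volume far exceeds the recent baseline."""
    alerts = []
    for i in range(window, len(hourly_counts)):
        baseline = hourly_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue
        z = (hourly_counts[i] - mu) / sigma
        if z >= z_threshold:
            alerts.append((i, hourly_counts[i], round(z, 1)))
    return alerts

# Example: a quiet day of chatter followed by a sudden surge worth reviewing.
counts = [40, 35, 50, 45, 38, 42, 55, 60, 48, 52, 41, 39,
          44, 47, 51, 43, 46, 49, 40, 45, 50, 42, 38, 44, 620]
print(spike_alerts(counts))  # the final hour is flagged
```

Real detection systems layer many more signals (account age, coordination patterns, link domains) on top of raw volume, but a spike like this is often the first cue for human review.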
To prepare the Democratic ecosystem for the coming onslaught, we are conducting counter-disinformation training and webinars and recommending counter-disinformation tools to campaigns, state parties, and our sister committees. We offer general security training, counter-human-intelligence training, and training on disinformation. We’ve also launched the DNC War Room to make sure we’re correcting the record online and holding the Trump White House accountable, and we’ve deployed organizers to battleground states earlier than ever to speak directly with voters.
Despite the hard work we’re doing at the DNC, this is a whole-of-society problem and we need many others to play their part. We have compiled a set of recommendations for various stakeholders. Please visit https://democrats.org/disinfo for more information.
DNC recommendations for combating online disinformation for:
General public
Countries that are resilient to disinformation and foreign influence rely on whole-of-society approaches focused on digital literacy and awareness of disinformation tactics. Here are some tips and additional resources to protect yourself and your networks from disinformation.
- Actively seek out information online from multiple authoritative sources. Information you seek out directly will usually be of higher quality than what you absorb passively on social media. Notice what percentage of your time you spend on authoritative news sites as opposed to news you get from social media. NewsGuard and MediaBiasFactCheck.com maintain comprehensive ratings of news outlets for partisanship and fact-based reporting. Install the NewsGuard browser plugin to help you navigate news sources online.
- Ask yourself who the author of online content is, why they posted the information, and what they are hoping you will do with it. Scrutinize the information you read before you share, especially if it confirms what you already believe to be true. Social media transparency features may be able to help you establish context.
- Avoid being manipulated by divisive or dishonest content. Social media often rewards the most outrageous, and frequently false, take on any event. Share content because it is true and helpful to others, not as a knee-jerk reaction to content that angers or scares you.
- If you see something untrue on social media, try to inject truth into the debate without attacking the sharer (they may be a victim of false content themselves). Fact-checkers like Snopes, AP FactCheck, PolitiFact, Factcheck.org, and Lead Stories may be able to help.
- Educate yourself on the tactics of online manipulators.
Watch
- New York Times’ Operation Infektion on Cold War Soviet disinformation
- NBC’s Factory of Lies on 2016 Russian election interference
- SmarterEveryDay’s YouTube series on social media manipulation.
- Listen to The Daily’s account of the business of Internet outrage.
- Read the Senate Intelligence Committee, New Knowledge, Knight Foundation, Harvard and Oxford reports on disinformation.
- Get news from reputable sources and support quality journalism – especially local journalism.
Learn about the sources and flow of right-wing disinformation like:
- “Pizzagate” on how online conspiracy theories fuel real-world violence.
- #JobsNotMobs on how false online memes become Republican slogans.
- “Fake Protests” in Austin on how false news goes viral online.
- Follow smart civil society leaders and groups like @DisinfoPortal, @MMFA, @Graphika_NYC, & @FSIStanford on Twitter.
- Don’t let yourself be manipulated. Be aware of Russian propaganda outlets like RT & Sputnik and educate yourself on Russian propaganda lines.
Read longer works documenting disinformation and propaganda:
- Messing with the Enemy: Surviving in a Social Media World of Hackers, Terrorists, Russians, and Fake News – Clint Watts
- The Plot to Destroy Democracy: How Putin and His Spies Are Undermining America and Dismantling the West – Malcolm Nance
- Twitter and Tear Gas – Zeynep Tufekci
- LikeWar: The Weaponization of Social Media – P.W. Singer
- 1984 – George Orwell
- Republic of Lies – Anna Merlan
- Report On The Investigation Into Russian Interference In The 2016 Presidential Election – US Department of Justice
Campaigns
Campaigns and state parties are focused on winning elections and have limited resources and options for confronting online disinformation. They are not powerless in this fight, however. In addition to the ongoing work the DNC does to assist campaigns and state parties, there are a number of concrete steps they can take to prepare for disinformation attacks.
- Establish a counter-disinformation lead within your organization — ideally a press or digital staffer, responsible for monitoring candidate social media traffic. The DNC will provide guidance on how to effectively counter disinformation, but campaigns need to be aware of what’s being said about their candidate online and take appropriate action.
- Educate yourself. Review the DNC Recommendations for Combating Disinformation – for General Public document and the DNC’s disinformation overview deck.
- Correct the record. Make sure your campaign is using the power of social media to its fullest to correct disinformation. While it is critical that we call on various platforms to take down false or fake information, the ecosystem cannot always rely on them, and campaigns should continue to work with the DNC to use their platforms to correct disinformation.
- Major political and breaking news events often leave an information vacuum that malicious actors seek to exploit. Campaigns should have an incident response plan in place to combat viral misinformation. The best time to plan for an information attack is before one happens! When contemplating a response to disinformation narratives, campaigns and state parties should consider whether the misinformation has reached a tipping point where the costs of ignoring it are higher than the costs of the amplification that a response might generate. Once that tipping point is reached, aggressive responses that seek to reframe the debate tend to be most effective.
- If you see something, say something. Set up an internal escalation path for reporting suspicious online activity, and make it part of your incident response plan. Report malicious or suspected malicious social media activity using in-platform reporting tools and via approved escalation paths.
Legislation
Congress has a significant role to play in defending Americans and our information space against foreign & domestic manipulation. Democratic leaders in Congress have proposed a number of common-sense reforms to combat misinformation, improve transparency, and protect user privacy — many of which have been blocked by Republican members of the U.S. Senate:
- The For the People Act – This bill addresses voter access, election integrity, election security, political spending, and ethics for the three branches of government.
- The For the People Act includes a version of the Honest Ads Act, a bill that expands source disclosure requirements for political advertisements by
- (1) establishing that paid internet and paid digital communications may qualify as “public communications” or “electioneering communications” that may be subject to such requirements, and
- (2) imposing additional requirements relating to the form of such source disclosures and the information contained within.
- The bill also requires certain online platform companies to maintain publicly available records about qualified political advertisements that have been purchased on their platforms.
- The Safeguarding Against Fraud, Exploitation, Threats, Extremism and Consumer Harms (SAFE TECH) Act would force online service providers to finally address harms or face potential civil liability. It does so by making clear that Section 230 of the Communications Decency Act:
- Doesn’t apply to ads or other paid content – ensuring that platforms cannot continue to profit as their services are used to target vulnerable consumers;
- Doesn’t bar injunctive relief – allowing victims to seek court orders where misuse of a provider’s services is likely to cause irreparable harm;
- Doesn’t impair enforcement of civil rights laws – maintaining the vital and hard-fought protections from discrimination even when activities or services are mediated by internet platforms;
- Doesn’t interfere with laws that address stalking/cyber-stalking or harassment and intimidation on the basis of protected classes – ensuring that victims of abuse and targeted harassment can hold platforms accountable when they directly enable harmful activity;
- Doesn’t bar wrongful death actions – allowing the family of a decedent to bring suit against platforms where they may have directly contributed to a loss of life;
- Doesn’t bar suits under the Alien Tort Claims Act – potentially allowing victims of platform-enabled human rights violations abroad (like the survivors of the Rohingya genocide) to seek redress in U.S. courts against U.S.-based platforms.
- The Protecting Americans from Dangerous Algorithms Act narrowly amends Section 230 of the Communications Decency Act to remove liability immunity for a platform if its algorithm is used to amplify or recommend content directly relevant to a case involving interference with civil rights, neglect to prevent interference with civil rights, or acts of international terrorism.
- The Health Misinformation Act:
- This bill would limit the liability protection that applies to a provider of an interactive computer service (e.g., a social media company) for claims related to content provided by third parties if a provider promotes health misinformation during a declared public health emergency.
- Specifically, the bill says the liability protection (sometimes referred to as Section 230 protection) shall not apply to a provider that promotes health misinformation using an algorithm unless the algorithm uses a neutral mechanism for the promotion, such as chronological functionality.
- The bill also requires the Department of Health and Human Services to issue guidance within 30 days on what constitutes health misinformation.
- The Countering Russian Influence Through Interagency Coordination And Leadership Act:
- This bill establishes two bodies to address malign foreign influence, the Russia Influence Group and the Commission on Countering Global Malign Influence.
- The Russia Influence Group shall (1) coordinate and provide guidance on interagency efforts to counter malign Russian influence in Europe and the United States, (2) regularly meet with federal departments and agencies to address specific malign Russian influence tools such as election interference and disinformation, (3) work with U.S. embassies and international actors to counter malign Russian influence in foreign countries, and (4) report to Congress a strategy to counter malign Russian influence.
- The Commission on Countering Global Malign Influence shall (1) examine global malign influence from sources such as China and Iran, (2) conduct a review of U.S. preparedness to counter such malign influence, and (3) issue a final report on its findings. The commission shall terminate six months after the report is submitted.
- The Digital Citizenship and Media Literacy Act (links to House and Senate versions) – To promote digital citizenship and media literacy.
- The Bot Disclosure and Accountability Act of 2019 (links to House and Senate versions) – To protect the right of the American public under the First Amendment to the Constitution of the United States to receive news and information from disparate sources by regulating the use of automated software programs intended to impersonate or replicate human activity on social media.
- The Deceptive Practices and Voter Intimidation Prevention Act of 2019:
- This bill generally prohibits deceptive practices, false statements, and voter interference regarding federal elections. Specifically, the bill prohibits any person, within 60 days before an election, from communicating, causing to be communicated, or producing for communication certain information on voting, if the person (1) knows such information to be materially false, and (2) has the intent to impede or prevent another person from exercising the right to vote in an election.
- The bill also prohibits false statements regarding public endorsements and hindering, interfering with, or preventing voting or registering to vote.
- A private right of action for preventive relief is established for persons aggrieved by violations of these prohibitions. Criminal penalties are also established for violations. If the Department of Justice (DOJ) receives a credible report that materially false information has been or is being communicated in violation of these prohibitions, DOJ must communicate to the public accurate information designed to correct the materially false information.
- The Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act of 2019 – To combat the spread of disinformation through restrictions on deep-fake video alteration technology.
- The Deep Fake Detection Prize Competition Act – To authorize the Director of the National Science Foundation to establish prize competitions related to deep fake detection technology.
- The Platform Accountability and Transparency Act – a multi-pronged bill that creates new mechanisms to increase transparency around social media companies’ internal data:
- Under PATA, independent researchers would be able to submit proposals to the National Science Foundation. If the requests are approved, social media companies would be required to provide the necessary data, subject to certain privacy protections.
- Companies that failed to comply would be subject to enforcement from the Federal Trade Commission (FTC) and face the potential loss of immunity under Section 230 of the Communications Decency Act.
- Additionally, the bill would give the FTC the authority to require that platforms proactively make certain information available to researchers or the public on an ongoing basis, such as a comprehensive ad library with information about user targeting and engagement.
- The proposal would also protect researchers from legal liability that may arise from automatically collecting platform information if they comply with various privacy safeguards.
- The Social Media Privacy Protection and Consumer Rights Act:
- This bill requires online platform operators to inform a user, prior to a user creating an account or otherwise using the platform, that the user’s personal data produced during online behavior will be collected and used by the operator and third parties. The operator must provide a user the option to specify privacy preferences, and an operator may deny certain services or complete access to a user if the user’s privacy elections create inoperability in the platform.
- The operator must (1) offer a user a copy of the personal data of the user that the operator has processed, free of charge, and in an electronic format; and (2) notify a user within 72 hours of becoming aware that the user’s data has been transmitted in violation of the security platform.
- A violation of the bill’s privacy requirements shall be considered an unfair or deceptive act or practice under the Federal Trade Commission Act. The Federal Trade Commission (FTC) may enforce this bill against common carriers regulated by the Federal Communications Commission under the Communications Act of 1934 and nonprofit organizations. Currently, common carriers regulated under that act are exempt from the FTC’s enforcement authority, and nonprofit organizations are subject to FTC enforcement only if they provide substantial economic benefit to their for-profit members.
- A state may bring a civil action in federal court regarding such violations.
In order to combat online disinformation and protect the integrity of our elections, Senate Republicans must work with Democrats to pass these crucial reforms. If you have one, call your Republican Senator(s) at (202) 224-3121 and ask them to support these pieces of legislation.
Social media (scorecard)
Social media has quickly become a major source of news for Americans. In a 2022 survey, half of Americans reported getting news from social media often or sometimes—including 31% of Americans from Facebook, 25% from YouTube, and 14% from Twitter.1
Despite huge numbers of Americans turning to social media for news, major social media companies haven’t done enough to take responsibility for information quality on their sites. As quickly as these sites have become destinations for news seekers, they have just as quickly become large propagators of disinformation and propaganda.
The DNC is working with major social media companies to combat platform manipulation and train our campaigns on how best to secure their accounts and protect their brands against disinformation. While progress has been made since the 2016 elections, social media companies still have much to do to reduce the spread of disinformation and combat malicious activity. Social media companies are ultimately responsible for combating abuse and disinformation on their systems, but as an interested party, we’ve compiled this comparative policy analysis to present social media companies with additional potential solutions.
While this analysis compares major social media companies, we also call on all other online advertisers and publishers to undertake similar efforts to promote transparency and combat disinformation on their sites.
1Social Media and News Fact Sheet, Pew Research Center, September 2022
1. Promote authoritative news over highly engaging news in content algorithms
Social media algorithms are largely responsible for determining what appears in our feeds and how prominently it appears. The decisions made by these algorithms are incredibly consequential, having the power to shape the beliefs and actions of billions of users around the world. Most social media algorithms are designed to maximize time spent on a site, which when applied to news tends to create a “race to the bottom of the brain stem”—boosting the most sensational, outrageous, and false content.
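To make the trade-off concrete, the hedged sketch below contrasts a ranking driven purely by predicted engagement with one that blends in a source-authority signal. The scores, weights, and the authority field are illustrative assumptions, not any platform’s actual formula.

```python
# Minimal sketch contrasting engagement-only ranking with a ranking that
# also weights source authority. All numbers here are made up for
# illustration; no platform's real scoring formula is represented.
posts = [
    {"title": "Outrageous rumor",   "engagement": 0.95, "authority": 0.10},
    {"title": "Local news report",  "engagement": 0.40, "authority": 0.90},
    {"title": "Wire-service story", "engagement": 0.55, "authority": 0.85},
]

def engagement_rank(items):
    """Race-to-the-bottom baseline: order purely by predicted engagement."""
    return sorted(items, key=lambda p: p["engagement"], reverse=True)

def blended_rank(items, authority_weight=0.6):
    """Blend predicted engagement with a source-authority score."""
    def score(p):
        return (1 - authority_weight) * p["engagement"] + authority_weight * p["authority"]
    return sorted(items, key=score, reverse=True)

print([p["title"] for p in engagement_rank(posts)])  # the rumor ranks first
print([p["title"] for p in blended_rank(posts)])     # authoritative reporting ranks first
```

The scorecard below notes, for each company, how far its ranking systems have moved from the first model toward the second.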
- Facebook’s News Feed, responsible for the vast majority of content consumed on Facebook, prioritizes user engagement as its primary signal for algorithmic boosts—which tends to amplify fear-mongering, outrage-bait, and false news content on the site. “Click-Gap,” an initiative rolled out in 2019, and partnerships with independent fact-checkers have reduced the reach of some very low-quality webpages, but the effective application of rules and policies has been repeatedly subject to political intervention from Facebook’s government relations team. The company temporarily employed News Ecosystem Quality (NEQ) ratings to promote high-quality news in November 2020, but has no public plans to permanently integrate the scores into News Feed rankings. Facebook began prioritizing articles and domains doing original reporting in mid-2020 and began generally reducing all political content in News Feed in 2021. These changes were not applied to Instagram.
- Twitter has employed anti-spam measures but has made little attempt to establish domain or account authority in algorithms, often boosting disinformation’s virality.
- YouTube and Google introduced PageRank into their news search and recommendation algorithms, raising the search visibility of authoritative news sources. Despite these efforts, YouTube still regularly elevates misinformation from disreputable sources via its “Up Next” algorithm.
- Snap’s Discover tab is open to authorized publishers only. Media partners publishing to Discover are vetted by Snap’s team of journalists and required to fact-check their content for accuracy. Within Discover, Snap does not elevate authoritative news sources over tabloid and viral news & opinion sites.
- Reddit relies on users voting to rank content within feeds. While the company doesn’t directly promote authoritative news, the site’s downvoting functionality allows users to demote untrustworthy news sources—resulting in a higher quality news environment than exists on many other social media sites.
- LinkedIn’s editorial team highlights trusted sources in its “LinkedIn News” panel, but the company has made no attempt to establish domain or account authority in feed algorithms.
- Nextdoor disables accounts found to repeatedly post harmful content, including misinformation. The company has not made attempts to establish account authority in feed rankings.
- TikTok uses information centers, public service announcements, and content labeling to elevate authoritative information, but does not prioritize authoritative sources in its “For You” algorithm.
- Twitch uses human curation to highlight creators on its home page. Channel recommendation algorithms do not take source authoritativeness into account.
2. Enforce a comprehensive political misinformation policy, with progressively severe penalties for repeat offenders
While social media companies have been reluctant to evaluate the truthfulness of content on their websites, it’s critical that these companies acknowledge misinformation when it appears. Establishing and enforcing progressively severe penalties for users that frequently publish misinformation is an important way to deter its publication and spread.
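As a toy illustration of what progressively severe penalties could look like, the sketch below tracks strikes per account and escalates the penalty tier with each repeat offense. The tiers and escalation steps are hypothetical, not any platform’s actual policy.

```python
# Minimal sketch of a "progressively severe penalties" strike system.
# The penalty tiers and the one-strike-per-violation escalation are
# hypothetical illustrations, not any platform's actual enforcement rules.
from dataclasses import dataclass

PENALTIES = ["warn", "reduce_reach", "suspend_7_days", "permanent_ban"]

@dataclass
class Account:
    user_id: str
    strikes: int = 0

def record_misinformation_strike(account: Account) -> str:
    """Add a strike and return the penalty tier that now applies."""
    account.strikes += 1
    tier = min(account.strikes, len(PENALTIES)) - 1
    return PENALTIES[tier]

acct = Account("example_user")
for _ in range(5):
    print(record_misinformation_strike(acct))
# warn, reduce_reach, suspend_7_days, permanent_ban, permanent_ban
```

A real policy would also need strike expiry, an appeals process, and consistent definitions of what counts as a violation; the point is simply that repeat offenders face escalating consequences rather than an endless series of first warnings.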
- Facebook has partnered with independent, IFCN-certified third parties to fact-check content and has employed a remove/reduce/inform strategy to combat misinformation. Ad content that is determined to be false by one of Facebook’s third-party fact-checking partners is removed. Organic content determined to be false is covered with a disclaimer, accompanied by a fact-check, and its reach is reduced by content algorithms. Users attempting to post or share previously fact-checked content are prompted with the fact-check before they post, and users who shared the content before it was fact-checked receive a notification to that effect. Unfortunately, Facebook’s third-party fact-checking program has largely failed to scale to the size of the site’s misinformation problem, allowing misinformers to freely repost previously fact-checked misinformation. The fact-checking program has not been extended to WhatsApp, and has been kneecapped by the company’s Republican-led government relations team. Content produced by elected officials and Facebook-approved candidates is exempt from Facebook’s fact-checking policy—a loophole exploited by some of the most prolific sources of misinformation on the site – and Facebook users can individually choose to opt out of fact-checking demotions.
- Twitter has developed a crowdsourced fact-checking program, “Community Notes,” that adds additional context to some potentially misleading tweets. Notes are applied using a “bridging” algorithm that has been criticized for prioritizing consensus over accuracy. Twitter has not provided details on how it plans to prevent misinformation from appearing in paid advertisements on the site. Changes to Twitter’s verification process in 2022 and 2023 have resulted in a surge of misleading impersonator accounts on the site.
- YouTube introduced news panels for breaking news events and some conspiracy content in 2018, and employs fact-check panels in YouTube Search for some popular searches debunked by IFCN fact-checkers. The company has established a policy against “demonstrably false claims that could significantly undermine participation or trust in an electoral or democratic process” in advertising content, a standard many bad actors have been able to evade. In December 2020, YouTube belatedly developed a policy and strike system against misinformation aimed at undermining confidence in democratic elections, which the company only began enforcing after the January 6 insurrection at the US Capitol and later reversed. YouTube does not apply fact-checks to videos published on its site.
- Snap uses an in-house team of fact-checkers to enforce rules against “false information that causes harm or is malicious.” Accounts that repeatedly post false or misleading information are subject to account restrictions.
- Reddit prohibits deceptive, untrue, or misleading claims in political advertisements and misleading information about the voting process under its impersonation policy. Enforcement of misinformation rules for organic posts is inconsistent, as subreddits like /r/conspiracy regularly post political misinformation with significant reach.
- LinkedIn has partnered with third-party fact-checkers to prohibit political and civic misinformation on its site. The company does not penalize accounts that repeatedly publish misinformation.
- Nextdoor prohibits election and political misinformation “designed to make that candidate seem unfit or ineligible for office” on its app. The company disables accounts that repeatedly author misinformation. Data access restrictions, however, have made it difficult to measure the enforcement of misinformation rules.
- TikTok has partnered with third-party fact-checkers to prohibit videos containing misinformation about elections or other civic processes, but does not enforce rules against other political misinformation.
- Twitch has partnered with anti-misinformation groups to remove serial misinformers under its Harmful Misinformation Actors policy. The company does not list non-civic political misinformation under the policy, nor does it evaluate individual claims made in streams.
3. Remove maliciously edited media and deepfakes
- Facebook announced changes to its policy on manipulated media in January 2020, banning deepfake videos produced using artificial intelligence or machine learning technology that include fabricated speech (except parody videos). The policy change does not cover other types of deepfakes or manipulated media, which, when not posted by an elected official or Facebook-approved political candidate, are subject to fact-checking by Facebook’s third-party fact-checkers.
- In February 2020, Twitter announced a policy to label manipulated media on the site. Under the policy, only manipulated media that is “likely to impact public safety or cause serious harm” will be removed. Manipulated or AI-synthesized videos on Twitter may be labeled by the site’s Community Notes program.
- YouTube’s stated policy prohibits manipulated media that “misleads users and may pose a serious risk of egregious harm.” Enforcement of the policy, however, has been inconsistent.
- Snap requires publishers on its Discover product to fact-check their content for accuracy. Snap enables user sharing only within users’ phone contacts lists and limits group sharing to 32 users at a time – making it difficult for maliciously manipulated media to spread on the app.
- Reddit prohibits misleading deepfakes and other manipulated media under its impersonation policy.
- LinkedIn prohibits deepfakes and other manipulated media intended to deceive.
- Nextdoor prohibits manipulated media under its misinformation policies.
- TikTok prohibits harmful synthetic and manipulated media under its misinformation policy. Enforcement of the policy, however, has been inconsistent.
- Twitch does not prohibit manipulated media in its community guidelines, but manipulated media can be the basis for channel removal under the company’s Harmful Misinformation Actors policy.
Media manipulation and deepfake technology employed against political candidates can seriously warp voter perceptions, especially given the trust social media users currently place in video. It’s critical that social media companies acknowledge the heightened threat of manipulated media and establish policies to prevent its spread.
4. Enforce rules on hate speech consistently and comprehensively
- Facebook’s automated hate speech detection systems have steadily improved since they were introduced in 2017, and the company has made important policy improvements to remove white nationalism, militarized social movements, and Holocaust denial from its site. Unfortunately, many of these policies are poorly enforced, and hateful content — especially content demonizing immigrants and religious minorities — still goes unactioned. In 2016, the company quashed internal findings suggesting that Facebook was actively promoting extremist groups on the site, only to end all political group recommendations after the site enabled much of the organizing behind the January 6, 2021 Capitol insurrection. The company also repeatedly went back on its promise to enforce its Community Standards against politicians in 2020.
- Twitter’s new ownership welcomed a surge in hate content and harassment on the site in 2022. Hate content pervades much of Twitter’s discourse and has been directly linked to offline violence. Twitter’s “Who to Follow” recommendation engine also actively prompts users to follow prominent sources of online hate and harassment.
- In 2020, YouTube took steps to remove channels run by hate group leaders and developed creator tools to reduce the prevalence of hate in video comment sections. Despite this progress, virulently racist and bigoted channels are still recommended by YouTube’s “Up Next” algorithms and monetized via YouTube’s Partner Program.
- Snap’s safety-by-design approach limits the spread of hateful or harassing speech, but hate and bigotry have repeatedly spread widely on the site.
- Reddit has made significant changes since a 2015 Southern Poverty Law Center article called the site home to “the most violently racist” content on the internet – introducing research into the impacts of hate speech and providing transparency into its reach on the site. Despite progress, researchers observed a spike in users experiencing harassment on the site in 2023.
- As a professional networking site, LinkedIn has established professional behavior standards, though some users report experiencing sustained abuse on the app.
- Nextdoor announced significant decreases in reports of harmful conduct after introducing reforms to its community guidelines policies starting in 2020. The company has not released data on the overall prevalence of hate speech on its app, and complaints of vigilantism on Nextdoor continue.
- TikTok strengthened its anti-hate policies in 2023, and has cultivated a strong community of social media users typically targeted with online hate. Despite progress, harassment based on race, religion, and gender is common in TikTok’s comment sections, and the app has even been found to be a potentially radicalizing force.
- Twitch considers a broad range of hateful conduct – including off-site behavior by streamers – to make enforcement decisions and has developed innovative approaches to chat moderation. Despite company efforts, Twitch has struggled to address abuse and “hate raid” brigading on the site – often targeting LGBTQ+ streamers and streamers of color in particular. Streamers from marginalized groups have also been targets of dangerous “swattings” by Twitch users.
Hate speech, also known as group defamation or group libel, is employed online to intimidate, exclude, and silence opposing points of view. Hateful and dehumanizing language targeted at individuals or groups based on their identity is harmful and can lead to real-world violence. While all major social media companies have developed extensive policies prohibiting hate speech, enforcement of those rules has been inconsistent. Unsophisticated, often outsourced content moderation is one cause of this inconsistency, as moderators lack the cultural context and expertise to accurately adjudicate content.
For example, reports have suggested that social media companies treat white nationalist terrorism differently than Islamic terrorism. There are widespread complaints from users that social media companies rarely take action on reported abuse. Often, the targets of abuse face consequences, while the perpetrators go unpunished. Some users, especially people of color, women, and members of the LGBTQIA community, have expressed the feeling that the “reporting system is almost like it’s not there sometimes.” (These same users can often be penalized by social media companies for highlighting hate speech employed against them.)
5. Prohibit the use of scaled automation
One way bad actors seek to artificially boost their influence online is through the use of automation. Using computer software to post to social media is not malicious in itself, but unchecked automation can allow small groups of people to artificially manipulate online discourse and drown out competing views.
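As a rough illustration, the sketch below flags accounts whose posting rate, duplicate-content share, or account age suggests scaled automation. The thresholds are hypothetical and far simpler than the signals platforms actually use.

```python
# Minimal sketch of a heuristic for flagging possible scaled automation.
# The thresholds (posts per minute, duplicate-text share, account age)
# are hypothetical illustrations, not any platform's actual rules.
from collections import Counter

def looks_automated(timestamps, texts, account_age_days,
                    max_rate_per_min=5, dup_share=0.8, min_age_days=7):
    """Return True if posting behavior resembles scaled automation."""
    if len(timestamps) < 2:
        return False
    minutes = (max(timestamps) - min(timestamps)) / 60 or 1
    rate = len(timestamps) / minutes                    # posts per minute
    most_common = Counter(texts).most_common(1)[0][1]
    duplication = most_common / len(texts)              # share of identical posts
    return (rate > max_rate_per_min
            or duplication > dup_share
            or (account_age_days < min_age_days and rate > 1))

# Example: fifty identical posts in five minutes from a day-old account.
timestamps = [i * 6 for i in range(50)]   # one post every six seconds
print(looks_automated(timestamps, ["Same message"] * 50, account_age_days=1))  # True
```

No single signal proves automation, which is why platform enforcement typically combines behavioral heuristics like these with network-level analysis of coordination.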
- Facebook has anti-spam policies designed to thwart bad actors seeking to artificially boost their influence through high-volume posting (often via automation). The website does not have a policy preventing artificial amplification of content posted via its Pages product, which is used by some bad actors to artificially boost reach.
- Twitter employs its own automation designed to combat coordinated manipulation and spam.
- YouTube detects scaled automation and video spam, thwarting efforts at coordinated manipulation.
- Snap’s Discover tab is open to verified publishers only, whose content is manually produced.
- Reddit has employed anti-spam measures and prohibits vote manipulation.
- LinkedIn has employed anti-spam measures to combat coordinated manipulation.
- Nextdoor prohibits spam and inauthentic accounts on its app.
- TikTok prohibits coordinated inauthentic behaviors to exert influence or sway public opinion.
- Twitch has employed anti-spam measures to combat coordinated manipulation.
6. Require account transparency & authenticity
Requiring social media users to be honest and transparent about who they are limits the ability of bad actors to manipulate online discourse. This is especially true of foreign bad actors, who are forced to create inauthentic or anonymous accounts to infiltrate American communities online. While there may be a place for anonymous political speech, account anonymity makes deception and community infiltration fairly easy for bad actors. Sites choosing to allow anonymous accounts should consider incentivizing transparency in content algorithms.
- Facebook has established authenticity as a requirement for user accounts and employed automation and human flagging to improve its fake account detection. The company’s authenticity policy does not apply to Facebook’s Pages (a substantial portion of content) or Instagram products, which have been exploited by a number of foreign actors, both politically and commercially motivated.
- Twitter has policies on impersonation and misleading profile information, but does not require user accounts to represent authentic humans. Changes to Twitter’s verification process in 2022 and 2023 have resulted in a surge of misleading impersonator accounts on the site.
- YouTube allows pseudonymous publishing and commenting, and does not require accounts producing or interacting with video content to be authentic. Google Search and Google News consider anonymous or low-transparency publishers low quality and downrank them in favor of more transparent publishers.
- Snap’s “Stars” and “Discover” features are available to select, verified publishers only. Snap allows pseudonymous accounts via its “chat” and “story” function, but makes it difficult for such accounts to grow audiences.
- Reddit prohibits impersonation, but encourages pseudonymous user accounts and does not require user accounts to represent authentic humans.
- LinkedIn requires users to use their real names on the site.
- Nextdoor requires users to use their real names on the site.
- TikTok has policies on impersonation and misleading profile information, but does not require user accounts to represent authentic humans or organizations.
- Twitch allows pseudonymous publishing and commenting, and does not require accounts producing or interacting with video content to be authentic.
7. Restrict the distribution of accounts posting plagiarized and unoriginal content
Plagiarism and aggregation of content produced by others is another way foreign actors are able to infiltrate domestic online communities. Foreign bad actors often have a limited understanding of the communities they want to target, which means they have difficulty producing relevant original content. To build audiences online, they frequently resort to plagiarizing content created within the community.
For example, Facebook scammers in Macedonia have admitted in interviews to plagiarizing their stories from U.S. outlets. In a takedown of suspected Russian Internet Research Agency accounts on Instagram, a majority of accounts re-posted content originally produced and popularized by Americans. While copyright infringement laws do apply to social media companies, current company policies are failing to prevent foreign bad actors from infiltrating domestic communities through large-scale intellectual property theft.
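One common way to detect this kind of large-scale copying is near-duplicate matching. The sketch below uses word-shingle Jaccard similarity; the shingle size, threshold, and example text are illustrative assumptions, not any platform’s production system.

```python
# Minimal sketch of near-duplicate detection for spotting plagiarized or
# lightly reworded copies of original content. Shingle size and the
# similarity threshold are illustrative choices.
def shingles(text, n=5):
    """Break text into overlapping n-word sequences."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Similarity of two shingle sets, from 0.0 (disjoint) to 1.0 (identical)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def is_near_duplicate(candidate, original, threshold=0.6):
    return jaccard(shingles(candidate), shingles(original)) >= threshold

original = ("Volunteers in the county gathered over the weekend to organize a new "
            "voter registration drive, knocking on doors and handing out information "
            "about the upcoming primary election.")
copied = original.replace("gathered", "met")   # one-word rewording of the original
print(is_near_duplicate(copied, original))     # True (similarity is roughly 0.64)
```

At platform scale the same idea is implemented with hashing techniques such as MinHash, but the underlying question is identical: how much of an account’s output is copied from someone else?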
- Facebook downranks domains with stolen content and prioritizes original news reporting. Originality has also been incorporated into ranking algorithms on Instagram.
- Twitter has removed content farm accounts and identical content across automated networks, but does not encourage original content production.
- Google Search/News/YouTube considers plagiarism, copying, article spinning, and other forms of low-effort content production to be “low quality” and downranks it heavily in favor of publishers of original content.
- Snap’s Discover tab is open to verified sources publishing original content only.
- Reddit has introduced an “OC” tag for users to mark original content and relies on community moderators and users to demote duplicate content, with mixed success.
- Nextdoor prohibits templated, repetitive, or unoriginal content on its app.
- TikTok doesn’t recommend duplicated content in its #ForYou feed.
- Twitch prohibits creators from stealing content and posting content from other sites.
8. Make content recommendations transparent to journalists/academics
Social media algorithms are largely responsible for deciding what appears in our feeds and how prominently it appears. The decisions made by these algorithms are incredibly consequential, having the power to manipulate the beliefs and actions of billions of users around the world. Understanding how social media algorithms work is crucial to understanding what these companies value and what types of content are benefiting from powerful algorithmic boosts.
- While Facebook provides some in-product algorithmic transparency and has enabled some academic research access through its FORT initiative, the company has released few details on the inner workings of its News Feed algorithm, which is responsible for the vast majority of content consumed on the site. What details are known about the algorithm’s design and behavior have largely come from public whistleblowers.
- Twitter released its “Home Timeline” and “Community Notes” algorithm code in 2022 and 2023, and published a pair of internal studies into aspects of its algorithms’ behavior. The disclosures fall short of a full picture of how the company’s algorithms behave.
- Google’s Search and News ranking algorithms (integrated into YouTube search) have been thoroughly documented. YouTube has provided little detail into its content recommendations, however, which power 70% of content consumed on the site.
- Snap has provided a very basic overview of how it ranks content on its Discover tab. Details of the algorithm and how the company chooses publishers it contracts with have not been made available.
- Reddit’s ranking algorithms are partially open source, with remaining non-public aspects limited to efforts to prevent vote manipulation.
- LinkedIn has provided little transparency into how it ranks content in feeds.
- Nextdoor has provided little transparency into how it ranks content in feeds.
- TikTok has opened a transparency center in Los Angeles and published a basic overview of the company’s “For You” algorithm on its website. Journalists who have visited the transparency center reported learning little about the algorithm or its behavior.
- Twitch has provided little transparency into how it recommends channels.
9. Make public content and engagement transparent to journalists/academics
Understanding the health of social media discourse requires access to data on that discourse. Social media companies have taken very different approaches to transparency on their sites.
- Facebook (Meta) rolled out an initial version of its FORT research effort in November 2021, which provides some text-based public content data to select academic researchers. The company announced plans to shutter its CrowdTangle content transparency tool in 2022.
- Twitter’s API pricing, introduced in 2023, has made data access prohibitively expensive for most academics and journalists.
- YouTube allows approved academic researchers to study its content and engagement. The company has also developed a public API that allows for public analysis of video summary information and engagement (a minimal usage sketch appears after this list). Video transcripts are not available via the public API.
- All public content is available to users via the Snapchat app. Snap does not make Discover interaction data public.
- Reddit’s open API allows content and engagement to be easily analyzed by independent researchers, journalists, and academics.
- LinkedIn has not made public content or engagement data available to researchers.
- Nextdoor has made some content data available to select academic researchers, but data access for research purposes is largely unavailable.
- TikTok launched a research API limited to approved academic research projects in 2023.
- Twitch has developed a public API that allows for public analysis of video summary information and engagement. Video transcripts are not available via the public API.
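As an example of the kind of public data access described above, the sketch below queries the YouTube Data API v3 (the videos.list endpoint with part=statistics) for a single video’s public engagement counts. The API key and video ID are placeholders, and any real research use is subject to quota limits and the API’s terms of service.

```python
# Minimal sketch of pulling public engagement data for one video from the
# YouTube Data API v3. The key and video ID below are placeholders.
import requests

API_KEY = "YOUR_API_KEY"      # placeholder: obtain a key from the Google Cloud Console
VIDEO_ID = "dQw4w9WgXcQ"      # placeholder video ID

def fetch_video_stats(video_id: str, api_key: str) -> dict:
    """Return the title and public engagement counts for a single video."""
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/videos",
        params={"part": "snippet,statistics", "id": video_id, "key": api_key},
        timeout=10,
    )
    resp.raise_for_status()
    item = resp.json()["items"][0]
    return {
        "title": item["snippet"]["title"],
        "views": item["statistics"].get("viewCount"),
        "likes": item["statistics"].get("likeCount"),
        "comments": item["statistics"].get("commentCount"),
    }

if __name__ == "__main__":
    print(fetch_video_stats(VIDEO_ID, API_KEY))
```

Researchers studying discourse at scale would page through many videos and store the results, but even this single call illustrates why open, documented APIs matter: the same measurement is far harder on platforms that expose no public data.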
10. Make advertising content related to political issues easily accessible and transparent
Political ads are not limited to just those placed by candidates. Political action committees and interest groups spend large sums of money advocating for issues in a way that affects political opinion and the electoral choices of voters. Public understanding of these ad campaigns is crucial to understanding who is trying to influence political opinion and how.
The issue of opacity in political ads is compounded when combined with precise, malicious microtargeting and disinformation. Current policies at some major social media companies allow highly targeted lies to be directed at key voting groups in a way that is hidden from public scrutiny.
- Facebook has taken a broad definition of political speech and included issue ads in its political ad transparency initiative.
- Twitter announced it would begin relaxing its political ad ban in 2023, launching an ineffective “by-request” ad transparency tool.
- Google and YouTube announced a new Ads Transparency Center in March 2023 that includes a comprehensive library of political and non-political ads.
- Snap has taken a broad definition of political speech and included issue and campaign-related ads in its political ad library.
- Reddit has taken a broad definition of political speech and included issue and campaign-related ads in its political ad library.
- LinkedIn does not allow political ads.
- Nextdoor does not allow political ads.
- TikTok purports to ban political ads, but experiments with the app suggest the company’s enforcement has been ineffective.
- Twitch does not allow political ads.
11. Fully disclose state-backed influence operation content
When state-sponsored influence operations are detected, it’s crucial that social media companies be transparent about how their sites were weaponized. Fully releasing datasets allows the public and research community to understand the complete nature of the operation, its goals, methods, and size. Full transparency also allows independent researchers to further expound on company findings, sometimes uncovering media assets not initially detected.
- Twitter’s new ownership has suspended its moderation research consortium.
- Facebook provides sample content from state-backed information operations via third-party analysis.
- Google/YouTube provides only sample content from state-backed information operations.
- There is no public evidence of state-backed information operations on Snap.
- Reddit has disclosed full datasets of state-backed information operations.
- LinkedIn has not disclosed content from state-backed information operations.
- There is no public evidence of state-backed information operations on Nextdoor.
- TikTok provides summary statistics of its enforcement actions against state-backed influence operations. The company is owned by ByteDance, Inc. – a Chinese internet technology company. Public reporting suggests that TikTok’s moderation decisions may be influenced by the Chinese government.
- There is no public evidence of state-backed influence operations on Twitch.
12. Label state-controlled media content
State-controlled media combines the opinion-making power of news with the strategic backing of government. Separate from publicly funded, editorially independent outlets, state-controlled media tends to present news material in ways that are biased in favor of the controlling government (i.e. propaganda).
- Facebook has introduced labeling for state-controlled media.
- YouTube has introduced labeling on all state-controlled media on its site.
- Twitter removed labels for state-controlled media in 2023.
- Snap has not introduced labeling of content from state-controlled media.
- Reddit has banned links to Russian state-controlled media but has not introduced labeling for content from other state-controlled media outlets.
- LinkedIn has not introduced labeling of content from state-controlled media.
- To date, Nextdoor has not allowed state-controlled media to author content on its site.
- TikTok has introduced labeling for some state-controlled media content.
- Twitch has banned Russian state media but has not introduced labeling for other state-controlled media content.
13. End partnerships with state-controlled media
As outlined in policy #12, state-controlled media combines the opinion-making power of news with the strategic backing of government. Separate from publicly funded, editorially independent outlets, state-controlled media tends to present news material in ways that are biased towards the goal of the controlling government (i.e. propaganda). Allowing these outlets to directly place content in front of Americans via advertising and/or profit from content published on their sites enhances these outlets’ ability to propagandize and misinform the public.
- Facebook has ended its business relationships with Russian state-controlled media and announced it would temporarily ban all state-controlled media advertising in the US ahead of the November 2020 election.
- Twitter has banned advertising from state-controlled media.
- YouTube has removed Russian state-controlled media channels but allows other state-controlled media to advertise and participate in its advertising programs.
- Snap partners with state-controlled media in its “Discover” tab.
- To date, Reddit has not allowed state-controlled media to advertise on its site. Politics-related news advertisements are subject to the company’s political ad policy.
- To date, LinkedIn has not partnered with state-controlled media.
- To date, Nextdoor has not partnered with state-controlled media.
- TikTok is owned by ByteDance, Inc. – a Chinese internet technology company. Public reporting suggests that TikTok’s moderation decisions may be influenced by the Chinese government.
- Twitch has removed Russian state-controlled media channels but allows other state-controlled media to advertise and participate in its advertising programs.
14. Restrict the distribution of hacked materials
One particularly damaging tactic used by nation-state actors to disinform voters is “hack and dump” operations, where stolen documents are selectively published, forged, and/or distorted. While much of the burden of responsibly handling hacked materials falls on professional media, social media companies can play a role in restricting the distribution of these materials.
- Facebook’s “newsworthiness” exception to its hacked materials policy renders it ineffective against political hack-and-dump operations.
- Twitter’s hacked materials policy prohibits the direct distribution of hacked materials by hackers, but does not prohibit the publication of hacked materials by other Twitter users.
- Google and YouTube have established a policy banning content that facilitates access to hacked materials, and they demonetize pages hosting hacked content.
- Snap’s semi-closed Discover product makes it near-impossible for hackers or their affiliates to publish hacked materials for large audiences directly to its app.
- Reddit’s rules prohibit the sharing of personal or confidential information.
- LinkedIn prohibits the sharing of personal or sensitive information.
- Nextdoor prohibits the publication of private materials.
- TikTok does not prohibit the publication of hacked materials.
- Twitch prohibits any unauthorized sharing of private information.
Online disinformation is a whole-of-society problem that requires a whole-of-society response. Please visit democrats.org/disinfo for our full list of counter-misinformation recommendations.
News media (Stanford Policy Center guidelines)
10 Guidelines and a Template for Every Newsroom
The run-up to the 2016 U.S. presidential election illustrated how vulnerable our most venerated journalistic outlets are to a new kind of information warfare. Reporters are targets of foreign and domestic actors who want to harm our democracy. To cope with this threat, especially in an election year, news organizations need to prepare for another wave of false, misleading, and hacked information. Often, the information will be newsworthy. Expecting reporters to refrain from covering news goes against core principles of American journalism and the practical business drivers that shape the intensely competitive media marketplace. In these cases, the question is not whether to report but how to do so most responsibly. Our goal is to give journalists actionable guidance.
Specifically, we recommend that news organizations:
- Adopt a playbook—we present one below—of core principles and standards for reporting on newsworthy events involving false, misleading, and hacked information;
- Establish a repeatable, enterprise-wide process for implementing the playbook;
- Commit their senior leaders to ensuring the success of these initiatives across vast newsrooms siloed into distinct areas of responsibility—from political coverage to national security reporting to social media teams.
Read more from the Stanford Policy Center here.