SOCIAL MEDIA MISINFORMATION SCORECARD - DNC

Comparative Social Media Policy Analysis

Social media has quickly become a major source of news for Americans. In a 2022 survey, half of Americans reported getting news from social media often or sometimes—including 31% from Facebook, 25% from YouTube, and 14% from Twitter.¹ Despite huge numbers of Americans turning to social media for news, major social media companies haven’t done enough to take responsibility for information quality on their sites. As quickly as these sites have become destinations for news seekers, they have just as quickly become large propagators of disinformation and propaganda.

The DNC is working with major social media companies to combat platform manipulation and train our campaigns on how best to secure their accounts and protect their brands against disinformation. While progress has been made since the 2016 elections, social media companies still have much to do to reduce the spread of disinformation and combat malicious activity. Social media companies are ultimately responsible for combating abuse and disinformation on their systems, but as an interested party, we’ve compiled this comparative policy analysis to present social media companies with additional potential solutions. 

While this analysis compares major social media companies, we also call on all other online advertisers and publishers to undertake similar efforts to promote transparency and combat disinformation on their sites.

¹ Social Media and News Fact Sheet, Pew Research Center, September 2022.

Explanation of Policy Determinations

1. Promote authoritative news over highly engaging news in content algorithms
Social media algorithms are largely responsible for determining what appears in our feeds and how prominently it appears. The decisions made by these algorithms are incredibly consequential, having the power to shape the beliefs and actions of billions of users around the world. Most social media algorithms are designed to maximize time spent on a site, which when applied to news tends to create a “race to the bottom of the brain stem”—boosting the most sensational, outrageous, and false content.
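
To make the contrast concrete, the sketch below compares a purely engagement-driven ranker with one that blends in a source-quality weight of the kind a publisher-authority score could supply. It is a deliberately simplified, hypothetical illustration, not any platform's actual system; all post titles, scores, and weights are invented.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_engagement: float  # model's click/share prediction, 0-1 (invented numbers)
    source_quality: float        # publisher-authority score, 0-1 (invented numbers)

posts = [
    Post("SHOCKING: miracle cure THEY don't want you to see", 0.92, 0.05),
    Post("City council approves new transit budget", 0.35, 0.90),
    Post("Fact-check: viral quote was fabricated", 0.40, 0.95),
]

def engagement_only(post: Post) -> float:
    # Pure engagement optimization: the most sensational item wins.
    return post.predicted_engagement

def authority_weighted(post: Post, quality_weight: float = 0.6) -> float:
    # Blend engagement with source quality so low-authority sensationalism is demoted.
    return (1 - quality_weight) * post.predicted_engagement + quality_weight * post.source_quality

print("Engagement-only ranking:")
for p in sorted(posts, key=engagement_only, reverse=True):
    print("  ", p.title)

print("Authority-weighted ranking:")
for p in sorted(posts, key=authority_weighted, reverse=True):
    print("  ", p.title)
```

Under the engagement-only scorer the sensational post tops the feed; once source quality carries weight, reporting from higher-authority outlets ranks first.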

Facebook’s News Feed, responsible for the vast majority of content consumed on Facebook, prioritizes user engagement as its primary signal for algorithmic boosts—which tends to amplify fear-mongering, outrage-bait, and false news content on the site. “Click-Gap,” an initiative rolled out in 2019, and partnerships with independent fact-checkers have reduced the reach of some very low-quality webpages, but the effective application of rules and policies has been repeatedly subject to political intervention from Facebook’s government relations team. The company temporarily employed News Ecosystem Quality (NEQ) ratings to promote high-quality news in November 2020, but has no public plans to permanently integrate the scores into News Feed rankings. Facebook began prioritizing articles and domains doing original reporting in mid-2020 and began broadly reducing political content in News Feed in 2021. These changes were not applied to Instagram.

Twitter has employed anti-spam measures but has made little attempt to establish domain or account authority in its algorithms, often boosting disinformation’s virality.

YouTube and Google have introduced PageRank into their news search and recommendation algorithms, raising the search visibility of authoritative news sources. Despite these efforts, YouTube still regularly elevates misinformation from disreputable sources via its “Up Next” algorithm.
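
PageRank, at its core, estimates a page’s authority from the link graph: pages linked to by other well-linked pages score higher. The sketch below is a minimal, generic power-iteration implementation on an invented link graph, not Google’s production ranking system; the domain names and damping factor are illustrative only.

```python
# Minimal power-iteration PageRank over an invented link graph.
links = {
    "wire-service.example": ["local-paper.example", "blog.example"],
    "local-paper.example": ["wire-service.example"],
    "blog.example": ["wire-service.example", "local-paper.example"],
    "content-farm.example": ["wire-service.example"],  # nobody links back to it
}

damping = 0.85
nodes = list(links)
rank = {n: 1.0 / len(nodes) for n in nodes}

for _ in range(50):  # power iteration until scores stabilize
    new_rank = {n: (1 - damping) / len(nodes) for n in nodes}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank

# Widely cited domains end up with much higher authority scores than
# domains that nobody links to.
for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```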

Snap’s Discover tab is open to authorized publishers only. Media partners publishing to Discover are vetted by Snap’s team of journalists and required to fact-check their content for accuracy. Within Discover, Snap does not elevate authoritative news sources over tabloid and viral news & opinion sites.

Reddit relies on users voting to rank content within feeds. While the company doesn’t directly promote authoritative news, the site’s downvoting functionality allows users to demote untrustworthy news sources—resulting in a higher quality news environment than exists on many other social media sites.
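
For reference, the “hot” formula below is a reconstruction of Reddit’s previously open-sourced link-ranking code as it has been widely documented; the current production ranking may differ. It shows why downvoting matters: downvotes subtract directly from a post’s net score, so a heavily downvoted link sinks even if it also drew many upvotes.

```python
from datetime import datetime, timezone
from math import log10

# Reconstruction of Reddit's widely documented, previously open-sourced "hot"
# ranking formula for links; the current production algorithm may differ.
EPOCH = datetime(2005, 12, 8, 7, 46, 43, tzinfo=timezone.utc)

def hot(ups: int, downs: int, posted: datetime) -> float:
    score = ups - downs                  # downvotes subtract directly from the score
    order = log10(max(abs(score), 1))    # each 10x in net votes adds one point
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = (posted - EPOCH).total_seconds()
    return round(sign * order + seconds / 45000, 7)  # newer posts earn a steady time bonus

now = datetime.now(timezone.utc)
print(hot(500, 40, now))   # well-received link
print(hot(500, 900, now))  # heavily downvoted link scores far lower despite 500 upvotes
```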

LinkedIn’s editorial team highlights trusted sources in its “LinkedIn News” panel, but the company has made no attempt to establish domain or account authority in feed algorithms.

Nextdoor disables accounts found to repeatedly post harmful content, including misinformation. The company has not made attempts to establish account authority in feed rankings.

TikTok uses information centers, public service announcements, and content labeling to elevate authoritative information, but does not prioritize authoritative sources in its “For You” algorithm.

Twitch uses human curation to highlight creators on its home page. Channel recommendation algorithms do not take source authoritativeness into account.

2. Enforce a comprehensive political misinformation policy, with progressively severe penalties for repeat offenders
While social media companies have been reluctant to evaluate the truthfulness of content on their websites, it’s critical that these companies acknowledge misinformation when it appears. Establishing and enforcing progressively severe penalties for users that frequently publish misinformation is an important way to deter its publication and spread.

Facebook has partnered with independent, IFCN-certified third parties to fact-check content and has employed a remove/reduce/inform strategy to combat misinformation. Ad content that is determined to be false by one of Facebook’s third-party fact-checking partners is removed. Organic content determined to be false is covered with a disclaimer, accompanied by a fact-check, and its reach is reduced by content algorithms. Users attempting to post or share previously fact-checked content are prompted with the fact-check before they post, and users who shared the content before it was fact-checked receive a notification to that effect. Unfortunately, Facebook’s third-party fact-checking program has largely failed to scale to the size of the site’s misinformation problem, including allowing misinformers to freely repost previously fact-checked misinformation. The fact-checking program has not been extended to WhatsApp, and has been kneecapped by the company’s Republican-led government relations team. Content produced by elected officials and Facebook-approved candidates is exempt from Facebook’s fact-checking policy—a loophole exploited by some of the most prolific sources of misinformation on the site—and Facebook users can individually choose to opt out of fact-checking demotions.

Twitter has developed a crowdsourced fact-checking program, “Community Notes,” that adds context to some potentially misleading tweets. Notes are applied using a “bridging” algorithm that has been criticized for prioritizing consensus over accuracy. Twitter has not provided details on how it plans to prevent misinformation from appearing in paid advertisements on the site. Changes to Twitter’s verification process in 2022 and 2023 have resulted in a surge of misleading impersonator accounts on the site.
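
For context, the sketch below illustrates the general “bridging” idea behind Community Notes as described in public materials: fit a simple matrix-factorization model to rater/note ratings and score each note by its intercept, so only notes rated helpful across otherwise-disagreeing rater groups score highly. This is a toy illustration on invented data, not Twitter’s production algorithm or thresholds.

```python
import numpy as np

# Toy ratings: rows = raters, cols = notes; 1 = "helpful", 0 = "not helpful".
# Raters 0-1 and raters 2-3 behave like two opposing camps; note 2 is rated
# helpful by both camps, notes 0 and 1 by only one camp each.
R = np.array([
    [1.0, 0.0, 1.0],
    [1.0, 0.0, 1.0],
    [0.0, 1.0, 1.0],
    [0.0, 1.0, 1.0],
])

n_raters, n_notes = R.shape
rng = np.random.default_rng(0)
mu = 0.0
b_user = np.zeros(n_raters)            # rater intercepts
b_note = np.zeros(n_notes)             # note intercepts ("cross-viewpoint helpfulness")
f_user = rng.normal(0, 0.1, n_raters)  # 1-D rater factor (viewpoint)
f_note = rng.normal(0, 0.1, n_notes)   # 1-D note factor (viewpoint)

lr, reg = 0.05, 0.03
for _ in range(2000):
    for u in range(n_raters):
        for n in range(n_notes):
            pred = mu + b_user[u] + b_note[n] + f_user[u] * f_note[n]
            err = R[u, n] - pred
            mu += lr * err
            b_user[u] += lr * (err - reg * b_user[u])
            b_note[n] += lr * (err - reg * b_note[n])
            fu, fn = f_user[u], f_note[n]
            f_user[u] += lr * (err * fn - reg * fu)
            f_note[n] += lr * (err * fu - reg * fn)

# A note would be surfaced only if its intercept clears a threshold; partisan
# agreement gets absorbed by the factor term instead of the intercept.
for n in range(n_notes):
    print(f"note {n}: helpfulness intercept = {b_note[n]:+.2f}")
```

In this toy data, the note rated helpful by both camps gets the highest intercept, while notes endorsed by only one camp do not, which is the consensus-first behavior critics point to.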

YouTube introduced news panels for breaking news events and some conspiracy content in 2018, and employs fact-check panels in YouTube Search for some popular searches debunked by IFCN fact-checkers. The company has established a policy against “demonstrably false claims that could significantly undermine participation or trust in an electoral or democratic process” in advertising content, a standard many bad actors have been able to evade. In December 2020, YouTube belatedly developed a policy and strike system against misinformation aimed at undermining confidence in democratic elections, which the company only began enforcing after the January 6 insurrection at the US Capitol and later reversed. YouTube does not apply fact-checks to videos published on its site.

Snap uses an in-house team of fact-checkers to enforce rules against  “false information that causes harm or is malicious.” Accounts that repeatedly post false or misleading information are subject to account restrictions.

Reddit prohibits deceptive, untrue, or misleading claims in political advertisements and misleading information about the voting process under its impersonation policy. Enforcement of misinformation rules for organic posts is inconsistent, as subreddits like /r/conspiracy regularly post political misinformation with significant reach.

 LinkedIn has partnered with third-party fact-checkers to prohibit political and civic misinformation on its site. The company does not penalize accounts that repeatedly publish misinformation.

Nextdoor prohibits election and political misinformation “designed to make that candidate seem unfit or ineligible for office” on its app. The company disables accounts that repeatedly author misinformation. Data access restrictions, however, have made it difficult to measure the enforcement of misinformation rules. 

TikTok has partnered with third-party fact-checkers to prohibit videos containing misinformation about elections or other civic processes, but does not enforce rules against other political misinformation.

Twitch has partnered with anti-misinformation groups to remove serial misinformers under its Harmful Misinformation Actors policy. The company does not list non-civic political misinformation under the policy nor does it evaluate individual claims made in streams.

3. Remove maliciously edited media and deepfakes
Media manipulation and deepfake technology employed against political candidates can have the potential to seriously warp voter perceptions, especially given the trust social media users currently place in video. It’s critical that social media companies acknowledge the heightened threat of manipulated media and establish policies to prevent its spread.

Facebook announced changes to its policy on manipulated media in January 2020, banning deepfake videos produced using artificial intelligence or machine learning technology that include fabricated speech (except parody videos). The policy change does not cover other types of deepfakes or manipulated media, which, when not posted by an elected official or Facebook-approved political candidate, are subject to fact-checking by Facebook’s third-party fact-checkers.

In February 2020, Twitter announced a policy to label manipulated media on the site. Under the policy, only manipulated media that is “likely to impact public safety or cause serious harm” will be removed. Manipulated or AI-synthesized videos on Twitter may be labeled by the site’s Community Notes program.

YouTube’s stated policy prohibits manipulated media that “misleads users and may pose a serious risk of egregious harm.” Enforcement of the policy, however, has been inconsistent.

Snap requires publishers on its Discover product to fact-check their content for accuracy. Snap enables user sharing only within users’ phone contacts lists and limits group sharing to 32 users at a time, making it difficult for maliciously manipulated media to spread on the app.

Reddit prohibits misleading deepfakes and other manipulated media under its impersonation policy.

LinkedIn prohibits deepfakes and other manipulated media intended to deceive.

Nextdoor prohibits manipulated media under its misinformation policies.

TikTok’s stated policy prohibits harmful synthetic and manipulated media under its misinformation policy. Enforcement of the policy, however, has been inconsistent.

Twitch does not prohibit manipulated media in its community guidelines, but manipulated media can be the basis for channel removal under the company’s Harmful Misinformation Actors policy.

4. Enforce rules on hate speech consistently and comprehensively

Hate speech, also known as group defamation or group libel, is employed online to intimidate, exclude, and silence opposing points of view. Hateful and dehumanizing language targeted at individuals or groups based on their identity is harmful and can lead to real-world violence. While all major social media companies have developed extensive policies prohibiting hate speech, enforcement of those rules has been inconsistent. Unsophisticated, often outsourced content moderation is one cause of this inconsistency, as moderators lack the cultural context and expertise to accurately adjudicate content.

For example, reports have suggested that social media companies treat white nationalist terrorism differently than Islamic terrorism. There are widespread complaints from users that social media companies rarely take action on reported abuse. Often, the targets of abuse face consequences, while the perpetrators go unpunished. Some users, especially people of color, women, and members of the LGBTQIA community, have expressed the feeling that the “reporting system is almost like it’s not there sometimes.” (These same users can often be penalized by social media companies for highlighting hate speech employed against them.)

Facebook’s automated hate speech detection systems have steadily improved since they were introduced in 2017, and the company has made important policy improvements to remove white nationalism, militarized social movements, and Holocaust denial from its site. Unfortunately, many of these policies are poorly enforced, and hateful content — especially content demonizing immigrants and religious minorities — still goes unactioned. In 2016, the company quashed internal findings suggesting that Facebook was actively promoting extremist groups on the site, only to end all political group recommendations after the site enabled much of the organizing behind the January 6, 2021 Capitol insurrection. The company also repeatedly went back on its promise to enforce its Community Standards against politicians in 2020.

Twitter’s new ownership welcomed a surge in hate content and harassment on the site in 2022. Hate content pervades much of Twitter’s discourse and has been directly linked to offline violence. Twitter’s “Who to Follow” recommendation engine also actively prompts users to follow prominent sources of online hate and harassment.

 In 2020, YouTube took steps to remove channels run by hate group leaders and developed creator tools to reduce the prevalence of hate in video comment sections. Despite this progress, virulently racist and bigoted channels are still recommended by YouTube’s “Up Next” algorithms and monetized via YouTube’s Partner Program.

Snap’s safety-by-design approach limits the spread of hateful or harassing speech, but hate and bigotry have repeatedly spread widely on the site.

Reddit has made significant changes since a 2015 Southern Poverty Law Center article called the site home to “the most violently racist” content on the internet, introducing research into the impacts of hate speech and providing transparency into its reach on the site. Despite progress, researchers observed a spike in users experiencing harassment on the site in 2023.

As a professional networking site, LinkedIn has established professional behavior standards to limit hate and harassment on its site, with some success. Some women’s groups, however, still report experiencing sustained abuse on the app.

Nextdoor announced significant decreases in reports of harmful conduct after introducing reforms to its community guidelines policies starting in 2020. The company has not released data on the overall prevalence of hate speech on its app, and complaints of vigilantism on Nextdoor continue.

TikTok strengthened its anti-hate policies in 2023, and has cultivated a strong community of social media users typically targeted with online hate. Despite progress, race-, religion-, and gender-based harassment is common in TikTok’s comment sections, and the app has even been found to be a potentially radicalizing force.

Twitch considers a broad range of hateful conduct, including off-site behavior by streamers, when making enforcement decisions, and has developed innovative approaches to chat moderation. Despite company efforts, Twitch has struggled to address abuse and “hate raid” brigading on the site, which often targets LGBTQ+ streamers and streamers of color in particular. Streamers from marginalized groups have also been the targets of dangerous “swattings” by Twitch users.

5. Prohibit the use of scaled automation
One way bad actors seek to artificially boost their influence online is through the use of automation. Using computer software to post to social media is not malicious in itself, but unchecked automation can allow small groups of people to artificially manipulate online discourse and drown out competing views.

Facebook has anti-spam policies designed to thwart bad actors seeking to artificially boost their influence through high-volume posting (often via automation). The company does not, however, have a policy preventing artificial amplification of content posted via its Pages product, which some bad actors use to artificially boost reach.

Twitter employs its own automation designed to combat coordinated manipulation and spam.

YouTube has scaled automation and video spam detection, thwarting efforts at coordinated manipulation.

Snap’s Discover tab is open to verified publishers only, whose content is manually produced.

Reddit has employed anti-spam measures and prohibits vote manipulation.

LinkedIn has employed anti-spam measures to combat coordinated manipulation.

Nextdoor prohibits spam and inauthentic accounts on its app.

TikTok prohibits coordinated inauthentic behaviors to exert influence or sway public opinion.

Twitch has employed anti-spam measures to combat coordinated manipulation.

6. Require account transparency & authenticity
Requiring social media users to be honest and transparent about who they are limits the ability of bad actors to manipulate online discourse. This is especially true of foreign bad actors, who are forced to create inauthentic or anonymous accounts to infiltrate American communities online. While there may be a place for anonymous political speech, account anonymity makes deception and community infiltration fairly easy for bad actors. Sites choosing to allow anonymous accounts should consider incentivizing transparency in content algorithms.

 Facebook has established authenticity as a requirement for user accounts and employed automation and human flagging to improve its fake account detection. The company’s authenticity policy does not apply to Facebook’s Pages (a substantial portion of content) or Instagram products, which have been exploited by a number of foreign actors, both politically and commercially motivated.

Twitter has policies on impersonation and misleading profile information, but does not require user accounts to represent authentic humans. Changes to Twitter’s verification process in 2022 and 2023 have resulted in a surge of misleading impersonator accounts on the site.

YouTube allows pseudonymous publishing and commenting, and does not require accounts producing or interacting with video content to be authentic. Google Search and Google News consider anonymous/low-transparency publishers low quality and downrank them in favor of more transparent publishers.

Snap’s “Stars” and “Discover” features are available to select, verified publishers only. Snap allows pseudonymous accounts via its “chat” and “story” functions, but makes it difficult for such accounts to grow audiences.

Reddit prohibits impersonation, but encourages pseudonymous user accounts and does not require user accounts to represent authentic humans.

 LinkedIn requires users to use their real names on the site.

Nextdoor requires users to use their real names on the site.

TikTok has policies on impersonation and misleading profile information, but does not require user accounts to represent authentic humans or organizations.

Twitch allows pseudonymous publishing and commenting, and does not require accounts producing or interacting with video content to be authentic.

7. Restrict the distribution of accounts posting plagiarized and unoriginal content
Plagiarism and aggregation of content produced by others is another way foreign actors are able to infiltrate domestic online communities. Foreign bad actors often have a limited understanding of the communities they want to target, which means they have difficulty producing relevant original content. To build audiences online, they frequently resort to plagiarizing content created within the community. 

For example, Facebook scammers in Macedonia have admitted in interviews to plagiarizing their stories from U.S. outlets. In a takedown of suspected Russian Internet Research Agency accounts on Instagram, a majority of accounts re-posted content originally produced and popularized by Americans. While copyright infringement laws do apply to social media companies, current company policies are failing to prevent foreign bad actors from infiltrating domestic communities through large-scale intellectual property theft.

Facebook downranks domains with stolen content and prioritizes original news reporting. Originality has also been incorporated into ranking algorithms on Instagram.

Twitter has removed content farm accounts and identical content across automated networks, but does not encourage original content production.

 Google Search/News/YouTube considers plagiarism, copying, article spinning, and other forms of low-effort content production to be “low quality” and downranks it heavily in favor of publishers of original content.

Snap’s Discover tab is open to verified sources publishing original content only.

Reddit has introduced an “OC” tag for users to mark original content and relies on community moderators and users to demote duplicate content, with mixed success.

Nextdoor prohibits templated, repetitive, or unoriginal content on its app.

 TikTok doesn’t recommend duplicated content in its #ForYou feed.

Twitch prohibits creators from stealing content and posting content from other sites.

8. Make content recommendations transparent to journalists/academics
Social media algorithms are largely responsible for deciding what appears in our feeds and how prominently it appears. The decisions made by these algorithms are incredibly consequential, having the power to manipulate the beliefs and actions of billions of users around the world. Understanding how social media algorithms work is crucial to understanding what these companies value and what types of content are benefitting from powerful algorithmic boosts.

While Facebook provides some in-product algorithmic transparency and has enabled some access for academic researchers through its FORT initiative, the company has released few details on the inner workings of its News Feed algorithm, which is responsible for the vast majority of content consumed on the site. What details are known about the algorithm’s design and behavior have largely come from public whistleblowers.

Twitter released its “Home Timeline” and “Community Notes” algorithm code in 2022 and 2023, and published a pair of internal studies into aspects of its algorithms’ behavior. The disclosures fall short of a full picture of how the company’s algorithms behave.

Google’s Search and News ranking algorithms (integrated into YouTube search) have been thoroughly documented. YouTube has provided little detail into its content recommendations, however, which power 70% of content consumed on the site.

Snap has provided a very basic overview of how it ranks content on its Discover tab. Details of the algorithm and how the company chooses publishers it contracts with have not been made available. 

Reddit’s ranking algorithms are partially open source, with remaining non-public aspects limited to efforts to prevent vote manipulation.

LinkedIn has provided little transparency into how it ranks content in feeds.

Nextdoor has provided little transparency into how it ranks content in feeds.

TikTok has opened a transparency center in Los Angeles and published a basic overview of the company’s “For You” algorithm on its website. Journalists who have visited the transparency center reported learning little about the algorithm or its behavior.

Twitch has provided little transparency into how it recommends channels.

9. Make public content and engagement transparent to journalists/academics
Understanding the health of social media discourse requires access to data on that discourse. Social media companies have taken very different approaches to transparency on their sites.

Facebook (Meta) rolled out an initial version of its FORT research effort in November of 2021, which provides some text-based public content data to select academic researchers. The company announced plans to shutter its CrowdTangle content transparency tool in 2022.

 Twitter’s API pricing, introduced in 2023, has made data access prohibitively expensive for most academics and journalists.

YouTube allows approved academic researchers to study its content and engagement. The company has also developed a public API that allows for public analysis of video summary information and engagement. Video transcripts are not available via the public API.
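
As an illustration of the kind of summary data available, the snippet below queries the public YouTube Data API v3 videos endpoint for a single video’s title and engagement counts. It assumes an API key stored in the YOUTUBE_API_KEY environment variable and uses a placeholder video ID; quota limits and terms of service apply.

```python
import os
import requests

# Query public video statistics via the YouTube Data API v3 (videos endpoint,
# part=snippet,statistics). Requires an API key from the Google Cloud console.
API_KEY = os.environ["YOUTUBE_API_KEY"]
VIDEO_ID = "dQw4w9WgXcQ"  # placeholder example video ID

resp = requests.get(
    "https://www.googleapis.com/youtube/v3/videos",
    params={"part": "snippet,statistics", "id": VIDEO_ID, "key": API_KEY},
    timeout=30,
)
resp.raise_for_status()

for item in resp.json().get("items", []):
    stats = item["statistics"]
    print(item["snippet"]["title"])
    print("  views:   ", stats.get("viewCount"))
    print("  likes:   ", stats.get("likeCount"))
    print("  comments:", stats.get("commentCount"))
# Note: this endpoint returns summary metadata and engagement counts only;
# video transcripts are not exposed here.
```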

All public content is available to users via the Snapchat app. Snap does not make Discover interaction data public.

Reddit’s open API allows content and engagement to be easily analyzed by independent researchers, journalists, and academics.
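
For comparison, the snippet below pulls a day’s top posts from a public subreddit via Reddit’s JSON listing endpoint, one of the simpler public access paths (larger-scale research access goes through Reddit’s official API and its terms). The subreddit and parameters here are arbitrary examples.

```python
import requests

# Pull a day's top posts from a public subreddit via Reddit's JSON listing endpoint.
resp = requests.get(
    "https://www.reddit.com/r/news/top.json",
    params={"t": "day", "limit": 5},
    headers={"User-Agent": "research-example/0.1"},  # Reddit requires a descriptive User-Agent
    timeout=30,
)
resp.raise_for_status()

for child in resp.json()["data"]["children"]:
    post = child["data"]
    print(f"{post['score']:>6} points  {post['num_comments']:>5} comments  {post['title'][:70]}")
    print(f"        domain: {post['domain']}")
```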

LinkedIn has not made public content or engagement data available to researchers.

Nextdoor has made some content data available to select academic researchers, but data access for research purposes is largely unavailable.

TikTok launched a research API limited to approved academic research projects in 2023.

Twitch has developed a public API that allows for public analysis of video summary information and engagement. Video transcripts are not available via the public API.

10. Make advertising content related to political issues easily accessible and transparent
Political ads are not limited to just those placed by candidates. Political action committees and interest groups spend large sums of money advocating for issues in a way that affects political opinion and the electoral choices of voters. Public understanding of these ad campaigns is crucial to understanding who is trying to influence political opinion and how.

The issue of opacity in political ads is compounded when combined with precise, malicious microtargeting and disinformation. Current policies at some major social media companies allow highly targeted lies to be directed at key voting groups in a way that is hidden from public scrutiny.

Facebook has taken a broad definition of political speech and included issue ads in its political ad transparency initiative.

Twitter announced it would begin relaxing its political ad ban in 2023, launching an ineffective “by-request” ad transparency tool.

  Google and YouTube announced a new Ads Transparency Center in March 2023 that includes a comprehensive library of political and non-political ads.

Snap has taken a broad definition of political speech and included issue and campaign-related ads in its political ad library.

Reddit has taken a broad definition of political speech and included issue and campaign-related ads in its political ad library.

N/A: LinkedIn does not allow political ads.

N/A: Nextdoor does not allow political ads.

 TikTok purports to ban political ads, but experiments with the app suggest the company’s enforcement has been ineffective.

N/A: Twitch does not allow political ads.

11. Fully disclose state-backed influence operation content
When state-sponsored influence operations are detected, it’s crucial that social media companies be transparent about how their sites were weaponized. Fully releasing datasets allows the public and research community to understand the complete nature of the operation, its goals, methods, and size. Full transparency also allows independent researchers to further expound on company findings, sometimes uncovering media assets not initially detected.

Twitter’s new ownership has suspended its moderation research consortium.

Facebook provides sample content from state-backed information operations via third-party analysis.

Google/YouTube provides only sample content from state-backed information operations.

N/A: There is no public evidence of state-backed information operations on Snap.

Reddit has disclosed full datasets of state-backed information operations.

LinkedIn has not disclosed content from state-backed information operations.

N/A: There is no public evidence of state-backed information operations on Nextdoor.

TikTok provides summary statistics of its enforcement actions against state-backed influence operations. The company is owned by ByteDance, Inc., a Chinese internet technology company. Public reporting suggests that TikTok’s moderation decisions may be influenced by the Chinese government.

N/A: There is no public evidence of state-backed influence operations on Twitch.

12. Label state-controlled media content
State-controlled media combines the opinion-making power of news with the strategic backing of government. Separate from publicly funded, editorially independent outlets, state-controlled media tends to present news material in ways that are biased in favor of the controlling government (i.e. propaganda).

Facebook has introduced labeling for state-controlled media.

YouTube has introduced labeling on all state-controlled media on its site.

Twitter removed labels for state-controlled media in 2023.

Snap has not introduced labeling of content from state-controlled media.

Reddit has banned links to Russian state-controlled media but has not introduced labeling for content from other state-controlled media outlets.

LinkedIn has not introduced labeling of content from state-controlled media.

N/A: To date, Nextdoor has not allowed state-controlled media to author content on its site.

TikTok has introduced labeling for some state-controlled media content.

 Twitch has banned Russian state media but has not introduced labeling for other state-controlled media content. 

13. End partnerships with state-controlled media
As outlined in policy #12, state-controlled media combines the opinion-making power of news with the strategic backing of government. Separate from publicly funded, editorially independent outlets, state-controlled media tends to present news material in ways that are biased towards the goals of the controlling government (i.e. propaganda). Allowing these outlets to directly place content in front of Americans via advertising and/or profit from content published on their sites enhances these outlets’ ability to propagandize and misinform the public.

Facebook has ended its business relationships with Russian state-controlled media and announced it would temporarily ban all state-controlled media advertising in the US ahead of the November 2020 election.

Twitter has banned advertising from state-controlled media.

YouTube has removed Russian state-controlled media channels but allows other state-controlled media to advertise and participate in its advertising programs.

Snap partners with state-controlled media in its “Discover” tab.

N/A: To date, Reddit has not allowed state-controlled media to advertise on its site. Politics-related news advertisements are subject to the company’s political ad policy.

N/A: To date, LinkedIn has not partnered with state-controlled media.

N/A: To date, Nextdoor has not partnered with state-controlled media.

TikTok is owned by ByteDance, Inc., a Chinese internet technology company. Public reporting suggests that TikTok’s moderation decisions may be influenced by the Chinese government.

Twitch has removed Russian state-controlled media channels but allows other state-controlled media to advertise and participate in its advertising programs.

14. Restrict the distribution of hacked materials
One particularly damaging tactic used by nation-state actors to disinform voters is the “hack and dump” operation, in which stolen documents are selectively published, forged, and/or distorted. While much of the burden of responsibly handling hacked materials falls on professional media, social media companies can play a role in restricting the distribution of these materials.

Facebook’s “newsworthiness” exception to its hacked materials policy renders it ineffective against political hack-and-dump operations.

Twitter’s hacked materials policy prohibits the direct distribution of hacked materials by hackers, but does not prohibit the publication of hacked materials by other Twitter users.

Google and YouTube have established a policy banning content that facilitates access to hacked materials and demonetizing pages hosting hacked content.

N/A: Snap’s semi-closed Discover product makes it nearly impossible for hackers or their affiliates to publish hacked materials to large audiences directly on its app.

Reddit’s rules prohibit the sharing of personal or confidential information.

LinkedIn prohibits the sharing of personal or sensitive information.

Nextdoor prohibits the publication of private materials.

TikTok does not prohibit the publication of hacked materials.

Twitch prohibits any unauthorized sharing of private information.

 

Online disinformation is a whole-of-society problem that requires a whole-of-society response. Please visit democrats.org/disinfo for our full list of counter-misinformation recommendations.
