DNC RECOMMENDATIONS FOR COMBATING ONLINE DISINFORMATION

Comparative Social Media Policy Analysis

Social media platforms have quickly become a major source of news for Americans. In a 2018 survey¹, over two-thirds of Americans reported getting at least some of their news from social media — including 43% from Facebook, 21% from YouTube, and 12% from Twitter. In this new media landscape, social media platforms are the first line of defense against digital disinformation.

The DNC is working with major social media companies to combat platform manipulation and train our campaigns on how best to secure their accounts and protect their brands against disinformation. While progress has been made since the 2016 elections, platforms still have much to do to reduce the spread of disinformation and combat malicious activity. We believe platforms are ultimately responsible for combating abuse and disinformation on their systems, but as an interested party, we’ve compiled this comparative policy analysis to present social media companies with additional potential solutions.

While this analysis compares the three major social media companies, we also call on all other online advertisers and social media companies to undertake similar efforts to promote transparency and combat disinformation on their platforms.

Explanation of Policy Determinations

1. Promote authoritative news over highly engaging news in content algorithms
Social media algorithms are largely responsible for deciding what appears in our feeds and how prominently it appears. The decisions made by these algorithms are incredibly consequential, having the power to manipulate the beliefs and actions of billions of users around the world. Most social media algorithms are designed to maximize time spent on a site, which when applied to news tends to create a “race down the brain stem” – boosting the most attention-grabbing, outrageous, and false content.

Facebook’s News Feed prioritizes user engagement as its primary signal for algorithmic boosts, which tends to amplify fear-mongering, outrage-bait, and false news content on the site. “Click-Gap,” an initiative rolled out in 2019, and partnerships with independent fact-checkers have reduced the reach of some very low-quality webpages on the platform, but Facebook has made no systematic effort to actively promote authoritative news sources over untrustworthy ones in News Feed. Facebook’s News Tab uses human curation to promote authoritative news sources, but the feature has yet to roll out to all users and is likely to receive minimal traffic when it fully deploys.²

YouTube has partnered with its parent company Google to introduce PageRank into its news search and recommendation algorithms, raising the search visibility of authoritative news sources. Google uses a number of objective signals to determine page authority (e.g., ranking news sources that do original reporting higher and sites that steal content lower). These signals have also been integrated into YouTube’s “Up Next” algorithm, which is responsible for ~70% of watch time on the site.
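PageRank itself is publicly documented, so a brief illustration may be useful here. The sketch below computes link-based authority scores with the classic power-iteration method over a toy link graph; the domain names are hypothetical, and the production ranking systems at Google and YouTube layer many additional, proprietary signals on top of link authority.

    # Minimal PageRank power-iteration sketch (illustrative only).
    # graph: dict mapping each page to the list of pages it links to.
    def pagerank(graph, damping=0.85, iterations=50):
        pages = list(graph)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}          # start with a uniform score
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for page, outlinks in graph.items():
                if outlinks:
                    share = damping * rank[page] / len(outlinks)
                    for target in outlinks:
                        new_rank[target] += share   # pass authority along links
                else:
                    for p in pages:                 # dangling page: spread evenly
                        new_rank[p] += damping * rank[page] / n
            rank = new_rank
        return rank

    if __name__ == "__main__":
        # Hypothetical toy graph: an original-reporting outlet attracts links,
        # while a site that merely scrapes content receives none.
        toy_graph = {
            "original-reporting.example": ["wire-service.example"],
            "wire-service.example": ["original-reporting.example"],
            "aggregator-blog.example": ["original-reporting.example", "wire-service.example"],
            "content-scraper.example": ["original-reporting.example"],
        }
        for page, score in sorted(pagerank(toy_graph).items(), key=lambda kv: -kv[1]):
            print(f"{page}: {score:.3f}")

In this toy graph the scraper site attracts no inbound links and ends up with the lowest authority score, which is the intuition behind ranking original reporting above sites that merely copy it.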

Twitter has employed anti-spam measures but has made little attempt to establish domain or account authority in its algorithms, often boosting the virality of disinformation on the platform.

Snap’s Discover tab is open to authorized publishers only. Media partners publishing to Discover are vetted by Snap’s team of journalists and required to fact-check their content for accuracy. Within Discover, Snap does not elevate authoritative news sources over tabloid and viral news & opinion sites.

2. Enforce rules on hate speech consistently and comprehensively
Hate speech, also known as group defamation or group libel, is employed online to intimidate, exclude, and silence opposing points of view. Hateful and dehumanizing language targeted at individuals or groups based on their identity is harmful and can lead to real-world violence. While all major social media platforms have developed extensive policies prohibiting hate speech, enforcement of those rules has been inconsistent.

For example, reports have suggested that tech platforms treat white nationalist terrorism differently than Islamic terrorism. There are widespread complaints from users that platforms rarely take action on reported abuse. Often, the targets of abuse face consequences, while the perpetrators go unpunished. Some users have expressed the feeling that the “reporting system is almost like it’s not there sometimes.”

Facebook’s automated hate speech detection systems have steadily improved since they were introduced in 2017. However, hateful content, especially demonizing content targeted at immigrants and religious minorities, still goes unactioned on the platform. Recent reporting found that the company quashed findings suggesting Facebook was actively promoting extremist groups on the site, and the company also recently went back on its promise to enforce its Community Standards against politicians.

Twitter expanded its hate speech policy in September 2018 to cover dehumanizing language, but hate content still pervades much of Twitter’s discourse and has been linked to real-world violence. Twitter’s world leaders policy, furthermore, exempts President Trump from hate speech rules — applying labels where other users would face account lockouts or suspensions.

In June 2019, YouTube announced that algorithmic changes had reduced recommendations of borderline hate content by 50%. Despite this progress, virulently racist and bigoted content creators still produce content recommended by YouTube’s “Up Next” algorithms and even earn revenue via YouTube’s Partner Program. Public studies have suggested that YouTube actively facilitates radicalization by enabling easy movement from mainstream to extreme content and incentivizing content creators to adopt increasingly extreme positions. Comment sections on YouTube videos and live streams are also consistently filled with hate content.

Snap’s safety-by-design approach prevents the viral spread of hate or harassing speech on its platform. Snap has not exempted politicians from its rules on hate speech and incitement to violence, as other social media companies have.

3. Establish a scaled policy enforcing authenticity
Requiring social media users to be honest and transparent about who they are limits the ability of bad actors to manipulate online discourse. This is especially true of foreign bad actors, who are forced to create inauthentic accounts to infiltrate American communities online. While there may be a place for anonymous political speech on major platforms, account anonymity makes deception and community infiltration fairly easy for bad actors. Platforms choosing to allow anonymous accounts should consider incentivizing transparency in content algorithms.

Facebook has established authenticity as a requirement for user accounts on its platform and employed automation and human flagging to detect fake accounts. This policy does not apply to Facebook’s Pages or Instagram products, which have been exploited by a number of foreign actors, both politically and commercially motivated.

Twitter has policies on impersonation and misleading profile information, but it does not require user accounts to represent authentic humans, nor does the platform enforce these policies at scale. As a result, large numbers of anonymous and deceptive accounts operate on the platform.

YouTube allows anonymous publishing and commenting, and does not require authenticity of accounts producing or interacting with video content on its platform. Google Search and Google News consider anonymous, low-transparency publishers low quality and downrank them in favor of more transparent publishers.

Snap’s Discover tab is open to verified publishers only. Snap allows pseudonymous accounts via its chat function, but limits the distribution of content published by users through its limits on group size.

4. Restrict the distribution of accounts posting plagiarized and unoriginal content
Plagiarism and aggregation of content produced by others is another way foreign actors are able to infiltrate domestic online communities. Foreign bad actors often have a limited understanding of the communities they want to target, which means they have difficulty producing relevant original content. To build audiences online, they frequently resort to plagiarizing content created within the community.

For example, Facebook scammers in Macedonia have admitted in interviews to plagiarizing their stories from U.S. outlets. In a recent takedown of suspected Russian Internet Research Agency accounts on Instagram, a majority of accounts re-posted content originally produced and popularized by Americans. While copyright infringement laws do apply to social media platforms, current platform policies are failing to prevent foreign bad actors from infiltrating domestic communities through large-scale intellectual property theft.

Twitter has a policy to remove intellectual property violations and identical content across automated networks, but does not encourage original content production.

Facebook and Instagram do not meaningfully downrank accounts that steal content. Facebook has a policy against unoriginal content in Pages but has not publicly announced enforcement, and while Facebook says it downranks domains with stolen content, the policy fails to prevent stolen content from regularly appearing in top article lists.

Google Search, Google News, and YouTube consider plagiarism, copying, article spinning, and other forms of low-effort content production to be “low quality” and downrank such content heavily in favor of publishers of original content.

Snap’s Discover tab is open to verified sources publishing original content only.

5. Establish a policy against scaled automation
One way bad actors seek to artificially boost their influence online is through the use of automation. Using computer software to post to social media is not malicious in itself, but unchecked automation can allow small groups of people to artificially manipulate online discourse and drown out competing views.

Facebook has anti-spam policies designed to thwart bad actors seeking to artificially boost their influence through high-volume posting (often via automation). The platform does not have a policy preventing artificial amplification of content posted via its Pages product, however, which is used by some bad actors to artificially boost reach.

Twitter employs its own automation designed to combat coordinated platform manipulation and spam.

YouTube has scaled automation and video spam detection preventing coordinated manipulation of its platform.

Snap’s Discover tab is open to verified publishers only, whose content is manually produced.

6. Develop political disinformation policy
While platforms have been reluctant to evaluate the truthfulness of the content they host, it’s critical that they acknowledge disinformation when it appears. Establishing and enforcing penalties for content and users that frequently publish disinformation is an important way to deter its spread.

Facebook has partnered with independent, IFCN-certified third parties to fact-check content and has employed a remove/reduce/inform strategy to combat disinformation. Ad content that is determined to be false by one of Facebook’s third-party fact-checking partners is removed. Organic content determined to be false is covered with a disclaimer, accompanied by a fact-check, and its reach is reduced by content algorithms. Users attempting to post or share previously fact-checked content are prompted with the fact-check before they post, and users who shared the content before it was fact-checked receive a notification to that effect. Content produced by elected officials and Facebook-approved candidates is exempt from Facebook’s fact-checking policy.

Twitter has introduced search features to promote authoritative health information but has no policy in place to combat political disinformation. Reporting from NBC News in February suggested Twitter was experimenting with community moderation concepts that might impose costs on political disinformation, but no plans to roll out this feature have been publicly announced.

YouTube has introduced news panels for breaking news and some conspiracy content, and recently announced fact-check panels in YouTube Search for popular claims debunked by IFCN fact-checkers. The company has established a policy against “demonstrably false claims that could significantly undermine participation or trust in an electoral or democratic process” in advertising content, a standard some bad actors have been able to evade. YouTube does not apply fact-checks to individual videos posted organically to its site, nor are there any penalties for creators who repeatedly spread misinformation.

Snap uses an in-house team of fact-checkers to enforce rules against “content that is misleading, deceptive, impersonates any person or entity” in political ads, and requires publishers on its Discover product to fact-check their content for accuracy. Fact-checking rules apply to all publishers on the site, including politicians.

7. Remove maliciously edited media and deepfakes
Media manipulation and deepfake technology have advanced rapidly over the past few years. The prospect of a well-timed “deepfake” video or less sophisticated “shallowfake” of a major party candidate posted to social media before an election has the potential to seriously warp voter perceptions, especially given the trust social media users currently place in video.

Facebook announced changes to its policy on manipulated media in January, banning deepfake videos produced using artificial intelligence or machine learning technology that include fabricated speech (except parody videos). The policy change does not cover other types of deepfakes or manipulated media, which, when not posted by an elected official or Facebook-approved political candidate, are subject to fact checking by Facebook’s third-party fact-checkers.

In February, Twitter announced a policy to label manipulated media on the site. Under the policy, only manipulated media that is “likely to impact public safety or cause serious harm” will be removed.

YouTube has removed manipulated media targeting politicians in the past, but has not yet established a formal policy.

Snap requires publishers on its Discover product to fact-check their content for accuracy. Snap enables user sharing only within users’ phone contacts lists and limits group sharing to 32 users at a time, making it difficult for maliciously manipulated media to spread on the app.

8. Make content recommendations transparent to journalists/academics
Social media algorithms are largely responsible for deciding what appears in our feeds and how prominently it appears. The decisions made by these algorithms are incredibly consequential, having the power to manipulate the beliefs and actions of billions of users around the world. Understanding how social media algorithms work is crucial to understanding what platforms value and what types of content are benefiting from these powerful algorithmic boosts.

Facebook has provided very little transparency into the inputs and weights that drive its News Feed algorithm’s ranking decisions, which are responsible for the vast majority of content consumed on the platform. Facebook announced an algorithm change in January 2018 designed to reduce the role of publishers and increase “time well spent” person-to-person interactions on the platform. Public analysis of the change, however, found that the algorithm actually increased the consumption of articles on divisive topics.

Twitter has provided little transparency into how its algorithms determine what appears in top search results, ranked tweets, and “in case you missed it” sections of its platform. Major algorithm changes designed to reduce spam and harassment on the platform were announced in June 2018, and Twitter has previously indicated that algorithm changes happen on a daily or weekly basis. Twitter also rolled out a feature in December 2018 allowing users to see tweets from accounts they follow in chronological order, effectively disabling Twitter’s ranking algorithm.

Google’s Search and News ranking algorithms (integrated into YouTube search) have been thoroughly documented. YouTube has provided very little transparency into its content recommendations, however, which power 70% of content consumed on that platform.

Snap has provided little transparency into how Discover ranks content for users or how the company chooses the publishers it contracts with.

9. Make public content and engagement transparent to journalists/academics
Understanding the health of social media discourse requires access to data on that discourse. Social media companies have taken very different approaches to transparency on their platforms.

Twitter’s open API allows content and engagement to be easily analyzed by independent researchers, journalists, and academics.
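As an illustration of the kind of analysis this open access enables, the sketch below pulls recent public tweets and their engagement counts through Twitter’s v2 recent-search endpoint. It is a minimal example only: the bearer token and search query are placeholders, and access requires approved developer credentials.

    # Illustrative sketch: fetch recent public tweets and engagement metrics
    # from Twitter's v2 API. Requires an approved developer bearer token.
    import requests

    BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # placeholder credential
    SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"

    def search_recent_tweets(query, max_results=10):
        response = requests.get(
            SEARCH_URL,
            headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
            params={
                "query": query,
                "max_results": max_results,
                "tweet.fields": "created_at,public_metrics",
            },
            timeout=30,
        )
        response.raise_for_status()
        return response.json().get("data", [])

    if __name__ == "__main__":
        # Example: original tweets (no retweets) mentioning a topic of interest.
        for tweet in search_recent_tweets("election disinformation -is:retweet"):
            metrics = tweet["public_metrics"]
            print(tweet["created_at"], metrics["retweet_count"], metrics["like_count"])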

Facebook acquired CrowdTangle to provide some transparency to journalists and academic researchers. The platform does not provide full transparency into account behavior on its site, nor has the CrowdTangle tool been made available to the general public. Publicly available data on Instagram is extremely limited.

YouTube has developed a public API that allows for public analysis of video summary information and engagement. Video transcripts are not available via the public API.
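As a sketch of the kind of analysis this public API supports, the example below uses the YouTube Data API v3 (via the google-api-python-client library) to retrieve a video’s title and engagement statistics. The API key and video ID are placeholders; as noted above, transcripts are not exposed through this interface.

    # Illustrative sketch: fetch public metadata and engagement statistics
    # for a single video via the YouTube Data API v3.
    from googleapiclient.discovery import build

    API_KEY = "YOUR_API_KEY"        # placeholder credential
    VIDEO_ID = "VIDEO_ID_HERE"      # placeholder video ID

    def fetch_video_stats(video_id):
        youtube = build("youtube", "v3", developerKey=API_KEY)
        response = youtube.videos().list(part="snippet,statistics", id=video_id).execute()
        items = response.get("items", [])
        if not items:
            return None
        item = items[0]
        return {
            "title": item["snippet"]["title"],
            "channel": item["snippet"]["channelTitle"],
            "views": item["statistics"].get("viewCount"),
            "likes": item["statistics"].get("likeCount"),
            "comments": item["statistics"].get("commentCount"),
        }

    if __name__ == "__main__":
        print(fetch_video_stats(VIDEO_ID))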

All public content is available to users via the Snapchat app. Snap does not make Discover interaction data public.

10. Include political issue ads in ad transparency initiatives
Political ads are not limited to just those placed by candidates. Political action committees and interest groups spend large sums of money advocating for issues in a way that affects political opinion and the electoral choices of voters. Public understanding of these ad campaigns is crucial to understanding who is trying to influence political opinion and how.

The issue of opacity in political ads is compounded when combined with precise, malicious microtargeting and disinformation. Current policies at some major platforms allow highly targeted lies to be directed at key voting groups in a way that is hidden from public scrutiny.

Facebook has taken a broad definition of political speech and included issue ads in its political ad transparency initiative.

N/A: Twitter announced it would stop placing political ads as of November 15, 2019.

Google/YouTube has limited its ad transparency initiative to ads naming “U.S. state-level candidates and officeholders, ballot measures, and ads that mention federal or state political parties”. Political issue ads not directly tied to ballot measures have been exempted from the ad library, and the library’s current update schedule allows ads to run on Google platforms free of scrutiny for up to a week.

Snap has taken a broad definition of political speech and included issue and campaign-related ads in its political ad library.

11. Fully disclose state-backed information operation content
When state-sponsored information operations are detected, it’s crucial for platforms to be transparent about how their platforms were weaponized. Fully releasing datasets allows the public and research community to understand the complete nature of the operation, its goals, methods, and size. Full transparency also allows independent researchers to further expound on platform findings, sometimes uncovering media assets not initially detected.

Twitter has disclosed full datasets of state-backed information operations.

Facebook provides sample content from state-backed information operations via third-party analysis.

Google/YouTube provides only sample content from state-backed information operations.

N/A: Snap’s safety-by-design approach has made it difficult for state-backed information operations to have success on its platform. Snap was not one of the apps used by the Russian Internet Research Agency to interfere in American elections in 2016, but has committed to transparency should such an operation be detected.

12. Label state-controlled media content
State-controlled media combines the opinion-making power of news with the strategic backing of government. Separate from publicly funded, editorially independent outlets, state-controlled media tends to present news material in ways that are biased in favor of the controlling government (i.e. propaganda).

Facebook has introduced labeling for state-controlled media.

YouTube has introduced labeling on all state-controlled media on its platform.

Twitter has not introduced labeling of content from state-controlled media.

Snap has not introduced labeling of content from state-controlled media.

13. End ad partnerships with state-controlled media
As outlined in policy #12, state-controlled media combines the opinion-making power of news with the strategic backing of government. Separate from publicly funded, editorially independent outlets, state-controlled media tends to present news material in ways that are biased toward the goals of the controlling government (i.e. propaganda). Allowing these outlets to directly place content in front of Americans via advertising, or to profit from content published on platforms, enhances some of these outlets’ ability to propagandize and misinform the public.

Twitter has banned advertising from state-controlled media.

YouTube allows state-controlled media to advertise and partners with state-controlled media in its advertising programs, providing those outlets with a significant revenue stream.

Facebook has announced it will temporarily ban state-controlled media advertising in the US ahead of November’s election.

Snap allows state-controlled media to advertise on its app.

 

14. Establish a policy against the distribution of hacked materials
The strategic release of hacked materials on social media was a central feature of the interference in the 2016 elections, and platform policies determine how widely such material can spread.

Facebook has not publicly established a policy restricting the distribution of hacked materials on its platform.

Twitter has established a policy banning posts of hacked materials or direct links to hacked materials on its site.

YouTube has not publicly established a policy restricting the distribution of hacked materials on its platform.

N/A: Snap’s semi-closed Discover product makes it near-impossible for bad actors to publish hacked materials directly to its app.

 

Online disinformation is a whole-of-society problem that requires a whole-of-society response. Please visit democrats.org/disinfo for our full list of counter disinformation recommendations.

¹ News Use Across Social Media Platforms 2018, Pew Research Center, September 2018.
² Where News Tab has been made available, it usually appears as a mobile-only, relatively hidden bookmark under a “See More” sub-menu. Even if full deployment of the tab places it more prominently in the app, use of the tab is likely to be minimal. Facebook’s most popular non-News Feed tab, “Watch,” typically receives less than 10% of Facebook traffic. “News” is unlikely to approach even that limited reach.