MEMO: Facebook’s Unkept Promises, Uneven Policies

To: Interested Parties

From: Democratic National Committee

Date: July 7, 2020

Re: Facebook’s Unkept Promises, Uneven Policies


Following the 2016 election, Facebook made a number of public promises of change.  As the company makes new commitments in response to renewed public criticism, it is worth reviewing carefully how the company’s actions measure up to its words.  In many cases, as documented below, Facebook failed to keep its promises.  In others, it has adopted underdeveloped policies that it has applied unevenly.


UNKEPT PROMISES

Over the past several years, Facebook promised progress on the following issues, describing each as important.  Each initiative began as an effort to “make sure that Facebook is a force for good in democracy,” because, as Mark Zuckerberg has stated, no one should “use [Facebook’s] tools to undermine democracy.” [CNBC, 09.21.17]

In addressing these issues, Zuckerberg stressed that he would be “bringing the same focus and rigor” that he “brought to product challenges like shifting our services to mobile,” i.e., the most important business transition in the company’s history.  [Facebook, 9.13.18]

  1. Promise to Limit Sensationalism and Hyperpolarization on its News Feed. 

The algorithm that underpins Facebook’s News Feed is geared to promote sensational content that achieves virality, thereby prioritizing hyperpartisan (and sometimes false) content over reported news sources.  In 2018, Facebook acknowledged this problem and stated publicly that addressing it was among the company’s highest priorities.  But plans for change have been abandoned and the problem persists.

  • In November 2018, Mark Zuckerberg described Facebook as “focused on” limiting “click-bait and misinformation.”  One key component of that focus was banning fake accounts.  The other, which he described as the most “effective strategy,” was “reducing its distribution and virality.”  Zuckerberg himself described “these efforts on the underlying incentives in our systems [as] some of the most important work we’re doing across the company.” [Facebook, 11.15.18]

  • But, even as Facebook was making this public commitment to stop incentivizing disinformation, the company was privately doing the opposite.  The company’s internal “Common Ground” team had found that disinformation and hyperpolarization came “disproportionately from a small pool of hyperpartisan users” whose reach was amplified by the platform; and, further, that a disproportionate number of these voices came from the far right.  The “Common Ground” team proposed that these purveyors of disinformation should no longer be amplified by Facebook’s algorithm.  Rather than adopting that reform, Facebook rejected it and disbanded the team.  [WSJ, 05.26.20]

  • In April 2019, Facebook promised to develop its News “tab,” focused on curating reported sources and targeting the tens of millions of Americans who use Facebook as a news source.  [CNBC, 4.1.19]  This, too, was a company initiative against disinformation.  The program has languished, and, to date, it remains a “test” available to only a small fraction of Facebook users.  It will not be rolled out fully before the November 2020 election.  [CNBC, 4.1.19; Facebook, 10.25.19]

In the wake of recent public criticism, Facebook has again committed to change.  On June 5, 2020, Mark Zuckerberg stated: “I believe our platforms can play a positive role in helping to heal the divisions in our society, and I’m committed to making sure our work pulls in this direction.”  [Facebook, 06.05.20]  Facebook then announced, in an unsourced Axios article last week, that it would promote reported sources in News Feed.  [Axios, 6.30.20]  No details were provided.  And at the very same time it announced this initiative, Facebook stated that the change would not have a significant effect on traffic to Facebook pages.  [TechCrunch, 06.30.20]

  2. Promise to Implement a Comprehensive Fact-Checking Program. 

Immediately following the 2016 election, Facebook announced its fact-checking program as a central component of its efforts to mitigate disinformation.  Nearly four years later, Facebook refuses to publicly share material details about the program, and instead has begun to downplay its importance, describing its fact-checkers as only able to “really catch the worst of the worst stuff.”  [CNBC, 5.28.20]  Even when content is flagged by the fact-checkers, Facebook struggles to identify and label that content throughout its platform; and the company does not publicly provide any data regarding how false content is demoted.

  • In December 2016, immediately following the 2016 election, “Facebook announced its Third Party Fact-Checking Project.  Independent organizations would debunk false news stories, and Facebook would make the findings obvious to users, down-ranking the relevant post in its News Feed.”  [CJR, 8.2.19].  In announcing the initiative, Facebook stated that “we need to take responsibility for the spread of fake news on our platform.”  [NY Times, 12.15.16]

  • As of July 6, 2020, the following material questions remain unanswered about the program, though they have been repeatedly asked of the company:

  • How many people at Facebook’s fact-checking partners are working on the effort?

  • How, if at all, is the third-party fact-checking work prioritized?

  • What percentage of material flagged for fact-checkers is reviewed?

  • When content is checked and determined to be false, how is it downgraded on Facebook’s platform?

  • What metrics does Facebook use to determine the efficacy of its fact-checking initiative?

  • Even in instances when content is flagged as false by third-party fact-checkers, the process takes days, and the reach of that material is not limited as Facebook promises it will be.

  • Just last week, Facebook explained that content falsely suggesting that Black Lives Matter protesters had defaced the Vietnam Memorial would be demoted by its algorithm.  But weeks after fact-checkers labeled the story false, it was the third-ranked post in Facebook’s News Feed.

  • Recently, content flagged as false has taken Facebook days to identify across its platform, allowing copy-cat material to go viral.

  • Facebook has allowed bad-actor publishers to classify their content as “satire” — via a disclaimer tacked onto each page, out of sight for those not seeking it, stating that “nothing on this page is real” — even where any objective viewer would recognize the material as classic disinformation.  [Washington Post, 11.17.18]

  3. Promise of Transparency Concerning the Reach of Disinformation.  

Facebook has promised to use transparency as a primary tool against disinformation, indicating that both Facebook itself and the sources of disinformation would be held accountable by providing more information about the reach of viral content and enforcement against disinformation.  [Facebook, 11.15.18] But the part of the platform that has had the most significant growth in recent years is Facebook’s private “groups” — which are entirely opaque and not subject to fact-checking.

  • Transparency has been a core element of how Facebook describes its efforts against disinformation.  It markets “transparency reports”; promises “transparency into how our systems are performing”; and assures the public that it will speak loudly and clearly concerning the prevalence of disinformation, how often it is actionable, and the scope of its reach before it is found.  [Facebook, 11.15.18].

  • At the same time Facebook has been promoting its transparency efforts, it has been encouraging users to join closed, members-only groups that it considers “living rooms” rather than “town squares.”  [Washington Post, 7.5.19; CNN 6.20.20]

  • These groups are off-limits to Facebook’s fact-checkers, even when membership reaches the tens and hundreds of thousands.  As one of Facebook’s fact-checkers explained to Mashable:  “If it’s a closed group, I can’t look at it.  Private pages, private groups: fact-checkers don’t see.”  This, according to the same fact-checker, is a critical problem:  “Groups are very instrumental in making something go viral.”  [Mashable, 6.11.20]  For example, “Facebook Groups were integral in spreading COVID-19 conspiracy theories.”  And these groups have likewise become influential in the anti-vaccine movement. [Mashable, 6.11.20]

  • These “living rooms” have become bastions of disinformation. Groups formed this spring to promote economic reopening in certain states have become amplifiers of conspiracy theories, including that George Floyd is not dead and, instead, that the video of his death was fiction.  [CNN, 6.20.20]

  • Facebook makes it difficult to “spot when multiple groups and pages are managed by the same accounts” that may be interested in spreading disinformation.  Instead, the platform gauges a user’s interest in one such group and suggests other, similar groups that are operated by the same managers  — thereby building vectors of disinformation.  [Wired, 6.17.20]

  • All of this occurs out of the public eye.  In recent weeks, these groups were the petri dish for the lie that ANTIFA was providing stacks of bricks to protesters throughout the country, a claim viewed by millions of Americans, including the president of the United States.  [Mashable 6.11.20; Washington Post, 7.5.20]

  4. Promise of Access to Data for Academic Research to Hold Facebook Accountable.    

In early 2018, Facebook set up an independent election research program, noting the “benefits of independent analysis to understand all the facts and to ensure we’re accountable for our work.”  [Facebook, 9.13.18; Facebook, 4.9.18].  Nearly 18 months later, Facebook ended the initiative.  Now, in 2020, Facebook continues to shield its own data from objective and rigorous academic studies about disinformation and hyperpartisan content.

  • The independent research initiative was described by Facebook as part of the “real progress [the company had made] since Brexit and the 2016 presidential election in fighting fake news,” by “enabl[ing] Facebook to learn from the advice and analysis of outside experts so [the company] can make better decisions — and faster progress.”  [Facebook, 4.9.18]

  • The research sponsored by this effort — which was funded by nonprofit foundations — was “designed to help people better understand social media’s impact on democracy — and [to help] Facebook ensure that it has the right systems in place.  For example, will our current product roadmap effectively fight the spread of misinformation and foreign interference?”  By fostering this independent research, Facebook intended to “help people understand the broader impact of social media on democracy — as well as improve our work to protect the integrity of elections.” [Facebook, 4.9.18]

  • Facebook proactively told the public that it had addressed all user privacy concerns with the research initiative.  Addressing privacy considerations himself, Zuckerberg stated:  “I decided that the benefits of enabling this kind of academic research outweigh the risks.”  [Facebook, 9.13.18].

  • Eighteen months later, after failing to provide researchers with the promised data, Facebook made a different decision.  The company would not even “provide academics with anonymized data about the links that users share on the platform — crucial to understanding how falsities spread.”  Doing so, Facebook stated, could in theory harm or identify individual users, even though it had reached the opposite conclusion in launching the program.  [Fast Company, 10.28.19]

In recent weeks, Facebook has taken to claiming that because it is receiving criticism from throughout the political spectrum, it must be doing something right.  These assertions omit the fact that Facebook had promised objective research into the efficacy of its fact-checking and democracy-protecting measures — and then abruptly stopped that work before it ever began.

UNDERDEVELOPED & UNEVENLY APPLIED POLICIES 

  1. Disinformation About November’s Election.  

Before the 2018 midterm elections, Facebook announced that it was “broadening our policies against voter suppression,” banning content “designed to deter or prevent people from voting.”  [Facebook, 10.15.18]  These policies bar “misrepresentation of the dates, locations, times and methods for voting or voter registration,” and “misrepresentation of who can vote, qualifications for voting, and whether a vote will be counted.”  [Facebook, 10.21.19]  Late last year, Facebook announced that it was ramping up efforts to quickly detect “violating content.”   [Facebook, 10.21.19].  Facebook also stated that it had taken into consideration the results of its civil rights audit, and would be taking steps against the promotion of content that “suggests voting is useless or meaningless, or advises people not to vote.” [Facebook, 10.21.19].

As applied, however, these policies have uniformly allowed President Trump to lie about methods of voting in the 2020 election, even though the lies violate the clear text of the policy.  Trump has:

  • Pronounced that “there is NO WAY (ZERO!) that Mail-In Ballots will be anything less than substantially fraudulent,” because they are being sent “to millions of people, anyone living in [California], no matter who they are or how they got there” and “professionals [will be] telling all of these people . . . how and for whom to vote.”

  • Declared that “MILLIONS OF MAIL-IN BALLOTS WILL BE PRINTED BY FOREIGN COUNTRIES.”

  • Stated that “who knows where” millions of mail ballots “are going[] and to whom?”; and

  • Claimed that “mail boxes” containing ballots “will be robbed.”

Each of these posts was allowed to remain on the platform, even though each contains false information about voting, who is eligible to vote, how a voter may participate, and whether votes cast will be counted.  Explaining these decisions over email to the DNC, Facebook went so far as to state that there were no authoritative sources that could disprove allegations of widespread ballot theft from mailboxes; and, therefore, that the post was appropriate.

Moreover, Facebook appears to be interpreting its policy narrowly, such that disinformation about particular voting precincts is prohibited (if it can be found), while claims that the election as a whole is compromised or fraudulent are allowed to stay (even when made by the incumbent president). [Washington Post, 7.2.20]. How this squares with Facebook’s articulated policy to prohibit content “designed to deter or prevent people from voting” remains unexplained.

  2. Facebook’s Shifting Policy Against False or Misleading Advertisements.  

Facebook’s “Community Standards,” which govern advertising on its platform, bar material that “contain[s] deceptive, false, or misleading content, including deceptive claims, offers, or methods.”  The company’s standards likewise bar content “dehumanizing or denigrating entire groups of people and using frightening and exaggerated rumors of danger.”  The extent to which these standards apply to political advertisements has been a moving target.

  • In the 2018 midterm election, Facebook applied these standards to advertisements placed by politicians and commercial entities alike. Days before the election, Facebook removed an advertisement placed by the Trump campaign, which portrayed an oncoming “invasion” of immigrants.  [CNBC, 11.5.18].

  • Even when Facebook rolled out its new policy for the 2020 election cycle that allowed politicians wide latitude in posting on the platform, it noted an exception:  “where we take money, which is why we have more stringent rules on advertising than we do for ordinary speech and rhetoric.”  In that circumstance, if someone — even a politician — “chooses to post an ad on Facebook, they must still fall within our Community Standards and advertising policies.” [Facebook, 9.24.19]

  • Less than two weeks later, in early October 2019, the Trump campaign paid to place an advertisement on Facebook containing assertions that Facebook’s fact-checking partners had already determined to be false.  [Popular Info, 10.3.19]

  • Facebook’s policy then changed.  Now, politicians may lie in advertisements, too, and Facebook allows its targeting tools to be used to aim those advertisements at whichever voters a campaign chooses.  [Vox, 10.21.19]

  • Facebook assured the public that, despite this change, advertisements by outside groups like Super PACs and 501(c)(4) dark money groups still would be subject to the platform’s third-party fact-checkers.  But Facebook allows advertisements placed by these groups to run until they are evaluated by fact-checkers, thus allowing outside groups — so often responsible for negative advertising — to lie unabated in paid advertising for a period of days.  A right-wing Super PAC took advantage of this delay during the Democratic primary, attempting to sow lies in the days leading up to early-state contests and then taking down the advertisements before they could be fact-checked.

  3. Prohibition of Incitement

Facebook has long prohibited all users, including politicians, from inciting violence.  This is a significant exception to Facebook’s general approach, which permits nearly all other forms of content from politicians.  But the most prominent example of incitement by a national politician in recent history was nonetheless allowed to remain on Facebook, undisturbed.  Rather than removing the post, Facebook placed a call to the White House, lamenting that the company had been put in a difficult position.

  • As explained by Facebook in a widely promoted September 2019 speech, content posted by politicians that “endangers people” is not permitted on the platform.  As [VP of Global Affairs and Communications] Clegg explained:  “It’s not new that politicians say nasty things about each other — that wasn’t invented by Facebook.  What is new is that they can reach people with far greater speed and at a greater scale.  That’s why we draw the line at any speech which can lead to real world violence and harm.”  [Facebook, 9.24.19].

  • Zuckerberg sounded the same note in testimony before Congress:  “If anyone, including a politician, is saying things that can cause, that is calling for violence or could risk imminent physical harm . . . we will take that content down.”

  • Then, on May 29, President Trump posted to Facebook that “when the looting starts, the shooting starts.”  [Washington Post, 5.29.20]  Rather than enforcing its clear policy against such content, Facebook allowed the post to stay up and called the White House to inform them of that decision.  [Washington Post, 6.28.20]

  • Following the controversy over Trump’s incitement, Facebook rolled out another policy, indicating that henceforth it would consider labeling such posts as “newsworthy” rather than removing them.  [Facebook, 6.26.20].  The same day, however, Facebook reiterated that the new policy would have had no effect on Trump’s post, which still remains up without any label. [Mike Isaac, NY Times, via Twitter, 6.26.20]

  4. “Deepfakes,” Narrowly Defined.    

In January 2020, Facebook announced a new policy prohibiting “deepfake” videos.  [Washington Post, 01.07.20; Wired, 01.07.20]  But a simple review of the policy indicates it is wholly inapplicable to the vast majority of manipulated media on the site — and particularly to manipulated political media.

  • The policy expressly does not apply to video that is “edited solely to omit or change the order of words”; nor does it apply to amateur video manipulation.

  • Instead, it bars only professional videos that are the “product of artificial intelligence or machine learning” — in other words, very advanced and highly technical manipulated content.

  • Manipulated political media, however, consists primarily of so-called “cheapfake” content created by simple video splicing or unsophisticated manipulation, which falls outside the ambit of Facebook’s policy.  [Slate, 6.12.19]  Videos portraying Vice President Biden or Speaker Pelosi as disoriented fall into this category — placed in the third-party fact-checking queue with all other content while they go viral.

  5. Coordinated Inauthentic Behavior 

One of the lessons of 2016 is that multiple, automated social media accounts working together — sometimes under the control of one person or a number of allied individuals — may artificially amplify the reach of misleading content.  [SSCI, 10.8.2019]  Facebook therefore prohibits fake accounts and undisclosed networks of accounts used to “artificially boost the popularity of content.”

  • Ben Shapiro’s Daily Wire has flouted this prohibition for a number of years, working with a network of more than a dozen pages to ensure that the reach of his hyperpartisan content far exceeds that of reported news sources.  [Popular Info, 10.28.19]

  • Shapiro’s hyperpartisan material is among the top 10 most engaged-with material on Facebook, and is extensively promoted by Facebook’s algorithm.

  • Last week, documentation of further coordination — paid promotions on pages controlled by Mad World News — became so comprehensive that Facebook finally took action.  [Popular Info, 7.2.20]  But Facebook penalized only the pages controlled by Mad World News; no meaningful action was taken against the Daily Wire itself.  Facebook chose to punish the conspirators while excusing the principal — whose inauthentic page network continues to manipulate News Feed unabated.  In fact, on July 5, the Daily Wire had four of the top 10 performing Facebook posts.  [Popular Info, 7.2.20; Kevin Roose, 7.6.20]