Social media plays an important role in building community and connecting people with the wider world. At the same time, the private rules that govern access to these services can result in divergent experiences across different populations. While social media companies dress their content moderation policies in the language of human rights, their actions are largely driven by business priorities, the threat of government regulation, and outside pressure from the public and the mainstream media.[1] As a result, the veneer of a rule-based system conceals a cascade of discretionary decisions. Where platforms are looking to drive growth or cultivate a favorable regulatory environment, content moderation policy is often either an afterthought or a tool employed to curry favor.[2] All too often, the speech of communities of color, women, LGBTQ+ communities, and religious minorities is at risk of over-enforcement, while harms targeting those same communities go unaddressed.
This report demonstrates the impact of content moderation by analyzing the policies and practices of three platforms: Facebook, YouTube, and Twitter.[3] We selected these platforms because they are the largest and the focus of most regulatory efforts, and because they tend to influence the practices adopted by other platforms. Our evaluation compares platform policies regarding terrorist content (which often constrict Muslims' speech) to those on hate speech and harassment (which can affect the speech of powerful constituencies), along with publicly available information about enforcement of those policies.[4]
In Section I, we analyze the policies themselves, showing that despite their ever-increasing detail, they are drafted in a manner that leaves marginalized groups under constant threat of removal for everything from discussing current events to calling out attacks against their communities. At the same time, the rules are crafted narrowly in ways that protect powerful groups and influential accounts, which can be the main drivers of online and offline harms.
Section II assesses the effects of enforcement. Although publicly available information is limited, we show that content moderation at times results in mass takedowns of speech from marginalized groups, while more dominant individuals and groups benefit from more nuanced approaches like warning labels or temporary demonetization. Section II also discusses the current regimes governing ranking and recommendation engines, user appeals, and transparency reports. These regimes are largely opaque and often deployed by platforms in self-serving ways that can conceal the harmful effects of their policies and practices on marginalized communities. In evaluating impact, our report relies primarily on user reports, civil society research, and investigative journalism, because the platforms' tight grip on information obscures answers to systemic questions about the practical ramifications of their policies and practices.
Section III concludes with a series of recommendations. We propose two legislative reforms, each focused on breaking open the black box of content moderation, which renders almost everything we know a product of the information that the companies choose to share. First, we propose a framework for legally mandated transparency requirements that expand beyond statistics on the amount of content removed to include more information on the targets of hate speech and harassment, on government involvement in content moderation, and on the application of intermediate penalties such as demonetization. Second, we recommend that Congress establish a commission to consider a privacy-protective framework for facilitating independent research using platform data, as well as protections for the journalists and whistleblowers who play an essential role in exposing how platforms use their power over speech. Together, these frameworks will enable evidence-based regulation and remedies.
Finally, we propose a number of improvements to platform policies and practices themselves. We urge platforms to reorient their moderation approach to center the protection of marginalized communities. Achieving this goal will require a reassessment of the connection between speech, power, and marginalization. For example, we recommend addressing the increased potential of public figures to drive online and offline harms. We also recommend further disclosures regarding the government’s role in removals, data sharing through public-private partnerships, and the identities of groups covered under the rules relating to “terrorist” speech.
End Notes
1. Chinmayi Arun, "Facebook's Faces," Harvard Law Review 135 (forthcoming), https://ssrn.com/abstract=3805210.
2. See, e.g., Newley Purnell and Jeff Horwitz, "Facebook's Hate-Speech Rules Collide with Indian Politics," Wall Street Journal, August 14, 2020, https://www.wsj.com/articles/facebook-hate-speech-india-politics-muslim-hindu-modi-zuckerberg-11597423346.
3. Facebook's Community Guidelines also cover Instagram, as the Instagram Community Guidelines regularly link to and incorporate Facebook's rules regarding hate speech, bullying and harassment, violence and incitement, and dangerous organizations and individuals (among others). See Instagram Community Guidelines, accessed July 6, 2021, https://help.instagram.com/477434105621119?ref=ig-tos. This report does not address alternative content moderation models, such as community moderation, which have had comparative success at a smaller scale on platforms like Reddit.
4. See, e.g., Joseph Cox and Jason Koebler, "Why Won't Twitter Treat White Supremacy Like ISIS? Because It Would Mean Banning Some Republican Politicians Too," Vice, August 25, 2019, https://www.vice.com/en/article/a3xgq5/why-wont-twitter-treat-white-supremacy-like-isis-because-it-would-mean-banning-some-republican-politicians-too (describing remarks by a Twitter employee who works on machine learning and artificial intelligence (AI) issues, made at an all-hands meeting on March 22, 2019: "With every sort of content filter, there is a tradeoff, he explained. When a platform aggressively enforces against ISIS content, for instance, it can also flag innocent accounts as well, such as Arabic language broadcasters. Society, in general, accepts the benefit of banning ISIS for inconveniencing some others, he said. In separate discussions verified by Motherboard, that employee said Twitter has not taken the same aggressive approach to white supremacist content because the collateral accounts that are impacted can, in some instances, be Republican politicians.").