Analysis

Meta’s Oversight Board Needs Access to Facebook’s Algorithms to Do Its Job
 

Without transparency there will be no accountability on content moderation.

June 17, 2024

This article first appeared at Just Security.

The Oversight Board, established in 2020, was meant to provide accountability for Meta’s decisions about what speech should be allowed on Facebook and Instagram. It has weighed in on several highly consequential cases and addressed major flashpoints of disagreement around the world, some referred by the company and others selected by the Board itself. The Board reviewed the deplatforming of former President Trump for his posts relating to the January 6 attack on the U.S. Capitol and weighed in on Meta’s approach to removing COVID-19 misinformation. Relying on an international human rights framework, the Board has routinely overturned the company’s decisions on a variety of topics, including the removal of posts from India and Iran that Facebook had interpreted as threatening violence; the deletion of a documentary video that revealed the identities of child victims of sexual abuse and murder from Pakistan in the 1990s; and a video of a woman protesting the Cuban government that described men in dehumanizing terms. The Board’s decisions have provided the public with important information about Meta’s policies and practices and a considered evaluation of competing human rights values. Its policy recommendations continually push Meta toward more specific and equitable rules and greater transparency.

But the Board’s capacity is limited: it normally takes 15 to 30 cases each year, and the influence of these decisions on Meta’s broader set of content moderation rules seems quite limited, because the company essentially controls how those rules are implemented in like cases. According to the Board’s charter, “where Facebook identifies that identical content with parallel context remains on Facebook, it will take action by analyzing whether it is technically and operationally feasible to apply the board’s decision to that content as well.” Moreover, the company insists on keeping the Board in the dark about a critical aspect of its operations: the algorithms that control the “overwhelming majority” of decisions about whether to remove posts, amplify them, or demote them. As a result, the Board is confined to influencing only a small portion of Meta’s millions of daily decisions about speech on Facebook, Instagram, and now Threads. It is also hampered in properly evaluating cases because it cannot, for example, fully analyze the threat to safety posed by a post without information about how algorithms have amplified or suppressed it. For the Board to fully serve its function of providing accountability for Meta’s regulation of speech on its platforms, it must have access to the algorithms at the heart of this system.

While the Oversight Board is restricted in the types of cases it can review, its charter gives it flexibility to exert its influence in other ways. The Board has made the most of this by, for example, adopting an expansive view of its authority to issue policy advice; it has released 251 recommendations in the last three years. A 2021 article by Edward Pickup in the Yale Journal on Regulation Bulletin convincingly argues that the Board’s mandate also encompasses a “dormant power” to review Facebook’s algorithms.

The authority to review algorithms is central to the Board’s decision-making in many types of cases. According to the Board’s charter, “[f]or cases under review, Facebook will provide information, in compliance with applicable legal and privacy restrictions, that is reasonably required for the board to make a decision.” (A similar provision in the section outlining the Board’s powers gives it authority to “request that Facebook provide information reasonably required for board deliberations in a timely and transparent manner”). In the Trump case, the company declined to answer questions about how its platform design, algorithms, and technical features may have amplified Trump’s posts or contributed to the events of January 6. But as the Board has explained, information about the reach of Trump’s posts was clearly relevant to the Board’s evaluation of key issues: the risk of violence posed by Trump and whether less restrictive measures were possible. In another recent case, the Board reversed the company’s takedown of a post about COVID-19 misinformation, because Facebook did not show how the post “contributed to imminent harm,” in part because it had not provided information on “the reach” of the post.

Another potential avenue for obtaining access to relevant algorithms is the Board’s explicit authority to “[i]nterpret Facebook’s Community Standards and other relevant policies (collectively referred to as ‘content policies’) in light of Facebook’s articulated values” for cases properly before the Board. In the context of analyzing the Board’s authority to issue advisory opinions, Pickup convincingly argues that algorithms are part of the bundle of “content policies” because they are a set of rules that are applied to particular cases. Indeed, the point could be taken further: algorithms are in fact a manifestation in code of the Community Standards that they are meant to reflect. Thus, for the Board to fulfill its explicit mandate of interpreting Facebook’s content policies to decide cases, it must logically have access to this code. This is not the same as wide-ranging authority to review the company’s algorithms. But the Board has creatively used its authority to make recommendations to Meta in the course of reviewing cases and tracking the company’s responses, which creates an aperture for reviewing the algorithms that drive content decisions.

The Board recognizes that weighing in on algorithms is central to the success of its mission. One of its 2024 Strategic Priorities is “how automated enforcement should be designed and reviewed, the accuracy and limitations of automated systems, and the importance of greater transparency in this area.” And it has in several cases made general recommendations relating to algorithms, exhorting Meta to: commission a “human rights impact assessment on how its newsfeed, recommendation algorithms, and other features amplify harmful health misinformation and its impacts”; improve “automated detection of images”; and conduct an internal audit to “continuously analyze a statistically representative sample of automated content removal decisions to reverse and learn from enforcement mistakes.”

Meta should cooperate with Board efforts to delve further into algorithms and provide it with access to the information it needs to do so, with necessary protections for confidentiality. The company—along with other social media platforms—has long been criticized for using algorithms that drive extreme and inflammatory content that keeps users engaged and ad dollars flowing. So far it has managed to keep these systems secret. But the pressure for algorithmic transparency that has been growing over the last several years, including in Congress, is unlikely to relent. The Oversight Board provides an important avenue for Meta to satisfy the public clamor to understand how these systems work.