Trump’s Executive Order to Retaliate Against Twitter’s Fact-Checking

The order attempts to trigger regulatory scrutiny in a way that can only be read as an attempt to punish Twitter, an obvious violation of the First Amendment.

May 29, 2020

This originally appeared in Just Security.

President Donald Trump has been spoiling for a fight with Twitter and Facebook for a while now. Both companies have been under attack from the right for supposedly suppressing conservative viewpoints, although there is scant evidence for this allegation. In the face of this pressure, they have been wary of stepping on Trump’s toes, even going so far as to develop policies that essentially exempt politicians from the types of constraints that they impose on the rest of us.

Twitter broke with tradition this week by slapping a label on two Trump tweets in which he falsely claimed that voting by mail would lead to fraud (a violation of Twitter’s rules against election misinformation). Trump vowed strong action to stop what he considers private censorship, and yesterday afternoon he issued an executive order on “Preventing Online Censorship.”

Undeterred, Twitter this morning took even stronger action against a White House tweet retweeting a Trump message that took aim at activists protesting the killing of George Floyd in Minnesota. The White House tweet declared the military would take control and shoot looters. Twitter covered the tweet with a screen indicating that the contents violated the company’s rules against glorifying violence (although users can click through to see it) and limited the reach of the Trump missive by blocking sharing, replies, and “likes.”

Trump’s executive order taps into a range of concerns – from across the political spectrum – about the power and failings of social media companies. The order capitalizes on such complaints to try to trigger regulatory scrutiny in a way that can only be read as an attempt to punish Twitter for fact-checking Trump’s tweet, an obvious violation of the First Amendment. The order attempts to rewrite a statute passed by Congress and repeatedly interpreted and applied by courts to limit the liability of social media companies for their decisions to allow or remove posts. It seeks to keep federal agencies from advertising on “biased” social media platforms, while the president’s own re-election campaign spends millions on political ads on those very same platforms.

On some level, it looks like the president is throwing things at the wall to see what sticks. But even if the order does not actually lead to action, the threat of regulatory pressure is aimed at bullying social media companies into continuing their hands-off approach to Trump.

Parts of the order echo themes that people writing about content moderation have long sounded. Free speech is a fundamental value. Social media is the new public square. The major platforms have an outsized impact on public discourse. Platforms need to be transparent and accountable. The order frames all this in the language of right-wing grievance, but those of us concerned about the suppression of minority voices and disadvantaged groups have raised these concerns as well.

The order first takes aim at Section 230 of the Communications Decency Act, which gives companies like Facebook and Twitter immunity from civil liability both for hosting and for restricting access to content produced by their users. This law, too, has been the target of critics from the left and right and many in between. House Speaker Nancy Pelosi has said that tech companies are using Section 230 to avoid taking responsibility for misinformation and hate speech. Last year, Senator Josh Hawley (R-MO) introduced a bill that would condition Section 230 immunity for big social media companies on their ability to demonstrate that their content-moderation policies and practices were not politically biased.

Two provisions of Section 230 (reproduced below) are at play, and the order aims to combine them in a way that is both contrary to the statutory language and designed to be maximally threatening to social media platforms.

(c) Protection for “Good Samaritan” blocking and screening of offensive material

(1) Treatment of publisher or speaker

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

(2) Civil liability

No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

Section 230 provides two types of immunity for covered entities. Under Section 230(c)(1), companies like Facebook and Twitter are not to be treated as publishers or speakers of content created by their users. This immunizes the platforms from liability for failing to remove unlawful content, which is critical to their ability to operate. Free speech advocates are fans because, absent such protection, the platforms would be incentivized to remove a broad swath of posts and tweets out of fear of liability. But Section 230(c)(1) does not require the platforms to act in good faith to enjoy this immunity.

Section 230(c)(2)(A), on the other hand, protects platforms against wrongful-takedown claims if they have acted in good faith to restrict access to content that they consider objectionable. Ironically, Trump has benefited from the immunity created by Section 230(c)(1); in its absence, platforms would be much more aggressive in deleting posts that could expose them to liability.

A draft of the executive order that was leaked earlier in the day yesterday said it is the “policy” of the United States that a company that doesn’t meet the good-faith requirement of (c)(2) is acting in an editorial capacity and loses immunity under (c)(1). While this argument is not new, it directly contradicts the statutory language. In 2018, when Congress set out to force platforms to crack down on sex trafficking, it passed the Fight Online Sex Trafficking Act (FOSTA), which explicitly removed Section 230(c)(1) immunity for publishing third-party content related to sex trafficking – a carve-out that would have been unnecessary if (c)(1) immunity already hinged on good faith. And when courts have been asked to apply the “good faith” requirement of (c)(2) to the immunity provisions of (c)(1), they have declined to do so, citing the clear text of the statute.

The final version of the order pulls back from this position, instead stating that when a company “removes or restricts access to content and its actions do not meet the criteria of subparagraph (c)(2)(A), it is engaged in editorial conduct” and as a matter of U.S. policy, “should properly lose the limited liability shield of subparagraph (c)(2)(A) and be exposed to liability like any traditional editor and publisher that is not an online provider.”

But the order doesn’t give up on its goal of importing the good-faith requirement into the (c)(1) liability shield. It directs the commerce secretary, acting in consultation with the attorney general, to petition the Federal Communications Commission (FCC) to expeditiously propose regulations:

[T]o clarify and determine the circumstances under which a provider of an interactive computer service that restricts access to content in a manner not specifically protected by subparagraph (c)(2)(A) may also not be able to claim protection under subparagraph (c)(1), which merely states that a provider shall not be treated as a publisher or speaker for making third-party content available and does not address the provider’s responsibility for its own editorial decisions.

In other words, if a company is not acting in good faith in removing content, can it also be shielded from liability when it doesn’t remove content? Relatedly, the FCC is asked to weigh in on what it means for platforms to act “in good faith” when removing content under Section 230(c)(2), with the order highlighting the role of “pretextual removals” and process concerns as relevant to this determination.

As many have pointed out, the FCC is an independent agency and not directly under Trump’s control. It has touted a hands-off stance on internet regulation, famously undoing the Obama administration’s net neutrality order. And it can’t really tell Twitter how and when to remove content without running afoul of the First Amendment’s prohibition on government interference with private speech. Any attempt to do so would surely lead to a court challenge, hamstringing action by the FCC.

The order also seeks to trigger action by the Federal Trade Commission (FTC), suggesting that it investigate platforms’ implementation of content-moderation rules as “unfair or deceptive practices” and consider whether large platforms (the order specifically calls out Twitter) are violating the law. In evaluating complaints, the order says, the FTC should refer to an earlier section of the order that makes a breathtakingly broad claim – that it is “the policy of the United States that large online platforms, such as Twitter and Facebook, as the critical means of promoting the free flow of speech and ideas today, should not restrict protected speech.” Of course, platforms restrict protected speech all the time. That is the essence of content moderation, and Section 230 gives them the ability to do so without incurring liability. It is difficult to see the FTC wading into this fraught fight, given that it has shown little appetite for policing content moderation.

Opening yet another front, the order directs the attorney general to set up a working group with state attorneys general to consider the potential enforcement of state statutes prohibiting unfair and deceptive practices and produce model legislation for states that don’t have such laws on the books. The working group also is directed to collect information about a range of conservative grievances, such as the reliance on third-party fact checkers with “indicia of bias,” the demonetization of accounts that traffic in misinformation, and the perceived downranking of conservative content. While the FCC and FTC seem like the big guns in this fight, this provision might give new impetus to some states’ attempts to take on social media companies on the basis of claims of bias, despite the provisions of Section 230.

Finally, the order directs Attorney General William Barr to propose federal legislation that would accomplish the order’s policy objectives, essentially inviting Barr to expand on his previous attacks on Section 230.

It is entirely possible that none of the actions contemplated in the order will come to pass. But that is almost beside the point.

In the lead-up to an election widely predicted to be marred by a maelstrom of misinformation, and at a moment when the coronavirus pandemic limits Trump’s ability to hold in-person rallies, the executive order seeks to clear the way for him to share his mix of misinformation and inflammatory content without pushback or fact-checking from social media platforms.

Even before the executive order took effect, Facebook’s Mark Zuckerberg took to Fox News to argue, controversially, that he doesn’t believe platforms should be “arbiters of truth” for politicians who are widely fact-checked by traditional media. Twitter seems to have taken the opposite approach, doubling down on applying its rules to the president, as well as other world leaders. It remains to be seen how well Trump’s attempt to bully his favorite social media sites will work.