Digital Platforms’ Power Over Speech Should Not Go Unchecked

A new Gallup/Knight report looks at how Americans weigh free expression online against the threats posed by harmful content. Gallup and Knight invited several experts to weigh in on these findings and to place them within the broader context of public debates about online media and free expression. Their views are offered in a personal capacity and do not reflect the views of Gallup, Knight Foundation, or the organizations with which they are affiliated.

Trust in major institutions is at a low point. Americans have little faith that corporate executives and politicians will prioritize their interests. The “New Governors” (as Kate Klonick calls them[1]) of online speech are no exception. As the Gallup/Knight poll finds, a majority of Americans do not trust social media companies to make the right decisions about users’ harmful activities. Yet most Americans also want the government to stay far away from decisions about whose speech is prominently displayed, blocked, removed or muted.

Skepticism about government control over speech is a deeply rooted American tradition. A bedrock principle underlying the First Amendment is that government cannot censor the expression of an idea because society finds the idea itself offensive or distasteful. We distrust government to pick winners and losers in the realm of ideas because it might silence those threatening its power. But our distrust is not always warranted. Concerns about government censorship are misplaced for online activity that amounts to conduct rather than expression. As Mary Anne Franks and I have argued, the internet is not a “speech conversion machine.”[2] Many online activities have little to do with speech, and their offline analogues would not be viewed as “speech” for First Amendment purposes. Even if online activity has First Amendment salience, it may warrant little or no protection, as is the case for true threats, speech integral to criminal conduct, defamation of private people, fraud, obscenity, and the imminent and likely incitement of violence.

Distrust of tech companies is having a moment on Capitol Hill. Social media companies have been criticized both for removing too little user-generated content and for removing too much. For instance, Sen. Ted Cruz (R-TX) and other conservative lawmakers rail against major technology companies, including Twitter, Facebook and Google, contending that they censor conservative voices.[3] Some lawmakers favor treating online platforms as quasi-governmental actors bound by a commitment to viewpoint neutrality.

Legally mandated platform neutrality would jeopardize — not reinforce — free speech values. Social media companies could not ban spam, doxing, threats, harassment, nonconsensual pornography or deep fakes. They could not combat cyber mob attacks that chase people offline. They could not filter or block sexual-privacy invasions to mitigate the damage those invasions inflict. It is desirable for online platforms to combat online abuse that imperils people’s ability to enjoy life’s crucial opportunities, including the ability to engage with others online. Empirical evidence shows that cyber harassment has chilled the intimate, artistic and professional expression of women and people from marginalized communities.[4] Over the past ten years, I have worked with tech companies (without compensation) to help them address online abuse, as they are free to do as non-state actors. Those efforts have been important to victims of cyberstalking and invasions of sexual privacy.[5]

At the same time, there is good reason to worry about tech companies’ influence over people’s ability to express themselves. The power online platforms wield over digital expression should not go unchecked, as it does in crucial respects today. Federal law ensures that platforms face little risk of liability for user-generated content and imposes no requirement of responsible content moderation.[6] Under Section 230 of the Communications Decency Act, providers and users of interactive computer services enjoy a shield from legal liability for under- or over-filtering user-generated content, with a few narrow exceptions. Courts have interpreted Section 230’s legal shield broadly. It has immunized sites that encourage users to engage in illegality or that are designed to enhance the visibility of obviously harmful and illegal content. In short, Section 230’s immunity has allowed platforms to monetize destructive online activity without bearing the costs wrought by their operations. It has also stripped victims of any leverage they might have had to get harmful content taken down.

Section 230, as currently interpreted, is not a clear win for free speech. As Benjamin Wittes and I have argued, “[i]t gives an irrational degree of free speech benefit to harassers and scofflaws but ignores important free speech costs to victims.”[7] A robust culture of free speech online can be achieved without shielding from liability sites whose business model is abuse.

Federal lawmakers have expressed interest in a statutory fix that Wittes and I have proposed: conditioning Section 230’s legal shield on reasonable content moderation practices. Under that proposal, platforms would enjoy immunity from liability if they could show that their content moderation practices, writ large, are reasonable. The revised Section 230(c)(1) would read as follows, with the proposal’s new language set off in brackets:

“No provider or user of an interactive computer service [that takes reasonable steps to address unlawful uses of its service that clearly create serious harm to others] shall be treated as the publisher or speaker of any information provided by another information content provider [in any action arising out of the publication of content provided by that information content provider].”

It is time to reform Section 230 to ensure that tech companies’ power over user-generated content is wielded responsibly. Section 230 should be amended to condition its immunity on reasonable content moderation practices rather than the free pass that exists today. That reform would help close at least some of the trust gap between Americans and tech companies.

Danielle Citron is professor of law at Boston University School of Law and the vice president of the Cyber Civil Rights Initiative. She is a 2019 MacArthur Fellow.
