Solve the Underlying Problem: Treat Social Media as Ad-Driven Companies, Not Speech Platforms

A new Gallup/Knight report looks at how Americans weigh free expression online against the threats posed by harmful content. Gallup and Knight invited several experts to weigh in on these findings and to place them within the broader context of public debates about online media and free expression. Their views are offered in a personal capacity and do not reflect the views of Gallup, Knight Foundation, or the organizations with which they are affiliated.

Most Americans did not need the White House’s recent Executive Order, a deeply flawed stunt, to learn that users have grown conflicted about social media. The Gallup/Knight survey results indicate that, while users believe that online platforms are important “places of open expression,” they are also warier than ever about the ways those companies distribute misleading health policy information, disinformation about elections, bigoted trolling and other harmful content.

Social media companies have been responsive to these concerns. Twitter and Facebook, for example, recently announced new user tools to control the ways in which trolls and bigots slide into online “conversations” and user-generated groups. And they have used their constitutionally protected editorial prerogative to flag lawful user content that they find hateful or dangerously misleading, including the posts of a politically craven President. The most far-reaching reform, however, is Facebook’s recent launch of its highly anticipated Oversight Board, which the company created to help guide content moderation decision-making on controversial issues. The board’s decisions about the company’s takedown efforts will, apparently, be final.

These reforms are important, but users evidently remain conflicted. On the one hand, users want social media to be “places of open expression,” free from direct government regulation. This is presumably why a majority would retain the exceptional legal immunity that social media enjoy under Section 230. They would rather internet companies regulate themselves than allow policymakers to do it. On the other hand, users also do not want intermediaries to facilitate pedophiles, fraudsters, bigots and trolls. In this regard, a majority believe Section 230 has done more harm than good to the extent it shields companies from liability for distributing illicit content.

This is the paradox that Section 230 doctrine has given us. It sits in the breach, as it aims to encourage expressive conduct online while also incentivizing intermediaries (not policymakers) to regulate that content. Courts or policymakers should reform the law to compel intermediaries to do more.

We get there by recognizing that social media are not merely platforms through which users make genuine connections. Rather, most popular intermediaries design and administer their services above all to sustain deep user engagement. They do this by, among other things, amplifying provocative content that friends and other in-network users post, like or dislike. The Wall Street Journal recently reported, for example, that an internal assessment at Facebook in 2018 concluded that its social media service’s algorithms feed users “more and more divisive content in an effort to gain user attention & increase time on the platform” if left unchecked.[1] This is to say nothing of the other unknown “black box” variables on which their automated decision-making systems rely.[2]

Consider the way, last month, a flawed “preprint” Stanford research paper circulated broadly on social media before scientists could vet its findings. The paper purported to show a fatality rate for COVID-19 that is far lower than the prevailing research shows. Within a week of its first appearance on Twitter, knowledgeable medical researchers blasted the paper’s methodology and findings. By then, however, a phalanx of conservative activists and media personalities had already cited the paper’s findings to mobilize #ReopenAmerica and #BackToWork, which spread like wildfire.[3] This spread occurred even after Twitter and Facebook initiated efforts to label or remove misleading posts about COVID-19[4] and enlisted fact-checkers from across the world under a new $1 million grant program.[5]

There is much to credit in these specific efforts since disinformation on COVID-19 is a life-and-death concern. But, as with similar recent reforms, they do not remedy the real problem. Pseudoscience about medicine and vaccines[6] and disinformation about politics and elections continue to spread virally. Facebook’s distribution and targeting techniques continue to deliver unlawfully discriminatory ads[7] in housing and job markets, even after the company promised to stop over a year ago. To say that social media are mere “places for open expression” in light of all of this substantially obscures the underlying political economy at work.[8] This fact is as plain as day in the context of Facebook, which has denied the Oversight Board the authority to make any binding decisions about advertising policy.

Policymakers would have an array of lawful regulatory tools available to them if they recognized social media for what they are: ad-driven companies. They might, as in Europe, for example, impose restrictions on how intermediaries may market or leverage user data with advertisers. Or they might empower users themselves to limit certain commercial uses of their personal data. These noncontent-based reforms would create healthy friction in the ad-based business model, which would also have the salutary effect of de-emphasizing provocative content. More pertinently, policymakers could require intermediaries to internalize some of the harms and social costs of their content moderation systems by narrowing the protection under Section 230. After all, we allow consumers to sue practically all other kinds of companies for secondary liability to promote fairness and regulatory efficiency. I have argued elsewhere that, when evaluating whether an online intermediary materially contributes to illicit or illegal content, for example, courts should attend to the ways in which it designs its services and targets certain audiences.[9]

The mood for reform is palpable, but I would not start with the plan set out in the Executive Order. Never mind that it was animated by the President’s aim to bully Twitter into letting his outrageous posts circulate freely. It is flawed because it orders the Commerce Department to petition the Federal Communications Commission, an independent agency, to promulgate a narrow interpretation of a specific Section 230 provision, even though most legal experts doubt the FCC has the authority to issue binding legal rules under that provision.

No, policymakers’ most serious reforms to Section 230 sit in Congress. The most prominent among them, however, is likely unconstitutional[10] or may undermine end-to-end encryption[11] for user messaging. Legislators could avoid these pitfalls if, as Danielle Citron has proposed,[12] they simply condition Section 230 immunity on intermediaries’ reasonable efforts to block illicit content.

It is good to see that policymakers are beginning to see past the romantic idea that social media are mere “places of open expression.” The Gallup/Knight survey results suggest that users are not there yet, but that they are at least one step closer.

Olivier Sylvain is professor of law and director of the McGannon Center for Communications Research at Fordham University.
