Reflections on Lessons from the First Internet Ages

In November 2021, Knight Foundation held a major conference titled Lessons from the First Internet Ages. I co-organized the event with Mary Anne Franks from the University of Miami and the Knight team. It was a remarkable collection of experts discussing some of the thorniest issues in Internet law and policy, and I learned a lot from them.

This post examines a few standout statements that participants made (as paraphrased by me).

“No one wants to return to the Internet of old.” ––Safiya Umoja Noble

I disagree. I see widespread efforts across the political spectrum to restore some of the 1990s Internet’s worst elements. 

First, Internet services do a lot more content moderation now than they did in the 1990s, and those content moderation efforts are essential to the integrity of the modern Internet. Yet there is substantial support today, mostly in conservative circles, for reducing or eliminating content moderation. Florida and Texas even passed laws (currently enjoined) limiting content moderation by Internet services. These regulatory efforts would functionally dial back content moderation to 1990s levels.

Second, there is substantial interest, mostly in liberal circles, in discouraging services from using algorithms to decide which user-generated content (UGC) to highlight, in favor of “non-algorithmic” ordering approaches like reverse chronological order (RCO). RCO was the 1990s default method for ordering content because better options didn’t exist. RCO does not work for most UGC services: it’s easily gamed by malefactors and does a terrible job of satisfying readers’ priorities. Yet many of the anti-algorithm proposals would take us back to the lousy content ordering of the 1990s.

Third, we’re seeing a resurgence of services offering professionally produced content behind paywalls, similar to the business models of commercial online services in the 1990s. As part of the Web 2.0 revolution, the 1990s walled-garden content publishers were eclipsed by services making UGC available to readers for free. Many legislative proposals would make UGC financially unsustainable, driving services back to professionally produced content available behind paywalls (or out of the industry entirely). These regulatory efforts would resurrect the publication ecosystem of the 1990s, when information was scarce and expensive—with many unwanted distributional consequences.

If Prof. Noble had said that no one should want to return to the Internet of old, I would enthusiastically agree. However, as a descriptive matter, many people are dedicated to trying to take the Internet back to the 1990s.

“How can we create the equivalent of online parks?” ––Sarita Schoenebeck

Parks play a special role in our society as taxpayer-funded community gathering spaces, and as such, the law treats them as free-speech zones (in constitutional parlance, “traditional public forums”). Superficially, an online equivalent sounds attractive: a digital park would provide a haven for free speech without content moderation by for-profit entities. However, because the courts protect free speech in parks so vigorously, the government couldn’t do much content moderation itself without running into First Amendment problems. As a result, “digital parks” would quickly turn into cyber-cesspools dominated by trolls, spammers, and malefactors—an unhelpful “alternative” to the content moderation functions that UGC services provide today.

“Insurers can teach Internet services how to avoid and manage their legal risk.” ––Esther Dyson

Insurance companies currently play little role in establishing UGC standards or moderation practices because Section 230 largely eliminates the legal risks that require insurance. Esther’s point was that if Section 230 were carved back, insurers would actively help UGC services set their risk management standards as conditions of issuing insurance. But do we want insurance companies setting the de facto standards for Internet discourse? The social value of free expression doesn’t factor into insurers’ bottom line.

Also, insurance doesn’t cover criminal liability. Without Section 230, thousands of state legislatures, county boards, city councils, and other regulatory bodies would be free to create an infinite variety of uninsurable criminal laws that would overwhelm any risk management benefits insurance companies might provide.

“The problem is the people, not the technology.” ––Esther Dyson

The Internet enables humans to interact with each other. For millennia, human interactions have led to conflict. Given this history, it’s unreasonable to expect that there will be zero conflict or antisocial behavior on the Internet. No human system has ever achieved that. Instead, the quantum of antisocial behavior online should be compared against the quantum of antisocial behavior that takes place offline. Otherwise, we are blaming the technology for people doing what people do.

“Can capitalism produce the kind of social media we want?” ––A distillation of many comments

Many conference critiques riffed on a core theme: Can capitalism produce the kind of online discourse that benefits society? After all, companies chasing profits will prioritize their self-interest over society’s. Especially when it comes to health and safety issues, unregulated capitalist activity can create negative externalities that harm or imperil society. For that reason, governments must set guardrails in capitalist economies. Many conference participants questioned whether we have enough guardrails for capitalist UGC services.

The analogy to other regulated markets only holds in part. When the product is “speech,” government-set guardrails can turn into censorship—and, inevitably, governments will set the rules to entrench their own power more deeply. This makes the regulation of “speech markets” qualitatively different from the regulation of products and services that don’t pose potential threats to the government’s incumbency.