Fmr. Rep. Chris Cox
Knowing what you know now about the internet and how your venture turned out, what do you wish you had done differently from the beginning?
As the coauthor of Section 230, with my then-colleague Democratic Representative Ron Wyden, knowing what I know now about the internet, what do I wish I had done differently?
To answer this question, first a word about what we knew then. Twenty-five years ago, at the signing ceremony for the sweeping telecommunications legislation that included what today we know as Section 230, President Bill Clinton quoted President Thomas Jefferson: “He who receives an idea from me receives instruction himself without lessening mine.” Our shared vision for the internet of the future was of a virtual forum for exchanging information and ideas on an unprecedentedly broad scale that would enlighten the planet.
In Congress, we marveled at the possibilities. Students would have access to the world’s knowledge. Citizens would have myriad opportunities for self-expression and civic participation. Entrepreneurs could address markets far from home, at nearly zero cost. Physicians and scientists could collaborate more deeply and solve problems faster than had ever been possible in human history.
It is this vision that animated Section 230, and that motivated our admirably bipartisan cooperation. Clinton was not dispensing mere rhetoric when he congratulated Republicans and Democrats for our “work together in a spirit of genuine cooperation to advance the public interest and bring us to a brighter future.”
The Cox-Wyden bill, first known as the Internet Freedom and Family Empowerment Act before it was folded into the Telecommunications Act of 1996 and rechristened as Section 230, was an exemplar of this bipartisanship. Its two authors, one a Republican and the other a Democrat, joined with the overwhelming majority of our colleagues on both sides of the aisle in adapting for the internet age what Clinton called “outdated laws, designed for a time when there was one phone company, three TV networks, [and] no such thing as a personal computer.”
In legacy telecommunications, one provider published content over the airwaves or cable to millions of passive consumers. The internet flipped that model: now, millions of internet users could produce content for display on individual platforms. What’s more, that content could all be shared in real time. No internet platform could reasonably be required to take legal responsibility for what millions and ultimately billions of people were sharing in real time. If the essential features of the internet were to be preserved, that responsibility would need to rest with the person in the best position to prevent harmful content: the content creator. Section 230, therefore, makes content creators primarily liable. Platforms become liable, too, whenever they contribute to the development of others’ content, even if only in part.
What we know now is that on a scale barely imaginable even to the internet’s strongest champions back then, the medium has come to be defined by user-created content. Wikipedia, which did not launch until after Section 230 was enacted, now contains over 56 million articles, written and edited entirely by users at the rate of 1.9 edits per second. It is free to everyone with an internet connection. More than 1.7 billion unique visitors consult it every month. E-commerce is now driven by user content; a parade of marketing studies has established that the vast majority of retail consumers rely on customer reviews. Hundreds of the most popular social media websites connect friends and family exchanging news and views. By 2020 it was thought that the billions of social media content creators had finally reached the saturation point; but then the global pandemic hit, making those virtual connections more important than ever. Tens of millions of websites featuring content created by their users cater to every conceivable human interest. Google and other online search engines continuously crawl the world’s websites and, in response to almost any question we can think of, instantaneously link us to all of this user-created content.
We also know that by empowering billions of people to speak their minds, we have unleashed the whirlwind. The law that gives to anyone and everyone the opportunity to say what they will—limited only by what the platforms hosting this speech find objectionable—has come with costs in the form of obnoxious speech, dangerous speech, hate speech, and violent speech. Do the unquestioned benefits of user-created internet content outweigh these very real costs?
Two years ago, Jeff Kosseff, a professor of cybersecurity law at the United States Naval Academy, published a book with the catchy title The Twenty-Six Words That Created the Internet. In it, he posits that without Section 230, the internet would be reduced to little more than words, pictures, and videos provided by companies, bereft of interaction among users. Repeal would take us back to the days when the average citizen’s only means of public expression was writing a letter to the editor, hoping against hope it might get published. Losing Section 230 would deprive us of much in the way of knowledge and new ideas that help realize Jefferson’s ideal of people helping others to become wiser. But can Section 230 be improved? After twenty-five years of experience with the law, the answer is yes, but only if lawmakers are careful not to undo the essential elements that make Section 230 work.
Congress likely did not intend—I certainly did not—that when a platform is notified by a court that material on its site has been adjudged defamatory, the platform could thumb its nose and refuse to take down the offending content. The law as written does not shield this conduct, nor should it. But over vigorous dissent, California’s highest court has held otherwise. Even when a court has adjudged content to be libelous, and state law would normally require taking it down, the state-court justices ruled that Section 230 protects a platform’s decision to do nothing—even in the face of a court order. If I were transported back to 1996 and still holding the pen, I would tweak Section 230 to ensure it could not be interpreted to produce unjust results such as this.
Another judicially created problem is that platforms have sometimes been shielded from legal liability even when they themselves were involved in creating or developing the content at issue. That is certainly not what the statute says. In 2008, the United States Court of Appeals for the Ninth Circuit corrected this interpretive error in a landmark case, and since then most other courts have avoided this error as well. Had I seen this coming in 1996, I would have added a few more words to Section 230 to state even more clearly that platforms can be content creators or developers themselves—and when they are, they have no protection from liability.
A handful of other issues that have arisen around Section 230 over the last quarter century are spurious. It is frequently asserted that Section 230 shields a platform even when it moderates content out of purely political bias. No court has said this. So, even knowing what we know now, I would not necessarily do anything differently were I somehow transported back to 1996 like Marty McFly. Because the First Amendment gives wide latitude to private platforms that choose to prefer their own political viewpoints, Congress can (in the words of the First Amendment) “make no law” to change this result.
The same holds true for demands that Section 230 be interpreted to require “viewpoint neutrality.” Section 230 governs millions of websites that host user-created content. Asking them all to follow government neutrality guidelines goes beyond impracticability to impossibility. May the Democratic National Committee and Republican National Committee no longer have websites featuring user-created content? Must they host political speech they disagree with? Section 230 does not require political neutrality because, as is stated in its preamble, the internet we envisioned consists of a “vibrant and competitive free market” that, far better than government speech controls, is capable of enabling “a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.”
There are other tweaks I might be tempted to make to my own legislative handiwork if I were to find myself back in 1996, still holding the pen. But then again, I might restrain myself, knowing as I do now how many aspects of the modern internet that we have come to take for granted depend upon Section 230’s protections. Even though it is possible to imagine that a “perfect” bill could improve Section 230 while preserving its benefits, I’ve seen the sausage-making process up close. It’s just as likely that opening the door to more changes would, instead of perfecting Section 230, threaten the essential elements that we now know have made it work. I’d offer that same advice to my quarter-century younger self, and to those in Congress today who are wrestling with these very problems.
Lessons From the First Internet Ages
This essay is part of Lessons From the First Internet Ages, a Knight Foundation virtual symposium exploring, thirty years after the creation of the first web page, what key figures in the internet’s development have learned about its impact on communication, connection, and democracy.