Nicole Wong – Knight Foundation

Knowing what you know now about the internet and how your venture turned out, what do you wish you had done differently from the beginning?

We have lived in the shadow of Facebook’s once revered, and now increasingly damning, mantra “move fast and break things” for more than a decade. A rallying cry to Silicon Valley’s engineers and entrepreneurs, it encouraged speed to market and disdain for orthodoxies and institutions on the path to a better future. This ethos of rebellious optimism has its roots in the early years of the internet. In 1996, internet icon (and Grateful Dead lyricist) John Perry Barlow penned “A Declaration of the Independence of Cyberspace,” and proclaimed self-sovereignty for the new virtual world. He wrote: “We believe that from ethics, enlightened self-interest, and the commonweal, our governance will emerge. Our identities may be distributed across many of your jurisdictions. The only law that all our constituent cultures would generally recognize is the Golden Rule.”

Looking back at nearly twenty-five years working in the tech sector, I remember that early period as imbued with a sense of freedom and possibility. We believed the broad reach of the internet would empower previously unheard or ignored minorities. It would improve democracy and circumvent authoritarians. New modes for connection and sharing would bring us closer together. In this new landscape, some, like Barlow, believed we would neither need nor want governments to regulate the internet.

Much of this has been true, and still, it seems terribly naive now.

In part, this is because the early internet was not really connecting all the people of the world. It was being nurtured in the laboratory of Western democracies, which were structurally, politically and culturally aligned. It was also largely populated by the more educated, more white and more male segments of those societies. When social media platforms, including Facebook, YouTube and Twitter, sought to “make the world more open and connected” across countries and cultures, they moved fast and broke some things. We are still discovering the scope and severity of the damage, not least of which has been the fraying of democratic institutions. As in the physical world, we have not figured out a way for diverse and demanding voices online to exist together harmoniously. Instead, we have a virtual Tower of Babel.

Reining in the harms born of the global internet economy, like many hard problems, will require an all-of-society, multifaceted approach. Recalling Lawrence Lessig’s “pathetic dot theory,” we must look to the constraints imposed by laws, norms, markets and code. We need nuanced legislation that restrains large tech players, punishes exploitative practices and lets nascent competitors thrive. We must figure out how to fund a healthier ecosystem, one that does not rely on exploiting people’s data, attention and worst instincts. We need to adopt and build on technical design principles that support such an ecosystem. Most of all, we must demand more of our public discourse, and work to shift and align our norms of communication in a global community.

A little history

The first ten years of the commercial World Wide Web were characterized by creativity, energy and, above all, optimism. The web of that early period was also largely developed and enjoyed in the United States, Canada, Western Europe, Australia and Japan—democracies that are similar in structure and substance on matters of rule of law, free expression and privacy. Starting around 2005, however, the internet population changed: we witnessed the rise of the internet in a new set of countries, including China, India, Thailand, Russia, Turkey, Egypt, Saudi Arabia and Brazil. These countries differed from the “first generation” internet countries in terms of social norms, political institutions and legal restraints on individual expression and autonomy.

At the same time, increasing computing power spurred development of new online image and video services, like Flickr and YouTube. The growing global user base could communicate instantaneously and cheaply. Freed from physical and language barriers, images told stories with the ability to delight, disgust and sometimes inflame people in faraway places. The mostly US-based tech companies became the arbiters of international disagreements across an ever-expanding array of countries, people and cultures. A video uploaded in the United States might outrage someone in Turkey, and the complaint might be reviewed by a contract worker based in the Philippines.

By 2009, Google’s services had been blocked in twenty-five countries because of objections to the content available on products like Blogger, YouTube, the social networking platform Orkut and, of course, Google Search. I was deputy general counsel at Google at the time, responsible for the regulatory compliance of our products around the world, and this was a catastrophe.

These government-directed blocks sought to smother the protests of citizens against authoritarian rulers, like the Saffron Revolution in Myanmar, the Green Movement in Iran and, later, the Arab Spring stretching from Tunisia to Bahrain.

Yet even with these episodic challenges, online products and services grew dramatically over the last decade, and so did the demographic breadth of online users, particularly for mobile and social media platforms. In the United States in 2005, just 5 percent of adults used social media. Today, 72 percent of Americans use social media to connect with one another, engage with news and entertainment content and share information. New people and new voices online have given spark and sustenance to movements like #BlackLivesMatter and #MeToo, serving as more evidence that these platforms can empower the vulnerable and the previously unheard.

As the early internet developers hoped, the platforms have been a virtual hub for connection and the creation of common purpose. They help victims of abuse find one another and find solace. They help cancer patients link up with researchers to advance medicine. They help survivors of disaster with rescue and aid. And as we have poignantly discovered during this time of COVID-19 lockdowns, they enable the everyday grace of families and friends staying in touch. With all of these needs seemingly bridged by technology, we thought the worst thing that could happen—the greatest threat to internet companies and the communities they serve—was to be blocked in a country or prevented from making our platforms and their content available freely.

What we’ve discovered in the last few years, however, is that being blocked may not be the worst thing that can happen. Being turned into a weapon against our users and against our own government may be worse. So, here we are.

I did not foresee the broad and coordinated weaponization of these open and free spaces that we built and advocated for. For a period, the bad actors could be managed or minimized. But, over time, these spaces have become playgrounds for trolls. They have read our terms of service, and they come right up to the line of acceptable behavior and then dominate these platforms for the lulz. The ecosystem that we hoped would encourage the vulnerable to speak freely and help communities gather around common interests is instead used by bad actors to bully, harass, threaten and take up so much space on the platform that they push other users off the service.

As has also become evident in the last few years, these spaces have been infiltrated by malicious state actors and self-identified insurrectionists. They use the same trolling techniques, not just for entertainment, but to undermine our institutions, our communities and our trust in one another, in known facts and in our democracy.

Defining the problems

Let me start by saying that we (particularly policymakers, the media and the public) need to recognize the wider societal dynamics that tech did not create and cannot fix. We are experiencing the lowest point of trust in institutions since the 1960s, and the policy debates and failures of government—from infrastructure to income inequality—existed long before the internet. According to a recent Pew Research Center study covering the years 1958 to 2021, the public’s trust in government peaked in 1964 at 77 percent and has been in long decline since. Since 2007, the share of Americans who say they can trust the government “always or most of the time” has not risen above 30 percent. It is currently at 24 percent. The discontent and rancor expressed on social media today are not the simple product of internet trolls, filter bubbles or opaque algorithms, but are rooted in our fundamental failure to deliver on the real needs of the country.

Likewise, the constellation of intolerance and hatred we’re living in—the misogyny, homophobia and transphobia, racism and anti-Semitism—is not the creation of technology. Certainly, the social media platforms, the algorithms and the use and abuse of the data all play a role that must be addressed. But the destructive and systemic biases in our society are on all of us, and no tech regulation or new product offering alone is going to fix them. This is not to say that we should not demand change and additional effective regulations for the tech sector. Indeed, we in the tech community must be honest about the troubling shift in how our platforms and products are used. We should not minimize the dangers created on these platforms that we enable or encourage. We likewise should not overstate what we can solve or make better. We should work urgently and creatively to build a better ecosystem.

As we address the swamp of bad online content and the exploitative practices of the tech industry, we should be crisp about the nature of these problems and, as Daphne Keller has urged in her work, we should strive to be targeted in crafting solutions. Much of the current policy discussion is not well informed about either the problems or the technology, and largely fails to consider the international landscape, notwithstanding existing global human rights models. Instead, various proposals in the United States and abroad are moving toward policies that would require the enormous volume of uploaded content to be prescreened, quickly removed or not removed at all. Many of these proposals are simultaneously overly broad and too narrow, contradictory in their goals and, in some cases, resemble the blunt tools of authoritarian governments, not worthy of our democracy.

Building the world we want to live in

I have spent a career helping to build what I hoped would be the most democratized communications platform in human history, allowing us to reach across oceans and cultures and experiences for some common purpose and benefit. I still believe in that vision, but I think we have forgotten that we have to fight for it in every feature, in every new technology or algorithm and in every business model. Instead, it seems that we are building systems that let some of our worst instincts and excesses run over us. But it doesn’t have to be that way.

When I started at Google in 2004, the pillars for web search were comprehensiveness, relevance and—most of all—speed. Comprehensiveness to make sure that the breadth of information available on the web could be searched. Relevance to deliver results that responded to the user’s query, and were usefully ranked in terms of importance and accuracy. And speed, which we found makes a measurable difference in whether someone will use the service or look elsewhere.

In the mid-2000s, with the rise of both social networks and behavioral profiling on the web, the pillars for product design changed in important ways. Understanding more about a user and his or her network of friends, activities and interests took primacy, and began to drive both product and business models. The pillars shifted to personalization (tailoring content to what we already know about the user), engagement (encouraging and measuring our success and profit based on how long a user stayed on the service) and, once again, speed. As we now know, the combination of these new pillars has been a rocket ship for the most inflammatory, polarizing content. It is a toolset for manipulation.

We won’t change the nature of today’s internet and create healthy conversations just by taking down more content, having more rules or being more transparent about how we handle complaints. All of those practices are good, but none of them fundamentally change the product that is operating as designed: encouraging engagement in part by promoting highly viral content, filtering for personalization so users narrow rather than expand their world, and fetishizing the speed at which we deliver information.
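To make that shift concrete, consider a toy feed ranker. This is a minimal sketch, invented entirely for illustration: the posts, signals, weights and scoring functions below do not describe any real platform’s algorithm. It simply shows how changing the pillars a ranker optimizes for changes what surfaces.

```python
# A toy feed ranker, invented for illustration only. The posts, signals,
# weights and scoring functions do not describe any real platform's algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_clicks: float    # engagement signal, 0 to 1
    predicted_shares: float    # virality signal, 0 to 1
    source_reliability: float  # accuracy signal, 0 to 1
    has_context_link: bool     # e.g., points back to original reporting

def engagement_score(p: Post) -> float:
    # "Engagement" pillar: reward whatever gets clicked and shared.
    return 0.6 * p.predicted_clicks + 0.4 * p.predicted_shares

def context_score(p: Post) -> float:
    # Alternative pillars: weigh accuracy and context alongside interest.
    return (0.3 * p.predicted_clicks
            + 0.5 * p.source_reliability
            + (0.2 if p.has_context_link else 0.0))

posts = [
    Post("Outrageous rumor!", 0.9, 0.95, 0.1, False),
    Post("Careful local reporting", 0.4, 0.2, 0.9, True),
]

print([p.title for p in sorted(posts, key=engagement_score, reverse=True)])
# -> the rumor ranks first
print([p.title for p in sorted(posts, key=context_score, reverse=True)])
# -> the reporting ranks first
```

The point of the toy is simply that ranking reflects a choice of pillars: change the weights, and a different kind of content rises to the top of the feed.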

What if we decided that’s not the world we want to live in? What if, like the Slow Food movement that originated in Italy in the 1980s, we demanded a change in the norms and values of what we consume? In information services, what if we designed products to optimize for authenticity, accuracy and context? We are starting to see social media companies experiment with small feature changes, like Twitter’s warning labels and the friction it has added to retweeting, or Facebook and Instagram hiding like counts. It is unclear whether these changes actually have a positive impact on user behavior and, with the countervailing pressures of ad-driven revenue and user engagement, unclear whether they will ever get past beta status.
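As a small illustration of friction as a design choice, here is a hypothetical sketch loosely modeled on Twitter’s prompt asking users whether they want to read an article before retweeting it. The function names and flow are invented for this sketch; they do not reproduce Twitter’s implementation.

```python
# A minimal sketch of "friction" as a design choice, loosely modeled on
# Twitter's read-before-you-retweet prompt. Names and flow are hypothetical.
from typing import Callable

def attempt_reshare(user_opened_link: bool,
                    confirm: Callable[[str], bool]) -> bool:
    """Return True if the reshare goes through."""
    if not user_opened_link:
        # One extra step: not a block, just a pause that asks for intent.
        return confirm("You haven't opened this link. Share anyway?")
    return True

# Example: a user who never opened the article and declines the prompt.
def decline(prompt: str) -> bool:
    print(prompt)
    return False

print("shared" if attempt_reshare(False, decline) else "not shared")  # not shared
```

The friction does not forbid anything; it only interrupts the reflex, which is precisely why it works against the speed pillar described above.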

Or, we might look at the wider internet ecosystem, as Ethan Zuckerman has done in his work “Reimagining the Internet.” What if we supported a digital public infrastructure designed to fill the civic void left by the current social media landscape? Free Press and others have suggested a tax on services supported by targeted advertising in order to fund public service media and local journalism. What if we built an online public commons on the principles of openness, participation and resilience? I put these options forward only to illustrate that the design pillars can and do change, and a different world is possible. But we must do the hard work of envisioning the world that we want to live in, bringing along the broad swath of people who will live there with us, and then building it.

