
What happens when corporations are put in charge of free speech online?

Emily May is co-founder and executive director of Hollaback!, which receives Knight support. She is the latest contributor to a Knight Foundation series on the First Amendment. Knight recently announced the launch of the Knight First Amendment Institute at Columbia University to promote free expression in the digital age.

The internet is the new battleground over free speech—and the battle lines between free speech and online harassment are being drawn not by the legal system, but by social media companies.

Last year, my colleagues and I set out to make sense of how social media companies define online harassment. We worked with five social media companies to develop safety guides on online harassment that would be easy for people who are being actively harassed to navigate.

What we found was a sea of definitions with little to no consistency. For example, on Twitter, what counts as abuse “must fit one or more criteria: reported accounts sending harassing messages; one-sided harassment that includes threats; incitement to harass a particular user; or sending harassing messages from multiple accounts to a single user.” Twitter then cites examples such as impersonation, doxxing—where someone searches for and maliciously publishes private information about someone—or threats.

Meanwhile on reddit, the definition of harassment hinges on the expectations of a “reasonable person.” Reddit defines harassment as “systematic and/or continued actions to torment or demean someone in a way that would make a reasonable person 1) conclude that reddit is not a safe platform to express their ideas or participate in the conversation, or 2) fear for their safety or the safety of those around them.” But who is reasonable—and who isn’t—is subject to interpretation, and neither of these definitions offers insight into how reddit trains moderators to draw the line.

We found more insight in asking reddit what is “not harassment.” They said they “do not consider incidents in which the parties involved are provoking one another as harassment.” Similarly, “reddit does not consider attacks on an individual’s beliefs as harassment.” At face value, this makes sense and is something we’ve heard echoed in our conversations with other social media companies: Many reports of harassment come in when two people are in an online fight and one gets sick of it and reports the other. But what if that debate is about whether or not the folks at the Pulse nightclub in Orlando “deserved what they got” and you are LGBTQ? Or whether slavery should be reinstituted and you are black? In these cases, there is a fine line between opinion and abuse. These types of opinions can cause very real trauma, with side effects that include depression, anxiety and post-traumatic stress disorder.

When companies tell their users that these types of opinions are protected “free speech,” not harassment, they—intentionally or not—create space for those opinions to be spoken freely. Meanwhile, those who dissent lose their protections from reddit when they go into battle—and the conversation moves from an “opinion” to “two people provoking each other.” Interestingly, from a trauma perspective, “fighting back” is one of the few things shown to reduce the long-term impacts of trauma. It’s in these moments we’re left asking, “Free speech for whom?”

As private companies, social media sites can define and enforce policies without regulation or oversight. With sites like Facebook, Twitter, reddit, and others becoming the new town square, these decisions are moving to the center of the free speech debate. Can a profit-making company—whose interests lie with its shareholders first and foremost—truly value democracy?

Most governments have yet to decide what their role is in this new terrain; however, the European Commission and leading tech companies recently released a new “code of conduct” that signals their intentions. The announcement, aimed at combating hate speech and terrorist propaganda, was made in partnership with Facebook, Twitter, YouTube and Microsoft, and includes an explicit statement that companies will “take the lead” in policing controversial speech online. The implication is that the governments of the European Union will not, and that they are freely handing the reins of free speech to private companies.

For their part, companies face a challenge of their own. The line between free speech and hate speech regularly shifts within a United States context, and globally it means wildly different things. In France, the line is drawn so that discriminatory speech receives little protection: in 2007, the French courts considered a comedian’s remark that Jews were “a sect, a fraud,” and ruled it an “insult to a group defined by their place of origin.” Meanwhile in China, the government censors social media to detect, and remove, any mention of the 1989 Tiananmen Square crackdown or the recent Hong Kong pro-democracy protests.

So how does a corporation decide to define—and moderate—free speech? A Pew Research Center survey of people in 38 countries finds that, “for many provocative forms of speech, such as sexually explicit statements or calls for violent protests, most people draw a line between protected speech and speech that goes too far. And compared with the rest of the world, Americans generally are more accepting of free speech of any kind.” Furthermore, there have been reports of rising levels of online harassment in parts of Africa and in India, often with a greater likelihood of escalation to physical harassment and abuse.

Where we draw the line between online harassment and free speech matters, because free speech doesn’t mean anything if we are not free from harassment and abuse. According to the Pew Research Center, 40 percent of internet users have experienced online harassment, and 73 percent have witnessed someone else being harassed online. The experience of being online is increasingly unavoidable and increasingly violent, especially for women of color. New research from the think tank Demos shows that every 10 seconds someone calls a woman a “slut” or a “whore” on Twitter.

To be sure, neither governments nor corporations can tackle the problem of online harassment alone. The largest global study on gender-based violence in history (covering 70 countries over four decades) shows that movement-building is the most effective strategy for ending gender-based violence. Movement-building is at its root about community accountability: What can you—a regular, everyday, harassment-hating human—do?

In January, our team launched HeartMob, a new platform where people who are harassed can ask for support, and a community of bystanders can help them out. It’s one solution in a sea of unanswered questions, and if it’s successful, it will reduce trauma for those who are harassed and change the culture that makes harassment OK to begin with.

In the battle for free speech, online harassment is winning. The question is: Who will save us? Corporations and the government have a role to play for sure—but I’m still banking on you.

Follow Emily May on Twitter @emilymaynot.
