Can you tweet that? Columbia panel explores the exercise of First Amendment rights on social platforms


Above: Panelists discuss social media and rights of free expression. Photo by Tanisha A. Sykes.

In the digital age, the First Amendment applies to platforms the Founding Fathers couldn’t have imagined.

So what are the challenges and opportunities presented by the changing dynamics of the media against the backdrop of free speech?

Last Friday, media experts gathered at the Tow Center for Digital Journalism at Columbia University in New York to discuss the issues arising from the collision of modern media with bedrock principles of our democracy. The panel, “A First Amendment for Social Platforms,” included BuzzFeed Editor-in-Chief Ben Smith; BuzzFeed Assistant General Counsel Nabiha Syed; and Stuart Karle, general counsel of North Base Media and an adjunct professor at Columbia. Emily Bell, director of the Tow Center, which receives Knight Foundation funding, moderated the discussion.

Knight and Columbia also recently announced the formation of the Knight First Amendment Institute, which will support research, education and litigation to protect freedom of expression and freedom of the press. Disruption in the traditional business model of journalism has made it less likely that media companies will sue to protect First Amendment rights, and the institute will help fill the gap.

The panel discussion tackled how those rights should be applied and protected on social platforms. It continued a conversation that Smith and Syed started in a Medium article calling for more transparency in how social platforms manage what people say.

“If the most important actors of our time are the Facebooks, the Twitters, the YouTubes and the Snapchats, then how do we make sure they abide by standards that we consider the bedrock of free speech principles?” asked Syed. “We don’t know why Facebook takes down some posts and not others, or why Twitter disables some accounts and not others. One of the major things that we call for in the piece is transparency so we can get to that accountability.”

Referencing Chuck Johnson, an infamous Twitter troll who was banned from the platform last year after asking for donations to “take out” DeRay McKesson, a prominent Black Lives Matter activist, Smith said: “We have not heard from him or seen him since. He was not my favorite reporter, but it makes me think, ‘Wow, this platform is powerful.’”

Smith remarked that while executives are making decisions about vile trolling and harassment that are “totally appropriate,” they are also reactive. He wants to know who is making these decisions and why, noting that there are powerful distinctions in the law about public figures at play.

For example, Boris Johnson, the flamboyant former mayor of London, who led the charge on the recent Brexit vote to leave the European Union, “is having a lot of four-letter words thrown at him right now and that strikes me as fine,” said Smith. “Whereas if you’re a private citizen and that’s happening, [the perpetrator] should be thrown off the platform. It’s a hard distinction and it’s complicated.”

“The platforms are doing exactly what the law incentivizes them to do and that’s the rub,” Karle added. “Historically, free speech law protected the most obnoxious speech. Years ago, to reduce the amount of child pornography online, Congress passed Section 230 of the Communications Decency Act, which says that you’re a distributor, not a publisher of information, so you have no responsibility for the information that you are distributing.”

As a result, there is little to no consequence for some platforms from a legal perspective. “If someone is stalking on Facebook, the law says it’s not Facebook’s problem because it’s just the distributor of information,” said Karle.

He added that much of First Amendment law was established through cases involving people with controversial and even reprehensible views, from pornographers to anti-Semites. All of these people can speak on these platforms, but when speech turns disrespectful, even hateful, under the guise of the First Amendment, it isn’t enough to simply make the person “disappear” from a social network into a black hole. “If you’re trying to ban people from Facebook, there is strategic ambiguity to enforce these bans,” said Smith. “You shadow ban [making a user’s posts invisible to others but visible to the user], mute trolls and try to shift to technical systems where people who are bad actors don’t even know they are being banned, and that’s a really good way to stop trolling.”

Before sites are backed into a corner over how to address free speech on their platforms, Syed offered a possible solution: “Look down the road and decide that you will be good actors in this space. Set up a system that makes you transparent and accountable, and lets you put forth these principles and their application in a way that defuses that situation.”

Tanisha A. Sykes is a New York-based writer and editor. Follow her on Twitter @tanishastips.