Examining the Role of Punishment in Social Media Governance
People engage in a range of harmful behaviors online, from hate speech to insults to misinformation to nonconsensual sharing of sexual photos. In this brief provocation, I want to reflect on prevailing assumptions about how social media companies respond to this behavior by taking up a question: Should we punish people who engage in harmful behavior online? The short answer might be, yes, of course we should. Punishment is a form of accountability, and a society without accountability would be chaos. The more complicated questions then arise: Under what conditions should we punish people online? Is it because they deserve it? Because the harm they caused was severe? Because it is important for the safety and health of the community? Because we believe it will rehabilitate them? Because victims want it? Social media platforms have embraced punitive models of governance and implemented them at the scale of billions of users, but without evaluating the efficacy of those models. We should take some time to consider when punishment might be an appropriate or effective remedy for online harms.
Punitive models of justice are anchored in the principle that people who commit an offense should receive a proportionate punishment. Punitive frameworks are pervasive in schools, workplaces, families and, most prominently, criminal legal systems. The vast majority of legal systems in countries throughout the world presume that if a person is found guilty of an offense, they should receive a punishment, which can range from a monetary fine to imprisonment or even death. Though punishment can be an effective mechanism for deterrence and for protecting community safety, it can also be retributive and discriminatory. In legal systems, as well as in schools and workplaces, punitive systems are often disproportionately enacted on social groups who have already been oppressed. In light of these concerns, scholars and advocates have argued for justice frameworks that shift away from carceral systems toward alternatives that emphasize accountability and repair. These arguments draw on restorative or transformative justice frameworks, which advocate for recognition of harms and repair of harms through individual and structural changes.[1] Restorative and transformative frameworks may be useful in social media governance, where platforms govern billions of posts a day but apply a narrow set of punitive remedies that are inadequate for governing a wide array of behaviors.
Remedies for violations of community guidelines typically involve removing content or banning users. These remedies are punitive in nature—they punish rather than repair. I do not believe we should replace punitive remedies entirely, but rather that we should supplement them with a broader menu of options.[2] Many parents try to reason with their children rather than sending them straight to their room; schools practice restorative justice mediation as an alternative to suspensions; judicial systems allow impact statements to give voice to injured parties. Why, then, would we not consider broader remedies online? The problem of remedies is not a new one, but the stakes are now much higher. Nearly thirty years ago, when Julian Dibbell described LambdaMOO’s debate about whether to toad (i.e., remove a user’s account), the debate was not so much about the decision to toad—most people present supported it—but about the lack of procedure and policy undergirding that decision.[3] Around the same time, Lisa Nakamura described the Lambda community’s debate over a petition to penalize people who engaged in racist hate speech (they decided not to).[4] Omar Wasow, who founded BlackPlanet in 1999, now reflects that “part of the fascination with the internet was that it was this disembodied space and by extension, therefore, somehow identity free.”[5] As Wasow and Nakamura observed from the get-go, this was a flawed understanding of the Internet—one that presumed people could be objects in a database removed from sociohistorical contexts in the world.
Yet that orientation persists today: online behavior is largely governed at the content level—social media platforms assess individual pieces of content and decide whether each one violates their guidelines. But focusing on content overlooks the parties involved, their previous or subsequent behaviors, and what kinds of remedies might be effective. Without attending to who is involved and what they need, there is little chance of repair for people and communities who experience harm. Additionally, many people throughout the world rely on their online profiles for economic opportunities, through livestreaming, advertisements or commerce—and these are often people who already experience economic precarity. Banning those accounts compromises their livelihood, and we should take economic harm more seriously when weighing remedies.
My proposal is to expand the range of possible remedies and to consider that, in some cases, alternatives to punitive responses may be more effective for the immediate and long-term goals of a community. Drawing on the values of restorative and transformative justice, platforms could encourage accountability from people who cause harm. This might be enacted via a menu of interactions that allows users who violate guidelines to express their point of view, acknowledge the harm they caused, express remorse or share their future intent—all critical steps in rehabilitation processes. Users or communities who experience harm could make a statement of harm or escalate protections on their accounts. These models allow people to have a voice and be heard—experiences that are critically important for human dignity and for democratic societies. At the same time, as Lindsay Blackwell and I wrote in a recent article, when offending users are given an opportunity to make amends for their behavior and fail to do so, more severe punitive responses, such as IP address-based or device-level bans, can be applied with greater legitimacy.[6]
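To make this concrete, here is a minimal sketch, in Python, of how a menu of remedies and its escalation logic might be represented. The names (Remedy, ViolationCase, next_remedy) and the specific escalation rules are illustrative assumptions for the sake of discussion, not any platform’s actual policy.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Remedy(Enum):
    # Reparative options (hypothetical)
    ACKNOWLEDGE_HARM = auto()      # offending user acknowledges the harm caused
    EXPRESS_REMORSE = auto()       # apology and/or statement of future intent
    # Punitive options
    CONTENT_REMOVAL = auto()
    TEMPORARY_SUSPENSION = auto()
    DEVICE_LEVEL_BAN = auto()      # most severe; reserved for refused amends

@dataclass
class ViolationCase:
    user_id: str
    guideline: str
    prior_violations: int = 0
    amends_offered: bool = False    # was the user invited to make amends?
    amends_completed: bool = False  # did they follow through?

def next_remedy(case: ViolationCase) -> Remedy:
    """Pick the next remedy: reparative steps come first, and the most
    severe punitive responses are applied only after amends are refused."""
    if not case.amends_offered:
        return Remedy.ACKNOWLEDGE_HARM
    if case.amends_completed:
        return Remedy.CONTENT_REMOVAL          # remove the content, keep the account
    if case.prior_violations == 0:
        return Remedy.TEMPORARY_SUSPENSION
    return Remedy.DEVICE_LEVEL_BAN
```

The design point, not the particular thresholds, is what matters here: reparative steps are the default entry point, and the harshest bans become reachable only through a documented refusal to make amends, which is what lends them legitimacy.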
One counterargument to this proposal is the problem of how it will scale. But this is an odd objection given the current trajectory, in which tens of thousands of content moderators review content across platforms daily with no apparent plan for abatement. Another counterargument is efficacy: will this actually work? That, I agree, is imperative to answer. Any governance system should be evaluated with metrics like recidivism, community deterrence, procedural transparency, discrimination, healing and safety. The proposal for expanded remedies should therefore include a commitment to empirical evaluation of those remedies. Punishment has a place in society, but that place may not be as big as we thought, and the availability of data on social media makes this a testable hypothesis. This is neither a radical nor a disconnected proposition; in fact, it aligns closely with emerging calls to expand remedies, consider issues beyond speech, and weigh context rather than content alone.
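As one illustration of what such an evaluation could look like, the sketch below computes a recidivism rate from hypothetical moderation logs: the share of users who, after their first exposure to a given remedy, violate guidelines again within a fixed window. The ModerationEvent structure, the ninety-day window and the field names are assumptions for illustration; in practice this rate would be compared across remedies and paired with measures of deterrence, discrimination, healing and safety.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ModerationEvent:
    user_id: str
    timestamp: datetime
    remedy: str  # e.g. "acknowledge_harm", "content_removal", "device_ban"

def recidivism_rate(events: list[ModerationEvent], remedy: str,
                    window: timedelta = timedelta(days=90)) -> float:
    """Share of users who violate guidelines again within `window`
    after their first exposure to the given remedy."""
    # Group each user's events in chronological order.
    by_user: dict[str, list[ModerationEvent]] = {}
    for event in sorted(events, key=lambda e: e.timestamp):
        by_user.setdefault(event.user_id, []).append(event)

    treated = reoffended = 0
    for history in by_user.values():
        for i, event in enumerate(history):
            if event.remedy != remedy:
                continue
            treated += 1
            if any(later.timestamp - event.timestamp <= window
                   for later in history[i + 1:]):
                reoffended += 1
            break  # count each user once, from their first exposure
    return reoffended / treated if treated else 0.0
```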
[1] See Mariame Kaba, Naomi Murakawa, and Tamara K. Nopper, We Do This ’Til We Free Us (Chicago: Haymarket Books, 2021); Danielle Sered, Until We Reckon: Violence, Mass Incarceration, and a Road to Repair (New York: New Press, 2019).
[2] For a taxonomy of remedies, see Eric Goldman, “Content Moderation Remedies,” Michigan Technology Law Review (2021).
[3] Julian Dibbell, My Tiny Life: Crime and Passion in a Virtual World (London: Fourth Estate, 1998).
[4] Lisa Nakamura, “Race In/For Cyberspace: Identity Tourism and Racial Passing on the Internet,” Works and Days 13, nos. 1–2 (1995), 181–93.
[5] Ethan Zuckerman and Omar Wasow, “Omar Wasow (Pomona College, Black Planet),” Public Infrastructure, September 30, 2021, https://publicinfrastructure.org/podcast/34-omar-wasow.
[6] Sarita Schoenebeck and Lindsay Blackwell, “Reimagining Social Media Governance: Harm, Accountability, and Repair,” Yale Journal of Law and Technology 13 (2021).