    The Ethics of Social Media: Why Content Moderation is a Moral Duty. Forthcoming article by Jeffrey Howard

    Posted by Liz Sanders on 2024-03-14


Abstract: This article defends platforms’ moral responsibility to moderate wrongful speech posted by users. Several duties together ground and shape this responsibility. First, platforms have duties to defend others from harm when they can do so at reasonable cost. Second, platforms have a moral duty to avoid complicity with users’ wrongfully harmful or dangerous speech. I will argue that one can be complicit in wrongs committed by others by supplying them with a space in which they will foreseeably commit them. For platforms, proactive content moderation is required to avoid such complicity. Further, platforms have an especially stringent complicity-based duty not to amplify users’ wrongful speech, thereby increasing its harm or danger. Finally, platforms have a duty not to enable new wrongs by amplifying otherwise innocuous speech that becomes wrongfully harmful only through amplification. I close by considering an objection—that content moderation by platforms constitutes an objectionable form of private censorship—explaining how it can be answered.


Keywords: Social media, content moderation, free speech


1. Introduction

Should Facebook ban videos depicting graphic violence? Should Twitter remove posts that spread misinformation about Covid-19? Should YouTube downrank videos that promote conspiracy theories? Should TikTok ban war propaganda? These questions, which have flooded the news headlines over the past several years, concern the ethics of content moderation—the systems through which platforms govern the speech of their users. These rules are made by trust-and-safety teams working within the companies, who legislate a vast array of policies concerning threats, incitement, graphic content, hate speech, sexual content, misinformation, bullying and harassment, spam, self-harm content, and much else.[1] Once their feasibility is tested by engineers, these rules are subsequently enforced by a complex bureaucracy of content moderation workers and artificial intelligence systems.[2]

This private governance of online speech is hugely consequential; billions of people use social media, and millions upon millions of posts are removed each year.[3] As Evelyn Douek has aptly noted, “Facebook alone makes more speech decisions every day, perhaps even every hour, than the Supreme Court ever has in its entire history.”[4] Some have argued that this form of content moderation is an illicit form of private censorship, violating the moral rights of speakers and their prospective audiences to communicate.[5] Yet a burgeoning consensus holds that platforms indeed ought to engage in content moderation.[6] There has been substantial debate as to how such moderation should be subjected to greater oversight and accountability, as with the EU Digital Services Act and the UK Online Safety Act. Even so, a core foundational issue has not, in my view, been addressed with sufficient philosophical precision: why platforms have a duty to engage in content moderation in the first place. Without an adequate philosophical theory to answer that question, we are poorly positioned to defend content moderation against its critics who would prefer the internet to be a wild west of unfettered speech.

This article seeks to analyze the grounds of platforms’ obligations to moderate content. I will defend the thesis that social media companies (SMCs) indeed have a moral duty to engage in content moderation of wrongfully harmful or dangerous speech. I will argue that one can be complicit in wrongs committed by others by supplying them with a space in which they will foreseeably commit them, unless one takes reasonable measures to minimize those wrongs’ occurrence. Content moderation is required to avoid such complicity. I also argue that platforms have an especially stringent complicity-based duty not to further amplify users’ wrongful speech, which increases the (risks of) harm it threatens—enabling it to reach larger audiences, and crowding out ameliorative counter-speech. Finally, while my main concern is with speech that would be wrongful independent of its broader amplification, I argue that platforms have a duty not to impose new wrongful harms by amplifying otherwise innocuous speech that becomes wrongfully harmful only through amplification.

In Part 2, I will begin to elaborate my case that platforms have a moral duty to combat this speech. Parts 3 and 4 then focus on duties to refrain from complicity with others’ wrongful speech. Part 5 focuses on non-complicity duties not to create new harms by aggregating and amplifying speech that would otherwise be innocuous. Finally, Part 6 concludes by addressing the objection from free speech.

Before proceeding, a final preliminary point is in order. In this article I will explore when and why platforms have a moral duty to remove wrongfully harmful or dangerous speech on their platforms. I take it for granted that some speech, such as racist bullying, causes wrongful harm; other speech, like encouraging violence against innocent persons, wrongfully endangers others (whether or not it ultimately eventuates in harm). For example, elsewhere I argue that one such category is dangerous advocacy, speech that endangers others by advocating clearly wrongful harms.[7] Another is the closely related (and overlapping) category of hate speech, which defames members of socially vulnerable groups as inferior or dangerous. A further category is dangerous disinformation or misinformation, which endangers people by communicating falsehoods (usually either by fabricating a non-existent threat, or denying the existence of a real threat).[8] Plausibly, speech encouraging or otherwise promoting grievous self-harm—at least that directed toward vulnerable audiences such as children—is also wrongfully dangerous. No doubt there are other categories to be added to this list, such as forms of bullying, harassment, and credible threats of wrongful harm, as well as criminally instructional speech (e.g., bomb-making recipes). I will use wrongful speech as a placeholder term of art for all such categories.[9] The list is illustrative, not exhaustive. Moreover, those who disagree with me about what speech is wrongfully harmful or dangerous, or the conditions under which it is[10], can nevertheless accept the general framework I offer in what follows.


2. Platforms’ Defensive Duties

The public debate on content moderation has historically proceeded as a quarrel about whether digital intermediaries like SMCs should be analyzed as publishers or platforms. The standard assumption was that if we view them as publishers, they are to be held morally and so potentially legally responsible for the content posted by their users, whereas if they are mere platforms, they are not. But this debate was misguided from the start. Even if we reject the status of platforms as publishers, as I think we should, it does not follow that they escape obligation. The question instead concerns what it is reasonable to demand of platforms.

The basic insight from which to begin is that social media companies—like all corporations—are corporate agents[11], and as agents, they have a range of positive and negative duties assigned to them in virtue of that fact. Like all agents, they have positive duties to defend others from wrongful harms, and negative duties not to cause or contribute to wrongful harms. These austere premises, I will argue, have enormous explanatory power in accounting for the moral demands that we rightly place on social media companies.

The least controversial and most straightforward duty with which SMCs are saddled is a duty to defend (or rescue) people from (risks of) harm. Platforms are simply in the right place at the right time, with the right capacity, to protect people. Consider a standard case of easy rescue: someone is drowning in a pond, and you can save them at reasonable cost to yourself. Or consider a case of easy other-defense: a child is being attacked by an aggressive bully, and you can intervene to stop the bully without incurring serious costs yourself. It is uncontroversial among philosophers that we all have such duties. My contention is that this simple insight provides one source (albeit a limited one) for the idea that platforms should engage in content moderation. Faced with speech that endangers life and limb, and the opportunity to mitigate that danger at reasonable cost, they have a duty to do so. While this duty cannot justify most of what we might expect SMCs to do, it can justify a minimal baseline. Specifically, it can justify a duty to remove wrongful speech of which the platform becomes aware.

Conceiving content moderation, in part, as a rescue effort illuminates something fundamental about its nature. The purpose of deleting a user’s post is not, in the first instance, some backward-looking effort to punish or sanction a wrongdoer.[12] Even if the general deterrent effects of content moderation are salutary side effects, they are not, on the view I am advocating, its justifying goal. Rather, the purpose of content moderation is defensive; it is an attempt to defuse the danger posed by an ongoing threat, to protect prospective victims from (wrongful) harm.[13] My suggestion is that SMCs’ natural duties require them to engage in some amount of content moderation of wrongful speech, to defend those harmed or endangered by it.

This minimal duty seems well-placed to justify a practice of notice-and-takedown, whereby platforms remove certain harmful or dangerous content once notified about it. This is the most common contemporary form of SMCs’ legal obligations (under current UK and EU law); while this article does not fully address the issue of legal enforcement, one fruitful way to begin justifying this kind of legal duty would be to point to an underlying natural moral duty to defend (or rescue) others from (risks of) harm. Yet it seems less likely that these duties can justify more demanding obligations—to set up monitoring systems powered by a combination of sophisticated AI bots and thousands of human moderators—to proactively police the platform for wrongful speech. To lean further on the rescue analogy, a natural duty of rescue enjoins us to rescue drowning children when we stumble upon them; but it does not require us to become or to hire a network of lifeguards. Nor can these duties justify any requirement to redesign platforms to minimize the likelihood of the transmission and spread of illicit content. These more demanding obligations could only be justified by appealing to further, more demanding moral duties.
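To fix ideas, the contrast between these two levels of duty can be put schematically. The sketch below is purely illustrative and not drawn from the article: the classifier, data structures, and example posts are hypothetical stand-ins, assuming only that a reactive regime reviews reported posts while a proactive regime scans everything.

```python
# Illustrative sketch only: the structural difference between reactive
# notice-and-takedown and proactive monitoring. The classifier and data
# structures are hypothetical stand-ins, not any platform's actual systems.

def is_wrongful(post: str) -> bool:
    """Hypothetical classifier for wrongful speech (human or AI review)."""
    return "incitement" in post.lower()  # placeholder rule for illustration

def notice_and_takedown(posts: dict[str, str], notices: list[str]) -> list[str]:
    """Reactive duty: review only the posts that users have reported."""
    return [pid for pid in notices if pid in posts and is_wrongful(posts[pid])]

def proactive_moderation(posts: dict[str, str]) -> list[str]:
    """More demanding duty: scan everything hosted on the platform."""
    return [pid for pid, text in posts.items() if is_wrongful(text)]

posts = {"a": "Holiday photos", "b": "Incitement to violence", "c": "Incitement again"}
print(notice_and_takedown(posts, notices=["b"]))  # ['b']        (only what was reported)
print(proactive_moderation(posts))                # ['b', 'c']   (everything found)
```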

What would be so objectionable about stopping here, and contending that defensive rescue duties were the sole basis of platforms’ moderation responsibilities? The answer is that this would miss something fundamental about the relationship between companies and the wrongful speech that they platform. Consider a traditional rescue case, where misfortune or malice imperils some victim, and a prospective bystander can intervene to save him. Here, the rescuer lacks any morally significant causal relationship with the original threat. Now suppose I hire a hitman to kill you, and then (after a moral epiphany) intervene to rescue you. Or suppose I negligently cause a boulder to fall down a hill toward you, and I rush to rescue you from its path. These are no ordinary rescue cases; in such cases, I incur a far more stringent duty to defend you from the threat precisely because of my wrongful causal role in creating it.[14] While the social media case is not perfectly analogous to either of those examples, the general point is that platforms have a more morally fraught causal relationship with incendiary content than the rescue duty alone can illuminate. This relationship, I will argue, generates more demanding obligations to act against such content, to which I now turn.


3. Platform Complicity

What is the nature of the relationship between social media companies and wrongful speech that is posted on their platforms? It is a relationship of complicity. My proposal is that an agent can be complicit with wrongs committed by others simply by providing them with a space in which they will foreseeably commit them. Just by dint of one’s ownership or control over a space, one has certain duties to minimize the likelihood that the space will be used for wrongdoing.

Consider the parking garage company that refuses to install appropriate lighting or other security mechanisms to reduce the likelihood that crimes will occur. Consider the landlord who knows, or should know, that his tenant stashes trafficked children in his flat, yet does nothing in response. Or consider the political leaders who offer haven to terrorists, allowing them to use their land to plan further attacks—as the Taliban did for al-Qaeda in the years before September 11, 2001. Across these examples, the relevant owner or controller enabled wrongdoing by providing a space propitious for its occurrence while doing nothing to mitigate its likelihood (and in some cases actively encouraging it). While this duty uncontroversially arises in physical, offline space, there is no reason to think it applies less forcefully in cyberspace. Accordingly, when a social media company provides a platform that users will foreseeably use to engage in wrongdoing, and then fails to take reasonable steps to minimize the likelihood of its occurrence, it is complicit in the speaker’s wrongdoing. The second duty that justifies content moderation responsibilities, then, is a duty to avoid complicity with wrongdoing.

What is complicity? Here are the crucial ideas. First, I assume a causal conception of complicity, whereby one is morally complicit just in case one’s acts or omissions wrongly and foreseeably risk contributing to wrongs committed by others.[15] To discharge the duty to avoid complicity, one must take reasonable steps not to make such causal contributions.

Second, cases of complicity always involve some principal wrongdoer, on the one hand, and those who contribute to his wrongdoing, on the other. In our case, a speaker who posts a wrongful post is the principal wrongdoer, and the social media platform is a secondary wrongdoer, complicit in the wrongdoing of the principal. This is a purely conceptual point; it need not follow that the principal wrongdoer is morally worse than those who aid him. It is perfectly possible that, in the final analysis, the complicit party is more blameworthy than any one principal.[16] This is especially likely when we contrast any single wrongful speaker, responsible for relatively few harmful posts, with a massive platform that enables the communication of millions of harmful posts by many people.

Third, I am assuming that complicity is a partly moralized notion; for something to count as complicit, it must involve a wrongful causal contribution to the wrongdoing of others. The reason is that not all causal contributions to the wrongdoing of others are wrongful.[17] Consider again the parking garage company. If it had a duty to guarantee that there was a zero probability of crimes ever being committed in its space, it would likely be obligated to close up shop, given the inordinate difficulty of achieving such a goal. There is no categorical duty to eliminate one’s contributions to wrongdoing, only to take reasonable steps to minimize one’s contributions (which may or may not involve eliminating them fully). What counts as reasonable will depend, first, on how much cost the agent can be expected to bear toward this purpose. Like any duty, the duty to avoid complicity will be cost-sensitive, in that its demands cannot be unreasonably burdensome. For corporations, these costs will involve costs to owners, employees, and even shareholders. But it will also (often more importantly) involve costs to third parties. If parking garages all had to close down, since this was the only way to eliminate wholly their use as platforms for crimes, this would involve an obvious social cost, to be borne by the many who would otherwise benefit from their existence. The main reason to think that parking garage companies don’t have such a stringent duty, then, is not merely that it would put garage company owners and employees out of work; it also reflects the burdens such a duty would place on the public. Or consider the example of firearms and ammunition manufacturers, who also (on the view I am defending) have duties to take reasonable steps to reduce the likelihood that the guns and bullets they sell will be used in crimes (e.g., by making bullets traceable to their owners, thereby helping to deter their criminal use). If they had a duty to eliminate such a possibility, they could not sell guns or ammunition. Aside from any complaints the companies might have, the significant objection to that requirement would be that legitimate police and military services would then be unable to acquire weapons they need to discharge morally compulsory purposes[18].

So, too, it goes for social media platforms (potentially among other platforms[19]). For platforms to reduce platforming complicity to zero, they would likely need to close down. What makes such a demand unreasonable is not merely its impact on those who work at the companies; just as salient (if not much more salient) would be the unacceptable costs on the broader public. Given the enormous potential good served by social media—the substantial opportunities these platforms offer for valuable expression, connection, and mobilization for just ends—we should tolerate some amount of unavoidable platforming of wrongful speech, to enable these benefits[20]. As agents, we have interests both in access to forums in which we can pursue our weighty communicative interests, and in reducing the likelihood that we will be wronged while in those forums. A plausible theory will accommodate both interests.

Finally, I stress that complicity involves contributions that are foreseeable. The requirement of foreseeability steers a middle course between two implausible poles. One pole eschews any mental condition altogether: one is complicit just in case one causally contributes, even if one doesn’t know and couldn’t be expected to have known one’s decisions had this property[21]. This position is implausible, given the interconnectedness of the world; it would counterintuitively imply that the taxi driver who drives a passenger to the nightclub where the passenger will end up killing someone in a bar fight thereby counts as an accomplice. A second pole requires the full-blown intentionality of the accomplice—what Christopher Kutz calls a “participatory intention.”[22] On this view, those who causally contribute to the wrongdoing of others, even with full knowledge that they do so, are not complicit so long as they do not share a common purpose with the principal wrongdoers. Yet intuitively, the arms vendor who sells weapons to the tyrannical dictator, who does not share an intention with the dictator but who knows (or should know) what the dictator intends to do, is complicit in the killings subsequently perpetrated. What truly matters, as Larry May puts it, is that the complicit agent “knows, or should have known, that…he or she will advance whatever intentions the principal has.”[23] Accordingly, even if SMCs do not intend to enable hateful speakers to engage in wrongful communications, they qualify as complicit, just in virtue of the wrongful and foreseeable causal contribution they make. However, where certain wrongful uses of the platform are not foreseeable, it is a mistake to think that SMCs are violating a duty by platforming them.[24]

The duty to avoid complicity is more demanding than the general defensive duties invoked in the previous section. The duty not to be complicit in murder and other serious crimes, by platforming those who incite them, is very stringent[25]. Accordingly, platforms should be expected to bear greater costs to discharge it. While notice-and-takedown protocols may be sufficient to discharge the former duty, they are plainly inadequate to discharge the latter. In addition to such protocols, my contention is that SMCs have a responsibility actively to police their networks for such communicative wrongdoing, and to delete it when they find it. Only by doing so can they avoid the charge of complicity.[26]


4. Amplification Duties

So far, I have discussed the phenomenon of merely providing a platform on which a wrong will be perpetrated. The duty to avoid complicity, I argued, requires that SMCs take reasonable steps to reduce the likelihood that their platform will be used for wrongdoing. Content moderation reduces that likelihood.

But SMCs do more than merely provide a platform for dangerous speech. Through their algorithms, they have the power to amplify it—to increase its visibility—and when they do, their complicity only deepens. Depending on a speaker’s authority and message, the wrong of an incendiary communication can be more or less grave[27]. But the gravity of this wrong also depends on the size and susceptibility of its audience. In the case of social media, platforms have substantial influence on precisely these variables. How many or how few people see a speaker’s incendiary post is precisely a function of platforms’ engineering, which determines what content is amplified.

What counts as amplification? Amplification will always be relative to some baseline, and the selection of a baseline will always be arbitrary, at least within a range of plausible options.[28] In this sense, there is no such thing as merely providing a platform, since providing a platform counts as amplification relative to a baseline of not providing a platform. Thus when we refer to amplification, it must be comparative, with reference to some baseline that involves less visibility. So, suppose we stipulate the relevant baseline as a configuration where users can post content that is then findable by others who follow or search for them. Any platform-enabled visibility increase beyond this baseline—e.g., through recommender algorithms that pipe the content into other users’ newsfeeds—counts as amplification relative to that baseline. In some cases, amplification may be an intentional decision by platform designers (e.g., the decision to amplify trustworthy Covid news content during the pandemic). But nearly always, amplification is the spontaneous result of platform algorithms, which are designed to show people more of the content that is likely to optimize their engagement with the platform—and hence increase the platform’s advertising revenue. This story—of how social media manages our attention—is by now quite familiar.[29]
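To make the baseline-relative notion concrete, here is a minimal illustrative sketch in Python. Nothing in the article specifies an implementation; the data model (impression records carrying a source field) and the follower/search baseline are assumptions introduced purely for illustration.

```python
# Illustrative sketch only: a toy model of baseline-relative amplification.
# The data model (impression records with a "source" field) is hypothetical,
# not a description of any real platform's systems.

from dataclasses import dataclass

# Stipulated baseline: impressions that arise because a viewer follows the
# author or actively searches for the content.
BASELINE_SOURCES = {"follower_feed", "search"}

@dataclass
class Impression:
    post_id: str
    source: str  # e.g., "follower_feed", "search", "recommendation"

def amplification(impressions: list[Impression], post_id: str) -> int:
    """Count impressions of a post beyond the stipulated baseline.

    Any impression delivered through a non-baseline channel (e.g., a
    recommender system piping the post into a stranger's feed) counts as
    amplification relative to that baseline.
    """
    return sum(
        1
        for imp in impressions
        if imp.post_id == post_id and imp.source not in BASELINE_SOURCES
    )

if __name__ == "__main__":
    log = [
        Impression("p1", "follower_feed"),
        Impression("p1", "recommendation"),
        Impression("p1", "recommendation"),
        Impression("p1", "search"),
    ]
    # Two of the four impressions exceed the follower/search baseline.
    print(amplification(log, "p1"))  # -> 2
```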

My claim is that when a platform amplifies wrongful speech, increasing its visibility, it thereby makes a greater causal contribution to the speaker’s wrongdoing—making his principal wrongdoing worse than it would otherwise be. Just as the gun vendor who sells the terrorist a larger weapon enables him to kill more people, a platform that amplifies wrongful speech enables the speaker to commit a greater wrong. This is so even if the platform does so unintentionally—e.g., when amplification is merely the foreseeable result of the platform’s algorithmic design.

Amplification can occur in different ways. A standard form of amplification increases a post’s visibility by expanding its audience, so that more people see it. This plausibly makes the speech more dangerous or harmful, since more people are likely to be exposed to it. This is true for both indirectly and directly harmful speech. In the case of speech that incites violence, for example, such amplification increases the pool of people who may be inspired to act on such exhortations. For speech that promotes hatred, amplification increases the pool of people who may be persuaded to adopt bigoted views. For other kinds of harmful speech—like the subset of hate speech which functions as a direct psychological attack—this expansion has the effect of increasing the number of victims who see, and are thereby directly harmed by, the speech.

Even if there is no net visibility increase for the general platform population, platforms may amplify speech in a second sense—not by increasing the number of people exposed to it, but by flooding particular users’ feeds with such speech, overwhelming them with exposure to it. For indirectly harmful speech-acts like incitement to racial hatred, it stands to reason that those flooded with the relevant speech are at greater risk of radicalization by it, as their feed is dominated by speakers promoting bigoted views. For directly harmful speech-acts like bullying and racist harassment, flooding targets’ feeds with such speech is manifestly more harmful. Moreover, such flooding crowds out forms of ameliorative counter-speech—in which others “talk back” against the wrongful speech, offering their own countervailing points of view. Such counter-speech (e.g., arguing against an extremist speaker’s violent interpretation of their religion, or affirming an anti-racist narrative) can both dissuade listeners from acting on incitements[30], and can offer reassurance to the direct targets of harmful speech and even block its wrongful effects[31]. In this way, amplification of wrongful speech by platforms can stymie the efforts of counter-speakers to prevent or block the relevant harms—by reducing the likelihood that those counter-speakers’ posts will be seen.

The normative upshot of this may seem redundant. After all, I have already established that SMCs have a moral duty to police their networks for wrongful speech and remove it. The point here is that when a company provides a platform for wrongful speech, and then takes measures to increase its audience beyond what it would otherwise be, they position the speech to cause or risk even greater harm than it otherwise would. Accordingly, the platform makes a greater causal contribution to wrongdoing, breaching a more stringent duty. It is reasonable to demand that they bear greater costs to avoid that outcome through more intensive content moderation of wrongful content—since if that content is not removed, it will (given what we know about the platforms’ design) almost certainly be amplified.[32]

To be sure, these costs are not unlimited. As I argued earlier, to reap the benefits of a digital public sphere, platforms cannot reasonably be expected to eliminate the prospect that their products will be abused for wrongdoing, since doing so would leave them unable to exist at all. Still, given the profits that some of these platforms earn, they have ample resources from which to draw to ensure their products are safe to use and their negative externalities are reduced. It is therefore fully appropriate to demand that platforms conduct assessments of the risks posed by their products, setting out clear plans to reduce the risks—and be held accountable for doing so.[33]


5. Transforming Innocuous Speech

My main concern so far has been speech that is already wrongful independent of any further amplification; the further amplification serves simply to make it worse. For example, terrorist incitement is wrongful even when it is not amplified to a mass audience. But not all forms of wrongful speech are like this. Some speech only becomes wrongfully harmful when aggregated and amplified alongside lots of similar speech[34]. No single unit of such speech causes substantial harm. Some forms of dangerous misinformation may fall into this category. Consider the example of climate change denial, peddled by people duped into believing it is or might be true. Such misinformation is dangerous, but only when aggregated and amplified such that it floods the information ecosystem. In small doses, it is no more objectionable than speech suggesting the earth is flat—something that hardly violates a moral duty. Unlike the terrorist incitement case, the individual user is not plausibly described as a culpable wrongdoer. It is only when the speech is aggregated and amplified that it becomes wrongful, because only in such circumstances does it help to constitute a flood of misinformation that can genuinely lead to harm (namely, by propagating a dangerous falsehood that endangers countless living and future people by stymying climate action). Or consider the example of militaristic, violent rhetoric (“we’re going to slaughter our political opponents”) or rhetoric using hyperbolic insults of one’s political opponents (“they’re a bunch of traitors”); such speech is ubiquitous in divided democracies, but it seems unlikely that any one unit of such speech constitutes a moral wrong. Yet when amplified and aggregated, it may well be that such speech becomes wrongful, by serving as a constituent component of a wave of speech that coarsens the public discourse and gradually attenuates citizens’ inhibitions against violence. Some “pile-on” campaigns of harassment work like this; a single critical post from a stranger is innocuous, yet when a target receives thousands of such posts from countless strangers, the aggregate can do real harm[35]. If this conjecture is correct, this would be a further way in which platforms act wrongly through their amplification practices.

In such cases, algorithmic systems transform speech that would otherwise be innocuous into speech that is genuinely wrongful. They make the moral difference[36]. Conversations that might be utterly anodyne when occurring offline become seriously harmful online in virtue of algorithmic systems. Consider again the naïve citizen airing misguided questions about climate change; through the aggregation of her speech with similar speech, and its consequent amplification, she is transformed into a co-constitutive producer of a flood of falsehoods that endangers the planet.

I am assuming here that such citizens have reason to foresee that their speech will be aggregated and amplified in just this way; accordingly, when it is aggregated and amplified, they become liable. But we could imagine cases in which users had no reason to anticipate that their speech would be aggregated and amplified in a way that would then cause harm. Where this is so, they are not plausibly described as wrongdoers (not, anyway, on an evidence-relative standard). Yet if the platform nevertheless had grounds to believe that such speech would emerge through its algorithms, it—the platform—remains a wrongdoer, breaching duties it owes to others. In such cases, it is the only wrongdoer.

Because such aggregatively harmful speech only becomes wrongful once amplified, it isn’t clear that removal is necessary to redress its harm. Suppose that a policy of deamplifying climate misinformation—deliberately reducing its visibility down to the baseline where it is allowed and findable but not promoted—adequately redressed its harms, as seems plausible. If so, deamplifying such speech would mean that those engaging in it would no longer qualify as wrongdoers, since their speech would remain innocuous[37].
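As a rough illustration of the deamplification idea, consider the following sketch. It assumes a hypothetical labelling scheme and a simple ranking function; neither is drawn from the article or from any actual platform. The point is only that flagged content can be excluded from recommendation surfaces while remaining posted and findable.

```python
# Illustrative sketch only: deamplification as reducing a post's visibility to
# the stipulated baseline (findable, but not recommended). The labels and
# scoring function are hypothetical, not any platform's actual policy.

AGGREGATIVELY_HARMFUL = {"climate_misinformation"}  # assumed label set

def ranking_score(post: dict, engagement_score: float) -> float:
    """Return the score used to rank a post in recommendation surfaces.

    Posts labelled as aggregatively harmful receive a zero recommendation
    score: they are never pushed into feeds, but they are not removed and
    remain visible to followers and via search.
    """
    if post.get("label") in AGGREGATIVELY_HARMFUL:
        return 0.0  # deamplify: do not recommend beyond the baseline
    return engagement_score

# Example: the labelled post stays up but is ranked out of recommendations.
flagged = {"id": "p2", "label": "climate_misinformation"}
benign = {"id": "p3", "label": None}
assert ranking_score(flagged, 0.9) == 0.0
assert ranking_score(benign, 0.4) == 0.4
```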


6. The Objection from Free Speech

I have argued that SMCs have a weighty moral responsibility to engage in vigorous content moderation of wrongful speech. This responsibility is justified by a suite of underlying moral duties: natural rescue duties to defend those wronged by such speech; duties to avoid complicity with users’ wrongful speech (which is exacerbated through greater amplification); and duties to refrain from rendering otherwise innocuous content harmful through amplification.

Showing that platforms have this moral responsibility is necessary to justifying its codification into a legal responsibility. Yet it is not sufficient; one could accept everything I have argued about platforms’ moral duties while also denying that those moral duties ought to be legally enforced. A strong version of such a view would hold that while speakers have moral duties to refrain from wrongful speech, and platforms have duties not to platform or amplify it, the coercive enforcement of such duties would violate the moral right to freedom of expression. On this view, speakers have moral rights to communicate all sorts of wrongful messages, and prospective audiences have moral rights to hear those messages, notwithstanding the danger or harm they pose. A more contingent, instrumental version of the view would hold that legal enforcement is not in principle impermissible; but in practice, it is simply too risky to grant the state the authority to enforce platforms’ and speakers’ moral duties, given the potential for abuse and overreach. American liberals who champion the orthodox interpretation of the First Amendment, yet insist on robust content moderation, hold one or both of these views[38]. (Note that the prevailing interpretation of the First Amendment largely forbids the state from punishing wrongful speakers or forcing SMCs to remove much, though importantly not all, wrongful speech.[39])

Suppose free speech concerns (whether noninstrumental or instrumental) militate against the legal enforcement of speakers’ and platforms’ moral duties. This fact does nothing to reduce the significance or stringency of those moral duties. Even if SMCs are off the legal hook, this does not immunize them from their moral requirements. And this has normative implications: SMCs are justifiably subjected to public opprobrium, advertiser boycotts, and other social sanctions in response to their failures to moderate dangerous content with sufficient vigor. It is even permissible for officials to pressure companies to discharge their responsibilities, even though it would be impermissible (on the view I am entertaining) to force companies to do so. Take a different example: the duty to vote, which let us suppose is a weighty moral duty but, for various reasons, shouldn’t be legally codified and enforced. It is nevertheless wholly appropriate for officials to strongly encourage citizens to vote. Likewise, there is nothing incoherent in supposing that all-things-considered, the state shouldn’t force SMCs to remove incitement, yet it may express and defend its view that they have a moral duty to do so.

The upshot, then, is that concerns of free speech do not pose an obstacle to the thesis that social media companies have a moral duty to engage in vigorous content moderation of wrongful speech. But now consider an objection to this view. The objection holds that if it would be impermissible for the state to restrict certain speech, it is likewise impermissible for social media platforms to restrict it. On this view, it is not even permissible for social media platforms to adopt rules against (most) wrongful content.[40]

The suggestion, now prominent among Republican lawmakers in the U.S., is that the large social media platforms constitute public forums—spaces so central for the operation of public discourse and the exercise of communicative freedoms that they are thereby obligated to refrain from restrictions on users’ legitimate speech. On this view, if it is impermissible for a government to restrict a certain category of speech, so too is it impermissible for a large social media platform to restrict it[41]. Matthew Kramer has recently defended just this claim:

"social media platforms such as Facebook and Twitter and YouTube have become public fora. Although the companies that create and run those platforms are not morally obligated to sustain them in existence at all, the role of controlling a public forum morally obligates each such company to comply with the principle of freedom of expression while performing that role. No constraints that deviate from the kinds of neutrality required under that principle are morally legitimate."[42]

Given that Kramer endorses the First Amendment view that substantial amounts of wrongful speech are intrinsically protected, he thereby holds that social media platforms act unjustly when they adopt prohibitions against it. For Kramer, this encompasses speech advocating criminal violence such as terrorism, speech advocating racial and religious hatred, misogynistic pornography, and much else.

There is reason to doubt this position. First, it is certainly plausible that government channels or pages on social media networks are public forums in the sense that triggers the duty not to discriminate against legitimate speech.[43] But it is not clear why this is a general duty that applies across the entirety of a social media network. Social media networks routinely restrict all manner of content that is protected by any plausible theory of free speech—such as low-quality commercial advertising (spam) and sexual content. Virtually everyone agrees that it would be unacceptable for the state to punish this speech; yet it is very counterintuitive to think that Facebook violates its users’ rights to free speech by restricting it. Indeed, if Kramer were correct, Facebook would be duty-bound to allow pornography on its platform—an implausible result.[44]

Next, while Kramer is surely right that SMCs are important sites for public discourse, so too are newspaper op-ed pages, television news and debate shows, and much else. Their status as such seems to trigger a demand for public justification and accountability for their activities; but it is not clear why they must exhibit the same viewpoint neutrality that presumptively binds the state’s speech-impacting decisions. In the case of traditional media, their contribution to public discourse seems to depend on making a wide range of editorial judgments about what views are reasonable and worth taking seriously. While social media companies of course have a different role than traditional media, that role (however one wishes to specify it) seems compatible with the moral duty I have sketched to guard against their spaces’ weaponization by nefarious actors. Note that such a view is compatible with the claim that platforms’ voluntary commitment to principles such as “user voice”, and their general obligations as public-facing businesses, generate an obligation not to discriminate against certain views or groups without justification. One can complain when a social media platform’s content decisions seem to place excessive burdens on speech by racial minorities, without thinking that the platform has duties to respect free speech akin to the state’s.

Suppose, however, Kramer were to come up with an argument establishing that platforms are, in fact, under the same free-speech duties as states. It would matter why states’ duties are what they are. Consider those who think the state shouldn’t ban hate speech because of instrumental anxieties about the abuse of state power; on such a view, hate speech is not intrinsically protected as free speech, but it remains all-things-considered impermissible for the state to restrict it. Such a view recognizes the distinctive evils involved when states misuse their awesome capacities. But of course platforms do not have anything like the awesome capacities of states—e.g., to imprison people for the views they express. A policy of subjecting certain speakers to criminal punishment, and a company policy of moderating speakers’ posts, involve manifestly different costs, plausibly leading to different standards for what speech may be restricted. Those (unlike Kramer) who endorse the orthodox First Amendment view on instrumental grounds (because of the dangers of giving the state too much power), then, could accept some kind of continuity between states’ and platforms’ duties without thinking their speech policies need to be identical. One could, in other words, think that states shouldn’t ban hate speech, while also thinking that platforms should.


7. Conclusion

Many citizens are increasingly inclined to argue that platforms have duties to moderate wrongful speech. Yet we have lacked a compelling argument as to why. My aim in this article has been to set out the positive case for such a responsibility. Much more would need to be said to offer a complete theory of the duty to engage in moderation; in particular, such a theory would need to explore the importance of avoiding bias, especially in the use of automated technologies for moderation. It would also need to address the harms that befall content moderation workers themselves.[45] Further, the duty to moderate is only one of many moral duties held by platforms. The analysis I have offered here is broadly compatible with a vision of single-mindedly profit-maximizing platforms, hellbent on maximizing engagement and thus advertising revenue—but (inconveniently) side-constrained by the stringent moral duties I have set out. Yet it would be a mistake to think this is where the story ends. As agents, platforms do not simply have the positive duties of rescue and negative duties to refrain from imposing or contributing to wrongful harms that I have canvassed here; they also plausibly have civic duties to help maintain a salutary public discourse—to bring the best out of their users, rather than the worst. Thinking through what those duties involve will require a fuller theory of the positive role that platforms should play in a liberal democracy—and greater disruption to the business model of social media as it currently functions.


Acknowledgements:

Thanks to the editors and referees of this journal for very helpful feedback, which led to several improvements. I am grateful to audiences at the 2022 American Philosophical Association Pacific Division conference in Vancouver; the 2022 American Political Science Association conference in Montreal; the 2022 MANCEPT Workshops at the University of Manchester; the Law & Philosophy Colloquium at Pompeu Fabra University; the CELPA Seminar at the University of Warwick; the Oxford Institute for Ethics in AI’s Lunchtime Research Seminar; the Conceptual Foundations of Conflict Project seminar at the University of Southern California; and the Legal & Political Theory Colloquium at UCL. I am grateful to written comments from, or detailed conversations with, Diana Acosta, Etienne Brown, Leslie Kendrick, Seth Lazar, Spencer McKay, Tom Parr, Seana Shiffrin, Robert Simpson, and John Tasioulas. I am thankful to UKRI for research funding (UKRI grant MR/V025600/1).





[1] See, for example, Meta’s community standards: https://transparency.fb.com/en-gb/policies/community-standards/, which govern speech on Facebook and Instagram.  [^]

[2] For relevant background on content moderation, see Kate Klonick, “The New Governors: The People, Rules, and Processes Governing Online Speech,” Harvard Law Review 131, 6 (2018): 1599-670; Evelyn Douek, “Content Moderation as Systems Thinking,” Harvard Law Review 136, 2 (2022): 526-607; Robert Gorwa et al., “Algorithmic Content Moderation: Technical and Political Challenges in the Automation of Platform Governance,” Big Data & Society 7, 1 (2020): 1-15; and Sarah T. Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (New Haven: Yale University Press, 2019).[^]

[3] See, e.g., https://transparency.fb.com/en-gb/enforcement/; https://transparency.twitter.com/en/reports/rules-enforcement.html#2021-jul-dec [^]

[4] Douek, “The Siren Call of Content Moderation Formalism,” in Lee Bollinger and Geoffrey Stone (eds.,), Social Media, Freedom of Speech, and the Future of our Democracy (Oxford: Oxford University Press, 2022), p. x. [^]

[5] This argument underpins the recent Texas statute forbidding “viewpoint discrimination” by social media platforms, upheld by the Fifth Circuit Court of Appeals; see NetChoice, LLC, v. Paxton, No. 21-51178 (5th Cir.), No. 22-555. A variation of this position is defended in Matthew Kramer, Freedom of Expression as Self-Restraint (Oxford: Oxford University Press, 2021), pp. 58-59, which I discuss below.[^]

[6] For some important examples of contributions to this issue, see Nicholas P. Suzor, Lawless: The Secret Rules That Govern Our Digital Lives (Cambridge: Cambridge University Press, 2019) and Tarleton Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media (New Haven: Yale University Press, 2018). [^]

[7] See Jeffrey W. Howard, "Dangerous Speech," Philosophy & Public Affairs 47, 2 (2019): 208-254.[^]

[8] These rules need not be identical for all users. Perhaps state officials have special responsibilities to refrain from lies and other culpable misrepresentations; for an illuminating defence of this view in the context of debates over social media, see Seana Shiffrin, "Unfit to Print: Government Speech and the First Amendment," UCLA Law Review 69 (2022): 986-1026.[^]

[9] Some speech might be wrongful yet neither harmful nor dangerous. I set this complication aside for present purposes. [^]

[10] For example, later I argue that some speech only becomes wrongful when aggregated and amplified. [^]

[11] It is immaterial whether we conceive them as bona fide group agents or not. See Christian List and Philip Pettit, Group Agency (Oxford: Oxford University Press, 2011), for the view that groups can be genuine agents. Of course, the fact that groups can be agents does not mean they hold primary moral rights; see Group Agency, p. 182. See also Kendy M. Hess, “‘If You Tickle Us....’: How Corporations Can Be Moral Agents without Being Persons,” Journal of Value Inquiry 47, 3 (2013): 319-335; Avia Pasternak, “From Corporate Moral Agency to Corporate Moral Rights,” The Law & Ethics of Human Rights 11, 1 (2017): 135-159; Meir Dan-Cohen, Rights, Persons, and Organizations (Berkeley: University of California Press, 1986).[^]

[12] The assumption that content moderation is a form of extrajudicial punishment arises, for example, in Eric Goldman, “Content Moderation Remedies,” Michigan Technology Law Review 28 (2021): 1-59. [^]

[13] With respect to the duty of rescue or other-defence, it doesn’t matter much that the relevant harm is wrongful (since this duty can require us to save people from tornadoes and wolves, who can’t be wrongdoers). The fact that certain speech is wrongfully harmful (or dangerous—i.e., risks harm) is mostly relevant below, since it explains (a) why speakers have moral duties to refrain from the speech, and (b) why platforms who do nothing to combat such speech count as complicit with it.[^]

[14] For an exploration of cases along these lines, see Victor Tadros, The Ends of Harm: The Moral Foundations of Criminal Law (Oxford: Oxford University Press, 2011), Chapters 11 and 12. [^]

[15] For versions of a view along these lines, see Chiara Lepora and Robert E. Goodin, On Complicity and Compromise (Oxford: Oxford University Press, 2013), and John Gardner, “Complicity and Causality,” in Offences and Defences: Selected Essays in the Philosophy of Criminal Law (Oxford: Oxford University Press, 2007). Note that a complicit act might be potentially essential to the principal wrong or definitely essential. A potentially essential complicit act is an INUS condition (to use Mackie’s term) of a principal wrong—i.e., an insufficient but necessary condition of an unnecessary but sufficient condition of the primary act’s occurrence. A definitely essential complicit act is a particularly strong version of an INUS condition: it is an insufficient but necessary condition of every unnecessary but sufficient condition of the primary act’s occurrence. See Lepora and Goodin, On Complicity and Compromise, p. 60n, and J.L. Mackie, The Cement of the Universe (Oxford: Clarendon Press, 1974).[^]

[16] Lepora and Goodin, On Complicity and Compromise, p. 34. Consider the example of a policeman who abets a petty theft; surely the policeman, while merely an accomplice, is nevertheless morally worse in this case than the petty thief. [^]

[17] Some acts of complicity might be pro tanto wrongful but all-things-considered justified. Consider the humanitarian organization that must bribe a warlord to access and thus provide aid in some territory—a bribe that helps the warlord buy bullets. The humanitarian organization is complicit with the killings its bribe made possible—there is genuinely pro tanto wrong done here, triggering remedial and compensatory obligations—yet in many cases there could well be an all-things-considered justification, depending on just how many lives are thereby saved.[^]

[18] It might be suggested that the intervening agency of those who buy guns and then use them for wrongdoing means that they, rather than the companies, are responsible for any ensuing harm. Similarly, one might think that it is those who abuse platforms to harm or endanger others who should be held responsible, rather than platforms themselves. My own view is that while criminal gunmen and online abusers are certainly responsible for their own wrongdoing, this fact does not let others off the hook; manufacturers and platforms remain potential accomplices if they have failed to take reasonable steps to minimize their contributions to that principal wrongdoing. I defend the claim that intervening agency has less significance than commonly supposed in [omitted], following Victor Tadros, “Permissibility in a World of Wrongdoing,” Philosophy & Public Affairs 44, 2 (2016): 101-132. I am grateful to a referee for raising this issue. [^]

[19] While I focus here on social media platforms, the framework I offer here plausibly applies, mutatis mutandis, to other platforms like search engines, and even to offline platforms. I thank a referee for raising this issue.[^]

[20] Section 230 of the Communications Decency Act in the U.S., which confers a broad immunity on platforms for most illegal content posted by users, captures this concern; for example, if platforms were liable for all defamatory remarks posted by users, they would have to shut down. While Section 230 raises a wide range of policy issues, it is coherent to argue that platforms should retain 230 immunity (shielding them from an onslaught of private action claims) while also arguing that platforms should be subjected to the kinds of oversight and risk assessment by regulatory bodies that will soon be in operation in the UK and EU. This, it seems to me, holds platforms accountable without making their business untenable (and thereby compromising the social value it brings). For relevant background, see Jeff Kosseff, The Twenty-Six Words that Created the Internet (Ithaca: Cornell University Press, 2019). [^]

[21] We might call this “fact-relative complicity.” The language of “fact-relative”, in contrast to “evidence-relative” or “belief-relative”, traces to Derek Parfit, On What Matters, volume 1 (Oxford: Oxford University Press, 2013).[^]

[22] Christopher Kutz, Complicity: Ethics and Law for a Collective Age (Cambridge: Cambridge University Press, 2000), pp. 74, 89. Lepora and Goodin reject Kutz’s view (On Complicity and Compromise, p. 80). Kutz’s view is also rejected by John Gardner, “Review of Complicity: Ethics and Law for a Collective Age by Christopher Kutz,” Ethics 114 (2004): 827–30. [^]

[23] Larry May, Genocide: A Normative Account (Cambridge: Cambridge University Press, 2010), p. 169; this is also cited and discussed in Lepora and Goodin, On Complicity and Compromise, p.42.[^]

[24] For example, some instances of platforming wrongful speech will not be reasonably foreseeable (e.g., for new cases of “coded” language), and so it would be infeasible to prevent it. [^]

[25] Of course, not all duties are this stringent; different categories of wrongful speech will trigger differentially stringent moderation responsibilities, depending on how harmful/dangerous the category is.[^]

[26] Given the scale of these platforms and the sheer volume of speech within them, such a herculean feat cannot be accomplished through humans alone. It requires the deployment of artificial intelligence, trained to hunt for violations. The use of AI raises distinctive normative problems, which I discuss elsewhere. For a critical take on the use of AI, and the ways the tech sector overplays its benefits, see Michael Barnes, “Online Extremism, AI, and (Human) Content Moderation,” Feminist Philosophy Quarterly 8, 3/4 (2022): 1-28. For arguments that AI systems struggle to satisfy our right to explanation (given the inscrutability of their “black box” deliverances), see Kate Vredenburgh, “The Right to Explanation,” The Journal of Political Philosophy 30, 2 (2022): 209-229; Seth Lazar, “Legitimacy, Authority, and the Political Value of Explanations,” Computers and Society (2022): https://doi.org/10.48550/arXiv.2208.08628 [^]

[27] For example, hate speech is especially pernicious when uttered by a (perceived) practical authority, who has the (perceived) illocutionary power to authorize, permit, and license the subordination of targeted groups; see Rae Langton, “The Authority of Hate Speech,” Oxford Studies in Philosophy of Law, volume 3 (Oxford: Oxford University Press, 2018), 123-152. Likewise, harmful misinformation is especially harmful when communicated by (perceived) epistemic authorities.[^]

[28] For an accessible overview of how recommender/amplification systems work, see the outstanding newsletter “Understanding Recommenders” edited by Luke Thorburn, Jonathan Stray, and Priyanjana Bengani, available at https://medium.com/understanding-recommenders. For a good discussion of what counts as amplification, with attention to the baseline issue, see especially their “Making Amplification Measurable,” Medium, April 28, 2023. For related analysis of the difficulties in measuring amplification, and the unavailability of any neutral baseline, see Kristian Lum and Tomo Lazovich, “The Myth of ‘The Algorithm’: A System-Level View of Algorithmic Amplification,” (manuscript on file with author). See also Erin Miller, “Amplified Speech,” Cardozo Law Review 43: 1-69, especially pp. 11-12. [^]

[29] For incisive philosophical work on the attention economy, see Seth Lazar, “Communicative Justice and the Distribution of Attention,” Tanner Lectures on AI and Human Values 2023 (on file with author). For a historical perspective, see Tim Wu, The Attention Merchants: The Epic Scramble to Get Inside Our Heads (New York: Knopf, 2016). See also Thorburn et al., “What Does it Mean to Give Someone What They Want? The Nature of Preferences in Recommender Systems,” Medium, March 11, 2022.[^]

[30] [redacted] [^]

[31] Rae Langton, “Blocking as Counter-Speech,” in Daniel Fogal (ed.), New Work on Speech Acts (Oxford: Oxford University Press, 2018), pp. 144-164; Maxime Lepoutre, Democratic Speech in Divided Times (Oxford: Oxford University Press, 2021), Chapters 3 and 4.[^]

[32] A reviewer asks why it wouldn’t be enough simply for platforms to alter their algorithmic systems such that this content is not amplified in the first place. While this would be preferable to doing nothing, I stress that such speech is wrongfully harmful even if it is not amplified. Reducing amplification for this content, then, is not enough. (In contrast, the next section discusses cases where reducing amplification would be sufficient, at least potentially.) [^]

[33] This is the basic insight of the EU Digital Services Act. Note that, in principle, governments could require risk assessments while remaining content-neutral about what exact speech ought to be removed—thereby (better) avoiding (in the U.S. context) a First Amendment challenge. In other jurisdictions, content-based directives can be consistent with free speech protections; for a philosophical defense of the claim that content- and even viewpoint-based restrictions on (some) wrongful speech can be compatible with a proper commitment to freedom of expression.[^]

[34] Strictly speaking, we can imagine four sets of cases: (1) speech that is wrongful even when not aggregated and amplified; (2) speech that is wrongful only when aggregated with similar speech (regardless of whether amplified); (3) speech that is wrongful only when amplified (regardless of whether aggregated with similar speech); and (4) speech that is wrongful only when amplified and aggregated with similar speech. I have focused above on (1) and in this section am focused on (4), but that is not to deny the importance of (2) and (3). What I say here applies, mutatis mutandis, to them, too. Seth Lazar has independently developed some similar insights, distinguishing between stochastic harm (whereby aggregation increases the probability of harm) and collective harm (whereby lots of small harms aggregate into a large harm); see his “Communicative Justice and the Distribution of Attention,” Tanner Lectures on AI and Human Values 2023 (on file with author). [^]

[35] Thanks to an editor for suggesting this point. For discussion, see Paul Billingham and Tom Parr, “Online Public Shaming: Virtues and Vices,” Journal of Social Philosophy 51 (3): 371-390.[^]

[36] It is striking that platforms arguably already do this for speech that is wrongful independent of amplification. For example, if a speaker posts a slur to a platform, it is because the platform’s systems enable it to be posted that the speaker thereby becomes a wrongdoer. Had the post remained for her own private viewing, it wouldn’t have constituted a wrong (at least, not an instance of wrongful harm). In this way, platforms already make the difference between whether some people are wrongdoers or not. What is distinctive about the amplification power is that it takes communications that would likely be innocuous offline and puts them in a context where they can cause or risk real harm. [^]

[37] For relevant discussion of platform policies on deamplification, see Tarleton Gillespie, “Do Not Recommend: Reduction as a Form of Content Moderation,” Social Media + Society (online first) https://doi.org/10.1177/20563051221117552. See also Daphne Keller, “Amplification and Its Discontents,” Knight First Amendment Institute, June 8, 2021. Available at https://knightcolumbia.org/content/amplification-and-its-discontents.[^]

[38] See, e.g., Miller, “Amplified Speech.” [^]

[39] For incisive reflection on the First Amendment in the age of social media, with an emphasis on ways in which online speech can be weaponized (especially by governments) to the detriment of public discourse, see Tim Wu, "Is the First Amendment Obsolete?” Michigan Law Review 117, 3 (2018): 547-581.[^]

[40] This piece was accepted before the publication of J.P. Messina, Private Censorship (Oxford: Oxford University Press, 2023), so I regrettably do not engage with that book's argument here.[^]

[41] For relevant litigation in the state of Texas that is likely to be heard eventually before the U.S. Supreme Court, see NetChoice v. Paxton, No. 21-51178 (5th Cir. 2022). [^]

[42] Matthew Kramer, Freedom of Expression as Self-Restraint (Oxford: Oxford University Press, 2021), pp. 58-59.[^]

[43] This was the central claim at issue in Knight First Amendment Institute v. Trump, 928 F.3d 226 (2019). For relevant discussion, see Dawn Nunziato, “From Town Square to Twittersphere: The Public Forum Doctrine Goes Digital,” BU Journal of Science and Technology Law 25 (2019): x-x. [^]

[44] Of course, one could hold that pornography falls outside the legitimate ambit of free speech. Yet Kramer himself rejects that position; see Freedom of Expression as Self-Restraint, pp. 160ff.[^]

[45] Both issues are astutely addressed in Karen Frost-Arnold, Who Should We Be Online? (Oxford: Oxford University Press, 2022), Chapter 2, which uses tools from social epistemology to explore both the exploitation of workers and the epistemic injustices that current moderation systems produce. [^]

[46] Strikingly, Frost-Arnold argues that current content moderation systems make it nearly impossible for moderators to exercise the epistemic virtues.[^]
