<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2 20120330//EN" "http://jats.nlm.nih.gov/publishing/1.2/JATS-journalpublishing1.dtd">
<!--<?xml-stylesheet type="text/xsl" href="article.xsl"?>-->
<article article-type="research-article" dtd-version="1.2" xml:lang="en" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<front>
<journal-meta>
<journal-id journal-id-type="issn">2330-4014</journal-id>
<journal-title-group>
<journal-title>Ergo: An Open Access Journal of Philosophy</journal-title>
</journal-title-group>
<issn pub-type="epub">2330-4014</issn>
<publisher>
<publisher-name>Michigan Publishing Services</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3998/ergo.7967</article-id>
<article-categories>
<subj-group>
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Moral Uncertainty, Proportionality and Bargaining</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Kaczmarek</surname>
<given-names>Patrick</given-names>
</name>
<email>pakazmarek@gmail.com</email>
<xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Lloyd</surname>
<given-names>Harry R.</given-names>
</name>
<xref ref-type="aff" rid="aff-2">2</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Plant</surname>
<given-names>Michael</given-names>
</name>
<xref ref-type="aff" rid="aff-3">3</xref>
</contrib>
</contrib-group>
<aff id="aff-1"><label>1</label>Centre for Ethics, Philosophy and Public Affairs, University of St Andrews</aff>
<aff id="aff-2"><label>2</label>University of North Carolina, Chapel Hill</aff>
<aff id="aff-3"><label>3</label>Wellbeing Research Centre, University of Oxford</aff>
<pub-date publication-format="electronic" date-type="pub" iso-8601-date="2025-07-15">
<day>15</day>
<month>07</month>
<year>2025</year>
</pub-date>
<pub-date pub-type="collection">
<year>2025</year>
</pub-date>
<volume>12</volume>
<elocation-id>44</elocation-id>
<permissions>
<copyright-statement>Copyright: &#x00A9; 2025 The Author(s)</copyright-statement>
<copyright-year>2025</copyright-year>
<license license-type="open-access" xlink:href="https://creativecommons.org/licenses/by-nc-nd/4.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0), which permits use, distribution, and reproduction in any medium for noncommercial purposes, provided the original author and source are credited and the work is not altered. See <uri xlink:href="https://creativecommons.org/licenses/by-nc-nd/4.0/">https://creativecommons.org/licenses/by-nc-nd/4.0/</uri>.</license-p>
</license>
</permissions>
<self-uri xlink:href="https://journals.publishing.umich.edu/ergo/article/10.3998/ergo.7967/"/>
<abstract>
<p>Besides disagreeing about <italic>how much</italic> one should donate to charity, moral theories also disagree about <italic>where</italic> one should donate. In many cases, one intuitively attractive option is to split your donations across all of the charities that are recommended by theories in which you have positive credence, with each charity&#8217;s share being proportional to your credence in the theories that recommend it. Although something like this approach is already widely used by real-world philanthropists to distribute billions of dollars, it is not supported by any account of decision making under moral uncertainty proposed thus far in the literature. This paper develops a new bargaining-based approach that honors the proportionality intuition. We also show that this approach has several advantages over the best alternative proposals.</p>
</abstract>
</article-meta>
</front>
<body>
<sec>
<title>1. Introduction</title>
<p>Consider</p>
<disp-quote>
<p><italic>Torn Up:</italic> Jane intends to give away her fortune. Although positive that suboptimal sacrifices are wrong<xref ref-type="fn" rid="n1">1</xref>, Jane is torn between two moral theories. One implies that she should donate her fortune to an initiative providing deworming pills to distant children; the second implies that she should support local soup kitchens. After many sleepless nights, she is no closer to knowing what the right thing is to do.</p>
</disp-quote>
<p>Jane&#8217;s predicament is all too familiar. Each of us has made tough choices while plagued by doubt.</p>
<p>What should she do?</p>
<p>When agents are deciding under uncertain conditions, we can distinguish between the &#8216;objective should,&#8217; the &#8216;subjective should,&#8217; and the &#8216;super-subjective should&#8217; (<xref ref-type="bibr" rid="B17">Hedden 2012</xref>; <xref ref-type="bibr" rid="B45">Sung 2022</xref>). The &#8216;objective should&#8217; describes what should be done given full knowledge of the situation. Suppose the moral view that implies Jane should donate to the deworming initiative is in fact true. If so, then she objectively should fund the deworming initiative, and it would be wrong for Jane to fund local soup kitchens.<xref ref-type="fn" rid="n2">2</xref> Clearly, this is the advice that Jane prizes most and wishes she had to follow in <italic>Torn Up</italic>. Try as she did, however, Jane could not glean the objectively right thing to do. All she has to go on are her patchy beliefs about what is good or bad, permissible or wrongful, supererogatory and so forth. By contrast, the two remaining senses of &#8216;should&#8217; guide deliberation by virtue of being sensitive to an agent&#8217;s false and gappy beliefs (<xref ref-type="bibr" rid="B30">Mu&#241;oz and Spencer 2021: 77</xref>). Following Brian Hedden (<xref ref-type="bibr" rid="B17">2012</xref>) and Leora Sung (<xref ref-type="bibr" rid="B45">2022</xref>), we understand the &#8216;subjective should&#8217; as being sensitive to an agent&#8217;s descriptive uncertainty, whereas the &#8216;super-subjective should&#8217; is sensitive to her descriptive <italic>and moral</italic> uncertainty.<xref ref-type="fn" rid="n3">3</xref> This last sense of &#8216;should&#8217; is most relevant to Jane&#8217;s situation. Unless otherwise stated, by &#8216;should&#8217; we will mean &#8216;super-subjectively should&#8217; from this point onward.</p>
<p>Our own intuition is that Jane should split her donations in <italic>Torn Up</italic>. She should give some portion of her donations to the deworming initiative and the rest to local soup kitchens, where the precise distribution corresponds to her credences in the two moral theories that Jane is torn between. Let&#8217;s call this response &#8220;Proportionality.&#8221; Stated more generally, Proportionality is the view that if some decision maker has <italic>x</italic>% credence in some moral theory, then she should use <italic>x</italic>% of her overall endowment of resources in the manner recommended by that particular moral theory.<xref ref-type="fn" rid="n4">4</xref></p>
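<p>Stated in symbols (our notation, not part of the paper&#8217;s own formulation): if the decision maker has credences summing to 1 across the moral theories she takes seriously, Proportionality fixes each theory&#8217;s share of her endowment as follows.</p>

```latex
% Proportionality, formalized (our notation, for illustration only).
% B   = the agent's total endowment of resources
% c_i = her credence in moral theory T_i, with credences summing to 1
\text{share}(T_i) = c_i \cdot B, \qquad \sum_{i=1}^{n} c_i = 1
```

<p>For instance, if Jane had credence 0.6 in the theory favoring deworming and 0.4 in the theory favoring soup kitchens, Proportionality would direct her to donate 60% of her fortune to the deworming initiative and 40% to local soup kitchens.</p>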
<p>Many people, we gather, feel the same about <italic>Torn Up</italic> and cases like it; so much so that it might be said to be part of common sense to respond proportionally to a case like <italic>Torn Up</italic>. Note that several features of <italic>Torn Up</italic> help to make the Proportionality response attractive here. Firstly, the charitable interventions favored by each of the two moral theories between which Jane is torn are independent of each other in the sense that soup kitchens in themselves neither thwart nor promote deworming initiatives, and vice versa. (As we discuss in &#167;2.2 below, things are rather different in cases where this assumption fails.) Secondly, in deciding how to donate her fortune, we can assume that Jane is facing a decision which both of her moral theories regard as &#8220;high stakes&#8221; relative to any other moral decision that Jane knows she will confront. (As we discuss in &#167;6 below, prima facie it seems appropriate for a moral agent who faces a sequence of several choices to give priority in each choice C to the moral theory or theories that regard C as &#8220;higher stakes&#8221; than the other choices which the moral agent will confront.)</p>
<p>However, these special features notwithstanding, cases like <italic>Torn Up</italic> illustrate that there is something attractive about the Proportionality idea that each moral theory&#8217;s degree of influence over how one allocates one&#8217;s resources should be proportional to one&#8217;s degree of credence in that moral theory. Even if the Proportionality intuition is far from ubiquitous, it is in fact regularly relied upon to decide the fates of millions of people, many of whom are young, poor, and vulnerable to disease. In particular, some effective altruists seem to rally behind this intuition.<xref ref-type="fn" rid="n5">5</xref> This social movement counts Dustin Moskovitz and Cari Tuna among its ranks, who promised to give away billions of dollars in (apparent) accordance with Proportionality.<xref ref-type="fn" rid="n6">6</xref></p>
<p>And yet, despite its practical importance, the widespread practice of diversifying donations when morally uncertain has gone unexamined by philosophers, even by those who would consider themselves card-carrying effective altruists.<xref ref-type="fn" rid="n7">7</xref> This paper rectifies that oversight. We defend a novel account for handling moral uncertainty which honors the intuition behind Proportionality.<xref ref-type="fn" rid="n8">8</xref></p>
<p>Here is the plan. In &#167;2, we will present two challenges for constructing this account. First, Proportionality does not cover cases where the agent faces a choice between discrete options, as opposed to a resource distribution case like <italic>Torn Up</italic>. Second, there is a class of cases&#8212;so-called &#8220;Jackson cases&#8221;&#8212;where Proportionality could be applied but delivers counterintuitive verdicts.<xref ref-type="fn" rid="n9">9</xref> Jackson cases drive many of our colleagues into the arms of Maximize Expected Choice-Worthiness, a rival account of appropriate behavior under conditions of moral uncertainty. We put pressure on their argument in &#167;3. Finally, we will develop and defend a bargaining approach across &#167;&#167;4-6.</p>
<p>Although honoring Proportionality is the initial inspiration for the bargaining approach to moral uncertainty we put forward, we will also argue that this approach has a number of other attractive features, including avoiding inter-theoretic comparisons of choiceworthiness, and dissolving the problems of &#8220;fanaticism&#8221; and &#8220;demandingness.&#8221; Thus, our approach might be attractive even to those who are not strongly swayed by intuitions in support of Proportionality. In short: Although Proportionality inspired our bargaining approach, it is far from that approach&#8217;s only selling point.</p>
</sec>
<sec>
<title>2. Two Challenges</title>
<p>Proportionality strikes many as a plausible solution to the problem of &#8220;where to give&#8221; when morally uncertain. But we face moral uncertainty in non-donation cases too, regarding the wrongness of eating meat, breaking promises, diverting trolleys into innocent bystanders, and so on.</p>
<p>As this section demonstrates, Proportionality tends to offer bad advice on these other quandaries. But if we must appeal to one or more additional subjective norms for reasonable guidance in these kinds of cases, then this raises doubts about Proportionality. Is there a unified account that explains all possible cases?</p>
<sec>
<title><italic>2.1. Proportionality is Incomplete</italic></title>
<p>The most immediate challenge to Proportionality is that it has no guidance to offer in a variety of choice situations. In general, Proportionality does not cover those cases where the agent faces a choice between discrete options, as opposed to a resource distribution case like <italic>Torn Up</italic>. Consider:</p>
<disp-quote>
<p><italic>Trolley:</italic> A runaway trolley is barreling down the track towards two innocent strangers. It will soon kill them both if you do nothing. Standing beside you is a man, George, wearing a heavy backpack. If you push him into the path of the trolley, their combined weight will cause the trolley to come to a complete stop before killing the pair of strangers, but George will unfortunately be crushed to death.<xref ref-type="fn" rid="n10">10</xref></p>
</disp-quote>
<p>Suppose that you are torn between two moral theories. You have 10% credence in a moral theory that prescribes shoving George into the trolley path, since doing so is best. But you also have 90% credence in a moral theory that proscribes doing so, since George has not waived his right against bodily harm.</p>
<p>What does Proportionality recommend?</p>
<p>Nothing; this is because it is impossible to commit to both pushing George <italic>and</italic> not pushing him into the path of the trolley. Indeed, even if you had the option of dangling George&#8217;s legs over the tracks, such that he survives and the trolley stops before killing the second but not the first stranger, doing so neither <italic>partially</italic> violates his rights nor treats <italic>only part of George</italic> as a mere means. Rather, doing so violates his right and fails to treat him as an end in himself. And so, even when armed with this expanded choice-set you will still be unable to partially satisfy both moral views in <italic>Trolley</italic>.</p>
<p>Many of the moral situations that ordinary people will face in life involve choosing between discrete options. Proportionality either goes silent or asks the impossible of us. This is highly troublesome.</p>
<p>To our minds, the decision maker should <italic>not</italic> push George onto the tracks in <italic>Trolley</italic>. We will revisit the question of how to square the desired verdict in this case with Proportionality in &#167;5.</p>
</sec>
<sec>
<title><italic>2.2. Proportionality is Reckless</italic></title>
<p>Perhaps there is some relief to be found in telling ourselves that the first challenge does not yet reveal a fatal error in choosing to split donations proportionally when morally uncertain. However, there is a second, no less severe, problem awaiting those who were unshaken by the first.</p>
<p>We begin by looking at a classic puzzle, <italic>Miners</italic>, and pulling out the main lesson that it teaches.</p>
<disp-quote>
<p><italic>Miners:</italic> There was a disaster in the quarry, and 100 miners are trapped in Shaft A; the nearby Shaft B is empty. You know that, if you do nothing, then both mineshafts will partly flood and 10 miners will die. You also know that, if you block the mineshaft where the miners are, you will save all 100. And if you block the empty shaft, the other will totally flood, drowning all 100. But your evidence doesn&#8217;t tell you where the miners are; for you, it&#8217;s a 50/50 guess (<xref ref-type="bibr" rid="B30">Mu&#241;oz and Spencer 2021: 78</xref>).</p>
</disp-quote>
<p>What should you do?</p>
<p>It isn&#8217;t the case that you objectively should refrain from blocking either shaft. After all, you know that if the miners are trapped in Shaft A, then you objectively should block Shaft A. And you also know that if the miners are trapped in Shaft B, then you objectively should block Shaft B. Wherever these miners are located, blocking neither mineshaft is sure to be the wrong thing to do.</p>
<p>Yet, your doxastic attitudes being what they are, it is <italic>reckless</italic> to block either mineshaft; because you don&#8217;t know their location, you are as likely to kill the miners as you are to rescue them. The lesson we are meant to learn in <italic>Miners</italic> is this: &#8220;you (subjectively) shouldn&#8217;t even try to do as you objectively ought, because you don&#8217;t know which shaft you objectively ought to block&#8212;and a wrong guess spells disaster&#8221; (<xref ref-type="bibr" rid="B30">Mu&#241;oz and Spencer 2021: 79</xref>).</p>
<p><italic>Miners</italic> is a Jackson case. The following criteria make for a Jackson case: (a) the agent should choose an option that is suboptimal; (b) the agent knows that the option she should choose is suboptimal; and (c) it would be unacceptably reckless for the agent to choose any other option (<xref ref-type="bibr" rid="B9">Field 2019: 394</xref>). The purpose of having gone through the <italic>Miners</italic> exercise was to establish as much.</p>
<p>Notice that in cases where one does not have any empirical uncertainty, Proportionality <italic>never</italic> recommends putting any resources towards outcomes known to be objectively wrong (or rather, actions which every moral theory that you have positive credence in deems impermissible). So, Proportionality will never honor the lesson from <italic>Miners</italic>. As such, Proportionality is objectionably insensitive to the stakes described by moral theories. Consider:</p>
<disp-quote>
<p><italic>Mining Safari:</italic> You know all of the following. There was a disaster in the quarry: 10 giraffes are trapped in Shaft A and 20 canaries are trapped in Shaft B. There isn&#8217;t enough time to fully block both shafts. If you block Shaft A, then the giraffes will be saved, but the other shaft will totally flood, killing the canaries. If you block Shaft B, then the canaries will be saved, but the other shaft will totally flood, killing the giraffes. You could partially block each shaft, but bricks and other deadly debris will then get washed into both mineshafts by the water, making the flood that much more deadly and killing everyone inside. If you block neither shaft, 6 giraffes and 12 canaries will survive.</p>
</disp-quote>
<p>Suppose that you are equally torn between Peter Singer&#8217;s utilitarianism, according to which all creatures with moral standing share the same moral status (<xref ref-type="bibr" rid="B44">2009</xref>), and Shelly Kagan&#8217;s hierarchical approach, which assigns a lower moral status to canaries than giraffes (<xref ref-type="bibr" rid="B22">2019</xref>).</p>
<p><xref ref-type="table" rid="T1">Table 1</xref> describes the status-adjusted goodness of rescuing these animals. For concreteness, let&#8217;s assume the value of saving a giraffe&#8217;s life is 1, that saving a canary&#8217;s life is equally good as saving a giraffe&#8217;s life on Singer&#8217;s view and that the status-adjusted goodness of saving a canary&#8217;s life is <inline-formula><alternatives><mml:math id="Eq001-mml"><mml:mrow><mml:mpadded><mml:mn mathsize="70%">1</mml:mn></mml:mpadded><mml:mpadded lspace="-0.1em" width="-0.15em"><mml:mo stretchy="true" symmetric="true">/</mml:mo></mml:mpadded><mml:mn mathsize="70%">4</mml:mn></mml:mrow></mml:math></alternatives></inline-formula> that of saving a giraffe&#8217;s life on Kagan&#8217;s view. Singer&#8217;s view recommends blocking Shaft B; meanwhile, Kagan&#8217;s view recommends blocking Shaft A.<xref ref-type="fn" rid="n11">11</xref></p>
<table-wrap id="T1">
<caption>
<p><bold>Table 1:</bold> Upside in <italic>Mining Safari</italic>.</p>
</caption>
<table>
<tbody>
<tr>
<td align="left" valign="top"></td>
<td align="center" valign="top"><bold>Singer&#8217;s View</bold></td>
<td align="center" valign="top"><bold>Kagan&#8217;s View</bold></td>
</tr>
<tr>
<td align="left" valign="top">Block A</td>
<td align="center" valign="top">10</td>
<td align="center" valign="top">10</td>
</tr>
<tr>
<td align="left" valign="top">Block B</td>
<td align="center" valign="top">20</td>
<td align="center" valign="top">5</td>
</tr>
<tr>
<td align="left" valign="top">Neither</td>
<td align="center" valign="top">18</td>
<td align="center" valign="top">9</td>
</tr>
<tr>
<td align="left" valign="top">Block both (partially)</td>
<td align="center" valign="top">0</td>
<td align="center" valign="top">0</td>
</tr>
</tbody>
</table>
</table-wrap>
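<p>Each view&#8217;s verdict can be checked mechanically from the weights stated above (giraffe = 1 on both views; canary = 1 on Singer&#8217;s view and 1/4 on Kagan&#8217;s). Here is a minimal sketch, with variable names of our own choosing:</p>

```python
# Status-adjusted value of each option in Mining Safari, per moral view.
# Weights: giraffe = 1 on both views; canary = 1 (Singer) or 0.25 (Kagan).
def value(giraffes_saved, canaries_saved, canary_weight):
    return giraffes_saved + canary_weight * canaries_saved

outcomes = {  # option -> (giraffes saved, canaries saved)
    "Block A": (10, 0),
    "Block B": (0, 20),
    "Neither": (6, 12),
    "Block both (partially)": (0, 0),
}

singer = {o: value(g, c, 1.0) for o, (g, c) in outcomes.items()}
kagan = {o: value(g, c, 0.25) for o, (g, c) in outcomes.items()}

print(max(singer, key=singer.get))  # Singer's top option: "Block B"
print(max(kagan, key=kagan.get))    # Kagan's top option: "Block A"
```

<p>On both views, partially blocking both shafts scores worst of all four options.</p>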
<p>What should you do?</p>
<p>If we tried to extend the Proportional division of resources idea to <italic>Mining Safari</italic> in a literal-minded way&#8212;thinking about your pile of bricks as your endowment of resources&#8212;then we would have to say that you should partially block both mineshafts (thereby splitting your resources between bricking up Shaft A, as Kagan recommends, and Shaft B, as Singer recommends). Clearly, however, this is absurd. Although they disagree about the ranking of the alternatives in <italic>Mining Safari</italic>, Singer&#8217;s and Kagan&#8217;s views each internally recognize partially blocking both shafts as the worst possible outcome in this situation.</p>
<p>This feature of the case seems relevant, and perhaps that&#8217;s where the excessively literal-minded reading of Proportionality goes wrong. Suppose it was instead Peter and Shelly who stumbled into <italic>Mining Safari</italic>. What would they do? We cannot imagine that either of them would dig their heels in, refusing to alter course in light of the other&#8217;s preference. Peter and Shelly would recognize, in other words, they cannot singlehandedly determine the outcome. This brings out the underlying flaw in Proportionality: <italic>it overlooks how the various recommendations from each theory combine to bring about some final outcome</italic>.</p>
<p>We can begin to patch this flaw by proposing that your decision procedure should reflect what flesh-and-blood Peter and Shelly would actually do. Their theories should be made to, in a sense to be explained, <italic>cooperate</italic>. Let&#8217;s suppose that each view you are torn between is assigned a representative in your practical deliberations, and that each representative tailors their recommendation to account for the preferences of other representatives. In this case, the first representative champions Singer&#8217;s view while the second champions Kagan&#8217;s view. What would your Kagan representative recommend in light of what the Singer representative prefers happen with her share of your resources? He would not recommend blocking Shaft A conditional on the Singer representative blocking Shaft B, since doing so guarantees the worst possible outcome. What would your Singer representative recommend in light of what the Kagan representative prefers happen with his share of your resources? Similarly, she would not recommend blocking Shaft B conditional on your Kagan representative blocking Shaft A, since doing so guarantees the worst possible outcome. Thus, we can rule out partially blocking both mineshafts in response to <italic>Mining Safari</italic>. However, we have not yet fleshed out this basic idea in enough detail to determine which of the remaining options these representatives would actually recommend (that detail will come in &#167;4 below).</p>
<p>Although we seem to be on the right track by viewing the problem of what to do when morally uncertain as a cooperation problem, the idea just sketched falls short of a satisfying solution. This is because failures of cooperation can crop up even when representatives are made aware of one another&#8217;s preferences. The problem can be illustrated with the aid of another toy example.</p>
<disp-quote>
<p><italic>Procreation:</italic> Jane intends to give away her life savings. If she funds a fertility initiative, two additional children will be born. Alternatively, Jane can fund a contraception initiative that results in two fewer pregnancies. She is equally torn between impersonal total utilitarianism and anti-natalism.<xref ref-type="fn" rid="n12">12</xref> Total utilitarianism implies that Jane has strong all-things-considered reason to fund the fertility initiative, and equally strong all-things-considered reason not to fund the contraception initiative. By contrast, anti-natalism implies that Jane has strong all-things-considered reason to fund the contraception initiative, and equally strong all-things-considered reason not to fund the fertility initiative. Both of these moral theories agree that Jane has <italic>some</italic> all-things-considered reason to fund the Against Malaria Foundation, which supplies insecticide-treated bed nets to children at risk of contracting malaria. However, both views also agree funding the Against Malaria Foundation would be wrong <italic>qua</italic> suboptimal. After many sleepless nights, Jane is no closer to knowing what the right thing is to do.</p>
</disp-quote>
<p>If Jane gives the fertility and contraception initiatives an equal share of her money, they will balance each other out, leaving the world exactly as she found it (as far as these moral theories are concerned). Given this, splitting her donations is no more valuable than doing nothing; she would simply be frittering away her fortune.</p>
<p>As above, suppose that Jane deliberates as if she had two representatives tailoring their recommendations in <italic>Procreation</italic>. Whatever the anti-natalist representative does, her total utilitarian representative prefers to fund the fertility initiative. After all, if the anti-natalist representative were successful in preventing the birth of, say, Elroy, the world would be worse on balance, according to the total utilitarian representative, given there would be less happiness in the population. From the total utilitarian representative&#8217;s point of view, maintaining the status quo by creating, say, Judy, is more important than bed nets. And if the anti-natalist representative gives out bed nets, then the total utilitarian representative still prefers creating Judy over supplying bed nets. Jane&#8217;s anti-natalist representative similarly prefers funding the contraception initiative whatever the total utilitarian representative chooses to do. So, together they squander Jane&#8217;s donations. And yet, both the total utilitarian representative and the anti-natalist representative agree that bed nets are better than nothing at all.</p>
<p>Parfit (<xref ref-type="bibr" rid="B35">1984: 91</xref>) called this an &#8220;Each-We Dilemma.&#8221;<xref ref-type="fn" rid="n13">13</xref> If each of Jane&#8217;s theory representatives produces the best outcome they can individually, they produce a worse outcome collectively.</p>
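<p>The structure can be made vivid with purely illustrative payoffs of our own devising (these numbers appear nowhere in <italic>Procreation</italic> itself): suppose each theory scores its favored initiative at +10, the initiative it opposes at &#8722;10, and bed nets at +1.</p>

```latex
% Illustrative payoff matrix for Procreation (hypothetical numbers, ours only).
% Row player: total utilitarian representative; column player: anti-natalist.
% Each cell gives (utilitarian payoff, anti-natalist payoff).
\begin{array}{l|cc}
 & \text{Contraception} & \text{Bed nets} \\
\hline
\text{Fertility} & (0,\ 0) & (11,\ -9) \\
\text{Bed nets} & (-9,\ 11) & (2,\ 2)
\end{array}
```

<p>Funding fertility dominates for the utilitarian representative, and funding contraception dominates for the anti-natalist representative; yet the resulting outcome, (0, 0), is worse for both than the cooperative outcome (2, 2), exactly the structure of a prisoner&#8217;s dilemma.</p>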
<p>We believe that Jane should donate all of her savings to the Against Malaria Foundation in <italic>Procreation</italic>. And we don&#8217;t seem to be alone in thinking this; Toby Ord (<xref ref-type="bibr" rid="B33">2015</xref>) defends the same conclusion in scenarios where distinct <italic>people</italic> rather than imaginary representatives hold different moral views and coordinating would be mutually beneficial. He refers to this as &#8216;moral trade.&#8217; This suggests that a promising approach to handling moral uncertainty may be to treat it as a case of intra-personal bargaining, where we imagine what the representatives of the different moral theories that one believes in would do, if given the opportunity to coordinate their actions.</p>
</sec>
<sec>
<title><italic>2.3. Recap</italic></title>
<p>&#167;2 surveyed the main challenges for constructing a comprehensive account for handling moral uncertainty that vindicates the practice of diversifying one&#8217;s donations when torn between competing moral theories. Viewing the problem of what one ought to do when morally uncertain as a cooperation problem between moral theories seems promising. At first blush, a sophisticated bargaining approach to moral uncertainty seems well-placed to explain all of the cases in this paper.</p>
<p>We will further motivate the idea in &#167;4. First, however, we introduce Maximize Expected Choice-Worthiness&#8212;a popular rival to the approach that we will defend in this paper.</p>
</sec>
</sec>
<sec>
<title>3. Argument from Analogy</title>
<p>The most popular decision procedure in the literature on moral uncertainty is Maximize Expected Choice-Worthiness (hereafter &#8220;MEC&#8221;), where the &#8220;choice-worthiness&#8221; of some action according to a moral theory is the strength of the decision maker&#8217;s all-things-considered reasons in favor of performing that action according to that moral theory (<xref ref-type="bibr" rid="B28">MacAskill and Ord 2020: 329</xref>). The &#8220;expected choice-worthiness&#8221; of some action is a weighted average of its choice-worthiness according to each of the theories in which the decision maker has credence, where each theory&#8217;s weight in the average is the decision maker&#8217;s credence in that theory.</p>
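<p>In symbols (our notation, tracking the verbal definition just given):</p>

```latex
% Expected choice-worthiness of an action a (our notation).
% c(T_i)      = the decision maker's credence in moral theory T_i
% CW_{T_i}(a) = the choice-worthiness of a according to T_i
EC(a) = \sum_{i=1}^{n} c(T_i) \, CW_{T_i}(a)
```

<p>MEC then directs the decision maker to perform an action that maximizes <italic>EC</italic>.</p>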
<p>Thus, MEC says that we should handle moral uncertainty in the same way as expected utility theory says that we should handle empirical uncertainty. In fact, several advocates of MEC regard this analogy with standard decision theory as a reason to endorse MEC. For instance, MacAskill et al. (<xref ref-type="bibr" rid="B27">2020: 47-48</xref>) claim that since &#8220;expected utility theory is the standard account of how to handle empirical uncertainty &#8230; maximizing expected choice-worthiness should be the standard account of how to handle moral uncertainty.&#8221; In a similar vein, Christian Tarsney (<xref ref-type="bibr" rid="B49">2021: 172</xref>) maintains that treating moral and empirical uncertainty &#8220;differently when we are not forced to is at least prima facie inelegant and undermotivated&#8221; (<xref ref-type="bibr" rid="B40">likewise Sepielli 2010: 75-78</xref>).</p>
<p>Unfortunately, however, we think that there are some disanalogies between moral and empirical uncertainty, which call into question the argument from analogy in favor of MEC. To be clear: we do not think that these disanalogies constitute a fatal blow to MEC. All we hope to show in this section is that the case isn&#8217;t open and shut, and hence that proposing an alternative decision procedure for handling moral uncertainty isn&#8217;t a non-starter.</p>
<p>Perhaps the most important disanalogy between empirical and moral uncertainty concerns intertheoretic choice-worthiness comparisons. In paradigm cases of decision making under empirical uncertainty, the goodness or badness of each of the various possible outcomes can be measured on some shared evaluative scale. For instance, the value of the different possible outcomes at a casino table can be measured in terms of dollars won or lost. And the value of several different possible plays in gridiron football can be measured in terms of net points won or lost. In the absence of this kind of comparability, it would simply be impossible to calculate the expected value of any particular action.</p>
<p>Unfortunately for MEC, it remains deeply controversial whether it is possible to make intertheoretic choice-worthiness comparisons across several different moral theories. Advocates of MEC such as MacAskill et al. (<xref ref-type="bibr" rid="B27">2020: ch. 5</xref>) have argued that the choice-worthiness scores assigned to options by different moral theories might all be cardinally measurable on a shared &#8220;universal scale&#8221; of choice-worthiness. On the other hand, critics of intertheoretic comparisons have argued that &#8220;it is part of the very nature of a moral system that it presents a way of viewing reality, and that the differing visions of different systems cannot be directly compared&#8221; (<xref ref-type="bibr" rid="B10">Gracely 1996: 328</xref>). For instance, imagine trying to compare absolutist deontology against scalar utilitarianism. These two moral theories don&#8217;t even use the same deontic categories: absolutist deontology sees the world only in terms of permissions and prohibitions, whereas scalar utilitarianism sees the world only in terms of betterness and worseness of outcomes in terms of aggregate utility. It strikes many working in the field as implausible to suppose these two moral theories both rank actions on a shared &#8220;universal scale&#8221; of choice-worthiness.<xref ref-type="fn" rid="n14">14</xref> This is an important disanalogy between empirical and moral uncertainty.<xref ref-type="fn" rid="n15">15</xref></p>
<p>A second problem with the argument from analogy is that selecting a decision procedure that is designed to recommend optimal <italic>gambles</italic> strikes us as prima facie much less appealing in the moral uncertainty case than it is in the descriptive uncertainty case. Instead, we think it is more attractive to adopt an approach to moral uncertainty that is designed to select optimal <italic>compromises</italic> between the moral theories in which one has positive credence. MacAskill himself suggests an alternative analogy between moral uncertainty and social choice, and explicitly emphasizes the idea of compromising:</p>
<disp-quote>
<p>The formal structure of the two problems is very similar. But the two problems are similar on a more intuitive level as well. The problem of social choice is to find the best compromise in a situation where there are many people with competing preferences. The problem of [moral] uncertainty is to find the best compromise in a situation where there are many possible normative theories with competing recommendations about what to do (<xref ref-type="bibr" rid="B25">2016: 977</xref>).</p>
</disp-quote>
<p>We ourselves develop this alternative analogy in &#167;4 of this paper.</p>
<p>A final problem with the argument from analogy is that it torpedoes Proportionality. According to MEC, it is rarely, if ever, correct to split one&#8217;s donations according to Proportionality. In general, MEC implies the permissibility of proportionally diversifying donations only as a matter of coincidence, such as in cases where all of the available options are maximally choice-worthy in expectation, or in certain very particular cases where the returns to donating to every theory&#8217;s favored charities diminish at exactly the right rate.</p>
<p>We can illustrate these claims by attempting to apply MEC to <italic>Torn Up</italic>. Let us suppose, <italic>arguendo</italic>, that we can make intertheoretic choiceworthiness comparisons between the two moral theories in which Jane has credence. For sake of concreteness, suppose that Jane has 60% credence in a moral theory according to which for each dollar that Jane can donate, funding deworming is five times as choiceworthy as funding soup kitchens. On the other hand, Jane has 40% credence in a moral theory according to which funding soup kitchens is five times as choiceworthy as funding deworming. If Jane spends <inline-formula><alternatives><mml:math id="Eq002-mml"><mml:mrow><mml:mi>d</mml:mi><mml:mo>%</mml:mo></mml:mrow></mml:math></alternatives></inline-formula> of her money on deworming, and <inline-formula><alternatives><mml:math id="Eq003-mml"><mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>100</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mi>d</mml:mi></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>%</mml:mo></mml:mrow></mml:math></alternatives></inline-formula> on soup kitchens, then the choiceworthiness of her donations is <inline-formula><alternatives><mml:math id="Eq004-mml"><mml:mrow><mml:mrow><mml:mrow><mml:mn>5</mml:mn><mml:mi>d</mml:mi></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>100</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mi>d</mml:mi></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mn>100</mml:mn><mml:mo>+</mml:mo><mml:mrow><mml:mn>4</mml:mn><mml:mi>d</mml:mi></mml:mrow></mml:mrow></mml:mrow></mml:math></alternatives></inline-formula> according to the first moral theory, and <inline-formula><alternatives><mml:math id="Eq005-mml"><mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mo>+</mml:mo><mml:mrow><mml:mn>5</mml:mn><mml:mo lspace="0.222em" rspace="0.222em">&#x00D7;</mml:mo><mml:mrow><mml:mo 
stretchy='false'>(</mml:mo><mml:mrow><mml:mn>100</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mi>d</mml:mi></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mn>500</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mrow><mml:mn>4</mml:mn><mml:mi>d</mml:mi></mml:mrow></mml:mrow></mml:mrow></mml:math></alternatives></inline-formula> according to the second. Intertheoretic expected choiceworthiness as a function of <inline-formula><alternatives><mml:math id="Eq006-mml"><mml:mi>d</mml:mi></mml:math></alternatives></inline-formula> is therefore <inline-formula><alternatives><mml:math id="Eq007-mml"><mml:mrow><mml:mrow><mml:mrow><mml:mn>0.6</mml:mn><mml:mo lspace="0.222em" rspace="0.222em">&#x00D7;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>100</mml:mn><mml:mo>+</mml:mo><mml:mrow><mml:mn>4</mml:mn><mml:mi>d</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mn>0.4</mml:mn><mml:mo lspace="0.222em" rspace="0.222em">&#x00D7;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>500</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mrow><mml:mn>4</mml:mn><mml:mi>d</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mn>260</mml:mn><mml:mo>+</mml:mo><mml:mrow><mml:mn>0.8</mml:mn><mml:mi>d</mml:mi></mml:mrow></mml:mrow></mml:mrow></mml:math></alternatives></inline-formula>, as illustrated in <xref ref-type="fig" rid="F1">Figure 1</xref>.</p>
<fig id="F1">
<caption>
<p><bold>Figure 1:</bold> MEC in <italic>Torn Up</italic>.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="ergo-7967_kaczmarek-g1.png"/>
</fig>
<p>Expected choiceworthiness is clearly maximised when <inline-formula><alternatives><mml:math id="Eq008-mml"><mml:mi>d</mml:mi></mml:math></alternatives></inline-formula> is as large as possible. So, under these assumptions, MEC implies&#8212;contra Proportionality&#8212;that it is most appropriate for Jane to donate all of her fortune to deworming. Although this particular result is an artefact of our simple assumptions about Jane&#8217;s credences and her moral theories&#8217; choiceworthiness functions, working through this example illustrates that there is little reason to think that MEC will support Proportionality (or anything like it).</p>
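<p>The calculation above can be verified with a short script (a sketch; the function and variable names are ours, not from the paper):</p>

```python
# Verify MEC's verdict in Torn Up (sketch; names are illustrative).
# Theory 1 (credence 0.6): each dollar to deworming is 5x as choiceworthy
# as a dollar to soup kitchens; Theory 2 (credence 0.4) reverses this.

def expected_choiceworthiness(d):
    """Expected choiceworthiness when d% of the fortune funds deworming."""
    cw1 = 5 * d + (100 - d)        # theory 1: 100 + 4d
    cw2 = d + 5 * (100 - d)        # theory 2: 500 - 4d
    return 0.6 * cw1 + 0.4 * cw2   # expectation: 260 + 0.8d

# The expectation rises linearly in d, so MEC sends every dollar to
# deworming (d = 100) rather than splitting proportionally.
best_d = max(range(101), key=expected_choiceworthiness)
print(best_d, expected_choiceworthiness(best_d))  # 100 340.0
```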
<p>In this section, we have argued that the case in favour of MEC is far from open and shut. The argument from analogy in favour of MEC is in tension with Proportionality, is less prima facie plausible than MEC&#8217;s advocates assert, and papers over a disanalogy between moral and empirical uncertainty concerning the plausibility of value comparisons. In the next section, we motivate an alternative <italic>bargaining</italic> approach to handling decision making under moral uncertainty.</p>
</sec>
<sec>
<title>4. Moral Marketplace</title>
<p>In human social settings, sometimes we agree with each other about what to do, and sometimes we don&#8217;t. Where we agree, little more needs to be said: we act. Where we disagree, we often bargain to see if we can find an acceptable compromise. Indeed, bargaining is a skill we develop almost before we can walk. We begin with requests to our parents: requests they&#8217;ll often grant in exchange for us doing what they want&#8212;such as eating our greens and sitting quietly in church. Negotiation continues apace thereafter. We learn how to compromise with sweethearts, friends and enemies. Industry titans and politicians alike wheel and deal. Striking a bargain is fundamental to our lives as social beings, something so familiar we sometimes barely recognize that we are doing it. Sometimes, bargaining does not work, or is impractical. In some of these contexts, we may turn to voting and imposing decisions on others they do not want. <italic>In extremis</italic>, we resort to force.</p>
<p>Given the ubiquity of bargaining as a means of resolving disagreements, it is perhaps surprising that, whilst voting theory is widely referenced in the moral uncertainty literature, bargaining is not. In the paper comparing moral uncertainty to social choice that we quoted from in &#167;3 above, MacAskill adopts a voting-theoretic approach. Similarly, an early, but still-underdeveloped proposal for moral uncertainty was the &#8220;moral parliament,&#8221; where theory representatives vote on what decisions to take.<xref ref-type="fn" rid="n16">16</xref> Note, however, that bargaining may equally fit the bill for finding a compromise between normative theories. We&#8217;ve already hinted at its potential in &#167;2 of this paper.</p>
<p>The aim of this section is to develop a bargaining approach to moral uncertainty. To our knowledge, Hilary Greaves and Owen Cotton-Barratt (2023) are the only philosophers to have previously discussed bargaining-theoretic approaches to moral uncertainty. They propose (but stop short of endorsing) a bargaining approach that is inspired pretty directly by the mathematical formalisms of John Nash&#8217;s influential theory of bargaining. By contrast, although our approach will utilise Nash&#8217;s formal bargaining solution, it is inspired by something else: not parliaments, but instead the <italic>marketplace</italic>. Hence, we will call our approach the &#8220;Moral Marketplace Theory&#8221; (&#8220;MMT&#8221;).</p>
<p>MMT is inspired by the kind of market interactions and trades that are made between human agents when they have well-defined initial entitlements to resources. According to MMT, how a morally uncertain decision maker should act in any given choice situation is determined by a certain economic model of that choice situation. Each moral theory is modelled as an economic agent, who is endowed with a share of the decision maker&#8217;s resources proportional to the decision maker&#8217;s credence in the corresponding moral theory. This representative of the theory is modelled as the owner of these resources and can use those resources however they see fit. Each representative&#8217;s preference ranking over how resources are used overall is identical to the choice-worthiness ranking of the moral theory that it represents. To this end, we will restrict our attention to cases where all of the theories in which our uncertain decision makers have credence exhibit a certain kind of cardinal structure. More specifically, we only consider theories that can be represented by <italic>interval-scale</italic> choice-worthiness functions.<xref ref-type="fn" rid="n17">17</xref></p>
<p>Representatives can make deals with each other, but they don&#8217;t have to. Because they have the right to their own resources, a pair of representatives will make a deal <italic>iff</italic> that deal would be mutually beneficial&#8212;that is, if both believe the bargain is better for them than acting unilaterally. An important issue, and one we will come back to, is determining the appropriate &#8220;disagreement point&#8221; in different contexts: what happens if the representatives cannot agree.</p>
<p>Although MMT will have certain features in common with Greaves and Cotton-Barratt&#8217;s approach (in particular the use of Nash&#8217;s bargaining solution),<xref ref-type="fn" rid="n18">18</xref> it also differs from their approach in several important respects, as we note in &#167;6 below. Perhaps most notably, MMT&#8217;s disagreement point will differ from any of those considered by Greaves and Cotton-Barratt, and it is this disagreement point that will allow MMT to honour Proportionality.<xref ref-type="fn" rid="n19">19</xref></p>
<p>Here&#8217;s how we proceed in the remainder of this paper. We start by explaining what MMT recommends in cases where resources are divisible, for instance allocating resources to charity. Along the way, we highlight how MMT has several attractive features: it vindicates Proportionality in certain cases (&#167;4.1), delivers the desired verdict in <italic>Procreation</italic> (&#167;4.2), does not require intertheoretic comparison of choice-worthiness (&#167;4.3) and avoids the challenges of both fanaticism (&#167;4.4) and demandingness (&#167;4.5). From there, we move on to consider cases where resources are non-divisible (&#167;5) before addressing the main challenge facing proponents of bargaining approaches to handling moral uncertainty: the <italic>problem of small worlds</italic> (&#167;6). We then conclude.</p>
<sec>
<title><italic>4.1. Divisible Resources, No Bargains Available</italic></title>
<p>For a simple illustration of these ideas, consider the <italic>Torn Up</italic> thought experiment from &#167;1 of this paper. In <italic>Torn Up</italic>, Jane is torn between two moral theories, one of which recommends donating to a deworming initiative, and the other of which recommends donating to soup kitchens in her hometown.</p>
<p>MMT suggests that Jane should model these two theories as two economic agents, each of whom is initially endowed with a share of Jane&#8217;s fortune proportional to Jane&#8217;s credence in the corresponding theory. If these two representatives wished to, they could make contracts with each other. And they can also choose to spend their endowments in any of the ways open to Jane. As it happens, in <italic>Torn Up</italic> these two representatives do not have anything to gain by making contracts with each other. The first representative just wants to donate all of her endowment to the deworming initiative, and the second representative just wants to donate all of her endowment to local soup kitchens. Hence, according to MMT, Jane should split her donations proportionally between deworming pills and soup kitchens.</p>
<p>As this discussion of <italic>Torn Up</italic> makes plain, MMT &#8220;builds in&#8221; Proportionality as, in some sense, the &#8220;default response&#8221; to cases of moral uncertainty in which the decision maker is deciding how to distribute some continuously divisible resource. MMT deviates from Proportionality only in cases where some alternative resource allocation is a Pareto improvement over the proportional one (we will discuss one such case in &#167;4.2 below). Moreover, MMT supplies us with a principled reason for this result. According to MMT, each theory&#8217;s representative is initially entitled to a proportional share of the decision maker&#8217;s resources. Thus, each representative always has the option to spend its share of these resources in the manner recommended by the theory that it represents. Each representative will only agree to a contract if it represents an improvement over this proportional response.</p>
</sec>
<sec>
<title><italic>4.2. Divisible Resources, a Successful Contract</italic></title>
<p>In some cases, MMT will deviate from Proportionality. For instance, consider the <italic>Procreation</italic> case introduced in &#167;2.2. If Jane&#8217;s theory representatives each spent their endowment on the initiative that they regard as optimal, then the total utilitarian representative would donate her endowment to the fertility initiative, and the anti-natalist representative would donate her endowment to the contraception initiative. Each representative regards this overall use of Jane&#8217;s resources as no better than Jane doing nothing at all.</p>
<p>By contrast, consider a possible outcome in which Jane&#8217;s total utilitarian and anti-natalist representatives both agree to donate their endowments to the Against Malaria Foundation. Each of these two representatives regards this outcome as better than doing nothing. Hence, each representative regards this outcome as better than the outcome in which each representative spends their endowment on the initiative that they regard as optimal. It is in each representative&#8217;s interests to enter into a contract with the other representative stipulating that they will both donate their endowments to the Against Malaria Foundation.</p>
<p>In cases like this, where an agent&#8217;s theory representatives stand to gain from forming contracts with each other, MMT will need to provide a precise bargaining &#8220;solution concept&#8221; to tell us which contract these representatives will agree to. Perhaps the most well-known such solution concept&#8212;and the one that we will adopt in this paper&#8212;is the <italic>Nash bargaining solution</italic>.</p>
<p>The Nash bargaining solution is the bargaining solution that uniquely satisfies Nash&#8217;s (<xref ref-type="bibr" rid="B31">1950</xref>) four plausible axioms on the outcomes of good-faith (referred to as &#8220;cooperative&#8221;) bargaining procedures:<xref ref-type="fn" rid="n20">20</xref></p>
<list list-type="order">
<list-item><p><italic>Scale invariance:</italic> any positive linear rescaling of any bargainers&#8217; utility functions should not alter the bargaining solution.</p></list-item>
<list-item><p><italic>Pareto optimality:</italic> no feasible alternatives should Pareto dominate the bargaining solution. In other words: there should not exist any feasible alternative to the bargaining solution that is both (a) no worse than the solution for every bargainer, and (b) better than the solution for at least one bargainer.</p></list-item>
<list-item><p><italic>Symmetry:</italic> if every bargainer has the same utility function and disagreement utility, then every bargainer should have the same utility in the bargaining solution.</p></list-item>
<list-item><p><italic>Independence of Irrelevant Alternatives:</italic> eliminating an element from the set of feasible outcomes should only make a difference to the bargaining solution if the eliminated outcome would itself have been selected as the bargaining solution had it not been eliminated.</p></list-item>
</list>
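<p>Two of these axioms lend themselves to quick numerical spot-checks. The option set and utility gains below are our own toy numbers, not drawn from the paper:</p>

```python
# Numerical spot-checks of two Nash axioms on a toy option set
# (illustrative utilities; gains are measured over the disagreement point).
options = {"A": (3, 5), "B": (6, 2), "C": (1, 9)}

def nash_best(opts, scale2=1.0):
    """Option maximizing the Nash product, with bargainer 2 optionally rescaled."""
    return max(opts, key=lambda o: opts[o][0] * (scale2 * opts[o][1]))

# Scale invariance: rescaling bargainer 2's utilities by 10 rescales every
# Nash product by 10 and so leaves the chosen option unchanged.
assert nash_best(options) == nash_best(options, scale2=10.0)

# Independence of Irrelevant Alternatives: dropping the unchosen option "C"
# does not change the solution.
smaller = {o: u for o, u in options.items() if o != "C"}
assert nash_best(options) == nash_best(smaller)

print(nash_best(options))  # prints A (product 15 beats 12 and 9)
```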
<p>In a case like <italic>Procreation</italic> with only two representatives, an act A is a Nash bargaining solution <italic>iff</italic> setting <inline-formula><alternatives><mml:math id="Eq009-mml"><mml:mrow><mml:mi>a</mml:mi><mml:mo>=</mml:mo><mml:mtext>A</mml:mtext></mml:mrow></mml:math></alternatives></inline-formula> maximizes</p>
<disp-formula><alternatives><mml:math id="Eq010-mml"><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mi>C</mml:mi><mml:msub><mml:mi>W</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>a</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mi>C</mml:mi><mml:msub><mml:mi>W</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>d</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:mo stretchy="true">)</mml:mo><mml:mo lspace="0em" rspace="0.222em">&#x00D7;</mml:mo><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mi>C</mml:mi><mml:msub><mml:mi>W</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>a</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mi>C</mml:mi><mml:msub><mml:mi>W</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>d</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:math></alternatives></disp-formula>
<p>where <inline-formula><alternatives><mml:math id="Eq011-mml"><mml:mrow><mml:mi>C</mml:mi><mml:msub><mml:mi>W</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:math></alternatives></inline-formula> is an interval-scale choice-worthiness function for the first theory, <inline-formula><alternatives><mml:math id="Eq012-mml"><mml:mrow><mml:mi>C</mml:mi><mml:msub><mml:mi>W</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:math></alternatives></inline-formula> is an interval-scale choice-worthiness function for the second theory and <inline-formula><alternatives><mml:math id="Eq013-mml"><mml:mi>d</mml:mi></mml:math></alternatives></inline-formula> (i.e., the &#8220;disagreement point&#8221;) is what will happen if the representatives cannot agree to a contract.<xref ref-type="fn" rid="n21">21</xref> In other words, <inline-formula><alternatives><mml:math id="Eq014-mml"><mml:mi>d</mml:mi></mml:math></alternatives></inline-formula> is the proportional outcome in which each representative uses its endowment in the manner recommended by the theory that it represents.</p>
<p>One attractive feature of the Nash bargaining solution is that (all else being equal) it favors equal divisions of the choice-worthiness gains to be had from trade between theory representatives. For example, suppose that two bargainers are choosing between an option A that gives each bargainer a utility gain of 4 over the disagreement point and another option B that gives the bargainers utility gains over the disagreement point of 2 and 6 respectively. Under option A, the value of the Nash maximand is <inline-formula><alternatives><mml:math id="Eq015-mml"><mml:mrow><mml:mrow><mml:mn>4</mml:mn><mml:mo lspace="0.222em" rspace="0.222em">&#x00D7;</mml:mo><mml:mn>4</mml:mn></mml:mrow><mml:mo>=</mml:mo><mml:mn>16</mml:mn></mml:mrow></mml:math></alternatives></inline-formula>, whereas under option B, the value of the Nash maximand is <inline-formula><alternatives><mml:math id="Eq016-mml"><mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:mo lspace="0.222em" rspace="0.222em">&#x00D7;</mml:mo><mml:mn>6</mml:mn></mml:mrow><mml:mo>=</mml:mo><mml:mn>12</mml:mn></mml:mrow></mml:math></alternatives></inline-formula>. Hence, as desired, the Nash bargaining approach prefers option A over option B.</p>
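<p>These two products can be computed directly (the gains 4, 4 and 2, 6 are the illustrative numbers from the text):</p>

```python
# Nash maximand: the product of each bargainer's gain over the
# disagreement point, for the two options discussed above.
def nash_maximand(gains):
    product = 1
    for g in gains:
        product *= g
    return product

option_a = (4, 4)  # each bargainer gains 4 over the disagreement point
option_b = (2, 6)  # the same total gain of 8, split unequally

print(nash_maximand(option_a))  # 16
print(nash_maximand(option_b))  # 12, so option A is preferred
```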
<p>In the case of <italic>Procreation</italic>, <inline-formula><alternatives><mml:math id="Eq017-mml"><mml:mrow><mml:mi>C</mml:mi><mml:msub><mml:mi>W</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:math></alternatives></inline-formula> will be total utilitarianism&#8217;s choice-worthiness function, <inline-formula><alternatives><mml:math id="Eq018-mml"><mml:mrow><mml:mi>C</mml:mi><mml:msub><mml:mi>W</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:math></alternatives></inline-formula> will be anti-natalism&#8217;s choice-worthiness function and <inline-formula><alternatives><mml:math id="Eq019-mml"><mml:mi>d</mml:mi></mml:math></alternatives></inline-formula> will be the outcome in which half of Jane&#8217;s fortune is donated to the fertility initiative, and half of Jane&#8217;s fortune is donated to the contraception initiative. Let <inline-formula><alternatives><mml:math id="Eq020-mml"><mml:mi>f</mml:mi></mml:math></alternatives></inline-formula>, <inline-formula><alternatives><mml:math id="Eq021-mml"><mml:mi>c</mml:mi></mml:math></alternatives></inline-formula> and <inline-formula><alternatives><mml:math id="Eq022-mml"><mml:mi>m</mml:mi></mml:math></alternatives></inline-formula> denote the proportions of her fortune that Jane will donate to the fertility, contraception and malaria initiatives respectively.</p>
<p>Since Jane&#8217;s wealth is small relative to global spending on fertility, contraception and malaria, it is reasonable to assume that if Jane increases her spending on one of those charities by some factor <inline-formula><alternatives><mml:math id="Eq023-mml"><mml:mi>k</mml:mi></mml:math></alternatives></inline-formula>, this will increase Jane&#8217;s impact in promoting the goals of that charity by the same factor. Thus, we can assume that <inline-formula><alternatives><mml:math id="Eq024-mml"><mml:mrow><mml:mi>C</mml:mi><mml:msub><mml:mi>W</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:math></alternatives></inline-formula> and <inline-formula><alternatives><mml:math id="Eq025-mml"><mml:mrow><mml:mi>C</mml:mi><mml:msub><mml:mi>W</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:math></alternatives></inline-formula> are <italic>linear</italic> in <inline-formula><alternatives><mml:math id="Eq026-mml"><mml:mi>f</mml:mi></mml:math></alternatives></inline-formula>, <inline-formula><alternatives><mml:math id="Eq027-mml"><mml:mi>c</mml:mi></mml:math></alternatives></inline-formula> and <inline-formula><alternatives><mml:math id="Eq028-mml"><mml:mi>m</mml:mi></mml:math></alternatives></inline-formula>. Together with our original specifications in <italic>Procreation</italic>, this suggests something like the following specifications for <inline-formula><alternatives><mml:math id="Eq029-mml"><mml:mrow><mml:mi>C</mml:mi><mml:msub><mml:mi>W</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:math></alternatives></inline-formula> and <inline-formula><alternatives><mml:math id="Eq030-mml"><mml:mrow><mml:mi>C</mml:mi><mml:msub><mml:mi>W</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:math></alternatives></inline-formula>:</p>
<disp-formula><alternatives><mml:math id="Eq031-mml"><mml:mrow><mml:mrow><mml:mi>C</mml:mi><mml:msub><mml:mi>W</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mrow><mml:mrow><mml:mn>10</mml:mn><mml:mi>f</mml:mi></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mrow><mml:mn>10</mml:mn><mml:mi>c</mml:mi></mml:mrow></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mn>3</mml:mn><mml:mi>m</mml:mi></mml:mrow></mml:mrow></mml:mrow></mml:math></alternatives></disp-formula>
<disp-formula><alternatives><mml:math id="Eq032-mml"><mml:mrow><mml:mrow><mml:mi>C</mml:mi><mml:msub><mml:mi>W</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mrow><mml:mn>10</mml:mn><mml:mi>f</mml:mi></mml:mrow></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mn>10</mml:mn><mml:mi>c</mml:mi></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mn>3</mml:mn><mml:mi>m</mml:mi></mml:mrow></mml:mrow></mml:mrow></mml:math></alternatives></disp-formula>
<p>The disagreement point <inline-formula><alternatives><mml:math id="Eq033-mml"><mml:mi>d</mml:mi></mml:math></alternatives></inline-formula> in <italic>Procreation</italic> corresponds to <inline-formula><alternatives><mml:math id="Eq034-mml"><mml:mrow><mml:mo stretchy='false'>&#x27E8;</mml:mo><mml:mrow><mml:mrow><mml:mi>f</mml:mi><mml:mo>=</mml:mo><mml:mn>0.5</mml:mn></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mrow><mml:mi>c</mml:mi><mml:mo>=</mml:mo><mml:mn>0.5</mml:mn></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mi>m</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:mrow></mml:mrow><mml:mo stretchy='false'>&#x27E9;</mml:mo></mml:mrow></mml:math></alternatives></inline-formula>. Hence, <inline-formula><alternatives><mml:math id="Eq035-mml"><mml:mrow><mml:mrow><mml:mi>C</mml:mi><mml:msub><mml:mi>W</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>d</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mi>C</mml:mi><mml:msub><mml:mi>W</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>d</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></alternatives></inline-formula>. Thus, the Nash bargaining solution is the choice of <inline-formula><alternatives><mml:math id="Eq036-mml"><mml:mi>f</mml:mi></mml:math></alternatives></inline-formula>, <inline-formula><alternatives><mml:math id="Eq037-mml"><mml:mi>c</mml:mi></mml:math></alternatives></inline-formula> and <inline-formula><alternatives><mml:math id="Eq038-mml"><mml:mi>m</mml:mi></mml:math></alternatives></inline-formula> that maximizes</p>
<disp-formula id="FD1"><label>(1)</label><alternatives><mml:math id="Eq039-mml"><mml:mrow><mml:mrow><mml:mrow><mml:mrow><mml:mi>C</mml:mi><mml:msub><mml:mi>W</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo lspace="0.222em" rspace="0.222em">&#x00D7;</mml:mo><mml:mi>C</mml:mi></mml:mrow><mml:msub><mml:mi>W</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mrow><mml:mrow><mml:mn>10</mml:mn><mml:mi>f</mml:mi></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mrow><mml:mn>10</mml:mn><mml:mi>c</mml:mi></mml:mrow></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mn>3</mml:mn><mml:mi>m</mml:mi></mml:mrow></mml:mrow><mml:mo rspace="0.055em" stretchy='false'>)</mml:mo></mml:mrow><mml:mo rspace="0.222em">&#x00D7;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mrow><mml:mn>10</mml:mn><mml:mi>f</mml:mi></mml:mrow></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mn>10</mml:mn><mml:mi>c</mml:mi></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mn>3</mml:mn><mml:mi>m</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:math></alternatives></disp-formula>
<p>As desired, this solution is <inline-formula><alternatives><mml:math id="Eq040-mml"><mml:mrow><mml:mo stretchy='false'>&#x27E8;</mml:mo><mml:mrow><mml:mrow><mml:mi>f</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mrow><mml:mi>c</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mi>m</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:mrow></mml:mrow><mml:mo stretchy='false'>&#x27E9;</mml:mo></mml:mrow></mml:math></alternatives></inline-formula>. MMT recommends that Jane should donate everything to the malaria initiative.</p>
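<p>A brute-force check over allocations reproduces this solution (a sketch; we assume Jane donates her whole fortune, so the three proportions sum to one):</p>

```python
# Grid search for the Nash bargaining solution in Procreation
# (sketch; f, c, m are the shares going to fertility, contraception, malaria).
def cw1(f, c, m):  # total utilitarianism's choice-worthiness
    return 10 * f - 10 * c + 3 * m

def cw2(f, c, m):  # anti-natalism's choice-worthiness
    return -10 * f + 10 * c + 3 * m

# Disagreement point: the proportional split f = c = 0.5, m = 0.
d1, d2 = cw1(0.5, 0.5, 0), cw2(0.5, 0.5, 0)  # both equal 0

def nash_product(f, c, m):
    return (cw1(f, c, m) - d1) * (cw2(f, c, m) - d2)

STEPS = 100  # grid resolution of 1% of the fortune
best = max(
    ((f / STEPS, c / STEPS, (STEPS - f - c) / STEPS)
     for f in range(STEPS + 1)
     for c in range(STEPS + 1 - f)),
    key=lambda fcm: nash_product(*fcm),
)
print(best)  # (0.0, 0.0, 1.0): donate everything to the malaria initiative
```

<p>Algebraically, with the proportions summing to one the Nash product equals 9m&#178; &#8211; 100(f &#8211; c)&#178;, which is maximized by setting f = c = 0 and m = 1.</p>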
</sec>
<sec>
<title><italic>4.3. MMT Does Not Depend on Intertheoretic Comparisons</italic></title>
<p>Having laid out the mechanics of MMT and illustrated its functioning in a couple of cases, we now mention several of its theoretical advantages, starting with the fact that it does not require intertheoretic unit comparisons.</p>
<p>As we pointed out in &#167;3.1 above, MEC requires us to be able to make intertheoretic choice-worthiness comparisons. In order to decide whether, say, a 10% chance of acting impermissibly according to absolutist deontology is a price worth paying for a 90% chance of acting optimally according to utilitarianism, one needs to be able to place the choice-worthiness values at stake in this decision according to absolutist deontology and utilitarianism on a common scale.</p>
<p>By contrast, the bargaining approach does not require these kinds of intertheoretic comparisons. Two agents can bargain with each other without having to first establish some kind of exchange rate between their utility functions. According to many (if not all) formal models of interpersonal bargaining, all that is required for optimal bargaining is knowing every bargainer&#8217;s preference structure over potential agreements.<xref ref-type="fn" rid="n22">22</xref> Trying to compare different bargainers&#8217; levels of satisfaction on some shared &#8220;universal scale&#8221; is irrelevant to the bargaining process. Insofar as intertheoretic choice-worthiness comparisons seem dubious (&#167;3.1 above), this is an important advantage of the bargaining approach.</p>
</sec>
<sec>
<title><italic>4.4. MMT Resists Fanaticism</italic></title>
<p>An agent is said to be fanatical if he judges a lottery with a sufficiently tiny probability of an arbitrarily high finite value as better than getting some modest value with certainty (<xref ref-type="bibr" rid="B54">Wilkinson 2022: 447</xref>).<xref ref-type="fn" rid="n23">23</xref> Some approaches to handling moral uncertainty are fanatical about choice-worthiness, including MEC (<xref ref-type="bibr" rid="B1">Baker 2024</xref>). To illustrate, consider:</p>
<disp-quote>
<p><italic>Lives or Souls:</italic> You are supremely confident that you should give to the Against Malaria Foundation, where your donation will save one child&#8217;s life. But you have seen evidence that the Against Hell Foundation reliably converts people to a certain religion, purportedly saving their souls from eternal damnation. You have almost no faith in that religion, but you accept that saving a soul is, on that religion, astronomically more valuable than saving a life.</p>
</disp-quote>
<p>Because the stakes are so much higher on the religious view, even a small credence in that religion&#8217;s truth threatens to take your decision making hostage under conditions of uncertainty. This is irksome.</p>
<p>By contrast, the bargaining approach to moral uncertainty has principled grounds for avoiding fanaticism (<xref ref-type="bibr" rid="B11">Greaves and Cotton-Barratt 2024: &#167;8</xref>). A model that represents the moral theories in which the decision maker has credence as agents bargaining with each other is unlikely to recommend as appropriate an option that one low-credence theory regards as highly choice-worthy, but that every other theory regards as not-at-all choice-worthy. Instead, the bargaining approach is much more likely to recommend an option that every positive-credence theory regards as moderately choice-worthy (if an option like this is available). The outcome of some bargaining process must be an option that is unanimously acceptable to all of the bargainers.</p>
<p>Indeed, it is easy to illustrate mathematically that the Nash bargaining approach described above is not fanatical with respect to choice-worthiness (recall &#167;4.2). For instance, imagine that <inline-formula><alternatives><mml:math id="Eq041-mml"><mml:mrow><mml:mi>C</mml:mi><mml:msub><mml:mi>W</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:math></alternatives></inline-formula> is the same as before (i.e., <inline-formula><alternatives><mml:math id="Eq042-mml"><mml:mrow><mml:mrow><mml:mrow><mml:mn>10</mml:mn><mml:mi>f</mml:mi></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mrow><mml:mn>10</mml:mn><mml:mi>c</mml:mi></mml:mrow></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mn>3</mml:mn><mml:mi>m</mml:mi></mml:mrow></mml:mrow></mml:math></alternatives></inline-formula>), but <inline-formula><alternatives><mml:math id="Eq043-mml"><mml:mrow><mml:mi>C</mml:mi><mml:msub><mml:mi>W</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:math></alternatives></inline-formula> is now scaled up by a factor of ten to <inline-formula><alternatives><mml:math id="Eq044-mml"><mml:mrow><mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mrow><mml:mn>100</mml:mn><mml:mi>f</mml:mi></mml:mrow></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mn>100</mml:mn><mml:mi>c</mml:mi></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mn>30</mml:mn><mml:mi>m</mml:mi></mml:mrow></mml:mrow></mml:math></alternatives></inline-formula>. Then the Nash bargaining solution in <italic>Procreation</italic> will be the choice of <inline-formula><alternatives><mml:math id="Eq045-mml"><mml:mi>f</mml:mi></mml:math></alternatives></inline-formula>, <inline-formula><alternatives><mml:math id="Eq046-mml"><mml:mi>c</mml:mi></mml:math></alternatives></inline-formula> and <inline-formula><alternatives><mml:math id="Eq047-mml"><mml:mi>m</mml:mi></mml:math></alternatives></inline-formula> that maximizes</p>
<disp-formula id="FD2"><label>(2)</label><alternatives><mml:math id="Eq048-mml"><mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mrow><mml:mrow><mml:mn>10</mml:mn><mml:mi>f</mml:mi></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mrow><mml:mn>10</mml:mn><mml:mi>c</mml:mi></mml:mrow></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mn>3</mml:mn><mml:mi>m</mml:mi></mml:mrow></mml:mrow><mml:mo rspace="0.055em" stretchy='false'>)</mml:mo></mml:mrow><mml:mo rspace="0.222em">&#x00D7;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mrow><mml:mn>100</mml:mn><mml:mi>f</mml:mi></mml:mrow></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mn>100</mml:mn><mml:mi>c</mml:mi></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mn>30</mml:mn><mml:mi>m</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></alternatives></disp-formula>
<p>However, this expression can be rewritten as</p>
<disp-formula id="FD3"><label>(3)</label><alternatives><mml:math id="Eq049-mml"><mml:mrow><mml:mn>10</mml:mn><mml:mo lspace="0.222em" rspace="0.222em">&#x00D7;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mrow><mml:mrow><mml:mn>10</mml:mn><mml:mi>f</mml:mi></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mrow><mml:mn>10</mml:mn><mml:mi>c</mml:mi></mml:mrow></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mn>3</mml:mn><mml:mi>m</mml:mi></mml:mrow></mml:mrow><mml:mo rspace="0.055em" stretchy='false'>)</mml:mo></mml:mrow><mml:mo rspace="0.222em">&#x00D7;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mrow><mml:mn>10</mml:mn><mml:mi>f</mml:mi></mml:mrow></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mn>10</mml:mn><mml:mi>c</mml:mi></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mn>3</mml:mn><mml:mi>m</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></alternatives></disp-formula>
<p>which is simply ten times expression (1).</p>
<p>Thus, any choice of <inline-formula><alternatives><mml:math id="Eq050-mml"><mml:mi>f</mml:mi></mml:math></alternatives></inline-formula>, <inline-formula><alternatives><mml:math id="Eq051-mml"><mml:mi>c</mml:mi></mml:math></alternatives></inline-formula> and <inline-formula><alternatives><mml:math id="Eq052-mml"><mml:mi>m</mml:mi></mml:math></alternatives></inline-formula> maximizes expression (2) <italic>iff</italic> it maximizes expression (1). Multiplying <inline-formula><alternatives><mml:math id="Eq053-mml"><mml:mrow><mml:mi>C</mml:mi><mml:msub><mml:mi>W</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:math></alternatives></inline-formula> or <inline-formula><alternatives><mml:math id="Eq054-mml"><mml:mrow><mml:mi>C</mml:mi><mml:msub><mml:mi>W</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:math></alternatives></inline-formula> by any positive number does not alter the Nash bargaining solution.<xref ref-type="fn" rid="n24">24</xref></p>
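The scale-invariance just described can be checked numerically. The following is a minimal sketch (purely illustrative): it assumes, as a simplifying stand-in for the resource constraint in <italic>Procreation</italic>, that f, c and m are nonnegative shares summing to one, and confirms by grid search that expressions (1) and (2) have the same maximizer.

```python
# Choice-worthiness functions from the text: CW1 = 10f - 10c + 3m,
# CW2 = -10f + 10c + 3m, and CW2 scaled up by a factor of ten.
def cw1(f, c, m):
    return 10*f - 10*c + 3*m

def cw2(f, c, m):
    return -10*f + 10*c + 3*m

def cw2_scaled(f, c, m):
    return 10 * cw2(f, c, m)

# Assumed resource constraint (illustrative only): f, c, m >= 0, f + c + m = 1.
grid = [(f/100, c/100, 1 - f/100 - c/100)
        for f in range(101) for c in range(101 - f)]

best_1 = max(grid, key=lambda p: cw1(*p) * cw2(*p))         # maximizes (1)
best_2 = max(grid, key=lambda p: cw1(*p) * cw2_scaled(*p))  # maximizes (2)

# Multiplying CW2 by ten multiplies the whole Nash product by ten
# at every point, so the maximizer is unchanged.
assert best_1 == best_2
```

Since expression (2) is ten times expression (1) everywhere, any positive rescaling of either theory's choice-worthiness function leaves the Nash bargaining solution untouched.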
<p>The dust has yet to settle on whether fanaticism is wrongheaded or right but tough to swallow.<xref ref-type="fn" rid="n25">25</xref> We take it to be a virtue of the bargaining approach that it does not imply fanaticism.</p>
</sec>
<sec>
<title><italic>4.5. MMT Preserves Moral Latitude</italic></title>
<p>Theories like MEC that underwrite &#8216;dominance arguments&#8217; can be highly restrictive on morally uncertain agents. To illustrate, consider:</p>
<disp-quote>
<p><italic>Nets or Wheels:</italic> George receives a letter from the Against Malaria Foundation asking him to save a child&#8217;s life by donating the few thousand dollars that he has squirreled away for his dream car.</p>
</disp-quote>
<p>What should he do?</p>
<p>Suppose that George is torn between two moral theories. He is confident of the truth of some commonsense moral theory, which tells him that, although he is not required to send the money to the Against Malaria Foundation, he is permitted to venture beyond the call of duty. But George isn&#8217;t totally sold; he is somewhat sympathetic to Singer&#8217;s brand of utilitarianism, according to which you should give away most of your wealth to desperately needy strangers (<xref ref-type="bibr" rid="B43">Singer 1972</xref>; <xref ref-type="bibr" rid="B52">Unger 1996</xref>). He can&#8217;t shake the inkling that it&#8217;s seriously wrong to fail to save a child&#8217;s life, and that continuing to drive a beat-up Honda a little longer wouldn&#8217;t be the end of the world.</p>
<p>Notice that the risk of wrongdoing in <italic>Nets or Wheels</italic> is lopsided. Failing to send the money might be gravely wrong, whereas sending the money is sure to be permissible (that is, doing so is permitted by both moral views in which George has some credence). By his own lights, there is no chance of doing something gravely wrong by donating to the Against Malaria Foundation. Since donating dominates buying his dream car, George seems pressed to send all the money. This holds no matter how little stock he puts in that inkling, provided that George assigns it some non-trivial weight.</p>
<p>This is intuitively upsetting. It seems that accounting for moral uncertainty will lead us straight to the Singerian conclusion that we should donate all our (spare) resources to charity.<xref ref-type="fn" rid="n26">26</xref></p>
<p>What does MMT have to say about this case?</p>
<p>To begin, notice that <italic>Nets or Wheels</italic> is unlike the previous cases. The new feature is that one of the representatives is <italic>indifferent</italic> to making a contract with his fellow representatives. We can imagine the Singer representative jumping up and down, imploring the representative of the commonsense moral view to donate his endowment to the Against Malaria Foundation, whilst the commonsense representative looks back, arms folded and unmoved. The Singer representative doesn&#8217;t have anything with which to win over the commonsense representative. Although he would be happy to enter into an agreement like this, such a contract doesn&#8217;t improve his lot over the disagreement point. He would be equally happy to spend his endowment on a new car.</p>
<p>In cases like this, it is natural to stipulate that either of these two possible uses of the commonsense representative&#8217;s endowment would count as appropriate by the lights of MMT. In other words, on the one hand it would be appropriate by the lights of MMT for George to donate all of his (spare) money to the Against Malaria Foundation. On the other hand, it would also be appropriate for George to donate only the fraction of his money corresponding to his credence in Singer&#8217;s utilitarianism, and for him to spend the rest on his dream car.</p>
<p>To us, this seems like a neat compromise. For those, like Singer, with high credence in the view that morality is highly demanding, MMT will also be demanding; for those with low credence in such views, accounting for moral uncertainty via MMT will not make moral life very demanding. In this way, MMT recovers an appropriate degree of moral latitude, and thereby isn&#8217;t guilty of being overly demanding.</p>
</sec>
</sec>
<sec>
<title>5. Non-Divisible Resource Cases</title>
<p>All of the cases that we have considered in the preceding section involve distributing some continuously divisible resource. However, cases like <italic>Trolley</italic> do not have this structure. Instead, in these cases the uncertain agent faces a choice between several different discrete options.</p>
<p>We should understand cases like <italic>Mining Safari</italic> as also having this kind of structure. As <xref ref-type="table" rid="T1">Table 1</xref> describes, there are only four possible choice-worthiness outcomes for Singer and Kagan&#8217;s views in <italic>Mining Safari</italic>. So, individuating options by their choice-worthiness differences, there are only four possible options: block Shaft A; block Shaft B; (partially) block both shafts; block neither.</p>
<p>How should MMT handle cases of this sort? We will consider two possibilities. (While these strike us as two promising ways of extending MMT, they need not exhaust the possibilities.)</p>
<sec>
<title><italic>5.1. Lottery Tickets</italic></title>
<p>The first way to extend MMT so as to cover these discrete-choice decision problems is to stipulate that in each discrete choice situation, each theory representative will be endowed with a &#8216;lottery ticket&#8217; that gives her a chance&#8212;equal to the decision maker&#8217;s credence in the corresponding theory&#8212;of determining what the decision maker does in that choice situation.</p>
<p>Before the winner of the lottery is determined, theory representatives can make contracts with each other governing what they will do if they win the lottery. We stipulate that each representative wishes to maximize her expected utility under uncertainty about which representative will win the decision lottery. For instance, consider the decision lottery in <italic>Mining Safari</italic>. In the absence of agreeing to a contract, the Kagan and Singer representatives would block Shaft A and Shaft B, respectively, if they won the decision lottery. The Singer representative&#8217;s expected utility from a coin toss over these two outcomes is <inline-formula><alternatives><mml:math id="Eq055-mml"><mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>0.5</mml:mn><mml:mo lspace="0.222em" rspace="0.222em">&#x00D7;</mml:mo><mml:mn>10</mml:mn></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>0.5</mml:mn><mml:mo lspace="0.222em" rspace="0.222em">&#x00D7;</mml:mo><mml:mn>20</mml:mn></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mn>15</mml:mn></mml:mrow></mml:math></alternatives></inline-formula>, and the Kagan representative&#8217;s expected utility is <inline-formula><alternatives><mml:math id="Eq056-mml"><mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>0.5</mml:mn><mml:mo lspace="0.222em" rspace="0.222em">&#x00D7;</mml:mo><mml:mn>10</mml:mn></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>0.5</mml:mn><mml:mo lspace="0.222em" rspace="0.222em">&#x00D7;</mml:mo><mml:mn>5</mml:mn></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mn>7.5</mml:mn></mml:mrow></mml:math></alternatives></inline-formula>. 
However, if the representatives agreed not to block either shaft regardless of who wins the lottery, then the Singer and Kagan representatives&#8217; utilities would be 16 and 8 respectively (see <xref ref-type="table" rid="T1">Table 1</xref>). Each representative regards this contract as an improvement over the disagreement point. In fact, this contract is the Nash bargaining solution in this scenario. MMT recommends that you should not block either shaft in <italic>Mining Safari</italic>.<xref ref-type="fn" rid="n27">27</xref></p>
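The arithmetic behind this verdict can be laid out in a short sketch (illustrative only). The payoffs are the ones quoted above from Table 1; the &#8216;(partially) block both shafts&#8217; option is left out because its payoffs are not restated here.

```python
# Payoffs (Singer representative, Kagan representative) for the pure
# options in Mining Safari whose values are quoted in the text.
utils = {
    "block Shaft A": (10, 10),
    "block Shaft B": (20, 5),
    "block neither": (16, 8),
}

# Disagreement point: a 50-50 decision lottery in which each winner
# enacts her own favourite option (Kagan blocks A, Singer blocks B).
d_singer = 0.5 * utils["block Shaft A"][0] + 0.5 * utils["block Shaft B"][0]
d_kagan = 0.5 * utils["block Shaft A"][1] + 0.5 * utils["block Shaft B"][1]
assert (d_singer, d_kagan) == (15.0, 7.5)

# A contract fixes one option regardless of who wins the lottery. It is
# feasible only if both representatives weakly prefer it to the
# disagreement point; among feasible contracts, Nash bargaining
# maximizes the product of gains over that point.
def nash_product(option):
    s, k = utils[option]
    return (s - d_singer) * (k - d_kagan)

feasible = [o for o in utils
            if utils[o][0] >= d_singer and utils[o][1] >= d_kagan]
solution = max(feasible, key=nash_product)
assert solution == "block neither"
```

Only &#8216;block neither&#8217; improves both representatives&#8217; expected utilities (16 &gt; 15 and 8 &gt; 7.5), so it is the unique feasible contract and hence the Nash bargaining solution.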
<p>What about <italic>Trolley</italic>? In this case, the decision maker&#8217;s two representatives do not have anything to gain by making contracts with each other. The first theory representative just wants to maximize the probability that the decision maker shoves George into the trolley&#8217;s path, and the second theory representative just wants to minimize this probability. Thus, in MMT&#8217;s economic model of <italic>Trolley</italic>, the representatives will not agree to any contracts. Under this decision lottery without contracts there is a 10% chance that the decision maker will shove George into the trolley&#8217;s path, and a 90% chance that she will do nothing.</p>
<p>Randomizing 10-90 over whether or not you should shove George into the path of the trolley will leave a bad taste in some people&#8217;s mouths, such as Newberry &amp; Ord (<xref ref-type="bibr" rid="B32">2021: 8</xref>). They will reply: which actions are super-subjectively permissible when deciding under conditions of moral uncertainty should presumably not depend on the outcomes of random processes in this manner. Moreover, they could argue that it is implausible to suppose that if the lottery resolves in favour of the theory in which the decision maker has 10% credence, then it would be appropriate to kill George (despite the fact that the decision maker has 90% credence in this action being morally reprehensible). In our experience, different people can have sharply divergent intuitions on the plausibility of randomizing in cases like <italic>Trolley</italic>. (For instance, one of the present authors regards it as intuitively plausible; but another of us regards it as wildly unattractive.)</p>
</sec>
<sec>
<title><italic>5.2. Eliminating Randomization</italic></title>
<p>A second way to extend MMT to cover discrete-choice decision problems is to concede that in cases like <italic>Trolley</italic>, MMT&#8217;s appropriateness prescriptions should come apart somewhat from the output of MMT&#8217;s economic model. In cases like <italic>Mining Safari</italic> where the decision maker&#8217;s representatives in MMT&#8217;s economic model agree to a contract which selects a determinate course of action regardless of which representative wins the decision lottery, this extension of MMT recommends that the decision maker should follow this course of action. However, in cases like <italic>Trolley</italic> where the representatives in MMT&#8217;s economic model cannot agree to a contract which selects a determinate course of action regardless of who wins the decision lottery, this second extension of MMT instead recommends that the decision maker should simply perform the course of action that has the greatest probability of being selected by the representatives in MMT&#8217;s economic model after the decision lottery occurs. For instance, in <italic>Trolley</italic>, the decision maker should <italic>not</italic> push George onto the tracks. In cases where two or more options are tied for the greatest probability, all of the tied options are super-subjectively permissible according to this second extension of MMT.</p>
<p>Under this extension of MMT, the representatives of each moral theory in which one has credence will continue to bargain within our economic model <italic>as if</italic> whoever wins the lottery will in fact get to decide (subject, of course, to any contracts which she has agreed to) which option is chosen by the decision maker. However, this extension&#8217;s prescriptions <italic>diverge</italic> from the output of the economic model in cases where the lottery still leaves something to chance. The decision maker should select the option that has the greatest probability of being selected by the lottery given the contracts that have been negotiated by the theory-representatives.<xref ref-type="fn" rid="n28">28</xref></p>
<p>One might worry that this decision to partially divorce MMT&#8217;s recommendations from the outputs of its economic model is ad hoc, or theoretically undermotivated. On the contrary, however, we think that this decision can be theoretically motivated by thinking about the fundamental purpose of a theory of appropriate action under conditions of moral uncertainty. The purpose of a theory like this is to recommend some concrete plan of action to the morally uncertain decision maker. The first extension of MMT does not fulfill this purpose, unlike the second.</p>
<p>We leave it open as to which extension is more plausible. Each has its merits, and opinions will doubtless be mixed. For our purposes here, it is enough to have shown that MMT <italic>can be</italic> extended to handle cases where the uncertain agent faces a choice between several different discrete options.</p>
</sec>
</sec>
<sec>
<title>6. Problem of Small Worlds</title>
<p>In this section, we address the problem of small worlds.</p>
<p>Using a Nash bargaining model to handle the problem of moral uncertainty has previously been discussed by Greaves &amp; Cotton-Barratt (<xref ref-type="bibr" rid="B11">2024</xref>). They tentatively conclude that &#8220;while the bargaining-theoretic approach is not obviously superior to an MEC approach&#8212;contra, perhaps, the hopes of many of the advocates of a &#8216;parliamentary model of moral uncertainty&#8217;&#8212;neither is it clearly inferior&#8221; (<xref ref-type="bibr" rid="B11">2024: 166</xref>).</p>
<p>There are several important differences between MMT and Greaves and Cotton-Barratt&#8217;s proposals. For instance, MMT&#8217;s disagreement point in resource division cases is different from any of the potential disagreement points proposed by Greaves &amp; Cotton-Barratt (<xref ref-type="bibr" rid="B11">2024: 140</xref>).<xref ref-type="fn" rid="n29">29</xref></p>
<p>This difference can be traced to the fact that MMT is specifically inspired by the kind of bargaining that takes place in free markets with well-defined property rights over goods and labor (see &#167;4 above). At the disagreement point, each representative is endowed with her fair share of property rights over the decision maker&#8217;s resources.<xref ref-type="fn" rid="n30">30</xref></p>
<p>Hence, some of the objections Greaves and Cotton-Barratt consider in their paper are inapplicable to MMT. For instance, Greaves &amp; Cotton-Barratt (<xref ref-type="bibr" rid="B11">2024: 152-4</xref>) worry that the proposals they consider will sometimes recommend randomizing over several possible options instead of choosing a particular course of action with certainty. However, as already noted, the extension of MMT that we considered in &#167;5.2 avoids this problem.</p>
<p>Nonetheless, one of Greaves and Cotton-Barratt&#8217;s main objections to Nash bargaining approaches is relevant to MMT. They present readers with the following scenario:</p>
<disp-quote>
<p><italic>Two Binary Choices:</italic> Jenny faces two independent binary choices. She can either kill one person, or let two die; and she can either donate a fixed philanthropic budget of $1m to support homeless people, or to mitigate extinction risk. Her credence is split equally between two moral theories. Jenny has 50% credence in a total utilitarian moral theory <inline-formula><alternatives><mml:math id="Eq057-mml"><mml:mrow><mml:msub><mml:mi>T</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:math></alternatives></inline-formula>, according to which it is (relatively speaking) slightly better to kill one than to let two die, but much better to direct the resources to extinction risk mitigation than to homeless support. And she has 50% credence in a common-sense moral theory <inline-formula><alternatives><mml:math id="Eq058-mml"><mml:mrow><mml:msub><mml:mi>T</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:math></alternatives></inline-formula>, according to which it is (relatively speaking) slightly better to direct resources to homeless support than to extinction risk mitigation, but much worse to kill one than to let two die (<xref ref-type="bibr" rid="B11">2024: 150</xref>).</p>
</disp-quote>
<p>One way of modelling <italic>Two Binary Choices</italic> is to treat it as a single choice situation, in which the agent has four options:</p>
<list list-type="order">
<list-item><p>Kill one and support the homeless</p></list-item>
<list-item><p>Kill one and fund extinction risk mitigation</p></list-item>
<list-item><p>Let two die and support the homeless</p></list-item>
<list-item><p>Let two die and fund extinction risk mitigation</p></list-item>
</list>
<p>Call this the &#8220;grander-world model&#8221; of <italic>Two Binary Choices</italic>. As Greaves &amp; Cotton-Barratt (<xref ref-type="bibr" rid="B11">2024: &#167;6</xref>) show, the Nash bargaining solution in the grander-world model is (4): let two die and fund extinction risk mitigation.</p>
<p>An alternative way of modelling <italic>Two Binary Choices</italic> is to treat it as two staggered choice situations. In the first choice situation, Jenny has to decide whether to kill the one or to let the two die. In the second choice situation, Jenny has to decide whether to support the homeless or to fund extinction risk mitigation. Call this the &#8220;smaller-worlds model&#8221; of <italic>Two Binary Choices</italic>. The Nash bargaining solution for each of these two choice situations in the smaller-worlds model is 50-50 randomization over the two available options. Thus, on the smaller-worlds model, MMT implies that killing the one and letting the two die could <italic>both</italic> in principle be permissible for Jenny (recall &#167;5 above), and likewise that supporting the homeless and funding existential risk mitigation could <italic>both</italic> in principle be permissible (again, recall &#167;5 above). Thus, MMT implies that each of the four possible combinations (1) &#8211; (4) could in principle be (super-subjectively) permissible for Jenny in <italic>Two Binary Choices</italic>.</p>
<p>Greaves and Cotton-Barratt call this the <italic>problem of small worlds</italic> (<xref ref-type="bibr" rid="B11">2024: &#167;10</xref>). &#8220;It can make a significant difference to the verdict of the Nash approach whether one chooses a smaller- or a grander-world model of one&#8217;s decision problem&#8221; (<xref ref-type="bibr" rid="B11">2024: 165</xref>). According to them, &#8220;this is problematic, since any such choice (short of the impractical maximally grand-world model) seems arbitrary&#8221; (<xref ref-type="bibr" rid="B11">2024: 165</xref>). The options assumed to be open to an agent in a &#8220;maximally grand-world model&#8221; are presumably complete plans of action for the remainder of the agent&#8217;s lifetime, specifying how the agent would act under every possible empirical contingency.</p>
<p>However, we think that Greaves and Cotton-Barratt are too quick to conclude that a rule for deciding on some privileged small-world model of an agent&#8217;s circumstances must be &#8220;arbitrary.&#8221; Reflecting on the circumstances in which theories of appropriate action under moral uncertainty like MMT are designed to be applied suggests a principled and intuitively attractive rule for deciding how &#8220;grand&#8221; our model of an agent&#8217;s option-set should be made in any particular choice situation.</p>
<p>Theories of appropriate action like MMT are designed to be applied in circumstances where an agent is trying to decide what to do in light of her moral uncertainty. Hence, it makes sense to think of an agent&#8217;s set of options in any particular choice situation as the set of plans of action that she is <italic>capable of deciding</italic> to perform in that choice situation.<xref ref-type="fn" rid="n31">31</xref> Call this approach <italic>decisionism</italic>. It is beyond the scope of this paper to propose necessary and sufficient conditions for &#8220;being able to decide to <inline-formula><alternatives><mml:math id="Eq059-mml"><mml:mo>&#x03C6;</mml:mo></mml:math></alternatives></inline-formula>&#8221; (cf. <xref ref-type="bibr" rid="B17">Hedden 2012</xref>; <xref ref-type="bibr" rid="B41">Shepherd 2015</xref>). However, on any plausible rendering of these conditions, decisionism will almost always select worlds smaller than the maximally grand world, given that no human agent has the cognitive capacities one would require in order to decide now on one particular complete plan of action for the remainder of one&#8217;s lifetime (apart from agents who know they are at death&#8217;s door). On the other hand, almost all human agents have the cognitive capacity to decide to perform any of the four possible options (1) &#8211; (4) from Greaves and Cotton-Barratt&#8217;s grander-world model of <italic>Two Binary Choices</italic>. Thus, MMT plus decisionism implies that (4) is the only permissible option in <italic>Two Binary Choices</italic>. This strikes us as the right result.</p>
<p>Although the precise details of decisionism will depend on exactly how one analyses &#8216;being able to decide to <inline-formula><alternatives><mml:math id="Eq061-mml"><mml:mo>&#x03C6;</mml:mo></mml:math></alternatives></inline-formula>,&#8217; we are cautiously optimistic that decisionism (or something like it) can provide a principled and intuitively attractive response to Greaves and Cotton-Barratt&#8217;s problem of small worlds. Despite their objections, the Nash bargaining approach to moral uncertainty remains a viable option.<xref ref-type="fn" rid="n32">32</xref></p>
</sec>
<sec>
<title>7. Conclusion</title>
<p>In this paper, we have motivated a bargaining approach to decision making under moral uncertainty. The specific theory we presented in &#167;4, MMT, captures widespread intuitions about the appropriateness of splitting one&#8217;s donations in proportion to one&#8217;s credences in various moral theories, and provides a satisfying explanation for when and why departures from the proportional response are appropriate.</p>
<p>As was discussed in &#167;4, capturing Proportionality is not the only advantage MMT has over traditional views on appropriate choice under conditions of moral uncertainty. MMT successfully circumvents a number of potential pitfalls that have divided the field, such as intertheoretic comparability, demandingness and fanaticism. But for all that, we have only scratched the surface of the bargaining-based approach to moral uncertainty. We see MMT as a jumping-off point, meant to illustrate both the viability and appeal of the bargaining approach more generally, and hopefully to spark more interest in this project.</p>
</sec>
</body>
<back>
<fn-group>
<fn id="n1"><p>See Theron Pummer (<xref ref-type="bibr" rid="B37">2016</xref>) and Joe Horton (<xref ref-type="bibr" rid="B19">2017</xref>).</p></fn>
<fn id="n2"><p>It is a separate issue whether, on this view, the agent who impermissibly engages in suboptimal altruism is morally blameworthy (<xref ref-type="bibr" rid="B38">Pummer 2021</xref>).</p></fn>
<fn id="n3"><p>Some philosophers deny there are subjective norms to guide an agent&#8217;s deliberations under conditions of moral uncertainty (<xref ref-type="bibr" rid="B16">Harman 2009</xref>; <xref ref-type="bibr" rid="B53">Weatherson 2019</xref>). Important though these arguments are, this paper is not the right place to engage with them. We focus on whether these subjective norms, if there are any, could accommodate our intuitions about <italic>Torn Up</italic>.</p></fn>
<fn id="n4"><p>By &#8216;recommended&#8217; we simply mean that the moral theory describes this option as a permissible object of choice. Some theories will deem more than one option permissible while smiling most on one of these (i.e., a supererogatory option). As we will discuss in &#167;4.5, Proportionality grants the uncertain decision maker leeway here, which we take to be an attractive feature of the view. Thanks to Tim Campbell for pressing us to clarify this point.</p></fn>
<fn id="n5"><p>Effective altruism community members search for ways to do the most good and then put them into practice; see <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://www.effectivealtruism.org/articles/introduction-to-effective-altruism">https://www.effectivealtruism.org/articles/introduction-to-effective-altruism</ext-link>. Arguably, its moral foundation is the (Weak) Avoid Gratuitous Worseness Principle, which states &#8220;It is wrong to perform an act that is <italic>much worse</italic> than another, if it is <italic>no</italic> costlier to you to perform the better act, and if all other things are equal&#8221; (<xref ref-type="bibr" rid="B37">Pummer 2016: 84</xref>). But see Sinclair (<xref ref-type="bibr" rid="B42">2018</xref>).</p></fn>
<fn id="n6"><p>Open Philanthropy, the grant-making organization charged with executing the philanthropic plans of Moskovitz and Tuna, subscribes to an approach they label &#8220;worldview diversification,&#8221; described here: <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://www.openphilanthropy.org/research/worldview-diversification/">https://www.openphilanthropy.org/research/worldview-diversification/</ext-link>. The rationale given is somewhat imprecise, but the relevant features are that they split their credence across different &#8220;worldviews&#8221;&#8212;which reflect various positions one can take on both moral and empirical uncertainties&#8212;and then allocate <italic>some</italic> resources to each of those worldviews.</p></fn>
<fn id="n7"><p>Although William MacAskill is a sort of spokesperson for effective altruism, his book on the topic of moral uncertainty makes reference to neither the practice nor the intuition behind proportional diversification (<xref ref-type="bibr" rid="B27">MacAskill et al. 2020</xref>). One exception to this neglect is the short critical discussion of proportional diversification in (<xref ref-type="bibr" rid="B13">Greaves et al. 2024</xref>).</p></fn>
<fn id="n8"><p>Elsewhere, Lloyd (<xref ref-type="bibr" rid="B23">2022</xref>) shows that Proportionality is incompatible with extant theories for handling moral uncertainty. This provides us with strong reason, we think, to seek out alternatives. See also Plant (<xref ref-type="bibr" rid="B36">2022</xref>), an early ancestor of the present paper, which provided a more positive (though imprecise) treatment of bargaining as a solution to moral uncertainty.</p></fn>
<fn id="n9"><p>So named after Frank Jackson (<xref ref-type="bibr" rid="B20">1991</xref>), who developed this style of case. Jackson cases are now central to the debate on which subjective norms should guide decision making when morally uncertain; e.g., (<xref ref-type="bibr" rid="B9">Field 2019</xref>; <xref ref-type="bibr" rid="B28">MacAskill and Ord 2020</xref>).</p></fn>
<fn id="n10"><p>We took pity on the so-called &#8220;fat man,&#8221; a fabled victim of trolleyology, and adapted a case from Andreas Mogensen (<xref ref-type="bibr" rid="B29">2016: 215-216</xref>).</p></fn>
<fn id="n11"><p>In fact, these verdicts depend on several additional features that we brushed aside for simplicity&#8217;s sake, such as how good an average giraffe&#8217;s life is for them, the lifespan of a canary, their degrees of psychological capacity (at least, on Kagan&#8217;s view) and so forth.</p></fn>
<fn id="n12"><p>Total utilitarianism states that one should bring about the outcome in which there would be the greatest quantity of happiness (<xref ref-type="bibr" rid="B35">Parfit 1984: 387</xref>). On this view, we should create happy people, since doing so increases the total sum of happiness. (Some find this conclusion disturbing; they believe that how good a state of affairs is depends only on how good it is for already existing persons. As Jonathan Bennett (<xref ref-type="bibr" rid="B4">1978: 63-4</xref>) bemoaned, &#8220;As well as deploring the situation where a person lacks happiness, [total utilitarians] also deplore the situation where some happiness lacks a person&#8221;.) Anti-natalism holds the polar opposite: it is gravely wrong to create people, even if they are on balance happy, since anti-natalists argue that we harm people terribly when we create them but do not benefit them at all (<xref ref-type="bibr" rid="B3">Benatar 2006</xref>; cf. <xref ref-type="bibr" rid="B34">Pallies 2024</xref>). Our grasp of Benatar&#8217;s view was greatly improved by Elizabeth Harman&#8217;s (<xref ref-type="bibr" rid="B16">2009</xref>) review of it.</p></fn>
<fn id="n13"><p><italic>Procreation</italic> is a pure Each-We Dilemma since it can be solved by improving coordination between agents. See Temkin (<xref ref-type="bibr" rid="B50">2022: 238-249</xref>) for another type of Each-We Dilemma, one that cannot be solved in this fashion (<xref ref-type="bibr" rid="B8">Clark and Pummer 2019: 30</xref>).</p></fn>
<fn id="n14"><p>Other critics of intertheoretic choice-worthiness comparisons include Broome (<xref ref-type="bibr" rid="B7">2012</xref>), Gustafsson (<xref ref-type="bibr" rid="B14">2022</xref>), Gustafsson and Torpman (<xref ref-type="bibr" rid="B15">2014</xref>), and Hedden (<xref ref-type="bibr" rid="B18">2016</xref>).</p></fn>
<fn id="n15"><p>Another potential disanalogy between empirical and moral uncertainty is that how one should handle empirical uncertainty is one of the matters about which one can be morally uncertain&#8212;but not vice versa, presumably. For a discussion of the importance of this disanalogy see Lloyd (<xref ref-type="bibr" rid="B23">2022: 31-33</xref>).</p></fn>
<fn id="n16"><p>The most recent treatment of this view can be found in (<xref ref-type="bibr" rid="B32">Newberry and Ord 2021</xref>).</p></fn>
<fn id="n17"><p>A unit increase on an interval scale represents a certain fixed amount of the underlying thing being measured, regardless of where on that scale the unit increase occurs. For instance, &#176;F and &#176;C are both interval-scale measures of temperature because an increase of 1&#176;F or 1&#176;C represents the same amount of extra heat at freezing point as it does at boiling point. Importantly, we restrict our attention to interval-scale measurable moral theories only for the sake of simplicity. Restricting one&#8217;s attention to a certain subclass of moral theories is a common strategy for making progress in the existing literature on moral uncertainty. For instance, MEC is only applicable to uncertainty over theories that can all be represented by interval-scale and intertheoretically comparable choice-worthiness rankings. Advocates of MEC have gone on to propose alternative decision procedures for cases where these two conditions are not satisfied, such as the <italic>Borda count</italic> voting-theoretic approach (<xref ref-type="bibr" rid="B25">MacAskill 2016</xref>). Similarly, Tarsney (<xref ref-type="bibr" rid="B47">2018</xref>) restricts his attention to cases of moral uncertainty over different versions of absolutist deontology when proposing a <italic>stochastic dominance</italic> approach. Additional examples include Greaves and Ord (<xref ref-type="bibr" rid="B12">2017</xref>) and Kaczmarek and Lloyd (<xref ref-type="bibr" rid="B21">forthcoming</xref>).</p></fn>
<fn id="n18"><p>Actually, MMT will use the symmetric version of Nash&#8217;s bargaining solution, whereas Greaves and Cotton-Barratt use the asymmetric version. But this difference is unimportant for our purposes here.</p></fn>
<fn id="n19"><p>By contrast, Greaves and Cotton-Barratt&#8217;s proposals need not support Proportionality, as Lloyd (<xref ref-type="bibr" rid="B23">2022: 9-10</xref>) demonstrates.</p></fn>
<fn id="n20"><p>The Nash bargaining solution can also be justified as the limiting outcome of several diachronic, &#8220;non-cooperative&#8221; models of the bargaining process (<xref ref-type="bibr" rid="B5">Binmore et al. 1986</xref>).</p></fn>
<fn id="n21"><p>Note that in order to multiply the choice-worthiness values of two moral theories, we need not assume that these theories are intertheoretically unit-comparable. For a demonstration of this feature of MMT, see &#167;4.4 below.</p></fn>
<fn id="n22"><p>Scale invariance is satisfied not only by Nash&#8217;s bargaining solution, but also by some of the most popular alternatives to it, for instance the Kalai-Smorodinsky solution. However, some bargaining solutions do violate scale invariance, for instance the well-known Kalai solution. For a useful overview of these and other cooperative bargaining solutions, see Thomson (<xref ref-type="bibr" rid="B51">1994</xref>).</p></fn>
<fn id="n23"><p>&#8220;Fanaticism&#8221; was coined by Bostrom (<xref ref-type="bibr" rid="B6">2011</xref>). It has since also been referred to as &#8220;recklessness&#8221; (<xref ref-type="bibr" rid="B2">Beckstead and Thomas 2024</xref>).</p></fn>
<fn id="n24"><p>This result also follows directly from the scale invariance axiom.</p></fn>
<fn id="n25"><p>See MacAskill et al. (<xref ref-type="bibr" rid="B27">2020: 150-155</xref>) for a defense of the latter stance.</p></fn>
<fn id="n26"><p>See Jacob Ross (<xref ref-type="bibr" rid="B39">2006</xref>) for some more detail on this argument, and see Tarsney (<xref ref-type="bibr" rid="B48">2019</xref>) for additional implications of the dominance argument elsewhere. In response, MacAskill et al. (<xref ref-type="bibr" rid="B27">2020: 52-53</xref>) have suggested falling back on prudential, or self-regarding, reasons. But, although this helps us avoid demandingness in one sense (<xref ref-type="bibr" rid="B26">MacAskill 2019: fn 2</xref>), in another sense it exacerbates the problem: we would be &#8220;prohibited from acting against our interests to a certain degree and obligated to act in accordance with our interests to a certain degree&#8221; (<xref ref-type="bibr" rid="B46">Sung 2023</xref>).</p></fn>
<fn id="n27"><p>This approach is quite similar to the one adumbrated by Newberry and Ord (<xref ref-type="bibr" rid="B32">2021: 8-9</xref>). They suggest a bargaining approach inspired by analogy with a parliament that uses non-deterministic &#8220;proportional chances voting.&#8221; However, Newberry and Ord do not discuss which formal model of bargaining they think might be appropriate here, as we do in &#167;4.2 above.</p></fn>
<fn id="n28"><p>See Newberry and Ord (<xref ref-type="bibr" rid="B32">2021: 8-9</xref>).</p></fn>
<fn id="n29"><p>Possible disagreement points considered by Greaves and Cotton-Barratt include:</p>
<p><list list-type="order">
<list-item><p><italic>Random dictator</italic>: a lottery is held, wherein each theory representative&#8217;s chance of winning is equal to the decision maker&#8217;s credence in the moral theory represented. The lottery winner gets to decide how the decision maker will act in the current choice situation.</p></list-item>
<list-item><p><italic>Anti-utopia</italic>: a (hypothetical) outcome whose choiceworthiness according to any given moral theory is the minimum choiceworthiness possible in the current choice situation according to that moral theory.</p></list-item>
<list-item><p><italic>Do nothing</italic>: the outcome that would eventuate if the decision maker did nothing.</p></list-item>
</list></p></fn>
<fn id="n30"><p>Thus, MMT&#8217;s disagreement point is motivated by the analogy with the marketplace. By contrast, Greaves and Cotton-Barratt&#8217;s methodology is simply to select some &#8220;reasonably simple and elegant&#8221; disagreement point such that Nash bargaining theory with that choice of disagreement point supplies a satisfactory metanormative theory (<xref ref-type="bibr" rid="B11">2024: 139</xref>). (In other words: the disagreement point is a theoretical free variable for Greaves and Cotton-Barratt, the only constraint on which is extensional adequacy.)</p></fn>
<fn id="n31"><p>This approach is inspired by Hedden (<xref ref-type="bibr" rid="B17">2012</xref>).</p></fn>
<fn id="n32"><p>It is also worth noting here that the Maximize Expected <italic>Normalized</italic> Choice-Worthiness extension of MEC designed to cover cases of intertheoretic unit-incomparability also suffers from a problem of small worlds (<xref ref-type="bibr" rid="B24">Lloyd ms</xref>).</p></fn>
</fn-group>
<sec>
<title>Acknowledgements</title>
<p>Special thanks to Tim Campbell, Paul Forrester, Niels Br&#248;gger, two reviewers and the editors of <italic>Ergo</italic>. Harry R. Lloyd and Michael Plant are grateful to both the Forethought Foundation and the Happier Lives Institute for their financial support. Michael Plant is also grateful to the Wellbeing Research Centre, University of Oxford, for its support.</p>
</sec>
<ref-list>
<ref id="B1"><mixed-citation publication-type="journal"><string-name><surname>Baker</surname>, <given-names>Calvin</given-names></string-name> (<year>2024</year>). <article-title>Expected Choiceworthiness and Fanaticism</article-title>. <source>Philosophical Studies</source>, <volume>181</volume>(<issue>5</issue>), <fpage>1237</fpage>&#8211;<lpage>1256</lpage>.</mixed-citation></ref>
<ref id="B2"><mixed-citation publication-type="journal"><string-name><surname>Beckstead</surname>, <given-names>Nick</given-names></string-name> and <string-name><given-names>Teruji</given-names> <surname>Thomas</surname></string-name> (<year>2024</year>). <article-title>A Paradox for Tiny Probabilities and Enormous Values</article-title>. <source>No&#251;s</source>, <volume>58</volume>(<issue>2</issue>), <fpage>431</fpage>&#8211;<lpage>455</lpage>.</mixed-citation></ref>
<ref id="B3"><mixed-citation publication-type="book"><string-name><surname>Benatar</surname>, <given-names>David</given-names></string-name> (<year>2006</year>). <source>Better Never to Have Been: the Harm of Coming into Existence</source>. <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="B4"><mixed-citation publication-type="book"><string-name><surname>Bennett</surname>, <given-names>Jonathan</given-names></string-name> (<year>1978</year>). <chapter-title>On Maximizing Happiness</chapter-title>. In <string-name><given-names>R. I.</given-names> <surname>Sikora</surname></string-name> and <string-name><given-names>B.</given-names> <surname>Barry</surname></string-name> (Eds.), <source>Obligations to Future Generations</source> (<fpage>61</fpage>&#8211;<lpage>73</lpage>). <publisher-name>Temple University Press</publisher-name>.</mixed-citation></ref>
<ref id="B5"><mixed-citation publication-type="journal"><string-name><surname>Binmore</surname>, <given-names>Ken</given-names></string-name>, <string-name><given-names>Ariel</given-names> <surname>Rubinstein</surname></string-name>, and <string-name><given-names>Asher</given-names> <surname>Wolinsky</surname></string-name> (<year>1986</year>). <article-title>The Nash Bargaining Solution in Economic Modelling</article-title>. <source>RAND Journal of Economics</source>, <volume>17</volume>(<issue>2</issue>), <fpage>176</fpage>&#8211;<lpage>188</lpage>.</mixed-citation></ref>
<ref id="B6"><mixed-citation publication-type="journal"><string-name><surname>Bostrom</surname>, <given-names>Nick</given-names></string-name> (<year>2011</year>). <article-title>Infinite Ethics</article-title>. <source>Analysis and Metaphysics</source>, <volume>10</volume>, <fpage>9</fpage>&#8211;<lpage>59</lpage>.</mixed-citation></ref>
<ref id="B7"><mixed-citation publication-type="book"><string-name><surname>Broome</surname>, <given-names>John</given-names></string-name> (<year>2012</year>). <source>Climate Matters: Ethics in a Warming World</source>. <publisher-name>W. W. Norton</publisher-name>.</mixed-citation></ref>
<ref id="B8"><mixed-citation publication-type="journal"><string-name><surname>Clark</surname>, <given-names>Matthew</given-names></string-name> and <string-name><given-names>Theron</given-names> <surname>Pummer</surname></string-name> (<year>2019</year>). <article-title>Each-We Dilemmas and Effective Altruism</article-title>. <source>Journal of Practical Ethics</source>, <volume>7</volume>(<issue>1</issue>), <fpage>24</fpage>&#8211;<lpage>32</lpage>.</mixed-citation></ref>
<ref id="B9"><mixed-citation publication-type="journal"><string-name><surname>Field</surname>, <given-names>Claire</given-names></string-name> (<year>2019</year>). <article-title>Recklessness and Uncertainty: Jackson Cases and Merely Apparent Asymmetry</article-title>. <source>Journal of Moral Philosophy</source>, <volume>16</volume>(<issue>4</issue>), <fpage>391</fpage>&#8211;<lpage>413</lpage>.</mixed-citation></ref>
<ref id="B10"><mixed-citation publication-type="journal"><string-name><surname>Gracely</surname>, <given-names>Edward J.</given-names></string-name> (<year>1996</year>). <article-title>On the Noncomparability of Judgements Made by Different Ethical Theories</article-title>. <source>Metaphilosophy</source>, <volume>27</volume>(<issue>3</issue>), <fpage>327</fpage>&#8211;<lpage>332</lpage>.</mixed-citation></ref>
<ref id="B11"><mixed-citation publication-type="journal"><string-name><surname>Greaves</surname>, <given-names>Hilary</given-names></string-name> and <string-name><given-names>Owen</given-names> <surname>Cotton-Barratt</surname></string-name> (<year>2024</year>). <article-title>A Bargaining-Theoretic Approach to Moral Uncertainty</article-title>. <source>Journal of Moral Philosophy</source>, <volume>21</volume>(<issue>1-2</issue>), <fpage>127</fpage>&#8211;<lpage>169</lpage>.</mixed-citation></ref>
<ref id="B12"><mixed-citation publication-type="journal"><string-name><surname>Greaves</surname>, <given-names>Hilary</given-names></string-name> and <string-name><given-names>Toby</given-names> <surname>Ord</surname></string-name> (<year>2017</year>). <article-title>Moral Uncertainty About Population Axiology</article-title>. <source>Journal of Ethics and Social Philosophy</source>, <volume>12</volume>(<issue>2</issue>), <fpage>135</fpage>&#8211;<lpage>167</lpage>.</mixed-citation></ref>
<ref id="B13"><mixed-citation publication-type="journal"><string-name><surname>Greaves</surname>, <given-names>Hilary</given-names></string-name>, <string-name><given-names>William</given-names> <surname>MacAskill</surname></string-name>, <string-name><given-names>Andreas</given-names> <surname>Mogensen</surname></string-name>, and <string-name><given-names>Teruji</given-names> <surname>Thomas</surname></string-name> (<year>2024</year>). <article-title>On the Desire to Make a Difference</article-title>. <source>Philosophical Studies</source>, <volume>181</volume>(<issue>6-7</issue>), <fpage>1599</fpage>&#8211;<lpage>1626</lpage>.</mixed-citation></ref>
<ref id="B14"><mixed-citation publication-type="journal"><string-name><surname>Gustafsson</surname>, <given-names>Johan</given-names></string-name> (<year>2022</year>). <article-title>Second Thoughts About My Favourite Theory</article-title>. <source>Pacific Philosophical Quarterly</source>, <volume>103</volume>(<issue>3</issue>), <fpage>448</fpage>&#8211;<lpage>470</lpage>.</mixed-citation></ref>
<ref id="B15"><mixed-citation publication-type="journal"><string-name><surname>Gustafsson</surname>, <given-names>Johan</given-names></string-name> and <string-name><given-names>Olle</given-names> <surname>Torpman</surname></string-name> (<year>2014</year>). <article-title>In Defence of My Favourite Theory</article-title>. <source>Pacific Philosophical Quarterly</source>, <volume>95</volume>(<issue>2</issue>), <fpage>159</fpage>&#8211;<lpage>174</lpage>.</mixed-citation></ref>
<ref id="B16"><mixed-citation publication-type="journal"><string-name><surname>Harman</surname>, <given-names>Elizabeth</given-names></string-name> (<year>2009</year>). <article-title>Critical Study: David Benatar&#8217;s Better Never to Have Been: The Harm of Coming into Existence</article-title>. <source>No&#251;s</source>, <volume>43</volume>(<issue>4</issue>), <fpage>776</fpage>&#8211;<lpage>785</lpage>.</mixed-citation></ref>
<ref id="B17"><mixed-citation publication-type="journal"><string-name><surname>Hedden</surname>, <given-names>Brian</given-names></string-name> (<year>2012</year>). <article-title>Options and the Subjective Ought</article-title>. <source>Philosophical Studies</source>, <volume>158</volume>(<issue>2</issue>), <fpage>343</fpage>&#8211;<lpage>360</lpage>.</mixed-citation></ref>
<ref id="B18"><mixed-citation publication-type="book"><string-name><surname>Hedden</surname>, <given-names>Brian</given-names></string-name> (<year>2016</year>). <chapter-title>Does MITE Make Right?</chapter-title> In <string-name><given-names>R.</given-names> <surname>Shafer-Landau</surname></string-name> (Ed.), <source>Oxford Studies in Metaethics, Volume 11</source> (<fpage>102</fpage>&#8211;<lpage>128</lpage>). <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="B19"><mixed-citation publication-type="journal"><string-name><surname>Horton</surname>, <given-names>Joe</given-names></string-name> (<year>2017</year>). <article-title>The All or Nothing Problem</article-title>. <source>Journal of Philosophy</source>, <volume>114</volume>(<issue>2</issue>), <fpage>94</fpage>&#8211;<lpage>104</lpage>.</mixed-citation></ref>
<ref id="B20"><mixed-citation publication-type="journal"><string-name><surname>Jackson</surname>, <given-names>Frank</given-names></string-name> (<year>1991</year>). <article-title>Decision-Theoretic Consequentialism and the Nearest and Dearest Objection</article-title>. <source>Ethics</source>, <volume>101</volume>(<issue>3</issue>), <fpage>461</fpage>&#8211;<lpage>482</lpage>.</mixed-citation></ref>
<ref id="B21"><mixed-citation publication-type="journal"><string-name><surname>Kaczmarek</surname>, <given-names>Patrick</given-names></string-name> and <string-name><given-names>Harry R.</given-names> <surname>Lloyd</surname></string-name> (forthcoming). <article-title>Moral Uncertainty, Pure Justifiers, and Agent-Centred Options</article-title>. <source>Australasian Journal of Philosophy</source>, <volume>00</volume>, <fpage>1</fpage>&#8211;<lpage>29</lpage>.</mixed-citation></ref>
<ref id="B22"><mixed-citation publication-type="book"><string-name><surname>Kagan</surname>, <given-names>Shelly</given-names></string-name> (<year>2019</year>). <source>How to Count Animals, More or Less</source>. <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="B23"><mixed-citation publication-type="journal"><string-name><surname>Lloyd</surname>, <given-names>Harry R.</given-names></string-name> (<year>2022</year>). <article-title>The Property Rights Approach to Moral Uncertainty</article-title>. <source>Happier Lives Institute Working Paper</source>, <volume>00</volume>, <fpage>1</fpage>&#8211;<lpage>40</lpage>.</mixed-citation></ref>
<ref id="B24"><mixed-citation publication-type="journal"><string-name><surname>Lloyd</surname>, <given-names>Harry R.</given-names></string-name> (ms). <article-title>Moral Uncertainty, Expected Choiceworthiness, and Variance Normalization</article-title>.</mixed-citation></ref>
<ref id="B25"><mixed-citation publication-type="journal"><string-name><surname>MacAskill</surname>, <given-names>William</given-names></string-name> (<year>2016</year>). <article-title>Normative Uncertainty as a Voting Problem</article-title>. <source>Mind</source>, <volume>125</volume>(<issue>500</issue>), <fpage>967</fpage>&#8211;<lpage>1004</lpage>.</mixed-citation></ref>
<ref id="B26"><mixed-citation publication-type="journal"><string-name><surname>MacAskill</surname>, <given-names>William</given-names></string-name> (<year>2019</year>). <article-title>Practical Ethics Given Moral Uncertainty</article-title>. <source>Utilitas</source>, <volume>31</volume>(<issue>3</issue>), <fpage>231</fpage>&#8211;<lpage>245</lpage>.</mixed-citation></ref>
<ref id="B27"><mixed-citation publication-type="book"><string-name><surname>MacAskill</surname>, <given-names>William</given-names></string-name>, <string-name><given-names>Krister</given-names> <surname>Bykvist</surname></string-name>, and <string-name><given-names>Toby</given-names> <surname>Ord</surname></string-name> (<year>2020</year>). <source>Moral Uncertainty</source>. <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="B28"><mixed-citation publication-type="journal"><string-name><surname>MacAskill</surname>, <given-names>William</given-names></string-name> and <string-name><given-names>Toby</given-names> <surname>Ord</surname></string-name> (<year>2020</year>). <article-title>Why Maximize Expected Choice-Worthiness?</article-title> <source>No&#251;s</source>, <volume>54</volume>(<issue>2</issue>), <fpage>327</fpage>&#8211;<lpage>353</lpage>.</mixed-citation></ref>
<ref id="B29"><mixed-citation publication-type="journal"><string-name><surname>Mogensen</surname>, <given-names>Andreas</given-names></string-name> (<year>2016</year>). <article-title>Should We Prevent Optimific Wrongs?</article-title> <source>Utilitas</source>, <volume>28</volume>(<issue>2</issue>), <fpage>215</fpage>&#8211;<lpage>226</lpage>.</mixed-citation></ref>
<ref id="B30"><mixed-citation publication-type="journal"><string-name><surname>Mu&#241;oz</surname>, <given-names>Daniel</given-names></string-name> and <string-name><given-names>Jack</given-names> <surname>Spencer</surname></string-name> (<year>2021</year>). <article-title>Knowledge of Objective &#8216;Oughts&#8217;: Monotonicity and the New Miners Puzzle</article-title>. <source>Philosophy and Phenomenological Research</source>, <volume>103</volume>(<issue>1</issue>), <fpage>77</fpage>&#8211;<lpage>91</lpage>.</mixed-citation></ref>
<ref id="B31"><mixed-citation publication-type="journal"><string-name><surname>Nash</surname> <suffix>Jr.</suffix>, <given-names>John F.</given-names></string-name> (<year>1950</year>). <article-title>The Bargaining Problem</article-title>. <source>Econometrica</source>, <volume>18</volume>(<issue>2</issue>), <fpage>155</fpage>&#8211;<lpage>162</lpage>.</mixed-citation></ref>
<ref id="B32"><mixed-citation publication-type="journal"><string-name><surname>Newberry</surname>, <given-names>Toby</given-names></string-name> and <string-name><given-names>Toby</given-names> <surname>Ord</surname></string-name> (<year>2021</year>). <article-title>The Parliamentary Approach to Moral Uncertainty</article-title>. <source>FHI Technical Report No. 2021-2</source>, <volume>00</volume>, <fpage>1</fpage>&#8211;<lpage>16</lpage>.</mixed-citation></ref>
<ref id="B33"><mixed-citation publication-type="journal"><string-name><surname>Ord</surname>, <given-names>Toby</given-names></string-name> (<year>2015</year>). <article-title>Moral Trade</article-title>. <source>Ethics</source>, <volume>126</volume>(<issue>1</issue>), <fpage>118</fpage>&#8211;<lpage>138</lpage>.</mixed-citation></ref>
<ref id="B34"><mixed-citation publication-type="journal"><string-name><surname>Pallies</surname>, <given-names>Daniel</given-names></string-name> (<year>2024</year>). <article-title>Pessimism and Procreation</article-title>. <source>Philosophy and Phenomenological Research</source>, <volume>108</volume>(<issue>3</issue>), <fpage>751</fpage>&#8211;<lpage>771</lpage>.</mixed-citation></ref>
<ref id="B35"><mixed-citation publication-type="book"><string-name><surname>Parfit</surname>, <given-names>Derek</given-names></string-name> (<year>1984</year>). <source>Reasons and Persons</source>. <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="B36"><mixed-citation publication-type="journal"><string-name><surname>Plant</surname>, <given-names>Michael</given-names></string-name> (<year>2022</year>). <article-title>Wheeling and Dealing: An Internal Bargaining Approach to Moral Uncertainty</article-title>. <source>Happier Lives Institute Working Paper</source>, <volume>00</volume>, <fpage>1</fpage>&#8211;<lpage>20</lpage>.</mixed-citation></ref>
<ref id="B37"><mixed-citation publication-type="journal"><string-name><surname>Pummer</surname>, <given-names>Theron</given-names></string-name> (<year>2016</year>). <article-title>Whether and Where to Give</article-title>. <source>Philosophy &amp; Public Affairs</source>, <volume>44</volume>(<issue>1</issue>), <fpage>77</fpage>&#8211;<lpage>95</lpage>.</mixed-citation></ref>
<ref id="B38"><mixed-citation publication-type="journal"><string-name><surname>Pummer</surname>, <given-names>Theron</given-names></string-name> (<year>2021</year>). <article-title>Impermissible Yet Praiseworthy</article-title>. <source>Ethics</source>, <volume>131</volume>(<issue>4</issue>), <fpage>697</fpage>&#8211;<lpage>726</lpage>.</mixed-citation></ref>
<ref id="B39"><mixed-citation publication-type="journal"><string-name><surname>Ross</surname>, <given-names>Jacob</given-names></string-name> (<year>2006</year>). <article-title>Rejecting Ethical Deflationism</article-title>. <source>Ethics</source>, <volume>116</volume>(<issue>4</issue>), <fpage>742</fpage>&#8211;<lpage>768</lpage>.</mixed-citation></ref>
<ref id="B40"><mixed-citation publication-type="thesis"><string-name><surname>Sepielli</surname>, <given-names>Andrew</given-names></string-name> (<year>2010</year>). <source>Along an Imperfectly Lighted Path: Practical Rationality and Normative Uncertainty</source>. PhD diss. <publisher-name>Rutgers University</publisher-name>.</mixed-citation></ref>
<ref id="B41"><mixed-citation publication-type="journal"><string-name><surname>Shepherd</surname>, <given-names>Joshua</given-names></string-name> (<year>2015</year>). <article-title>Deciding as Intentional Action: Control Over Decisions</article-title>. <source>Australasian Journal of Philosophy</source>, <volume>93</volume>(<issue>2</issue>), <fpage>335</fpage>&#8211;<lpage>351</lpage>.</mixed-citation></ref>
<ref id="B42"><mixed-citation publication-type="journal"><string-name><surname>Sinclair</surname>, <given-names>Thomas</given-names></string-name> (<year>2018</year>). <article-title>Are We Conditionally Obligated to be Effective Altruists?</article-title> <source>Philosophy &amp; Public Affairs</source>, <volume>46</volume>(<issue>1</issue>), <fpage>36</fpage>&#8211;<lpage>59</lpage>.</mixed-citation></ref>
<ref id="B43"><mixed-citation publication-type="journal"><string-name><surname>Singer</surname>, <given-names>Peter</given-names></string-name> (<year>1972</year>). <article-title>Famine, Affluence, and Morality</article-title>. <source>Philosophy &amp; Public Affairs</source>, <volume>1</volume>(<issue>3</issue>), <fpage>229</fpage>&#8211;<lpage>243</lpage>.</mixed-citation></ref>
<ref id="B44"><mixed-citation publication-type="book"><string-name><surname>Singer</surname>, <given-names>Peter</given-names></string-name> (<year>2009</year>). <source>Animal Liberation</source>. <publisher-name>HarperCollins</publisher-name>.</mixed-citation></ref>
<ref id="B45"><mixed-citation publication-type="journal"><string-name><surname>Sung</surname>, <given-names>Leora</given-names></string-name> (<year>2022</year>). <article-title>Never Just Save the Few</article-title>. <source>Utilitas</source>, <volume>34</volume>(<issue>3</issue>), <fpage>275</fpage>&#8211;<lpage>288</lpage>.</mixed-citation></ref>
<ref id="B46"><mixed-citation publication-type="journal"><string-name><surname>Sung</surname>, <given-names>Leora</given-names></string-name> (<year>2023</year>). <article-title>Supererogation, Suberogation, and Maximizing Expected Choiceworthiness</article-title>. <source>Canadian Journal of Philosophy</source>, <volume>53</volume>(<issue>5</issue>), <fpage>418</fpage>&#8211;<lpage>432</lpage>.</mixed-citation></ref>
<ref id="B47"><mixed-citation publication-type="journal"><string-name><surname>Tarsney</surname>, <given-names>Christian</given-names></string-name> (<year>2018</year>). <article-title>Moral Uncertainty for Deontologists</article-title>. <source>Ethical Theory and Moral Practice</source>, <volume>21</volume>(<issue>3</issue>), <fpage>505</fpage>&#8211;<lpage>520</lpage>.</mixed-citation></ref>
<ref id="B48"><mixed-citation publication-type="journal"><string-name><surname>Tarsney</surname>, <given-names>Christian</given-names></string-name> (<year>2019</year>). <article-title>Rejecting Supererogationism</article-title>. <source>Pacific Philosophical Quarterly</source>, <volume>100</volume>(<issue>2</issue>), <fpage>599</fpage>&#8211;<lpage>623</lpage>.</mixed-citation></ref>
<ref id="B49"><mixed-citation publication-type="journal"><string-name><surname>Tarsney</surname>, <given-names>Christian</given-names></string-name> (<year>2021</year>). <article-title>Vive la Diff&#233;rence? Structural Diversity as a Challenge for Metanormative Theories</article-title>. <source>Ethics</source>, <volume>131</volume>(<issue>2</issue>), <fpage>151</fpage>&#8211;<lpage>182</lpage>.</mixed-citation></ref>
<ref id="B50"><mixed-citation publication-type="book"><string-name><surname>Temkin</surname>, <given-names>Larry</given-names></string-name> (<year>2022</year>). <source>Being Good in a World of Need</source>. <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="B51"><mixed-citation publication-type="book"><string-name><surname>Thomson</surname>, <given-names>William</given-names></string-name> (<year>1994</year>). <chapter-title>Cooperative Models of Bargaining</chapter-title>. In <string-name><given-names>R.</given-names> <surname>Aumann</surname></string-name> and <string-name><given-names>S.</given-names> <surname>Hart</surname></string-name> (Eds.), <source>Handbook of Game Theory with Economic Applications, Volume 2</source> (<fpage>1237</fpage>&#8211;<lpage>1284</lpage>). <publisher-name>Elsevier</publisher-name>.</mixed-citation></ref>
<ref id="B52"><mixed-citation publication-type="book"><string-name><surname>Unger</surname>, <given-names>Peter</given-names></string-name> (<year>1996</year>). <source>Living High and Letting Die: Our Illusions of Innocence</source>. <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="B53"><mixed-citation publication-type="book"><string-name><surname>Weatherson</surname>, <given-names>Brian</given-names></string-name> (<year>2019</year>). <source>Normative Externalism</source>. <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="B54"><mixed-citation publication-type="journal"><string-name><surname>Wilkinson</surname>, <given-names>Hayden</given-names></string-name> (<year>2022</year>). <article-title>In Defense of Fanaticism</article-title>. <source>Ethics</source>, <volume>132</volume>(<issue>2</issue>), <fpage>445</fpage>&#8211;<lpage>477</lpage>.</mixed-citation></ref>
</ref-list>
</back>
</article>