<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2 20190208//EN" "http://jats.nlm.nih.gov/publishing/1.2/JATS-journalpublishing1.dtd">
<!--<?xml-stylesheet type="text/xsl" href="article.xsl"?>-->
<article article-type="research-article" dtd-version="1.2" xml:lang="en" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<front>
<journal-meta>
<journal-id journal-id-type="issn">2330-4014</journal-id>
<journal-title-group>
<journal-title>Ergo: An Open Access Journal of Philosophy</journal-title>
</journal-title-group>
<issn pub-type="epub">2330-4014</issn>
<publisher>
<publisher-name>Michigan Publishing Services</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3998/ergo.9269</article-id>
<article-categories>
<subj-group>
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>The Unexpected Value of the Future</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Wilkinson</surname>
<given-names>Hayden</given-names>
</name>
<email>hayden.wilkinson@uwa.edu.au</email>
<xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
</contrib-group>
<aff id="aff-1"><label>1</label>University of Oxford</aff>
<pub-date publication-format="electronic" date-type="pub" iso-8601-date="2026-02-16">
<day>16</day>
<month>02</month>
<year>2026</year>
</pub-date>
<pub-date pub-type="collection">
<year>2026</year>
</pub-date>
<volume>13</volume>
<elocation-id>14</elocation-id>
<permissions>
<copyright-statement>Copyright: &#x00A9; 2026 The Author(s)</copyright-statement>
<copyright-year>2026</copyright-year>
<license license-type="open-access" xlink:href="https://creativecommons.org/licenses/by-nc-nd/4.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0), which permits non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited and is not altered, transformed, or built upon. See <uri xlink:href="https://creativecommons.org/licenses/by-nc-nd/4.0/">https://creativecommons.org/licenses/by-nc-nd/4.0/</uri>.</license-p>
</license>
</permissions>
<self-uri xlink:href="https://journals.publishing.umich.edu/ergo/articles/10.3998/ergo.9269/"/>
<abstract>
<p>Various philosophers accept moral views that are <italic>impartial, additive</italic>, and <italic>risk-neutral</italic> with respect to betterness. But, if that risk neutrality is spelt out according to expected value theory alone, such views face a dire <italic>reductio ad absurdum</italic>. If the expected sum of value in humanity&#8217;s future is <italic>undefined</italic>&#8212;if, e.g., the probability distribution over possible values of the future resembles the Pasadena game, or a Cauchy distribution&#8212;then those views say that no real-world option is <italic>ever</italic> better than any other. And, as I argue, our evidence plausibly supports such a probability distribution. Indeed, it supports a probability distribution that cannot be evaluated even if we extend expected value theory according to one of several extensions proposed in the literature. Must we therefore reject all impartial, additive, risk-neutral moral theories? It turns out that we need not. I show that, by instead adopting a strong enough extension of expected value theory, we can evaluate that problematic distribution and potentially salvage those moral views.</p>
</abstract>
</article-meta>
</front>
<body>
<sec>
<title>1. Introduction</title>
<p>Consider three moral claims, each seemingly plausible and, in conjunction, accepted by various philosophers.<xref ref-type="fn" rid="n1">1</xref></p>
<p>The first is <italic>Impartiality</italic>: that the moral value of a life does not depend intrinsically on when or where it occurs; that, for instance, a human life lived millions of years in the future would be no more or less valuable than an otherwise identical life lived today.<xref ref-type="fn" rid="n2">2</xref></p>
<p>The second claim is that what matters for comparisons of moral betterness is the total sum of such value. Call this <italic>Additivity</italic>, the claim that: an outcome is at least as good as another if and only if the former contains at least as great a total sum of (some cardinal measure of) the value within each individual life.<xref ref-type="fn" rid="n3">3</xref></p>
<p>The third claim is that, when comparing <italic>risky</italic> options, <italic>expected value theory</italic> holds. This common view says that the morally best <italic>prospects</italic> over outcomes are those with the highest expected moral value&#8212;if we also accept Additivity, those with the highest probability-weighted sum of total moral value.<xref ref-type="fn" rid="n4">4</xref></p>
<p>But the conjunction of these three claims has troubling implications. If we face options with certain probability distributions over total moral value, those options&#8217; expected values will be <italic>undefined</italic>. We will be unable to say that their expected values are greater than, equal to, or less than those of any other options. Nor, by expected value theory (and no stronger principle for comparing risky options), can we say which of our options are best. The probability distributions that generate this result include well-known troublemakers from decision theory, such as the Pasadena game (originating in <xref ref-type="bibr" rid="B32">Nover &#38; H&#225;jek 2004</xref>) and the Agnesi game (see <xref ref-type="bibr" rid="B2">Alexander 2012</xref>; <xref ref-type="bibr" rid="B35">Poisson 1824</xref>). But the problem is not merely hypothetical, as those trouble-making cases from decision theory are. As I will argue, these three claims lead to a practical ethical problem. In practice, there is reason to think that the total moral value of the future follows a similarly problematic probability distribution. And, if so, <italic>every</italic> option we might ever choose in practice will have undefined expected value.<xref ref-type="fn" rid="n5">5</xref></p>
<p>If we have one of these problematic probability distributions over the total value of the future, and we accept Impartiality, Additivity, and expected value theory alone, then we face a dire <italic>reductio ad absurdum</italic>. We cannot evaluate any option ever available to us in practice; we cannot compare it to any other such option, not even to options <italic>identical</italic> to itself. We can <italic>never</italic> say how our options compare in terms of moral betterness.</p>
<p>This implication seems absurd. But it is not immediately clear how we might avoid it, at least without abandoning Impartiality or Additivity&#8212;without admitting that the time at which a life is lived <italic>can</italic> matter morally, nor admitting that the ranking of outcomes deviates from that of their total values. If we find both claims compelling, can we hold onto them and extend our comparisons to prospects without slipping into absurdity?</p>
<p>One way we might do so&#8212;which I will discuss but not ultimately endorse&#8212;is by replacing expected value theory with an alternative theory that exhibits sensitivity to risk (e.g., expected <italic>utility</italic> theory with a non-linear utility function, or a version of <italic>risk-weighted</italic> expected utility theory). With the right profile of risk aversion and risk seeking, such theories may effectively replace prospects like the Pasadena game with better-behaved ones. Given this, we may have a novel argument for risk sensitivity in the moral context: it seems we may need to be risk-sensitive to compare our options <italic>at all</italic> in practice.</p>
<p>In this paper, I seek to determine, in effect, whether risk sensitivity is the <italic>only</italic> way out. If you find Impartiality, Additivity, and the risk neutrality of expected value theory convincing, is there some way to salvage them?<xref ref-type="fn" rid="n6">6</xref> Can the above <italic>reductio</italic> be avoided without allowing risk sensitivity, or denying Impartiality or Additivity?</p>
<p>To preserve risk neutrality, it is necessary to <italic>extend</italic> expected value theory to compare troublesome options. The literature already features various proposals for how to do so (e.g., <xref ref-type="bibr" rid="B10">Colyvan 2008</xref>; <xref ref-type="bibr" rid="B15">Easwaran 2008</xref>; <xref ref-type="bibr" rid="B17">2014b</xref>; <xref ref-type="bibr" rid="B29">Meacham 2019</xref>). But, as it turns out, many of these existing proposals fail&#8212;I argue that they cannot compare the options we face in practice. Despite this, I describe a theory that may succeed in doing so. With such a theory, we can compare the problematic options I describe. And so we may be able to avoid the <italic>reductio</italic> that expected value theory, Impartiality, and Additivity brought upon us, and do so without endorsing risk sensitivity.</p>
</sec>
<sec>
<title>2. Why Would the Expected Value of the Future Be Undefined?</title>
<p>Decision theorists have long recognised prospects that lack well-defined, finite expected values. Some prospects lack such expected values because they feature outcomes with <italic>infinite</italic> value, such as in Pascal&#8217;s Wager. But I will set aside such prospects in this paper, and assume that outcomes must have only finite value.<xref ref-type="fn" rid="n7">7</xref></p>
<p>But even if we exclude infinitely valuable outcomes, some prospects still lack well-defined expected values. One frequently discussed such prospect is that of the <italic>Pasadena game</italic>.<xref ref-type="fn" rid="n8">8</xref></p>
<disp-quote>
<p><italic>Pasadena game</italic>. (An outcome with) value 2 with probability <inline-formula><mml:math id="Eq001-mml"><mml:mrow><mml:mstyle scriptlevel='+1'><mml:mfrac bevelled='true'><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac></mml:mstyle></mml:mrow></mml:math></inline-formula>;</p>
<p>value <inline-formula><mml:math id="Eq002-mml"><mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mn>2</mml:mn></mml:mrow></mml:math></inline-formula> with probability <inline-formula><mml:math id="Eq003-mml"><mml:mrow><mml:mstyle scriptlevel='+1'><mml:mfrac bevelled='true'><mml:mn>1</mml:mn><mml:mn>4</mml:mn></mml:mfrac></mml:mstyle></mml:mrow></mml:math></inline-formula>;</p>
<p>value <inline-formula><mml:math id="Eq004-mml"><mml:mrow><mml:mstyle scriptlevel='+1'><mml:mfrac bevelled='true'><mml:mn>8</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mstyle></mml:mrow></mml:math></inline-formula> with probability <inline-formula><mml:math id="Eq005-mml"><mml:mrow><mml:mstyle scriptlevel='+1'><mml:mfrac bevelled='true'><mml:mn>1</mml:mn><mml:mn>8</mml:mn></mml:mfrac></mml:mstyle></mml:mrow></mml:math></inline-formula>;</p>
<p>&#8230;</p>
<p>value <inline-formula><mml:math id="Eq007-mml"><mml:mrow><mml:mfrac><mml:msup><mml:mn>2</mml:mn><mml:mi>n</mml:mi></mml:msup><mml:mi>n</mml:mi></mml:mfrac><mml:msup><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:math></inline-formula> with probability <inline-formula><mml:math id="Eq008-mml"><mml:mrow><mml:mstyle scriptlevel='+1'><mml:mfrac bevelled='true'><mml:mn>1</mml:mn><mml:mrow><mml:msup><mml:mn>2</mml:mn><mml:mi>n</mml:mi></mml:msup></mml:mrow></mml:mfrac></mml:mstyle></mml:mrow></mml:math></inline-formula> for each positive integer <inline-formula><mml:math id="Eq009-mml"><mml:mi>n</mml:mi></mml:math></inline-formula>.</p>
</disp-quote>
<p>What is the game&#8217;s expected value? If we try to calculate it in the order the outcomes are listed, we obtain the series <inline-formula><mml:math id="Eq010-mml"><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:mstyle scriptlevel='+1'><mml:mfrac bevelled='true'><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac></mml:mstyle><mml:mo>+</mml:mo><mml:mstyle scriptlevel='+1'><mml:mfrac bevelled='true'><mml:mn>1</mml:mn><mml:mn>3</mml:mn></mml:mfrac></mml:mstyle><mml:mo>&#x2212;</mml:mo><mml:mstyle scriptlevel='+1'><mml:mfrac bevelled='true'><mml:mn>1</mml:mn><mml:mn>4</mml:mn></mml:mfrac></mml:mstyle><mml:mo>+</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>+</mml:mo><mml:mstyle scriptlevel='+1'><mml:mfrac><mml:mrow><mml:msup><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:mrow><mml:mi>n</mml:mi></mml:mfrac></mml:mstyle><mml:mo>+</mml:mo><mml:mo>&#x2026;</mml:mo></mml:mrow></mml:math></inline-formula>. This series, also known as the alternating harmonic series, fails to be absolutely convergent. If we were to naively add up its terms, then, by choosing the right order of summation, we could obtain <italic>any</italic> total we wanted.<xref ref-type="fn" rid="n9">9</xref> So, we cannot say that the game has any particular expected value at all (see <xref ref-type="bibr" rid="B32">Nover &#38; H&#225;jek 2004</xref>)&#8212;in this sense, the Pasadena game <italic>defies expectations</italic> (or is <italic>expectation-defying</italic>). And so expected value theory cannot tell us how it compares to any outcome, nor to any other option, nor even to itself. If options were to be compared by expected value theory alone, then the Pasadena game would be no better than, no worse than, nor equally good as <italic>any</italic> other option.</p>
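<p>The order-dependence can be checked directly. The following numerical sketch (my illustration, not part of the original argument; the variable names are mine) sums the very same terms of the game&#8217;s expected-value series in two different orders and obtains two different totals.</p>

```python
# Expected-value contribution of the Pasadena game's n-th outcome:
# value (2**n / n) * (-1)**(n - 1) with probability 1 / 2**n, so each
# outcome contributes (-1)**(n - 1) / n -- the alternating harmonic series.
def term(n):
    return (-1) ** (n - 1) / n

# Summed in the listed order, the partial sums approach ln(2) ~ 0.693.
natural = sum(term(n) for n in range(1, 200_001))

# Rearranged (two positive terms, then one negative), the same terms
# instead approach (3/2) * ln(2) ~ 1.040.
rearranged, pos_n, neg_n = 0.0, 1, 2
for _ in range(100_000):
    rearranged += term(pos_n) + term(pos_n + 2) + term(neg_n)
    pos_n += 4
    neg_n += 2
```

<p>Neither order of summation is privileged, which is why no particular total can be called <italic>the</italic> expected value of the game.</p>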
<p>A similar prospect is the <italic>Agnesi game</italic>. Unlike the Pasadena game, it gives a continuous (rather than discrete) probability distribution over possible values. It can result in an outcome of <italic>any</italic> real value <inline-formula><mml:math id="Eq011-mml"><mml:mi>v</mml:mi></mml:math></inline-formula>; its probability density over value is given by the following function, also known as the <italic>Witch of Agnesi</italic> or (an example of) the Cauchy distribution.<xref ref-type="fn" rid="n10">10</xref></p>
<disp-formula id="FD1"><mml:math id="Eq012-mml"><mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>v</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mo>&#x03C0;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>+</mml:mo><mml:msup><mml:mi>v</mml:mi><mml:mn>2</mml:mn></mml:msup></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mfrac></mml:mrow></mml:math></disp-formula>
<p>On a graph, its distribution looks like this, symmetric about 0.</p>
<fig id="F1">
<caption>
<p><bold>Figure 1:</bold> Probability density function <inline-formula><mml:math id="Eq013-mml"><mml:mrow><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>v</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> for the Agnesi game.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="ergo-9269_wilkinson-g1.png"/>
</fig>
<p>Try to take the expected value of this prospect and you will find that it has none (<xref ref-type="bibr" rid="B35">Poisson 1824</xref>). For continuous distributions like this, the expected value is given by the integral of <inline-formula><mml:math id="Eq014-mml"><mml:mrow><mml:mrow><mml:mi>v</mml:mi><mml:mo lspace="0.222em" rspace="0.222em">&#x00D7;</mml:mo><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>v</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> from negative infinity to positive infinity (analogous to an expected sum: <inline-formula><mml:math id="Eq015-mml"><mml:mrow><mml:mrow><mml:mi>v</mml:mi><mml:mo lspace="0.222em" rspace="0.222em">&#x00D7;</mml:mo><mml:mi>P</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>v</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> over all possible values <inline-formula><mml:math id="Eq016-mml"><mml:mi>v</mml:mi></mml:math></inline-formula>). But, for the Agnesi game, that integral between 0 and positive infinity is positively infinite! And, from 0 to negative infinity, it is negatively infinite! Sum these integrals together&#8212;equivalently, take the integral over <italic>all</italic> possible values of <inline-formula><mml:math id="Eq017-mml"><mml:mi>v</mml:mi></mml:math></inline-formula>&#8212;and the total is undefined. Much like the Pasadena game&#8217;s expected sum, the Agnesi game&#8217;s expected integral fails to converge absolutely. You might expect its expected value to converge to 0, since the distribution is symmetric about 0. But, no, it too defies expectations. So, expected value theory will fail to compare it to any outcome, to any other option, and even to itself.</p>
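<p>For the Agnesi game, the failure can be exhibited in closed form: the antiderivative of v &#x00D7; p(v) is ln(1 + v&#x00B2;)/(2&#x03C0;), so the truncated integral&#8217;s value depends entirely on how its two cutoffs are sent to infinity. A brief sketch (mine, for illustration; the cutoff values are arbitrary):</p>

```python
from math import log, pi

# Truncated expected-value integral for the Agnesi/Cauchy density
# p(v) = 1 / (pi * (1 + v**2)): the antiderivative of v * p(v) is
# log(1 + v**2) / (2 * pi).
def truncated_mean(a, b):
    """Integral of v * p(v) from -a to +b, for a, b > 0."""
    return (log(1 + b ** 2) - log(1 + a ** 2)) / (2 * pi)

# Symmetric cutoffs give exactly 0, whatever the cutoff...
sym = truncated_mean(1e6, 1e6)

# ...but let the upper cutoff grow twice as fast and the truncated
# "mean" tends to log(2) / pi ~ 0.2206 instead.  No single limit exists.
skew = truncated_mean(1e6, 2e6)
```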
<p>You might think that neither of these prospects are realistic&#8212;that they are merely contrived, hypothetical options that we are sure never to encounter beyond the pages of a philosophy journal. As H&#225;jek (<xref ref-type="bibr" rid="B22">2014: 565</xref>) says of one description of the Pasadena game, you might think that considering such an option is &#8220;&#8230;a highly idealised thought experiment about a physically impossible game.&#8221; If so, you might not be troubled that expected value theory cannot compare either of these prospects to any other. You might think that we should simply ignore such prospects, and that expected value theory still suffices for real-world decision-making.</p>
<p>Unfortunately, we cannot&#8212;there is reason to think we face such prospects in practice. When evaluating our options morally, if we consider the prospects for the moral value of the distant future and we maintain Impartiality and Additivity, then we have reason to think that every option ever available to us defies expectations. In the remainder of this section, I give two distinct arguments to this effect, the second more compelling than the first.<xref ref-type="fn" rid="n11">11</xref></p>
<p>But, first, a brief note on probabilities. I assume that the notion of an outcome&#8217;s probability that ultimately determines moral betterness must be one of two notions. The first is its <italic>evidential</italic> probability: how probable that outcome is to result from the given option, on the present evidence of the agent deciding between that option and others (see <xref ref-type="bibr" rid="B53">Williamson 2000: 209</xref>). The second possible notion is the outcome&#8217;s <italic>subjective</italic> probability: how confident the decision-making agent is that that outcome will result from a given option.<xref ref-type="fn" rid="n12">12</xref> If evidential probabilities are the morally relevant ones, and if our evidence prescribes expectation-defying prospects, then we will face difficulties. Or, if subjective probabilities are the relevant ones, agents who form their beliefs rationally given their evidence will face difficulties.</p>
<sec>
<title><italic>2.1. A Possibility of Pasadena</italic></title>
<p>A simple argument that our prospects for the total value of the future defy expectations goes like this.</p>
<p>It seems <italic>possible</italic> that a Pasadena game (or Agnesi game, or similar such game) will be played at some point in the future. It is possible that a stranger may approach you (or some other agent) and offer to toss a coin until it first lands heads, and effect some event with moral value determined by the number of coin flips. Likewise, it is possible that some other mechanism will produce moral value according to the same distribution of objective chances. Although perhaps physically unrealistic, we can at least <italic>conceive</italic> of this happening. It would be no (logical or metaphysical) impossibility for this to occur. And, given how little we know about the far future, you might think it overconfident to assign probability zero to any agent ever being subjected to such a game. So, the evidential probability of a Pasadena game someday being played, it seems, must be greater than zero.<xref ref-type="fn" rid="n13">13</xref></p>
<p>And, as has often been discussed before, <italic>any</italic> prospect with real, non-zero probability <inline-formula><mml:math id="Eq019-mml"><mml:mi>p</mml:mi></mml:math></inline-formula> of the Pasadena game, no matter what other prospects it is mixed with, inherits the problems of the game itself&#8212;like the game itself, having any such probability <inline-formula><mml:math id="Eq020-mml"><mml:mi>p</mml:mi></mml:math></inline-formula> of the Pasadena game brings undefined expected value (<xref ref-type="bibr" rid="B24">H&#225;jek and Smithson 2012: 39&#8211;42</xref>; <xref ref-type="bibr" rid="B3">Bartha 2016: 802&#8211;803</xref>). So, as long as we have some probability <inline-formula><mml:math id="Eq021-mml"><mml:mi>p</mml:mi></mml:math></inline-formula> of such a Pasadena game over moral value being run somewhere in the future, the overall prospect for the total value of the future will be undefined.</p>
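<p>The H&#225;jek&#8211;Smithson point can be illustrated numerically (my own sketch; the mixture weight q and the target are arbitrary choices). A mixture assigning probability q to the Pasadena game and 1 - q to a sure value of 0 has an expected-value series with terms q &#x00D7; (-1)^(n-1)/n: still only conditionally convergent, so a greedy rearrangement can steer its partial sums toward any target whatsoever.</p>

```python
# Greedy Riemann rearrangement of the mixture's expected-value series.
# With probability q of the Pasadena game and 1 - q of a sure 0, the
# series terms are q * (-1)**(n - 1) / n.  Add unused positive terms
# while below the target, unused negative terms while above it.
q, target = 0.5, 3.0
total, pos_n, neg_n = 0.0, 1, 2   # next unused odd (+) and even (-) index
for _ in range(1_000_000):
    if total <= target:
        total += q / pos_n
        pos_n += 2
    else:
        total -= q / neg_n
        neg_n += 2
# total is now within a vanishing distance of the chosen target.
```

<p>Since the target was arbitrary, the mixture has no privileged expected value either, no matter how small q is.</p>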
<p>But is there such a probability of the Pasadena game someday being played? I do not think it clear that the answer must be yes. One reason for doubt is that the correct theory of epistemic rationality may be <italic>knowledge-based</italic>: it may include as evidence everything the agent <italic>knows</italic>, and so require that evidential probabilities be assigned only after conditionalising on the agent&#8217;s knowledge (see <xref ref-type="bibr" rid="B53">Williamson 2000: &#167;10.3</xref>).<xref ref-type="fn" rid="n14">14</xref> And you might think that we <italic>know</italic> that no one will ever be subjected to the Pasadena game. Why? Perhaps you know that some particular physical law holds, and any version of the Pasadena game that you can imagine would violate it. Or perhaps you note that there are infinitely many different <italic>possible</italic> games that future people might face in their lives, but at most finitely many that anyone actually faces&#8212;from this, perhaps you can know that the Pasadena game, specifically, won&#8217;t be among them. Or perhaps you simply think it so implausible or subjectively improbable that the Pasadena game is ever played that you conclude that you know it will not be. Whatever the reason, you might then conditionalise on this knowledge and assign the game evidential probability zero.</p>
<p>Another reason to doubt that the evidential probability of the Pasadena game is non-zero is this. It&#8217;s one thing to think that any possible <italic>outcome</italic> should be assigned non-zero probability. But it&#8217;s quite another to think that any possible probability distribution <italic>over</italic> outcomes should be assigned non-zero probability. It may be too overconfident to assign probability zero to the future having value <inline-formula><mml:math id="Eq022-mml"><mml:mi>v</mml:mi></mml:math></inline-formula> or greater, for any <inline-formula><mml:math id="Eq023-mml"><mml:mi>v</mml:mi></mml:math></inline-formula>.<xref ref-type="fn" rid="n15">15</xref> But it would be a strictly stronger, and so less plausible, claim to say the same of assigning probability zero to the future having any possible <italic>probability distribution</italic> over values <inline-formula><mml:math id="Eq024-mml"><mml:mi>v</mml:mi></mml:math></inline-formula> and above. Perhaps doing the latter would not be too overconfident. Or at least, given the dire implications if you do so, perhaps epistemic rationality should not require that you entertain every such possible probability distribution (even if it <italic>does</italic> require you to entertain every possible <italic>outcome</italic>).</p>
<p>For either of these reasons, or perhaps others, you might be unconvinced of this argument for us facing expectation-defying moral prospects in practice. To be truly worried that expected value theory is not up to the task of comparing our moral prospects, we may need a more compelling motivation&#8212;more compelling than the observation that facing the Pasadena game is merely <italic>possible</italic>.</p>
</sec>
<sec>
<title><italic>2.2. A Model of the Distant Future</italic></title>
<p>Here is a more compelling argument that we face expectation-defying prospects in practice.</p>
<p>Consider some point in the distant future after which our empirical evidence tells us almost nothing about what will occur when. Specifically, let <bold>t</bold> be some future time (or more accurately, for reasons to do with general relativity, a point in spacetime)<xref ref-type="fn" rid="n16">16</xref> such that all of our specific predictions of events <italic>after</italic> <bold>t</bold> are merely the uniform continuation of continuous physical trends from <italic>before</italic> it. In effect, <bold>t</bold> is a point after which all of our particular predictions of valuable future events are exhausted. Perhaps <bold>t</bold> is a billion years in the future; perhaps just 1,000 years in the future.<xref ref-type="fn" rid="n17">17</xref></p>
<p>However late <bold>t</bold> is, it is possible that humanity survives until then (or at least that <italic>some</italic> form of morally valuable life in our causal future survives until then). Regardless of how pessimistic you are about humanity&#8217;s prospects, it seems wildly overconfident to assign probability zero to us making it until after <bold>t</bold>, or to say that we <italic>know</italic> that we will not survive until then. (Indeed, it seems <italic>far more</italic> overconfident than assigning probability zero to the Pasadena game someday being played, or claiming knowledge that it won&#8217;t be.) Then, conditional on us surviving until <bold>t</bold>, what of the prospects for life <italic>beyond</italic> that, as time stretches out indefinitely? What is the conditional probability of a further value <inline-formula><mml:math id="Eq025-mml"><mml:mi>v</mml:mi></mml:math></inline-formula> arising after <bold>t</bold>? Since we have no empirical evidence about events beyond <bold>t</bold>, by definition, the answer is not so clear.</p>
<p>Consider one way we might model value after <bold>t</bold>, albeit a very speculative one. In broad strokes, it will be a reasonably plausible one, but certainly not the only plausible model we might adopt. (For reasons explained below, the existence of other plausible models won&#8217;t detract too much from the lessons we can draw from this one.)</p>
<p>We might model the moral value occurring after <bold>t</bold> as the sum of value at discrete, isolated, and reproducing <italic>clusters</italic> of life. Focusing on humanity and other Earth-bound life, at present, we are clustered together at one location, on a single planet. If we were to stay in this situation, it would be appropriate to assign a constant probability to all such life going extinct each year (or, since the risk of extinction may vary over time, at least a minimum, non-zero probability). But, more realistically, humanity might <italic>not</italic> remain so clustered; perhaps we will spread through space into many such clusters. As we spread further and further, some such clusters will be more and more isolated from others. For instance, if we imagine life spreading to different planet-like bodies throughout space (perhaps in different galaxies, or as far from each other as we like), the maximum spatial distance between one planet and its most distant counterparts will become greater and greater. Each such planet thereby becomes more and more isolated from its most distant counterparts&#8212;its inhabitants become better and better protected from calamities that arise on the most distant planets.</p>
<p>Indeed, given enough time, it most likely becomes <italic>physically impossible</italic> for events within one such cluster of life to affect other discrete clusters. This is implied by the most widely-accepted cosmological model (the &#8220;flat-lambda&#8221; model), which predicts that, as our universe evolves in the distant future, it will continue to expand at an ever-accelerating rate&#8212;many star systems, galaxies, groups of galaxies, and other bodies about which civilisation might cluster will be pulled apart. Eventually, such clusters will be moving away from one another so quickly (and continuing to accelerate) that events in one cluster will never be able to affect any other cluster, even if their effects travel at the speed of light (<xref ref-type="bibr" rid="B31">Nagamine and Loeb 2003</xref>; <xref ref-type="bibr" rid="B9">Busha et al. 2003</xref>; see <xref ref-type="bibr" rid="B33">Ord 2021</xref> for an accessible survey).<xref ref-type="fn" rid="n18">18</xref> And, <italic>if</italic> our descendants successfully isolate themselves from one another in this way, their extinction then seems far less likely. The extinction of humanity as a whole (and indeed all morally valuable life) would then require great calamities to happen <italic>independently</italic> in each of many isolated clusters of civilisation. This is far less likely than any individual calamity.<xref ref-type="fn" rid="n19">19</xref> And, the more clusters, the lower the probability of overall extinction in a given time period.<xref ref-type="fn" rid="n20">20</xref></p>
<p>In this model of the future, absent such calamities, the number of clusters increases over time, at least in expectation. We can assume that each existing isolated cluster has the same (independent) probability of &#8216;reproducing&#8217; by settling a new location that will eventually be isolated from it, and of thereby creating a new cluster. I will also assume, as seems at least possible, that the probability of a cluster reproducing in a given time period is at least as great as its probability of dying off.</p>
<p>And, the more clusters, the more moral value there plausibly is. We can assume&#8212;conservatively, as it ignores growth within each cluster&#8212;that the total moral value arising in the world in a given year is proportional to the number of such clusters that then exist. The total (absolute) value after <bold>t</bold> then, again assuming Impartiality and Additivity, will be roughly proportional to the sum of the lifetimes of every such cluster to ever exist. But that total value may be positive or negative&#8212;there is some risk that the future of life in our universe may be one of immense misery. Or, at least, we should be uncertain about the relation between total number of cluster-years and total value&#8212;uncertain of the average value of a year of such a cluster existing. For simplicity, I will assume that there is a simple distribution over what this average value will be: probability 0.5 that it is some value <inline-formula><mml:math id="Eq026-mml"><mml:mi>v</mml:mi></mml:math></inline-formula> and probability 0.5 that it is <inline-formula><mml:math id="Eq027-mml"><mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mi>v</mml:mi></mml:mrow></mml:math></inline-formula>; and this is (roughly) independent of our uncertainty of how <italic>many</italic> clusters there are. (This distribution is unrealistic but, with some further tweaks below, will be realistic enough to draw some useful lessons.)</p>
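<p>To get a feel for this model, here is a toy simulation (my own sketch, not the author&#8217;s; all parameter values and names are hypothetical) of the boundary case the text allows, in which each cluster is exactly as likely to reproduce as to die off in a given period. The totals of cluster-years it produces are extremely dispersed across runs, which is the qualitative feature that matters here.</p>

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# One run of the cluster model: start with a single cluster; in each
# period every existing cluster independently spawns a new isolated
# cluster with probability b and dies off with probability d.  The
# critical case b = d is the weakest case the text's assumptions allow
# (reproduction at least as likely as dying off).
def total_cluster_years(b=0.05, d=0.05, horizon=2_000):
    clusters, years = 1, 0
    for _ in range(horizon):
        if clusters == 0:
            break               # all clusters extinct
        years += clusters       # one cluster-year per surviving cluster
        births = sum(random.random() < b for _ in range(clusters))
        deaths = sum(random.random() < d for _ in range(clusters))
        clusters += births - deaths
    return years

samples = [total_cluster_years() for _ in range(100)]
```

<p>Weighting each such total of cluster-years by an average value of +v or -v with equal probability then yields a heavy-tailed, sign-symmetric distribution over total value.</p>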
<p>If we combine these assumptions, the arrangement of clusters forms a stochastic process known as a <italic>birth-and-death</italic> process (or, more specifically, a <italic>Kendall process</italic>&#8212;see <xref ref-type="bibr" rid="B26">Kendall 1948</xref>). Individual clusters reproduce and die off independently, much like members of a population. And what we care about is the total number of cluster-years that are ever lived, weighted by the average moral value of each cluster-year. (By assumption, it is equiprobable that the average cluster-year is positive or negative in value.) This gives us a rather complicated probability distribution over value.<xref ref-type="fn" rid="n21">21</xref> But, fortunately, there is a prospect with a simpler distribution that shares its key properties: the Aquila game.<xref ref-type="fn" rid="n22">22</xref> For simplicity, I will focus on the Aquila game, as given by the equation and plot below.</p>
<disp-formula id="FD2"><mml:math id="Eq028-mml"><mml:mrow><mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>v</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mfrac><mml:mi>a</mml:mi><mml:mrow><mml:mi>b</mml:mi><mml:mo>+</mml:mo><mml:mrow><mml:mrow><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>v</mml:mi><mml:mo stretchy='false'>&#x007C;</mml:mo></mml:mrow><mml:msqrt><mml:mrow><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>v</mml:mi><mml:mo stretchy='false'>&#x007C;</mml:mo></mml:mrow></mml:msqrt></mml:mrow></mml:mrow></mml:mfrac><mml:mspace width="1em"/><mml:mrow><mml:mtext> for some constants&#x00A0;</mml:mtext><mml:mi>a</mml:mi></mml:mrow></mml:mrow></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mi>b</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:mrow></mml:math></disp-formula>
<fig id="F2">
<caption>
<p><bold>Figure 2:</bold> The probability density function over value for the Aquila game.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="ergo-9269_wilkinson-g2.png"/>
</fig>
<p>As with the Pasadena and Agnesi games, attempt to take the Aquila game&#8217;s expected value and you will find that it has none. Its distribution is symmetric about 0, so you might expect it to have expectation 0. But, like the Agnesi game above, the probability density in its tails&#8212;as <inline-formula><mml:math id="Eq029-mml"><mml:mi>v</mml:mi></mml:math></inline-formula> approaches <inline-formula><mml:math id="Eq030-mml"><mml:mrow><mml:mo>&#x00B1;</mml:mo><mml:mi mathvariant="normal">&#x221E;</mml:mi></mml:mrow></mml:math></inline-formula>&#8212;approaches 0 sufficiently slowly that the expected value integral is undefined.</p>
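<p>This can be checked numerically. The following sketch (illustrative only, not from the paper, with the parameters set to <italic>a</italic> = <italic>b</italic> = 1) confirms that the Aquila density has finite total mass while the truncated mean of |<italic>v</italic>| grows without bound, so the expected value integral is undefined.</p>

```python
import math

# Illustrative sketch (a = b = 1; a is really just a normalising constant):
# the Aquila density p(v) = a / (b + |v|^(3/2)) has finite total mass, but
# the mean of |v| diverges, so the expected value is undefined. Substituting
# v = t^2 turns the tail integrals over [0, V] into smooth, well-behaved
# integrals over [0, sqrt(V)].

def trapezoid(f, lo, hi, n=100_000):
    h = (hi - lo) / n
    return h * (0.5 * f(lo) + 0.5 * f(hi) + sum(f(lo + i * h) for i in range(1, n)))

def prob_mass(V):
    # ∫_0^V dv / (1 + v^(3/2))  =  ∫_0^√V 2t / (1 + t³) dt  — converges as V → ∞
    return trapezoid(lambda t: 2 * t / (1 + t ** 3), 0.0, math.sqrt(V))

def truncated_mean(V):
    # ∫_0^V v dv / (1 + v^(3/2))  =  ∫_0^√V 2t³ / (1 + t³) dt  — grows like 2√V
    return trapezoid(lambda t: 2 * t ** 3 / (1 + t ** 3), 0.0, math.sqrt(V))

for V in (1e2, 1e4, 1e6):
    print(f"V = {V:.0e}:  mass(0..V) = {prob_mass(V):.4f},  "
          f"truncated E|v| = {truncated_mean(V):.1f}")
```

The one-sided probability mass settles down as the cutoff grows, while the truncated mean keeps growing (roughly like twice the square root of the cutoff): precisely the slow tail decay that defeats the expectation integral.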
<p>The same goes for the prospect for the total value of the world <italic>overall</italic>, even if we take into account other very different models of what occurs after <bold>t</bold>, and even if we include value occurring both before and after <bold>t</bold>. Why? As with the Pasadena game, we can mix the Aquila game with any other prospect and the overall prospect will defy expectations too. Similarly, we can add the payoff of Aquila to any other prospect (such as the prospect over the value of events before <bold>t</bold>) and the prospect over the overall payoff will defy expectations.<xref ref-type="fn" rid="n23">23</xref> So, if (a prospect that behaves like) the Aquila game is even a minimally probable prospect for what happens after <bold>t</bold>, then expected value theory will fail to compare <italic>every</italic> pair of options we might ever come across in practice.</p>
<p>But is the model described above even a <italic>plausible</italic> model for the value of the distant future? Must we really assign <italic>any</italic> probability to a prospect like the Aquila game being generated, such that the overall prospect inherits its expectation-defying property? You might be sceptical. Here are three reasons why you might be, and why I do not think they undermine the claim that prospects for the total value of the world will behave like the Aquila game.</p>
<p>The first reason for scepticism: perhaps the number of clusters of civilisation, and their value, simply couldn&#8217;t continue growing forever. Perhaps eternal exponential growth, whether it is achieved by spreading outwards in an ever-expanding cosmos or by creating baby universes, is physically (or metaphysically) impossible. This may well be true! But, most plausibly, we do not <italic>know</italic> that it is. And that could be enough to make it rational to assign at least <italic>some</italic> non-zero probability to such an average growth rate (at least in the absence of catastrophes). But even if we did know that eternal exponential growth were impossible, the above model does not require it. Indeed, the Aquila game assigns probability 0 to the total survival time, or the total number of cluster-years, being infinite. We cannot rule out the model on these grounds.</p>
<p>The second reason why the above model may seem unrealistic: you might think that some possible extinction scenarios would strike every cluster of civilisation at once; perhaps some exotic physical phenomenon could simultaneously remove the conditions necessary for morally valuable life everywhere. If so, the annual probability of extinction of each cluster would not be entirely independent of others&#8217;. And, given this, the annual probability of overall extinction would not be brought arbitrarily close to 0 by simply adding more and more clusters. But still this does not prevent the prospect of overall future value from resembling the Aquila game. Even if there is some annual probability of civilisation-wide extinction, whether we avoid extinction in one year (conditional on having survived until the previous year) is not independent of whether we avoid it in every other year (conditional on having survived until the year before). In some states of the world, phenomena that extinguish all life at once are physically possible; in some states they are not. In states of the latter kind, having arbitrarily many isolated clusters of life does provide arbitrarily much protection from extinction. So, we should assign at least <italic>some</italic> non-zero probability to such extinction-causing phenomena being physically impossible. And so we can treat the overall prospect as a mixture of the prospect in which such phenomena are impossible and the prospect in which they are possible&#8212;in effect, a gamble between the Aquila game and something else. And so the overall prospect we obtain will still have tails resembling the Aquila game, since it offers some non-zero probability of playing such a game. And, since the Aquila game defies expectations, then the overall prospect will too. So it suffices to analyse the Aquila game in place of the more complicated overall prospect.</p>
<p>The third reason: it seems implausible that the average life is just as likely to be negative in value as it is to be positive, and of equal absolute value (on whichever interval scale we use to represent value). It seems to me at least that any future civilisation will more likely aim to make its descendants happy than aim to make them miserable (or, more generally, to have valuable experiences rather than disvaluable ones), and that its probability of success in this goal is better than chance. This probability of success seems <italic>far</italic> better than chance once we recognise that humanity in the far future, if it&#8217;s still around, will likely have access to far more advanced technologies and greater resources than we do. Or perhaps you are pessimistic about humanity&#8217;s future technological level, its available resources, or its inclination to benefit posterity. Perhaps our descendants are particularly likely to succumb to scenarios of widespread misery (for discussion of such possibilities, see <xref ref-type="bibr" rid="B4">Baumann 2017</xref>). If you think so, you might think the prospect for the average life skews towards misery rather than happiness. Either way, my earlier assumption that the average life has probability 0.5 of having some value <inline-formula><mml:math id="Eq031-mml"><mml:mrow><mml:mi>v</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula> and probability 0.5 of <inline-formula><mml:math id="Eq032-mml"><mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mi>v</mml:mi></mml:mrow></mml:math></inline-formula> would be false. Rather, one of these possibilities will have a higher probability than the other, and so the distribution will skew one way or the other.<xref ref-type="fn" rid="n24">24</xref></p>
<p>Given this skew, the true distribution over future moral value will not be symmetric like the Aquila game. It will be skewed in either the positive or negative direction, as illustrated below. This more general <italic>Skewed Aquila Game</italic> has a probability distribution given by the following equation (for some positive <inline-formula><mml:math id="Eq033-mml"><mml:msub><mml:mi>a</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="Eq034-mml"><mml:msub><mml:mi>a</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:math></inline-formula>, representing the relative probabilities of total value being positive or negative).</p>
<disp-formula id="FD3"><mml:math id="Eq035-mml"><mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>v</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mtable columnspacing="5pt" displaystyle="true" rowspacing="0pt"><mml:mtr><mml:mtd><mml:mrow><mml:mfrac><mml:msub><mml:mi>a</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mrow><mml:mi>b</mml:mi><mml:mo rspace="0em">+</mml:mo><mml:mo fence="false" rspace="0.167em" stretchy='false'>&#x007C;</mml:mo><mml:mi>v</mml:mi><mml:mo fence="false" rspace="0.167em" stretchy='false'>&#x007C;</mml:mo><mml:msqrt><mml:mrow><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>v</mml:mi><mml:mo stretchy='false'>&#x007C;</mml:mo></mml:mrow></mml:msqrt></mml:mrow></mml:mfrac><mml:mspace width="1em"/></mml:mrow></mml:mtd><mml:mtd><mml:mrow><mml:mrow><mml:mtext>for&#x00A0;</mml:mtext><mml:mi>v</mml:mi></mml:mrow><mml:mo>&#x003E;</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mrow><mml:mfrac><mml:msub><mml:mi>a</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mrow><mml:mi>b</mml:mi><mml:mo rspace="0em">+</mml:mo><mml:mo fence="false" rspace="0.167em" stretchy='false'>&#x007C;</mml:mo><mml:mi>v</mml:mi><mml:mo fence="false" rspace="0.167em" stretchy='false'>&#x007C;</mml:mo><mml:msqrt><mml:mrow><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>v</mml:mi><mml:mo stretchy='false'>&#x007C;</mml:mo></mml:mrow></mml:msqrt></mml:mrow></mml:mfrac><mml:mspace width="1em"/></mml:mrow></mml:mtd><mml:mtd><mml:mrow><mml:mrow><mml:mtext>for&#x00A0;</mml:mtext><mml:mi>v</mml:mi></mml:mrow><mml:mo>&#x003C;</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:mrow></mml:math></disp-formula>
<fig id="F3">
<caption>
<p><bold>Figure 3:</bold> A probability density function over value for (a version of) the Skewed Aquila game.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="ergo-9269_wilkinson-g3.png"/>
</fig>
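<p>The role of the skew parameters can be illustrated numerically. In the sketch below (illustrative only, with <italic>b</italic> = 1 and hypothetical values <italic>a</italic><sub>1</sub> = 2, <italic>a</italic><sub>2</sub> = 1), the two tails of the Skewed Aquila density have the same shape, so the probability of positive total value stands to that of negative total value exactly in the ratio <italic>a</italic><sub>1</sub>/<italic>a</italic><sub>2</sub>.</p>

```python
import math

# Illustrative check (b = 1; a1 = 2 and a2 = 1 are hypothetical values): the
# two tails of the Skewed Aquila density differ only by the constants a1 and
# a2, so the odds of positive versus negative total value equal a1/a2.

def trapezoid(f, lo, hi, n=100_000):
    h = (hi - lo) / n
    return h * (0.5 * f(lo) + 0.5 * f(hi) + sum(f(lo + i * h) for i in range(1, n)))

def tail_mass(a, V=1e6):
    # a * ∫_0^V dv / (1 + v^(3/2)), computed via the substitution v = t²
    return a * trapezoid(lambda t: 2 * t / (1 + t ** 3), 0.0, math.sqrt(V))

a1, a2 = 2.0, 1.0
pos, neg = tail_mass(a1), tail_mass(a2)
print(f"P(v > 0) / P(v < 0) = {pos / neg:.3f}")  # equals a1/a2
```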
<p>For simplicity, in much of what follows, I will focus on the more basic Aquila game. The problems I will describe that arise for comparing the Aquila game to alternatives will arise with equal force if we substitute in the Skewed Aquila Game, a mixture of the Aquila game with something else, or other more complicated variations.</p>
</sec>
</sec>
<sec>
<title>3. Challenges for Decision-Making</title>
<p>If you model the value of the distant future as involving <italic>any</italic> non-zero probability of the Aquila game, the Skewed Aquila game, or any similar such game, then you face a serious challenge. In practice, you cannot assign an expected moral value to any of the options ever available to you. So, if expected value theory (and nothing stronger) were the correct theory of moral betterness under risk, then no option ever available to you would be morally better or worse than any other. But to accept this implication would be absurd.</p>
<p>To plausibly compare any of our available future prospects, we must replace expected value theory with some stronger alternative. In later sections, I will discuss such alternative theories. But, first, what do we want them to achieve?</p>
<p>I propose five problem cases that those theories must be able to deal with&#8212;and deal with in the intuitively <italic>correct</italic> way&#8212;to be extensionally adequate.<xref ref-type="fn" rid="n25">25</xref> Extensional adequacy may require dealing with cases even more complicated than those I consider here. These cases will be crude simplifications of the options we face in practice: they exclude all sources of value in the world other than the possibility of an Aquila game generated after <bold>t</bold>. In practice, we face options in which many valuable events will occur before <bold>t</bold>, in which there is perhaps only a small probability of life surviving until <bold>t</bold> to generate something resembling an Aquila game, and in which the prospect for events after <bold>t</bold> is far more complicated than the Aquila game. Nonetheless, it will largely suffice to consider simplified cases like these&#8212;as noted above, our available options will inherit the problems of the Aquila game. If expected value theory fails in the cases below, it will fail in practice. And it turns out that it does fail, as do many stronger theories designed to deal with the original Pasadena and Agnesi games.</p>
<p>The first problem case, <italic>No Change</italic>, is (a simplification of) the decision scenario an agent faces when their available actions all produce exactly the same future prospect. For instance, an agent may choose between eating Sugar Puffs for breakfast and eating Frosties, but have no evidence for either option being more or less likely to influence the future in any particular way. (Agents with great foresight may have access to evidence supporting some story of why one cereal is more likely to produce better long-run outcomes, but suppose that the agent here lacks any such evidence.)</p>
<p><bold>Scenario 1: No Change</bold></p>
<disp-quote>
<p><italic>Sugar Puffs:</italic> The Aquila game with particular values of <inline-formula><mml:math id="Eq036-mml"><mml:mrow><mml:mi>a</mml:mi><mml:mo>,</mml:mo><mml:mi>b</mml:mi></mml:mrow></mml:math></inline-formula>.</p>
<p><italic>Frosties:</italic> The Aquila game with the same <inline-formula><mml:math id="Eq037-mml"><mml:mrow><mml:mi>a</mml:mi><mml:mo>,</mml:mo><mml:mi>b</mml:mi></mml:mrow></mml:math></inline-formula>.</p>
</disp-quote>
<p>Note that both options have identical probability distributions over value. But, still, bare expected value theory cannot say how they compare&#8212;neither option has a well-defined expected value, so that value cannot be equal to itself. (The same goes if we swap the Aquila game for the Skewed Aquila game.) And this is all the more troubling when, intuitively, the correct ranking of options seems clear: Sugar Puffs and Frosties are equally good. It would be desirable for our theory to say this: that any instance of the Aquila game with such and such parameters is equally as good as any other with the same parameters.<xref ref-type="fn" rid="n26">26</xref></p>
<p>The second problem case, <italic>Improving the Present</italic>, is that which an agent faces when they can improve some aspect of the world with certainty,<xref ref-type="fn" rid="n27">27</xref> without otherwise changing the prospect. For instance, an agent may choose whether to save the life of a child in the present day. And, regardless of whether they do so or do not, their evidence may entail an identical probability distribution over what happens in the very distant future. If so, then, for my purposes, their options are equivalent to the following:</p>
<p><bold>Scenario 2: Improving the Present</bold></p>
<disp-quote>
<p><italic>Do Nothing:</italic> The Aquila game (with particular <inline-formula><mml:math id="Eq038-mml"><mml:mrow><mml:mi>a</mml:mi><mml:mo>,</mml:mo><mml:mi>b</mml:mi></mml:mrow></mml:math></inline-formula>).</p>
<p><italic>Save a Life:</italic> The Aquila game (with the same <inline-formula><mml:math id="Eq039-mml"><mml:mrow><mml:mi>a</mml:mi><mml:mo>,</mml:mo><mml:mi>b</mml:mi></mml:mrow></mml:math></inline-formula>) with value <inline-formula><mml:math id="Eq040-mml"><mml:mrow><mml:mi>s</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula> added to every outcome.</p>
</disp-quote>
<p>Here, both options are identical <italic>except</italic> that the latter has its probability distribution shifted by some bonus value <inline-formula><mml:math id="Eq041-mml"><mml:mi>s</mml:mi></mml:math></inline-formula>.<xref ref-type="fn" rid="n28">28</xref> But, again, expected value theory cannot compare any options of this sort. (And again, the same goes if we swap the Aquila game for the Skewed Aquila game here.) And, again, this is all the more troubling given that the intuitively correct ranking is clear: that, as long as <inline-formula><mml:math id="Eq042-mml"><mml:mi>s</mml:mi></mml:math></inline-formula> is positive, Save a Life should be better than Do Nothing. Improving every outcome should improve the option overall (so long as the outcomes&#8217; probabilities are held fixed).</p>
<p>The third problem case, <italic>Improving the Future</italic>, is one that an agent may face if they attempt to improve events occurring after <bold>t</bold>, conditional on us surviving until then. In this case, the agent&#8217;s choices don&#8217;t make it more or less likely that we survive but, if we do survive until then, those choices make it more or less likely that the average life afterwards will have positive value (on whichever interval scale we represent value). It is a decision of whether to alter the skew of the Skewed Aquila game one way or the other. And, to an approximation, this is the sort of case an agent might face when they can affect humanity&#8217;s long-run prospects in some manner that is extremely persistent. Perhaps it applies when a political activist decides whether to campaign for a change to political institutions that would foreseeably improve decision-making. Doing so may make it ever so slightly more likely that humanity at large has better political institutions indefinitely far into the future, perhaps increasing the probability that future lives (and clusters of civilisation) have positive value on average.</p>
<p><bold>Scenario 3: Improving the Future</bold></p>
<disp-quote>
<p><italic>Campaign:</italic> The Skewed Aquila game with some <inline-formula><mml:math id="Eq043-mml"><mml:mfrac><mml:msub><mml:mi>a</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>a</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mfrac></mml:math></inline-formula> and <inline-formula><mml:math id="Eq044-mml"><mml:mi>b</mml:mi></mml:math></inline-formula>.</p>
<p><italic>Don&#8217;t Campaign:</italic> The Skewed Aquila game with a <italic>lower</italic> <inline-formula><mml:math id="Eq045-mml"><mml:mfrac><mml:msub><mml:mi>a</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>a</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mfrac></mml:math></inline-formula> (and the same <inline-formula><mml:math id="Eq046-mml"><mml:mi>b</mml:mi></mml:math></inline-formula>).</p>
</disp-quote>
<p>Again, expected value theory alone cannot compare the two. Nor can it say that Campaign is better than Don&#8217;t Campaign&#8212;it cannot say that it is better to make it more likely that future lives are very good and less likely that they are very bad. Intuition demands that Campaign be ranked as better than Don&#8217;t Campaign.<xref ref-type="fn" rid="n29">29</xref></p>
<p>The fourth problem case, <italic>Reducing Extinction Risk</italic>, is that which an agent faces when they can affect humanity&#8217;s probability of long-term survival (and, a fortiori, the probability of morally valuable life surviving). If the agent does nothing, humanity will have some probability of surviving to <bold>t</bold> and beyond. If they intervene, humanity will have a <italic>greater</italic> probability of doing so. For my purposes, both options can be represented by some mixture of a low-value outcome (which, for simplicity, we can set to value 0) and the prospect obtained conditional on surviving the near term. Those options are equivalent to the following.</p>
<p><bold>Scenario 4: Reducing Extinction Risk</bold></p>
<disp-quote>
<p><italic>Intervene:</italic> A mixture of the (Skewed) Aquila game (with some <inline-formula><mml:math id="Eq047-mml"><mml:mi>a</mml:mi></mml:math></inline-formula> or <inline-formula><mml:math id="Eq048-mml"><mml:msub><mml:mi>a</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="Eq049-mml"><mml:mi>b</mml:mi></mml:math></inline-formula>) with probability <inline-formula><mml:math id="Eq050-mml"><mml:mrow><mml:mi>p</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula> and an outcome of value <inline-formula><mml:math id="Eq051-mml"><mml:mn>0</mml:mn></mml:math></inline-formula> with probability <inline-formula><mml:math id="Eq052-mml"><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mi>p</mml:mi></mml:mrow></mml:math></inline-formula>.</p>
<p><italic>Do Nothing:</italic> A mixture of the (Skewed) Aquila game (with the same <inline-formula><mml:math id="Eq053-mml"><mml:mi>a</mml:mi></mml:math></inline-formula> or <inline-formula><mml:math id="Eq054-mml"><mml:msub><mml:mi>a</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="Eq055-mml"><mml:mi>b</mml:mi></mml:math></inline-formula>) with probability <inline-formula><mml:math id="Eq056-mml"><mml:mrow><mml:mi>q</mml:mi><mml:mo>&#x003C;</mml:mo><mml:mi>p</mml:mi></mml:mrow></mml:math></inline-formula> and an outcome of value <inline-formula><mml:math id="Eq057-mml"><mml:mn>0</mml:mn></mml:math></inline-formula> with probability <inline-formula><mml:math id="Eq058-mml"><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mi>q</mml:mi></mml:mrow></mml:math></inline-formula>.</p>
</disp-quote>
<p>Here, both options are equivalent to having some probability of playing the Aquila game or Skewed Aquila game (with such and such parameters), with Intervene giving the higher probability. But, again, expected value theory cannot compare any two options fitting these descriptions. Expected value theory cannot say that Intervene is better; it cannot say reducing the risk of extinction is an improvement. Again, this is troubling.</p>
<p>Take the Aquila-game versions of Intervene and Do Nothing&#8212;each gives a probability of Aquila and a probability of value 0. It seems all the more troubling for expected value theory to say nothing in this case, given that the correct verdict may seem obvious. Both options seem equally good, in so far as expected value theory is plausible in the first place. After all, the Aquila game&#8217;s distribution is <italic>symmetric</italic> about 0. For any value <inline-formula><mml:math id="Eq059-mml"><mml:mi>v</mml:mi></mml:math></inline-formula>, it has the same probability (density) of <inline-formula><mml:math id="Eq060-mml"><mml:mi>v</mml:mi></mml:math></inline-formula> and <inline-formula><mml:math id="Eq061-mml"><mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mi>v</mml:mi></mml:mrow></mml:math></inline-formula>. These should cancel out. According to expected value theory, they would do if we were dealing with a prospect whose expectation were defined. To uphold the spirit of expected value theory, they must cancel out for the Aquila game too, such that the game is valued at 0. What, then, of Intervene and Do Nothing? Each is a mixture of that option valued at 0 and a further outcome with the same value. They should each be valued at 0 overall, and so be equally good.</p>
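<p>The sense in which these symmetric tails &#8220;cancel&#8221; can be made precise with a small numerical sketch (illustrative only, with <italic>b</italic> = 1): every symmetric truncation of the expectation integral is exactly 0, even though each one-sided truncation grows without bound. Valuing the game at 0 therefore amounts to taking a principal value, a step that expected value theory itself does not license.</p>

```python
# Illustrative sketch (b = 1; the constant a cancels in the comparison):
# symmetric truncations of the Aquila expectation are exactly 0 by symmetry,
# while one-sided truncations grow without bound, which is why the full
# expectation integral is undefined.

def p(v, b=1.0):
    return 1.0 / (b + abs(v) ** 1.5)

def midpoint(f, lo, hi, n=100_000):
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

for M in (1e2, 1e4):
    symmetric = midpoint(lambda v: v * p(v), -M, M)   # cancels (essentially) exactly
    one_sided = midpoint(lambda v: v * p(v), 0.0, M)  # keeps growing with M
    print(f"M = {M:.0e}:  symmetric = {symmetric:.2e},  one-sided = {one_sided:.1f}")
```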
<p>Or consider the <italic>Skewed</italic>-Aquila-game versions of the two options. If the Skewed Aquila game in question is skewed in the <italic>positive</italic> direction, then it can be obtained from the Aquila game by shifting probabilities such that it is more likely that future lives are very good and less likely that they are very bad. This is clearly an improvement, and so must be better than obtaining value 0. Then, Intervene would surely be better than Do Nothing&#8212;between the Skewed Aquila game and the outcome of value 0, it provides the higher probability of the better one. Or, if the Skewed Aquila game in question is skewed in the <italic>negative</italic> direction, it can be obtained from the Aquila game by making it more likely that future lives are very <italic>bad</italic> and less likely that they are very good. This is clearly worse than the original Aquila game, and so must be worse than an outcome of value 0. Then, Do Nothing must be better than Intervene&#8212;between the Skewed Aquila game and the outcome of value 0, it provides the higher probability of the better one.<xref ref-type="fn" rid="n30">30</xref></p>
<p>The fifth and most challenging problem case, <italic>Multifarious Changes</italic>, is a combination of the previous three. The agent is not merely improving/worsening the present with certainty, nor changing the probability of human extinction before <bold>t</bold>, nor changing the probability of a good future conditional on survival. They have all three effects, or any subset of them, at once. This, I think, is a more realistic representation of many of our options. For instance, attempts to improve the long-term future often have some moral cost in the present&#8212;e.g., the opportunity cost of spending one&#8217;s resources on lobbying for institutional change is that the same resources aren&#8217;t directly used to help the poor. Or, when attempting to reduce the risk of extinction, there is often a further effect on the well-being of future people in the event of survival&#8212;e.g., implementing some measure to reduce the incidence of deadly pandemics not only reduces the risk of extinction, but likely also causes future people to experience fewer pandemics in general, whether or not they rise to the level of threatening extinction. Likewise, attempting to make future lives better conditional on survival will often affect the probability of extinction&#8212;e.g., if one succeeds in changing political institutions to better respond to the public&#8217;s interests, those institutions would then likely also be better at responding to threats of extinction.</p>
<p>If an agent has any options that differ by at least two of those effects at once then, for my purposes, we can model their decision as follows. (Recall that <inline-formula><mml:math id="Eq062-mml"><mml:mfrac><mml:msub><mml:mi>a</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>a</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mfrac></mml:math></inline-formula> represents the skew of the Skewed Aquila game&#8212;the greater the fraction, the greater the game&#8217;s skew towards positive value.)</p>
<p><bold>Scenario 5: Multifarious Changes</bold></p>
<disp-quote>
<p><italic>Intervene:</italic> A mixture of 1) the Skewed Aquila game with some <inline-formula><mml:math id="Eq063-mml"><mml:mfrac><mml:msub><mml:mi>a</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>a</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mfrac></mml:math></inline-formula> and <inline-formula><mml:math id="Eq064-mml"><mml:mi>b</mml:mi></mml:math></inline-formula>, with value <inline-formula><mml:math id="Eq065-mml"><mml:mi>s</mml:mi></mml:math></inline-formula> added to every outcome, with probability <inline-formula><mml:math id="Eq066-mml"><mml:mi>p</mml:mi></mml:math></inline-formula>, and 2) an outcome of value <inline-formula><mml:math id="Eq067-mml"><mml:mi>s</mml:mi></mml:math></inline-formula> with probability <inline-formula><mml:math id="Eq068-mml"><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mi>p</mml:mi></mml:mrow></mml:math></inline-formula>.</p>
<p><italic>Do Nothing:</italic> A mixture of 1) the Skewed Aquila game with some (perhaps different) <inline-formula><mml:math id="Eq069-mml"><mml:mfrac><mml:msub><mml:mi>a</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>a</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mfrac></mml:math></inline-formula> and <inline-formula><mml:math id="Eq070-mml"><mml:mi>b</mml:mi></mml:math></inline-formula> with probability <inline-formula><mml:math id="Eq071-mml"><mml:mrow><mml:mi>q</mml:mi><mml:mrow><mml:mo>&#x2260;</mml:mo></mml:mrow><mml:mi>p</mml:mi></mml:mrow></mml:math></inline-formula>, and 2) an outcome of value 0 with probability <inline-formula><mml:math id="Eq072-mml"><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mi>q</mml:mi></mml:mrow></mml:math></inline-formula>.</p>
</disp-quote>
<p>From above, a fortiori, we know that expected value theory cannot compare (at least some) options fitting these descriptions. But nor, it turns out, can it compare <italic>any</italic> such options&#8212;any two such options will defy expectations. And again, this silence is troubling. It is not troubling because the correct ranking of the options is intuitively obvious; often, the correct ranking won&#8217;t be. But it is troubling that our normative theories may fall silent in a decision that we plausibly face in practice. If an agent ever has the opportunity to influence humanity&#8217;s long-term future, it is plausible that they face this scenario, and they need guidance. For a decision theory to be plausible, it must offer such guidance in at least the cases we actually face in practice. But expected value theory cannot.</p>
</sec>
<sec>
<title>4. One Escape: Risk Sensitivity</title>
<p>Given its failure in all five problem cases above, expected value theory alone cannot be the correct theory of instrumental moral betterness. If it were, no option ever available to us would be better than (or even comparable to) any other. And that would be absurd.</p>
<p>In later sections of this paper, I will argue that this absurdity can be avoided without rejecting the verdicts of expected value theory altogether&#8212;that the theory can be <italic>extended</italic> to deal with the problem cases raised above. But, before that, it&#8217;s worth briefly considering an alternative solution. That solution is to reject expected value theory altogether, not in favour of some extension of it, but to reject even the verdicts it delivers in less troublesome cases. In its place, we could adopt a <italic>risk-sensitive</italic> decision theory. This would allow us to avoid absurdity in the above problem cases, as I will show in this section. In effect, the discussion up to this point might be seen as forming a surprising argument <italic>in favour</italic> of risk sensitivity.</p>
<p>To illustrate how risk sensitivity avoids absurdity in the above cases, consider one theory that exhibits it: <italic>expected utility theory</italic> (specifically, a risk-sensitive version of it). This theory works much like expected value theory does. Where expected value theory says that the best options are those with the highest expected moral <italic>value</italic>, expected utility theory says that the best options are those with the highest expected <italic>utility</italic>.</p>
<p>What is utility? For my purposes, it is some representation of the betterness ranking over outcomes. But it need not be the <italic>same</italic> representation as the moral value function. Utility here is not the same thing as what moral theorists sometimes call utility&#8212;a cardinal measure of total welfare&#8212;but instead a purely decision-theoretic construct.<xref ref-type="fn" rid="n31">31</xref></p>
<p>In general, the utility of an outcome may be <italic>any</italic> real-valued function of its moral value (at least when determining instrumental moral betterness), so long as that function is strictly increasing. In particular, the correct utility function for use in moral decisions might sometimes be <italic>concave</italic>: the higher the <italic>value</italic> of outcomes, the less their <italic>utility</italic> increases for each additional unit of value that is added to them. This tends to lead to risk-averse preferences. And/or the utility function may sometimes be <italic>convex</italic>: the higher the value of the outcomes, the <italic>more</italic> their utility increases for each additional unit of value. This tends to lead to risk-<italic>inclined</italic> preferences. One possible function, <inline-formula><mml:math id="Eq073-mml"><mml:mrow><mml:mi>u</mml:mi><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>v</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>, that is sometimes concave and sometimes convex is plotted below.</p>
<fig id="F4">
<caption>
<p><bold>Figure 4:</bold> A utility function that is concave for <inline-formula><mml:math id="Eq074-mml"><mml:mrow><mml:mi>v</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula> and convex for <inline-formula><mml:math id="Eq075-mml"><mml:mrow><mml:mi>v</mml:mi><mml:mo>&#x003C;</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula>.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="ergo-9269_wilkinson-g4.png"/>
</fig>
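<p>A minimal numerical sketch of these risk attitudes (the particular function <monospace>u</monospace> below is hypothetical, chosen only to be concave for gains and convex for losses, like the function in Figure 4):</p>

```python
import math

# A minimal sketch of the risk attitudes described above. The function u
# is hypothetical: concave for v > 0 and convex for v < 0, as in Figure 4.
def u(v):
    return math.copysign(math.sqrt(abs(v)), v)

# Concave region: a 50/50 gamble between v = 0 and v = 100 gets less
# expected utility than its expected value (v = 50) received for sure,
# i.e., risk aversion over gains.
gamble_eu = 0.5 * u(0) + 0.5 * u(100)   # 5.0
sure_eu = u(50)                          # about 7.07

# Convex region: the mirrored gamble over losses gets *more* expected
# utility than the sure loss, i.e., risk inclination over losses.
loss_gamble_eu = 0.5 * u(0) + 0.5 * u(-100)  # -5.0
sure_loss_eu = u(-50)                         # about -7.07
```

<p>Under the concave branch the sure mid-value beats the gamble; under the convex branch the gamble beats the sure loss.</p>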
<p>But how does switching from expected <italic>value</italic> theory to expected <italic>utility</italic> theory, with a non-linear utility function, affect our comparisons of expectation-defying options? To see how, note that such options posed a problem for expected value theory only because the probability densities of outcomes didn&#8217;t approach zero quickly enough as value approached positive and negative infinity. If those extreme outcomes just had lower (perhaps <italic>much</italic> lower) absolute values, the options would no longer defy expectations, and expected value theory could evaluate them. But, in effect, they <italic>do</italic> have lower absolute &#8220;values&#8221; if we switch to expected utility theory with a utility function like that plotted above&#8212;we lower the contribution that those extreme outcomes make to the expected utility calculation. Then, for the purpose of calculating expected utility, an expectation-defying option no longer defies expectations!</p>
<p>Take, for example, the Aquila game. Its troublesome distribution was given by <inline-formula><mml:math id="Eq076-mml"><mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>v</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mi>a</mml:mi><mml:mrow><mml:mi>b</mml:mi><mml:mo>+</mml:mo><mml:mrow><mml:mrow><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>v</mml:mi><mml:mo stretchy='false'>&#x007C;</mml:mo></mml:mrow><mml:msqrt><mml:mrow><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>v</mml:mi><mml:mo stretchy='false'>&#x007C;</mml:mo></mml:mrow></mml:msqrt></mml:mrow></mml:mrow></mml:mfrac></mml:mrow></mml:math></inline-formula> (for some <inline-formula><mml:math id="Eq077-mml"><mml:mrow><mml:mrow><mml:mi>a</mml:mi><mml:mo>,</mml:mo><mml:mi>b</mml:mi></mml:mrow><mml:mo>&#x003E;</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula>). With a utility function that is concave enough for large positive values and convex enough for large negative values, we can turn that expectation-defying distribution <italic>over value</italic> into a much tamer distribution <italic>over utility</italic>. 
For instance, set <inline-formula><mml:math id="Eq078-mml"><mml:mrow><mml:mrow><mml:mi>u</mml:mi><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>v</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mi>sgn</mml:mi><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>v</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:msup><mml:mrow><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>v</mml:mi><mml:mo stretchy='false'>&#x007C;</mml:mo></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>/</mml:mo><mml:mn>4</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mrow></mml:math></inline-formula>. This yields a distribution over utility given by <inline-formula><mml:math id="Eq079-mml"><mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>u</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>4</mml:mn><mml:mi>a</mml:mi><mml:msup><mml:mrow><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>u</mml:mi><mml:mo stretchy='false'>&#x007C;</mml:mo></mml:mrow><mml:mn>3</mml:mn></mml:msup></mml:mrow><mml:mrow><mml:mi>b</mml:mi><mml:mo>+</mml:mo><mml:msup><mml:mrow><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>u</mml:mi><mml:mo stretchy='false'>&#x007C;</mml:mo></mml:mrow><mml:mn>6</mml:mn></mml:msup></mml:mrow></mml:mfrac></mml:mrow></mml:math></inline-formula> (which gives a well-defined expected utility of 0).</p>
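<p>A numerical sketch of why such a transformation tames the tails. It assumes (illustratively) <monospace>a = b = 1</monospace> for the Aquila-style density, and a hypothetical utility of the form sgn(<italic>v</italic>)&#x00B7;|<italic>v</italic>|<sup>1/4</sup>, with the exponent chosen small enough that the tail integral converges:</p>

```python
import math

# A sketch (with illustrative constants a = b = 1) of why a sufficiently
# concave utility tames the tails. Assume the Aquila-style density
# p(v) = a / (b + |v|**1.5), whose mean diverges, and a hypothetical
# utility u(v) = sgn(v) * |v|**0.25: then |u(v)| * p(v) ~ |v|**-1.25 is
# integrable, so the expected utility is well defined (and 0 by symmetry).
A, B = 1.0, 1.0

def p(v):
    return A / (B + abs(v) ** 1.5)

def integrate(f, lo, hi, steps=20_000):
    """Trapezoidal rule after the substitution v = exp(t), which handles
    the long tail with evenly spaced nodes in log-space."""
    t0, t1 = math.log(lo), math.log(hi)
    h = (t1 - t0) / steps
    g = lambda t: f(math.exp(t)) * math.exp(t)
    total = 0.5 * (g(t0) + g(t1))
    for i in range(1, steps):
        total += g(t0 + i * h)
    return total * h

cutoffs = (1e2, 1e4, 1e6)
# Contribution of the positive tail to expected *value*: keeps growing.
value_tails = [integrate(lambda v: v * p(v), 1.0, cut) for cut in cutoffs]
# Contribution of the same tail to expected *utility*: stabilises.
utility_tails = [integrate(lambda v: v ** 0.25 * p(v), 1.0, cut) for cut in cutoffs]
```

<p>As the cutoff grows, the value integral grows without bound while the utility integral settles: the extreme outcomes&#8217; contribution has been lowered enough for expectations to exist.</p>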
<p>This works in all five of the problem cases described above. In the first (No Change), where we must compare the Aquila game to an identical prospect, a utility function as above lets us say that the two options are equally good&#8212;each gets an expected utility and, since the prospects are identical, those expected utilities are equal. So, not only can expected utility theory say something here, but it says the intuitively correct thing. In the second case (Improving the Present), expected utility theory with a utility function as above lets us say that the Aquila game sweetened by value <inline-formula><mml:math id="Eq080-mml"><mml:mi>s</mml:mi></mml:math></inline-formula> is indeed better than the same Aquila game without it. In the third case (Improving the Future), it says that increasing the Skewed Aquila game&#8217;s skew in the positive direction is indeed an improvement. In the fourth case (Reducing Extinction Risk), where we compare two mixtures of the (Skewed) Aquila game, expected utility theory can again provide a comparison (although what it says will depend on the exact utility function). And, in the fifth (Multifarious Changes), again, it can compare any (perhaps sweetened mixture of) one Skewed Aquila game to another. In all five cases, it satisfies the desiderata I gave above.</p>
<p>And we can do the same with <italic>any</italic> pair of expectation-defying options; we need only adopt a utility function that is concave (convex) <italic>enough</italic> for large positive (negative) values. Once we accept a certain sensitivity to risk, the problem is solved. Thus, expected utility theory can deliver verdicts in those scenarios where expected value theory did not.<xref ref-type="fn" rid="n32">32</xref></p>
<p>And, thus, we seem to have a new and surprising argument in favour of risk sensitivity. Risk-sensitive decision theories can be compatible with Impartiality, Additivity, and the various empirical claims above, while expected value theory alone is not. Following this argument, perhaps we must reject expected value theory in favour of risk sensitivity.</p>
<p>But whether this argument succeeds depends on whether this is the <italic>only</italic> way to deal with those problem cases. Ultimately, it turns out that risk sensitivity is unnecessary to solve the problem. As I show below, we can extend expected value theory to deal with those problem cases. But it is worth keeping in mind what is at stake here: the result not only exonerates expected value theory from being incompatible with Impartiality and Additivity; it also undermines an otherwise compelling argument <italic>in favour</italic> of alternative theories that accommodate risk sensitivity.<xref ref-type="fn" rid="n33">33</xref></p>
</sec>
<sec>
<title>5. Preserving Risk Neutrality</title>
<p>Can we deal with prospects like the Aquila game without embracing risk sensitivity? Rather than rejecting expected value theory altogether, can we extend the theory to deal with such problem cases? And, if so, how?</p>
<p>In this section, I consider several possible extensions. As we will see, many of the extensions so far proposed in the literature cannot deal with a prospect as troublesome as the Aquila game. But some can.<xref ref-type="fn" rid="n34">34</xref></p>
<sec>
<title><italic>5.1. Relative Expectation Theory</italic></title>
<p>The first such extension is <italic>Relative Expectation Theory</italic>, first proposed by Colyvan (<xref ref-type="bibr" rid="B10">2008</xref>). Here, I will focus on the strengthened version suggested by both Colyvan and H&#225;jek (<xref ref-type="bibr" rid="B11">2016: 837&#8211;838</xref>) and Meacham (<xref ref-type="bibr" rid="B29">2019: 13&#8211;17</xref>).</p>
<p>According to Relative Expectation Theory, we no longer attempt to assign some value to each option separately and compare those values. Instead, for each <italic>pair</italic> of options, we evaluate a <italic>relative expectation</italic> (RE): the expected <italic>difference</italic> in value between the two options; but, in calculating this difference, we match up the outcomes of each option by how far along the option&#8217;s probability distribution they are. For any options <inline-formula><mml:math id="Eq081-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="Eq082-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:math></inline-formula>, we match up the lowest value of the possible outcomes of <inline-formula><mml:math id="Eq083-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:math></inline-formula> to the lowest possible value for <inline-formula><mml:math id="Eq084-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:math></inline-formula>; we match up the median values of each; we match up the best possible values of each; and likewise for every other possible value, matching each value from <inline-formula><mml:math id="Eq085-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:math></inline-formula> with the value in <inline-formula><mml:math id="Eq086-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:math></inline-formula> that is equally far along <inline-formula><mml:math id="Eq087-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:math></inline-formula>&#8217;s distribution. 
Put differently, we match each possible value in <inline-formula><mml:math id="Eq088-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:math></inline-formula> to the value lying at the same <italic>quantile</italic> in <inline-formula><mml:math id="Eq089-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:math></inline-formula>.</p>
<p>Formally, we identify the value that is fraction <inline-formula><mml:math id="Eq090-mml"><mml:mi>P</mml:mi></mml:math></inline-formula> of the way along the probability distribution of <inline-formula><mml:math id="Eq091-mml"><mml:mi>O</mml:mi></mml:math></inline-formula> with the quantile function <inline-formula><mml:math id="Eq092-mml"><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:mi>O</mml:mi></mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>&#8212;the function that, for each probability <inline-formula><mml:math id="Eq093-mml"><mml:mi>P</mml:mi></mml:math></inline-formula>, gives you the largest value <inline-formula><mml:math id="Eq094-mml"><mml:mi>v</mml:mi></mml:math></inline-formula> such that <inline-formula><mml:math id="Eq095-mml"><mml:mi>O</mml:mi></mml:math></inline-formula> has probability <inline-formula><mml:math id="Eq096-mml"><mml:mi>P</mml:mi></mml:math></inline-formula> (or less) of resulting in value <inline-formula><mml:math id="Eq097-mml"><mml:mi>v</mml:mi></mml:math></inline-formula> or less. For instance, <inline-formula><mml:math id="Eq098-mml"><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:mi>O</mml:mi></mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>0.5</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> would be the median, and <inline-formula><mml:math id="Eq099-mml"><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:mi>O</mml:mi></mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>0.9</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> would be the value that <inline-formula><mml:math id="Eq100-mml"><mml:mi>O</mml:mi></mml:math></inline-formula> has only a probability 0.1 of exceeding. 
(Equivalently, <inline-formula><mml:math id="Eq101-mml"><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:mi>O</mml:mi></mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> is the inverse of <inline-formula><mml:math id="Eq102-mml"><mml:mi>O</mml:mi></mml:math></inline-formula>&#8217;s cumulative distribution function; for an illustration, see below.) With this function, Relative Expectation Theory can be stated as follows.</p>
<disp-quote>
<p><italic>Relative Expectation Theory</italic>: An option <inline-formula><mml:math id="Eq103-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:math></inline-formula> is at least as good as another option <inline-formula><mml:math id="Eq104-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:math></inline-formula> if</p>
<disp-formula id="FD4"><mml:math id="Eq105-mml"><mml:mrow><mml:mrow><mml:mtext>RE</mml:mtext><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo rspace="0.111em">=</mml:mo><mml:mrow><mml:msubsup><mml:mo rspace="0em">&#x222B;</mml:mo><mml:mn>0</mml:mn><mml:mn>1</mml:mn></mml:msubsup><mml:mrow><mml:mrow><mml:mo maxsize="120%" minsize="120%">(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mrow><mml:mo maxsize="120%" minsize="120%">)</mml:mo></mml:mrow><mml:mo lspace="0em"></mml:mo><mml:mrow><mml:mo rspace="0em">&#x1D451;</mml:mo><mml:mi>P</mml:mi></mml:mrow></mml:mrow></mml:mrow><mml:mo>&#x2265;</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></disp-formula>
</disp-quote>
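<p>The quantile-matching idea can be sketched on finite samples: sorting each option&#8217;s sampled outcomes approximates its quantile function, so averaging the position-wise differences approximates RE. (The sampling distribution below is purely illustrative.)</p>

```python
import random

# A sketch of Relative Expectation via quantile matching on finite
# samples: sorting each option's sampled outcomes approximates its
# quantile function, so averaging position-wise differences approximates
# RE(O_a, O_b). (The sampling distribution here is illustrative.)
def relative_expectation(sample_a, sample_b):
    va, vb = sorted(sample_a), sorted(sample_b)
    assert len(va) == len(vb)
    return sum(x - y for x, y in zip(va, vb)) / len(va)

random.seed(0)
outcomes = [random.gauss(0, 1) for _ in range(1000)]

# Identical prospects (as in No Change): RE = 0, so equally good.
re_same = relative_expectation(outcomes, outcomes)

# Sweetening every outcome by s > 0 (as in Improving the Present) shifts
# the whole quantile function up by s, so RE = s, even for options whose
# own expected values would be undefined.
s = 2.5
re_sweet = relative_expectation([v + s for v in outcomes], outcomes)
```

<p>The second comparison mirrors the argument in the text: the sweetened option&#8217;s quantile function dominates the original&#8217;s by exactly <italic>s</italic>, so the relative expectation is positive.</p>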
<p>Relative Expectation Theory agrees with all of the verdicts given by expected value theory. (To see this, note that an option <inline-formula><mml:math id="Eq106-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:math></inline-formula>&#8217;s expected value can always be expressed with the integral <inline-formula><mml:math id="Eq107-mml"><mml:mrow><mml:msubsup><mml:mo>&#x222B;</mml:mo><mml:mn>0</mml:mn><mml:mn>1</mml:mn></mml:msubsup><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo lspace="0em"></mml:mo><mml:mrow><mml:mo rspace="0em">&#x1D451;</mml:mo><mml:mi>P</mml:mi></mml:mrow></mml:mrow></mml:mrow></mml:math></inline-formula>. So, when expectations exist, RE<inline-formula><mml:math id="Eq108-mml"><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></inline-formula> simply becomes their difference.) But how does the theory fare in cases where expected value theory says nothing, such as those from earlier? Recall, for instance, the case of No Change. Relative Expectation Theory says that both options are equally good. Where <inline-formula><mml:math id="Eq109-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="Eq110-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:math></inline-formula> are both the Aquila game&#8212;precisely the same distribution&#8212;both will have the same quantile function <inline-formula><mml:math id="Eq111-mml"><mml:msub><mml:mi>v</mml:mi><mml:mi>O</mml:mi></mml:msub></mml:math></inline-formula> (matching the function labelled &#8220;Aquila game&#8221; in the figure below). 
So <inline-formula><mml:math id="Eq112-mml"><mml:mrow><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:math></inline-formula> will always be 0, the integral from 0 to 1 will be 0, and they will be equally good.</p>
<p>Or consider Improving the Present. Relative Expectation Theory says that Save a Life is better than Do Nothing. Recall that Do Nothing was simply the Aquila game, while Save a Life was the same Aquila game but with every outcome sweetened by value <inline-formula><mml:math id="Eq113-mml"><mml:mrow><mml:mi>s</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula>. These options will have quantile functions as plotted below&#8212;functions that are identical, except that Save a Life&#8217;s function is shifted up by value <inline-formula><mml:math id="Eq114-mml"><mml:mrow><mml:mi>s</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula> for all <inline-formula><mml:math id="Eq115-mml"><mml:mi>P</mml:mi></mml:math></inline-formula>. The difference between the functions for Save a Life and Do Nothing is always positive, so the integral of <inline-formula><mml:math id="Eq116-mml"><mml:mrow><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:math></inline-formula> from 0 to 1 (matching the area between the two graphs below) will be positive too, and Save a Life will be better. So, not only can Relative Expectation Theory compare the two, but it gives the intuitively correct verdict. For similar reasons, it also gives the intuitively correct verdict in the third case, Improving the Future.</p>
<fig id="F5">
<caption>
<p><bold>Figure 5:</bold> The quantile functions <inline-formula><mml:math id="Eq117-mml"><mml:msub><mml:mi>v</mml:mi><mml:mi>O</mml:mi></mml:msub></mml:math></inline-formula> of the options in Improving the Present: Do Nothing (the Aquila game); and Save a Life (the same Aquila game with each outcome sweetened by <inline-formula><mml:math id="Eq118-mml"><mml:mrow><mml:mi>s</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula>).</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="ergo-9269_wilkinson-g5.png"/>
</fig>
<p>But Relative Expectation Theory cannot say anything in the fourth and fifth scenarios (Reducing Extinction Risk and Multifarious Changes). As has been noted elsewhere<xref ref-type="fn" rid="n35">35</xref>, the theory cannot compare an expectation-defying option to a sure outcome of value 0&#8212;RE<inline-formula><mml:math id="Eq119-mml"><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></inline-formula> becomes the expected value of the expectation-defying option, which, by definition, is undefined. And, so, options that differ by increases or decreases in the probability of the Aquila game (as do the options in both Reducing Extinction Risk and Multifarious Changes) will have undefined RE<inline-formula><mml:math id="Eq120-mml"><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></inline-formula> too. Where other authors have observed this implication, they have accepted it&#8212;Pasadena and its kin are peculiar prospects, so it is not clear how we should compare them to the status quo, nor how good they are. But it cannot be a proper implication of decision theory that it falls silent in a wide range of decisions we actually face in practice, particularly moral decisions. And yet that is exactly what Relative Expectation Theory does&#8212;it must fall silent in Reducing Extinction Risk and, a fortiori, in Multifarious Changes. Whenever an agent faces a decision that affects the probability that humanity survives rather than perishes, this theory fails us. So, I suggest, it proves inadequate.</p>
</sec>
<sec>
<title><italic>5.2. Weak Expectation Theory</italic></title>
<p>Another such extension comes from Easwaran (<xref ref-type="bibr" rid="B15">2008</xref>). By this proposal, an option is assigned a value called a <italic>weak expectation</italic>, where it exists, and options are ranked according to these values. Specifically, an option&#8217;s weak expectation WE<inline-formula><mml:math id="Eq121-mml"><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></inline-formula> is the value to which, if the option <inline-formula><mml:math id="Eq122-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:math></inline-formula> were rerun in arbitrarily many independent trials, its <italic>average</italic> payoff (given by <inline-formula><mml:math id="Eq123-mml"><mml:mfrac><mml:msubsup><mml:mi>O</mml:mi><mml:mi>a</mml:mi><mml:mrow><mml:mi/><mml:mo lspace="0.222em" rspace="0.222em">&#x00D7;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:msubsup><mml:mi>n</mml:mi></mml:mfrac></mml:math></inline-formula>) would be arbitrarily likely to be arbitrarily close. Put formally:</p>
<disp-quote>
<p><italic>Weak Expectation Theory</italic>: An option <inline-formula><mml:math id="Eq124-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:math></inline-formula> is at least as good as an option <inline-formula><mml:math id="Eq125-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:math></inline-formula> if WE<inline-formula><mml:math id="Eq126-mml"><mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>&#x2265;</mml:mo><mml:mi/></mml:mrow></mml:math></inline-formula>WE<inline-formula><mml:math id="Eq127-mml"><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></inline-formula>, for some WE<inline-formula><mml:math id="Eq128-mml"><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></inline-formula> and WE<inline-formula><mml:math id="Eq129-mml"><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></inline-formula> such that, for any small <inline-formula><mml:math id="Eq130-mml"><mml:mrow><mml:mo>&#x03B5;</mml:mo><mml:mo>&#x003E;</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula>:</p>
<disp-formula id="FD5"><mml:math id="Eq131-mml"><mml:mrow><mml:munder><mml:mo movablelimits="false">lim</mml:mo><mml:mrow><mml:mi>n</mml:mi><mml:mo stretchy='false'>&#x2192;</mml:mo><mml:mi mathvariant="normal">&#x221E;</mml:mi></mml:mrow></mml:munder><mml:mi>P</mml:mi><mml:mi>r</mml:mi><mml:mrow><mml:mo maxsize="160%" minsize="160%">(</mml:mo><mml:mo fence="false" maxsize="160%" minsize="160%" rspace="0.167em">&#x007C;</mml:mo><mml:mfrac><mml:msubsup><mml:mi>O</mml:mi><mml:mi>a</mml:mi><mml:mrow><mml:mi/><mml:mo lspace="0.222em" rspace="0.222em">&#x00D7;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:msubsup><mml:mi>n</mml:mi></mml:mfrac><mml:mo>&#x2013;</mml:mo><mml:mtext>WE</mml:mtext><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo fence="false" maxsize="160%" minsize="160%">&#x007C;</mml:mo><mml:mo lspace="0.167em">&#x003C;</mml:mo><mml:mo>&#x03B5;</mml:mo><mml:mo maxsize="160%" minsize="160%">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mspace width="4em"/><mml:mtext>(and similarly for&#x00A0;</mml:mtext><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
</disp-quote>
<p>But many options that defy expectations also defy weak expectations. For instance, both the Aquila and Skewed Aquila games do. (As does any mixture, or independent sum, of them with other prospects.) You could run arbitrarily many independent trials of one of these games, sum together the values of their outcomes and average them out, and the probability distribution you end up with for the average value will be just as spread out as the Aquila game or Skewed Aquila game you started with. There is no weak expectation to which the average is guaranteed to converge.</p>
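<p>This failure of averages to settle can be sketched numerically. The standard Cauchy distribution stands in here for an expectation-defying prospect (an illustrative choice, not the Aquila game itself); its sample averages are themselves standard Cauchy, however many trials we run:</p>

```python
import math
import random

# A sketch of why weak expectations can fail: running averages of a
# heavy-tailed symmetric prospect do not settle down. The standard Cauchy
# distribution stands in here for an expectation-defying game (an
# illustrative choice, not the Aquila game itself).
random.seed(1)

def cauchy():
    # Inverse-CDF sampling: tan(pi * (U - 1/2)) is standard Cauchy.
    return math.tan(math.pi * (random.random() - 0.5))

def sample_mean(draw, n=1000):
    return sum(draw() for _ in range(n)) / n

runs = 200
cauchy_means = [sample_mean(cauchy) for _ in range(runs)]
normal_means = [sample_mean(lambda: random.gauss(0, 1)) for _ in range(runs)]

# The average of n Cauchy draws is itself standard Cauchy, so many runs
# still land far from 0; averages of a finite-mean prospect concentrate.
cauchy_far = sum(1 for m in cauchy_means if abs(m) > 1)
normal_far = sum(1 for m in normal_means if abs(m) > 1)
```

<p>Even after 1,000 trials per run, a large fraction of the Cauchy runs end far from 0, while the normal runs all cluster tightly: no weak expectation exists for the former.</p>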
<p>As a result, Weak Expectation Theory cannot evaluate <italic>any</italic> of the options featured in the five cases above. Nor can it compare any of those options to any other. Whenever we face any moral decision whatsoever, the theory fails us. Like Relative Expectation Theory, it is inadequate.</p>
</sec>
<sec>
<title><italic>5.3. Invariant Value Theory</italic></title>
<p>But the silence of the previous two proposals does not mean that <italic>no</italic> extension of expected value theory can sensibly compare the Aquila game to alternatives. Several proposals do so but, for ease of exposition, I will focus on just one here: <italic>Invariant Value Theory</italic>.<xref ref-type="fn" rid="n36">36</xref></p>
<p>Under this theory, as under Relative Expectation Theory, we focus on the quantile function of each option. (That is, for each probability <inline-formula><mml:math id="Eq132-mml"><mml:mrow><mml:mn>0</mml:mn><mml:mo>&#x2264;</mml:mo><mml:mi>P</mml:mi><mml:mo>&#x2264;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:math></inline-formula>, the largest value <inline-formula><mml:math id="Eq133-mml"><mml:mi>v</mml:mi></mml:math></inline-formula> such that the option has probability <inline-formula><mml:math id="Eq134-mml"><mml:mi>P</mml:mi></mml:math></inline-formula> (or less) of resulting in that value or less.) The quantile function for the Aquila game, for instance, is plotted below. We also take advantage of the fact that an option&#8217;s expected value can be given as the integral of its quantile function (and so the area between the quantile function and the horizontal axis) from 0 to 1.</p>
<p>For options like the Aquila game, of course, the expected value is undefined. But we can consider <italic>truncated</italic> versions of options: we can ignore the portion of the quantile function close to 0 (to the left of some small <inline-formula><mml:math id="Eq135-mml"><mml:mo>&#x03B5;</mml:mo></mml:math></inline-formula>) and the portion close to 1 (to the right of <inline-formula><mml:math id="Eq136-mml"><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mo>&#x03B5;</mml:mo></mml:mrow></mml:math></inline-formula>), as illustrated below, and take the expected value as the area under the quantile function between those endpoints.<xref ref-type="fn" rid="n37">37</xref> We can also consider how that expected value changes as we truncate closer and closer to 0 and 1 (as <inline-formula><mml:math id="Eq137-mml"><mml:mo>&#x03B5;</mml:mo></mml:math></inline-formula> approaches 0). If, as we get closer to the full quantile function, the expected value approaches some finite limit, then that limit seems an appropriate value to assign to the option. We can call that limit the <italic>invariant value</italic> of the option, and it is by this value that Invariant Value Theory has us evaluate options.</p>
<fig id="F6">
<caption>
<p><bold>Figure 6:</bold> The quantile function <inline-formula><mml:math id="Eq138-mml"><mml:msub><mml:mi>v</mml:mi><mml:mi>O</mml:mi></mml:msub></mml:math></inline-formula> of the Aquila game.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="ergo-9269_wilkinson-g6.png"/>
</fig>
<p>Put precisely, the theory says the following.<xref ref-type="fn" rid="n38">38</xref></p>
<disp-quote>
<p><italic>Invariant Value Theory</italic>: An option <inline-formula><mml:math id="Eq139-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:math></inline-formula> is at least as good as another option <inline-formula><mml:math id="Eq140-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:math></inline-formula> if <inline-formula><mml:math id="Eq141-mml"><mml:mrow><mml:mrow><mml:mtext>IV</mml:mtext><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x2265;</mml:mo><mml:mrow><mml:mtext>IV</mml:mtext><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:math></inline-formula>, where:</p>
<disp-formula id="FD6"><mml:math id="Eq142-mml"><mml:mrow><mml:mrow><mml:mtext>IV</mml:mtext><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>O</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo rspace="0.1389em">=</mml:mo><mml:mrow><mml:munder><mml:mo lspace="0.1389em" movablelimits="false" rspace="0em">lim</mml:mo><mml:mrow><mml:mo>&#x03B5;</mml:mo><mml:mo stretchy='false'>&#x2192;</mml:mo><mml:msup><mml:mn>0</mml:mn><mml:mo>+</mml:mo></mml:msup></mml:mrow></mml:munder><mml:mrow><mml:msubsup><mml:mo>&#x222B;</mml:mo><mml:mo>&#x03B5;</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mo>&#x03B5;</mml:mo></mml:mrow></mml:msubsup><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:mi>O</mml:mi></mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo lspace="0em"></mml:mo><mml:mrow><mml:mo rspace="0em">&#x1D451;</mml:mo><mml:mi>P</mml:mi></mml:mrow></mml:mrow></mml:mrow></mml:mrow></mml:mrow></mml:math></disp-formula>
</disp-quote>
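<p>The limit in this definition can be sketched numerically. Below, the standard Cauchy quantile function stands in for a symmetric, expectation-defying prospect like the Aquila game (an illustrative choice, not the Aquila game itself):</p>

```python
import math

# A sketch of Invariant Value as a limit of truncated quantile integrals.
# The standard Cauchy quantile function stands in for a symmetric,
# expectation-defying prospect like the Aquila game (an illustrative
# choice, not the Aquila game itself).
def quantile(P):
    return math.tan(math.pi * (P - 0.5))

def truncated_ev(q, eps, steps=50_000):
    """Trapezoidal integral of the quantile function q over [eps, 1 - eps]."""
    h = (1 - 2 * eps) / steps
    total = 0.5 * (q(eps) + q(1 - eps))
    for i in range(1, steps):
        total += q(eps + i * h)
    return total * h

eps_values = (1e-2, 1e-3, 1e-4)
# Symmetry about P = 0.5 makes every truncated integral (numerically) 0,
# so the invariant value is 0.
ivs = [truncated_ev(quantile, eps) for eps in eps_values]
# Sweetening every outcome by s shifts each truncated integral toward s.
s = 2.5
ivs_sweet = [truncated_ev(lambda P: quantile(P) + s, eps) for eps in eps_values]
```

<p>The truncated integrals of the symmetric prospect stay at 0 as &#x03B5; shrinks, while those of the sweetened prospect approach <italic>s</italic>, mirroring the verdicts in No Change and Improving the Present.</p>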
<p>This proposal maintains all of the verdicts of expected value theory. (Since the expected value of an option is the integral of its quantile function from 0 to 1, where that integral is defined, it will match the limit given above.) It also happens to maintain all of the verdicts of Weak Expectation Theory (see <xref ref-type="bibr" rid="B51">Wilkinson 2024b</xref>).</p>
<p>But it goes much further than either of those theories. For instance, it assigns a value to the Aquila game&#8212;it assigns the game an invariant value of 0. To see this, note that the Aquila game&#8217;s probability distribution is symmetric about value 0. So too, its quantile function is rotationally symmetric about probability 0.5 (where its value is 0). We can take any probability <inline-formula><mml:math id="Eq143-mml"><mml:mrow><mml:mo>&#x03B5;</mml:mo><mml:mo>&#x003C;</mml:mo><mml:mn>0.5</mml:mn></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="Eq144-mml"><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:mi>O</mml:mi></mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x03B5;</mml:mo><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> will be the same as <inline-formula><mml:math id="Eq145-mml"><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:mi>O</mml:mi></mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mo>&#x03B5;</mml:mo></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>, but negative. So too, for any such <inline-formula><mml:math id="Eq146-mml"><mml:mo>&#x03B5;</mml:mo></mml:math></inline-formula>, the integral of <inline-formula><mml:math id="Eq147-mml"><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:mi>O</mml:mi></mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> from <inline-formula><mml:math id="Eq148-mml"><mml:mo>&#x03B5;</mml:mo></mml:math></inline-formula> to 0.5 will be the same as that from 0.5 to <inline-formula><mml:math id="Eq149-mml"><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mo>&#x03B5;</mml:mo></mml:mrow></mml:math></inline-formula>, but negative; they will perfectly cancel out. 
No matter how small <inline-formula><mml:math id="Eq150-mml"><mml:mo>&#x03B5;</mml:mo></mml:math></inline-formula> gets, even approaching 0, the integral will be 0. So the invariant value of the Aquila game will be 0&#8212;as it seems it should, since the game&#8217;s probability distribution is symmetric about 0. Likewise, in general, any option with probability distribution symmetric about some value <inline-formula><mml:math id="Eq151-mml"><mml:mi>v</mml:mi></mml:math></inline-formula> will be given invariant value <inline-formula><mml:math id="Eq152-mml"><mml:mi>v</mml:mi></mml:math></inline-formula>.</p>
<p>This tells us all we need to know to deal with the first and second problem cases from earlier. In No Change, Invariant Value Theory assigns invariant value 0 to both options. Both options are evaluable, and both are equally good. And, in Improving the Present, it assigns value 0 to Do Nothing and value <inline-formula><mml:math id="Eq153-mml"><mml:mi>s</mml:mi></mml:math></inline-formula> to Save a Life (i.e., the Aquila game with value <inline-formula><mml:math id="Eq154-mml"><mml:mi>s</mml:mi></mml:math></inline-formula> added to every outcome). If <inline-formula><mml:math id="Eq155-mml"><mml:mrow><mml:mi>s</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula> then Save a Life is better than Do Nothing, as intuition suggests it must be.</p>
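To make the limiting behaviour concrete, here is a small numerical sketch (my own illustration, not from the paper). The standard Cauchy quantile function serves as an assumed stand-in for the Aquila game, since it too is symmetric about probability 0.5 with undefined expectation, and a constant sweetener plays the role of the value s in Improving the Present.

```python
import math

def aquila_like_quantile(p, sweetener=0.0):
    # Stand-in quantile function: the standard Cauchy quantile is
    # symmetric about p = 0.5 and has undefined expected value, like
    # the Aquila game; `sweetener` shifts every outcome by a constant s.
    return sweetener + math.tan(math.pi * (p - 0.5))

def truncated_value(quantile, eps, n=200_000):
    # Integrate the quantile function over [eps, 1 - eps] (midpoint rule).
    h = (1 - 2 * eps) / n
    total = 0.0
    for i in range(n):
        total += quantile(eps + (i + 0.5) * h) * h
    return total

# As eps shrinks, the truncated integral converges to the centre of
# symmetry: 0 for the plain game (Do Nothing), s for the sweetened
# game (Save a Life with s > 0).
for eps in (1e-2, 1e-4):
    print(truncated_value(aquila_like_quantile, eps),
          truncated_value(lambda p: aquila_like_quantile(p, sweetener=3.0), eps))
```

However small eps gets, the truncated integral stays pinned at the centre of symmetry, which is why the invariant value is well-defined here even though the expectation is not.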
<p>To deal with the latter three cases, we can extend the theory slightly. In effect, we can combine it with Relative Expectation Theory.<xref ref-type="fn" rid="n39">39</xref> Instead of taking the invariant value of each option, we can take a <italic>relative invariant value</italic> between any two options&#8212;the relative expectation of the two (as described above), but truncated at <inline-formula><mml:math id="Eq156-mml"><mml:mo>&#x03B5;</mml:mo></mml:math></inline-formula> and <inline-formula><mml:math id="Eq157-mml"><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mo>&#x03B5;</mml:mo></mml:mrow></mml:math></inline-formula>, and taking the limit as <inline-formula><mml:math id="Eq158-mml"><mml:mo>&#x03B5;</mml:mo></mml:math></inline-formula> approaches 0.</p>
<disp-quote>
<p><italic>Invariant Value Theory</italic>*: An option <inline-formula><mml:math id="Eq159-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:math></inline-formula> is at least as good as another option <inline-formula><mml:math id="Eq160-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:math></inline-formula> if</p>
<disp-formula id="FD7"><mml:math id="Eq161-mml"><mml:mrow><mml:mrow><mml:mtext>IV*</mml:mtext><mml:mrow><mml:mo maxsize="120%" minsize="120%">(</mml:mo><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub><mml:mo maxsize="120%" minsize="120%">)</mml:mo></mml:mrow></mml:mrow><mml:mo rspace="0.1389em">=</mml:mo><mml:mrow><mml:munder><mml:mo lspace="0.1389em" movablelimits="false" rspace="0em">lim</mml:mo><mml:mrow><mml:mo>&#x03B5;</mml:mo><mml:mo stretchy='false'>&#x2192;</mml:mo><mml:msup><mml:mn>0</mml:mn><mml:mo>+</mml:mo></mml:msup></mml:mrow></mml:munder><mml:mrow><mml:msubsup><mml:mo rspace="0em">&#x222B;</mml:mo><mml:mo>&#x03B5;</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mo>&#x03B5;</mml:mo></mml:mrow></mml:msubsup><mml:mrow><mml:mrow><mml:mo maxsize="160%" minsize="160%">(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mrow><mml:mo maxsize="160%" minsize="160%">)</mml:mo></mml:mrow><mml:mo lspace="0em"></mml:mo><mml:mrow><mml:mo rspace="0em">&#x1D451;</mml:mo><mml:mi>P</mml:mi></mml:mrow></mml:mrow></mml:mrow></mml:mrow><mml:mo>&#x2265;</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></disp-formula>
</disp-quote>
<p>Take the third problem case, Improving the Future. In it, we must compare two Skewed Aquila games: one corresponding to Campaign, and another, corresponding to Don&#8217;t Campaign, with a lower probability of the average future life having positive value and a higher probability of negative value (i.e., a lower ratio <inline-formula><mml:math id="Eq162-mml"><mml:mfrac><mml:msub><mml:mi>a</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>a</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mfrac></mml:math></inline-formula>). As illustrated below, the quantile function of Campaign is <italic>always</italic> higher than that of Don&#8217;t Campaign. That is, the difference <inline-formula><mml:math id="Eq163-mml"><mml:mrow><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:math></inline-formula> will always be positive, and so too must be the integral of that difference. Hence, so too will be the IV* of Campaign relative to Don&#8217;t Campaign&#8212;the theory will judge Campaign as better, as intuition demands.</p>
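The same machinery can be sketched for relative invariant values. The quantile functions below are hypothetical stand-ins, not the paper's exact Skewed Aquila games: both share an expectation-defying Cauchy tail, but `campaign` is uniformly higher, mirroring the dominance illustrated in Figure 7.

```python
import math

def iv_star_truncated(v_a, v_b, eps, n=200_000):
    # Midpoint-rule approximation of the truncated relative expectation:
    # the integral of v_a(p) - v_b(p) over [eps, 1 - eps].
    h = (1 - 2 * eps) / n
    total = 0.0
    for i in range(n):
        p = eps + (i + 0.5) * h
        total += (v_a(p) - v_b(p)) * h
    return total

# Hypothetical stand-in quantile functions: both have the expectation-
# defying Cauchy tail, but campaign(p) is uniformly higher, as
# Campaign's quantile function is in Improving the Future.
def dont_campaign(p):
    return math.tan(math.pi * (p - 0.5))

def campaign(p):
    return dont_campaign(p) + 0.1 * p

# The pointwise difference 0.1*p is always positive, so the truncated
# integral converges to a positive limit (here 0.05) as eps shrinks.
for eps in (1e-2, 1e-4):
    print(iv_star_truncated(campaign, dont_campaign, eps))
```

Neither option has a defined expectation on its own, yet the difference of their quantile functions integrates to a well-behaved positive number, which is exactly what IV* exploits.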
<p>Or consider the fourth problem case, Reducing Extinction Risk. In it, we must compare an option called Do Nothing, with some probability of the (perhaps Skewed) Aquila game, to another option called Intervene, with a <italic>lower</italic> probability of the same (perhaps Skewed) Aquila game, each of which otherwise results in an outcome of value 0. If we are dealing with the standard, unskewed Aquila game then, for the same reasons as above, both options have invariant value 0&#8212;the theory will say that they are equally good. If we are dealing with a Skewed Aquila game, the situation is more complicated, but can still be dealt with. If the Skewed Aquila game in question is skewed towards positive values, as illustrated below, IV*(Intervene, Do Nothing<inline-formula><mml:math id="Eq165-mml"><mml:mo stretchy='false'>)</mml:mo></mml:math></inline-formula> will be positive&#8212;the area between the curves where Intervene has the higher quantile function will be counted more quickly than the area where Do Nothing has the higher quantile function. So, the theory will say that Intervene is better, as seems intuitively correct. Similarly, where the Skewed Aquila game is skewed towards negative values, the theory will say that Do Nothing is better.</p>
<fig id="F7">
<caption>
<p><bold>Figure 7:</bold> The quantile functions of the options in Improving the Future: Campaign (the Skewed Aquila game) and Don&#8217;t Campaign (the Skewed Aquila game with a lower <inline-formula><mml:math id="Eq164-mml"><mml:mfrac><mml:msub><mml:mi>a</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>a</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mfrac></mml:math></inline-formula>).</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="ergo-9269_wilkinson-g7.png"/>
</fig>
<p>What of the fifth case, Multifarious Changes? Again, we must compare different mixtures <inline-formula><mml:math id="Eq166-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>p</mml:mi></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="Eq167-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>q</mml:mi></mml:msub></mml:math></inline-formula> of (perhaps Skewed) Aquila games, but those mixtures may be sweetened by different constant amounts; they may also involve <italic>different</italic> Skewed Aquila games (with different values of <inline-formula><mml:math id="Eq168-mml"><mml:mfrac><mml:msub><mml:mi>a</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>a</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mfrac></mml:math></inline-formula>). We are no longer comparing transformations of the same underlying option; we must now compare transformations of entirely <italic>different</italic> Aquila games. But, again, it turns out that Invariant Value Theory* can do so.<xref ref-type="fn" rid="n40">40</xref> Even in this most challenging of the problem cases, the theory succeeds in providing guidance.</p>
<p>Thus, expected value theory can be extended to provide verdicts in all of the problem cases raised above; and not just any verdicts, but those aligning with intuition. As Invariant Value Theory(*) demonstrates, we need not abandon expected value theory and resort to risk sensitivity to evaluate our future prospects, at least not due to the models of the future described above.</p>
<fig id="F8">
<caption>
<p><bold>Figure 8:</bold> The quantile functions of the options in a version of Reducing Extinction Risk. Both options, Do Nothing and Intervene, are mixtures of a Skewed Aquila game with positive skew and an outcome with value 0.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="ergo-9269_wilkinson-g8.png"/>
</fig>
<p>Before moving on, it is worth noting that Invariant Value Theory(*) (as well as other extensions of expected value theory that can deal with these cases) faces certain objections. If fatal, these objections may mean that we <italic>cannot</italic> deal with these cases while preserving the verdicts of expected value theory, at least not in any plausible way. But I think that, fortunately, they are not fatal.</p>
<p>The first such objection is that Invariant Value Theory* violates a prima facie very plausible principle and common axiom of expected value theory: <italic>Independence</italic>.</p>
<disp-quote>
<p><italic>Independence</italic>: For any options <inline-formula><mml:math id="Eq169-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:math></inline-formula>, <inline-formula><mml:math id="Eq170-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:math></inline-formula>, and <inline-formula><mml:math id="Eq171-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:math></inline-formula> and any probability <inline-formula><mml:math id="Eq172-mml"><mml:mi>p</mml:mi></mml:math></inline-formula>, <inline-formula><mml:math id="Eq173-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:math></inline-formula> is at least as good as <inline-formula><mml:math id="Eq174-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:math></inline-formula> if and only if a mixture of <inline-formula><mml:math id="Eq175-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:math></inline-formula> with probability <inline-formula><mml:math id="Eq176-mml"><mml:mi>p</mml:mi></mml:math></inline-formula> and <inline-formula><mml:math id="Eq177-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:math></inline-formula> with probability <inline-formula><mml:math id="Eq178-mml"><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mi>p</mml:mi></mml:mrow></mml:math></inline-formula> is at least as good as a mixture of <inline-formula><mml:math id="Eq179-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="Eq180-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:math></inline-formula> with the same probabilities.</p>
</disp-quote>
<p>By Independence, it does not matter what we mix <inline-formula><mml:math id="Eq181-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="Eq182-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:math></inline-formula> with; if one is better, it remains better even when we mix each of them with some further option. Insofar as we find the verdicts of expected value theory plausible, this principle seems highly plausible too. But it turns out to be violated by Invariant Value Theory*.<xref ref-type="fn" rid="n41">41</xref> This may be reason to reject the theory. Or it may instead be seen as a feature, rather than a bug&#8212;impossibility results given elsewhere (<xref ref-type="bibr" rid="B51">Wilkinson 2024b: &#167;6.2</xref>) tell us that a decision theory <italic>cannot</italic> satisfy Independence without violating one of several other very plausible conditions. Nonetheless, this is one reason we might doubt that this extension of expected value theory is satisfactory.</p>
<p>A second objection is that the definition of invariant value may be unavoidably arbitrary.<xref ref-type="fn" rid="n42">42</xref> Like expected values, invariant values are a form of probability-weighted sum of the value an option might result in. But, unlike expected values, they are obtained only by summing in a particular order (or, equivalently, by taking a particular limit)&#8212;we start from the median of the option&#8217;s distribution, and take the probability-weighted sum of value according to its quantile, summing outwards towards quantiles ever closer to 0 and 1. Why sum in this order rather than any other?<xref ref-type="fn" rid="n43">43</xref> Why not start from the quantile of 0.33 and sum twice as quickly towards 1 as towards 0? The choice may seem arbitrary. Perhaps it is. But there is one reason to think that it is not. There is only one order in which we can sum that will consistently evaluate symmetric prospects in the intuitively correct way&#8212;that assigns value <inline-formula><mml:math id="Eq183-mml"><mml:mi>v</mml:mi></mml:math></inline-formula> to any option with probability distribution symmetric about <inline-formula><mml:math id="Eq184-mml"><mml:mi>v</mml:mi></mml:math></inline-formula>. The only such order is that used by invariant value theory: starting at the median and approaching the quantiles 0 and 1 at equal speed. So, if we accept that we ought to be indifferent between any value <inline-formula><mml:math id="Eq185-mml"><mml:mi>v</mml:mi></mml:math></inline-formula> and a symmetric spread about <inline-formula><mml:math id="Eq186-mml"><mml:mi>v</mml:mi></mml:math></inline-formula>&#8212;as I think we should, insofar as we find expected value theory plausible in the first place&#8212;this order is not arbitrary after all.</p>
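This non-arbitrariness point can be checked numerically. In the toy calculation below (my own, again using the Cauchy quantile as an assumed stand-in for a symmetric, expectation-defying prospect), truncating symmetrically at [eps, 1 - eps] recovers the centre of symmetry, 0, whereas an asymmetric truncation such as [eps, 1 - 2*eps] converges instead to -ln(2)/pi, roughly -0.22, mis-valuing the symmetric prospect.

```python
import math

def cauchy_quantile(p):
    # Symmetric about p = 0.5; the associated expectation is undefined.
    return math.tan(math.pi * (p - 0.5))

def integral(f, lo, hi, n=200_000):
    # Midpoint-rule integral of f over [lo, hi].
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        total += f(lo + (i + 0.5) * h) * h
    return total

# Symmetric truncation [eps, 1 - eps]: limit 0, the centre of symmetry.
# Asymmetric truncation [eps, 1 - 2*eps], approaching the two endpoints
# at unequal speeds: the limit is -ln(2)/pi, not 0.
for eps in (1e-2, 1e-3):
    print(integral(cauchy_quantile, eps, 1 - eps),
          integral(cauchy_quantile, eps, 1 - 2 * eps))
```

The asymmetric limit follows from the antiderivative -(1/pi) ln cos(pi(p - 0.5)): the truncated integral equals -(1/pi) ln(2 cos(pi eps)), which tends to -ln(2)/pi as eps shrinks. So unequal approach speeds assign a nonzero value to a prospect symmetric about 0.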
</sec>
</sec>
<sec>
<title>6. A Notable Implication</title>
<p>As demonstrated above, expected value theory can be extended to deal with the problem cases from earlier&#8212;even if our prospects for the total value of the future are described by versions of the (Skewed) Aquila game, we can still compare the options available to us. But the comparisons we reach are perhaps surprising.</p>
<p>Recall Multifarious Changes. In that case, our options may be <italic>any</italic> two versions of the Skewed Aquila game, sweetened by some value <inline-formula><mml:math id="Eq187-mml"><mml:mi>s</mml:mi></mml:math></inline-formula> and/or mixed with an outcome of value 0. Specifically, consider a version of the case where we must choose between a) sweetening the game by some large amount <inline-formula><mml:math id="Eq188-mml"><mml:mi>s</mml:mi></mml:math></inline-formula>, and b) slightly increasing the positive skew of the Skewed Aquila game and/or slightly increasing its probability. According to Invariant Value Theory*, the latter will <italic>always</italic> be better (provided that it is skewed in the positive direction).<xref ref-type="fn" rid="n44">44</xref> And this holds <italic>no matter</italic> how large the sweetener in (a) is, and <italic>no matter</italic> how slight the changes in probability in (b). In effect, changes of the sort made in (b) are <italic>infinitely</italic> more valuable than any finite sweetening as in (a). This is perhaps surprising and counterintuitive. If the Skewed Aquila game does describe our prospects over the total moral value of the future, then some interventions focused on the long-term future will be, in effect, infinitely valuable. Increasing the probability that future lives are positive and/or get lived at all will <italic>always</italic> be more valuable than improving (finite numbers of) present lives. And this will still hold no matter how small those changes in probability, and no matter how many present lives you might otherwise improve.</p>
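A closed-form toy calculation (my own construction under assumed stand-in distributions, not the paper's) illustrates why (b) always wins eventually. Let option (b) scale the positive tail of the Cauchy stand-in up by 1%, and let option (a) add a sweetener s to every outcome. The truncated integral of the quantile difference then has a tail term that grows without bound as eps shrinks, so it overtakes any finite s; double-precision underflow limits how small eps can be taken in code, but the analytic limit is unbounded.

```python
import math

def truncated_advantage(s, eps):
    # Toy comparison: option (b) is the Cauchy stand-in with its positive
    # tail scaled up by 1%; option (a) is the same game sweetened by s.
    # The quantile difference is 0.01*tan(pi*(p - 0.5)) above p = 0.5,
    # minus s everywhere, so the truncated integral over [eps, 1 - eps]
    # is, in closed form (antiderivative -(1/pi)*ln cos(pi*(p - 0.5))):
    tail = (0.01 / math.pi) * -math.log(math.sin(math.pi * eps))
    return tail - s * (1 - 2 * eps)

# The tail term diverges as eps -> 0, so (b) eventually beats any finite
# sweetener; with s = 1, the advantage flips sign around eps ~ 1e-137.
for eps in (1e-3, 1e-50, 1e-140):
    print(truncated_advantage(1.0, eps))
```

For larger sweeteners the crossover point lies below what double precision can represent, but the logarithmic growth of the tail term guarantees it exists for every finite s, which is the "infinitely more valuable" verdict described above.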
<p>Notably, this implication is not peculiar to Invariant Value Theory*. It is also implied by the other extant extensions of expected value theory that can deal with each of these cases (i.e., those of <xref ref-type="bibr" rid="B17">Easwaran 2014b</xref>; <xref ref-type="bibr" rid="B29">Meacham 2019</xref>). And this is to be expected&#8212;the differences in the probability distributions of (a) and (b) are roughly analogous to the St Petersburg game, which any genuinely risk-neutral theory must say is better than any finite value (see <xref ref-type="bibr" rid="B23">H&#225;jek &#38; Nover 2006: 706</xref>). I suspect that <italic>no</italic> faithful extension of expected value theory will be able to avoid saying that it is always more valuable to increase the skew or probability of a (positively) Skewed Aquila game than to gain any finite value with certainty.</p>
<p>Even if the Skewed Aquila game <italic>doesn&#8217;t</italic> accurately describe our options in practice, we may still encounter a similar implication. Invariant Value Theory* says much the same for analogous changes to <italic>any</italic> probability distribution with undefined expected value, so long as it can evaluate those changes at all. (So, I suspect, will other faithful extensions of expected value theory.) So we might accept a model of the future very different from the one from earlier that gave us the Skewed Aquila game, or we might assign only a small probability to the earlier model alongside many others. And still, if the resulting probability distributions have undefined expected values (and if Invariant Value Theory* can compare them at all), then much the same will hold&#8212;it will be more valuable to increase the probability of future lives overall being positive or, if they are positive, the probability of future lives being lived at all than to improve (finite numbers of) present lives. Admittedly, this result is only suggestive&#8212;there might be a single correct model of the future which gives a defined expected value or, more plausibly, the collection of plausible models might result in probability distributions so troublesome that even Invariant Value Theory* cannot compare them. In either such case, the lessons drawn here from the Skewed Aquila game would have no bearing on practical decision-making. But, if neither is the case, we have quite radical implications for how to evaluate actions that have a small probability of greatly altering the future.</p>
</sec>
<sec>
<title>7. Conclusion</title>
<p>This discussion started with three seemingly plausible normative claims: Impartiality, Additivity, and expected value theory. In practice, are these claims compatible? Or, in practice, do they lead to absurdity?</p>
<p>As I have argued, there is some reason to think that the total expected moral value of the future is undefined. There is at least one plausible model of morally valuable events in the distant future that, if we accept Impartiality and Additivity, gives a probability distribution (the Aquila game or Skewed Aquila game) over moral value that has undefined expectation. Assign <italic>any</italic> non-zero probability to this model and, no matter what other models we might consider nor what we might expect to happen in the near-term future, our overall prospects for total moral value will inherit that undefined expected value. And this is bad news for expected value theory. If it alone were the correct decision theory, then no option ever available to us in practice would ever be morally better than any other. And this would be absurd.</p>
<p>One possible response to this absurdity is to abandon the verdicts of expected value theory altogether, in favour of some alternative theory that exhibits risk <italic>sensitivity</italic>. As demonstrated above, by doing so, we can effectively turn any expectation-defying option into a better-behaved one. If this is the only way to avoid absurdity, while holding onto Impartiality and Additivity, we may have a surprising argument in favour of risk sensitivity. But is this the only possible solution? Or can we preserve the risk-neutral verdicts of expected value theory somehow?</p>
<p>It turns out that we can&#8212;risk sensitivity may be unnecessary. We can <italic>extend</italic> expected value theory to deal with the expectation-defying options described here. Admittedly, not just any old extension of expected value theory will do&#8212;some proposals are insufficient (e.g., Relative Expectation Theory and Weak Expectation Theory). But other proposals do better, including Invariant Value Theory(*). As demonstrated above, with such an extension, we can deliver comparisons even in those various problem cases involving the (Skewed) Aquila game. (Doing so also brings on some surprising implications, as described in the previous section.)</p>
<p>Does this mean that Impartiality, Additivity, and (some extension of) expected value theory are perfectly compatible in practice; that we need not accept risk sensitivity? Maybe; maybe not. If the five problem cases given above accurately describe the decisions we face in practice, then yes. Even if they only loosely describe our real-world decisions&#8212;if our real-world options have probability distributions that behave sufficiently like the (Skewed) Aquila game&#8212;the answer is still yes. But our real-world options may also be far more complicated. For instance, the model of the distant future described above is just one possible model. There may be far more complicated models to which we should assign some probability. And, if those models give us even more challenging probability distributions, perhaps neither Invariant Value Theory(*) nor even further extensions of expected value theory will be able to compare the options we then face. If so, we may have a compelling argument for risk sensitivity once more.</p>
<p>This somewhat limits the conclusions that can be drawn here. We cannot conclude that Impartiality, Additivity, and expected value theory are <italic>guaranteed</italic> to be compatible in practice. But at least one seemingly troublesome argument against their compatibility has been undermined&#8212;on one fairly plausible model of our future, on which they seemed to conflict, they have been shown to cohere perfectly well. Perhaps there are other plausible models of the future on which they do still conflict, but this remains to be seen. For now, absent such models being proposed, it seems that we can safely endorse all three principles.</p>
</sec>
</body>
<back>
<fn-group>
<fn id="n1"><p>Note that these three claims do not imply a consequentialist moral theory. They are each a claim about moral <italic>betterness</italic>, rather than about what we <italic>ought</italic> to do. They are perfectly compatible with nonconsequentialist views as long as they recognise some notion of betterness <italic>simpliciter</italic>.</p></fn>
<fn id="n2"><p>Impartiality is defended by Sidgwick (<xref ref-type="bibr" rid="B39">1907: 414</xref>), Ramsey (<xref ref-type="bibr" rid="B37">1928: 541</xref>), and Parfit (<xref ref-type="bibr" rid="B34">1984: 486</xref>), among others.</p></fn>
<fn id="n3"><p>For strong independent justification of Additivity, see the arguments of Broome (<xref ref-type="bibr" rid="B7">2004: ch. 18</xref>) and Thomas (<xref ref-type="bibr" rid="B44">2022b: &#167;5</xref>). Note also that Additivity, as defined here, is compatible with critical-level and prioritarian views of how valuable each individual life is.</p></fn>
<fn id="n4"><p>For compelling arguments in favour of expected (moral) value theory, see, for instance, Harsanyi (<xref ref-type="bibr" rid="B25">1955</xref>), Tarsney (forthcoming), and Zhao (<xref ref-type="bibr" rid="B54">2021</xref>).</p></fn>
<fn id="n5"><p>Crucially, the problem I will discuss is not simply that our actions may have a non-zero probability of resulting in <italic>infinite</italic> value. The probability distributions involved assign <italic>no</italic> probability to outcomes of infinite value. And, so, the problem I discuss arises even if we treat infinitely-valued outcomes as a conceptual impossibility. Likewise, an analogous problem arises if we recognise infinitely-valued outcomes but we replace expected value theory with a decision theory that brackets off infinitely-valued outcomes and compares our options by only their finitely-valued outcomes (as is proposed by <xref ref-type="bibr" rid="B5">Bostrom 2011: 37&#8211;38</xref>).</p></fn>
<fn id="n6"><p>This question bears importantly on recent debates concerning our obligations to future generations. It has been argued that Impartiality, Additivity, and expected value theory (or views that imply them) provide a justification for <italic>Axiological Longtermism</italic>: the view that the best options available to us, at least in many important practical decisions, are those that most increase the <italic>ex ante</italic> moral value of the far future (<xref ref-type="bibr" rid="B21">Greaves &#38; MacAskill 2025: 3</xref>). But, if those three claims bring on absurdity, this justification for Axiological Longtermism is undermined. If so, they do not imply the verdicts needed for longtermism; they imply no practical verdicts at all.</p></fn>
<fn id="n7"><p>My reasons for setting aside such prospects are threefold. The first: it is independently interesting if we can solve the problems raised by prospects over finitely-valued outcomes alone. The second: you might in fact think that outcomes of infinite value are metaphysically or logically impossible, and so assign them probability zero in practice (cf. <xref ref-type="bibr" rid="B1">Al-Kind&#299; 1974</xref>; <xref ref-type="bibr" rid="B12">Craig 1979</xref>). The third: the problems of infinitely-valued outcomes seem solvable, but in a way that leaves intact the problems of the Pasadena game and its kin (see <xref ref-type="bibr" rid="B42">Tarsney &#38; Wilkinson 2025</xref>; <xref ref-type="bibr" rid="B47">Wilkinson 2021</xref>; <xref ref-type="bibr" rid="B49">2023</xref>).</p></fn>
<fn id="n8"><p>This game is typically presented with payoffs in terms of dollars or (decision-theoretic) utility, in amounts matching those below (e.g., <xref ref-type="bibr" rid="B32">Nover and H&#225;jek 2004</xref>; <xref ref-type="bibr" rid="B17">Easwaran 2014b</xref>; <xref ref-type="bibr" rid="B3">Bartha 2016</xref>). Such versions of the game pose problems for expected dollar maximisers and expected utility maximisers. Here, the game is presented in terms of <italic>moral value</italic> and will pose structurally identical problems for expected <italic>value</italic> maximisers.</p></fn>
<fn id="n9"><p>Since the series is conditionally convergent, this result follows from the Riemann Rearrangement Theorem.</p></fn>
<fn id="n10"><p>This curve was first described in print by de Fermat (<xref ref-type="bibr" rid="B13">1659</xref>) and first analysed as a probability distribution by Poisson (<xref ref-type="bibr" rid="B35">1824</xref>). For a discussion of this distribution in the context of decision theory, see Alexander (<xref ref-type="bibr" rid="B2">2012</xref>).</p></fn>
<fn id="n11"><p>I focus on the prospects of our causal future rather than of the world as a whole, for three reasons. The first is simplicity. The second is that there are moral views on which the proper objects of comparison are not worlds as a whole but instead consequences&#8212;the portion of the world that it is (nomologically) possible to influence in a given decision (see, e.g., <xref ref-type="bibr" rid="B5">Bostrom 2011: &#167;3.2</xref>). And the third is that, if our future prospects have undefined expected value, then so too will the prospects of the world as a whole (unless the values of events inside and outside our causal future are very strongly anti-correlated, and we have no reason to think that they are). So, it suffices to focus on the value of our causal future.</p></fn>
<fn id="n12"><p>Other notions, of <italic>objective chances</italic>, may also sometimes be morally relevant, but only insofar as they constrain the evidential and subjective probabilities. They do not <italic>ultimately</italic> determine moral betterness, I assume.</p></fn>
<fn id="n13"><p>This line of thinking might be captured in the much-discussed principle of <italic>Regularity</italic>: that only logically (or perhaps metaphysically, or doxastically) impossible propositions have evidential probability zero (see <xref ref-type="bibr" rid="B18">Edwards et al. 1963</xref>; <xref ref-type="bibr" rid="B16">Easwaran 2014a</xref>). But this principle is controversial (see, for instance, <xref ref-type="bibr" rid="B36">Pruss 2013</xref>).</p></fn>
<fn id="n14"><p>To similar effect, you might instead think that the correct <italic>decision</italic> theory is knowledge-based: that, when comparing prospects, we can evaluate each prospect once we conditionalise on our knowledge (see <xref ref-type="bibr" rid="B27">Liu nd</xref>).</p></fn>
<fn id="n15"><p>This claim could be treated as a weakened form of Regularity (see fn 13): that only for a logically (or perhaps metaphysically, or doxastically) impossible outcome <inline-formula><mml:math id="Eq189-mml"><mml:mi>O</mml:mi></mml:math></inline-formula> can the proposition &#8220;Outcome <inline-formula><mml:math id="Eq190-mml"><mml:mi>O</mml:mi></mml:math></inline-formula> occurs&#8221; have an evidential probability of zero.</p></fn>
<fn id="n16"><p>The general theory of relativity tells us that there is no absolute notion of a <italic>time</italic> <inline-formula><mml:math id="Eq191-mml"><mml:mi>t</mml:mi></mml:math></inline-formula>, nor of the period before time <inline-formula><mml:math id="Eq192-mml"><mml:mi>t</mml:mi></mml:math></inline-formula>, nor the period after it&#8212;the set of events that we carve out as occurring at the same time <inline-formula><mml:math id="Eq193-mml"><mml:mi>t</mml:mi></mml:math></inline-formula> (or, equivalently, as being simultaneous with one another) is sensitive to the velocity at which we do the carving. But, when talking of a point in spacetime such as <bold>t</bold>, there is a set of events that occur <italic>after</italic> it when observed at any velocity. This set corresponds to those events within <bold>t</bold>&#8217;s <italic>future lightcone</italic>: the region that, starting from <bold>t</bold>, you could hypothetically reach while travelling at the speed of light or slower.</p>
<p><styled-content style="display: block">Note that, in what follows, &#8220;after <bold>t</bold>&#8221; is meant as intuitive shorthand for &#8220;in the future lightcone of <bold>t</bold>&#8221; and &#8220;before <bold>t</bold>&#8221; as shorthand for &#8220;outside the future lightcone of <bold>t</bold>&#8221;. Note also that there are many possible points that we could define as <bold>t</bold> here; indeed, infinitely many! And with different choices of <bold>t</bold> may come different prospects over the value arising after <bold>t</bold>. Fortunately, what I say below will hold on <italic>any</italic> choice of <bold>t</bold>.</styled-content></p></fn>
<fn id="n17"><p>Perhaps <bold>t</bold> lies after the so-called heat death of the universe. But note that even that predicted heat death is a continuation of a long-running trend of cosmological expansion&#8212;of the universe increasing in entropy and, beyond some point, qualifying as having undergone heat death. Still, the universe will never quite reach a state of perfect entropy, so there is no genuine categorical difference between the time before heat death and the time after it (<xref ref-type="bibr" rid="B14">Dyson et al. 2002</xref>).</p></fn>
<fn id="n18"><p>This is not the only way that clusters of life may become completely physically isolated&#8212;for instance, such isolated clusters would be generated if humanity created and populated new &#8220;baby universes.&#8221; The possibility of doing this is somewhat supported by the prominent <italic>inflationary view</italic> of cosmology, under which our own universe was created by a quantum tunnelling event (see <xref ref-type="bibr" rid="B45">Vilenkin 1983</xref>). It is far from settled whether inflationary cosmology would indeed allow this but, on our current understanding, it is certainly a live possibility (<xref ref-type="bibr" rid="B19">Farhi et al. 1990</xref>). And, independently, there is theoretical support for the possibility of creating new universes via the formation of black holes, and for the claim that universes created in this way may be temporarily accessible to their creators (<xref ref-type="bibr" rid="B6">Brandenberger et al. 2021</xref>; <xref ref-type="bibr" rid="B20">Frolov et al. 1990</xref>). (For an accessible survey of this topic, see <xref ref-type="bibr" rid="B30">Merali 2017</xref>.)</p></fn>
<fn id="n19"><p>Cf. Sandberg and Armstrong (<xref ref-type="bibr" rid="B38">2012</xref>).</p></fn>
<fn id="n20"><p>As above, relativity makes things more complicated here (see also fn 16). Our carving up of events in spacetime into time periods, and our measurement of the duration of such time periods, is sensitive to the velocity at which we do the carving up and the measuring. But still, at any such velocity, it will hold that more clusters means a lower probability of overall extinction. Here and in what follows, &#8220;at a time,&#8221; &#8220;in a given time period,&#8221; and &#8220;the number of years&#8221; are to be understood as determined by some observer travelling at <italic>any</italic> given velocity.</p></fn>
<fn id="n21"><p>The above model, treated as a Kendall process with death and birth rates <inline-formula><mml:math id="Eq194-mml"><mml:mo>&#x03BC;</mml:mo></mml:math></inline-formula> and <inline-formula><mml:math id="Eq195-mml"><mml:mo>&#x03BB;</mml:mo></mml:math></inline-formula> respectively, gives the following probability distribution over the total number of cluster-years multiplied by the average moral value of each (from <xref ref-type="bibr" rid="B28">McNeil 1970: &#167;5.b</xref>).</p>
<p><disp-formula id="FD8"><mml:math id="Eq196-mml"><mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>v</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:msqrt><mml:mfrac><mml:mo>&#x03BC;</mml:mo><mml:mo>&#x03BB;</mml:mo></mml:mfrac></mml:msqrt><mml:mfrac><mml:mrow><mml:msub><mml:mi>I</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>2</mml:mn><mml:mrow><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>v</mml:mi><mml:mo stretchy='false'>&#x007C;</mml:mo></mml:mrow><mml:msqrt><mml:mrow><mml:mo>&#x03BC;</mml:mo><mml:mo>&#x03BB;</mml:mo></mml:mrow></mml:msqrt></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:mrow><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>v</mml:mi><mml:mo stretchy='false'>&#x007C;</mml:mo></mml:mrow><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mrow><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>v</mml:mi><mml:mo stretchy='false'>&#x007C;</mml:mo></mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mo>&#x03BC;</mml:mo><mml:mo>+</mml:mo><mml:mo>&#x03BB;</mml:mo></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:msup></mml:mrow></mml:mfrac></mml:mrow></mml:mrow></mml:math></disp-formula></p>
<p><styled-content style="display: block">Here, <inline-formula><mml:math id="Eq197-mml"><mml:mrow><mml:msub><mml:mi>I</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>x</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> is the first-order modified Bessel function of the first kind, which is equivalent to <inline-formula><mml:math id="Eq198-mml"><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mo>&#x03C0;</mml:mo></mml:mfrac><mml:mrow><mml:msubsup><mml:mo>&#x222B;</mml:mo><mml:mn>0</mml:mn><mml:mo>&#x03C0;</mml:mo></mml:msubsup><mml:mrow><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mi>x</mml:mi><mml:mo lspace="0.167em"></mml:mo><mml:mrow><mml:mi>cos</mml:mi><mml:mo lspace="0.167em"></mml:mo><mml:mo>&#x03B8;</mml:mo></mml:mrow></mml:mrow></mml:msup><mml:mo lspace="0.167em"></mml:mo><mml:mrow><mml:mi>cos</mml:mi><mml:mo lspace="0.167em"></mml:mo><mml:mrow><mml:mo>&#x03B8;</mml:mo><mml:mi>d</mml:mi><mml:mo>&#x03B8;</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:mrow></mml:mrow></mml:math></inline-formula>. (I am grateful to Alexander Barry for assistance with these details.)</styled-content></p>
<p><styled-content style="display: block">Crucially, when <inline-formula><mml:math id="Eq199-mml"><mml:mrow><mml:mo>&#x03BC;</mml:mo><mml:mo>&#x2264;</mml:mo><mml:mo>&#x03BB;</mml:mo></mml:mrow></mml:math></inline-formula>, that distribution lacks a defined expectation. It also matches the equation for the Aquila game above: the variants of this model raised in the following section behave the same as the corresponding variants of the Aquila game, both under expected value theory and under the various theories I introduce in &#167;5. For my purposes, then, it suffices to focus on the (much simpler) Aquila game.</styled-content></p></fn>
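To see why the expectation is undefined, here is a brief sketch (my own, not from the text): it combines the standard large-argument asymptotic $I_1(x)\sim e^x/\sqrt{2\pi x}$ with the fact that, when $\lambda>\mu$, the process avoids extinction with probability $1-\mu/\lambda$, placing that much probability mass at $|v|=\infty$.

```latex
p(v) \;=\; \sqrt{\frac{\mu}{\lambda}}\,
      \frac{I_1\!\left(2|v|\sqrt{\mu\lambda}\right)}{2|v|\,e^{|v|(\mu+\lambda)}}
  \;\sim\; \frac{C}{|v|^{3/2}}\, e^{-(\sqrt{\lambda}-\sqrt{\mu})^{2}|v|}
  \qquad \text{as } |v|\to\infty .
```

When $\mu=\lambda$, the exponential factor disappears and the tail decays only as $|v|^{-3/2}$, so $\int |v|\,p(v)\,\mathrm{d}v \sim \int |v|^{-1/2}\,\mathrm{d}v$ diverges; when $\mu<\lambda$, the non-extinction mass $1-\mu/\lambda>0$ already places infinite value in the prospect.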
<fn id="n22"><p>Given its connection to the St Petersburg game and its cosmic motivation, the game takes its name from the location of the Petra system in our night sky: the Aquila constellation.</p></fn>
<fn id="n23"><p>As above, I assume that events before and after <bold>t</bold> are not strongly anti-correlated (see fn 13).</p></fn>
<fn id="n24"><p>The distribution will likely also be far more spread out than this, but I will put that complication aside, as it will simply result in an overall distribution with tails that approach 0 even more slowly than those of the Aquila game. The same problems as below will arise and the same solutions will hold.</p></fn>
<fn id="n25"><p>Note that dealing with these five problem cases is <italic>necessary</italic> but perhaps not <italic>sufficient</italic> for extensional adequacy.</p></fn>
<fn id="n26"><p>Failure to rank these options as equally good can also be characterised as a violation of <italic>Stochastic Equivalence</italic>.</p>
<p><disp-quote>
<p><italic>Stochastic Equivalence</italic>: For any options <inline-formula><mml:math id="Eq200-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="Eq201-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:math></inline-formula>, if for every possible outcome <inline-formula><mml:math id="Eq202-mml"><mml:mi>O</mml:mi></mml:math></inline-formula> both <inline-formula><mml:math id="Eq203-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="Eq204-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:math></inline-formula> have the same probability of resulting in an outcome exactly as good as <inline-formula><mml:math id="Eq205-mml"><mml:mi>O</mml:mi></mml:math></inline-formula>, then <inline-formula><mml:math id="Eq206-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="Eq207-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:math></inline-formula> are equally good.</p>
</disp-quote></p>
<p>This principle is both intuitively very plausible and one that expected value theory easily satisfies for finitely-supported prospects.</p></fn>
<fn id="n27"><p>If that improvement is less-than-certain, we have a slightly different scenario. Fortunately, each of the proposed theories below that give the correct verdict in Improving the Present happens to give the same verdict in this different scenario, so I will not dwell on it here.</p></fn>
<fn id="n28"><p>This case is an analogue of the widely discussed comparison of the Pasadena game to the <italic>Altadena</italic> game (introduced by <xref ref-type="bibr" rid="B32">Nover &#38; H&#225;jek 2004: 241</xref>). In both cases, a failure to rank the latter option as better is a violation of <italic>Weak Stochastic Dominance</italic>.</p>
<p><disp-quote>
<p><italic>Weak Stochastic Dominance</italic>: If, for every possible outcome <inline-formula><mml:math id="Eq208-mml"><mml:mi>O</mml:mi></mml:math></inline-formula>, one option <inline-formula><mml:math id="Eq209-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:math></inline-formula> has a strictly higher probability than another option <inline-formula><mml:math id="Eq210-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:math></inline-formula> of an outcome at least as good as <inline-formula><mml:math id="Eq211-mml"><mml:mi>O</mml:mi></mml:math></inline-formula>, then <inline-formula><mml:math id="Eq212-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:math></inline-formula> is strictly better than <inline-formula><mml:math id="Eq213-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:math></inline-formula>.</p>
</disp-quote></p>
<p><styled-content style="display: block">Like Stochastic Equivalence, this principle is both intuitively very plausible and one that expected value theory easily satisfies for finitely-supported prospects.</styled-content></p></fn>
<fn id="n29"><p>Similar to the failure in Improving the Present, a failure to rank Campaign as better than Don&#8217;t Campaign is a violation of Weak Stochastic Dominance (see fn 28).</p></fn>
<fn id="n30"><p>A failure to rank these options in the ways described would also violate Weak Stochastic Dominance (see fn 28).</p></fn>
<fn id="n31"><p>As von Neumann and Morgenstern (<xref ref-type="bibr" rid="B46">1953: 28</xref>) put it, utility is simply &#8220;&#8230;that thing for which the calculus of mathematical expectations is legitimate.&#8221;</p></fn>
<fn id="n32"><p>A similar result could be achieved with a modified version of Buchak&#8217;s (<xref ref-type="bibr" rid="B8">2013</xref>) <italic>risk-weighted expected utility</italic> (REU) theory (even with utility linear with respect to moral value). That theory says that an option <inline-formula><mml:math id="Eq214-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:math></inline-formula> should be evaluated by REU<inline-formula><mml:math id="Eq215-mml"><mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>L</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo rspace="0.055em">+</mml:mo><mml:mrow><mml:msubsup><mml:mo rspace="0em">&#x2211;</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>n</mml:mi></mml:msubsup><mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>j</mml:mi><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo rspace="0.055em" stretchy='false'>)</mml:mo></mml:mrow><mml:mo rspace="0.222em">&#x22C5;</mml:mo><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mo maxsize="120%" minsize="120%">(</mml:mo><mml:mrow><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>L</mml:mi><mml:mo>&#x2265;</mml:mo><mml:msub><mml:mi>u</mml:mi><mml:mi>j</mml:mi></mml:msub></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo maxsize="120%" minsize="120%">)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:mrow></mml:mrow></mml:math></inline-formula>, where the utilities of possible outcomes are given in ascending order by <inline-formula><mml:math id="Eq216-mml"><mml:mrow><mml:mo 
stretchy='false'>{</mml:mo><mml:msub><mml:mi>u</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>u</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo></mml:mrow></mml:math></inline-formula>&#8230;<inline-formula><mml:math id="Eq217-mml"><mml:mrow><mml:mo>,</mml:mo><mml:msub><mml:mi>u</mml:mi><mml:mi>n</mml:mi></mml:msub><mml:mo stretchy='false'>}</mml:mo></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="Eq218-mml"><mml:mrow><mml:mi>r</mml:mi><mml:mo lspace="0.278em" rspace="0.278em">:</mml:mo><mml:mrow><mml:mrow><mml:mo stretchy='false'>[</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy='false'>]</mml:mo></mml:mrow><mml:mo stretchy='false'>&#x2192;</mml:mo><mml:mrow><mml:mo stretchy='false'>[</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy='false'>]</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:math></inline-formula> is some non-decreasing function describing a particular risk attitude. 
When applied to prospects with continuous distributions and over outcomes with unbounded values, we might adjust the theory in two ways: 1) replace the discrete sum with an integral; and 2) take separately the REU of the conditional prospects i) <inline-formula><mml:math id="Eq219-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:math></inline-formula>, conditional on <inline-formula><mml:math id="Eq220-mml"><mml:mrow><mml:mi>u</mml:mi><mml:mo>&#x2265;</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula>, and ii) <inline-formula><mml:math id="Eq221-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:math></inline-formula>, conditional on <inline-formula><mml:math id="Eq222-mml"><mml:mrow><mml:mi>u</mml:mi><mml:mo>&#x003C;</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula>, with the latter calculated &#8220;in reverse,&#8221; using the equation REU<inline-formula><mml:math id="Eq223-mml"><mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mrow><mml:mi>L</mml:mi><mml:mo fence="false">&#x007C;</mml:mo><mml:mi>u</mml:mi></mml:mrow><mml:mo>&#x003C;</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>n</mml:mi></mml:msub><mml:mo rspace="0.055em">&#x2013;</mml:mo><mml:mrow><mml:msubsup><mml:mo rspace="0em">&#x2211;</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>n</mml:mi></mml:msubsup><mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>j</mml:mi><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo rspace="0.055em" stretchy='false'>)</mml:mo></mml:mrow><mml:mo rspace="0.222em">&#x22C5;</mml:mo><mml:mi>r</mml:mi></mml:mrow><mml:mtext>*</mml:mtext><mml:mrow><mml:mo maxsize="120%" 
minsize="120%">(</mml:mo><mml:mrow><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>L</mml:mi><mml:mo>&#x003C;</mml:mo><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>j</mml:mi><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo maxsize="120%" minsize="120%">)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:mrow></mml:mrow></mml:math></inline-formula> (and a suitable <inline-formula><mml:math id="Eq224-mml"><mml:mi>r</mml:mi></mml:math></inline-formula>* function). Doing so has an effect similar to that under expected utility of adopting the utility function illustrated above. But proponents of REU theory may baulk at this modification of their theory&#8212;particularly (2)&#8212;which may seem ad hoc, arbitrary, and poorly motivated.</p></fn>
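The discrete REU formula above can be sketched numerically. The following is a minimal illustration (the function name, risk functions, and example lottery are my own, not Buchak's): it evaluates a finite lottery by summing utility increments weighted by the risk-transformed probability of doing at least that well.

```python
def reu(outcomes, r):
    """Risk-weighted expected utility of a finite lottery.

    `outcomes` is a list of (utility, probability) pairs and `r` is a
    non-decreasing risk function from [0, 1] to [0, 1]. Implements
    REU(L) = u_0 + sum_j (u_j - u_{j-1}) * r(P(L >= u_j)),
    with utilities taken in ascending order u_0 <= u_1 <= ... <= u_n.
    """
    pairs = sorted(outcomes)                  # ascending utilities
    utils = [u for u, _ in pairs]
    probs = [p for _, p in pairs]
    total = utils[0]
    for j in range(1, len(pairs)):
        p_at_least = sum(probs[j:])           # P(L >= u_j)
        total += (utils[j] - utils[j - 1]) * r(p_at_least)
    return total

fifty_fifty = [(0, 0.5), (10, 0.5)]
print(reu(fifty_fifty, lambda p: p))          # r(p) = p recovers the expected value: 5.0
print(reu(fifty_fifty, lambda p: p ** 2))     # a convex r discounts the upside: 2.5
```

With the identity risk function the rule collapses to expected value, which is why only a non-linear <italic>r</italic> introduces genuine risk sensitivity.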
<fn id="n33"><p>There are other possible objections to this argument for risk sensitivity. The first is that the risk sensitivity needed to solve the problem will lack independent justification. Unlike the view advocated by Buchak (<xref ref-type="bibr" rid="B8">2013</xref>), the risk sensitivity necessary here will not arise from nor match the agent&#8217;s own preferences over different means to their desired ends. Here, it must be imposed externally, and will often diverge from the agent&#8217;s own attitudes. If anything, the standard motivation for risk sensitivity may tell <italic>against</italic> this argument.</p>
<p><styled-content style="display: block">A second, related objection is methodological. Even if we do accept that there is a correct universal, agent-neutral attitude to risk, we might think that the correct method of settling it is to consider simple, idealised cases in which our normative intuitions are especially clear (cf. <xref ref-type="bibr" rid="B8">Buchak 2013: &#167;2.3</xref>) and to reason from those to more complicated practical cases. If we instead determine the correct risk attitude based on the options we <italic>in fact</italic> face, as we may need to in order to solve the problem described here, it may seem that we make a methodological mistake. (I am grateful to Johanna Thoma for suggesting this objection.)</styled-content></p>
<p><styled-content style="display: block">A third objection is that, particularly in the moral setting, risk <italic>neutrality</italic> has some powerful arguments in its favour. These include Harsanyi&#8217;s (<xref ref-type="bibr" rid="B25">1955</xref>) classic social aggregation theorem and various others (e.g., <xref ref-type="bibr" rid="B41">Tarsney 2025</xref>; <xref ref-type="bibr" rid="B43">Thomas 2022a</xref>; <xref ref-type="bibr" rid="B48">Wilkinson 2022</xref>; <xref ref-type="bibr" rid="B50">2024a</xref>; <xref ref-type="bibr" rid="B52">nd</xref>; <xref ref-type="bibr" rid="B54">Zhao 2021</xref>). By such arguments, if we adopt an aggregative theory of moral betterness but admit sensitivity to risk, we must violate one or another highly plausible principle.</styled-content></p></fn>
<fn id="n34"><p>Note that I am interested here only in whether we <italic>can</italic> accommodate plausible verdicts without giving up risk neutrality&#8212;whether we can find an extensionally adequate theory&#8212;not the further question of whether we can independently motivate such a theory, which is beyond the scope of this paper. For independent motivation for the theory I endorse below, see Wilkinson (<xref ref-type="bibr" rid="B51">2024b</xref>).</p></fn>
<fn id="n35"><p>Colyvan and H&#225;jek (<xref ref-type="bibr" rid="B11">2016</xref>); Meacham (<xref ref-type="bibr" rid="B29">2019</xref>).</p></fn>
<fn id="n36"><p>Other proposals include Easwaran&#8217;s (<xref ref-type="bibr" rid="B17">2014b</xref>) <italic>Principal Value Theory</italic>, which can deal with the first three cases, and Meacham&#8217;s (<xref ref-type="bibr" rid="B29">2019</xref>) further extension of <italic>Difference Minimising Theory</italic>, which can deal with the latter two cases. Principal Value Theory operates much like Invariant Value Theory, except it truncates the option&#8217;s probability distribution rather than its quantile function, and takes the limit of the option&#8217;s expectation as the truncated distribution approaches the true distribution. If we define <inline-formula><mml:math id="Eq225-mml"><mml:msubsup><mml:mi>O</mml:mi><mml:mi>a</mml:mi><mml:mrow><mml:mrow><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>v</mml:mi><mml:mo stretchy='false'>&#x007C;</mml:mo></mml:mrow><mml:mo>&#x2A7D;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> as the prospect that assigns the same probability as <inline-formula><mml:math id="Eq226-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:math></inline-formula> to every possible value with absolute value up to <inline-formula><mml:math id="Eq227-mml"><mml:mi>n</mml:mi></mml:math></inline-formula>, and redistributes the remaining probability mass (taken from values below <inline-formula><mml:math id="Eq228-mml"><mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:math></inline-formula> and above <inline-formula><mml:math id="Eq229-mml"><mml:mrow><mml:mo>+</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:math></inline-formula>) to value 0, then the theory can be characterised as follows.</p>
<p><disp-quote>
<p><italic>Principal Value Theory</italic>: A prospect <inline-formula><mml:math id="Eq230-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:math></inline-formula> is at least as good as another prospect <inline-formula><mml:math id="Eq231-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:math></inline-formula> if PV<inline-formula><mml:math id="Eq232-mml"><mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>&#x2265;</mml:mo><mml:mi/></mml:mrow></mml:math></inline-formula>PV<inline-formula><mml:math id="Eq233-mml"><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></inline-formula>, where</p>
<p><disp-formula id="FD9"><mml:math id="Eq234-mml"><mml:mrow><mml:mrow><mml:mtext>PV</mml:mtext><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>O</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo rspace="0.1389em">=</mml:mo><mml:mrow><mml:munder><mml:mo lspace="0.1389em" movablelimits="false" rspace="0.167em">lim</mml:mo><mml:mrow><mml:mi>n</mml:mi><mml:mo stretchy='false'>&#x2192;</mml:mo><mml:mi mathvariant="normal">&#x221E;</mml:mi></mml:mrow></mml:munder><mml:mrow><mml:mi mathvariant="double-struck">E</mml:mi><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:msup><mml:mi>O</mml:mi><mml:mrow><mml:mrow><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>v</mml:mi><mml:mo stretchy='false'>&#x007C;</mml:mo></mml:mrow><mml:mo>&#x2A7D;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:msup><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:mrow></mml:math></disp-formula></p>
<p>and <inline-formula><mml:math id="Eq235-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="Eq236-mml"><mml:msub><mml:mi>O</mml:mi><mml:mi>b</mml:mi></mml:msub></mml:math></inline-formula> each satisfy a technical condition called <italic>stability</italic> (see <xref ref-type="bibr" rid="B17">Easwaran 2014b: 524&#8211;525</xref>).</p>
</disp-quote></p>
<p><styled-content style="display: block">It turns out that, in the five problem cases from earlier, each option has a defined principal value PV (and satisfies Easwaran&#8217;s stability condition). As a result, the theory can successfully compare all five pairs of options. (Meacham&#8217;s proposal can too, as it is a further extension of Principal Value Theory, effectively combining it with Relative Expectation Theory.)</styled-content></p></fn>
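Principal Value Theory's truncate-then-take-the-limit procedure can be sketched numerically with the Pasadena game from fn 28 (this sketch, including the function name, is my own illustration): trial <italic>k</italic> pays (&#8211;1)<sup>k&#8211;1</sup>&#8201;2<sup>k</sup>/k with probability 2<sup>&#8211;k</sup>, and the game's value is standardly computed as ln&#8201;2.

```python
from math import log

def truncated_ev(n):
    """Expected value of the Pasadena game after discarding outcomes
    whose payoff exceeds n in absolute value (that probability mass,
    as in Principal Value Theory's truncation, is moved to value 0).

    Trial k pays (-1)**(k-1) * 2**k / k with probability 2**-k, so
    each retained trial contributes (-1)**(k-1) / k to the expectation.
    """
    total, k = 0.0, 1
    while 2 ** k / k <= n:   # payoff magnitudes are non-decreasing in k
        total += (-1) ** (k - 1) / k
        k += 1
    return total

# As the truncation bound n grows, the truncated expectations converge
# to ln(2) ~= 0.693, the game's principal value.
for n in (10, 10 ** 3, 10 ** 6, 10 ** 9):
    print(n, truncated_ev(n))
```

Because the Pasadena game's payoff magnitudes grow with <italic>k</italic>, truncating by absolute value retains exactly an initial segment of trials, so the truncated expectations are partial sums of the alternating harmonic series.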
<fn id="n37"><p>Smith (<xref ref-type="bibr" rid="B40">2014</xref>) proposes a similar method of truncation, although his final proposal is very different.</p></fn>
<fn id="n38"><p>For a fuller discussion of the advantages and disadvantages of this theory relative to its rivals, see Wilkinson (<xref ref-type="bibr" rid="B51">2024b</xref>).</p></fn>
<fn id="n39"><p>This is analogous to Meacham&#8217;s (<xref ref-type="bibr" rid="B29">2019: 1021</xref>) method of extending Easwaran&#8217;s (<xref ref-type="bibr" rid="B17">2014b</xref>) Principal Value Theory.</p></fn>
<fn id="n40"><p>This very general claim follows from Theorems 2 and 3 in Wilkinson (<xref ref-type="bibr" rid="B51">2024b</xref>). Any pair of perhaps sweetened, perhaps skewed, and perhaps mixed versions of the Aquila game in this case will satisfy the conditions given therein, and so be comparable by Invariant Value Theory*.</p></fn>
<fn id="n41"><p>For a demonstration of this, see <xref ref-type="bibr" rid="B51">Wilkinson 2024b: &#167;6.2</xref>. Notably, Independence is also violated by other extensions of expected value theory that can deal with the above cases, such as Meacham&#8217;s (<xref ref-type="bibr" rid="B29">2019</xref>) Difference Minimising Theory&#8212;see Wilkinson (<xref ref-type="bibr" rid="B51">2024b</xref>).</p></fn>
<fn id="n42"><p>I am grateful to two anonymous reviewers for pressing me to address this objection.</p></fn>
<fn id="n43"><p>A different order is used in Easwaran&#8217;s (<xref ref-type="bibr" rid="B17">2014b</xref>) proposal of Principal Value Theory: start at value 0, take the probability-weighted sum of value between values <inline-formula><mml:math id="Eq237-mml"><mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="Eq238-mml"><mml:mi>n</mml:mi></mml:math></inline-formula>, and let <inline-formula><mml:math id="Eq239-mml"><mml:mi>n</mml:mi></mml:math></inline-formula> tend to infinity. Much the same objection is raised against that theory by Alexander (<xref ref-type="bibr" rid="B2">2012: 720&#8211;721</xref>) and Easwaran (<xref ref-type="bibr" rid="B17">2014b: 528</xref>).</p></fn>
<fn id="n44"><p>Similarly, if the latter option is equally skewed and in the negative direction, then it will always be worse.</p></fn>
</fn-group>
<sec>
<title>Acknowledgements</title>
<p>I am grateful to Adam Bales, Jacob Barrett, Tomi Francis, Harvey Lederman, Andreas Mogensen, Toby Newberry, Jeffrey Russell, Christian Tarsney, Johanna Thoma, Teru Thomas, David Thorstad, Timothy L. Williamson, and two anonymous reviewers for their generous feedback on various versions of this paper. In particular, for his extensive assistance with the mathematical details in &#167;2.2, I thank Alexander Barry.</p>
</sec>
<ref-list>
<ref id="B1"><mixed-citation publication-type="book"><collab>Al-Kind&#299;</collab> (<year>1974</year>). <source>Al-Kind&#299;&#8217;s Metaphysics: A Translation of Ya&#8217;q&#363;b ibn Ish&#257;q al-Kind&#299;&#8217;s Treatise &#8216;On First Philosophy&#8217;</source>. <publisher-name>State University of New York Press</publisher-name>.</mixed-citation></ref>
<ref id="B2"><mixed-citation publication-type="journal"><string-name><surname>Alexander</surname>, <given-names>J. McKenzie</given-names></string-name> (<year>2012</year>). <article-title>Decision Theory Meets the Witch of Agnesi</article-title>. <source>Journal of Philosophy</source>, <volume>109</volume>(<issue>12</issue>), <fpage>712</fpage>&#8211;<lpage>727</lpage>.</mixed-citation></ref>
<ref id="B3"><mixed-citation publication-type="journal"><string-name><surname>Bartha</surname>, <given-names>Paul F. A.</given-names></string-name> (<year>2016</year>). <article-title>Making Do Without Expectations</article-title>. <source>Mind</source>, <volume>125</volume>(<issue>499</issue>), <fpage>799</fpage>&#8211;<lpage>827</lpage>.</mixed-citation></ref>
<ref id="B4"><mixed-citation publication-type="webpage"><string-name><surname>Baumann</surname>, <given-names>Tobias</given-names></string-name> (<year>2017</year>). <article-title>S-risks: An introduction</article-title>. <source>Center for Reducing Suffering</source>. Available at: <uri>https://centerforreducingsuffering.org/research/intro/</uri> (accessed March 2022).</mixed-citation></ref>
<ref id="B5"><mixed-citation publication-type="journal"><string-name><surname>Bostrom</surname>, <given-names>Nick</given-names></string-name> (<year>2011</year>). <article-title>Infinite Ethics</article-title>. <source>Analysis and Metaphysics</source>, <volume>10</volume>, <fpage>9</fpage>&#8211;<lpage>59</lpage>.</mixed-citation></ref>
<ref id="B6"><mixed-citation publication-type="journal"><string-name><surname>Brandenberger</surname>, <given-names>Robert</given-names></string-name>, <string-name><given-names>Lavinia</given-names> <surname>Heisenberg</surname></string-name>, and <string-name><given-names>Jakob</given-names> <surname>Robnik</surname></string-name> (<year>2021</year>). <article-title>Through a Black Hole into a New Universe</article-title>. <source>International Journal of Modern Physics D</source>, <volume>30</volume>(<issue>14</issue>), <elocation-id>2142001</elocation-id>.</mixed-citation></ref>
<ref id="B7"><mixed-citation publication-type="book"><string-name><surname>Broome</surname>, <given-names>John</given-names></string-name> (<year>2004</year>). <source>Weighing Lives</source>. <publisher-name>Blackwell</publisher-name>.</mixed-citation></ref>
<ref id="B8"><mixed-citation publication-type="book"><string-name><surname>Buchak</surname>, <given-names>Lara</given-names></string-name> (<year>2013</year>). <source>Risk and Rationality</source>. <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="B9"><mixed-citation publication-type="journal"><string-name><surname>Busha</surname>, <given-names>Michael T.</given-names></string-name>, <string-name><given-names>Fred C.</given-names> <surname>Adams</surname></string-name>, <string-name><given-names>Risa H.</given-names> <surname>Wechsler</surname></string-name>, and <string-name><given-names>August E.</given-names> <surname>Evrard</surname></string-name> (<year>2003</year>). <article-title>Future Evolution of Cosmic Structure in an Accelerating Universe</article-title>. <source>The Astrophysical Journal</source>, <volume>596</volume>(<issue>2</issue>), <elocation-id>713</elocation-id>.</mixed-citation></ref>
<ref id="B10"><mixed-citation publication-type="journal"><string-name><surname>Colyvan</surname>, <given-names>Mark</given-names></string-name> (<year>2008</year>). <article-title>Relative Expectation Theory</article-title>. <source>Journal of Philosophy</source>, <volume>105</volume>(<issue>1</issue>), <fpage>37</fpage>&#8211;<lpage>44</lpage>.</mixed-citation></ref>
<ref id="B11"><mixed-citation publication-type="journal"><string-name><surname>Colyvan</surname>, <given-names>Mark</given-names></string-name> and <string-name><given-names>Alan</given-names> <surname>H&#225;jek</surname></string-name> (<year>2016</year>). <article-title>Making Ado Without Expectations</article-title>. <source>Mind</source>, <volume>125</volume>(<issue>499</issue>), <fpage>829</fpage>&#8211;<lpage>857</lpage>.</mixed-citation></ref>
<ref id="B12"><mixed-citation publication-type="journal"><string-name><surname>Craig</surname>, <given-names>William Lane</given-names></string-name> (<year>1979</year>). <article-title>Whitrow and Popper on the Impossibility of an Infinite Past</article-title>. <source>British Journal for the Philosophy of Science</source>, <volume>30</volume>(<issue>2</issue>), <fpage>165</fpage>&#8211;<lpage>170</lpage>.</mixed-citation></ref>
<ref id="B13"><mixed-citation publication-type="book"><string-name><surname>de Fermat</surname>, <given-names>Pierre</given-names></string-name> (c. <year>1659</year>). <chapter-title>De aequationum localium transmutatione et emendatione ad multimodaum curvilineorum inter se vel cum rectilineis comparationem, cui annectitur proportionis geometricae in quadrandis infinitis parabolis et hyperbolis usus</chapter-title>. In <string-name><given-names>Paul</given-names> <surname>Tannery</surname></string-name> and <string-name><given-names>Charles</given-names> <surname>Henry</surname></string-name> (Eds.), <source>&#338;uvres de Pierre Fermat</source> (<fpage>216</fpage>&#8211;<lpage>237</lpage>). <publisher-name>Gauthier-Villars</publisher-name>.</mixed-citation></ref>
<ref id="B14"><mixed-citation publication-type="journal"><string-name><surname>Dyson</surname>, <given-names>Lisa</given-names></string-name>, <string-name><given-names>Matthew</given-names> <surname>Kleban</surname></string-name>, and <string-name><given-names>Leonard</given-names> <surname>Susskind</surname></string-name> (<year>2002</year>). <article-title>Disturbing Implications of a Cosmological Constant</article-title>. <source>Journal of High Energy Physics</source>, <volume>2002</volume>(<issue>10</issue>), <elocation-id>011</elocation-id>.</mixed-citation></ref>
<ref id="B15"><mixed-citation publication-type="journal"><string-name><surname>Easwaran</surname>, <given-names>Kenny</given-names></string-name> (<year>2008</year>). <article-title>Strong and Weak Expectations</article-title>. <source>Mind</source>, <volume>117</volume>(<issue>467</issue>), <fpage>633</fpage>&#8211;<lpage>641</lpage>.</mixed-citation></ref>
<ref id="B16"><mixed-citation publication-type="journal"><string-name><surname>Easwaran</surname>, <given-names>Kenny</given-names></string-name> (<year>2014a</year>). <article-title>Regularity and Hyperreal Credences</article-title>. <source>Philosophical Review</source>, <volume>123</volume>(<issue>1</issue>), <fpage>1</fpage>&#8211;<lpage>41</lpage>.</mixed-citation></ref>
<ref id="B17"><mixed-citation publication-type="journal"><string-name><surname>Easwaran</surname>, <given-names>Kenny</given-names></string-name> (<year>2014b</year>). <article-title>Principal Values and Weak Expectations</article-title>. <source>Mind</source>, <volume>123</volume>(<issue>490</issue>), <fpage>517</fpage>&#8211;<lpage>531</lpage>.</mixed-citation></ref>
<ref id="B18"><mixed-citation publication-type="journal"><string-name><surname>Edwards</surname>, <given-names>Ward</given-names></string-name>, <string-name><given-names>Harold</given-names> <surname>Lindman</surname></string-name>, and <string-name><given-names>Leonard J.</given-names> <surname>Savage</surname></string-name> (<year>1963</year>). <article-title>Bayesian Statistical Inference for Psychological Research</article-title>. <source>Psychological Review</source>, <volume>70</volume>(<issue>3</issue>), <fpage>193</fpage>&#8211;<lpage>242</lpage>.</mixed-citation></ref>
<ref id="B19"><mixed-citation publication-type="journal"><string-name><surname>Farhi</surname>, <given-names>Edward</given-names></string-name>, <string-name><given-names>Alan H.</given-names> <surname>Guth</surname></string-name>, and <string-name><given-names>Jemal</given-names> <surname>Guven</surname></string-name> (<year>1990</year>). <article-title>Is It Possible to Create a Universe in the Laboratory by Quantum Tunneling?</article-title> <source>Nuclear Physics B</source>, <volume>339</volume>(<issue>2</issue>), <fpage>417</fpage>&#8211;<lpage>490</lpage>.</mixed-citation></ref>
<ref id="B20"><mixed-citation publication-type="journal"><string-name><surname>Frolov</surname>, <given-names>V. P.</given-names></string-name>, <string-name><given-names>M. A.</given-names> <surname>Markov</surname></string-name>, and <string-name><given-names>V. F.</given-names> <surname>Mukhanov</surname></string-name> (<year>1990</year>). <article-title>Black Holes as Possible Sources of Closed and Semiclosed Worlds</article-title>. <source>Physical Review D</source>, <volume>41</volume>(<issue>2</issue>), <elocation-id>383</elocation-id>.</mixed-citation></ref>
<ref id="B21"><mixed-citation publication-type="book"><string-name><surname>Greaves</surname>, <given-names>Hilary</given-names></string-name> and <string-name><given-names>William</given-names> <surname>MacAskill</surname></string-name> (<year>2025</year>). <chapter-title>The Case for Strong Longtermism</chapter-title>. In <string-name><given-names>Jacob</given-names> <surname>Barrett</surname></string-name>, <string-name><given-names>Hilary</given-names> <surname>Greaves</surname></string-name> and <string-name><given-names>David</given-names> <surname>Thorstad</surname></string-name> (Eds.), <source>Essays on Longtermism</source>. <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="B22"><mixed-citation publication-type="journal"><string-name><surname>H&#225;jek</surname>, <given-names>Alan</given-names></string-name> (<year>2014</year>). <article-title>Unexpected Expectations</article-title>. <source>Mind</source>, <volume>123</volume>(<issue>490</issue>), <fpage>533</fpage>&#8211;<lpage>567</lpage>.</mixed-citation></ref>
<ref id="B23"><mixed-citation publication-type="journal"><string-name><surname>H&#225;jek</surname>, <given-names>Alan</given-names></string-name> and <string-name><given-names>Harris</given-names> <surname>Nover</surname></string-name> (<year>2006</year>). <article-title>Perplexing Expectations</article-title>. <source>Mind</source>, <volume>115</volume>(<issue>459</issue>), <fpage>703</fpage>&#8211;<lpage>720</lpage>.</mixed-citation></ref>
<ref id="B24"><mixed-citation publication-type="journal"><string-name><surname>H&#225;jek</surname>, <given-names>Alan</given-names></string-name> and <string-name><given-names>Michael</given-names> <surname>Smithson</surname></string-name> (<year>2012</year>). <article-title>Rationality and Indeterminate Probabilities</article-title>. <source>Synthese</source>, <volume>187</volume>, <fpage>33</fpage>&#8211;<lpage>48</lpage>.</mixed-citation></ref>
<ref id="B25"><mixed-citation publication-type="journal"><string-name><surname>Harsanyi</surname>, <given-names>John C.</given-names></string-name> (<year>1955</year>). <article-title>Cardinal Welfare, Individualistic Ethics, and Interpersonal Comparisons of Utility</article-title>. <source>Journal of Political Economy</source>, <volume>63</volume>(<issue>4</issue>), <fpage>309</fpage>&#8211;<lpage>321</lpage>.</mixed-citation></ref>
<ref id="B26"><mixed-citation publication-type="journal"><string-name><surname>Kendall</surname>, <given-names>David G.</given-names></string-name> (<year>1948</year>). <article-title>On the Generalized &#8220;Birth-and-Death&#8221; Process</article-title>. <source>The Annals of Mathematical Statistics</source>, <volume>19</volume>(<issue>1</issue>), <fpage>1</fpage>&#8211;<lpage>15</lpage>.</mixed-citation></ref>
<ref id="B27"><mixed-citation publication-type="book"><string-name><surname>Liu</surname>, <given-names>Sebastian</given-names></string-name> (n.d.). <chapter-title>Don&#8217;t Bet the Farm: Decision Theory</chapter-title>, <publisher-name>Inductive Knowledge, and the St. Petersburg Paradox</publisher-name>. Unpublished manuscript.</mixed-citation></ref>
<ref id="B28"><mixed-citation publication-type="journal"><string-name><surname>McNeil</surname>, <given-names>Donald R.</given-names></string-name> (<year>1970</year>). <article-title>Integral Functionals of Birth and Death Processes and Related Limiting Distributions</article-title>. <source>The Annals of Mathematical Statistics</source>, <volume>41</volume>(<issue>2</issue>), <fpage>480</fpage>&#8211;<lpage>485</lpage>.</mixed-citation></ref>
<ref id="B29"><mixed-citation publication-type="journal"><string-name><surname>Meacham</surname>, <given-names>Christopher</given-names></string-name> (<year>2019</year>). <article-title>Difference Minimizing Theory</article-title>. <source>Ergo</source>, <volume>6</volume>(<issue>35</issue>).</mixed-citation></ref>
<ref id="B30"><mixed-citation publication-type="book"><string-name><surname>Merali</surname>, <given-names>Zeeya</given-names></string-name> (<year>2017</year>). <source>A Big Bang in a Little Room: The Quest to Create New Universes</source>. <publisher-name>Hachette UK</publisher-name>.</mixed-citation></ref>
<ref id="B31"><mixed-citation publication-type="journal"><string-name><surname>Nagamine</surname>, <given-names>Kentaro</given-names></string-name> and <string-name><given-names>Abraham</given-names> <surname>Loeb</surname></string-name> (<year>2003</year>). <article-title>Future Evolution of Nearby Large-Scale Structures in a Universe Dominated by a Cosmological Constant</article-title>. <source>New Astronomy</source>, <volume>8</volume>(<issue>5</issue>), <fpage>439</fpage>&#8211;<lpage>448</lpage>.</mixed-citation></ref>
<ref id="B32"><mixed-citation publication-type="journal"><string-name><surname>Nover</surname>, <given-names>Harris</given-names></string-name> and <string-name><given-names>Alan</given-names> <surname>H&#225;jek</surname></string-name> (<year>2004</year>). <article-title>Vexing Expectations</article-title>. <source>Mind</source>, <volume>113</volume>(<issue>450</issue>), <fpage>237</fpage>&#8211;<lpage>249</lpage>.</mixed-citation></ref>
<ref id="B33"><mixed-citation publication-type="webpage"><string-name><surname>Ord</surname>, <given-names>Toby</given-names></string-name> (<year>2021</year>). <article-title>The Edges of our Universe</article-title>. Unpublished manuscript. Available at <uri>https://arxiv.org/abs/2104.01191</uri></mixed-citation></ref>
<ref id="B34"><mixed-citation publication-type="book"><string-name><surname>Parfit</surname>, <given-names>Derek</given-names></string-name> (<year>1984</year>). <source>Reasons and Persons</source>. <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="B35"><mixed-citation publication-type="book"><string-name><surname>Poisson</surname>, <given-names>Sim&#233;on D.</given-names></string-name> (<year>1824</year>). <chapter-title>Sur la probabilit&#233; des r&#233;sultats moyens des observations</chapter-title>. In <source>Connaissance des Temps pour l&#8217;an 1824</source> (<fpage>273</fpage>&#8211;<lpage>302</lpage>).</mixed-citation></ref>
<ref id="B36"><mixed-citation publication-type="journal"><string-name><surname>Pruss</surname>, <given-names>Alexander R.</given-names></string-name> (<year>2013</year>). <article-title>Probability, Regularity, and Cardinality</article-title>. <source>Philosophy of Science</source>, <volume>80</volume>(<issue>2</issue>), <fpage>231</fpage>&#8211;<lpage>240</lpage>.</mixed-citation></ref>
<ref id="B37"><mixed-citation publication-type="journal"><string-name><surname>Ramsey</surname>, <given-names>Frank P.</given-names></string-name> (<year>1928</year>). <article-title>A Mathematical Theory of Saving</article-title>. <source>The Economic Journal</source>, <volume>38</volume>(<issue>152</issue>), <fpage>543</fpage>&#8211;<lpage>559</lpage>.</mixed-citation></ref>
<ref id="B38"><mixed-citation publication-type="webpage"><string-name><surname>Sandberg</surname>, <given-names>Anders</given-names></string-name> and <string-name><given-names>Stuart</given-names> <surname>Armstrong</surname></string-name> (<year>2012</year>). <source>Indefinite Survival Through Backup Copies</source>. <article-title>Future of Humanity Institute Technical Report 2012-1</article-title>. Available at <uri>https://www.fhi.ox.ac.uk/reports/2012-1.pdf</uri>.</mixed-citation></ref>
<ref id="B39"><mixed-citation publication-type="book"><string-name><surname>Sidgwick</surname>, <given-names>Henry</given-names></string-name> (<year>1907</year>). <source>The Methods of Ethics, 7th edn</source>. <publisher-name>Macmillan</publisher-name>.</mixed-citation></ref>
<ref id="B40"><mixed-citation publication-type="journal"><string-name><surname>Smith</surname>, <given-names>Nicholas</given-names></string-name> (<year>2014</year>). <article-title>Is Evaluative Compositionality a Requirement of Rationality?</article-title> <source>Mind</source>, <volume>123</volume>(<issue>490</issue>), <fpage>457</fpage>&#8211;<lpage>502</lpage>.</mixed-citation></ref>
<ref id="B41"><mixed-citation publication-type="journal"><string-name><surname>Tarsney</surname>, <given-names>Christian</given-names></string-name> (<year>2025</year>). <article-title>Expected Value, to a Point: Moral Decision-Making Under Background Uncertainty</article-title>. <source>No&#251;s</source>.</mixed-citation></ref>
<ref id="B42"><mixed-citation publication-type="book"><string-name><surname>Tarsney</surname>, <given-names>Christian</given-names></string-name> and <string-name><given-names>Hayden</given-names> <surname>Wilkinson</surname></string-name> (<year>2025</year>). <chapter-title>Longtermism in an Infinite World</chapter-title>. In <string-name><given-names>Jacob</given-names> <surname>Barrett</surname></string-name>, <string-name><given-names>Hilary</given-names> <surname>Greaves</surname></string-name> and <string-name><given-names>David</given-names> <surname>Thorstad</surname></string-name> (Eds.), <source>Essays on Longtermism: Present Action for the Distant Future</source>.</mixed-citation></ref>
<ref id="B43"><mixed-citation publication-type="journal"><string-name><surname>Thomas</surname>, <given-names>Teruji</given-names></string-name> (<year>2022a</year>). <article-title>The Asymmetry, Uncertainty, and the Long Term</article-title>. <source>Philosophy and Phenomenological Research</source>, <volume>107</volume>(<issue>2</issue>), <fpage>470</fpage>&#8211;<lpage>500</lpage>.</mixed-citation></ref>
<ref id="B44"><mixed-citation publication-type="journal"><string-name><surname>Thomas</surname>, <given-names>Teruji</given-names></string-name> (<year>2022b</year>). <article-title>Separability and Population Ethics</article-title>. <source>The Oxford Handbook of Population Ethics</source>, <fpage>271</fpage>&#8211;<lpage>296</lpage>.</mixed-citation></ref>
<ref id="B45"><mixed-citation publication-type="journal"><string-name><surname>Vilenkin</surname>, <given-names>Alexander</given-names></string-name> (<year>1983</year>). <article-title>Birth of Inflationary Universes</article-title>. <source>Physical Review D</source>, <volume>27</volume>(<issue>12</issue>), <elocation-id>2848</elocation-id>.</mixed-citation></ref>
<ref id="B46"><mixed-citation publication-type="book"><string-name><surname>von Neumann</surname>, <given-names>John</given-names></string-name> and <string-name><given-names>Oskar</given-names> <surname>Morgenstern</surname></string-name> (<year>1953</year>). <source>Theory of Games and Economic Behavior, 2nd edn</source>. <publisher-name>Princeton University Press</publisher-name>.</mixed-citation></ref>
<ref id="B47"><mixed-citation publication-type="thesis"><string-name><surname>Wilkinson</surname>, <given-names>Hayden</given-names></string-name> (<year>2021</year>). <source>Infinite Aggregation</source>. PhD diss. <publisher-name>Australian National University</publisher-name>.</mixed-citation></ref>
<ref id="B48"><mixed-citation publication-type="journal"><string-name><surname>Wilkinson</surname>, <given-names>Hayden</given-names></string-name> (<year>2022</year>). <article-title>In Defence of Fanaticism</article-title>. <source>Ethics</source>, <volume>132</volume>(<issue>2</issue>), <fpage>445</fpage>&#8211;<lpage>477</lpage>.</mixed-citation></ref>
<ref id="B49"><mixed-citation publication-type="journal"><string-name><surname>Wilkinson</surname>, <given-names>Hayden</given-names></string-name> (<year>2023</year>). <article-title>Infinite Aggregation and Risk</article-title>. <source>Australasian Journal of Philosophy</source>, <volume>101</volume>(<issue>2</issue>), <fpage>340</fpage>&#8211;<lpage>359</lpage>.</mixed-citation></ref>
<ref id="B50"><mixed-citation publication-type="journal"><string-name><surname>Wilkinson</surname>, <given-names>Hayden</given-names></string-name> (<year>2024a</year>). <article-title>Egyptology and Fanaticism</article-title>. <source>Philosophical Studies</source>, <volume>181</volume>(<issue>8</issue>), <fpage>1903</fpage>&#8211;<lpage>1923</lpage>.</mixed-citation></ref>
<ref id="B51"><mixed-citation publication-type="journal"><string-name><surname>Wilkinson</surname>, <given-names>Hayden</given-names></string-name> (<year>2024b</year>). <article-title>Flummoxing Expectations</article-title>. <source>No&#251;s</source>, <volume>59</volume>(<issue>3</issue>), <fpage>700</fpage>&#8211;<lpage>728</lpage>.</mixed-citation></ref>
<ref id="B52"><mixed-citation publication-type="book"><string-name><surname>Wilkinson</surname>, <given-names>Hayden</given-names></string-name> (n.d.). <source>Chaos, add infinitum</source>. Unpublished manuscript.</mixed-citation></ref>
<ref id="B53"><mixed-citation publication-type="book"><string-name><surname>Williamson</surname>, <given-names>Timothy</given-names></string-name> (<year>2000</year>). <source>Knowledge and Its Limits</source>. <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="B54"><mixed-citation publication-type="journal"><string-name><surname>Zhao</surname>, <given-names>Michael</given-names></string-name> (<year>2021</year>). <article-title>Ignore Risk; Maximize Expected Moral Value</article-title>. <source>No&#251;s</source>, <volume>57</volume>(<issue>1</issue>), <fpage>144</fpage>&#8211;<lpage>161</lpage>.</mixed-citation></ref>
</ref-list>
</back>
</article>