<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2 20120330//EN" "http://jats.nlm.nih.gov/publishing/1.2/JATS-journalpublishing1.dtd">
<!--<?xml-stylesheet type="text/xsl" href="article.xsl"?>-->
<article article-type="research-article" dtd-version="1.2" xml:lang="en" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<front>
<journal-meta>
<journal-id journal-id-type="issn">1533-628X</journal-id>
<journal-title-group>
<journal-title>Philosophers&#8217; Imprint</journal-title>
</journal-title-group>
<issn pub-type="epub">1533-628X</issn>
<publisher>
<publisher-name>Michigan Publishing, University of Michigan Library</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3998/phimp.3176</article-id>
<article-categories>
<subj-group>
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Evidential Decision Theory and the Ostrich</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Isaacs</surname>
<given-names>Yoaav</given-names>
</name>
<email>yoaavisaacs@gmail.com</email>
<xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Levinstein</surname>
<given-names>Benjamin A.</given-names>
</name>
<email>balevinstein@gmail.com</email>
<xref ref-type="aff" rid="aff-2">2</xref>
</contrib>
</contrib-group>
<aff id="aff-1"><label>1</label>Baylor University</aff>
<aff id="aff-2"><label>2</label>University of Illinois Urbana-Champaign</aff>
<pub-date publication-format="electronic" date-type="pub" iso-8601-date="2024-03-01">
<day>01</day>
<month>03</month>
<year>2024</year>
</pub-date>
<pub-date pub-type="collection">
<year>2024</year>
</pub-date>
<volume>24</volume>
<elocation-id>6</elocation-id>
<history>
<date date-type="received" iso-8601-date="2022-07-22">
<day>22</day>
<month>07</month>
<year>2022</year>
</date>
<date date-type="accepted" iso-8601-date="2023-03-15">
<day>15</day>
<month>03</month>
<year>2023</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright: &#x00A9; 2024 Yoaav Isaacs and Benjamin A. Levinstein</copyright-statement>
<copyright-year>2024</copyright-year>
<license license-type="open-access" xlink:href="https://creativecommons.org/licenses/by-nc-nd/4.0/">
<license-p>This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. <uri xlink:href="https://creativecommons.org/licenses/by-nc-nd/4.0/">https://creativecommons.org/licenses/by-nc-nd/4.0/</uri>.</license-p>
</license>
</permissions>
<self-uri xlink:href="https://www.philosophersimprint.org/024004/phimp/article/10.3998/phimp.3176/"/>
<abstract>
<p>Evidential Decision Theory is flawed, but its flaws are not fully understood. David Lewis (<xref ref-type="bibr" rid="B26">1981</xref>) famously charged that EDT recommends an irrational policy of managing the news and &#8220;commends the ostrich as rational&#8221;. Lewis was right, but the case he appealed to&#8212;Newcomb&#8217;s Problem&#8212;does not demonstrate his conclusion. Indeed, decision theories other than EDT, such as Cohesive Decision Theory and Functional Decision Theory, agree with EDT&#8217;s verdicts in Newcomb&#8217;s Problem, but their flaws, whatever they may be, do not stem from any ostrich-like recommendations. We offer a new case which shows that EDT mismanages the news, thus vindicating Lewis&#8217;s original charge. We argue that this case reveals a flaw in the &#8220;Why ain&#8217;cha rich?&#8221; defense of EDT. We argue further that this case is an advance on extant putative counterexamples to EDT.</p>
</abstract>
<kwd-group>
<kwd>evidential decision theory</kwd>
<kwd>self-locating belief</kwd>
<kwd>Newcomb</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec>
<title>1. Introduction</title>
<p>Evidential Decision Theory is flawed, but its flaws are not fully understood. David Lewis (<xref ref-type="bibr" rid="B26">1981</xref>) famously charged that <sc>edt</sc> recommends an irrational policy of managing the news and &#8220;commends the ostrich as rational&#8221;. Lewis was right, but the case he appealed to&#8212;N<sc>ewcomb</sc>&#8212;does not demonstrate his conclusion. Indeed, decision theories other than <sc>edt</sc>, such as Cohesive Decision Theory and Functional Decision Theory, agree with <sc>edt</sc>&#8217;s verdicts in N<sc>ewcomb</sc>, but their flaws, whatever they may be, do not stem from any ostrich-like recommendations.</p>
<p>We offer a new case which shows that <sc>edt</sc> mismanages the news, thus vindicating Lewis&#8217;s original charge. We argue that this case reveals a flaw in the &#8220;Why ain&#8217;cha rich?&#8221; defense of <sc>edt</sc>. We argue further that this case is an advance on extant putative counterexamples to <sc>edt</sc>.</p>
</sec>
<sec>
<title>2. EDT v. CDT</title>
<p>Both Evidential and Causal Decision Theory agree that you should maximize expected utility. The difference between them arises from how they calculate expected utility. The standard informal way to cash out this difference is as follows: According to <sc>edt</sc>, you should evaluate acts based on the extent to which they <italic>indicate</italic> good outcomes, whereas according to <sc>cdt</sc>, you should evaluate acts based on the extent to which they <italic>cause</italic> good outcomes.<xref ref-type="fn" rid="n1">1</xref></p>
<p>To illustrate their differences, we begin with the familiar:</p>
<disp-quote>
<p><bold>N<sc>ewcomb</sc></bold>&#160;&#160;You are confronted with two boxes, one transparent and one opaque. You can choose either to take the contents of both boxes or to take only the contents of the opaque box. The transparent box contains $1,000. The opaque box contains either nothing or $1,000,000, depending on a past prediction about what choice you would make. If it was predicted that you would take the contents of both boxes, then the opaque box contains nothing. If it was predicted that you would take the contents of only the opaque box, then the opaque box contains $1,000,000. This predictor is known to be highly reliable. Should you take one box or two?</p>
</disp-quote>
<p>The Evidential Decision Theorist tells you to one-box. One-boxing is strong evidence you&#8217;ll get $1M, whereas two-boxing is strong evidence you&#8217;ll only get $1,000.</p>
<p>The Causal Decision Theorist says you should take both boxes. Either the money is in the opaque box or it isn&#8217;t. It&#8217;s too late to do anything about that now. And either way, you cause a better result by taking both.</p>
<p>Before diagnosing whether <sc>edt</sc>&#8217;s verdict stems from an irrational news management policy, it&#8217;s worth exploring the difference between <sc>edt</sc> and <sc>cdt</sc> more carefully.</p>
<p>For simplicity, we&#8217;ll formulate <sc>edt</sc> and <sc>cdt</sc> with the same framework. <sc>edt</sc> and <sc>cdt</sc> both appeal to a set of acts <inline-formula>
<mml:math id="Eq001-mml">
<mml:mi mathvariant='script'>A</mml:mi>
</mml:math>
</inline-formula>, states <inline-formula>
<mml:math id="Eq002-mml">
<mml:mi mathvariant='script'>S</mml:mi>
</mml:math>
</inline-formula>, and outcomes <inline-formula>
<mml:math id="Eq003-mml">
<mml:mi mathvariant='script'>O</mml:mi>
</mml:math>
</inline-formula>. An act and a state jointly result in a unique outcome. Outcomes are objects of ultimate concern for an agent. If the agent would prefer world <italic>w</italic><sub>1</sub> over <italic>w</italic><sub>2</sub>, then <italic>w</italic><sub>1</sub> and <italic>w</italic><sub>2</sub> are elements of distinct outcomes. We measure the desirability of an outcome with a real-valued function <italic>u</italic> unique up to positive affine transformation. The agent also comes equipped with a probability function Pr that measures her uncertainty over <inline-formula>
<mml:math id="Eq004-mml">
<mml:mi mathvariant='script'>A</mml:mi>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="Eq005-mml">
<mml:mi mathvariant='script'>S</mml:mi>
</mml:math>
</inline-formula>.<xref ref-type="fn" rid="n2">2</xref></p>
<p>To capture the difference between the two theories, we follow Gallow (<xref ref-type="bibr" rid="B16">2020</xref>). We can divide up a given state into factors that are causally downstream and causally upstream of your acts.<xref ref-type="fn" rid="n3">3</xref> The downstream factors are exactly those over which you exert causal influence in a given state of the world. Call the upstream factors <italic>K</italic> and the downstream factors <italic>C</italic>. Then we can distinguish <sc>edt</sc> and <sc>cdt</sc> as follows:</p>
<disp-formula id="FD1">
<label>(EDT)</label>
<mml:math id="Eq006-mml">
<mml:mrow><mml:mi mathvariant='script'>V</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:munder><mml:mo>&#x2211;</mml:mo><mml:mi>K</mml:mi></mml:munder><mml:mrow><mml:mtext mathvariant="normal">Pr</mml:mtext></mml:mrow></mml:mstyle><mml:mo stretchy='false'>(</mml:mo><mml:mi>K</mml:mi><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mstyle displaystyle='true'><mml:munder><mml:mo>&#x2211;</mml:mo><mml:mi>C</mml:mi></mml:munder><mml:mrow><mml:mtext mathvariant="normal">Pr</mml:mtext></mml:mrow></mml:mstyle><mml:mo stretchy='false'>(</mml:mo><mml:mi>C</mml:mi><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>K</mml:mi><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>u</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mtext mathvariant="italic">KCA</mml:mtext><mml:mo stretchy='false'>)</mml:mo></mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="FD2">
<label>(CDT)</label>
<mml:math id="Eq007-mml">
<mml:mrow><mml:mi mathvariant='script'>U</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:munder><mml:mo>&#x2211;</mml:mo><mml:mi>K</mml:mi></mml:munder><mml:mrow><mml:mtext mathvariant="normal">Pr</mml:mtext></mml:mrow></mml:mstyle><mml:mo stretchy='false'>(</mml:mo><mml:mi>K</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mstyle displaystyle='true'><mml:munder><mml:mo>&#x2211;</mml:mo><mml:mi>C</mml:mi></mml:munder><mml:mrow><mml:mtext mathvariant="normal">Pr</mml:mtext></mml:mrow></mml:mstyle><mml:mo stretchy='false'>(</mml:mo><mml:mi>C</mml:mi><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>K</mml:mi><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>u</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mtext mathvariant="italic">KCA</mml:mtext><mml:mo stretchy='false'>)</mml:mo></mml:mrow>
</mml:math>
</disp-formula>
<p>This formulation of <sc>edt</sc> and <sc>cdt</sc> brings out the fundamental difference between the two theories. <sc>edt</sc> thinks you should consider how likely your act renders upstream factors (Pr(<italic>K&#124;A</italic>)), whereas <sc>cdt</sc> thinks you should only consider the unconditional probability of those factors (Pr(<italic>K</italic>)). <sc>edt</sc> favors maximizing the expected value of the information that you perform your action. For <sc>edt</sc>, an act&#8217;s expected value derives <italic>both</italic> from its causal contributions to what you value and from the evidence it provides that the underlying state of the world conduces to what you value. In contrast, for <sc>cdt</sc> an act&#8217;s expected value derives <italic>solely</italic> from its causal contributions to what you value.</p>
<p>In Newcomb&#8217;s problem, <sc>edt</sc> doesn&#8217;t care whether the presence or absence of $1M is upstream or downstream of your act, so it considers Pr(1<italic>M</italic>&#124;1<italic>B</italic>), Pr(1<italic>M</italic>&#124;2<italic>B</italic>), etc., when calculating <inline-formula>
<mml:math id="Eq008-mml">
<mml:mrow><mml:mi mathvariant='script'>V</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mn>1</mml:mn><mml:mi>B</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="Eq009-mml">
<mml:mrow><mml:mi mathvariant='script'>V</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mn>2</mml:mn><mml:mi>B</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow>
</mml:math>
</inline-formula>. Thus <sc>edt</sc> recommends one-boxing. But since whether there&#8217;s money in the box is upstream of your act, <sc>cdt</sc> considers Pr(1<italic>M</italic>) and Pr(&#172;1M) when calculating <inline-formula>
<mml:math id="Eq010-mml">
<mml:mrow><mml:mi mathvariant='script'>U</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mn>2</mml:mn><mml:mi>B</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="Eq011-mml">
<mml:mrow><mml:mi mathvariant='script'>U</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mn>1</mml:mn><mml:mi>B</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow>
</mml:math>
</inline-formula>. Thus, <sc>cdt</sc> recommends two-boxing.</p>
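The two verdicts can be made concrete with a small numerical sketch. The payoff table comes from the case itself; the 99% predictor reliability and the function names are our illustrative assumptions, not part of the original problem statement.

```python
# Illustrative calculation of EDT's and CDT's verdicts in Newcomb.
# The 99% predictor reliability is an assumed figure for illustration.
# The upstream factor K is whether the opaque box holds $1,000,000.

RELIABILITY = 0.99  # assumed accuracy of the predictor

# u(KCA): payoff given whether the $1M is present and the act chosen
PAYOFF = {("one-box", True): 1_000_000, ("one-box", False): 0,
          ("two-box", True): 1_001_000, ("two-box", False): 1_000}

def edt_value(act):
    """V(A) = sum over K of Pr(K|A) * u(KA): the act is evidence about K."""
    pr_million = RELIABILITY if act == "one-box" else 1 - RELIABILITY
    return (pr_million * PAYOFF[(act, True)]
            + (1 - pr_million) * PAYOFF[(act, False)])

def cdt_value(act, pr_million):
    """U(A) = sum over K of Pr(K) * u(KA): K keeps its unconditional
    probability, whatever act is contemplated."""
    return (pr_million * PAYOFF[(act, True)]
            + (1 - pr_million) * PAYOFF[(act, False)])
```

On these numbers V(one-box) comes to about $990,000 against V(two-box)'s roughly $11,000, so EDT one-boxes; for CDT, two-boxing exceeds one-boxing by exactly $1,000 whatever unconditional probability Pr(1M) takes, so CDT two-boxes.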
<p>This divergence famously led David Lewis (<xref ref-type="bibr" rid="B26">1981</xref>) to charge that <sc>edt</sc> recommends an irrational policy of managing the news, alleging that it &#8220;commends the ostrich as rational&#8221;. But the case that <sc>edt</sc> is irrational and ostrich-like is questionable.</p>
<p>Admittedly, one can get oneself into the mood where it seems strange to consider Pr(<italic>K</italic>&#124;<italic>A</italic>) when <italic>A</italic> is downstream of <italic>K</italic>. After all, <italic>A</italic> can&#8217;t affect <italic>K</italic>! But on the other hand, one can get oneself into the mood where it doesn&#8217;t. After all, if you&#8217;re trying to determine how much utility you&#8217;d get from performing <italic>A</italic>, you only want to consider worlds where <italic>A</italic> is true. How likely <italic>K</italic> is in those worlds is just Pr(<italic>K</italic>&#124;<italic>A</italic>). This is, in effect, just to articulate the different fundamental intuitions behind <sc>edt</sc> and <sc>cdt</sc>. <sc>edt</sc> tells you to perform the act that gives you the best distribution over outcomes. <sc>cdt</sc> tells you to perform the act that gives you the best distribution over outcomes holding things outside of your control fixed. Put this way, it&#8217;s far from clear that <sc>edt</sc>&#8217;s policy is irrational.</p>
<p>While there&#8217;s much more to say theoretically, we don&#8217;t think that <sc>edt</sc>&#8217;s verdict in N<sc>ewcomb</sc> is enough to show that <sc>edt</sc> mismanages the news. For one, it remains controversial what the right answer in N<sc>ewcomb</sc> is.<xref ref-type="fn" rid="n4">4</xref> Second, there are other decision theories that <italic>don&#8217;t</italic> manage the news the way <sc>edt</sc> does and that still recommend one-boxing. Functional decision theory, for instance, appeals to the decision procedure the agent uses.<xref ref-type="fn" rid="n5">5</xref> According to <sc>fdt</sc>, one should consider what would happen if your decision procedure were to output different acts in the act space. F<sc>dt</sc> thinks of these procedures as abstract objects (like computer programs) that are not local to your own mind. If another agent is using or simulating the same procedure, then, on <sc>fdt</sc>&#8217;s counterfactuals, the output of your decision procedure will vary for that agent too. According to functional decision theorists, moreover, you can control what your procedure outputs.</p>
<p>In N<sc>ewcomb</sc>, <sc>fdt</sc> claims that if the predictor is accurate, then her choice is affected by the output of your decision procedure (even if you haven&#8217;t yet decided). If your decision procedure were to output <italic>one-box</italic> when you run it, then it also would have output <italic>one-box</italic> when the predictor ran it.<xref ref-type="fn" rid="n6">6</xref></p>
<p>Structurally, <sc>fdt</sc> is very close to <sc>cdt</sc>, with two basic changes.<xref ref-type="fn" rid="n7">7</xref> First, whereas <sc>cdt</sc> divides states into factors that are upstream and downstream of the <italic>act</italic> itself, <sc>fdt</sc> divides states into factors that are upstream or downstream of your <italic>decision procedure</italic>. Since both the predictor&#8217;s choice and your own are influenced by the output of your decision procedure in N<sc>ewcomb</sc>, the predictor&#8217;s choice is downstream of your procedure but upstream of the physical action of selecting one or two boxes. Second, whereas <sc>cdt</sc> considers only <italic>causal</italic> influence, <sc>fdt</sc> has a broader notion of influence. Even though the predictor&#8217;s choice is not <italic>causally</italic> influenced by anything you do, it is still influenced by something you have control over, namely, the output of your decision procedure.</p>
<p>Whatever the merits or demerits of <sc>fdt</sc>, it does not &#8216;manage&#8217; the news in the way <sc>edt</sc> does. The equation for <sc>fdt</sc>&#8217;s notion of expected utility looks just like equation (<sc>cdt</sc>) above. The only difference is that what counts as an upstream factor (<italic>K</italic>) is different for <sc>fdt</sc> than it is for <sc>cdt</sc>.<xref ref-type="fn" rid="n8">8</xref></p>
<p>Therefore, causal decision theorists cannot charge <sc>fdt</sc> with mismanaging the news. They will charge that it delivers the wrong verdicts and appeals to the wrong counterfactuals and perhaps even that it has bad metaphysics. But the one-boxing of <sc>fdt</sc> is not ostrich-like, and so one-boxing is not automatically ostrich-like.<xref ref-type="fn" rid="n9">9</xref></p>
<p>This does not mean that <sc>edt</sc> is not objectionably ostrich-like, or that <sc>edt</sc> does not prescribe one-boxing for objectionably ostrich-like reasons. But it does mean that Newcomb&#8217;s problem makes a poor diagnostic case for being objectionably ostrich-like. Dialectically, the case against <sc>edt</sc> would be stronger if there were a case in which <sc>edt</sc> gave a prescription which was more straightforwardly unreasonable and which other standard decision theories did not share.</p>
</sec>
<sec>
<title>3. A New Case</title>
<p>To show that invoking Pr(<italic>K</italic>&#124;<italic>A</italic>) instead of just Pr(<italic>K</italic>) when calculating expected utility is irrational, we provide a new case.</p>
<disp-quote>
<p>Consider:</p>
<p><bold>T<sc>orture</sc></bold>&#160;&#160;John has been abducted by a fiendish organization. His captors flip a fair coin in private. If the coin lands <italic>H</italic>eads, John will eventually be set free unharmed. If it lands <italic>T</italic>ails, he&#8217;ll be brutally tortured. Before John learns his fate, his captors place him in a cell and subject him to two rounds of the following decision problem. In round 1, if the coin lands Heads, John will see a Green light flash with 90% probability and a Red light flash with 10% probability. If it lands Tails, he&#8217;ll see a Red light flash with 90% probability and a Green light flash with 10% probability. If he sees a Green light, he has no decision to make. If he sees a Red light, he&#8217;ll then be offered a choice to pay $1 to rig the lighting device so that he&#8217;ll be sure to see a Red light in any future round. (So, if John sees Red in round 1 and pays, then he&#8217;ll see Red in round 2. If he sees Red in round 2 and pays, then he simply loses the dollar.) After making this decision, his memory will be erased. John is certain he will always decide the same way whenever he sees a Red light. John is in his cell and sees a Red light. John cares a little bit about money, but much more about not being tortured. What should he do?</p>
</disp-quote>
<p>John obviously shouldn&#8217;t pay. However, <sc>edt</sc> mandates that he pay.</p>
<p>If John doesn&#8217;t pay, then he&#8217;ll believe to degree .9 that the coin landed Tails and hence that he&#8217;ll be tortured.</p>
<p>If John pays, then he knows the sequence he observes over the two rounds is (or will be) either <italic>RR</italic> or <italic>GR</italic>. If the sequence is <italic>RR</italic>, then there&#8217;s a 90% chance he&#8217;ll be tortured, since in the first round the probability of <italic>T</italic> given that the light was Red is .9, but the second round&#8217;s reading was meaningless. If the sequence he sees is <italic>GR</italic>, then there&#8217;s only a 50% chance he&#8217;ll be tortured, since he saw one <italic>G</italic> and one <italic>R</italic> that are equally well correlated with <italic>H</italic> and <italic>T</italic> respectively. So, assuming he&#8217;s not certain he&#8217;s in round 1, then upon seeing Red, his credence will be somewhere strictly between .5 and .9 that he&#8217;ll be tortured. So, by paying he lowers the probability of being tortured. (We assume this difference is big enough on his utility function to trump the small amount of money he loses.)</p>
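These probabilities can be checked mechanically. Below is a minimal sketch, ours rather than the paper's, that enumerates the coin-and-lights worlds under each policy; the helper names are our own.

```python
from itertools import product

P_HEADS = 0.5
P_RED = {"H": 0.1, "T": 0.9}  # chance of a Red flash in a round, by coin

def world_probs(pay):
    """Joint probability of (coin, round-1 light, round-2 light) under a policy.
    If John pays upon a round-1 Red, the device is rigged: round 2 shows Red."""
    probs = {}
    for coin, l1, l2 in product("HT", "GR", "GR"):
        p_coin = P_HEADS if coin == "H" else 1 - P_HEADS
        p1 = P_RED[coin] if l1 == "R" else 1 - P_RED[coin]
        if pay and l1 == "R":
            p2 = 1.0 if l2 == "R" else 0.0   # rigged to Red
        else:
            p2 = P_RED[coin] if l2 == "R" else 1 - P_RED[coin]
        probs[(coin, l1, l2)] = p_coin * p1 * p2
    return probs

def p_tails_given_seq(pay, seq):
    """Pr(Tails | full observed sequence), under the given policy."""
    probs = world_probs(pay)
    t, h = probs[("T",) + seq], probs[("H",) + seq]
    return t / (t + h)

def p_tails_given_first_red(pay):
    """Pr(Tails | round-1 light is Red), marginalizing over round 2."""
    probs = world_probs(pay)
    red = {w: p for w, p in probs.items() if w[1] == "R"}
    return sum(p for w, p in red.items() if w[0] == "T") / sum(red.values())
```

If John pays, Pr(Tails | <italic>RR</italic>) = .9 and Pr(Tails | <italic>GR</italic>) = .5, so any mixture over the two round-hypotheses lies strictly between .5 and .9; if he doesn't pay, a Red flash indicates Tails to degree .9.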
<p>By paying, John is playing the ostrich. He&#8217;s merely changing the <italic>information</italic> he gets from the Red signal, but he&#8217;s not actually doing anything about the possibility of upcoming torture.<xref ref-type="fn" rid="n10">10</xref> In other words, he&#8217;s merely managing the news, and he&#8217;s paying $1 (or $2) for the privilege.<xref ref-type="fn" rid="n11">11</xref></p>
<p>C<sc>dt</sc> and <sc>fdt</sc> agree that John shouldn&#8217;t pay. According to <sc>cdt</sc>, John can&#8217;t do anything to change how the coin landed, so he may as well save his money. According to <sc>fdt</sc>, John&#8217;s decision procedure that tells him to pay or not to pay has no effect (causal or otherwise) on whether the coin lands Heads. Both theories&#8212;though wildly different in orientation&#8212;agree that paying is dominated by not paying. The only value that paying has is news value.</p>
<p>Note that this is a different sort of case from others where <sc>edt</sc> will pay to avoid information to protect against future decisions. For example:</p>
<disp-quote>
<p><bold>O<sc>ptional</sc> N<sc>ewcomb</sc></bold>&#160;&#160;As in N<sc>ewcomb</sc>, you are confronted with two boxes. The transparent box has $1,000. The opaque box contains either nothing or $1,000,000 depending on a past prediction about which choices you will make. At <italic>t</italic><sub>1</sub>, the experimenters tell you they will reveal whether the money is in the opaque box unless you pay them $1. At <italic>t</italic><sub>2</sub>, you&#8217;ll get to decide whether to one-box or two-box. The predictor is highly reliable both at determining whether you will pay not to know what&#8217;s in the opaque box and whether you&#8217;ll one-box or two-box at <italic>t</italic><sub>2</sub>. If it was predicted that you&#8217;d ultimately take only the contents of the opaque box, then the opaque box contains $1,000,000. Otherwise, it contains nothing. What should you do?</p>
</disp-quote>
<p>Suppose you know at <italic>t</italic><sub>1</sub> that you&#8217;ll follow <sc>edt</sc> at both <italic>t</italic><sub>1</sub> and <italic>t</italic><sub>2</sub>. Then you know that if at <italic>t</italic><sub>2</sub>, you are certain there&#8217;s nothing in the opaque box, you&#8217;ll two-box. And you know that if you&#8217;re certain there&#8217;s a million in the opaque box, you&#8217;ll also two-box. So, given that you know the contents of the box, you&#8217;ll two-box no matter what at <italic>t</italic><sub>2</sub>. However, the predictor is very reliable, so at <italic>t</italic><sub>1</sub>, you think that if you decide not to pay the experimenters, it&#8217;s highly likely you&#8217;ll learn there&#8217;s nothing in the opaque box. On the other hand, if you aren&#8217;t certain what&#8217;s in the opaque box at <italic>t</italic><sub>2</sub>, <sc>edt</sc> will recommend one-boxing. In that case, you&#8217;re very likely to find $1,000,000 in the opaque box. So, <sc>edt</sc> tells you to pay not to know at <italic>t</italic><sub>1</sub>.</p>
<p>This case <italic>may</italic> be troublesome for <sc>edt</sc>, but we don&#8217;t think it&#8217;s as troublesome as T<sc>orture</sc>. In O<sc>ptional</sc> N<sc>ewcomb</sc>, you pay not to know at <italic>t</italic><sub>1</sub> to stop yourself from choosing an act at a different time that you now foresee as sub-optimal. If you had your druthers at <italic>t</italic><sub>1</sub>, you&#8217;d avoid paying and commit your <italic>t</italic><sub>2</sub>-self to one-boxing no matter what. But you don&#8217;t have that option. Instead, it&#8217;s worth a small fee to avoid letting your later self decide differently from how you&#8217;d like.<xref ref-type="fn" rid="n12">12</xref> In T<sc>orture</sc>, no future decisions ride on whether John pays to rig the device. He buys himself nothing. All that&#8217;s avoided is bad news.</p>
<p>A further virtue of this case&#8212;although inessential for the main point of news-management&#8212;is that it does not involve any strange prediction, as in N<sc>ewcomb</sc>. John does in some sense predict himself, but it&#8217;s the sort of prediction that is entirely mundane: he knows that he would behave in a particular way in a given situation. While we here assume he knows this with certainty for simplicity, the case also works if one relaxes this assumption.<xref ref-type="fn" rid="n13">13</xref><sup>,</sup><xref ref-type="fn" rid="n14">14</xref></p>
</sec>
<sec>
<title>4. Reexamination</title>
<p>Our reasoning that John would think himself less likely to be tortured conditional on paying the $1 than conditional on not paying the $1 is plausible, but not beyond criticism. John&#8217;s situation involves possible memory loss and attendant self-locating uncertainty, just as Adam Elga&#8217;s (<xref ref-type="bibr" rid="B15">2000</xref>) Sleeping Beauty Problem does. And our T<sc>orture</sc> case is subject to some of the same controversies as the Sleeping Beauty Problem. While it is uncontroversial that John&#8217;s credence that he will be tortured should be a mixture of his credence that he will be tortured conditional on it being round 1 and his credence that he will be tortured conditional on it being round 2, it is controversial what his credences in it being round 1 or round 2 should be. (That&#8217;s why our argument did not employ any particular probabilities for those possibilities, but only assumed intermediate credences for each.) Moreover, even our natural-seeming claim that&#8212;conditional on not paying&#8212;John should have credence .9 that he will be tortured is not beyond doubt. Some advocate what Titelbaum (<xref ref-type="bibr" rid="B31">2008</xref>) terms the &#8220;Relevance Limiting Thesis&#8221;, according to which credences about uncentered propositions should only be affected by uncentered evidence. Given that thesis, seeing a red light would rule out the sequence <italic>GG</italic>, but would not favor <italic>RR</italic> over either <italic>RG</italic> or <italic>GR</italic>, and as a result John&#8217;s credence in torture would be less than .9.<xref ref-type="fn" rid="n15">15</xref></p>
<p>The Relevance Limiting Thesis does not merely muddy the waters; it invalidates our reasoning. Given the Relevance Limiting Thesis, John&#8217;s credence that he will avoid torture conditional on paying the $1 is no greater than his credence that he will avoid torture conditional on his not paying.</p>
<sec>
<title>4.1 Calculation</title>
<p>Let&#8217;s look at the details of why the Relevance Limiting Thesis invalidates our reasoning. In our case, we have a sequence of states of the world: <italic>s</italic><sub>0</sub> is either <italic>H</italic> or <italic>T</italic>; <italic>s</italic><sub>1</sub> and <italic>s</italic><sub>2</sub> are either red lights or green lights. We will index <italic>R</italic> and <italic>G</italic> accordingly, so <italic>HR</italic><sub>1</sub><italic>G</italic><sub>2</sub> is the world where the coin lands Heads, a red light blinks first, and a green light blinks second.</p>
<p>The agent has uncertainty both over which world is actual and over which center he occupies. So, we&#8217;ll write Pr(<italic>s</italic> in <italic>s</italic><sub>0</sub> <italic>s</italic><sub>1</sub> <italic>s</italic><sub>2</sub> <italic>&#124; E</italic>) for his subjective probability of the world being <italic>s</italic><sub>0</sub> <italic>s</italic><sub>1</sub> <italic>s</italic><sub>2</sub> and him currently occupying center <italic>s</italic> given <italic>E</italic>. For instance, Pr(<italic>R</italic><sub>1</sub> in <italic>TR</italic><sub>1</sub><italic>R</italic><sub>2</sub>) is his probability that he&#8217;s seeing the red light flash for the first time in the world where the coin lands tails and the light flashes red both times.</p>
<p>According to the Relevance Limiting Thesis, upon seeing red, the agent only rules out the <italic>G</italic><sub>1</sub><italic>G</italic><sub>2</sub>-worlds. It provides him with no <italic>further</italic> evidence that he is in an <italic>R</italic><sub>1</sub><italic>R</italic><sub>2</sub>-world relative to an <italic>R</italic><sub>1</sub><italic>G</italic><sub>2</sub>- or <italic>G</italic><sub>1</sub><italic>R</italic><sub>2</sub>-world. Put differently: the agent takes seeing red <italic>now</italic> to be equivalent to learning the uncentered proposition that he sees red <italic>at least once</italic>, that is, the set of worlds with some red flashes.</p>
<p>One way to make this concrete is to appeal to the most common form of the Relevance Limiting Thesis, known as Compartmentalized Conditionalization (CC).</p>
<p>According to CC, Pr(<italic>s</italic> in <italic>s</italic><sub>0</sub> <italic>s</italic><sub>1</sub> <italic>s</italic><sub>2</sub> <italic>&#124; E</italic>) should be equal to Pr(<italic>s</italic><sub>0</sub> <italic>s</italic><sub>1</sub> <italic>s</italic><sub>2</sub> <italic>&#124; E</italic>)&#183;1/#(<italic>E</italic>, <italic>s</italic><sub>0</sub> <italic>s</italic><sub>1</sub> <italic>s</italic><sub>2</sub>), where #(<italic>E</italic>, <italic>s</italic><sub>0</sub> <italic>s</italic><sub>1</sub> <italic>s</italic><sub>2</sub>) is the number of times the agent has total evidence <italic>E</italic> in the world <italic>s</italic><sub>0</sub> <italic>s</italic><sub>1</sub> <italic>s</italic><sub>2</sub>. For instance, if &#8216;red&#8217; refers to the evidence the agent has when he has observed a red light, #(red, <italic>HR</italic><sub>1</sub><italic>R</italic><sub>2</sub>) = 2.</p>
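Here is a small self-contained sketch, ours rather than the paper's, of how CC plays out numerically in T<sc>orture</sc>. It exploits the fact that summing CC's centered credences Pr(center in <italic>w</italic> | red) = Pr(<italic>w</italic> | red)/#(red, <italic>w</italic>) over the #(red, <italic>w</italic>) red-centers of each world recovers ordinary conditionalization on the uncentered proposition that red flashes at least once. The function names are our own.

```python
from itertools import product

P_HEADS, P_RED = 0.5, {"H": 0.1, "T": 0.9}

def world_probs(pay):
    """Pr(coin, light1, light2); paying on a round-1 Red rigs round 2 to Red."""
    probs = {}
    for coin, l1, l2 in product("HT", "GR", "GR"):
        p = P_HEADS if coin == "H" else 1 - P_HEADS
        p *= P_RED[coin] if l1 == "R" else 1 - P_RED[coin]
        if pay and l1 == "R":
            p *= 1.0 if l2 == "R" else 0.0   # rigged to Red
        else:
            p *= P_RED[coin] if l2 == "R" else 1 - P_RED[coin]
        probs[(coin, l1, l2)] = p
    return probs

def n_centers(world):
    """#(red, world): how many rounds of this world present the evidence 'red'."""
    return world[1:].count("R")

def cc_credence_tails(pay):
    """Credence in Tails upon seeing red, per Compartmentalized
    Conditionalization: equivalent to conditionalizing on the uncentered
    proposition that at least one red light flashes."""
    probs = world_probs(pay)
    red = {w: p for w, p in probs.items() if n_centers(w) > 0}
    total = sum(red.values())
    return sum(p for w, p in red.items() if w[0] == "T") / total
```

On our reckoning, with these numbers the credence comes out the same whether or not John pays (99/118, roughly .839, below .9 either way), since rigging only affects the second light in worlds that already contain a red flash. So, under CC, paying buys John no drop at all in his credence that he will be tortured.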
<p>To see that paying is sub-optimal, we need only calculate John&#8217;s subjective probabilities for being tortured (equivalently, for the coin landing tails) conditional on paying or not paying given that he observes red.</p>
<p>First, consider the policy of not paying, which we abbreviate p&#772;. John&#8217;s subjective probability here is:</p>
<disp-formula>
<label>(1)</label>
<mml:math id="Eq013-mml">
<mml:mrow><mml:mtable columnalign='left'><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mi>T</mml:mi><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mover accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo>,</mml:mo><mml:mtext>red</mml:mtext><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mtd><mml:mtd columnalign='left'><mml:mrow><mml:mo>=</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>R</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mtext>&#x00A0;in&#x00A0;</mml:mtext><mml:mi>T</mml:mi><mml:msub><mml:mi>R</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>R</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mover accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo>,</mml:mo><mml:mtext>red</mml:mtext><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>R</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mtext>&#x00A0;in&#x00A0;</mml:mtext><mml:mi>T</mml:mi><mml:msub><mml:mi>R</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>R</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mover accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo>,</mml:mo><mml:mtext>red</mml:mtext><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:mo>&#x00A0;</mml:mo></mml:mrow></mml:mtd><mml:mtd columnalign='left'><mml:mrow><mml:mo>&#x2003;&#x2003;</mml:mo><mml:mo>+</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo 
stretchy='false'>(</mml:mo><mml:msub><mml:mi>R</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mtext>&#x00A0;in&#x00A0;</mml:mtext><mml:mi>T</mml:mi><mml:msub><mml:mi>R</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>G</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mover accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo>,</mml:mo><mml:mtext>red</mml:mtext><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>R</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mtext>&#x00A0;in&#x00A0;</mml:mtext><mml:mi>T</mml:mi><mml:msub><mml:mi>G</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>R</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mover accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo>,</mml:mo><mml:mtext>red</mml:mtext><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:mo>&#x00A0;</mml:mo></mml:mrow></mml:mtd><mml:mtd columnalign='left'><mml:mrow><mml:mo>=</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mi>T</mml:mi><mml:msub><mml:mi>R</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>R</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mover accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo>,</mml:mo><mml:mtext>red</mml:mtext><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mi>T</mml:mi><mml:msub><mml:mi>R</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>G</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mover 
accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo>,</mml:mo><mml:mtext>red</mml:mtext><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mi>T</mml:mi><mml:msub><mml:mi>G</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>R</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mover accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo>,</mml:mo><mml:mtext>red</mml:mtext><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow>
</mml:math>
</disp-formula>
<p>The second equality follows from the Relevance Limiting Thesis in general (and from CC in particular). Note that <inline-formula>
<mml:math id="Eq014-mml">
<mml:mrow><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mi>T</mml:mi><mml:msub><mml:mi>R</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>R</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mover accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo>,</mml:mo><mml:mtext>red</mml:mtext><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mi>T</mml:mi><mml:msub><mml:mi>R</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>R</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mtext>red&#x007C;</mml:mtext><mml:mover accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x22C5;</mml:mo><mml:mstyle scriptlevel='+1'><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mi>Pr</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mtext>red&#x007C;</mml:mtext><mml:mover accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac></mml:mstyle></mml:mrow>
</mml:math>
</inline-formula> and similarly for the other terms in (1).</p>
<p>Furthermore, we can verify <inline-formula>
<mml:math id="Eq015-mml">
<mml:mrow><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mtext>red&#x007C;</mml:mtext><mml:mover accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mtext>red&#x007C;</mml:mtext><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow>
</mml:math>
</inline-formula>, so <inline-formula>
<mml:math id="Eq016-mml">
<mml:mrow><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mtext>red&#x007C;</mml:mtext><mml:mover accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mtext>red</mml:mtext><mml:mo stretchy='false'>)</mml:mo></mml:mrow>
</mml:math>
</inline-formula>. To see why, note:</p>
<p><inline-formula>
<mml:math id="Eq017-mml">
<mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mtext>red&#x007C;</mml:mtext><mml:mover accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:mo stretchy='false'>(</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mi>H</mml:mi><mml:msub><mml:mi>G</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>G</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mover accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mi>T</mml:mi><mml:msub><mml:mi>G</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>G</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mover accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo stretchy='false'>))</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x2003;&#x2003;&#x2003;&#x2003;</mml:mo><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac><mml:mo>&#x22C5;</mml:mo><mml:msup><mml:mrow><mml:mn>.9</mml:mn></mml:mrow><mml:mn>2</mml:mn></mml:msup><mml:mo>+</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac><mml:mo>&#x22C5;</mml:mo><mml:msup><mml:mrow><mml:mn>.1</mml:mn></mml:mrow><mml:mn>2</mml:mn></mml:msup></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x2003;&#x2003;&#x2003;&#x2003;</mml:mo><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:mo stretchy='false'>(</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo 
stretchy='false'>(</mml:mo><mml:mi>H</mml:mi><mml:msub><mml:mi>G</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>G</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mi>T</mml:mi><mml:msub><mml:mi>G</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>G</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>))</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x2003;&#x2003;&#x2003;&#x2003;</mml:mo><mml:mo>=</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mtext>red&#x007C;</mml:mtext><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x2003;&#x2003;&#x2003;&#x2003;</mml:mo><mml:mo>=</mml:mo><mml:mn>.59</mml:mn></mml:mtd></mml:mtr></mml:mtable>
</mml:math>
</inline-formula></p>
<p>Putting this all together, we have:</p>
<p><inline-formula>
<mml:math id="Eq018-mml">
<mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mi>T</mml:mi><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mtext>red</mml:mtext><mml:mo>,</mml:mo><mml:mover accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mi>Pr</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mtext>red</mml:mtext><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac><mml:mrow><mml:mo>[</mml:mo> <mml:mrow><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mi>T</mml:mi><mml:msub><mml:mi>R</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>R</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mover accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mi>T</mml:mi><mml:msub><mml:mi>R</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>G</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mover accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mi>T</mml:mi><mml:msub><mml:mi>G</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>R</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mover accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo stretchy='false'>)</mml:mo></mml:mrow> <mml:mo>]</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x2003;&#x2003;&#x2003;&#x2003;&#x2003;</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo 
stretchy='false'>(</mml:mo><mml:mtext>red</mml:mtext><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac><mml:mo>&#x22C5;</mml:mo><mml:msup><mml:mrow><mml:mn>.9</mml:mn></mml:mrow><mml:mn>2</mml:mn></mml:msup><mml:mo>+</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac><mml:mo>&#x22C5;</mml:mo><mml:mo stretchy='false'>(</mml:mo><mml:mn>.09</mml:mn><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac><mml:mo>&#x22C5;</mml:mo><mml:mo stretchy='false'>(</mml:mo><mml:mn>.09</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x2003;&#x2003;&#x2003;&#x2003;&#x2003;</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>.495</mml:mn></mml:mrow><mml:mrow><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mtext>red</mml:mtext><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac><mml:mo>&#x00A0;</mml:mo></mml:mtd></mml:mtr></mml:mtable>
</mml:math>
</inline-formula></p>
<p>To calculate the conditional probability of torture given that John pays upon seeing red, we use the same derivation to see that:</p>
<p><inline-formula>
<mml:math id="Eq019-mml">
<mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mi>T</mml:mi><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mtext>red</mml:mtext><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mtext>red</mml:mtext><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac><mml:mrow><mml:mo>[</mml:mo> <mml:mrow><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>T</mml:mi><mml:msub><mml:mi>R</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>R</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mi>T</mml:mi><mml:msub><mml:mi>R</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>G</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mi>T</mml:mi><mml:msub><mml:mi>G</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>R</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow> <mml:mo>]</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x2003;&#x2003;&#x2003;&#x2003;&#x2003;</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mtext>red</mml:mtext><mml:mo 
stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac><mml:mo>&#x22C5;</mml:mo><mml:mn>.9</mml:mn><mml:mo>+</mml:mo><mml:mn>0</mml:mn><mml:mo>+</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac><mml:mo>&#x22C5;</mml:mo><mml:mn>.09</mml:mn></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x2003;&#x2003;&#x2003;&#x2003;&#x2003;</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>.495</mml:mn></mml:mrow><mml:mrow><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mtext>red</mml:mtext><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac></mml:mtd></mml:mtr></mml:mtable>
</mml:math>
</inline-formula></p>
<p>So, if John follows both the Relevance Limiting Thesis and <sc>edt</sc>, he won&#8217;t pay.</p>
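The arithmetic above can be double-checked by brute-force enumeration. The sketch below encodes one reconstruction of the setup, inferred from the numbers in the derivation rather than stated here explicitly (so treat its dynamics as an assumption): the torture coin is fair, each light is red with probability .9 given torture and .1 otherwise, and once John has paid, every later light is guaranteed to be red. Conditioning on &#8220;a red light appears at some point&#8221; mirrors the Relevance Limiting Thesis treatment, on which John simply rules out worlds containing no red light and renormalizes.

```python
from itertools import product

# Reconstructed Torture setup (an assumption, reverse-engineered from the
# numbers in the text): fair torture coin; each light red with chance .9
# under torture and .1 otherwise; once John has paid, every subsequent
# light is red for certain.

def distribution(pays):
    """Return {(torture, lights): probability} over the two-light worlds."""
    dist = {}
    for torture in (True, False):
        for lights in product("RG", repeat=2):
            prob, paid = 0.5, False  # fair torture coin
            for colour in lights:
                p_red = 1.0 if paid else (0.9 if torture else 0.1)
                prob *= p_red if colour == "R" else 1.0 - p_red
                if pays and colour == "R":
                    paid = True  # policy p: pay at the first red light
            dist[(torture, lights)] = prob
    return dist

def pr_torture_given_red(pays):
    dist = distribution(pays)
    red = sum(pr for (t, ls), pr in dist.items() if "R" in ls)
    red_and_torture = sum(pr for (t, ls), pr in dist.items() if t and "R" in ls)
    return red_and_torture / red

# Both policies come out to .495/.59, so paying brings no good news:
print(pr_torture_given_red(True), pr_torture_given_red(False))
```

Since the two conditional probabilities coincide and payment has a (trivial but nonzero) cost, the enumeration agrees with the text: under the Relevance Limiting Thesis, <sc>edt</sc> says not to pay.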
</sec>
<sec>
<title>4.2 A Variant</title>
<p>Our analysis of T<sc>orture</sc> only holds if the Relevance Limiting Thesis is false. And, admittedly, the general consensus is that the Relevance Limiting Thesis is false&#8212;most of the controversy regarding the epistemology of self-locating belief concerns <italic>how</italic> self-locating evidence affects credences in uncentered propositions, not <italic>whether</italic> it does. So it would not be the end of the world if our argument had to assume that the Relevance Limiting Thesis was false. But we don&#8217;t. Happily, it&#8217;s possible to modify T<sc>orture</sc> slightly so that the Relevance Limiting Thesis loses its relevance.</p>
<p>The Relevance Limiting Thesis matters for our initial statement of T<sc>orture</sc> only because of the possibility of duplicate experiences. But it&#8217;s easy to adapt Michael Titelbaum&#8217;s (<xref ref-type="bibr" rid="B31">2008</xref>) &#8220;technicolor&#8221; trick and thereby avoid that possibility. Let&#8217;s suppose that there&#8217;s another fair coin that&#8217;s tossed, and it will affect the brightness of the red/green lights that John is shown. If this coin lands Heads then the light he sees at <italic>t</italic><sub>1</sub> will be bright and the light he sees at <italic>t</italic><sub>2</sub> will be dim, and if the coin lands Tails then the light he sees at <italic>t</italic><sub>1</sub> will be dim and the light he sees at <italic>t</italic><sub>2</sub> will be bright. Since the brightness of the light is guaranteed to vary across times, even cases in which John sees two red lights or two green lights will not contain duplicate experiences, and thus the Relevance Limiting Thesis will not apply. Whatever sort of light John sees, he can rule out worlds in which he never sees that sort of light and renormalize his credences in the worlds in which he does see that sort of light.</p>
<p>Most problems in the epistemology of self-locating belief are not so easily avoided. As we mentioned, the main controversies involve how self-locating evidence affects credences in uncentered propositions. And the crux of the controversies is how confirmation works between worlds that contain different numbers of agents (or different quantities of experience for some agent).<xref ref-type="fn" rid="n16">16</xref> But T<sc>orture</sc> involves the same quantity of experiences for John no matter what. Thus although John is uncertain whether he&#8217;s at <italic>t</italic><sub>1</sub> or <italic>t</italic><sub>2</sub>, this self-locating uncertainty is entirely pedestrian&#8212;like not being sure exactly what time it is under ordinary circumstances. All major views regarding the epistemology of self-locating belief will validate the following calculations.<xref ref-type="fn" rid="n17">17</xref></p>
</sec>
<sec>
<title>4.3 The Details</title>
<p>To see why the technicolor trick works, we&#8217;ll assume without loss of generality that John sees a dim red light, which we abbreviate dr.</p>
<p>We&#8217;ll write the result of the first coin toss (which determines whether John gets tortured) as either <italic>H</italic><sub>1</sub> or <italic>T</italic><sub>1</sub>, the result of the second coin toss as <italic>H</italic><sub>2</sub> or <italic>T</italic><sub>2</sub>, and use uppercase and lowercase letters to denote bright and dim lights, respectively. So <italic>H</italic><sub>1</sub><italic>H</italic><sub>2</sub><italic>R</italic><sub>1</sub><italic>g</italic><sub>2</sub> denotes the fact that both coins landed heads, the first light was bright red, and the second light was dim green.</p>
<p>Suppose John will pay upon seeing a red light (dim or not). Then:</p>
<p><inline-formula>
<mml:math id="Eq020-mml">
<mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mtext>dr</mml:mtext><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>H</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:msub><mml:mi>R</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>r</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mtext>dr</mml:mtext><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>H</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:msub><mml:mi>G</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>r</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mtext>dr</mml:mtext><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x2003;&#x2003;&#x2003;&#x2003;&#x2003;&#x2003;&#x2003;</mml:mo><mml:mo>+</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>T</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:msub><mml:mi>r</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>R</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mtext>dr</mml:mtext><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo 
stretchy='false'>(</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>T</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:msub><mml:mi>r</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>G</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mtext>dr</mml:mtext><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x2003;&#x2003;&#x2003;&#x2003;&#x2003;</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mtext>dr&#x007C;</mml:mtext><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac><mml:mrow> <mml:mo>[</mml:mo> <mml:mrow><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>H</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:msub><mml:mi>R</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>r</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>H</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:msub><mml:mi>G</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>r</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x2003;&#x2003;&#x2003;&#x2003;&#x2003;&#x2003;&#x2003;</mml:mo><mml:mo>+</mml:mo><mml:mrow><mml:mrow><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo 
stretchy='false'>(</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>T</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:msub><mml:mi>r</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>R</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>T</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:msub><mml:mi>r</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>G</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow> <mml:mo>]</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x2003;&#x2003;&#x2003;&#x2003;&#x2003;</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mtext>dr&#x007C;</mml:mtext><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mn>4</mml:mn></mml:mfrac><mml:mo>&#x22C5;</mml:mo><mml:mn>.9</mml:mn><mml:mo>+</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mn>4</mml:mn></mml:mfrac><mml:mo>&#x22C5;</mml:mo><mml:mn>.09</mml:mn><mml:mo>+</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mn>4</mml:mn></mml:mfrac><mml:mo>&#x22C5;</mml:mo><mml:mn>.9</mml:mn><mml:mo>+</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x2003;&#x2003;&#x2003;&#x2003;&#x2003;</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mtext>dr&#x007C;</mml:mtext><mml:mi>p</mml:mi><mml:mo 
stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac><mml:mo>&#x22C5;</mml:mo><mml:mn>.4725</mml:mn></mml:mtd></mml:mtr></mml:mtable>
</mml:math>
</inline-formula></p>
<p>The second equality follows by the definition of conditional probability and the fact that observing a dim red light is guaranteed in each of the worlds considered.</p>
<p>Next we calculate:</p>
<p><inline-formula>
<mml:math id="Eq021-mml">
<mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mtext>dr&#x007C;</mml:mtext><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>H</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>H</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:msub><mml:mi>R</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>r</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>H</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>H</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:msub><mml:mi>G</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>r</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>H</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>T</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:msub><mml:mi>r</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>R</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x2003;&#x2003;&#x2003;&#x2003;&#x2003;</mml:mo><mml:mo>+</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>H</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:msub><mml:mi>R</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>r</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>p</mml:mi><mml:mo 
stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>H</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:msub><mml:mi>G</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>r</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>T</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:msub><mml:mi>r</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>R</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x2003;&#x2003;&#x2003;&#x2003;</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mn>4</mml:mn></mml:mfrac><mml:mrow><mml:mo>[</mml:mo> <mml:mrow><mml:mn>.1</mml:mn><mml:mo>+</mml:mo><mml:mn>.09</mml:mn><mml:mo>+</mml:mo><mml:mn>.1</mml:mn><mml:mo>+</mml:mo><mml:mn>.9</mml:mn><mml:mo>+</mml:mo><mml:mn>.09</mml:mn><mml:mo>+</mml:mo><mml:mn>.9</mml:mn></mml:mrow> <mml:mo>]</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x2003;&#x2003;&#x2003;&#x2003;</mml:mo><mml:mo>=</mml:mo><mml:mn>.545</mml:mn></mml:mtd></mml:mtr></mml:mtable>
</mml:math>
</inline-formula></p>
<p>So,</p>
<p><inline-formula>
<mml:math id="Eq022-mml">
<mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mtext>dr</mml:mtext><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mn>.4725</mml:mn><mml:mo>/</mml:mo><mml:mn>.545</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x2003;&#x2003;&#x2003;&#x2003;</mml:mo><mml:mo>&#x2248;</mml:mo><mml:mn>.867</mml:mn></mml:mtd></mml:mtr></mml:mtable>
</mml:math>
</inline-formula></p>
<p>On the other hand, if John doesn&#8217;t pay, a similar calculation reveals that:</p>
<p><inline-formula>
<mml:math id="Eq023-mml">
<mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mtext>dr</mml:mtext><mml:mo>,</mml:mo><mml:mover accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mtext>dr&#x007C;</mml:mtext><mml:mover accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac><mml:mrow> <mml:mo>[</mml:mo> <mml:mrow><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>H</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:msub><mml:mi>R</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>r</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mover accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>H</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:msub><mml:mi>G</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>r</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mover accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x2003;&#x2003;&#x2003;&#x2003;&#x2003;&#x2003;</mml:mo><mml:mo>+</mml:mo><mml:mrow><mml:mrow><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo 
stretchy='false'>(</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>T</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:msub><mml:mi>r</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>R</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mover accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>T</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:msub><mml:mi>r</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>G</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mover accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo stretchy='false'>)</mml:mo></mml:mrow> <mml:mo>]</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x2003;&#x2003;&#x2003;&#x2003;&#x2003;</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mtext>dr&#x007C;</mml:mtext><mml:mover accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac><mml:mo>&#x22C5;</mml:mo><mml:mn>.45</mml:mn></mml:mtd></mml:mtr></mml:mtable>
</mml:math>
</inline-formula></p>
<p>A tedious calculation shows that <inline-formula>
<mml:math id="Eq024-mml">
<mml:mrow><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mtext>dr&#x007C;</mml:mtext><mml:mover accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mn>.5</mml:mn></mml:mrow>
</mml:math>
</inline-formula>. So:</p>
<p><inline-formula>
<mml:math id="Eq025-mml">
<mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:mtext mathvariant="normal">Pr</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo stretchy='false'>&#x007C;</mml:mo><mml:mtext>dr</mml:mtext><mml:mo>,</mml:mo><mml:mover accent='true'><mml:mi>p</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>.45</mml:mn></mml:mrow><mml:mrow><mml:mn>.5</mml:mn></mml:mrow></mml:mfrac></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x2003;&#x2003;&#x2003;&#x2003;</mml:mo><mml:mo>=</mml:mo><mml:mn>.9</mml:mn></mml:mtd></mml:mtr></mml:mtable>
</mml:math>
</inline-formula></p>
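<p>As a sanity check, the arithmetic of the two equations above can be reproduced in a few lines of Python. (This is only a verification sketch: the value .45 is the sum of the four joint probabilities displayed above, .5 is Pr(dr) from the tedious calculation, and the variable names are ours, not part of the case.)</p>

```python
# Sanity check of the Bayes computation in the text (variable names are ours).
# joint_sum is the sum of the four joint probabilities listed above;
# pr_dr is Pr(dr | p-bar) from the "tedious calculation".
joint_sum = 0.45
pr_dr = 0.5

# Conditioning on dr: Pr(T1 | dr, p-bar) = joint_sum / Pr(dr | p-bar)
pr_T1_given_dr = joint_sum / pr_dr
print(pr_T1_given_dr)  # 0.9
```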
<p>Given that the cost of payment is trivial, John prefers paying to not paying if he follows <sc>edt</sc>.</p>
<p>(The astute reader may notice that John has more possible options now, such as paying if the red light is dim but not when it&#8217;s bright or vice versa. We won&#8217;t go through the calculations here, but John will prefer paying upon seeing <italic>any</italic> red light to these more complicated options.)</p>
</sec>
</sec>
<sec>
<title>5. Why Ain&#8217;cha Rich?</title>
<p>A traditional motivation for <sc>edt</sc> is that its followers tend to do better than followers of <sc>cdt</sc>. In N<sc>ewcomb</sc>, for instance, one-boxers tend to end up richer than two-boxers. So one-boxers can challenge two-boxers by saying, &#8220;If you&#8217;re so smart, why ain&#8217;cha rich?&#8221;<xref ref-type="fn" rid="n18">18</xref></p>
<p>The causal decision theorist can of course retort that the evidentialist is looking at the wrong reference classes. Of people who walk into a room with only a thousand dollars, causalists do better. And of people who walk into a room with a million and a thousand dollars, causalists also do better. From the <sc>cdt</sc> point of view, the fact that evidentialists tend to walk into better rooms is irrelevant.</p>
<p>In this case, though, there&#8217;s no good sense in which <sc>edt</sc> outperforms <sc>cdt</sc>. Evidentialists get tortured just as often as causalists. People who choose to pay get tortured just as often as people who choose not to pay.<xref ref-type="fn" rid="n19">19</xref> What&#8217;s different is the ratio of instances of torture to red-seeing time-slices&#8212;evidentialists have fewer instances of torture per red-seeing time-slice than causalists do. But that&#8217;s merely because evidentialists stupidly produce extra red-seeing time-slices; they make themselves get bad news more often so as to dilute the significance of the bad news. This plainly is an irrational manipulation of the news.</p>
<p>In N<sc>ewcomb</sc>, &#8220;why ain&#8217;cha rich&#8221; reasoning militates in favor of <sc>edt</sc>&#8217;s verdict. But in T<sc>orture</sc>, &#8220;why ain&#8217;cha rich&#8221; reasoning militates against <sc>edt</sc>&#8217;s verdict. So <sc>edt</sc> is not supported by &#8220;why ain&#8217;cha rich&#8221; reasoning. In fact, that reasoning cuts against <sc>edt</sc>.<xref ref-type="fn" rid="n20">20</xref></p>
<p>Consider an analogous situation: You suffer from infrequent but very painful migraine headaches. There&#8217;s a biotech company that can predict when you&#8217;ll get migraines, and it notifies you about upcoming migraines the day before they happen. But if you pay them extra, they&#8217;ll also randomly tell you that you&#8217;re going to get a migraine even when you won&#8217;t. That way the news value of being told that you&#8217;re going to get a migraine won&#8217;t be as bad. It&#8217;s obviously irrational to pay to get bad news more often in order to make each instance of bad news less bad. By paying you don&#8217;t get to have any fewer headaches, so it&#8217;s not worth anything. And indeed, <sc>edt</sc> would not recommend paying extra; the risk of migraines is the same either way. However, it&#8217;s good news to learn that in the past you <italic>had</italic> paid to make it more likely that you&#8217;d get fallacious notifications of future migraines. In effect, <sc>edt</sc> recommends paying in the present so as to get evidence that you paid in the past. This is obviously foolish. It&#8217;s good news not worth paying for.</p>
</sec>
<sec>
<title>6. The Larger Dialectic</title>
<p>It is a common view that the correct decision theory mandates the maximization of expected utility.<xref ref-type="fn" rid="n21">21</xref> Yet there are deep disagreements about how expected utilities should be calculated&#8212;in effect, about what expected utilities are. <sc>edt</sc> and <sc>cdt</sc> are the most prominent positions (though there are others). The standard methodology is to come up with cases where these decision theories disagree and pump intuitions about which verdict is right. But intuitions differ, and any verdict is liable to be justifiable in some fairly natural sense.<xref ref-type="fn" rid="n22">22</xref> <sc>edt</sc> will maximize evidential expected utility and fail to maximize causal expected utility, while <sc>cdt</sc> will maximize causal expected utility and fail to maximize evidential expected utility. Any sensible decision theory will be optimal relative to its own sense of optimality. So the strongest argument against a sensible decision theory is one that makes its sense of optimality seem foolish.</p>
<p>Several such arguments have been attempted regarding <sc>edt</sc>. N<sc>ewcomb</sc> was meant to show that followers of <sc>edt</sc> foolishly reject free money. But followers of <sc>edt</sc> (unlike those who diverge from <sc>edt</sc> in N<sc>ewcomb</sc>) tend to wind up rich. That doesn&#8217;t seem straightforwardly foolish.</p>
<p>Arntzenius (<xref ref-type="bibr" rid="B5">2008</xref>) offers a case in which, given a predictor who predicts whether an agent will win or lose their bets, a follower of <sc>edt</sc> will tend to lose money in the long run. But it&#8217;s odd both to have predictions about whether or not bets will win and that the argument applies only to long-run tendencies.<xref ref-type="fn" rid="n23">23</xref></p>
<p>Wells (<xref ref-type="bibr" rid="B34">2019</xref>) offers a complicated case involving multiple decisions, predictions, and coin tosses in which a follower of <sc>edt</sc> is guaranteed to end up poorer than a follower of <sc>cdt</sc>. But Wells&#8217; case crucially relies on the follower of <sc>edt</sc> and the follower of <sc>cdt</sc> having different credences (about what they expect to do), and thus the agents Wells compares do not actually face the same decision problem.<xref ref-type="fn" rid="n24">24</xref></p>
<p>A further advantage of T<sc>orture</sc> is that it is straightforwardly unaffected by the tickle defense. Ellery Eells (<xref ref-type="bibr" rid="B12">1981</xref>, <xref ref-type="bibr" rid="B13">1982</xref>) argues that <sc>edt</sc> doesn&#8217;t actually recommend one-boxing in N<sc>ewcomb</sc>, and thus that Lewis&#8217; accusation against <sc>edt</sc> on the basis of that recommendation is misguided. Eells contends that both the predictor&#8217;s prediction and the agent&#8217;s action are based on the agent&#8217;s beliefs and desires, and further that the agent can feel the pull of these beliefs and desires&#8212;the tickle&#8212;prior to action. Detecting the character of one&#8217;s tickle will screen off the correlation between prediction and action, thus removing any incentive to one-box. It&#8217;s unclear whether the tickle defense works in N<sc>ewcomb</sc> or in the Arntzenius and Wells cases. But in T<sc>orture</sc>, it is obvious that what matters is what the agent actually chooses, and not any sort of doxastic or bouletic tickle. There&#8217;s no way to screen off the relevant correlation, and thus no way to claim that <sc>edt</sc> avoids making a foolish recommendation.</p>
<p>The most prominent problem cases for <sc>edt</sc> do not make it clear that <sc>edt</sc> has a problem. The case presented in this paper is simpler, more straightforward, and does show what&#8217;s wrong with <sc>edt</sc>. Lewis&#8217; famous charge that <sc>edt</sc> irrationally manages the news is vindicated.<xref ref-type="fn" rid="n25">25</xref></p>
</sec>
</body>
<back>
<fn-group>
<fn id="n1"><p>For a more precise characterization which differentiates causation from causal dependence, see Hedden (<xref ref-type="bibr" rid="B19">2023</xref>).</p></fn>
<fn id="n2"><p>Some formulations of <sc>edt</sc> dispense with the division of acts, states, and outcomes, and some formulations of <sc>cdt</sc> avoid probabilities over acts. Neither of these finer points makes a substantive difference to our discussion below. See Jeffrey (<xref ref-type="bibr" rid="B24">1983</xref>) for more on the finer points about <sc>edt</sc> and see H&#225;jek (<xref ref-type="bibr" rid="B17">2016</xref>) for more on the finer points about <sc>cdt</sc>.</p></fn>
<fn id="n3"><p>By &#8216;upstream&#8217;, we mean not downstream.</p></fn>
<fn id="n4"><p>For defenses of one-boxing, see Spohn (<xref ref-type="bibr" rid="B30">2012</xref>); Ahmed (<xref ref-type="bibr" rid="B1">2014</xref>); Horwich (<xref ref-type="bibr" rid="B22">1987</xref>); Horgan (<xref ref-type="bibr" rid="B20">1981</xref>); Levinstein and Soares (<xref ref-type="bibr" rid="B25">2020</xref>); Yudkowsky and Soares (<xref ref-type="bibr" rid="B35">2017</xref>).</p></fn>
<fn id="n5"><p>See Levinstein and Soares (<xref ref-type="bibr" rid="B25">2020</xref>); Yudkowsky and Soares (<xref ref-type="bibr" rid="B35">2017</xref>).</p></fn>
<fn id="n6"><p>If the predictor runs a simulation of your decision procedure, then the simulation still would have likely output <italic>one-box</italic> according to <sc>fdt</sc>. Note that the important thing is that the predictor&#8217;s choice is somehow influenced by the output of the procedure you use to decide, even if the predictor herself doesn&#8217;t &#8216;run&#8217; it.</p></fn>
<fn id="n7"><p>There are actually many different versions of <sc>fdt</sc>, but those differences need not matter to us. See Yudkowsky and Soares (<xref ref-type="bibr" rid="B35">2017</xref>).</p></fn>
<fn id="n8"><p>Of course, one could criticize <sc>fdt</sc> for giving the wrong recommendations based on the news it does get, but that doesn&#8217;t make it ostrich-like. The crux of Lewis&#8217; charge is that <sc>edt</sc> wrongly recommends actions based not on causal effects, but instead on epistemic upshots.</p></fn>
<fn id="n9"><p>Cohesive Decision Theory (<xref ref-type="bibr" rid="B27">Meacham, 2010</xref>) also prescribes one-boxing for reasons unrelated to news-mismanagement. Although the exact technical details are rather involved, the rough idea is that <sc>coh</sc>DT tells you to do whatever you would have wanted to bind yourself to do at the beginning of your life (and before any predictions were made). In N<sc>ewcomb</sc>, you would have wanted to bind yourself to one-box before any predictions were made. In that way, whenever a prediction actually ends up being made, it&#8217;s highly likely there will be money in the opaque box. So, <sc>coh</sc>DT tells you to one-box because one-boxing conforms to a hypothetical prior plan, not because one-boxing is good news.</p></fn>
<fn id="n10"><p>As Ahmed (<xref ref-type="bibr" rid="B3">2021</xref>) notes, there are multiple senses in which a decision theory could be deemed ostrich-like. Some are senses in which all standard decision theories are ostrich-like and some are senses in which even <sc>edt</sc> is not ostrich-like. Ahmed favors a definition according to which an ostrich-like decision theory would recommend manipulating one&#8217;s beliefs directly (such as by taking a pill to make you think that everything is fantastic). We agree with Ahmed that such direct manipulations are foolish and that <sc>edt</sc> does not recommend them. We also don&#8217;t want to get into a debate about what the definition of ostrich-like is. But our core point is that <sc>edt</sc>&#8217;s verdict in T<sc>orture</sc> shows that <sc>edt</sc> is flawed and that this flaw is due to news mismanagement.</p></fn>
<fn id="n11"><p>Note that a ratifiability requirement would plausibly alter <sc>edt</sc>&#8217;s verdict in this case. We&#8217;re skeptical of ratifiability requirements for standard reasons (particularly that they sometimes forbid all actions, see Egan (<xref ref-type="bibr" rid="B14">2007</xref>) for more). And in any case, our intended topic is classic <sc>edt</sc>.</p></fn>
<fn id="n12"><p>See Arntzenius (<xref ref-type="bibr" rid="B5">2008</xref>) and Ahmed and Price (<xref ref-type="bibr" rid="B4">2012</xref>) for discussion of O<sc>ptional</sc> N<sc>ewcomb</sc>-like cases.</p></fn>
<fn id="n13"><p>This case involves the possibility of memory loss, which some consider to be a rational failing. We don&#8217;t share this view, but those who do may consider a variant of the case in which John has a twin, and both twins are sure that they will make the same choices. In this variant, the relevant issues are reproduced without the possibility of memory loss.</p></fn>
<fn id="n14"><p>Soares and Fallenstein (<xref ref-type="bibr" rid="B29">2015</xref>) present a case called XOR B<sc>lackmail</sc> where <sc>edt</sc> comes apart from both <sc>fdt</sc> and <sc>cdt</sc>. We believe that XOR B<sc>lackmail</sc> also supports the accusation that <sc>edt</sc> is ostrich-like, but that T<sc>orture</sc> supports it even more strongly. The most important advantages for T<sc>orture</sc> are that it doesn&#8217;t involve an exotic predictor but only appeals to self-prediction, and it shows that an <sc>edt</sc> agent will <italic>directly manipulate</italic> a signal in order to receive auspicious news. See also Conitzer (<xref ref-type="bibr" rid="B10">2015</xref>) for another case that, like ours, involves <italic>de se</italic> credences.</p></fn>
<fn id="n15"><p>One could revise the procedure, replacing the single Green light flash with a sequence of a Green light, a Green light, and a Red light and replacing the single Red light flash with a sequence of a Green light, a Red light, and a Red light. It&#8217;s natural to think that seeing a Green light is evidence that the coin landed Heads and that seeing a Red light is evidence that the coin landed Tails. But since it&#8217;s certain that John will see at least one Green light and at least one Red light, according to the Relevance Limiting Thesis the flashes give him no evidence at all. This peculiar consequence is often taken as an argument against the Relevance Limiting Thesis. See also Weintraub (<xref ref-type="bibr" rid="B33">2004</xref>), Bostrom (<xref ref-type="bibr" rid="B7">2002</xref>), Titelbaum (<xref ref-type="bibr" rid="B31">2008</xref>), Briggs (<xref ref-type="bibr" rid="B8">2010</xref>), and Dorr (<xref ref-type="bibr" rid="B11">ms</xref>) for more.</p></fn>
<fn id="n16"><p>In the framework of time-slice epistemology these amount to the same thing. See Hedden (<xref ref-type="bibr" rid="B18">2015</xref>) for more.</p></fn>
<fn id="n17"><p>For discussions of how to update in the face of centered evidence, see Bostrom (<xref ref-type="bibr" rid="B7">2002</xref>) and Titelbaum (<xref ref-type="bibr" rid="B32">2012</xref>). For a proof that the major theories of self-locating belief all agree in pedestrian circumstances, see Isaacs et al. (<xref ref-type="bibr" rid="B23">2022</xref>).</p></fn>
<fn id="n18"><p>See Lewis (<xref ref-type="bibr" rid="B26">1981</xref>).</p></fn>
<fn id="n19"><p>To see why, note that this is essentially the point the proponent of Compartmentalized Conditionalization makes: those who see red at least once and pay are tortured just as often as those who see red at least once and don&#8217;t pay.</p></fn>
<fn id="n20"><p>Ahmed and Price (<xref ref-type="bibr" rid="B4">2012</xref>) unpack &#8220;why ain&#8217;cha rich&#8221; reasoning, deploying it to support <sc>edt</sc>. But one can formulate an argument parallel to theirs which opposes <sc>edt</sc>:</p>
<p><list list-type="simple">
<list-item><p>(1) The average return of being a non-payer exceeds that of being a payer.</p></list-item>
<list-item><p>(2) Everyone can see that (1) is true.</p></list-item>
<list-item><p>(3) Therefore not paying foreseeably does better than paying.</p></list-item>
<list-item><p>(4) Therefore <sc>edt</sc> is committed to the foreseeably worse option for anyone facing T<sc>orture</sc>.</p></list-item>
</list></p>
<p>T<sc>orture</sc> shows that&#8212;by the very lights of <sc>edt</sc>&#8217;s defenders&#8212;<sc>edt</sc> is flawed.</p></fn>
<fn id="n21"><p>For recent alternatives to expected utility theory, see Buchak (<xref ref-type="bibr" rid="B9">2013</xref>) and Rinard (<xref ref-type="bibr" rid="B28">2015</xref>).</p></fn>
<fn id="n22"><p>For more on this point, see Horgan (<xref ref-type="bibr" rid="B21">2017</xref>) and Bales (<xref ref-type="bibr" rid="B6">2018</xref>).</p></fn>
<fn id="n23"><p>See Ahmed and Price (<xref ref-type="bibr" rid="B4">2012</xref>) for an extended critique of Arntzenius&#8217; argument on these two points.</p></fn>
<fn id="n24"><p>For more on this point see Ahmed (<xref ref-type="bibr" rid="B2">2020</xref>).</p></fn>
<fn id="n25"><p>Thanks to John Hawthorne, Vince Conitzer, and an audience at the Formal Rationality Forum at Northeastern University. Special thanks to Caspar Oesterheld who provided insightful comments and pushed us on the Relevance Limiting Thesis. Ben Levinstein&#8217;s research was partly supported by Mellon New Directions grant 1905-06835.</p></fn>
</fn-group>
<ref-list>
<ref id="B1"><mixed-citation publication-type="book"><string-name><surname>Ahmed</surname>, <given-names>A.</given-names></string-name> (<year>2014</year>). <source>Evidence, Decision and Causality</source>. <publisher-name>Cambridge University Press</publisher-name>.</mixed-citation></ref>
<ref id="B2"><mixed-citation publication-type="journal"><string-name><surname>Ahmed</surname>, <given-names>A.</given-names></string-name> (<year>2020</year>). <article-title>Equal opportunities in Newcomb&#8217;s problem and elsewhere</article-title>. <source>Mind</source> <volume>129</volume>(<issue>515</issue>), <fpage>867</fpage>&#8211;<lpage>886</lpage>.</mixed-citation></ref>
<ref id="B3"><mixed-citation publication-type="book"><string-name><surname>Ahmed</surname>, <given-names>A.</given-names></string-name> (<year>2021</year>). <source>Evidential Decision Theory</source>. <publisher-name>Cambridge University Press</publisher-name>.</mixed-citation></ref>
<ref id="B4"><mixed-citation publication-type="journal"><string-name><surname>Ahmed</surname>, <given-names>A.</given-names></string-name> and <string-name><given-names>H.</given-names> <surname>Price</surname></string-name> (<year>2012</year>). <article-title>Arntzenius on &#8216;why ain&#8217;cha rich?&#8217;</article-title>. <source>Erkenntnis</source> <volume>77</volume>(<issue>1</issue>), <fpage>15</fpage>&#8211;<lpage>30</lpage>.</mixed-citation></ref>
<ref id="B5"><mixed-citation publication-type="journal"><string-name><surname>Arntzenius</surname>, <given-names>F.</given-names></string-name> (<year>2008</year>). <article-title>No regrets, or: Edith Piaf revamps decision theory</article-title>. <source>Erkenntnis</source> <volume>68</volume>, <fpage>277</fpage>&#8211;<lpage>297</lpage>.</mixed-citation></ref>
<ref id="B6"><mixed-citation publication-type="journal"><string-name><surname>Bales</surname>, <given-names>A.</given-names></string-name> (<year>2018</year>). <article-title>Decision-theoretic pluralism</article-title>. <source>Philosophical Quarterly</source> <volume>68</volume>(<issue>273</issue>), <fpage>801</fpage>&#8211;<lpage>818</lpage>.</mixed-citation></ref>
<ref id="B7"><mixed-citation publication-type="book"><string-name><surname>Bostrom</surname>, <given-names>N.</given-names></string-name> (<year>2002</year>). <source>Anthropic Bias: Observation Selection Effects in Science and Philosophy</source>. <publisher-name>Routledge</publisher-name>.</mixed-citation></ref>
<ref id="B8"><mixed-citation publication-type="book"><string-name><surname>Briggs</surname>, <given-names>R.</given-names></string-name> (<year>2010</year>). <chapter-title>Putting a value on beauty</chapter-title>. In <string-name><given-names>T. S.</given-names> <surname>Gendler</surname></string-name> and <string-name><given-names>J.</given-names> <surname>Hawthorne</surname></string-name> (Eds.), <source>Oxford Studies in Epistemology</source>, Volume <volume>3</volume>, pp. <fpage>3</fpage>&#8211;<lpage>34</lpage>. <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="B9"><mixed-citation publication-type="book"><string-name><surname>Buchak</surname>, <given-names>L.</given-names></string-name> (<year>2013</year>). <source>Risk and Rationality</source>. <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="B10"><mixed-citation publication-type="journal"><string-name><surname>Conitzer</surname>, <given-names>V.</given-names></string-name> (<year>2015</year>). <article-title>A Dutch book against sleeping beauties who are evidential decision theorists</article-title>. <source>Synthese</source> <volume>192</volume>(<issue>9</issue>), <fpage>2887</fpage>&#8211;<lpage>2899</lpage>.</mixed-citation></ref>
<ref id="B11"><mixed-citation publication-type="journal"><string-name><surname>Dorr</surname>, <given-names>C.</given-names></string-name> (ms). <article-title>A challenge for halfers</article-title>.</mixed-citation></ref>
<ref id="B12"><mixed-citation publication-type="journal"><string-name><surname>Eells</surname>, <given-names>E.</given-names></string-name> (<year>1981</year>). <article-title>Causality, utility, and decision</article-title>. <source>Synthese</source> <volume>48</volume>(<issue>2</issue>), <fpage>295</fpage>&#8211;<lpage>329</lpage>.</mixed-citation></ref>
<ref id="B13"><mixed-citation publication-type="book"><string-name><surname>Eells</surname>, <given-names>E.</given-names></string-name> (<year>1982</year>). <source>Rational Decision and Causality</source>. <publisher-name>Cambridge University Press</publisher-name>.</mixed-citation></ref>
<ref id="B14"><mixed-citation publication-type="journal"><string-name><surname>Egan</surname>, <given-names>A.</given-names></string-name> (<year>2007</year>). <article-title>Some counterexamples to causal decision theory</article-title>. <source>Philosophical Review</source> <volume>116</volume>(<issue>1</issue>), <fpage>93</fpage>&#8211;<lpage>114</lpage>.</mixed-citation></ref>
<ref id="B15"><mixed-citation publication-type="journal"><string-name><surname>Elga</surname>, <given-names>A.</given-names></string-name> (<year>2000</year>). <article-title>Self-locating belief and the Sleeping Beauty problem</article-title>. <source>Analysis</source> <volume>60</volume>(<issue>2</issue>), <fpage>143</fpage>&#8211;<lpage>147</lpage>.</mixed-citation></ref>
<ref id="B16"><mixed-citation publication-type="journal"><string-name><surname>Gallow</surname>, <given-names>J. D.</given-names></string-name> (<year>2020</year>). <article-title>The causal decision theorist&#8217;s guide to managing the news</article-title>. <source>The Journal of Philosophy</source> <volume>117</volume>(<issue>3</issue>), <fpage>117</fpage>&#8211;<lpage>149</lpage>.</mixed-citation></ref>
<ref id="B17"><mixed-citation publication-type="journal"><string-name><surname>H&#225;jek</surname>, <given-names>A.</given-names></string-name> (<year>2016</year>). <article-title>Deliberation welcomes prediction</article-title>. <source>Episteme</source> <volume>13</volume>(<issue>4</issue>), <fpage>507</fpage>&#8211;<lpage>528</lpage>.</mixed-citation></ref>
<ref id="B18"><mixed-citation publication-type="journal"><string-name><surname>Hedden</surname>, <given-names>B.</given-names></string-name> (<year>2015</year>). <article-title>Time-slice rationality</article-title>. <source>Mind</source> <volume>124</volume>(<issue>494</issue>), <fpage>449</fpage>&#8211;<lpage>491</lpage>.</mixed-citation></ref>
<ref id="B19"><mixed-citation publication-type="journal"><string-name><surname>Hedden</surname>, <given-names>B.</given-names></string-name> (<year>2023</year>). <article-title>Counterfactual decision theory</article-title>. <source>Mind</source> <volume>132</volume>, <fpage>730</fpage>&#8211;<lpage>761</lpage>.</mixed-citation></ref>
<ref id="B20"><mixed-citation publication-type="journal"><string-name><surname>Horgan</surname>, <given-names>T.</given-names></string-name> (<year>1981</year>). <article-title>Counterfactuals and Newcomb&#8217;s problem</article-title>. <source>The Journal of Philosophy</source> <volume>78</volume>(<issue>6</issue>), <fpage>331</fpage>&#8211;<lpage>356</lpage>.</mixed-citation></ref>
<ref id="B21"><mixed-citation publication-type="book"><string-name><surname>Horgan</surname>, <given-names>T.</given-names></string-name> (<year>2017</year>). <source>Essays on Paradoxes</source>. <publisher-loc>Oxford, England</publisher-loc>: <publisher-name>Oxford University Press USA</publisher-name>.</mixed-citation></ref>
<ref id="B22"><mixed-citation publication-type="book"><string-name><surname>Horwich</surname>, <given-names>P.</given-names></string-name> (<year>1987</year>). <source>Asymmetries in Time: Problems in the Philosophy of Science</source>. <publisher-name>MIT Press</publisher-name>.</mixed-citation></ref>
<ref id="B23"><mixed-citation publication-type="journal"><string-name><surname>Isaacs</surname>, <given-names>Y.</given-names></string-name>, <string-name><given-names>J.</given-names> <surname>Hawthorne</surname></string-name>, and <string-name><given-names>J. Sanford</given-names> <surname>Russell</surname></string-name> (<year>2022</year>). <article-title>Multiple universes and self-locating evidence</article-title>. <source>Philosophical Review</source> <volume>131</volume>(<issue>3</issue>), <fpage>241</fpage>&#8211;<lpage>294</lpage>.</mixed-citation></ref>
<ref id="B24"><mixed-citation publication-type="book"><string-name><surname>Jeffrey</surname>, <given-names>R. C.</given-names></string-name> (<year>1983</year>). <source>The Logic of Decision</source> (<edition>2nd</edition> ed.). <publisher-name>University of Chicago Press</publisher-name>.</mixed-citation></ref>
<ref id="B25"><mixed-citation publication-type="journal"><string-name><surname>Levinstein</surname>, <given-names>B. A.</given-names></string-name> and <string-name><given-names>N.</given-names> <surname>Soares</surname></string-name> (<year>2020</year>). <article-title>Cheating death in damascus</article-title>. <source>The Journal of Philosophy</source> <volume>117</volume>(<issue>5</issue>), <fpage>237</fpage>&#8211;<lpage>266</lpage>.</mixed-citation></ref>
<ref id="B26"><mixed-citation publication-type="journal"><string-name><surname>Lewis</surname>, <given-names>D. K.</given-names></string-name> (<year>1981</year>). <article-title>Why ain&#8217;cha rich?</article-title> <source>No&#251;s</source> <volume>15</volume>(<issue>3</issue>), <fpage>377</fpage>&#8211;<lpage>380</lpage>.</mixed-citation></ref>
<ref id="B27"><mixed-citation publication-type="journal"><string-name><surname>Meacham</surname>, <given-names>C. J.</given-names></string-name> (<year>2010</year>). <article-title>Binding and its consequences</article-title>. <source>Philosophical Studies</source> <volume>149</volume>(<issue>1</issue>), <fpage>49</fpage>&#8211;<lpage>71</lpage>.</mixed-citation></ref>
<ref id="B28"><mixed-citation publication-type="journal"><string-name><surname>Rinard</surname>, <given-names>S.</given-names></string-name> (<year>2015</year>, <month>February</month>). <article-title>A decision theory for imprecise probabilities</article-title>. <source>Philosophers&#8217; Imprint</source> <volume>15</volume>(<issue>7</issue>), <fpage>1</fpage>&#8211;<lpage>16</lpage>.</mixed-citation></ref>
<ref id="B29"><mixed-citation publication-type="journal"><string-name><surname>Soares</surname>, <given-names>N.</given-names></string-name> and <string-name><given-names>B.</given-names> <surname>Fallenstein</surname></string-name> (<year>2015</year>). <article-title>Toward idealized decision theory</article-title>. <source>arXiv Pre-print arXiv: 1507.01986</source>.</mixed-citation></ref>
<ref id="B30"><mixed-citation publication-type="journal"><string-name><surname>Spohn</surname>, <given-names>W.</given-names></string-name> (<year>2012</year>). <article-title>Reversing 30 years of discussion: Why causal decision theorists should one-box</article-title>. <source>Synthese</source> <volume>187</volume>(<issue>1</issue>), <fpage>95</fpage>&#8211;<lpage>122</lpage>.</mixed-citation></ref>
<ref id="B31"><mixed-citation publication-type="journal"><string-name><surname>Titelbaum</surname>, <given-names>M.</given-names></string-name> (<year>2008</year>). <article-title>The relevance of self locating beliefs</article-title>. <source>Philosophical Review</source> <volume>117</volume>(<issue>4</issue>), <fpage>555</fpage>&#8211;<lpage>606</lpage>.</mixed-citation></ref>
<ref id="B32"><mixed-citation publication-type="book"><string-name><surname>Titelbaum</surname>, <given-names>M. G.</given-names></string-name> (<year>2012</year>). <source>Quitting Certainties: A Bayesian Framework Modeling Degrees of Belief</source>. <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="B33"><mixed-citation publication-type="journal"><string-name><surname>Weintraub</surname>, <given-names>R.</given-names></string-name> (<year>2004</year>). <article-title>Sleeping beauty: A simple solution</article-title>. <source>Analysis</source> <volume>64</volume>(<issue>1</issue>), <fpage>8</fpage>&#8211;<lpage>10</lpage>.</mixed-citation></ref>
<ref id="B34"><mixed-citation publication-type="journal"><string-name><surname>Wells</surname>, <given-names>I.</given-names></string-name> (<year>2019</year>). <article-title>Equal opportunity and Newcomb&#8217;s problem</article-title>. <source>Mind</source> <volume>128</volume>(<issue>510</issue>), <fpage>429</fpage>&#8211;<lpage>457</lpage>.</mixed-citation></ref>
<ref id="B35"><mixed-citation publication-type="journal"><string-name><surname>Yudkowsky</surname>, <given-names>E.</given-names></string-name> and <string-name><given-names>N.</given-names> <surname>Soares</surname></string-name> (<year>2017</year>). <article-title>Functional decision theory: A new theory of instrumental rationality</article-title>. <source>arXiv Pre-print arXiv: 1710.05060</source>.</mixed-citation></ref>
</ref-list>
</back>
</article>