<?xml version="1.0" encoding="utf-8"?>
<article xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="JATS-journalpublishing1-mathml3.xsd" dtd-version="1.2" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher">phimp</journal-id>
<journal-title-group>
<journal-title>Philosophers&#x2019; Imprint</journal-title>
</journal-title-group>
<issn pub-type="epub"></issn>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">1300</article-id>
<article-id pub-id-type="doi">10.3998/phimp.1300</article-id>
<article-categories>
</article-categories>
<title-group>
<article-title>Just as Planned: Bayesianism, Externalism, and Plan Coherence</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Medina</surname>
<given-names>Pablo Zendejas</given-names>
</name>
<email>zendejas.pablo@gmail.com</email>
<aff id="aff1">New York University</aff>
</contrib>
</contrib-group>
<pub-date>
<day>31</day>
<month>12</month>
<year>2023</year>
</pub-date>
<volume>23</volume>
<issue>28</issue>
<permissions>
<license>
<license-p>CC BY-NC-ND 4.0</license-p>
<license-p>This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License &#x003C;<ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="uri" xlink:href="http://www.philosophersimprint.org/023028">www.philosophersimprint.org/023028</ext-link>/&#x003E;</license-p>
</license>
</permissions>
<abstract>
<p>Two of the most influential arguments for Bayesian updating (&#x201C;Conditionalization&#x201D;) &#x2014; Hilary Greaves&#x2019; and David Wallace&#x2019;s <italic>Accuracy Argument</italic> and David Lewis&#x2019; <italic>Diachronic Dutch Book Argument</italic> &#x2014; seem to impose a strong and surprising limitation on rational uncertainty: that one can never be rationally uncertain of what one&#x2019;s evidence is. Many philosophers (&#x201C;externalists&#x201D;) reject that claim, and now seem to face a difficult choice: either to endorse the arguments and give up Externalism, or to reject the arguments and lose some of the best justifications of Conditionalization. I argue that the key to resolving this conflict lies in recognizing that both arguments are <italic>plan-based</italic>, in that they argue for Conditionalization by first arguing that one ought to <italic>plan</italic> to conditionalize. With this in view, I argue that the conflict with Externalism only arises if one misconceives the requirement to carry out a plan made at an earlier time. These arguments should therefore not persuade us to reject Externalism. Furthermore, rethinking the nature of this requirement allows us to give two new arguments for Conditionalization that don&#x2019;t rule out rational uncertainty about one&#x2019;s evidence and that can thus serve as common ground in the debate between externalists and their opponents.</p>
</abstract>
<kwd-group>
<kwd>Conditionalization</kwd>
<kwd>Externalism</kwd>
<kwd>Higher-Order Uncertainty</kwd>
</kwd-group>
<counts>
<fig-count count="1"/>
</counts>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1.</label><title>Introduction</title>
<p>The aim of this paper is to reconcile two important ideas in epistemology: the Bayesian view that rational beliefs evolve by conditionalization on evidence, and the externalist view that one can rationally be uncertain about what one&#x2019;s evidence is, and thus also about what one should rationally believe. More precisely, the first view is:</p>
<disp-quote>
<p><sc>conditionalization</sc> Upon learning that <italic>E</italic>, a rational agent will conditionalize their prior credences on <italic>E</italic>, so that, for any proposition <italic>p</italic>,
<disp-formula id="FD1">
<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="block" id="M1">
<mml:mrow><mml:msub><mml:mi>C</mml:mi><mml:mi>E</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x2223;</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:mtext>df</mml:mtext></mml:mrow></mml:msub><mml:mfrac><mml:mrow><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x2227;</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac><mml:mo>,</mml:mo></mml:mrow>
</mml:math>
</disp-formula>
if defined, where <italic>C</italic> and <italic>C</italic><sub><italic>E</italic></sub> are the agent&#x2019;s credence functions before and after learning that <italic>E</italic>, respectively.</p>
</disp-quote>
<p>This claim is widely assumed and applied not only in philosophy, but in many fields that model belief probabilistically, such as statistics, economics, and computer science. There are two parts to this principle. First, it says that one&#x2019;s beliefs should evolve by conditionalizing on <italic>some</italic> proposition. To conditionalize on a proposition <italic>E</italic> is to revise one&#x2019;s beliefs so that one&#x2019;s posterior credence in any claim <italic>p</italic> is directly proportional to one&#x2019;s prior credence in the conjunction of <italic>p</italic> and <italic>E</italic>, renormalized so that <italic>E</italic> is now assigned a credence of one. The second part of <sc>conditionalization</sc> says that the proposition on which one should conditionalize is one&#x2019;s <italic>evidence</italic>. This second part is not superfluous: &#x201C;evidence&#x201D; is not just what we call whatever the agent happens to conditionalize on. Rather, as I&#x2019;ll understand it, what evidence the agent has &#x2014; what they have <italic>learnt</italic> &#x2014; is fixed outside of the Bayesian model of which <sc>conditionalization</sc> forms part.<sup><xref rid="fn1" ref-type="fn">1</xref></sup> In other words, <sc>conditionalization</sc> says nothing about the conditions for learning different propositions, but is only an answer to the question: &#x201C;How should my beliefs change, <italic>given</italic> the evidence I have learnt?&#x201D;</p>
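<p>As an illustrative sketch of the renormalization just described (the worlds, propositions, and numbers here are my own toy example, not part of the formal framework), conditionalization over a finite set of worlds can be computed as follows:</p>

```python
# A minimal sketch of conditionalization over a finite set of worlds.
# Propositions are sets of worlds; credences are given world by world.

def conditionalize(credence, evidence):
    """Return the posterior C(. | E): renormalize the prior over the
    worlds in E, so that E gets credence one; worlds outside E get zero."""
    total = sum(p for w, p in credence.items() if w in evidence)  # C(E)
    if total == 0:
        raise ValueError("C(E) = 0: the conditional credence is undefined")
    return {w: (p / total if w in evidence else 0.0)
            for w, p in credence.items()}

# Prior over three worlds, and evidence E = {w1, w2}.
C = {"w1": 0.5, "w2": 0.3, "w3": 0.2}
C_E = conditionalize(C, {"w1", "w2"})
# For the proposition p = {w1}:
# C_E(p) = C(p and E) / C(E) = 0.5 / 0.8 = 0.625
```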
<p>Two of the most influential arguments for <sc>conditionalization</sc>, which I will call the <italic>standard arguments</italic>, are <xref rid="r22" ref-type="bibr">David Lewis&#x2019; (1999)</xref> <italic>Diachronic Dutch Book Argument</italic> and <xref rid="r16" ref-type="bibr">Hilary Greaves &#x0026; David Wallace&#x2019;s (2006)</xref> <italic>Accuracy Argument</italic>. Both aim to establish <sc>conditionalization</sc> by first showing that one should <italic>plan</italic> to conditionalize. My aim is to reconcile this style of argument with:</p>
<disp-quote>
<p><sc>externalism</sc> (informal) It is possible for a rational agent to learn that <italic>E</italic>, without thereby becoming certain that they learnt that <italic>E</italic>.<sup><xref rid="fn2" ref-type="fn">2</xref></sup></p>
</disp-quote>
<p>Epistemic externalists typically think that the factors that ground the rationality of our beliefs can sometimes fail to be accessible to us. Since, as I&#x2019;ll assume, one&#x2019;s evidence determines what one can rationally believe, rational uncertainty about the former is one of the main sources of rational uncertainty about the latter.</p>
<p>To be clear: <sc>conditionalization</sc> itself is <italic>consistent</italic> with <sc>externalism</sc>. A conditionalizer can be uncertain about what they have learnt if their evidence is <italic>opaque</italic> in the sense that it is not equivalent to the claim that it is their evidence. Indeed, externalists typically hold their view because they believe that it is possible to have opaque evidence and that one should conditionalize on it (more on this in &#x00A7;<xref rid="s5" ref-type="sec">3.2</xref>). Rather, the conflict arises because the standard arguments are restricted to situations where one cannot learn opaque evidence. Moreover, <xref rid="r17" ref-type="bibr">Hild (1998)</xref> and <xref rid="r33" ref-type="bibr">Schoenfield (2017)</xref> have argued that if we generalize these arguments to situations where opaque evidence is possible, they no longer support <sc>conditionalization</sc>, but rather an alternative update rule, called <sc>auto-epistemic (a-e) conditionalization</sc>. This rule says that if <italic>E</italic> is one&#x2019;s evidence, one should conditionalize not on <italic>E</italic>, but on the claim that <italic>E</italic> is one&#x2019;s evidence, and it is thus, as I&#x2019;ll explain, inconsistent with <sc>externalism</sc>.</p>
<p>I will argue that the key to resolving this conflict lies in the widely recognized fact that both arguments are <italic>plan-based</italic> in that they try to establish <sc>conditionalization</sc> by showing that a rational agent will <italic>plan</italic> to conditionalize on future evidence. This claim only entails <sc>conditionalization</sc> if combined with a <italic>plan coherence</italic> principle, which says that a rational agent will implement that plan after learning the evidence. However, there are two natural ways of interpreting the claim that a rational agent will implement their antecedent plan, which differ in the kinds of plan that they require the agent to implement. These are equivalent when the agent cannot learn opaque evidence, but come apart in other cases. While recent literature has generalized the two arguments by implicitly adopting one of these two principles, I argue that it should be rejected in favour of the other. This allows the externalist to reject a premise in the recent arguments for <sc>a-e conditionalization</sc> and thus against <sc>externalism</sc>.</p>
<p>Moreover, as I show, my preferred plan coherence principle forms the basis for two new arguments for <sc>conditionalization</sc>. The first is accuracy-theoretic, and builds on Greaves &#x0026; Wallace&#x2019;s argument. The second argument involves practical planning, rather than planning for what to believe. As such, the second argument is also of interest independently of the question of <sc>externalism</sc>, since it demonstrates a way of arguing for <sc>conditionalization</sc> from rather minimal commitments about practical rationality and its relationship to epistemic rationality. Most importantly, however, both arguments show that one should conditionalize regardless of whether one&#x2019;s evidence is opaque or not, making them consistent with <sc>externalism</sc> without presupposing it.</p>
</sec>
<sec id="s2">
<label>2.</label><title>Framework</title>
<p>We are interested in a <italic>rational</italic> agent who undergoes a learning experience, and in the relationship between their beliefs before and after that experience. I&#x2019;ll call such a situation an <italic>experiment</italic>. We&#x2019;ll focus on the <italic>strongest</italic> proposition that the agent learns in this experiment &#x2014; their <italic>total incremental evidence</italic>. For simplicity, I will refer to it as the proposition that they have <italic>learnt</italic>, or as their <italic>evidence</italic>, but keep in mind that this is distinct from their total <italic>background</italic> evidence, which includes what the agent already knows going into the experiment. We&#x2019;ll use &#x201C;<italic>E</italic><sub>1</sub>, <italic>E</italic><sub>2</sub>, &#x2026;&#x201D; to stand for the different propositions that the agent may learn in the experiment. For any <italic>E</italic>, moreover, we&#x2019;ll let <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I1"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi></mml:mrow></mml:math></inline-formula><italic>E</italic> stand for the proposition that says that the agent learns that <italic>E</italic>.<sup><xref rid="fn3" ref-type="fn">3</xref></sup></p>
<p>Our agent begins the experiment with a <italic>rational</italic> credence function <italic>C</italic> (their &#x201C;prior&#x201D;), which is defined over the set of propositions. After undergoing the experiment, they then adopt a new rational credence function <italic>C</italic><sub><italic>E</italic></sub> (their &#x201C;posterior&#x201D;) where <italic>E</italic> is their evidence. Formally, propositions are subsets of a set <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I2"><mml:mrow><mml:mi mathvariant='script'>W</mml:mi></mml:mrow></mml:math></inline-formula> of doxastic possibilities (&#x201C;worlds&#x201D;), which we assume is finite for mathematical simplicity. Finally, we&#x2019;ll make two other simplifying, but dispensable, assumptions: that the agent assigns positive credence to all worlds in <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I3"><mml:mrow><mml:mi mathvariant='script'>W</mml:mi></mml:mrow></mml:math></inline-formula> before learning, and that they cannot learn an inconsistent proposition. This ensures that <italic>C</italic>(<italic>&#x00B7;|E</italic><sub><italic>i</italic></sub>) is defined for all <italic>E<sub>i</sub></italic>.<sup><xref rid="fn4" ref-type="fn">4</xref></sup></p>
</sec>
<sec id="s3">
<label>3.</label><title>Externalism and Evidential Opacity</title>
<sec id="s4">
<label>3.1</label><title>Externalism</title>
<p>The externalist about rationality thinks that rational uncertainty about what&#x2019;s rational for one to believe is as possible as rational uncertainty about everyday empirical matters, such as the weather or the outcome of a game. This paper focuses on a specific source of uncertainty about rationality &#x2014; uncertainty about what one&#x2019;s evidence is. Thus restricted, the externalist thinks that there are possible experiments where one could learn evidence that would leave one rationally uncertain of what one&#x2019;s evidence is. More precisely:</p>
<disp-quote>
<p><sc>externalism</sc> (formal) There are possible experiments where the agent can learn some evidence <italic>E</italic><sub><italic>i</italic></sub> such that <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I4"><mml:mrow><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:msub><mml:mi>E</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x003C;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:math></inline-formula>.<sup><xref rid="fn5" ref-type="fn">5</xref></sup></p>
</disp-quote>
<p>Our question is not whether <sc>externalism</sc> is true, but whether the standard Bayesian strategies for justifying update rules can be reconciled with it.</p>
</sec>
<sec id="s5">
<label>3.2</label><title>Evidential Opacity</title>
<p>As is common in Bayesian epistemology, we are treating the agent&#x2019;s evidence as exogenous to the model. We can give it a functional characterization: their evidence is the strongest proposition that their experiences in the experiment enable them to take for granted in their reasoning. But we have not said, and will not say, anything about what it takes for a proposition to have this status. However, externalists typically hold their view in part because they deny a strong, but natural, assumption about evidence. To state it, we&#x2019;ll need the following definition:</p>
<disp-quote>
<p><italic>Transparent/Opaque Evidence</italic> (definition): <italic>E</italic> is <italic>transparent</italic> if the agent is certain in advance that they will learn that <italic>E</italic> just in case <italic>E</italic> is true, i.e. if <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I5"><mml:mrow><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>E</mml:mi><mml:mo>&#x2194;</mml:mo><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:math></inline-formula>.<sup><xref rid="fn6" ref-type="fn">6</xref></sup> If <italic>E</italic> is not transparent, we say that it is <italic>opaque</italic>.</p>
</disp-quote>
<p>In practice, when discussing opaque evidence, we&#x2019;ll consider only cases where the agent is uncertain whether they will learn some proposition if it is true (i.e. <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I6"><mml:mrow><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>E</mml:mi><mml:mo>&#x2192;</mml:mo><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x003C;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:math></inline-formula>), not cases where the agent is uncertain whether some proposition is true if they will learn it (i.e. <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I7"><mml:mrow><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi><mml:mo>&#x2192;</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x003C;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:math></inline-formula>). In this paper, I remain neutral on the possibility of the latter kind of case, and nothing that I say precludes its possibility.</p>
<p>Now, philosophers who accept <sc>externalism</sc> typically do so because they deny that evidence always has to be transparent. That is, they typically deny:</p>
<disp-quote>
<p><sc>evidential transparency</sc> In every possible experiment, every potential body of evidence is transparent.</p>
</disp-quote>
<p>Intuitively, there is a direct connection between <sc>externalism</sc> and <sc>evidential transparency</sc>: if you can only ever learn transparent evidence, then your evidence will always tell you what your evidence is, so that it would seem irrational to be uncertain of what it is. If, on the other hand, you can have opaque evidence of the kind that leaves open what your evidence is, the right response would seem to be uncertainty about your evidence.</p>
<p>To better understand <sc>evidential transparency</sc>, and why one might deny it, consider one of the externalist&#x2019;s favourite cases &#x2014; a sceptical scenario:<sup><xref rid="fn7" ref-type="fn">7</xref></sup></p>
<disp-quote>
<p><italic>Here&#x2019;s a Hand:</italic> You&#x2019;re about to open your eyes, and know that you will either have an experience as of a hand (<italic>EXP</italic>) or not. You know that, if you don&#x2019;t have the hand-experience, then you don&#x2019;t have a hand. However, if you do have the hand-experience, you may in fact have a hand (<italic>HAND</italic>), or it could be that you&#x2019;re being deceived by a malicious demon, and there is no hand (<italic>&#x00AC;HAND</italic>).</p>
</disp-quote>
<p>According to the standard externalist treatment of sceptical scenarios, your evidential situation is as follows: if you veridically experience the hand &#x2014; call this the <italic>Good Case</italic> &#x2014; you will learn that you have a hand, i.e. that you&#x2019;re in the Good Case. On the other hand, if you&#x2019;re deceived by the demon &#x2014; call this the <italic>Bad Case</italic> &#x2014; you will not learn that you have a hand, but merely that you have an experience as of a hand (<xref rid="r36" ref-type="bibr">Williamson, 2000</xref>, ch. 8). What evidence you have is thus determined by facts that are external to you. An externalist may think this for different reasons, but in Williamson&#x2019;s case, this is because he thinks that your evidence is what you come to <italic>know</italic> by looking, which itself depends on whether there is in fact a hand in front of you.</p>
<p>The externalist description of the case is represented in <xref rid="f01" ref-type="fig"><bold>Figure 1</bold></xref>. Each circle represents a world that is possible for all you know before looking. The arrows represent the possible evidential facts after looking &#x2014; an arrow goes from one world to another iff your evidence in the former world (after looking) doesn&#x2019;t rule out your being in the latter world. Thus, according to the externalist, in the Bad Case your evidence only rules out the possibility that you didn&#x2019;t have a hand-experience (<italic>&#x00AC;EXP</italic>), whereas in the Good Case, your evidence <italic>does</italic> rule out the possibility that your experience was misleading as well (<italic>EXP &#x2227; &#x00AC;HAND</italic>).</p>
<fig id="f01" position="float">
<label>Figure 1:</label>
<caption>
<p>The Externalist Description of <italic>Here&#x2019;s a Hand</italic>.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="phimp-1300-f01.jpg"/>
</fig>
<p>The point of the example is that if this really is the evidential structure of the experiment, then your evidence in the Bad Case is opaque. That is because your evidence in the Bad Case (<italic>EXP</italic>) doesn&#x2019;t rule out the possibility that you&#x2019;re in the Good Case, where your evidence is different: that you had the experience <italic>and</italic> that you have a hand. So your evidence in the Bad Case is not equivalent to the claim that it is your evidence (<inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I8"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi><mml:mi>X</mml:mi><mml:mi>P</mml:mi></mml:mrow></mml:math></inline-formula>), since it doesn&#x2019;t entail it. If the externalist is right about sceptical scenarios, <sc>evidential transparency</sc> is false.</p>
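<p>The evidential structure of <italic>Here&#x2019;s a Hand</italic>, and the opacity check just made, can be sketched in code (the world names and the <monospace>evidence</monospace> mapping are my own rendering of Figure 1, not the paper&#x2019;s notation):</p>

```python
# The externalist structure of "Here's a Hand": three worlds, with
# evidence(w) the strongest proposition the agent learns at world w.
GOOD, BAD, NOHAND = "good", "bad", "no-hand"
WORLDS = {GOOD, BAD, NOHAND}

evidence = {
    GOOD:   frozenset({GOOD}),        # learn HAND, i.e. that you're in the Good Case
    BAD:    frozenset({GOOD, BAD}),   # learn only EXP
    NOHAND: frozenset({NOHAND}),      # learn not-EXP
}

def transparent(E):
    """E is transparent iff at every world, E is true exactly when E is
    what the agent learns, i.e. C(E <-> LE) = 1."""
    return all((w in E) == (evidence[w] == E) for w in WORLDS)

EXP = frozenset({GOOD, BAD})
# At the Good Case, EXP is true but the agent learns something stronger,
# so the biconditional fails: the Bad Case's evidence is opaque.
print(transparent(EXP))                  # -> False
print(transparent(frozenset({NOHAND})))  # -> True
```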
</sec>
</sec>
<sec id="s6">
<label>4.</label><title>Plan-Based Arguments for Conditionalization</title>
<p><sc>conditionalization</sc> is an <italic>update rule</italic> &#x2014; a principle of rationality that says how one&#x2019;s beliefs ought to change upon learning new evidence. A <italic>plan-based argument</italic> for an update rule proceeds in two steps: First, it is shown that before learning, the plan to follow that rule is in some sense the best plan. Second, the argument assumes that rational agents are <italic>plan coherent</italic> &#x2014; they carry out plans made at an earlier time. Thus, after learning, a rational agent will update their beliefs according to the rule in question. The standard arguments for <sc>conditionalization</sc> are really arguments for the first premise of this argument &#x2014; which is usually called <sc>plan conditionalization</sc>. However, I will emphasize the second premise, as I believe that it is key to resolving the issue of <sc>externalism</sc> and opaque evidence.<sup><xref rid="fn8" ref-type="fn">8</xref></sup></p>
<p>Now, before I explain the arguments in greater detail, it is important to note that, as they were originally presented, they are restricted to experiments with only transparent evidence. This is not necessarily because their authors <italic>assumed</italic> <sc>evidential transparency</sc>. Instead, we can think of the arguments as applying only to a restricted class of experiments, leaving open how to update in other experiments, if such experiments are possible.<sup><xref rid="fn9" ref-type="fn">9</xref></sup></p>
<sec id="s7">
<label>4.1</label><title>Planning</title>
<sec id="s8">
<label>4.1.1</label><title>Conditional and Unconditional Plans</title>
<p>A plan-based argument, as I have characterized it, makes claims about <italic>doxastic plans</italic>: plans about what to believe.<sup><xref rid="fn10" ref-type="fn">10</xref></sup> But we&#x2019;ll find it useful, later, to have a more general definition of plans that also includes plans for action. Thus, an <italic>act</italic> can be either a <italic>doxastic act</italic> &#x2014; which is a credence function &#x2014; or a <italic>practical act</italic>.<sup><xref rid="fn11" ref-type="fn">11</xref></sup></p>
<p>We can then define two kinds of plans: first, a <italic>conditional plan</italic> is a pair of a condition, which is a proposition, and an act (whether doxastic or practical). So for example, the plan to bring an umbrella if it rains is <italic>&#x27E8;Rain</italic>, <italic>bring-umbrella&#x27E9;</italic>, where <italic>Rain</italic> is the proposition that it rains and <italic>bring-umbrella</italic> is the act of bringing an umbrella. Likewise, the plan to conditionalize on <italic>E</italic> if one learns that <italic>E</italic> is <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I9"><mml:mrow><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x22C5;</mml:mo><mml:mo>&#x2223;</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>. Second, an <italic>unconditional plan</italic> is a set of conditional plans whose conditions partition logical space (<inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I10"><mml:mrow><mml:mi mathvariant='script'>W</mml:mi></mml:mrow></mml:math></inline-formula>). So, for example, {&#x27E8;<italic>Rain, bring-umbrella</italic>&#x27E9;, &#x27E8;&#x00AC;<italic>Rain, bring-sunglasses</italic>&#x27E9;} is an unconditional plan, which involves bringing an umbrella if it rains but sunglasses if it doesn&#x2019;t. 
In the doxastic case, <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I11"><mml:mrow><mml:mtext>&#x00A0;PCond&#x00A0;</mml:mtext><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mi>C</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x22C5;</mml:mo><mml:mo>&#x2223;</mml:mo><mml:msub><mml:mi>E</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mi>C</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x22C5;</mml:mo><mml:mo>&#x2223;</mml:mo><mml:msub><mml:mi>E</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> is the plan to conditionalize on one&#x2019;s evidence no matter what one learns, i.e. to conditionalize on <italic>E</italic><sub>1</sub> if one learns that <italic>E</italic><sub>1</sub>, to conditionalize on <italic>E</italic><sub>2</sub> if one learns that <italic>E</italic><sub>2</sub>, <italic>etc.</italic></p>
</sec>
<sec id="s9">
<label>4.1.2</label><title>Plan Evaluation</title>
<p>We&#x2019;ll assume that the agent has preferences between acts, conditional plans, and unconditional plans, and will say that a plan is <italic>best</italic> in a class if it is strictly preferred to all others in that class. As usual, we&#x2019;ll use &#x201C;<italic>&#x2AB0;</italic>&#x201D; to describe the agent&#x2019;s rational (weak) preference relation before learning, as well as &#x201C;<italic>&#x227B;</italic>&#x201D; for strict preference, and &#x201C;&#x007E;&#x201D; for indifference.<sup><xref rid="fn12" ref-type="fn">12</xref></sup></p>
<p>When is one unconditional plan better than another? The arguments we&#x2019;ll look at answer this question differently, but a common assumption, which most of the arguments below will use, is that one should evaluate unconditional plans by their <italic>expected value</italic>. To calculate the expected value of a plan, one takes the weighted sum of the value of the plan at every world, where the value of the plan at a world is just the value, at that world, of the act recommended by the plan at that world. The weight given to a world is just its credence.<sup><xref rid="fn13" ref-type="fn">13</xref></sup> The value function will for the moment be left undefined, since its interpretation differs between arguments.</p>
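<p>The expected-value calculation can be made concrete with a small sketch, using the umbrella example from &#x00A7;4.1.1 (the credences and the toy value function are illustrative assumptions of mine, not drawn from any of the arguments below):</p>

```python
# Expected value of an unconditional plan: a credence-weighted sum,
# over worlds, of the value at w of the act the plan recommends at w.
# An unconditional plan is a list of (condition, act) pairs whose
# conditions partition the set of worlds.

def expected_value(plan, credence, value):
    ev = 0.0
    for w, cr in credence.items():
        # the (unique) conditional plan whose condition w satisfies
        act = next(a for cond, a in plan if w in cond)
        ev += cr * value(act, w)
    return ev

C = {"rain": 0.3, "shine": 0.7}

# A toy value function: matching one's act to the weather is worth 1.
def value(act, w):
    return 1.0 if (act, w) in {("umbrella", "rain"),
                               ("sunglasses", "shine")} else 0.0

# The plan tailored to each condition beats the one-size-fits-all plan:
tailored = [({"rain"}, "umbrella"), ({"shine"}, "sunglasses")]
always_umbrella = [({"rain", "shine"}, "umbrella")]
print(expected_value(tailored, C, value))         # -> 1.0
print(expected_value(always_umbrella, C, value))  # -> 0.3
```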
</sec>
</sec>
<sec id="s10">
<label>4.2</label><title>Which Plans Trigger Rational Requirements?</title>
<p>We want to argue for <sc>conditionalization</sc> by showing that the plan to conditionalize is the best plan, and that, since one ought to &#x201C;stick to the plan&#x201D; after learning, one should conditionalize. Any such argument faces a problem, however: the plan to conditionalize is not best if we compare it to <italic>all</italic> other plans. The standard example is the <italic>omniscience plan</italic>, which says, for every world <italic>w</italic>, to assign credence one to all truths and credence zero to all falsehoods at <italic>w</italic> (<xref rid="r16" ref-type="bibr">Greaves &#x0026; Wallace, 2006</xref>, p. 612). On natural ways of evaluating plans, this is the best plan there is, since it will make the agent&#x2019;s beliefs both as accurate (as close to the truth) as can be, and as good a guide to action as one could ask for, since one can never lose by betting on the truth and only on the truth. And yet there is no rational requirement to be omniscient. So even though the omniscience plan is the best plan, there is no requirement to implement it after learning. As I&#x2019;ll put it, one is not required to be <italic>plan coherent</italic> with respect to it.</p>
<p>It should be clear that the problem with the omniscience plan is that it doesn&#x2019;t have the right <italic>conditions</italic>: by assigning different credence functions to every world, it makes distinctions that are too fine-grained for there to be a requirement to implement it after learning. To exclude such plans from consideration, we&#x2019;ll say that they are not <italic>admissible</italic> in the following sense:</p>
<disp-quote>
<p><italic>Admissibility/Plan Coherence</italic> (definition): A plan is <italic>admissible</italic> if its condition is such that, after learning, a rational agent will implement the antecedently best actionable plan with that condition. We&#x2019;ll say that the agent is required to be <italic>plan coherent</italic> with respect to the admissible plans.</p>
</disp-quote>
<p>Since we&#x2019;ll want to conduct our discussion both in terms of conditional and unconditional plans, I&#x2019;ll think of admissibility as a property of conditional plans in the first instance, and will say that an unconditional plan is admissible iff all of its elements are admissible.</p>
<p>Which plans are admissible? There are two natural answers to this question. On the first, one is not required to implement a plan when one has not learned its condition. In particular, if I have not learnt that <italic>w</italic> is actual, I am not required to implement the best plan for <italic>w</italic>, even though <italic>w</italic> is in fact actual. I am only required to implement the plan for the condition that I <italic>have</italic> learnt, i.e. the best plan for my evidence <italic>E</italic>. This gives us:</p>
<disp-quote>
<p><sc>epistemic admissibility</sc> If the agent learns that <italic>E</italic>, the admissible plans are those with <italic>E</italic> as their condition, so that after learning that <italic>E</italic>, a rational agent will implement the antecedently best actionable plan for what to do or believe if <italic>E</italic> is true.<sup><xref rid="fn14" ref-type="fn">14</xref></sup></p>
</disp-quote>
<p>The second answer is that we can exclude the omniscience plan because it is not a plan for what to believe given what one learns. That is, its conditions aren&#x2019;t propositions about what one has learnt (<inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I12"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula> for some <italic>E</italic>), but cut across such propositions. In other words, there may be two worlds <italic>w</italic><sub>1</sub> and <italic>w</italic><sub>2</sub> such that the agent has learnt the same thing at those worlds, but where the omniscience plan recommends different beliefs. We can exclude such plans by assuming:</p>
<disp-quote>
<p><sc>auto-epistemic (a-e) admissibility</sc> If the agent learns that <italic>E</italic>, the admissible plans are those with <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I13"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula> as their condition, so that after learning that <italic>E</italic>, a rational agent will implement the antecedently best actionable plan for what to do or believe if <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I14"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula> is true (i.e. if they learn that <italic>E</italic>).</p>
</disp-quote>
<p>The distinction between these two principles is easily overlooked, because they are equivalent in experiments with only transparent evidence. If any <italic>E</italic> that the agent may learn is transparent, then any such <italic>E</italic> is equivalent to <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I15"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula>. Thus, the best plan for what to believe given <italic>E</italic> is just the best plan for what to believe given <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I16"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula>.</p>
<p>The standard arguments focus on plans with <italic>E</italic><sub>1</sub>, <italic>E</italic><sub>2</sub>, &#x2026; (or equivalently, <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I17"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I18"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula>, &#x2026;) as their conditions, and prove that the plan to conditionalize is best among them. They can thus be understood as adopting one of these two accounts of admissibility, though this is never made explicit, because the attention is on identifying the best plan, not on what follows from its being best. That is, the standard arguments show:</p>
<disp-quote>
<p><sc>plan conditionalization</sc> In any experiment with only transparent evidence, before learning, the plan to conditionalize on one&#x2019;s evidence is the best unconditional plan with <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I19"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I20"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula>, &#x2026; (or equivalently, <italic>E</italic><sub>1</sub>, <italic>E</italic><sub>2</sub>, &#x2026;) as its conditions. That is, PCond &#x227B; <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I21"><mml:mrow><mml:mi mathvariant='script'>P</mml:mi></mml:mrow></mml:math></inline-formula> for any other doxastic unconditional plan <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I22"><mml:mrow><mml:mi mathvariant='script'>P</mml:mi></mml:mrow></mml:math></inline-formula> with the same conditions.</p>
</disp-quote>
<p>This premise doesn&#x2019;t entail <sc>conditionalization</sc> on its own. Just because the plan to conditionalize on one&#x2019;s evidence was best <italic>before learning</italic>, it doesn&#x2019;t follow that later, after acquiring more evidence, one should implement it. We need something to bridge the gap between prior plans and posterior implementation. That is the job of the principles of admissibility we have just reviewed. Helping ourselves to one of these two principles (it doesn&#x2019;t matter which), we can now reason as follows: the best <italic>conditional</italic> plan for what to believe given <italic>E</italic> or <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I23"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula> must be to conditionalize on <italic>E</italic>, since, as <sc>plan conditionalization</sc> says, the best <italic>unconditional plan</italic> with <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I24"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I25"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula>, &#x2026; (or equivalently, <italic>E</italic><sub>1</sub>, <italic>E</italic><sub>2</sub>, &#x2026;) as its conditions says to conditionalize on <italic>E</italic> if the agent learns that <italic>E</italic> (or equivalently, if <italic>E</italic> is true).<sup><xref rid="fn15" ref-type="fn">15</xref></sup> Now applying either <sc>epistemic admissibility</sc> or <sc>a-e admissibility</sc>, it follows that upon learning that <italic>E</italic>, a rational agent will conditionalize on <italic>E</italic>, which is just what <sc>conditionalization</sc> says.</p>
</sec>
<sec id="s11">
<label>4.3</label><title>Two Arguments for Plan Conditionalization</title>
<p>Having seen how to argue from <sc>plan conditionalization</sc> to <sc>conditionalization</sc>, we will now look at two arguments for the former principle.</p>
<sec id="s12">
<label>4.3.1</label><title>The Accuracy Argument</title>
<p>First, we have the <italic>Accuracy Argument</italic> by <xref rid="r16" ref-type="bibr">Hilary Greaves &#x0026; David Wallace (2006)</xref>,<sup><xref rid="fn16" ref-type="fn">16</xref></sup> which forms part of the <italic>Accuracy-First Epistemology</italic> research program.<sup><xref rid="fn17" ref-type="fn">17</xref></sup> The fundamental ideas of this program are, first, that the sole epistemic good is the <italic>accuracy</italic> of belief states &#x2014; intuitively, their closeness to the truth &#x2014; and second that rational requirements on belief are justified insofar as conforming to them is conducive to this good. Whether a requirement is conducive to accuracy can be determined using decision-theoretic principles, in our case by comparing the expected accuracy of different plans.</p>
<p>How to measure accuracy is a difficult question, but Greaves &#x0026; Wallace only make the standard assumption of <italic>Strict Propriety</italic>, which says that, by its own lights, every probability function has higher expected accuracy than any other credence function.<sup><xref rid="fn18" ref-type="fn">18</xref></sup> Using this assumption, Greaves &#x0026; Wallace prove:</p>
<disp-quote>
<p><sc>condi-max</sc> In experiments with only transparent evidence, the plan that uniquely maximizes expected accuracy among plans with <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I26"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I27"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula>, &#x2026; as their conditions is the plan to conditionalize on one&#x2019;s evidence, i.e. PCond.</p>
</disp-quote>
<p>Combining this with the claim that rational preferences between unconditional plans are determined by their expected accuracy, <sc>plan conditionalization</sc> follows.</p>
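<p>To make the expected-accuracy comparison concrete, here is a minimal numerical sketch (mine, not Greaves &#x0026; Wallace&#x2019;s): a four-world experiment with a uniform prior, scored with the Brier rule, which is strictly proper. The plan to conditionalize has strictly lower expected inaccuracy than a hypothetical rival plan that simply retains the prior.</p>

```python
# Toy experiment: four equiprobable worlds and the evidence partition {E1, E2}.
# Compare the prior-expected Brier inaccuracy of the plan to conditionalize
# (PCond) with a rival plan that keeps the prior credences unchanged.

worlds = ["w1", "w2", "w3", "w4"]
prior = {w: 0.25 for w in worlds}
E1, E2 = {"w1", "w2"}, {"w3", "w4"}

def conditionalize(cred, e):
    """Conditionalize a credence function on a proposition (a set of worlds)."""
    total = sum(cred[w] for w in e)
    return {w: (cred[w] / total if w in e else 0.0) for w in cred}

def brier_inaccuracy(cred, actual):
    """Squared distance of the credences from the truth at world `actual`."""
    return sum((cred[w] - (1.0 if w == actual else 0.0)) ** 2 for w in cred)

def expected_inaccuracy(plan):
    """Prior expectation of the inaccuracy of the plan's output at each world."""
    return sum(prior[w] * brier_inaccuracy(plan(w), w) for w in worlds)

def evidence_at(w):
    return E1 if w in E1 else E2  # what is learnt at each world (transparent)

pcond = lambda w: conditionalize(prior, evidence_at(w))  # plan to conditionalize
rival = lambda w: dict(prior)                            # plan to stay put

print(expected_inaccuracy(pcond), expected_inaccuracy(rival))  # prints: 0.5 0.75
```

Any strictly proper score would deliver the same verdict; the Brier score and the uniform prior are merely convenient choices for the sketch.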
</sec>
<sec id="s13">
<label>4.3.2</label><title>The Diachronic Dutch Book Argument</title>
<p>Our second argument is the <italic>Diachronic Dutch Book Argument</italic> (<italic>DDBA</italic>) for <sc>plan conditionalization</sc> (<xref rid="r22" ref-type="bibr">Lewis, 1999</xref>; <xref rid="r35" ref-type="bibr">Teller, 1973</xref>). A dutch book argument aims to establish an epistemic norm by showing that someone who doesn&#x2019;t conform to it is disposed to accept every bet in a collection that jointly risks the agent&#x2019;s money without any corresponding chance of gain (a &#x201C;dutch book&#x201D;). This is thought to reveal that an agent who violates such a norm has incoherent, and therefore irrational, commitments. In the planning framework, this thought amounts to:</p>
<disp-quote>
<p><sc>non-exploitability</sc> The best plan with <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I28"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I29"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula>, &#x2026; as its conditions is one which doesn&#x2019;t lead the agent to accept a dutch book.</p>
</disp-quote>
<p>As it turns out, there is a unique such plan, namely the plan to conditionalize on one&#x2019;s evidence, as the following result establishes:</p>
<disp-quote>
<p><sc>(converse) dutch book theorem for conditionalization</sc> In experiments with only transparent evidence, if a rational agent follows a doxastic plan with <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I30"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I31"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula>, &#x2026; as its conditions, they are disposed to accept a dutch book over time if and only if they don&#x2019;t follow the plan to conditionalize.</p>
</disp-quote>
<p>From these two premises, <sc>plan conditionalization</sc> follows: the best admissible plan is one which avoids dutch-bookability, and there is exactly one such plan, namely the plan to conditionalize on one&#x2019;s evidence.<sup><xref rid="fn19" ref-type="fn">19</xref></sup></p>
</sec>
</sec>
</sec>
<sec id="s14">
<label>5.</label><title>A Dilemma for the Externalist</title>
<p>As noted, the standard arguments for <sc>conditionalization</sc> are restricted to experiments where all potential bodies of evidence are transparent. If we endorse <sc>evidential transparency</sc>, no other experiments are possible, so the arguments establish that one should always conditionalize. However, on this assumption, it would follow that <sc>externalism</sc> is false. If one conditionalizes on a proposition, one becomes certain of it and of all equivalent propositions. So as long as our agent conditionalizes on their evidence, as our two arguments say they should if we assume <sc>evidential transparency</sc>, they will become certain of what it is that they have learnt, since transparent evidence by definition is equivalent to the claim that it is their evidence.</p>
<p>What if, on the other hand, we deny <sc>evidential transparency</sc>? Then <sc>conditionalization</sc> is <italic>consistent</italic> with <sc>externalism</sc>: conditionalizing on opaque evidence can leave one uncertain of what one&#x2019;s evidence is. For example, if in <italic>Here&#x2019;s a Hand</italic> you are in the Bad Case, you learn <italic>EXP</italic>, which is true in both the Good and the Bad Cases. Thus, conditionalizing on one&#x2019;s evidence will leave you uncertain of whether you are in the Good or the Bad Case. And since your evidence is different in those two cases, you will also be uncertain of what your evidence is.</p>
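<p>A small sketch can make this concrete (the 0.8/0.2 prior is illustrative, not from the text): conditionalizing on the opaque proposition <italic>EXP</italic> leaves positive credence in both cases, and hence uncertainty about whether one&#x2019;s evidence is <italic>HAND</italic> or <italic>EXP</italic>.</p>

```python
# Two-world model of Here's a Hand: in the Good Case one learns HAND,
# in the Bad Case one learns only EXP, which is true in both cases.
# The prior below is an illustrative stand-in.

prior = {"good": 0.8, "bad": 0.2}
HAND = {"good"}        # "I have a hand": true only in the Good Case
EXP = {"good", "bad"}  # "I had an experience as of a hand": true in both

def conditionalize(cred, e):
    """Conditionalize a credence function on a proposition (a set of worlds)."""
    total = sum(cred[w] for w in e)
    return {w: (cred[w] / total if w in e else 0.0) for w in cred}

# In the Bad Case one's evidence is the opaque proposition EXP:
posterior = conditionalize(prior, EXP)
print(posterior)  # prints: {'good': 0.8, 'bad': 0.2} -- both cases still live
```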
<p>An argument for <sc>conditionalization</sc> that covers experiments with opaque evidence would therefore be just what the externalist needs. However, generalizing the plan-based arguments to such experiments raises the question of what our account of <italic>admissibility</italic> should be: <sc>epistemic admissibility</sc> or <sc>a-e admissibility</sc>. While, as we saw, these principles are equivalent in experiments with only transparent evidence, this is no longer the case when opaque evidence is possible. If one is in the Bad Case, one&#x2019;s evidence <italic>EXP</italic> is not equivalent to the claim that one has learnt <italic>EXP</italic>. Thus, the best plan for what to believe given <italic>EXP</italic> need not be the same as the best plan for what to believe given <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I32"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi><mml:mi>X</mml:mi><mml:mi>P</mml:mi></mml:mrow></mml:math></inline-formula>.</p>
<p>What several recent papers have done is to generalize the two arguments by focusing on plans with the <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I33"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi><mml:mtext>s</mml:mtext></mml:mrow></mml:math></inline-formula> as their conditions, which in our terms amounts to adopting <sc>a-e admissibility</sc>. To be clear, because the distinction between the two principles of admissibility is easily overlooked, this choice is not explicitly made. Rather, this account of admissibility is simply assumed as the natural way to block plans such as the omniscience plan from being considered. Given this assumption, <xref rid="r17" ref-type="bibr">Matthias Hild (1998)</xref> has shown that the <sc>(converse) dutch book theorem for conditionalization</sc> doesn&#x2019;t generalize to experiments with potentially opaque evidence. Similarly, <xref rid="r33" ref-type="bibr">Miriam Schoenfield (2017)</xref> shows that <sc>condi-max</sc> doesn&#x2019;t generalize to such experiments either.<sup><xref rid="fn20" ref-type="fn">20</xref></sup> That is not all, however: Hild and Schoenfield also show that in experiments with possibly opaque evidence, the best plan is the plan to follow <sc>a-e conditionalization</sc>. That is, the plan that maximizes expected accuracy, and the only plan that doesn&#x2019;t render the agent exploitable by dutch books, is the plan to comply with the following update rule:</p>
<disp-quote>
<p><sc>auto-epistemic (a-e) conditionalization</sc> Upon learning that <italic>E</italic>, a rational agent will conditionalize their prior credences on <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I34"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula>, so that, for any proposition <italic>p</italic>,
<disp-formula id="FD2">
<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="block" id="M2">
<mml:mrow><mml:msub><mml:mi>C</mml:mi><mml:mi>E</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x2223;</mml:mo><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:mtext>df</mml:mtext></mml:mrow></mml:msub><mml:mfrac><mml:mrow><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x2227;</mml:mo><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac><mml:mo>,</mml:mo></mml:mrow>
</mml:math>
</disp-formula>
if defined, where <italic>C</italic> and <italic>C</italic><sub><italic>E</italic></sub> are the agent&#x2019;s credence functions before and after learning that <italic>E</italic>, respectively.</p>
</disp-quote>
<p>Recall that <sc>conditionalization</sc> doesn&#x2019;t just say to conditionalize on <italic>some proposition</italic>, but specifically on one&#x2019;s evidence. <sc>a-e conditionalization</sc> agrees with the first part of that claim, but not with the second. It says to conditionalize, not on one&#x2019;s evidence (call it <italic>E</italic>), but on the true description of one&#x2019;s evidence (<inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I35"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula>). Note that if <italic>E</italic> is transparent, the agent is certain in advance that <italic>E</italic> will be their evidence iff <italic>E</italic> is true. Hence, conditionalizing on <italic>E</italic> will have the same effect as conditionalizing on <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I36"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula>, since conditionalizing on equivalent propositions yields equivalent results. Thus, under the restriction to transparent evidence, <sc>a-e conditionalization</sc> and <sc>conditionalization</sc> always agree. But if we reject <sc>evidential transparency</sc> and thus accept that opaque evidence is possible, the two rules can come apart.</p>
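<p>Both the agreement in transparent cases and the divergence in opaque ones can be checked in a minimal sketch (a two-world toy model of <italic>Here&#x2019;s a Hand</italic> with an illustrative prior; the function <italic>L</italic> maps a proposition <italic>E</italic> to the set of worlds where <italic>E</italic> is exactly what one learns):</p>

```python
# CONDITIONALIZATION updates on E; A-E CONDITIONALIZATION updates on L(E),
# the true description of one's evidence. They diverge when E is opaque.

prior = {"good": 0.8, "bad": 0.2}
HAND = {"good"}
EXP = {"good", "bad"}
evidence_at = {"good": HAND, "bad": EXP}

def L(e):
    """Worlds at which e is exactly the strongest proposition learnt."""
    return {w for w, ev in evidence_at.items() if ev == e}

def conditionalize(cred, e):
    """Conditionalize a credence function on a proposition (a set of worlds)."""
    total = sum(cred[w] for w in e)
    return {w: (cred[w] / total if w in e else 0.0) for w in cred}

# HAND is transparent here (L(HAND) == HAND), so the two rules agree on it:
assert conditionalize(prior, HAND) == conditionalize(prior, L(HAND))

# EXP is opaque (L(EXP) == {"bad"} != EXP), and the rules come apart:
print(conditionalize(prior, EXP))     # uncertainty about the two cases remains
print(conditionalize(prior, L(EXP)))  # all credence on the Bad Case
```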
<p>In other words, what Hild and Schoenfield have shown is that if the restriction to experiments with only transparent evidence is lifted, it is the plan to <italic>auto-epistemically</italic> conditionalize that has the nice properties: it maximizes expected accuracy among plans with <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I37"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I38"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula>, &#x2026; as their conditions and is the only such plan that renders the agent immune to exploitability by dutch books. Combining these facts with the rest of the premises of the <italic>Accuracy</italic> and <italic>Dutch Book Arguments</italic>, we get:</p>
<disp-quote>
<p><sc>plan auto-epistemic (a-e) conditionalization</sc> Before learning, the plan to auto-epistemically conditionalize is the best plan with <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I39"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula>, &#x2026; as its conditions. That is,
<disp-formula id="FD3">
<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="block" id="M3">
<mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mi>C</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x22C5;</mml:mo><mml:mo>&#x2223;</mml:mo><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mi>C</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x22C5;</mml:mo><mml:mo>&#x2223;</mml:mo><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow><mml:mo>&#x2026;</mml:mo></mml:mrow><mml:mo>}</mml:mo></mml:mrow><mml:mo>&#x227B;</mml:mo><mml:mi>&#x1D4AB;</mml:mi></mml:mrow>
</mml:math>
</disp-formula>
for any other doxastic unconditional plan <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I40"><mml:mrow><mml:mi mathvariant='script'>P</mml:mi></mml:mrow></mml:math></inline-formula> with the same conditions.</p>
</disp-quote>
<p>This premise can then be combined with <sc>a-e admissibility</sc>, just as before, to get an argument for <sc>a-e conditionalization</sc> itself.</p>
<p>Now, here&#x2019;s the bad news for the externalist: <sc>a-e conditionalization</sc> is inconsistent with <sc>externalism</sc>. One becomes certain of what one conditionalizes on, so if one conditionalizes on the true description of one&#x2019;s evidence, naturally one will never be uncertain of what one&#x2019;s evidence is! For example, in <italic>Here&#x2019;s a Hand</italic>, if one is in the Bad Case, <sc>a-e conditionalization</sc> will say to conditionalize on <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I41"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi><mml:mi>X</mml:mi><mml:mi>P</mml:mi></mml:mrow></mml:math></inline-formula> &#x2014; the claim that the strongest proposition that one learnt is that one had an experience as of a hand. This will leave one certain that one is in the Bad Case. So rejecting <sc>evidential transparency</sc> will not help the externalist: we still cannot endorse either of the two arguments as sound and retain <sc>externalism</sc>.</p>
<p>To sum up: the result is a dilemma for the externalist. On the first horn, they can assume <sc>evidential transparency</sc>. Then the two arguments will give them <sc>conditionalization</sc>, but not <sc>externalism</sc>. On the second horn, they can reject <sc>evidential transparency</sc>. That&#x2019;s what the externalist wanted to do all along &#x2014; if it&#x2019;s possible to be uncertain about evidential matters, that&#x2019;s surely because one can have opaque evidence. But then the premises of the two arguments will support <sc>a-e conditionalization</sc>, which again is inconsistent with <sc>externalism</sc>. So regardless of whether they accept <sc>evidential transparency</sc>, the premises of our two arguments will lead them to reject <sc>externalism</sc>.</p>
</sec>
<sec id="s15">
<label>6.</label><title>Rethinking Plan Coherence</title>
<sec id="s16">
<label>6.1</label><title>Which Plans Are Admissible?</title>
<p>How can the externalist respond? They could reject the accuracy framework, and the idea that dutch-bookability indicates irrationality. My suggestion, however, is to reconsider something that they have in common: the assumption about plan coherence (<sc>a-e admissibility</sc>) that has to be made in order to go from <sc>plan a-e conditionalization</sc> to <sc>a-e conditionalization</sc> itself. In particular, I think that externalists have good reason to reject <sc>a-e admissibility</sc> in favour of <sc>epistemic admissibility</sc>. That is, the conditions of the admissible plans are what you&#x2019;ve learnt, not that you&#x2019;ve learnt it. Since <sc>a-e admissibility</sc> is a premise in the arguments for <sc>a-e conditionalization</sc>, this allows externalists to reject those arguments as unsound.</p>
<p>To see why, let us think about practical planning for a moment. Imagine that our agent is now playing poker. Before the next card is drawn, they rightly conclude that, if they have a better hand than their opponents, the plan to raise the bet is best. The card is revealed, and as a matter of fact they do now have the best hand. However, since they cannot see their opponents&#x2019; hands, they don&#x2019;t know this. In fact, given what the agent knows about the cards on the table and their opponents&#x2019; behaviour, it now looks like they have a worse hand than some of their opponents. In this situation, they are obviously not required to raise the bet, despite the fact that the best plan says to do so. The natural explanation, I suggest, is that they have not learnt that they have the best hand, and so are not required to implement the best plan for that condition. This is precisely the explanation offered by <sc>epistemic admissibility</sc>, according to which a requirement to implement the best plan in a class only takes effect when one has learnt the condition of those plans.</p>
<p>Now returning to the epistemic case, in <italic>Here&#x2019;s a Hand</italic>, this explanation shows where <sc>a-e admissibility</sc> goes wrong. In the Bad Case, while you have learnt <italic>EXP</italic>, you have not learnt <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I42"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi><mml:mi>X</mml:mi><mml:mi>P</mml:mi></mml:mrow></mml:math></inline-formula>. Thus, the same objection as in the poker case applies: you have not learnt the condition of the best <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I43"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi><mml:mi>X</mml:mi><mml:mi>P</mml:mi></mml:mrow></mml:math></inline-formula>-plan, and so aren&#x2019;t required to implement it. To put the point differently: everyone will acknowledge that if we count one&#x2019;s &#x201C;situation&#x201D; as what hand one has in a poker game, one should not always prefer what the best plan for one&#x2019;s situation says. My suggestion is that this is because everyone agrees that one can have the best hand in a poker game, without having learnt this. But if we think of one&#x2019;s &#x201C;situation&#x201D; as a description of what one&#x2019;s evidence is, it has proved more tempting to think that one <italic>is</italic> required to prefer what the best plan for one&#x2019;s situation says. However, if opaque evidence is possible, the same objection applies here: one can learn that <italic>E</italic>, without having learnt that one learnt that <italic>E</italic>.<sup><xref rid="fn21" ref-type="fn">21</xref></sup></p>
<p>This argument against <sc>a-e admissibility</sc> from <sc>epistemic admissibility</sc> can be put on theoretically firmer footing by showing that the latter follows from two plausible principles of practical reasoning. First we have a diachronic principle:</p>
<disp-quote>
<p><sc>irrelevance</sc> If what a rational agent learns doesn&#x2019;t rule out any <italic>p</italic>-worlds, their preferences between plans with <italic>p</italic> as their condition will remain unchanged.</p>
</disp-quote>
<p>The idea is that if the agent doesn&#x2019;t learn anything that rules out <italic>p-</italic>worlds, the new evidence doesn&#x2019;t put them in a position to reconsider their plans for <italic>p</italic>, since they have not learnt anything about what could happen if <italic>p</italic> is true. The second premise concerns the agent&#x2019;s practical reasoning after learning:</p>
<disp-quote>
<p><sc>implementation</sc> If, after learning, a rational agent prefers &#x27E8;<italic>p</italic>, <italic>&#x03D5;</italic>&#x27E9; to all other actionable plans with <italic>p</italic> as their condition, and if their evidence entails that <italic>p</italic>, then they will <italic>&#x03D5;</italic>.</p>
</disp-quote>
<p>This is a synchronic principle of practical rationality, akin to the principle of means-end reasoning. We can think of it as saying that one can reason from one&#x2019;s evidence and one&#x2019;s preferences between conditional plans to a decision to implement the preferred plan. For example, if my best plan for rain is to bring a raincoat rather than an umbrella, and my evidence entails that it is raining, then I can decide to bring the raincoat on the basis of those two attitudes.</p>
<p><sc>epistemic admissibility</sc> follows from these two premises. First, by learning that <italic>E</italic>, a rational agent doesn&#x2019;t learn anything that rules out any <italic>E</italic>-worlds. Hence, their preferences between <italic>E</italic>-plans will remain unchanged, so that they still prefer the antecedently best <italic>E</italic>-plan to other <italic>E</italic>-plans. Second, if the agent learns that <italic>E</italic>, they have learnt something that entails that <italic>E</italic>, so that they will implement the best actionable <italic>E</italic>-plan, which as just shown is the <italic>antecedently</italic> best actionable <italic>E</italic>-plan.</p>
<p>Importantly, there is no parallel argument for <sc>a-e admissibility</sc>. Consider the best <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I44"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula>-plan. To establish <sc>a-e admissibility</sc>, we would want to reason in two steps, as before: first that the agent&#x2019;s preferences between <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I45"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula>-plans will stay the same upon learning that <italic>E</italic>, and second that the agent will implement the best <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I46"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula>-plan after learning. For the sake of argument, I am happy to grant the first step, since it follows from <sc>irrelevance</sc> on the assumption that the agent will only learn <italic>E</italic> if it is true.<sup><xref rid="fn22" ref-type="fn">22</xref></sup> On this assumption, learning that <italic>E</italic> cannot rule out any <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I47"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula>-worlds, since they are also <italic>E</italic>-worlds, so that <sc>irrelevance</sc> says that the agent&#x2019;s preferences between <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I48"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula>-plans will remain the same upon learning that <italic>E</italic>.</p>
<p>However, we would then have to show that the agent should implement the best <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I49"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula>-plan after learning. In cases of opaque evidence, this doesn&#x2019;t follow from <sc>implementation</sc>: in the Bad Case, the agent doesn&#x2019;t learn anything that entails <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I50"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi><mml:mi>X</mml:mi><mml:mi>P</mml:mi></mml:mrow></mml:math></inline-formula> by learning that <italic>EXP</italic>, and so it doesn&#x2019;t follow that they are required to implement the best plan for this condition. In other words, if the agent&#x2019;s evidence is opaque, they cannot necessarily reason &#x201C;I have learnt that <italic>E</italic>, and the best plan for learning that <italic>E</italic> is to <italic>&#x03D5;</italic>, so I&#x2019;ll <italic>&#x03D5;</italic>&#x201D;, because they have not learnt that they have learnt that <italic>E</italic>, and so cannot use <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I51"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula> in their deliberation. This is just an instance of the general fact that rational requirements are triggered not by just any facts, but by <italic>learnt</italic> facts. For example, if I know that <italic>if p, then q</italic>, this doesn&#x2019;t commit me to believing that <italic>q</italic> unless I also know (or <italic>learn</italic>) that <italic>p</italic>. 
Similarly, the fact that I prefer the plan to <italic>&#x03D5;</italic> to the plan to <italic>&#x03C8; given some condition</italic> doesn&#x2019;t trigger a rational requirement to prefer <italic>&#x03D5;</italic> to <italic>&#x03C8;</italic> unless I have learnt the condition.</p>
<p>Finally, recall that we introduced the admissibility principles to exclude plans such as the omniscience plan. Arguably, however, the stated motivations for excluding this and similar plans speak against <sc>a-e admissibility</sc> in cases of opaque evidence. First, Greaves &#x0026; Wallace say that plans such as the omniscience plan require the agent to &#x201C;respond to information that he does not have&#x201D; (<xref rid="r16" ref-type="bibr">Greaves &#x0026; Wallace, 2006</xref>, p. 612). I think that this problem is sometimes also shared by <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I52"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula>-plans: requiring the agent to auto-epistemically conditionalize in the Bad Case is precisely to require them to &#x201C;respond to information which he does not have&#x201D; &#x2014; namely the information that they are in the Bad Case. Even more revealing is <xref rid="r12" ref-type="bibr">Kenny Easwaran&#x2019;s (2013</xref>, pp. 124&#x2013;125) motivation for excluding the omniscience plan: that a plan should assign the same credence function to two states if the agent &#x201C;does not learn anything to distinguish them&#x201D;. Again, this problem is shared by <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I53"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula>-plans when evidence is opaque: in the Bad Case the agent doesn&#x2019;t learn anything to distinguish the Good Case from the Bad Case, and so should not be required to act on the plan to conditionalize on being in the Bad Case if they are in the Bad Case.</p>
</sec>
<sec id="s17">
<label>6.2</label><title>Is My Argument Unexternalist?</title>
<p>One may worry that, while the case against <sc>a-e admissibility</sc> seems reasonable, it clashes with a fundamental externalist principle: that facts about rationality can fail to be known to the agent. Doesn&#x2019;t my complaint that one is not required to implement a plan if one doesn&#x2019;t know its condition contradict this principle? This would be a serious problem if true, since my aim is to render Bayesianism consistent with <sc>externalism</sc>.</p>
<p>In reply, it is important to distinguish between <italic>rules</italic> and <italic>plans</italic>. The former are principles of rationality, and the objection is right that an externalist wouldn&#x2019;t want to say that such a principle only applies when the agent knows what it says to do. My objection, however, is about the <italic>plan</italic>. A plan is a commitment of the agent, like a belief or an intention.<sup><xref rid="fn23" ref-type="fn">23</xref></sup> What I have claimed is that this commitment only generates the additional commitment to implementing the plan when the agent has learnt its condition.</p>
<p>This is perfectly in line with what externalists think. No externalist would say that any fact can be rationally used in one&#x2019;s reasoning just by virtue of being true. To return to the above example, no externalist would hold the absurd view that if one knows <italic>if p then q</italic>, and <italic>p</italic> is <italic>true</italic> but not known or believed to be, one can rationally use <italic>p</italic> in one&#x2019;s reasoning and is thereby rationally committed to believing that <italic>q</italic>. Rather, they think that one must stand in some substantive epistemic relation to <italic>p</italic>, such as <italic>knowing it</italic> or <italic>being in a position to know it</italic>, in order for such a requirement to be triggered. Or to take a practical example, my intention or desire to drink gin doesn&#x2019;t rationally require me to intend or desire to drink the contents of the glass just because it happens to contain gin. A requirement is only triggered if I <italic>know</italic> that the glass contains gin, even for an externalist. That&#x2019;s why many externalists think of one&#x2019;s evidence as well as one&#x2019;s practical reasons as consisting of the propositions that one <italic>knows</italic>, or stands in similar relations to, such as what one <italic>non-inferentially knows</italic> or is in a <italic>position to know</italic> (<xref rid="r23" ref-type="bibr">Littlejohn, 2011</xref>; <xref rid="r24" ref-type="bibr">Lord, 2018</xref>; <xref rid="r36" ref-type="bibr">Williamson, 2000</xref>). In the Bayesian framework, we call this relation &#x201C;learning&#x201D; or &#x201C;having the proposition as evidence&#x201D;.</p>
<p>The effects of the externalist principle that requirements of rationality can fail to be known are felt on another level. One source of uncertainty about rationality is that one can stand in the right epistemic relation to a proposition without standing in the right relation to the higher-order proposition that says that one stands in the right relation to that proposition. So, for example, one can know <italic>if p then q</italic> and also know <italic>p</italic>, but fail to know that one knows that <italic>p</italic>. In such a case, the externalist would say that one is required to believe that <italic>q</italic>, even though one is not in a position to know that this is required. Similarly, an externalist about evidence would say that one can sometimes learn some <italic>E</italic> without learning <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I54"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula>. It would then follow from <sc>epistemic admissibility</sc> that one is required to implement the best plan for <italic>E</italic> even though one doesn&#x2019;t know that one is so required.</p>
</sec>
</sec>
<sec id="s18">
<label>7.</label><title>Foundations for New Arguments</title>
<sec id="s19">
<label>7.1</label><title>Why Plan Coherence?</title>
<p>I have argued that we should reject <sc>a-e admissibility</sc> in favour of <sc>epistemic admissibility</sc>. This suggests the following strategy: to argue for an update rule, we should consider plans with the evidence rather than a description of the evidence as their condition. In particular, if we could show that the plan to adopt <italic>C</italic>(<italic>&#x00B7;|E</italic>) is the best <italic>E</italic>-plan, <sc>epistemic admissibility</sc> would then kick in and require the agent to adopt the conditionalized credences. This is the line that I will pursue in the rest of this paper.</p>
<p>Before I do so, I want to address a more general challenge: I have argued that <sc>epistemic admissibility</sc> is a more plausible expression of the idea that rationality requires one to &#x201C;stick to the plan&#x201D; than <sc>a-e admissibility</sc>. But why think that rationality requires plan coherence of any kind? That is, why think that there are any admissible plans at all? I have already argued that if one did not learn the condition of the best plan, one need not be required to implement it. One can also have learnt <italic>too much</italic> to be required to implement it, as in a case where I have planned to sell my lottery ticket if I am offered $100 for it, but then learn that I am offered this amount <italic>and</italic> that it is the winning ticket. Here I have learnt <italic>more</italic> than the condition of the plan, which puts me in a position to rationally re-evaluate it.</p>
<p>What <italic>is</italic> plausible is that one should stick to a plan if one has learnt <italic>just enough</italic>, i.e. if one has learnt <italic>exactly</italic> the condition of the plan, no more and no less. Admittedly, I don&#x2019;t have an argument against general scepticism about plan coherence,<sup><xref rid="fn24" ref-type="fn">24</xref></sup> though in this respect I am not in a worse position than those who appeal to <sc>a-e admissibility</sc>. To the sceptic, all I can offer is the intuitive plausibility of the claim that one should stick to the plan when none of the above objections apply. This is best brought out by considering how someone would justify their violation of the principle:</p>
<list list-type="simple">
<list-item><p><italic>A</italic>: Why did you bring your umbrella? I thought you said yesterday that bringing a rain coat was the better plan for rain.</p></list-item>
<list-item><p><italic>B</italic>: It was, yes. But then I learnt something that made me reconsider. Yesterday night I didn&#x2019;t know that it was going to rain, so bringing the rain coat was the better plan for rain. But this morning, I learnt that it was raining, and in the light of that new information I decided to bring the umbrella instead.</p></list-item>
<list-item><p><italic>A</italic>: Wait, how can the fact that it is raining make you revise your plan about what to do <italic>if it rains</italic>?</p></list-item>
</list>
<p><italic>A</italic>&#x2019;s reaction seems justified. Either <italic>B</italic> was wrong about the plan to begin with, or they were wrong to revise it. That something happened cannot itself be a reason to reconsider what to do if it happens.</p>
</sec>
<sec id="s20">
<label>7.2</label><title>Conditional Plan Evaluation</title>
<p>In order to develop an argument for <sc>conditionalization</sc> based on <sc>epistemic admissibility</sc>, an adjustment will be necessary. The original arguments, which are restricted to experiments with only transparent evidence, establish <sc>conditionalization</sc> by showing that the best conditional plan for learning that <italic>E</italic> (i.e. for <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I55"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula>) is to conditionalize on <italic>E</italic>. This claim was in turn justified by arguing that the best <italic>unconditional</italic> plan with <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I56"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula>, &#x2026; as its conditions was the plan to conditionalize on <italic>E</italic><sub>1</sub> if the agent learns that <italic>E</italic><sub>1</sub>, conditionalize on <italic>E</italic><sub>2</sub> if they learn that <italic>E</italic><sub>2</sub>, <italic>etc.</italic></p>
<p>This kind of justification is not available, however, since we intend to use plans with <italic>E</italic><sub>1</sub>, <italic>E</italic><sub>2</sub>, &#x2026; as their conditions in a setting with opaque evidence. One can only form an unconditional plan from conditional plans if their conditions form a partition, but <italic>E</italic><sub>1</sub>, <italic>E</italic><sub>2</sub>, &#x2026; are not guaranteed to form a partition in experiments with opaque evidence. For example, in <italic>Here&#x2019;s a Hand</italic>, one proposition that you might learn is <italic>EXP</italic>, and another is <italic>EXP</italic> &#x2227; <italic>HAND</italic>. These propositions are consistent, and thus a plan to believe one thing given <italic>EXP</italic> and another given <italic>EXP</italic> &#x2227; <italic>HAND</italic> is not a real unconditional plan. However, as I will show, there is a similar way of evaluating conditional plans that escapes this problem. To this end, I will assume three principles about rational preference between conditional plans, and show that, combined with the claim that one ought to maximize expected utility, they pin down a unique way of evaluating conditional plans.</p>
<p>First, it is highly plausible that the preference relation between conditional plans satisfies some very minimal structural properties &#x2014; that it is reflexive, and that two conditional plans with the same condition can always be compared:</p>
<disp-quote>
<p><sc>reflexivity</sc> For any conditional plan <italic>A</italic>, <italic>A</italic> &#x2AB0; <italic>A</italic>.</p>
<p><sc>conditional completeness</sc> For any condition <italic>p</italic> and acts <italic>&#x03D5;</italic>, <italic>&#x03C8;</italic>, either &#x27E8;<italic>p</italic>, <italic>&#x03D5;</italic>&#x27E9; &#x2AB0; &#x27E8;<italic>p</italic>, <italic>&#x03C8;</italic>&#x27E9; or &#x27E8;<italic>p</italic>, <italic>&#x03C8;</italic>&#x27E9; &#x2AB0; &#x27E8;<italic>p</italic>, <italic>&#x03D5;</italic>&#x27E9; (or both).<sup><xref rid="fn25" ref-type="fn">25</xref></sup></p>
</disp-quote>
<p>The third principle will do the heavy lifting, and is a version of <xref rid="r31" ref-type="bibr">Leonard Savage&#x2019;s (1954)</xref> <italic>Sure-Thing Principle</italic>.<sup><xref rid="fn26" ref-type="fn">26</xref></sup> The idea is that if you have some partition of logical space, and for each cell of the partition prefer some act to another, you should be able to combine those preferences between conditional plans to form a preference between the <italic>unconditional</italic> plans that they make up. For example, suppose that it is better to bring an umbrella than a rain coat if it rains, and better to bring sunglasses than a cap if it doesn&#x2019;t rain. If so, the unconditional plan to bring an umbrella if it rains and sunglasses if it doesn&#x2019;t must be better than the plan to bring a rain coat if it rains and a cap if it doesn&#x2019;t. Formally:</p>
<disp-quote>
<p><sc>conditional dominance</sc> For any acts <italic>&#x03D5;</italic><sub>1</sub>, &#x2026;, <italic>&#x03D5;</italic><sub><italic>n</italic></sub>, <italic>&#x03C8;</italic><sub>1</sub>, &#x2026;, <italic>&#x03C8;</italic><sub><italic>n</italic></sub>, and any partition <italic>p</italic><sub>1</sub>, &#x2026;, <italic>p</italic><sub><italic>n</italic></sub>, if for all <italic>i</italic> (1 &#x2264; <italic>i</italic> &#x2264; <italic>n</italic>),
<disp-formula id="FD4">
<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="block" id="M4">
<mml:mrow><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x03D5;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow><mml:mo>&#x227D;</mml:mo><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x03C8;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow></mml:mrow>
</mml:math>
</disp-formula>
then
<disp-formula id="FD5">
<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="block" id="M5">
<mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x03D5;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow><mml:mo>&#x2223;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2264;</mml:mo><mml:mi>i</mml:mi><mml:mo>&#x2264;</mml:mo><mml:mi>n</mml:mi></mml:mrow><mml:mo>}</mml:mo></mml:mrow><mml:mo>&#x227D;</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x03C8;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow><mml:mo>&#x2223;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2264;</mml:mo><mml:mi>i</mml:mi><mml:mo>&#x2264;</mml:mo><mml:mi>n</mml:mi></mml:mrow><mml:mo>}</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mrow>
</mml:math>
</disp-formula></p>
</disp-quote>
<p>These assumptions, combined with the idea that we evaluate unconditional plans by their expected value, suffice to establish the following claim:</p>
<disp-quote>
<p><sc>conditional expectation lemma</sc> Given <sc>conditional dominance</sc> and <sc>reflexivity</sc>, and if preferences between unconditional plans are determined by their expected value, then for any acts <italic>&#x03D5;</italic>, <italic>&#x03C8;</italic> (whether doxastic or practical), and any proposition <italic>E</italic>, if <italic>C</italic>(<italic>&#x00B7;|E</italic>) is defined and
<disp-formula id="FD6">
<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="block" id="M6">
<mml:mrow><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x03D5;</mml:mi></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow><mml:mo>&#x227D;</mml:mo><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x03C8;</mml:mi></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow></mml:mrow>
</mml:math>
</disp-formula>
then
<disp-formula id="FD7">
<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="block" id="M7">
<mml:mrow><mml:mi>E</mml:mi><mml:msub><mml:mi>V</mml:mi><mml:mrow><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x22C5;</mml:mo><mml:mo>&#x2223;</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msub><mml:mi>&#x03D5;</mml:mi><mml:mo>&#x2265;</mml:mo><mml:mi>E</mml:mi><mml:msub><mml:mi>V</mml:mi><mml:mrow><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x22C5;</mml:mo><mml:mo>&#x2223;</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msub><mml:mi>&#x03C8;</mml:mi></mml:mrow>
</mml:math>
</disp-formula>
where <italic>EV</italic><sub><italic>c</italic></sub><italic>A</italic> is the expected value of act <italic>A</italic> according to credence function <italic>c</italic>.</p>
</disp-quote>
<p>The proof is in the <xref rid="app1" ref-type="app">Appendix</xref>. We have thus arrived at a way of evaluating conditional plans: rational preferences between plans sharing a condition reflect their expected value under the credence function conditionalized on that condition.</p>
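<p>To make this evaluation rule concrete, the following sketch computes conditional expected values for a toy experiment. The worlds, credences, and acts are invented for illustration and appear nowhere in the argument above; the sketch only exhibits the rule, not the proof.</p>

```python
# A toy sketch of evaluating conditional plans by conditional expected
# value (my own illustration; all numbers are invented).

worlds = ["w1", "w2", "w3"]
C = {"w1": 0.5, "w2": 0.3, "w3": 0.2}   # the agent's prior credence C
E = {"w1", "w2"}                        # the plans' shared condition E

def conditionalize(C, E):
    """Return C(. | E): zero out the non-E-worlds and renormalize."""
    total = sum(p for w, p in C.items() if w in E)
    return {w: (p / total if w in E else 0.0) for w, p in C.items()}

def EV(c, act):
    """Expected value of an act (a world -> value map) under credence c."""
    return sum(c[w] * act[w] for w in c)

# Two rival acts; their values outside E cannot matter to E-plans.
phi = {"w1": 1.0, "w2": 0.0, "w3": 5.0}
psi = {"w1": 0.2, "w2": 0.6, "w3": 9.0}

C_E = conditionalize(C, E)
# The rule: <E, phi> is weakly preferred to <E, psi> just in case the
# expected value of phi under C(. | E) is at least that of psi.
print(EV(C_E, phi), EV(C_E, psi))
```

<p>Note that the values the acts take at worlds outside <italic>E</italic> make no difference to the comparison, since <italic>C</italic>(<italic>&#x00B7;|E</italic>) assigns those worlds zero weight.</p>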
</sec>
</sec>
<sec id="s21">
<label>8.</label><title>Two Arguments for Conditionalization</title>
<p>We can now combine this close connection between conditional credences and preferences about conditional plans with <sc>epistemic admissibility</sc> to give two new arguments for <sc>conditionalization</sc>: one accuracy-theoretic, and one practical. These arguments are not restricted to experiments with transparent evidence. As such, their conclusion is consistent with <sc>externalism</sc>: if it is possible to have opaque evidence, they require the agent to conditionalize on it, and thus to become uncertain of what they have learnt.</p>
<sec id="s22">
<label>8.1</label><title>A New Accuracy Argument</title>
<p>I begin with the accuracy-theoretic argument, which builds on Greaves &#x0026; Wallace&#x2019;s <italic>Accuracy Argument</italic>, but which is now stated in terms of the best conditional doxastic plan for the agent&#x2019;s evidence. Just as before, it proceeds in two steps. First, it is shown that the <italic>conditional</italic> plan to conditionalize on <italic>E</italic> is the best plan for <italic>E</italic>:</p>
<disp-quote>
<p><sc>conditional plan conditionalization</sc> For any possible evidence <italic>E</italic>, the best plan for <italic>E</italic> is to conditionalize on <italic>E</italic>. That is, for any credence function <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I57"><mml:mrow><mml:msup><mml:mi>c</mml:mi><mml:mo>*</mml:mo></mml:msup><mml:mo>&#x2260;</mml:mo><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x22C5;</mml:mo><mml:mo>&#x2223;</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>,</mml:mo><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x22C5;</mml:mo><mml:mo>&#x2223;</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow><mml:mo>&#x227B;</mml:mo><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:msup><mml:mi>c</mml:mi><mml:mo>*</mml:mo></mml:msup></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>.<sup><xref rid="fn27" ref-type="fn">27</xref></sup></p>
<p><bold>Justification:</bold> Assume, for reductio, that there is some credence function <italic>c</italic><sup><italic>*</italic></sup> &#x2260; <italic>C</italic>(<italic>&#x00B7;|E</italic>) such that it is not the case that <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I58"><mml:mrow><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x22C5;</mml:mo><mml:mo>&#x2223;</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow><mml:mo>&#x227B;</mml:mo><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:msup><mml:mi>c</mml:mi><mml:mo>*</mml:mo></mml:msup></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>. By <sc>conditional completeness</sc>, it follows that <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I59"><mml:mrow><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:msup><mml:mi>c</mml:mi><mml:mo>*</mml:mo></mml:msup></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow><mml:mo>&#x227D;</mml:mo><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x22C5;</mml:mo><mml:mo>&#x2223;</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>. Now applying the <sc>conditional expectation lemma</sc>, we get <italic>EV</italic><sub><italic>C</italic>(<italic>&#x00B7;|E</italic>)</sub><italic>c</italic><sup><italic>*</italic></sup> &#x2265; <italic>EV</italic><sub><italic>C</italic>(<italic>&#x00B7;|E</italic>)</sub><italic>C</italic>(<italic>&#x00B7;|E</italic>). 
Assuming that the value of a credence function is its accuracy, it follows that <italic>C</italic>(<italic>&#x00B7;|E</italic>) regards <italic>c</italic><sup><italic>*</italic></sup> as at least as accurate as itself in expectation. But that is impossible, since by <italic>Strict Propriety</italic> any probability function regards itself as more accurate in expectation than any other credence function, and <italic>C</italic>(<italic>&#x00B7;|E</italic>) is a probability function, since <italic>C</italic> is. Hence the assumption must have been false: <italic>&#x27E8;E</italic>, <italic>C</italic>(<italic>&#x00B7;|E</italic>)<italic>&#x27E9;</italic> is the best doxastic conditional plan with <italic>E</italic> as a condition.</p>
</disp-quote>
<p>Note that this principle is about the best plan for <italic>E</italic>, not the best plan for <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I60"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula>. We can thus argue as follows: Suppose that our rational agent learns that <italic>E</italic>. Assuming that all doxastic plans are actionable (i.e. that the agent can adopt any credence function upon learning that <italic>E</italic>), it follows from <sc>conditional plan conditionalization</sc> that the best actionable <italic>E</italic>-plan is to conditionalize on <italic>E</italic>. Thus, by <sc>epistemic admissibility</sc> the agent will conditionalize on <italic>E</italic>. Hence, <sc>conditionalization</sc> is true. Since no assumptions have been made about the relationship between <italic>E</italic> and <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I61"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula>, this conclusion holds irrespective of whether <italic>E</italic> is transparent or opaque.</p>
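<p>The appeal to <italic>Strict Propriety</italic> in the justification above can be checked numerically. The sketch below uses the Brier score, one strictly proper accuracy measure among those the argument allows, and an invented value of 0.7 for <italic>C</italic>(<italic>A|E</italic>); neither choice is drawn from the text.</p>

```python
# Sketch of the Strict Propriety step (my own illustration; the value
# 0.7 for C(A | E) and the rival credences are invented). Accuracy is
# measured by the negative Brier score, a strictly proper scoring rule.

def expected_accuracy(p, x):
    """Expected accuracy of credence x in a proposition A, computed by
    the lights of probability p in A: weight p on the world where A is
    true (ideal credence 1), weight 1 - p on the world where A is
    false (ideal credence 0)."""
    return p * -((x - 1) ** 2) + (1 - p) * -(x ** 2)

p = 0.7                                 # C(A | E), invented for the example
rivals = [0.0, 0.3, 0.5, 0.69, 0.9, 1.0]

# Strict Propriety: p expects itself to beat every rival c* != p,
# which is exactly what closes the reductio.
print(all(expected_accuracy(p, p) > expected_accuracy(p, x) for x in rivals))
# prints True
```

<p>This is no accident of the chosen numbers: expected accuracy under <italic>p</italic> is a strictly concave function of the rival credence, uniquely maximized at the rival equal to <italic>p</italic> itself.</p>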
</sec>
<sec id="s23">
<label>8.2</label><title>A Practical Argument</title>
<p>The <sc>conditional expectation lemma</sc> also suggests a different argument for <sc>conditionalization</sc>, one that doesn&#x2019;t employ the accuracy framework. In fact, we can dispense with the idea of doxastic planning altogether. As long as <italic>practical planning</italic> &#x2014; planning what to do, rather than what to believe &#x2014; obeys the assumptions listed in the <sc>conditional expectation lemma</sc>, we can establish <sc>conditionalization</sc> by combining <sc>epistemic admissibility</sc> with expected utility reasoning.<sup><xref rid="fn28" ref-type="fn">28</xref></sup></p>
<p>Acts are now practical acts, instead of doxastic acts, and we&#x2019;ll assume that preferences between them are determined by their expected utility. The <italic>Practical Argument</italic> for <sc>conditionalization</sc> then goes as follows:</p>
<disp-quote>
<p>Suppose, for reductio, that <sc>conditionalization</sc> is false, so that for some <italic>E</italic> and <italic>A</italic>, <italic>C</italic><sub><italic>E</italic></sub>(<italic>A</italic>) <italic>&#x2260; C</italic>(<italic>A|E</italic>). Assume that our agent can perform one of the following two acts, where acts are functions from worlds to the utility, in that world, of performing the act:
<disp-formula id="FD8">
<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="block" id="M8">
<mml:mrow><mml:mi>&#x1D49C;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>w</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable columnalign='left'><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>C</mml:mi><mml:mi>E</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mtext>&#x00A0;if&#x00A0;</mml:mtext><mml:mi>w</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mi>A</mml:mi></mml:mrow></mml:mtd></mml:mtr><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>C</mml:mi><mml:mi>E</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mtext>&#x00A0;if&#x00A0;</mml:mtext><mml:mi>w</mml:mi><mml:mo>&#x2209;</mml:mo><mml:mi>A</mml:mi></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:mrow></mml:mrow>
</mml:math>
</disp-formula>
and
<disp-formula id="FD9">
<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="block" id="M9">
<mml:mrow><mml:mi>&#x1D4AA;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>w</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mtext>&#x00A0;for&#x00A0;all&#x00A0;</mml:mtext><mml:mi>w</mml:mi><mml:mo>.</mml:mo></mml:mrow>
</mml:math>
</disp-formula></p>
<p>We can think of <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I62"><mml:mrow><mml:mi mathvariant='script'>A</mml:mi></mml:mrow></mml:math></inline-formula> as accepting a $1 bet on <italic>A</italic> for the price of <italic>C</italic><sub><italic>E</italic></sub>(<italic>A</italic>), and of <italic>&#x1D4AA;</italic> as declining to bet.</p>
<p>Since <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I63"><mml:mrow><mml:msub><mml:mi>C</mml:mi><mml:mi>E</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x2260;</mml:mo><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>A</mml:mi><mml:mo>&#x2223;</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></inline-formula>, it follows that either <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I64"><mml:mrow><mml:mi>E</mml:mi><mml:msub><mml:mi>V</mml:mi><mml:mrow><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x22C5;</mml:mo><mml:mo>&#x2223;</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msub><mml:mi>&#x1D49C;</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mi>E</mml:mi><mml:msub><mml:mi>V</mml:mi><mml:mrow><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x22C5;</mml:mo><mml:mo>&#x2223;</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msub><mml:mi>&#x1D4AA;</mml:mi></mml:mrow></mml:math></inline-formula> or <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I65"><mml:mrow><mml:mi>E</mml:mi><mml:msub><mml:mi>V</mml:mi><mml:mrow><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x22C5;</mml:mo><mml:mo>&#x2223;</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msub><mml:mi>&#x1D49C;</mml:mi><mml:mo>&#x003C;</mml:mo><mml:mi>E</mml:mi><mml:msub><mml:mi>V</mml:mi><mml:mrow><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x22C5;</mml:mo><mml:mo>&#x2223;</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msub><mml:mi>&#x1D4AA;</mml:mi></mml:mrow></mml:math></inline-formula>.<sup><xref rid="fn29" ref-type="fn">29</xref></sup> Thus, by the <sc>conditional expectation lemma</sc>, it 
is not both the case that (a) <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I66"><mml:mrow><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x1D49C;</mml:mi></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow><mml:mo>&#x227D;</mml:mo><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x1D4AA;</mml:mi></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> and (b) <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I67"><mml:mrow><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x1D4AA;</mml:mi></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow><mml:mo>&#x227D;</mml:mo><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x1D49C;</mml:mi></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>. 
However, by <sc>conditional completeness</sc>, one of (a) or (b) is true, so that by definition of strict preference (&#x227B;) it follows that either <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I68"><mml:mrow><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x1D49C;</mml:mi></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow><mml:mo>&#x227B;</mml:mo><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x1D4AA;</mml:mi></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> or <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I69"><mml:mrow><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x1D4AA;</mml:mi></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow><mml:mo>&#x227B;</mml:mo><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x1D49C;</mml:mi></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>.</p>
<p>Now applying <sc>epistemic admissibility</sc>, it follows that the agent must either perform <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I70"><mml:mrow><mml:mi mathvariant='script'>A</mml:mi></mml:mrow></mml:math></inline-formula> or <italic>&#x1D4AA;</italic> after learning that <italic>E</italic>. But <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I71"><mml:mrow><mml:mi mathvariant='script'>A</mml:mi></mml:mrow></mml:math></inline-formula> is a fair bet from the point of view of <italic>C</italic><sub><italic>E</italic></sub> (i.e. <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I72"><mml:mrow><mml:mi mathvariant='script'>A</mml:mi></mml:mrow></mml:math></inline-formula> and <italic>&#x1D4AA;</italic> have the same expected utility for the agent after learning that <italic>E</italic>)<sup><xref rid="fn30" ref-type="fn">30</xref></sup> so that the agent is permitted to perform either. Thus, our assumption must be wrong: it cannot be the case that for some <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I73"><mml:mrow><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:mi>A</mml:mi><mml:mo>,</mml:mo><mml:msub><mml:mi>C</mml:mi><mml:mi>E</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x2260;</mml:mo><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>A</mml:mi><mml:mo>&#x2223;</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></inline-formula>. <sc>conditionalization</sc> must be true.</p>
</disp-quote>
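<p>(The plan incoherence the proof trades on can be checked numerically. The following sketch uses hypothetical credences of my own choosing: it builds a prior <italic>C</italic>, an evidence proposition <italic>E</italic>, and a planned posterior with <italic>C</italic><sub><italic>E</italic></sub>(<italic>A</italic>) &#x2260; <italic>C</italic>(<italic>A|E</italic>), and computes the expected value of a bet on <italic>A</italic> that is fair by the posterior&#x2019;s lights but not by the lights of <italic>C</italic>(&#x00B7;<italic>|E</italic>).)</p>

```python
# Hypothetical numbers illustrating the plan incoherence behind the argument.
# Worlds w1 and w2 make E true; w3 makes it false. A is true only at w1.
C = {"w1": 0.3, "w2": 0.3, "w3": 0.4}  # prior credences
E = {"w1", "w2"}
A = {"w1"}

# Conditionalizing the prior on E gives C(A | E) = 0.3 / 0.6 = 0.5.
C_given_E = {w: (C[w] / sum(C[x] for x in E) if w in E else 0.0) for w in C}

# Suppose the agent's planned posterior instead assigns C_E(A) = 0.8.
C_E_of_A = 0.8

# A bet on A that is fair relative to C_E:
# it pays 1 - C_E(A) if A is true and costs C_E(A) otherwise.
def payoff(w):
    return (1 - C_E_of_A) if w in A else -C_E_of_A

# By the posterior's own lights the bet has zero expected value,
# so after learning E the agent is indifferent between taking it and not.
ev_posterior = C_E_of_A * (1 - C_E_of_A) + (1 - C_E_of_A) * (-C_E_of_A)

# But by the lights of C(. | E), which fix the plan-time ranking,
# its expected value is C(A | E) - C_E(A), which is not zero.
ev_conditional = sum(C_given_E[w] * payoff(w) for w in C)
print(ev_posterior, ev_conditional)  # 0.0 and roughly -0.3
```

<p>(Because the two expected values diverge, one of the conditional plans is strictly preferred at plan time while the corresponding acts are ranked indifferent after learning, which is exactly the tension the proof exploits.)</p>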
<p>Two remarks on this argument: First, just like the <italic>New Accuracy Argument</italic>, it is not restricted to experiments with transparent evidence. Second, note that by applying <sc>epistemic admissibility</sc>, we implicitly assumed that the agent has exactly two available acts: <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I74"><mml:mrow><mml:mi mathvariant='script'>A</mml:mi></mml:mrow></mml:math></inline-formula> and <italic>&#x1D4AA;</italic>. How does this justify <sc>conditionalization</sc> in a situation where those acts aren&#x2019;t available, or where others are? Here we can take a page from the <italic>DDBA</italic>, which is meant to show that one should always conditionalize on one&#x2019;s evidence, not just when a clever bookie is in the vicinity and the only available options are to accept their bet, or not. Proponents of the <italic>DDBA</italic>, such as David Lewis, argue that the fact that the agent <italic>would</italic> accept a Dutch Book if they faced that choice means that they have &#x201C;contradictory opinions about the expected value of the very same transaction&#x201D; (<xref rid="r22" ref-type="bibr">Lewis, 1999</xref>, p. 405), which is a kind of irrationality, not in the agent&#x2019;s actions, but in their <italic>preferences</italic>. 
The same can be said about this argument: the fact that the agent is indifferent between <italic>&#x1D4AA;</italic> and <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I75"><mml:mrow><mml:mi mathvariant='script'>A</mml:mi></mml:mrow></mml:math></inline-formula>, even though before learning they strictly preferred one of <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I76"><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x1D4AA;</mml:mi><mml:mo>&#x232A;</mml:mo></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I77"><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x1D49C;</mml:mi><mml:mo>&#x232A;</mml:mo></mml:mrow></mml:math></inline-formula> to the other, shows that they have plan incoherent preferences even though they aren&#x2019;t able to perform those acts, or are able to perform others.<sup><xref rid="fn31" ref-type="fn">31</xref></sup></p>
</sec>
<sec id="s24">
<label>8.3</label><title>Discussion</title>
<p>My two arguments can be seen, roughly, as generalizations of the standard arguments to experiments that may contain opaque evidence, though the <italic>Practical Argument</italic> is more loosely related to the <italic>DDBA</italic>. They generalize the arguments in a different way than Hild and Schoenfield do, however: by adopting a different account of admissibility. Those who, like Hild and Schoenfield, consider the admissible plans to be those with <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I78"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula> as their condition, are really answering the question: What is the optimal way to react to the fact that one has learned that <italic>E</italic>? My arguments, on the other hand, answer a different question: What is the optimal way to react to the fact that <italic>E</italic> is true? It should be no surprise that our answers differ accordingly: I think one ought to conditionalize on <italic>E</italic>, while they conclude that one ought to conditionalize on <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I79"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula>.</p>
<p>This doesn&#x2019;t, I think, reflect well on the arguments for <sc>a-e conditionalization</sc>. Arguably, we should want our update rule to help explicate the idea that one should believe what one&#x2019;s evidence supports. If so, what should matter is the evidential proposition itself (<italic>E</italic>), not the fact that it is one&#x2019;s evidence (<inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I80"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula>). That is because, intuitively, whether <italic>E</italic> supports <italic>p</italic> depends on the logical or inductive relationship between <italic>E</italic> and <italic>p</italic>, not on the logical or inductive relationship between <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I81"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula> and <italic>p</italic>. For example, <italic>It is raining</italic> doesn&#x2019;t intuitively support <italic>I have evidence about the weather</italic> to any high degree, given normal background evidence, because there is no logical or strong inductive connection between the fact about the weather and the fact about my mind. That is the case even though <italic>My evidence says that it is raining</italic> entails the claim that <italic>I have evidence about the weather</italic>.<sup><xref rid="fn32" ref-type="fn">32</xref></sup></p>
<p>As for the differences between my arguments, each has its own advantage. The <italic>Practical Argument</italic>, I believe, has an important advantage over the <italic>New Accuracy Argument</italic>. The idea that we should think of the rationality of epistemic principles as determined by their conduciveness to accurate belief is highly controversial.<sup><xref rid="fn33" ref-type="fn">33</xref></sup> The <italic>Practical Argument</italic> allows us to sidestep this controversy, since the only assumption it makes about rational credences (apart from them being probabilities) is that they are connected to preferences about acts and plans via expected value calculations. This makes the <italic>Practical Argument</italic> interesting independently of the issue of <sc>externalism</sc>. On the other hand, if one thinks that facts about epistemic rationality are explanatorily prior to facts about practical rationality, the <italic>New Accuracy Argument</italic> seems better poised to <italic>explain</italic>, rather than just establish, the rationality of conditionalizing.</p>
</sec>
<sec id="s25">
<label>8.4</label><title>Comparison to Another Solution</title>
<p><xref rid="r14" ref-type="bibr">Dmitri Gallow (2021)</xref> suggests a different way of reconciling the <italic>Accuracy Argument</italic> with <sc>externalism</sc>.<sup><xref rid="fn34" ref-type="fn">34</xref></sup> He gives an accuracy-based argument for an update rule which is consistent with <sc>externalism</sc>, but differs from both <sc>conditionalization</sc> and <sc>a-e conditionalization</sc>.<sup><xref rid="fn35" ref-type="fn">35</xref></sup> His key idea is that one can evaluate plans not by the expected accuracy of correctly following them, as I have done, following most of the literature, but rather by the expected accuracy of <italic>adopting</italic> the plan, where the latter takes into account the possibility that one will not follow the plan correctly.<sup><xref rid="fn36" ref-type="fn">36</xref></sup> He shows that if expected accuracy is calculated in this way, his update rule maximizes expected accuracy.</p>
<p>While our update rules differ, this doesn&#x2019;t necessarily mean that we disagree across the board. Instead, they are different kinds of rules: Gallow&#x2019;s rule concerns a less idealized standard of rationality, which takes possible fallibility into account, while my rule is about <italic>ideal</italic> rationality, which is only sensitive to the agent&#x2019;s lack of information, and not to the possibility that they are cognitively limited.<sup><xref rid="fn37" ref-type="fn">37</xref></sup> We do disagree about the ideal case, however. As Gallow acknowledges, his rule collapses into <sc>a-e conditionalization</sc> for ideal agents who know that they will correctly follow their plan.<sup><xref rid="fn38" ref-type="fn">38</xref></sup> For Gallow, then, such agents can never be uncertain of their evidence, even when it is opaque as in <italic>Here&#x2019;s a Hand</italic>. Thus, I reject Gallow&#x2019;s argument for the same reason that I reject Hild&#x2019;s and Schoenfield&#x2019;s arguments: in cases of opaque evidence, the agent has not learnt the condition of their update plan and so is not required to implement it. My view thus has an advantage as far as the externalist is concerned. For me, one can be rationally uncertain of one&#x2019;s evidence for the same reason one can be rationally uncertain about anything else: one&#x2019;s evidence may not settle the matter. For Gallow, this is not enough for uncertainty about evidence to be rational. One must also be uncertain about one&#x2019;s capacity to implement plans.</p>
</sec>
<sec id="s26">
<label>8.5</label><title>Is Conditionalization Still a Coherence Norm?</title>
<p>I end by considering a more general worry about the relationship between <sc>externalism</sc> and Bayesianism.<sup><xref rid="fn39" ref-type="fn">39</xref></sup> Standard Bayesian arguments, such as accuracy and Dutch Book arguments, are often thought to show that an agent who doesn&#x2019;t conform to the standard Bayesian norms is <italic>incoherent</italic> in that they are doing something that&#x2019;s suboptimal <italic>from their own perspective</italic>, for example, by accepting a Dutch Book, or by having beliefs that are accuracy-dominated. But if one can have opaque evidence, conditionalizing can&#x2019;t always be a matter of doing what&#x2019;s best from one&#x2019;s perspective, since in the Bad Case, for example, one doesn&#x2019;t know whether it is best to conditionalize on <italic>EXP</italic> or <italic>EXP</italic> &#x2227; <italic>HAND</italic>.</p>
<p>In reply, an externalist will insist on the distinction between what&#x2019;s best from one&#x2019;s perspective and what one can <italic>know</italic> (or be certain) is best from one&#x2019;s perspective. My arguments do show that a non-conditionalizer is incoherent in that they are doing something that&#x2019;s bad by their own lights, even in cases with opaque evidence. In <italic>Here&#x2019;s a Hand</italic>, if your plan preferences are rationally formed, you will prefer the plan to conditionalize on <italic>EXP</italic> to all other plans for <italic>EXP</italic>, so that by failing to conditionalize on <italic>EXP</italic> in the Bad Case you are plan incoherent &#x2014; you fail to implement the plan that&#x2019;s best by your own lights. The fact that your evidence is opaque only means that you don&#x2019;t know exactly what your perspective is, since your perspective is partly constituted by your evidence, so that you aren&#x2019;t certain of what is best by your lights.</p>
<p>One may also worry that this leaves the agent without guidance in such cases. However, precisely for this reason, externalists typically deny that rational norms must always give us guidance, at least in the strong sense that one&#x2019;s evidence must always warrant certainty about what they require.<sup><xref rid="fn40" ref-type="fn">40</xref></sup> Some externalists go so far as to deny that there are any norms that can always provide guidance in this sense (<xref rid="r36" ref-type="bibr">Williamson, 2000</xref>, ch. 4). Indeed, a similar worry arguably arises for other Bayesian arguments even without the possibility of opaque evidence, as long as one can be rationally uncertain of other factors that make up one&#x2019;s perspective, such as one&#x2019;s <italic>credences</italic> or <italic>values</italic>. For example, the <italic>Accuracy Argument for Probabilism</italic> (<xref rid="r19" ref-type="bibr">Joyce, 1998</xref>) shows that every non-probabilistic credence function is accuracy-dominated, so that for any probabilistically incoherent agent there is always a credence function that is guaranteed to be more accurate than their credences. But a non-probabilistic agent who doesn&#x2019;t know what their credences are can fail to know that there is a more accurate credence function, or which function that is.</p>
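<p>(The dominance fact cited from Joyce&#x2019;s argument can be illustrated with a toy example; the numbers below are my own. A credence of 0.7 in both a proposition and its negation is beaten at every world, under the Brier score, by the probabilistic credence pair (0.5, 0.5), whether or not the agent is in a position to know this.)</p>

```python
# Hypothetical illustration of accuracy-dominance under the Brier score.
# Two worlds: at w1 the proposition p is true, at w2 it is false.
def brier_inaccuracy(cred_p, cred_not_p, p_true):
    """Sum of squared distances of the credences from the truth values (1/0)."""
    truth_p, truth_not_p = (1, 0) if p_true else (0, 1)
    return (cred_p - truth_p) ** 2 + (cred_not_p - truth_not_p) ** 2

# Non-probabilistic credences: 0.7 in p and 0.7 in not-p (they sum to 1.4).
bad_at_w1 = brier_inaccuracy(0.7, 0.7, p_true=True)    # 0.09 + 0.49
bad_at_w2 = brier_inaccuracy(0.7, 0.7, p_true=False)   # 0.49 + 0.09

# A probabilistic alternative: credence 0.5 in each.
good_at_w1 = brier_inaccuracy(0.5, 0.5, p_true=True)   # 0.25 + 0.25
good_at_w2 = brier_inaccuracy(0.5, 0.5, p_true=False)  # 0.25 + 0.25

# The probabilistic credences are less inaccurate however the world turns out.
print(good_at_w1 < bad_at_w1 and good_at_w2 < bad_at_w2)  # True
```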
</sec>
</sec>
<sec id="s27">
<label>9.</label><title>Conclusion: Bayesianism and Externalism Reconciled</title>
<p>We have ended up with the orthodox view that rational agents update by <sc>conditionalization</sc>. But we now know that this is consistent with the possibility of opaque evidence, and thus also with <sc>externalism</sc>. The arguments for <sc>a-e conditionalization</sc> looked like they could circumvent the difficult question of whether one can have opaque evidence and settle the debate between externalists and their opponents by a feat of Bayesian magic, as it were. But they did so, I have argued, only by generalizing the standard arguments in the wrong way: by assuming a principle of admissibility which we have good reason to reject if opaque evidence is possible. However, there is an alternative way of generalizing those arguments, which forms the basis for two new arguments for <sc>conditionalization</sc> that apply regardless of the possibility of opaque evidence. So if life can sometimes throw opaque evidence our way, the right response is to conditionalize on it, and so remain uncertain about what we have learnt, just as we would remain uncertain about anything else that our evidence doesn&#x2019;t conclusively establish. This is good news for the externalist, but also for the Bayesian, who now has arguments for <sc>conditionalization</sc> that apply whether or not opaque evidence is possible.<sup><xref rid="fn41" ref-type="fn">41</xref></sup></p>
</sec>
</body>
<back>
<app-group>
<app id="app1">
<title>Appendix: Proof of the conditional expectation lemma</title>
<p>(Some notation will be helpful: for any acts <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I82"><mml:mrow><mml:mi>&#x03B1;</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:mtext>,&#x00A0;let&#x00A0;</mml:mtext><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>E</mml:mi></mml:msub><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mo>&#x00AC;</mml:mo><mml:mi>E</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x03B1;</mml:mi></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mo>&#x00AC;</mml:mo><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x03B2;</mml:mi></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>, i.e. the <italic>unconditional</italic> plan that contains the conditional plans to <italic>&#x03B1;</italic> if <italic>E</italic>, and to <italic>&#x03B2;</italic> if <italic>&#x00AC;E</italic>.)</p>
<p>Assume that <italic>C</italic>(<italic>&#x00B7;|E</italic>) is defined and that <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I83"><mml:mrow><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x03D5;</mml:mi></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow><mml:mo>&#x227D;</mml:mo><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x03C8;</mml:mi></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>. <sc>reflexivity</sc> gives us <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I84"><mml:mrow><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mo>&#x00AC;</mml:mo><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x03D5;</mml:mi></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow><mml:mo>&#x227D;</mml:mo><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mo>&#x00AC;</mml:mo><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x03D5;</mml:mi></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>. 
Since <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I85"><mml:mrow><mml:mrow><mml:mo>{</mml:mo> <mml:mrow><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:mo>&#x00AC;</mml:mo><mml:mi>E</mml:mi></mml:mrow> <mml:mo>}</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> is a partition, by <sc>conditional dominance</sc> it follows that <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I86"><mml:mrow><mml:msub><mml:mi>&#x03D5;</mml:mi><mml:mi>E</mml:mi></mml:msub><mml:msub><mml:mi>&#x03D5;</mml:mi><mml:mrow><mml:mo>&#x00AC;</mml:mo><mml:mi>E</mml:mi></mml:mrow></mml:msub><mml:mo>&#x227D;</mml:mo><mml:msub><mml:mi>&#x03C8;</mml:mi><mml:mi>E</mml:mi></mml:msub><mml:msub><mml:mi>&#x03D5;</mml:mi><mml:mrow><mml:mo>&#x00AC;</mml:mo><mml:mi>E</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math></inline-formula>. We can then reason as follows:</p>
<p>By the assumption that rational preferences between unconditional plans reflect their expected value:

<disp-formula id="FD10">
<label>(A)</label>
<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="block" id="M10">
<mml:mrow><mml:msub><mml:mtext>&#x03A3;</mml:mtext><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x03C7;</mml:mi><mml:mo>&#x232A;</mml:mo><mml:mo>&#x2208;</mml:mo><mml:msub><mml:mi>&#x03D5;</mml:mi><mml:mi>E</mml:mi></mml:msub><mml:msub><mml:mi>&#x03D5;</mml:mi><mml:mrow><mml:mo>&#x00AC;</mml:mo><mml:mi>E</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:msub><mml:mtext>&#x03A3;</mml:mtext><mml:mrow><mml:mi>w</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mi>p</mml:mi></mml:mrow></mml:msub><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>w</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>V</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x03C7;</mml:mi><mml:mo>,</mml:mo><mml:mi>w</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x2265;</mml:mo><mml:msub><mml:mtext>&#x03A3;</mml:mtext><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x03C7;</mml:mi><mml:mo>&#x232A;</mml:mo><mml:mo>&#x2208;</mml:mo><mml:msub><mml:mi>&#x03C8;</mml:mi><mml:mi>E</mml:mi></mml:msub><mml:msub><mml:mi>&#x03D5;</mml:mi><mml:mrow><mml:mo>&#x00AC;</mml:mo><mml:mi>E</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:msub><mml:mtext>&#x03A3;</mml:mtext><mml:mrow><mml:mi>w</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mi>p</mml:mi></mml:mrow></mml:msub><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>w</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>V</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x03C7;</mml:mi><mml:mo>,</mml:mo><mml:mi>w</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>.</mml:mo></mml:mrow>
</mml:math>
</disp-formula>

where <italic>V</italic>(<italic>&#x03C7;</italic>, <italic>w</italic>) is the value of act <italic>&#x03C7;</italic> at world <italic>w</italic>. Expanding this gives:

<disp-formula id="FD11">
<label>(B)</label>
<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="block" id="M11">
<mml:mrow>
<mml:msub><mml:mtext>&#x03A3;</mml:mtext><mml:mrow><mml:mi>w</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mi>E</mml:mi></mml:mrow></mml:msub><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>w</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>V</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x03D5;</mml:mi><mml:mo>,</mml:mo><mml:mi>w</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:msub><mml:mtext>&#x03A3;</mml:mtext><mml:mrow><mml:mi>w</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mo>&#x00AC;</mml:mo><mml:mi>E</mml:mi></mml:mrow></mml:msub><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>w</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>V</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x03D5;</mml:mi><mml:mo>,</mml:mo><mml:mi>w</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x2265;</mml:mo><mml:msub><mml:mtext>&#x03A3;</mml:mtext><mml:mrow><mml:mi>w</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mi>E</mml:mi></mml:mrow></mml:msub><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>w</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>V</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x03C8;</mml:mi><mml:mo>,</mml:mo><mml:mi>w</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:msub><mml:mtext>&#x03A3;</mml:mtext><mml:mrow><mml:mi>w</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mo>&#x00AC;</mml:mo><mml:mi>E</mml:mi></mml:mrow></mml:msub><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>w</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>V</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x03D5;</mml:mi><mml:mo>,</mml:mo><mml:mi>w</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>


The two right summands on either side cancel out, so:

<disp-formula id="FD12">
<label>(C)</label>
<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="block" id="M12">
<mml:mrow>
<mml:msub><mml:mtext>&#x03A3;</mml:mtext><mml:mrow><mml:mi>w</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mi>E</mml:mi></mml:mrow></mml:msub><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>w</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>V</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x03D5;</mml:mi><mml:mo>,</mml:mo><mml:mi>w</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x2265;</mml:mo><mml:msub><mml:mtext>&#x03A3;</mml:mtext><mml:mrow><mml:mi>w</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mi>E</mml:mi></mml:mrow></mml:msub><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>w</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>V</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x03C8;</mml:mi><mml:mo>,</mml:mo><mml:mi>w</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>

We can now divide both sides by <italic>C</italic>(<italic>E</italic>), which is not zero since we have assumed that <italic>C</italic>(<italic>&#x00B7;|E</italic>) is defined, getting:

<disp-formula id="FD13">
<label>(D)</label>
<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="block" id="M13">
<mml:mrow>
<mml:msub><mml:mtext>&#x03A3;</mml:mtext><mml:mrow><mml:mi>w</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mi>E</mml:mi></mml:mrow></mml:msub><mml:mfrac><mml:mrow><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>w</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac><mml:mi>V</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x03D5;</mml:mi><mml:mo>,</mml:mo><mml:mi>w</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x2265;</mml:mo><mml:msub><mml:mtext>&#x03A3;</mml:mtext><mml:mrow><mml:mi>w</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mi>E</mml:mi></mml:mrow></mml:msub><mml:mfrac><mml:mrow><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>w</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac><mml:mi>V</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x03C8;</mml:mi><mml:mo>,</mml:mo><mml:mi>w</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>


Then, noticing that for any <italic>w &#x2208; E</italic>, <italic>C</italic>(<italic>w</italic>) = <italic>C</italic>(<italic>w</italic> &#x2227; <italic>E</italic>), and since <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I87"><mml:mrow><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>w</mml:mi><mml:mo>&#x2223;</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>w</mml:mi><mml:mo>&#x2227;</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac></mml:mrow></mml:math></inline-formula> , we have:

<disp-formula id="FD14">
<label>(E)</label>
<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="block" id="M14">
<mml:mrow>
<mml:msub><mml:mtext>&#x03A3;</mml:mtext><mml:mrow><mml:mi>w</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mi>E</mml:mi></mml:mrow></mml:msub><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>w</mml:mi><mml:mo>&#x2223;</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>V</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x03D5;</mml:mi><mml:mo>,</mml:mo><mml:mi>w</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x2265;</mml:mo><mml:msub><mml:mtext>&#x03A3;</mml:mtext><mml:mrow><mml:mi>w</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mi>E</mml:mi></mml:mrow></mml:msub><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>w</mml:mi><mml:mo>&#x2223;</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>V</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x03C8;</mml:mi><mml:mo>,</mml:mo><mml:mi>w</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>


Which, since <italic>C</italic>(<italic>w|E</italic>) = 0 for any <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I88"><mml:mrow><mml:mi>w</mml:mi><mml:mo>&#x2209;</mml:mo><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula>, gives:

<disp-formula id="FD15">
<label>(F)</label>
<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="block" id="M15">
<mml:mrow><mml:msub><mml:mtext>&#x03A3;</mml:mtext><mml:mrow><mml:mi>w</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mi mathvariant='script'>W</mml:mi></mml:mrow></mml:msub><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>w</mml:mi><mml:mo>&#x2223;</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>V</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x03D5;</mml:mi><mml:mo>,</mml:mo><mml:mi>w</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x2265;</mml:mo><mml:msub><mml:mtext>&#x03A3;</mml:mtext><mml:mrow><mml:mi>w</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mi mathvariant='script'>W</mml:mi></mml:mrow></mml:msub><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>w</mml:mi><mml:mo>&#x2223;</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>V</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x03C8;</mml:mi><mml:mo>,</mml:mo><mml:mi>w</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>

Which by definition just says that:

<disp-formula id="FD16">
<label>(G)</label>
<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="block" id="M16">
<mml:mrow><mml:mi>E</mml:mi><mml:msub><mml:mi>V</mml:mi><mml:mrow><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x22C5;</mml:mo><mml:mo>&#x2223;</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msub><mml:mi>&#x03D5;</mml:mi><mml:mo>&#x2265;</mml:mo><mml:mi>E</mml:mi><mml:msub><mml:mi>V</mml:mi><mml:mrow><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x22C5;</mml:mo><mml:mo>&#x2223;</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msub><mml:mi>&#x03C8;</mml:mi><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p>
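<p>(For readers who want a concrete check of the chain from (A) to (G), the following sketch, with arbitrary stand-in credences and act values of my own devising, verifies that the &#x00AC;<italic>E</italic>-summands on the two sides of (A) are identical and hence that the comparison reduces to the comparison of conditional expected values in (G).)</p>

```python
# Hypothetical numerical check of the conditional expectation lemma.
worlds = ["w1", "w2", "w3", "w4"]
C = {"w1": 0.1, "w2": 0.3, "w3": 0.2, "w4": 0.4}  # prior credences
E = {"w1", "w2"}                                   # the learnt proposition

# Stand-in values of the acts phi and psi at each world.
V = {
    "phi": {"w1": 5.0, "w2": 1.0, "w3": 2.0, "w4": 7.0},
    "psi": {"w1": 2.0, "w2": 0.0, "w3": 9.0, "w4": 3.0},
}

def ev_unconditional(act_on_E, act_off_E):
    """Line (A): expected value of the unconditional plan that performs
    act_on_E if E is true and act_off_E otherwise."""
    return sum(C[w] * V[act_on_E if w in E else act_off_E][w] for w in worlds)

def ev_conditional(act):
    """Line (G): expected value of act by the lights of C(. | E)."""
    C_E = sum(C[w] for w in E)
    return sum((C[w] / C_E) * V[act][w] for w in worlds if w in E)

# (A) compares phi_E phi_notE with psi_E phi_notE; the not-E summands are
# shared, so the inequality holds just in case the conditional expected
# values in (G) compare the same way.
lhs_A = ev_unconditional("phi", "phi")
rhs_A = ev_unconditional("psi", "phi")
print((lhs_A >= rhs_A) == (ev_conditional("phi") >= ev_conditional("psi")))  # True
```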
</app>
</app-group>
<fn-group>
<fn id="fn1"><label>1.</label> <p><xref rid="r33" ref-type="bibr">Schoenfield (2017)</xref> calls this &#x201C;exogenous evidence&#x201D;.</p></fn>
<fn id="fn2"><label>2.</label> <p>I state this claim formally in &#x00A7;<xref rid="s3" ref-type="sec">3</xref>.</p></fn>
<fn id="fn3"><label>3.</label> <p>Formally, we can define <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I89"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula> by letting <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I90"><mml:mi>&#x2112;</mml:mi></mml:math></inline-formula> be a function from worlds to propositions, which assigns, to every world, the proposition learnt at that world. Then we let <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I91"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula> be <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I92"><mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mi>w</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mi>&#x1D4B2;</mml:mi><mml:mo>&#x2223;</mml:mo><mml:mi>&#x2112;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>w</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mi>E</mml:mi></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>.</p></fn>
<fn id="fn4"><label>4.</label> <p>If we did not make this assumption, the conditionalized credence function <italic>C</italic>(<italic>&#x00B7;|E</italic><sub><italic>i</italic></sub>) wouldn&#x2019;t be guaranteed to be defined for all <italic>E</italic><sub><italic>i</italic></sub>, so that there wouldn&#x2019;t always be a unique plan to conditionalize. This would force us to think in terms of <italic>classes</italic> of such plans, unnecessarily complicating the discussion.</p></fn>
<fn id="fn5"><label>5.</label> <p>Since the set of worlds is finite, I understand uncertainty as having a credence below one.</p></fn>
<fn id="fn6"><label>6.</label> <p>I use &#x201C;<italic>&#x2192;</italic>&#x201D; and &#x201C;<italic>&#x2194;</italic>&#x201D; for the material conditional and biconditional, respectively.</p></fn>
<fn id="fn7"><label>7.</label> <p>For an externalist, sceptical scenarios are counterexamples to <sc>evidential transparency</sc> because they are failures of <italic>Negative Access</italic> &#x2014; the claim that if one&#x2019;s evidence doesn&#x2019;t entail some proposition, then it entails that it doesn&#x2019;t entail that proposition. Many externalists also think that there are failures of <italic>Positive Access</italic> &#x2014; the claim that if one&#x2019;s evidence entails some proposition, it entails that it entails it &#x2014; such as <xref rid="r37" ref-type="bibr">Williamson&#x2019;s (2014)</xref> <italic>Unmarked Clock</italic> case, though this is more controversial.</p></fn>
<fn id="fn8"><label>8.</label> <p><xref rid="r28" ref-type="bibr">Pettigrew (2016</xref>, ch. 15), following <xref rid="r27" ref-type="bibr">Paul (2014)</xref>, discusses a plan coherence principle called &#x201C;Diachronic Continence&#x201D; in the context of arguments for <sc>conditionalization</sc>. The plan coherence principles discussed in this paper are importantly different in that they only refer to the agent&#x2019;s preferences, rather than their intentions, and only say that one ought to be plan coherent with respect to specific classes of plans.</p></fn>
<fn id="fn9"><label>9.</label> <p>They don&#x2019;t explicitly restrict themselves to such experiments, but make assumptions that are equivalent to such a restriction (<xref rid="r33" ref-type="bibr">Schoenfield, 2017</xref>): that the agent is certain to learn true evidence, and that <italic>E</italic><sub>1</sub>, <italic>E</italic><sub>2</sub>, &#x2026; form a partition.</p></fn>
<fn id="fn10"><label>10.</label> <p>The framework of this paper is based on that of <xref rid="r16" ref-type="bibr">Greaves &#x0026; Wallace (2006)</xref> and <xref rid="r33" ref-type="bibr">Schoenfield (2017)</xref>, with two important differences: it covers practical plans as well as doxastic plans, and conditional plans as well as unconditional plans.</p></fn>
<fn id="fn11"><label>11.</label> <p>Formally, we can think of a practical act as a function from worlds to real numbers, which represent the utility of the act&#x2019;s outcome at that world.</p></fn>
<fn id="fn12"><label>12.</label> <p>That is, &#x201C;<italic>&#x03D5;</italic> &#x007E; <italic>&#x03C8;</italic>&#x201D; means that <italic>&#x03D5;</italic> &#x2AB0; <italic>&#x03C8;</italic> and <italic>&#x03C8;</italic> &#x2AB0; <italic>&#x03D5;</italic>, and &#x201C;<italic>&#x03D5;</italic> &#x227B; <italic>&#x03C8;</italic>&#x201D; means that <italic>&#x03D5;</italic> &#x2AB0; <italic>&#x03C8;</italic> and that it is not the case that <italic>&#x03C8;</italic> &#x2AB0; <italic>&#x03D5;</italic>.</p></fn>
<fn id="fn13"><label>13.</label> <p>Formally, the expected value of unconditional plan <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I93"><mml:mrow><mml:mi mathvariant='script'>P</mml:mi></mml:mrow></mml:math></inline-formula> by the lights of credence function c is <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I94"><mml:mrow><mml:mi>E</mml:mi><mml:msub><mml:mi>V</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi mathvariant='script'>P</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:mi>d</mml:mi><mml:mi>f</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>&#x03A3;</mml:mi><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x03D5;</mml:mi><mml:mo>&#x232A;</mml:mo><mml:mo>&#x2208;</mml:mo><mml:mi mathvariant='script'>P</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>&#x03A3;</mml:mi><mml:mrow><mml:mi>w</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mi>p</mml:mi></mml:mrow></mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>w</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mi>V</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>&#x03D5;</mml:mi><mml:mo>,</mml:mo><mml:mi>w</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>, where <italic>V(&#x03D5;,w)</italic> is the value of act <italic>&#x03D5;</italic> at world <italic>w</italic>.</p></fn>
<fn id="fn14"><label>14.</label> <p>I remain neutral on whether the admissibility principles are narrow- or wide-scope norms. Whether <sc>conditionalization</sc>, as justified by the arguments in this paper, is narrow- or wide-scope depends on this choice. See <xref rid="r26" ref-type="bibr">Meacham (2015)</xref> for more discussion.</p></fn>
<fn id="fn15"><label>15.</label> <p>While this inference is intuitive, it can be made more rigorous by appeal to the principles presented in &#x00A7;&#x00A7;<xref rid="s20" ref-type="sec">7.2</xref>, as follows: Suppose for reductio that, for some <italic>E<sub>i</sub></italic>, the plan to conditionalize on <italic>E</italic><sub><italic>i</italic></sub> if they learn <italic>E</italic><sub><italic>i</italic></sub> is not the best admissible plan, so that for some credence function <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I95"><mml:mrow><mml:mi>c</mml:mi><mml:mo>*</mml:mo><mml:mo>&#x2260;</mml:mo><mml:mi>C</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x30FB;</mml:mo><mml:mo>&#x007C;</mml:mo><mml:msub><mml:mi>E</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>, it is not the case that <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I96"><mml:mrow><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:mi>C</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x22C5;</mml:mo><mml:mo>&#x2223;</mml:mo><mml:msub><mml:mi>E</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow><mml:mo>&#x227B;</mml:mo><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msup><mml:mi>c</mml:mi><mml:mo>*</mml:mo></mml:msup></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>. 
Then, by <sc>conditional completeness</sc>, <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I97"><mml:mrow><mml:mrow><mml:mo>&#x2329;</mml:mo> <mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msup><mml:mi>c</mml:mi><mml:mo>*</mml:mo></mml:msup></mml:mrow> <mml:mo>&#x232A;</mml:mo></mml:mrow><mml:mo>&#x227D;</mml:mo><mml:mrow><mml:mo>&#x2329;</mml:mo> <mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:mi>C</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x22C5;</mml:mo><mml:mo>&#x2223;</mml:mo><mml:msub><mml:mi>E</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow> <mml:mo>&#x232A;</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>. Moreover, for every <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I98"><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x2260;</mml:mo><mml:mi>j</mml:mi><mml:mo>,</mml:mo><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:mi>C</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x22C5;</mml:mo><mml:mo>&#x2223;</mml:mo><mml:msub><mml:mi>E</mml:mi><mml:mi>j</mml:mi></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow><mml:mo>&#x227D;</mml:mo><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi 
mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:mi>C</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x22C5;</mml:mo><mml:mo>&#x2223;</mml:mo><mml:msub><mml:mi>E</mml:mi><mml:mi>j</mml:mi></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>, by <sc>reflexivity</sc>. Hence, since <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I99"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mi>n</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> is a partition (only one proposition can be the strongest claim that the agent has learnt), by <sc>conditional dominance</sc>, we would have <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I100"><mml:mrow><mml:mrow><mml:mo>{</mml:mo> <mml:mrow><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:mrow><mml:mo>&#x2329;</mml:mo> <mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mi>C</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x22C5;</mml:mo><mml:mo>&#x2223;</mml:mo><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow> <mml:mo>&#x232A;</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mo>&#x2329;</mml:mo> <mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msup><mml:mi>c</mml:mi><mml:mo>*</mml:mo></mml:msup></mml:mrow>
<mml:mo>&#x232A;</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mo>&#x2329;</mml:mo> <mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mi>C</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x22C5;</mml:mo><mml:mo>&#x2223;</mml:mo><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow> <mml:mo>&#x232A;</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo></mml:mrow> <mml:mo>}</mml:mo></mml:mrow><mml:mo>&#x227D;</mml:mo><mml:mtext>PCond</mml:mtext></mml:mrow></mml:math></inline-formula>, so that PCond wouldn&#x2019;t be the best admissible unconditional plan, contrary to <sc>plan conditionalization</sc>. Hence our assumption must be false, and <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I101"><mml:mrow><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:mi>C</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x22C5;</mml:mo><mml:mo>&#x2223;</mml:mo><mml:msub><mml:mi>E</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> must be best.</p></fn>
<fn id="fn16"><label>16.</label> <p>While Greaves &#x0026; Wallace themselves aren&#x2019;t explicit about the fact that their argument is a plan-based one, this is made explicit in (<xref rid="r12" ref-type="bibr">Easwaran, 2013</xref>) and (<xref rid="r28" ref-type="bibr">Pettigrew, 2016</xref>, ch. 14).</p></fn>
<fn id="fn17"><label>17.</label> <p>See <xref rid="r28" ref-type="bibr">Pettigrew (2016)</xref> for an introduction and survey. I should note that there are other accuracy-based arguments for <sc>conditionalization</sc>. One, namely (<xref rid="r21" ref-type="bibr">Leitgeb &#x0026; Pettigrew, 2010</xref>), is not, I think, vulnerable to the criticisms that I describe here, although I think it has other problems (<xref rid="r28" ref-type="bibr">Pettigrew, 2016</xref>, ch. 15). Others &#x2014; <xref rid="r12" ref-type="bibr">Easwaran (2013)</xref> and <xref rid="r3" ref-type="bibr">Briggs &#x0026; Pettigrew (2020)</xref> &#x2014; make similar assumptions to Greaves &#x0026; Wallace&#x2019;s argument, and so I believe that what I say here carries over to their arguments. In particular, both Easwaran and Briggs &#x0026; Pettigrew assume that the agent&#x2019;s possible evidential propositions are guaranteed to be true and form a partition, assumptions which are equivalent to the claim that the agent can learn only transparent evidence (<xref rid="r33" ref-type="bibr">Schoenfield, 2017</xref>).</p></fn>
<fn id="fn18"><label>18.</label> <p>Formally, for any two distinct credence functions <italic>c</italic>, <italic>c</italic>*, if <italic>c</italic> is a probability function, then <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I102"><mml:mrow><mml:mi>E</mml:mi><mml:msub><mml:mi mathvariant='fraktur'>a</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mi>c</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mi>E</mml:mi><mml:msub><mml:mi mathvariant='fraktur'>a</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:msup><mml:mi>c</mml:mi><mml:mo>*</mml:mo></mml:msup></mml:mrow></mml:math></inline-formula>, where <italic>a</italic> is the accuracy measure. <xref rid="r16" ref-type="bibr">Greaves &#x0026; Wallace (2006</xref>, p. 620) use the term &#x201C;everywhere strongly stable&#x201D; for a strictly proper accuracy measure.</p></fn>
<fn id="fn19"><label>19.</label> <p>While not all treatments of the <italic>DDBA</italic> interpret it as a plan-based argument, most philosophers who explicitly consider the distinction between <sc>conditionalization</sc> and <sc>plan conditionalization</sc> agree that it can only establish the former, if at all, via the latter. For particularly clear statements of this claim, see <xref rid="r8" ref-type="bibr">Christensen (1996)</xref> and <xref rid="r28" ref-type="bibr">Pettigrew (2016</xref>, p. 188).</p></fn>
<fn id="fn20"><label>20.</label> <p>Similar conclusions are reached by <xref rid="r4" ref-type="bibr">Bronfman (2014)</xref> in the case of the <italic>Accuracy Argument</italic> and by <xref rid="r13" ref-type="bibr">Gallow (2019)</xref> in the case of the <italic>DDBA</italic>. <xref rid="r11" ref-type="bibr">Das (2019)</xref> proves a result similar to Schoenfield&#x2019;s in a framework where the agent updates their <italic>ur-prior</italic> on their <italic>total</italic> evidence. Though I will not attempt to show it here, I believe that my argument applies to Das&#x2019;s framework, too.</p></fn>
<fn id="fn21"><label>21.</label> <p>This will be the case both in failures of Positive and Negative Access (see Note 7), since &#x201C;learns&#x201D; and &#x201C;evidence&#x201D; stand for the <italic>strongest</italic> proposition that the agent has learnt.</p></fn>
<fn id="fn22"><label>22.</label> <p>The assumption that evidence is factive is widespread in formal epistemology, including among externalists (<xref rid="r23" ref-type="bibr">Littlejohn, 2011</xref>; <xref rid="r36" ref-type="bibr">Williamson, 2000</xref>). It is not without critics, however (<xref rid="r1" ref-type="bibr">Arnold, 2013</xref>; <xref rid="r9" ref-type="bibr">Comesa&#x00F1;a &#x0026; McGrath, 2016</xref>; <xref rid="r10" ref-type="bibr">Comesa&#x00F1;a &#x0026; Kantin, 2010</xref>; <xref rid="r29" ref-type="bibr">Rescorla, 2021</xref>; <xref rid="r30" ref-type="bibr">Rizzieri, 2011</xref>). Such critics will have an additional reason to reject <sc>a-e admissibility</sc>: perhaps learning that <italic>E</italic> can warrant a reconsideration of one&#x2019;s plan for <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I103"><mml:mrow><mml:mi mathvariant='double-struck'>L</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:math></inline-formula>.</p></fn>
<fn id="fn23"><label>23.</label> <p>To be more precise, the agent commits themselves by <italic>preferring</italic> the plan to other plans with the same condition.</p></fn>
<fn id="fn24"><label>24.</label> <p>In terms of the argument in &#x00A7;&#x00A7;<xref rid="s16" ref-type="sec">6.1</xref>, this kind of sceptic would disagree with <sc>irrelevance</sc> rather than <sc>implementation</sc>.</p></fn>
<fn id="fn25"><label>25.</label> <p>This principle is not needed to prove the <sc>conditional expectation lemma</sc> below, but will be used in the next section of the paper.</p></fn>
<fn id="fn26"><label>26.</label> <p>See <xref rid="r25" ref-type="bibr">Maher (1993</xref>, p. 10) for a discussion of the different senses of the term &#x201C;The Sure-Thing Principle&#x201D;.</p></fn>
<fn id="fn27"><label>27.</label> <p>Since the simplifying assumptions in &#x00A7;<xref rid="s2" ref-type="sec">2</xref> guarantee that <italic>C</italic>(&#x00B7;|<italic>E</italic>) is defined for any <italic>E</italic>, I omit the &#x201C;&#x2026;if <italic>C</italic>(&#x00B7;|<italic>E</italic>) is defined&#x201D; qualifier in my presentation of the two arguments.</p></fn>
<fn id="fn28"><label>28.</label> <p>Another alternative would be to take a hybrid approach: think about doxastic rather than practical planning, but evaluate doxastic plans by the practical utility of the decisions they justify. This is the approach of <xref rid="r5" ref-type="bibr">Peter M. Brown&#x2019;s (1976)</xref> argument for <sc>conditionalization</sc>. Like the other arguments I have discussed, Brown&#x2019;s argument is restricted to experiments with transparent evidence, but it can be adapted, in the way I have adapted Greaves &#x0026; Wallace&#x2019;s argument, to rid it of this restriction and render it consistent with <sc>externalism</sc>, though showing this is beyond the scope of this paper.</p></fn>
<fn id="fn29"><label>29.</label> <p><inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I104"><mml:mrow><mml:mi>E</mml:mi><mml:msub><mml:mi>V</mml:mi><mml:mrow><mml:mi>C</mml:mi><mml:mfenced><mml:mrow><mml:mo>&#x22C5;</mml:mo><mml:mo>&#x2223;</mml:mo><mml:mi>E</mml:mi></mml:mrow></mml:mfenced></mml:mrow></mml:msub><mml:mi mathvariant='script'>A</mml:mi><mml:mo>=</mml:mo><mml:mi>C</mml:mi><mml:mfenced><mml:mrow><mml:mi>A</mml:mi><mml:mo>&#x2223;</mml:mo><mml:mi>E</mml:mi></mml:mrow></mml:mfenced><mml:mfenced><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>C</mml:mi><mml:mi>E</mml:mi></mml:msub><mml:mfenced><mml:mi>A</mml:mi></mml:mfenced></mml:mrow></mml:mfenced><mml:mo>+</mml:mo><mml:mfenced><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:mi>C</mml:mi><mml:mfenced><mml:mrow><mml:mi>A</mml:mi><mml:mo>&#x2223;</mml:mo><mml:mi>E</mml:mi></mml:mrow></mml:mfenced></mml:mrow></mml:mfenced><mml:mfenced><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>C</mml:mi><mml:mi>E</mml:mi></mml:msub><mml:mfenced><mml:mi>A</mml:mi></mml:mfenced></mml:mrow></mml:mfenced><mml:mo>=</mml:mo><mml:mi>C</mml:mi><mml:mfenced><mml:mrow><mml:mi>A</mml:mi><mml:mo>&#x2223;</mml:mo><mml:mi>E</mml:mi></mml:mrow></mml:mfenced><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>C</mml:mi><mml:mi>E</mml:mi></mml:msub><mml:mfenced><mml:mi>A</mml:mi></mml:mfenced></mml:mrow></mml:math></inline-formula>, which is either greater or smaller than <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I105"><mml:mrow><mml:mi>E</mml:mi><mml:msub><mml:mi>V</mml:mi><mml:mrow><mml:mi>C</mml:mi><mml:mfenced><mml:mrow><mml:mo>&#x22C5;</mml:mo><mml:mo>&#x2223;</mml:mo><mml:mi>E</mml:mi></mml:mrow></mml:mfenced></mml:mrow></mml:msub><mml:mi mathvariant='script'>O</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula> since <inline-formula><mml:math 
xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I106"><mml:mrow><mml:msub><mml:mi>C</mml:mi><mml:mi>E</mml:mi></mml:msub><mml:mfenced><mml:mi>A</mml:mi></mml:mfenced><mml:mo>&#x2260;</mml:mo><mml:mi>C</mml:mi><mml:mfenced><mml:mrow><mml:mi>A</mml:mi><mml:mo>&#x2223;</mml:mo><mml:mi>E</mml:mi></mml:mrow></mml:mfenced></mml:mrow></mml:math></inline-formula>.</p></fn>
<fn id="fn30"><label>30.</label> <p><inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I107"><mml:mrow><mml:mi>E</mml:mi><mml:msub><mml:mi>V</mml:mi><mml:mrow><mml:msub><mml:mi>C</mml:mi><mml:mi>E</mml:mi></mml:msub></mml:mrow></mml:msub><mml:mi>&#x1D49C;</mml:mi><mml:mo>=</mml:mo><mml:msub><mml:mi>C</mml:mi><mml:mi>E</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>C</mml:mi><mml:mi>E</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>C</mml:mi><mml:mi>E</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>C</mml:mi><mml:mi>E</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>=</mml:mo><mml:mi>E</mml:mi><mml:msub><mml:mi>V</mml:mi><mml:mrow><mml:msub><mml:mi>C</mml:mi><mml:mi>E</mml:mi></mml:msub></mml:mrow></mml:msub><mml:mi>&#x1D4AA;</mml:mi></mml:mrow></mml:math></inline-formula>.</p></fn>
<fn id="fn31"><label>31.</label> <p>A more careful presentation of this version of this argument would involve reformulating <sc>epistemic admissibility</sc> in terms of rational <italic>preference</italic>, instead of having the &#x201C;actionable&#x201D;/&#x201C;able to perform&#x201D; qualifier. The principle would then say that the agent is required to prefer <italic>&#x03D5;</italic> to <italic>&#x03C8;</italic> if they learnt that <italic>E</italic> and antecedently preferred <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I108"><mml:mrow><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x03D5;</mml:mi></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> or <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="I109"><mml:mrow><mml:mrow><mml:mo>&#x2329;</mml:mo><mml:mrow><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x03C8;</mml:mi></mml:mrow><mml:mo>&#x232A;</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>.</p></fn>
<fn id="fn32"><label>32.</label> <p><xref rid="r2" ref-type="bibr">Barnett (2016)</xref> surveys similar objections to transparency accounts of self-knowledge.</p></fn>
<fn id="fn33"><label>33.</label> <p>See <xref rid="r6" ref-type="bibr">Caie (2013)</xref>, <xref rid="r7" ref-type="bibr">Carr (2017)</xref>, <xref rid="r15" ref-type="bibr">Greaves (2013)</xref>, and <xref rid="r20" ref-type="bibr">Konek &#x0026; Levinstein (2019)</xref>.</p></fn>
<fn id="fn34"><label>34.</label> <p>Thanks to a reviewer for suggesting that I compare my view to Gallow&#x2019;s.</p></fn>
<fn id="fn35"><label>35.</label> <p>Gallow&#x2019;s terminology differs from mine: my <sc>externalism</sc> he calls &#x201C;Certainty Externalism&#x201D;, while the principle he calls &#x201C;Externalism&#x201D; is the view that one can have evidence that doesn&#x2019;t tell one what one&#x2019;s evidence is. An externalist in his sense is thus a denier of <sc>evidential transparency</sc>, in my terminology.</p></fn>
<fn id="fn36"><label>36.</label> <p>Strictly speaking, Gallow argues for the claim that one should be <italic>disposed</italic>, rather than <italic>plan</italic>, to conditionalize. My presentation here transposes his argument into the key of planning.</p></fn>
<fn id="fn37"><label>37.</label> <p>Here I am following <xref rid="r32" ref-type="bibr">Schoenfield (2015)</xref>, who suggests that these two ways of evaluating plans correspond to two different meanings of &#x201C;rational&#x201D;.</p></fn>
<fn id="fn38"><label>38.</label> <p>See <xref rid="r14" ref-type="bibr">Gallow (2021</xref>, p. 501).</p></fn>
<fn id="fn39"><label>39.</label> <p>Thanks to an anonymous reviewer for suggesting this worry.</p></fn>
<fn id="fn40"><label>40.</label> <p><xref rid="r18" ref-type="bibr">Hughes (2022)</xref> and <xref rid="r34" ref-type="bibr">Srinivasan (2015)</xref> make this point.</p></fn>
<fn id="fn41"><label>41.</label> <p>I am particularly grateful to Michael Caie and Dmitri Gallow for invaluable advice and feedback on many drafts of this paper. For very helpful comments, I am also grateful to Branden Fitelson, Hanz MacDonald, James Shaw, two anonymous reviewers, and the editors of this journal, as well as to audiences at the 2019 University of Pittsburgh Dissertation Seminar, the 2019 meeting of the Society for Exact Philosophy, and the 2019 Formal Epistemology Workshop.</p></fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="r1"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Arnold</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2013</year>). <article-title>Some evidence is false</article-title>. <source><italic>Australasian Journal of Philosophy</italic></source>, <volume>91</volume>(<issue>1</issue>), <fpage>165</fpage>&#x2013;<lpage>172</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1080/00048402.2011.637937">https://doi.org/10.1080/00048402.2011.637937</ext-link></mixed-citation></ref>
<ref id="r2"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Barnett</surname>, <given-names>D. J.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Inferential justification and the transparency of belief</article-title>. <source><italic>No&#x00FB;s</italic></source>, <volume>50</volume>(<issue>1</issue>), <fpage>184</fpage>&#x2013;<lpage>212</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1111/nous.12088">https://doi.org/10.1111/nous.12088</ext-link></mixed-citation></ref>
<ref id="r3"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Briggs</surname>, <given-names>R.</given-names></string-name>, &#x0026; <string-name><surname>Pettigrew</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2020</year>). <article-title>An accuracy-dominance argument for conditionalization</article-title>. <source><italic>No&#x00FB;s</italic></source>, <volume>54</volume>(<issue>1</issue>), <fpage>162</fpage>&#x2013;<lpage>181</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1111/nous.12258">https://doi.org/10.1111/nous.12258</ext-link></mixed-citation></ref>
<ref id="r4"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Bronfman</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Conditionalization and not knowing that one knows</article-title>. <source><italic>Erkenntnis</italic></source>, <volume>79</volume>(<issue>4</issue>), <fpage>871</fpage>&#x2013;<lpage>892</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1007/s10670-013-9570-0">https://doi.org/10.1007/s10670-013-9570-0</ext-link></mixed-citation></ref>
<ref id="r5"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Brown</surname>, <given-names>P. M.</given-names></string-name></person-group> (<year>1976</year>). <article-title>Conditionalization and expected utility</article-title>. <source><italic>Philosophy of Science</italic></source>, <volume>43</volume>(<issue>3</issue>), <fpage>415</fpage>&#x2013;<lpage>419</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1086/288696">https://doi.org/10.1086/288696</ext-link></mixed-citation></ref>
<ref id="r6"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Caie</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2013</year>). <article-title>Rational probabilistic incoherence</article-title>. <source><italic>The Philosophical Review</italic></source>, <volume>122</volume>(<issue>4</issue>), <fpage>527</fpage>&#x2013;<lpage>575</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1215/00318108-2315288">https://doi.org/10.1215/00318108-2315288</ext-link></mixed-citation></ref>
<ref id="r7"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Carr</surname>, <given-names>J. R.</given-names></string-name></person-group> (<year>2017</year>). <article-title>Epistemic utility theory and the aim of belief</article-title>. <source><italic>Philosophy and Phenomenological Research</italic></source>, <volume>95</volume>(<issue>3</issue>), <fpage>511</fpage>&#x2013;<lpage>534</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1111/phpr.12436">https://doi.org/10.1111/phpr.12436</ext-link></mixed-citation></ref>
<ref id="r8"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Christensen</surname>, <given-names>D.</given-names></string-name></person-group> (<year>1996</year>). <article-title>Dutch-book arguments depragmatized: Epistemic consistency for partial believers</article-title>. <source><italic>The Journal of Philosophy</italic></source>, <volume>93</volume>(<issue>9</issue>), <fpage>450</fpage>&#x2013;<lpage>479</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.2307/2940893">https://doi.org/10.2307/2940893</ext-link></mixed-citation></ref>
<ref id="r9"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Comesa&#x00F1;a</surname>, <given-names>J.</given-names></string-name>, &#x0026; <string-name><surname>McGrath</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Perceptual reasons</article-title>. <source><italic>Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition</italic></source>, <volume>173</volume>(<issue>4</issue>), <fpage>991</fpage>&#x2013;<lpage>1006</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1007/s11098-015-0542-x">https://doi.org/10.1007/s11098-015-0542-x</ext-link></mixed-citation></ref>
<ref id="r10"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Comesa&#x00F1;a</surname>, <given-names>J.</given-names></string-name>, &#x0026; <string-name><surname>Kantin</surname>, <given-names>H.</given-names></string-name></person-group> (<year>2010</year>). <article-title>Is evidence knowledge?</article-title> <source><italic>Philosophy and Phenomenological Research</italic></source>, <volume>80</volume>(<issue>2</issue>), <fpage>447</fpage>&#x2013;<lpage>454</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1111/j.1933-1592.2010.00323.x">https://doi.org/10.1111/j.1933-1592.2010.00323.x</ext-link></mixed-citation></ref>
<ref id="r11"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Das</surname>, <given-names>N.</given-names></string-name></person-group> (<year>2019</year>). <article-title>Accuracy and ur-prior conditionalization</article-title>. <source><italic>The Review of Symbolic Logic</italic></source>, <volume>12</volume>(<issue>1</issue>), <fpage>62</fpage>&#x2013;<lpage>96</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1017/S1755020318000035">https://doi.org/10.1017/S1755020318000035</ext-link></mixed-citation></ref>
<ref id="r12"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Easwaran</surname>, <given-names>K.</given-names></string-name></person-group> (<year>2013</year>). <article-title>Expected accuracy supports conditionalization&#x2014;and conglomerability and reflection</article-title>. <source><italic>Philosophy of Science</italic></source>, <volume>80</volume>(<issue>1</issue>), <fpage>119</fpage>&#x2013;<lpage>142</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1086/668879">https://doi.org/10.1086/668879</ext-link></mixed-citation></ref>
<ref id="r13"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Gallow</surname>, <given-names>J. D.</given-names></string-name></person-group> (<year>2019</year>). <article-title>Diachronic Dutch books and evidential import</article-title>. <source><italic>Philosophy and Phenomenological Research</italic></source>, <volume>99</volume>(<issue>1</issue>), <fpage>49</fpage>&#x2013;<lpage>80</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1111/phpr.12471">https://doi.org/10.1111/phpr.12471</ext-link></mixed-citation></ref>
<ref id="r14"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Gallow</surname>, <given-names>J. D.</given-names></string-name></person-group> (<year>2021</year>). <article-title>Updating for externalists</article-title>. <source><italic>No&#x00FB;s</italic></source>, <volume>55</volume>(<issue>3</issue>), <fpage>487</fpage>&#x2013;<lpage>516</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1111/nous.12307">https://doi.org/10.1111/nous.12307</ext-link></mixed-citation></ref>
<ref id="r15"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Greaves</surname>, <given-names>H.</given-names></string-name></person-group> (<year>2013</year>). <article-title>Epistemic decision theory</article-title>. <source><italic>Mind</italic></source>, <volume>122</volume>(<issue>488</issue>), <fpage>915</fpage>&#x2013;<lpage>952</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1093/mind/fzt090">https://doi.org/10.1093/mind/fzt090</ext-link></mixed-citation></ref>
<ref id="r16"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Greaves</surname>, <given-names>H.</given-names></string-name>, &#x0026; <string-name><surname>Wallace</surname>, <given-names>D.</given-names></string-name></person-group> (<year>2006</year>). <article-title>Justifying conditionalization: Conditionalization maximizes expected epistemic utility</article-title>. <source><italic>Mind</italic></source>, <volume>115</volume>(<issue>459</issue>), <fpage>607</fpage>&#x2013;<lpage>632</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1093/mind/fzl607">https://doi.org/10.1093/mind/fzl607</ext-link></mixed-citation></ref>
<ref id="r17"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Hild</surname>, <given-names>M.</given-names></string-name></person-group> (<year>1998</year>). <article-title>The coherence argument against conditionalization</article-title>. <source><italic>Synthese</italic></source>, <volume>115</volume>(<issue>2</issue>), <fpage>229</fpage>&#x2013;<lpage>258</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1023/A:1005082908147">https://doi.org/10.1023/A:1005082908147</ext-link></mixed-citation></ref>
<ref id="r18"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Hughes</surname>, <given-names>N.</given-names></string-name></person-group> (<year>2022</year>). <article-title>Epistemology without guidance</article-title>. <source><italic>Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition</italic></source>, <volume>179</volume>(<issue>1</issue>), <fpage>163</fpage>&#x2013;<lpage>196</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1007/s11098-021-01655-8">https://doi.org/10.1007/s11098-021-01655-8</ext-link></mixed-citation></ref>
<ref id="r19"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Joyce</surname>, <given-names>J. M.</given-names></string-name></person-group> (<year>1998</year>). <article-title>A nonpragmatic vindication of probabilism</article-title>. <source><italic>Philosophy of Science</italic></source>, <volume>65</volume>(<issue>4</issue>), <fpage>575</fpage>&#x2013;<lpage>603</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1086/392661">https://doi.org/10.1086/392661</ext-link></mixed-citation></ref>
<ref id="r20"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Konek</surname>, <given-names>J.</given-names></string-name>, &#x0026; <string-name><surname>Levinstein</surname>, <given-names>B. A.</given-names></string-name></person-group> (<year>2019</year>). <article-title>The foundations of epistemic decision theory</article-title>. <source><italic>Mind</italic></source>, <volume>128</volume>(<issue>509</issue>), <fpage>69</fpage>&#x2013;<lpage>107</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1093/mind/fzw044">https://doi.org/10.1093/mind/fzw044</ext-link></mixed-citation></ref>
<ref id="r21"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Leitgeb</surname>, <given-names>H.</given-names></string-name>, &#x0026; <string-name><surname>Pettigrew</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2010</year>). <article-title>An objective justification of Bayesianism II: The consequences of minimizing inaccuracy</article-title>. <source><italic>Philosophy of Science</italic></source>, <volume>77</volume>(<issue>2</issue>), <fpage>236</fpage>&#x2013;<lpage>272</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1086/651318">https://doi.org/10.1086/651318</ext-link></mixed-citation></ref>
<ref id="r22"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Lewis</surname>, <given-names>D. K.</given-names></string-name></person-group> (<year>1999</year>). <chapter-title>Why conditionalize?</chapter-title> In <source><italic>Papers in metaphysics and epistemology</italic></source><italic>, vol.</italic> <volume>2</volume> (pp. <fpage>403</fpage>&#x2013;<lpage>407</lpage>). <publisher-name>Cambridge University Press</publisher-name>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1017/CBO9780511625343.024">https://doi.org/10.1017/CBO9780511625343.024</ext-link></mixed-citation></ref>
<ref id="r23"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Littlejohn</surname>, <given-names>C.</given-names></string-name></person-group> (<year>2011</year>). <article-title>Evidence and knowledge</article-title>. <source><italic>Erkenntnis</italic></source>, <volume>74</volume>(<issue>2</issue>), <fpage>241</fpage>&#x2013;<lpage>262</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1007/s10670-010-9247-x">https://doi.org/10.1007/s10670-010-9247-x</ext-link></mixed-citation></ref>
<ref id="r24"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Lord</surname>, <given-names>E.</given-names></string-name></person-group> (<year>2018</year>). <source><italic>The importance of being rational</italic></source>. <publisher-name>Oxford University Press</publisher-name>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1093/oso/9780198815099.001.0001">https://doi.org/10.1093/oso/9780198815099.001.0001</ext-link></mixed-citation></ref>
<ref id="r25"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Maher</surname>, <given-names>P.</given-names></string-name></person-group> (<year>1993</year>). <source><italic>Betting on theories</italic></source>. <publisher-name>Cambridge University Press</publisher-name>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1017/CBO9780511527326">https://doi.org/10.1017/CBO9780511527326</ext-link></mixed-citation></ref>
<ref id="r26"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Meacham</surname>, <given-names>C. J. G.</given-names></string-name></person-group> (<year>2015</year>). <article-title>Understanding conditionalization</article-title>. <source><italic>Canadian Journal of Philosophy</italic></source>, <volume>45</volume>(<issue>5/6</issue>), <fpage>767</fpage>&#x2013;<lpage>797</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1080/00455091.2015.1119611">https://doi.org/10.1080/00455091.2015.1119611</ext-link></mixed-citation></ref>
<ref id="r27"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Paul</surname>, <given-names>S. K.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Diachronic incontinence is a problem in moral philosophy</article-title>. <source><italic>Inquiry</italic></source>, <volume>57</volume>(<issue>3</issue>), <fpage>337</fpage>&#x2013;<lpage>355</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1080/0020174X.2014.894273">https://doi.org/10.1080/0020174X.2014.894273</ext-link></mixed-citation></ref>
<ref id="r28"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Pettigrew</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2016</year>). <source><italic>Accuracy and the laws of credence</italic></source>. <publisher-name>Oxford University Press</publisher-name>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1093/acprof:oso/9780198732716.001.0001">https://doi.org/10.1093/acprof:oso/9780198732716.001.0001</ext-link></mixed-citation></ref>
<ref id="r29"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Rescorla</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2021</year>). <article-title>On the proper formulation of conditionalization</article-title>. <source><italic>Synthese</italic></source>, <volume>198</volume>(<issue>3</issue>), <fpage>1935</fpage>&#x2013;<lpage>1965</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1007/s11229-019-02179-9">https://doi.org/10.1007/s11229-019-02179-9</ext-link></mixed-citation></ref>
<ref id="r30"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Rizzieri</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2011</year>). <article-title>Evidence does not equal knowledge</article-title>. <source><italic>Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition</italic></source>, <volume>153</volume>(<issue>2</issue>), <fpage>235</fpage>&#x2013;<lpage>242</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1007/s11098-009-9488-1">https://doi.org/10.1007/s11098-009-9488-1</ext-link></mixed-citation></ref>
<ref id="r31"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Savage</surname>, <given-names>L. J.</given-names></string-name></person-group> (<year>1954</year>). <source><italic>The foundations of statistics</italic></source> (Vol. <volume>11</volume>). <publisher-name>Wiley Publications in Statistics</publisher-name>.</mixed-citation></ref>
<ref id="r32"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Schoenfield</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2015</year>). <article-title>Bridging rationality and accuracy</article-title>. <source><italic>The Journal of Philosophy</italic></source>, <volume>112</volume>(<issue>12</issue>), <fpage>633</fpage>&#x2013;<lpage>657</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.5840/jphil20151121242">https://doi.org/10.5840/jphil20151121242</ext-link></mixed-citation></ref>
<ref id="r33"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Schoenfield</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2017</year>). <article-title>Conditionalization does not (in general) maximize expected accuracy</article-title>. <source><italic>Mind</italic></source>, <volume>126</volume>(<issue>504</issue>), <fpage>1155</fpage>&#x2013;<lpage>1187</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1093/mind/fzw027">https://doi.org/10.1093/mind/fzw027</ext-link></mixed-citation></ref>
<ref id="r34"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Srinivasan</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2015</year>). <article-title>Normativity without Cartesian privilege</article-title>. <source><italic>Philosophical Issues</italic></source>, <volume>25</volume>(<issue>1</issue>), <fpage>273</fpage>&#x2013;<lpage>299</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1111/phis.12059">https://doi.org/10.1111/phis.12059</ext-link></mixed-citation></ref>
<ref id="r35"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Teller</surname>, <given-names>P.</given-names></string-name></person-group> (<year>1973</year>). <article-title>Conditionalization and observation</article-title>. <source><italic>Synthese</italic></source>, <volume>26</volume>(<issue>2</issue>), <fpage>218</fpage>&#x2013;<lpage>258</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1007/BF00873264">https://doi.org/10.1007/BF00873264</ext-link></mixed-citation></ref>
<ref id="r36"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Williamson</surname>, <given-names>T.</given-names></string-name></person-group> (<year>2000</year>). <source><italic>Knowledge and its limits</italic></source>. <publisher-name>Oxford University Press</publisher-name>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1093/019925656X.001.0001">https://doi.org/10.1093/019925656X.001.0001</ext-link></mixed-citation></ref>
<ref id="r37"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Williamson</surname>, <given-names>T.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Very improbable knowing</article-title>. <source><italic>Erkenntnis</italic></source>, <volume>79</volume>(<issue>5</issue>), <fpage>971</fpage>&#x2013;<lpage>999</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1007/s10670-013-9590-9">https://doi.org/10.1007/s10670-013-9590-9</ext-link></mixed-citation></ref>
</ref-list>
</back>
</article>
