Four Approaches to Supposition

  • Benjamin Eva (Duke University)
  • Ted Shear (University of Colorado Boulder)
  • Branden Fitelson (Northeastern University)


Suppositions can be introduced in either the indicative or subjunctive mood. There are also two types of judgments that may be initiated by suppositions of either sort: qualitative (binary) judgments and quantitative (numerical) ones. The former are judgments about whether a given proposition is acceptable and the latter are about how acceptable it is. We systematically explicate the relationships between canonical representatives of each of the four available types of theories. For the qualitative accounts, our representative theories of indicative and subjunctive supposition are based on the belief change operations given by AGM revision and KM update respectively; for the quantitative ones, we consider those given by conditionalization and imaging. This choice is motivated by the familiar approach of understanding supposition as ‘provisional belief revision’ wherein one temporarily treats the supposition as true and forms judgments by making appropriate changes to their other opinions. To compare the numerical judgments recommended by the quantitative theories with the binary ones recommended by the qualitative accounts, we appeal to a suitably adapted version of the Lockean thesis. Ultimately, we establish a number of new results that we interpret as vindicating the often-repeated claim that conditionalization is a probabilistic version of revision, while imaging is a probabilistic version of update.

How to Cite:

Eva, B., Shear, T. & Fitelson, B., (2022) “Four Approaches to Supposition”, Ergo an Open Access Journal of Philosophy 8: 26. doi:



Published on
29 Dec 2022
Peer Reviewed

Suppositions—i.e., propositions that are provisionally accepted for the sake of argument—afford us a distinctive set of tools for deliberation. We use these tools to guide activities that are essential to intelligent behaviour, such as making predictions, forming plans, regretting past decisions, and determining our preferences about possible consequences of our actions. Bertrand Russell even once wrote that without supposition “inference would be inexplicable” (1904: 343).

Legend has it that suppositions come in two basic modes corresponding to whether they are introduced using the indicative or subjunctive grammatical mood. When a supposition is introduced in the indicative, subsequent propositions are to be assessed relative to what we would expect upon learning that the supposition were true. When one is introduced in the subjunctive, however, these evaluations should align with our judgments about how things would be if the supposition were in fact true (independent of whether we were aware of it). But suppositional judgments may be partitioned along a different axis. In some suppositional contexts, we offer coarse-grained qualitative judgments about whether a given proposition is acceptable. In others, we give finer-grained quantitative judgments reflecting how acceptable we find various propositions. In sum, this leaves us with four types of suppositional judgments to accommodate, which are reflected in the four varieties of normative theories of suppositional judgement that have been developed:

  1. qualitative indicative theories,

  2. qualitative subjunctive theories,

  3. quantitative indicative theories, and

  4. quantitative subjunctive theories.

Where (1) and (2) respectively specify norms for qualitative judgments under indicative and subjunctive suppositions, (3) and (4) respectively offer norms governing quantitative judgments under indicative and subjunctive suppositions.

The primary purpose of this paper is to shed light on the structure of these four varieties of normative theories of supposition by systematically explicating the relationships between canonical representatives of each. We approach this project by treating supposition as a form of ‘provisional belief revision’ in which a person temporarily accepts the supposition as true and makes appropriate changes to her other opinions so as to accommodate it. The idea is that our suppositional judgments should reflect our judgments about how things would be in some hypothetical state of affairs satisfying the supposition. Following this approach, theories of supposition are formalised in terms of functions mapping some representation of the agent’s epistemic state, along with a supposition, to a hypothetical epistemic state representing the agent’s suppositional judgments.

Theories of indicative and subjunctive supposition are thus characterised using different functions, while qualitative and quantitative theories differ in their respective representation of epistemic states. Qualitative approaches are articulated in terms of coarse-grained full/categorical/outright belief, while quantitative ones rely on finer-grained partial beliefs represented by numerical credences. As we will look at both types of theories, our agents’ epistemic states will consist of both qualitative beliefs and numerical credences.

To represent qualitative and quantitative attitudes, we start with a set of possible worlds W and an agenda 𝒜 comprising an algebra of subsets of W corresponding to propositions expressible in the finite propositional language ℒ. An agent’s beliefs will then be represented by a corpus comprising the set B ⊆ 𝒜 containing each proposition believed by the agent. Let 𝔹 denote the set of all possible corpora, so that 𝔹 = ℘(𝒜). Thus, qualitative suppositional theories can be characterised using a belief change operation, ∘ : 𝔹 × 𝒜 → 𝔹, which offers a functional mapping from each corpus B and proposition S to the set B ∘ S consisting of the propositions that are acceptable for such an agent under the supposition that S. In similar fashion, an agent’s credences will be represented by a credence function c : 𝒜 → [0, 1] satisfying the Kolmogorov axioms of probability. Letting 𝒞 denote the set of all probability functions over 𝒜, a quantitative suppositional theory is characterised by a credal update function f : 𝒞 × 𝒜 → 𝒞. So, f(c, S) specifies numerical representations of how acceptable each proposition in the agenda is under the supposition S for an agent with credences c. When convenient, we will abuse our formalism by confusing sentences X with their truth-sets ⟦X⟧ := {w ∈ W : w ⊨ X}. We also introduce analogous notation for sets of sentences Γ by defining ⟦Γ⟧ := {w ∈ W : w ⊨ γ for every γ ∈ Γ}.1 With this minimal formalism in hand, we turn to introduce our four representative theories depicted in table 1 below.

Table 1

Four Theories of Supposition.

                           Qualitative                 Quantitative
Supposition  Indicative    AGM Revision: B ∗ S         Conditionalization: c(· | S)
             Subjunctive   KM Update: B ⋄ S            Imaging: c_S(·)
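The formal machinery introduced above (worlds, agenda, corpus, credence function) can be sketched in a few lines of Python. Everything concrete here is an illustrative assumption: the four worlds, the mass function, and the corpus are toy choices, not part of the paper's formalism beyond its types.

```python
from itertools import chain, combinations

W = frozenset({0, 1, 2, 3})          # a toy set of possible worlds

def powerset(s):
    """All subsets of s: the agenda A is the full algebra over W."""
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

A = powerset(W)                      # the agenda: an algebra of propositions

# A corpus B: the set of believed propositions.  Here the agent believes
# exactly those propositions true at both world 0 and world 1.
B = {X for X in A if frozenset({0, 1}) <= X}

# A credence function c : A -> [0, 1], induced by a mass function on worlds.
mass = {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.1}

def c(X):
    return sum(mass[w] for w in X)
```

Propositions are modelled extensionally as their truth-sets, matching the ⟦X⟧ convention above.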

Our representative qualitative indicative theory is given by the postulates describing AGM revision operations (∗) that were introduced by Carlos Alchourrón, Peter Gärdenfors, and David Makinson in their seminal (1985) work.2 For our qualitative subjunctive theory, we will consider the KM update operations (⋄) characterised by the postulates proposed by Katsuno and Mendelzon (1992).3 The need to distinguish between these two operations was first noted by Keller and Winslett Wilkins (1985), who suggested that “knowledge-adding” revisions are appropriate when new information is acquired about a static world, while “change-recording” updates are appropriate when learning that the world has changed in some way.4 Interestingly, both revision and update can be characterised as making the minimal change to the agent’s corpus needed to consistently accommodate new information. However, each relies on a different understanding of ‘minimal change’. For revision, we rely on a ‘global’ interpretation of minimality, on which minimal change returns a corpus whose overall structure is as similar as possible to that of the original belief set; for update, we use a ‘local’ interpretation, on which minimal change is achieved by applying local operations to the possible worlds that are consistent with the original corpus and thereby constructing the new corpus from the worlds yielded by those operations.

Our representative quantitative indicative theory is given by the familiar Bayesian rule of conditionalization, where c(· | S) represents the judgments that an agent with credences c should hold under the indicative supposition S. Lastly, our quantitative subjunctive theory will be given by the imaging rule introduced by Lewis (1976), where the credences that result from imaging c under S, written c_S(·), represent the judgments she should hold under the subjunctive supposition S. There are some deep parallels between, on the one hand, the relationship between conditionalization and imaging and, on the other, the relationship between revision and update. Conditionalization (like revision) can be understood in terms of minimal change using a global interpretation of minimality: it returns the globally most similar credence function that represents the new information as certain. Similarly, imaging (like update) can be treated in terms of minimal change using a local interpretation of minimality: it shifts the probability mass from each world that is inconsistent with the new information to the locally most similar world that is consistent with it.

These similarities have not been overlooked. In their seminal paper axiomatising the update operation, Katsuno and Mendelzon explain that imaging can be seen “as a probabilistic version of update, and conditionalization as a probabilistic version of revision” (1992: 184). Similar claims are echoed throughout the literature. Despite the prevalence of such remarks, we are unaware of any attempts to systematically investigate how this plays out at the operational level. One way to understand the purpose of this paper is as an effort to make this claim precise and to explicate systematically in what sense, if any, it is actually true. We find that conditions can be imposed under which the two indicative theories (and, likewise, the two subjunctive theories) issue judgments that cohere with one another, but no comparable conditions appear to be available to render the indicative theories coherent with the subjunctive ones. This, we argue, vindicates the claimed parallels between the qualitative and quantitative theories.

We proceed as follows: Section 1 briefly sets the stage with further discussion of the distinction between indicative and subjunctive supposition. Section 2 introduces our representative quantitative accounts and explains our method for comparing their recommendations with those provided by qualitative theories. In Section 3, we compare the theories of indicative supposition listed on the first row of table 1, AGM and conditionalization, by drawing on (and extending) results established by Shear and Fitelson (2019). In Section 4, we turn to the theories of subjunctive supposition from the second row of the table, KM and imaging, and systematically taxonomise the conditions under which they cohere with one another. Section 5 then addresses the remaining two diagonal comparisons suggested by table 1 (LIS vs. KM and LSS vs. AGM). Finally, Section 6 summarises the key findings of the analysis and outlines some prospects and remaining issues for future work. A summary of all results from this paper is provided in an appendix.

1. Two Modes of Supposition

On the standard story, the grammatical distinction between the indicative and subjunctive moods in a supposition aligns with a semantic difference between ‘epistemic’ and ‘ontic’ shifts in the modal base used for subsequent evaluations.5

In ordinary (non-suppositional) contexts, we assess propositions by the lights of our current opinions. In general, once we have supposed that S for the sake of argument, we are to temporarily shift those opinions to match some hypothetical alternative epistemic state that represents S as true. When the supposition is offered in the indicative mood, that shift is epistemic in the sense that it accords with the change of opinions that we would have undergone upon simply learning S . Contrastively, when put forth in the subjunctive mood, the shift of our opinions is ontic, since we are to adopt opinions that coincide with those that we would come to hold if we were to learn that S had suddenly been made true by some ‘local miracle’ or ‘ideal intervention’.

To see how this works, it will be instructive to look at an example. Adapting the classic case from Ernest Adams (1970), consider the indicative supposition in 1 and the subjunctive supposition in 2, along with the propositions expressed by 3 and 4:

  1. Suppose that Oswald didn’t shoot Kennedy. . .

  2. Suppose that Oswald hadn’t shot Kennedy. . .

  3. Someone else shot Kennedy.

  4. Kennedy would have left Dallas unharmed.

Given the indicative supposition in 1, the proposition expressed by 3 will no doubt seem acceptable. This is because learning that Oswald did not shoot Kennedy would not lead any reasonable person to give up the belief that Kennedy was shot; instead, the natural inference is to conclude that someone else was the assassin. In contrast, given the subjunctive supposition in 2, 4 seems appropriate. Here, we are to assess propositions relative to the most similar counterfactual world to the actual one in which Kennedy was never shot by Oswald. Since a world in which Oswald took but missed his shot is more similar to the actual one than one in which there was a second shooter, we judge that 4 is acceptable.

This clearly illustrates that the way in which rational agents adjust their epistemic states upon indicatively supposing a proposition will generally be radically different from the way in which they adjust those states upon supposing the same proposition in the subjunctive mood. We turn now to introducing the most salient quantitative theories of how one should adjust one’s judgments under indicative and subjunctive suppositions.

2. From Quantitative to Lockean Theories of Supposition

2.1. Quantitative Theories of Supposition

Bayesian conditionalization is most commonly understood as a diachronic norm governing the update of probabilistic credence functions. Under that interpretation, when an agent with a prior credence function c learns that some event E has occurred, she should adopt the posterior c′ that matches c conditioned on E, so that c′(X) = c(X | E) for all X. Conditionalization is defined as follows.

Conditionalization: Given a credence function c ∈ 𝒞 and any S ∈ 𝒜 with c(S) > 0, conditioning c on S results in the credence function c(· | S) defined below.

c(X | S) =df c(X ∧ S) / c(S)
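As a sanity check, the definition can be computed directly on a toy model. The worlds and masses below are illustrative assumptions, with propositions modelled as frozensets of worlds:

```python
# Toy world model: a proposition's credence is the total mass of its worlds.
mass = {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.1}

def c(X):
    return sum(mass[w] for w in X)

def conditionalize(X, S):
    """c(X | S) = c(X & S) / c(S); only defined when c(S) > 0."""
    if c(S) == 0:
        raise ValueError("conditionalization requires c(S) > 0")
    return c(X & S) / c(S)

S = frozenset({0, 1, 2})   # the supposed proposition, c(S) = 0.9
X = frozenset({0, 3})      # a proposition of interest, c(X & S) = 0.4
```

Here conditionalize(X, S) returns 0.4/0.9, and conditionalize(S, S) returns 1, reflecting the certainty of the supposition under itself.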

Given the Bayesian understanding of conditionalization as an account of learning, and the close relationship between rational learning and indicative supposition, it is no surprise that conditionalization has also been understood as a normative quantitative model of indicative supposition. Interestingly, such an interpretation was first suggested by Rev. Thomas Bayes, who wrote, “The probability that two subsequent events will both happen is compounded of the probability of the first and the probability of the second on the supposition the first happens” (1763: 379). There are also more recent examples of this interpretation in the literature. For instance, this interpretation is explicitly endorsed by evidential decision theorists in their account of ex ante evaluations of option-outcomes.

The most popular alternative to evidential decision theory—causal decision theory—replaces the use of indicative suppositions in the calculation with subjunctive suppositions. The debate between evidentialists and causalists in decision theory boils down to a dispute about which type of supposition is relevant for ex ante evaluations of options.6 The standard treatments of quantitative subjunctive supposition derive from the imaging rule mentioned in the previous section. Although a number of different versions of imaging have been developed in the literature, we will focus on its best known (and simplest) version, first proposed by Lewis. On an intuitive level, the difference between conditionalization and imaging can be understood in terms of the type of minimal change they encode. We mentioned earlier that conditionalization relies on a global measure of similarity, while imaging uses a local one. This point is elegantly explained by Lewis:

Imaging P on A gives a minimal revision in this sense: unlike all other revisions of P to make A certain, it involves no gratuitous movement of probability from worlds to dissimilar worlds. Conditionalizing P on A gives a minimal revision in this different sense: unlike any other revisions of P to make A certain, it does not distort the profile of probability ratios, equalities, and inequalities among sentences that imply A . (1976: 311)

To introduce the details of imaging, we will need to impose some extra structure on the space of possible worlds. Specifically, we assume that, for any proposition X and possible world w, there is a unique “closest” world at which the sentence X is true. This notion is captured by using a selection function, σ : W × 𝒜 → W. Intuitively, σ(w, X) picks out the “closest” or “most similar” possible world to w that satisfies X. Our selection function will be subject to two basic conditions.

Centering: If w ∈ ⟦X⟧, then σ(w, X) = w.

This first condition requires that each world is the unique closest world to itself, i.e. if X is true at w , then there is no closer world where X is true.

Uniformity: If σ(w, X) ∈ ⟦Y⟧ and σ(w, Y) ∈ ⟦X⟧, then σ(w, X) = σ(w, Y).

This second condition says that whenever the closest X -world satisfies Y and the closest Y -world satisfies X , they are one and the same. In order to illustrate the conceptual motivation for this constraint, we will take a brief but necessary detour into an important philosophical application of selection functions—namely, the semantics of subjunctive conditionals.

Under what conditions are subjunctive conditionals such as ‘If Richard Nixon had pressed the button, there would have been a nuclear war’ true? According to the proposal by Stalnaker (1968), this question is best answered in a semantics that utilises selection functions of the kind described above. The idea, roughly put, is that the subjunctive conditional in the example above is true just in case the closest possible world in which Richard Nixon did push the button is one where there was a nuclear war. The suggestion is that the truth value of the subjunctive conditional ‘if X were true, Y would be true’ at a world w is given by the following definition:

Stalnaker conditional (□→): The truth-conditions for the Stalnaker conditional, X □→ Y, are given by the semantic clause below.

w ⊨ X □→ Y   iff   σ(w, X) ∈ ⟦Y⟧
As should be clear from its definition, the Stalnaker conditional is non-truth-functional, since the truth-value of X □→ Y at a world w does not supervene on the truth-values of its components at w. Rather, it is true at w just in case the closest world to w at which its antecedent is true is also one at which its consequent is true. For present illustrative purposes, we take subjunctive conditionals such as ‘If Richard Nixon had pressed the button, there would have been a nuclear war’ to be adequately modelled using the Stalnaker conditional.

Given this semantics for subjunctive conditionals, the motivation for Uniformity becomes very clear. When σ(w, X) ∈ ⟦Y⟧ and σ(w, Y) ∈ ⟦X⟧, the subjunctives ‘if X were true, Y would be true’ and ‘if Y were true, X would be true’ are both true on the semantics. Now imagine that σ(w, X) ≠ σ(w, Y). This implies that there is some Z such that the subjunctive ‘If X were true, Z would be true’ is true, but the subjunctive ‘If Y were true, Z would be true’ is false. Thus, the following sentence comes out as true:

(X □→ Y) ∧ (Y □→ X) ∧ (X □→ Z) ∧ ¬(Y □→ Z)
Clearly, this would be a deeply strange and counterintuitive result. For this reason, we assume that our selection function satisfies the Uniformity condition.7
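To make the semantics concrete, here is a minimal sketch of a selection function and the Stalnaker conditional over a toy model. The distance-based notion of similarity is an invented assumption for illustration; it satisfies Centering by construction, but it is not meant to model a serious similarity ordering.

```python
W = [0, 1, 2, 3]   # toy worlds; "similarity" is (artificially) distance on a line

def sigma(w, X):
    """Closest X-world to w: w itself when w is in X (Centering);
    otherwise the X-world minimising the assumed distance |w - v|."""
    if w in X:
        return w
    return min(sorted(X), key=lambda v: abs(w - v))

def stalnaker(X, Y, w):
    """w satisfies X []-> Y iff the closest X-world to w satisfies Y."""
    return sigma(w, X) in Y
```

For instance, with X = {2, 3} and Y = {3}, the conditional X □→ Y fails at world 0 (its closest X-world is 2, which is not a Y-world) but holds trivially at world 3.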

We are now ready to introduce Lewis’s imaging rule, which will serve as our representative quantitative theory of subjunctive supposition. Stated formally:

Imaging: Given a credence function c ∈ 𝒞 and any S ∈ 𝒜, imaging c on S results in the credence function c_S(·) defined below.

c_S(w) = c(w) + Σ { c(w′) : w′ ∈ ⟦¬S⟧ and σ(w′, S) = w }    if w ∈ ⟦S⟧
c_S(w) = 0                                                   if w ∈ ⟦¬S⟧

Intuitively, when c is imaged on S , each world w consistent with S keeps all of its original probability, while the prior probability assigned to each world that is inconsistent with S is transferred to the closest world satisfying S .8
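The rule can be sketched as follows. The mass function and the distance-based selection function are illustrative assumptions for the example, not part of Lewis's proposal:

```python
mass = {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.1}   # illustrative prior masses

def sigma(w, S):
    """Closest S-world to w (w itself when w already satisfies S)."""
    if w in S:
        return w
    return min(sorted(S), key=lambda v: abs(w - v))

def image(mass, S):
    """Each S-world keeps its mass; each not-S world's mass is shifted to
    its closest S-world; every not-S world ends up with credence 0."""
    new = {w: 0.0 for w in mass}
    for w in mass:
        new[sigma(w, S)] += mass[w]
    return new

imaged = image(mass, {1, 3})
# Under the assumed distances, worlds 0 and 2 both shift their mass to
# world 1, so imaged credence concentrates on worlds 1 and 3.
```

Note that, unlike conditionalization, no renormalisation is needed: imaging merely relocates mass, so the result is automatically a probability function.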

As suggested earlier, conditionalization and imaging differ in whether their recommendations are driven by global or local considerations. Conditionalization recommends the closest credence function that accommodates S where the distance between credence functions is interpreted in terms of their global behaviour. In contrast, imaging operates at the local level by shifting credence from each world to the closest world satisfying S .

2.2. Lockean Theories of Supposition

With our quantitative accounts of indicative and subjunctive supposition in hand, we will now outline our approach to comparing them with the qualitative theories we will introduce later. As mentioned earlier, qualitative and quantitative theories articulate the norms of suppositional judgement in terms of different kinds of doxastic attitude. Qualitative theories rely on agents’ belief corpora to offer binary judgements about whether they should regard propositions as acceptable under a supposition. Quantitative theories, on the other hand, use an agent’s credences to generate numerical judgments corresponding to how acceptable agents ought to find each proposition under any given supposition. To directly compare the two, we need a way to bridge the gap between qualitative and quantitative attitudes.

To do so, we apply a suitably adapted version of the Lockean Thesis, so-called by Foley (1993). As it is traditionally understood, the Lockean Thesis provides a normative bridge principle between beliefs and credences, which requires that an agent believe that X just in case she has “sufficiently high” credence in X. This is standardly understood as saying that an agent should believe a proposition X if and only if her credence in X is at least as great as some Lockean threshold, t ∈ (1/2, 1]. Put formally:

Lockean Thesis (LTt): For some t ∈ (1/2, 1]: X ∈ B ⟺ c(X) ≥ t.

This principle will be presupposed as a synchronic coherence requirement used to specify the beliefs that are coherent with an agent’s credences. So, when we talk about Lockean agents, we will presuppose that they have beliefs and credences satisfying LTt for some t ∈ (1/2, 1]. There is an extensive literature on the Lockean Thesis and its motivations.9 Featured prominently in that literature is the Lottery Paradox, first discussed by Kyburg (1961), and the tension it brings to the surface between LTt and the popular normative requirements that beliefs be logically consistent and deductively closed. Primarily for space considerations, we will only briefly engage with that literature at a few points in the next section. Instead, we will unreflectively adopt LTt as a technical tool to aid in our comparative project.
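As a toy illustration of LTt (all credences invented), the belief set induced by a credence function at a given threshold can be read off directly, and one can already see deductive closure failing, as the Lottery-style worries predict:

```python
from itertools import chain, combinations

mass = {0: 0.5, 1: 0.3, 2: 0.2}   # illustrative credences over three worlds
W = frozenset(mass)

def c(X):
    return sum(mass[w] for w in X)

def lockean_beliefs(t):
    """All propositions X with c(X) >= t: the corpus coherent with c under LTt."""
    subsets = chain.from_iterable(combinations(W, r) for r in range(len(W) + 1))
    return {frozenset(X) for X in subsets if c(frozenset(X)) >= t}

B = lockean_beliefs(0.7)
# {0,1} (credence 0.8) and {0,2} (0.7) are both believed, but their
# conjunction {0} (0.5) is not: the Lockean corpus is not deductively closed.
```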

But the Lockean Thesis will play another role in our exploration beyond being a standing synchronic coherence requirement. It will also be used together with the quantitative theories of supposition introduced earlier to construct qualitative suppositional judgments that can be directly compared with the representative qualitative theories of supposition. We begin by introducing the Lockean theory of indicative supposition (LIS) defined below.

LIS: Given a corpus B and some t ∈ (1/2, 1], the set of acceptable propositions under the indicative supposition S is specified in terms of the operation ⊛ : 𝔹 × 𝒜 → 𝔹 defined below.

B ⊛ S := {X ∈ 𝒜 : c(X | S) ≥ t}

Where B and c are respectively a corpus of beliefs and a credence function satisfying LTt and S is any proposition, B ⊛ S consists of those propositions whose credence conditional on S is at least t. The Lockean theory of subjunctive supposition (LSS) is characterised in an analogous fashion.

LSS: Given a corpus B and some t ∈ (1/2, 1], the set of acceptable propositions under the subjunctive supposition S is specified in terms of the operation ♦ : 𝔹 × 𝒜 → 𝔹 defined below.

B ♦ S := {X ∈ 𝒜 : c_S(X) ≥ t}
Strictly speaking, the two Lockean operations, ⊛ and ♦, are not singular operations, but rather each characterises a family of operations—one for each t ∈ (1/2, 1]. When it is useful, we will restrict our attention to certain subsets of Lockean thresholds by letting ⊛_[t,t′] and ♦_[t,t′] denote the families of operators with thresholds in the closed interval [t, t′]. Analogous conventions will be adopted for the open and half-open intervals.
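The two Lockean operations can diverge even on tiny models. In the illustrative sketch below (masses, distances, and threshold all invented), a proposition fails the LIS test under a supposition S but passes the LSS test, because imaging concentrates credence differently from conditioning:

```python
mass = {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.1}   # illustrative credences

def c(X, m=mass):
    return sum(m[w] for w in X)

def lis_accepts(X, S, t):
    """LIS: X is acceptable under S iff c(X | S) >= t."""
    return c(X & S) / c(S) >= t

def image(S):
    """Imaging with an assumed distance-based selection function."""
    closest = lambda w: w if w in S else min(sorted(S), key=lambda v: abs(w - v))
    new = {w: 0.0 for w in mass}
    for w in mass:
        new[closest(w)] += mass[w]
    return new

def lss_accepts(X, S, t):
    """LSS: X is acceptable under S iff the imaged credence c_S(X) >= t."""
    return c(X, image(S)) >= t

S, X, t = frozenset({1, 3}), frozenset({1}), 0.8
# c(X | S) = 0.3/0.4 = 0.75 < t, while imaging shifts the mass of worlds
# 0 and 2 onto world 1, giving c_S(X) = 0.9 >= t.
```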

3. Indicative Supposition

In their seminal 1985 paper, Alchourrón, Gärdenfors, and Makinson introduced their revision operation (∗). Aside from being the now orthodox account of belief revision, the AGM theory has been understood as an account of indicative supposition. Even Isaac Levi, who was highly critical of AGM as a theory of belief revision, acknowledged that “the AGM approach fares better as an account of suppositional reasoning for the sake of the argument” (1996: 290). We follow suit and present the theory as a normative theory of indicative supposition.

The AGM theory relies on the syntactic representation of epistemic states as “belief sets”, which comprise deductively closed sets of sentences. Formally, this means that B is taken to be Cn(B), where Cn(Γ) =df {X : Γ ⊨ X}.10 Revising B by a sentence S delivers the new belief set B ∗ S, understood as the set of sentences that are acceptable under the supposition S for an agent with the corpus B. This reflects AGM’s presupposition of Cogency as a synchronic coherence requirement on admissible beliefs and suppositional judgments. This requirement, stated below, says that belief corpora and suppositional judgements must be logically consistent and closed under deductive consequence.

Cogency: A set B is cogent just in case (i) B is logically consistent, i.e. B ⊭ ⊥, and (ii) B is deductively closed, i.e. B = Cn(B).

Assuming Cogency results in a coarse-grained representation of epistemic states and suppositional judgments that comes with certain definite costs. For one, since there is just one inconsistent belief set (B = Cn({⊥})), AGM leaves no room to distinguish between agents with inconsistent beliefs/suppositional judgments. This same belief set represents both an agent who believes, as in the Lottery paradox from Kyburg (1961), each of P1, …, Pn and also that ¬(P1 ∧ ⋯ ∧ Pn) and another who believes the outright contradiction P ∧ ¬P. Similarly, Nebel (1989) observes that the reasons why beliefs are held are not reflected in this representation. An agent who independently believes that P and Q is represented in the same way as another who believes that Q on the basis of her beliefs that P and P ⊃ Q. Such dependencies may be important for belief dynamics, as can be seen by considering the possibility that these agents lose their beliefs that P. We will not dwell on this point further and simply note that AGM’s Cogency assumption will result in some important divergences between AGM and the Lockean accounts.

The AGM revision operation (∗) is axiomatised by the six “basic Gärdenfors postulates”, ∗1–∗6, together with the “supplementary postulates”, ∗7 and ∗8.

(∗1) B ∗ S = Cn(B ∗ S)   Closure
(∗2) S ∈ B ∗ S   Success
(∗3) B ∗ S ⊆ Cn(B ∪ {S})   Inclusion
(∗4) If ¬S ∉ B, then B ⊆ B ∗ S   Preservation
(∗5) If ⊬ ¬S, then B ∗ S ⊭ ⊥   Consistency
(∗6) If S ≡ S′, then B ∗ S = B ∗ S′   Extensionality
(∗7) B ∗ (S ∧ S′) ⊆ Cn((B ∗ S) ∪ {S′})   Superexpansion
(∗8) If ¬S′ ∉ B ∗ S, then Cn((B ∗ S) ∪ {S′}) ⊆ B ∗ (S ∧ S′)   Subexpansion
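The postulates can be spot-checked against a concrete revision operation. The sketch below builds a semantic (sphere-style) revision from an assumed plausibility ranking over four toy worlds; it merely illustrates how Success and Preservation play out on an example, and is not itself part of the AGM theory.

```python
from itertools import chain, combinations

W = [0, 1, 2, 3]
rank = {0: 0, 1: 0, 2: 1, 3: 2}   # assumed plausibility: lower = more plausible

PROPS = [frozenset(s) for s in chain.from_iterable(
    combinations(W, r) for r in range(len(W) + 1))]

def truth_set(worlds):
    """The consistent, deductively closed corpus of propositions true at
    every world in `worlds` (propositions modelled as truth-sets)."""
    return {X for X in PROPS if worlds <= X}

def revise(S):
    """B * S: believe exactly what holds at all most-plausible S-worlds."""
    best = min(rank[w] for w in S)
    return truth_set(frozenset(w for w in S if rank[w] == best))

B = truth_set(frozenset({0, 1}))   # prior corpus: worlds 0 and 1 are live
S = frozenset({1, 2, 3})           # a supposition consistent with B

BS = revise(S)
# Success: S itself is in B * S.  Preservation: since S is consistent with
# B (they share world 1), every prior belief survives the revision.
```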

To explain these postulates, it will be instructive to take a brief detour to discuss the types of coherence requirements they encode. Here, we follow Rott (1999a; 2001) in thinking that these postulates include three different types of coherence requirements: synchronic, diachronic, and dispositional. While synchronic coherence provides us with conditions under which a single set of judgments (either a corpus or a set of judgments under a single supposition) hangs together, diachronic coherence accounts for the constraints that the agent’s corpus places on individual sets of suppositional judgments. Lastly, dispositional coherence involves constraints that may be imposed across different sets of suppositional judgments. A visual explanation is provided by the figure below adapted from Rott (1999a: 404).

Figure 1

The relata of the three types of coherence.

Whereas Cogency is taken as a background synchronic coherence requirement on belief sets, Closure (∗1) and Consistency (∗5) ensure that suppositional judgments also satisfy Cogency. Since the agent’s beliefs do not play any role in determining the content of these constraints, both postulates are straightforwardly seen as purely synchronic requirements on suppositional judgments. For the same reason, Success (∗2) and Extensionality (∗6) may also be regarded as synchronic requirements on suppositional judgments. Unlike the standing synchronic requirements embodied by ∗1 and ∗5, the motivations for ∗2 and ∗6 are grounded in constitutive or theoretical considerations about the nature of supposition. We take ∗2 to be a constitutive requirement of supposition: if supposing that S did not result in S being accepted, then it would hardly seem that S had been supposed at all. On the other hand, ∗6 captures a theoretical commitment that surface grammar or intensional considerations should play no role in determining which propositions are acceptable under a supposition.11

The next two postulates, Inclusion (∗3) and Preservation (∗4),12 provide AGM’s diachronic coherence requirements. Respectively, these impose upper and lower bounds on the set of suppositional judgments. The restriction imposed by ∗3 ensures that only propositions that are logically related to B or S are acceptable under the supposition that S. On the other hand, ∗4 requires that beliefs should not fail to be acceptable under the supposition S unless S is logically inconsistent with the agent’s corpus. It is worth noting that this places no restrictions on suppositional judgments when the supposition is inconsistent with the agent’s belief set.

Lastly, we have the dispositional coherence requirements given by the two supplementary postulates, Superexpansion (∗7) and Subexpansion (∗8), which respectively generalise ∗3 and ∗4. Indeed, in the presence of the eminently plausible Idempotence principle requiring that B ∗ ⊤ = B, ∗7 and ∗8 imply ∗3 and ∗4 respectively. Since the supplementary postulates encode dispositional coherence requirements, it should be no surprise that they have been largely discussed in the literature on iterated belief revision.

3.1. LIS and the AGM Postulates

The question now arises: how do the suppositional judgments recommended by LIS relate to those given under the qualitative account based on AGM? A partial answer to this question is given by previously established results. We will complete this picture after surveying the extant results from the literature.

Beginning with their synchronic requirements, there is an immediate tension between LTt and Cogency that has been extensively discussed in the literatures on the Preface and Lottery Paradoxes—these same issues straightforwardly apply to the synchronic requirements imposed by ∗1 and ∗5. The remaining basic Gärdenfors postulates have been considered from a Lockean perspective by Shear and Fitelson (2019).13 LIS satisfies both of the remaining AGM synchronic coherence requirements, ∗2 and ∗6. Neither result is surprising: LIS satisfies ∗2 in virtue of the fact that c(S | S) = 1, while the satisfaction of ∗6 is secured by the extensional character of conditionalization.

The situation is more interesting for the diachronic requirements given by ∗3 and ∗4. Interestingly, ∗3 is satisfied by LIS in full generality. The reason why is relatively easy to see. It is a theorem of the probability calculus that c(S ⊃ X) ≥ c(X | S). Thus, whenever X ∈ B ⊛ S, it follows that S ⊃ X ∈ B, and so B ⊛ S ⊆ Cn(B ∪ {S}). Turning to the final basic postulate, ∗4, we see that in general LIS can violate this requirement. The basic reason why is relatively clear, though there are some subtleties that we will discuss. As the characteristic postulate of AGM, ∗4 says that an agent’s beliefs should remain acceptable under any supposition that is logically consistent with her corpus. However, when an agent is not fully certain of one of her beliefs (say X), it is possible that some supposition S might be logically consistent with her corpus but still count as counter-evidence to X in the sense that c(X | S) < c(X). This allows for the possibility that c(X) ≥ t even though c(X | S) < t and, thus, that ¬S ∉ B but B ⊄ B ⊛ S. Still, there are some further constraints that can be imposed under which LIS can be made to satisfy ∗4.
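Both halves of this explanation can be checked numerically. In the illustrative model below (all masses invented), the material conditional's credence dominates the conditional credence, as the theorem requires, and a supposition consistent with the corpus drags a believed proposition below the threshold, witnessing a Preservation failure for LIS:

```python
mass = {0: 0.50, 1: 0.30, 2: 0.15, 3: 0.05}   # illustrative credences
W = frozenset(mass)

def c(X):
    return sum(mass[w] for w in X)

S = frozenset({0, 2})          # the supposition; c(not-S) = 0.35
X = frozenset({0, 1})          # a believed proposition: c(X) = 0.8

material = c((W - S) | X)      # c(S -> X), i.e. c(not-S or X) = 0.85
conditional = c(X & S) / c(S)  # c(X | S) = 0.50 / 0.65, roughly 0.77

# With threshold t = 0.8: X is believed and not-S is not believed, so the
# antecedent of Preservation holds, yet c(X | S) < t and X drops out.
```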

The explanation immediately above is suggestive of the first situation in which LIS will be guaranteed to satisfy 4. Indeed, Gärdenfors (1988) established a result, which implies that when belief is taken to imply certainty (i.e. when t = 1), LIS will satisfy 4. Moreover, Gärdenfors’s result actually implies that LIS will satisfy all of the AGM postulates. One might wonder then: is the resulting satisfaction of 4 a consequence of the fact that 1 and 5 are satisfied when t = 1?

Shear and Fitelson show that the answer to this question is no: LIS can violate 4 even under the further assumption of Cogency. However, they establish the more surprising result that, assuming Cogency, LIS can only violate 4 when the Lockean threshold is relatively high. In particular, such violations are only possible when the Lockean threshold is at least the inverse of the Golden ratio (i.e. when t ∈ (φ−1, 1), where φ−1 ≈ 0.618). As an immediate corollary, assuming both Cogency and that t ∈ (1/2, φ−1], LIS satisfies all of the basic Gärdenfors postulates, 1 – 6.
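The quantitative mechanism behind such violations can be made concrete with a small sketch. The threshold and credence assignment below are our own illustrative choices (note that t = 0.7 exceeds φ−1 ≈ 0.618): X clears the threshold outright and S is consistent with the corpus, yet conditionalizing on S pushes X below the threshold.

```python
# Toy illustration of a Preservation (4) failure for a Lockean agent with t = 0.7.
# Worlds are (S, X) truth-value pairs; the credences are illustrative choices.
t = 0.7
c = {(True, True): 0.30, (True, False): 0.25,
     (False, True): 0.45, (False, False): 0.00}

def cred(prop):
    """Credence of a proposition, represented as a set of worlds."""
    return sum(p for w, p in c.items() if w in prop)

S = {w for w in c if w[0]}
X = {w for w in c if w[1]}
not_S = {w for w in c if not w[0]}

c_X = cred(X)                          # 0.75: X is believed
c_not_S = cred(not_S)                  # 0.45: ¬S is not believed, so S is consistent with B
c_X_given_S = cred(S & X) / cred(S)    # ≈ 0.545: X is rejected under the supposition S

assert c_X >= t and c_not_S < t and c_X_given_S < t
```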

But this only tells part of the story about the import of the “Golden threshold” at ϕ1 .14 This is because LIS exhibits interesting behaviour relative to the two weakened variants of Preservation provided below.

( 4v) If S, X ∈ B, then ¬X ∉ B_S Very Weak Preservation
( 4w) If S ∈ B, then B ⊆ B_S Weak Preservation

The first of these postulates, Very Weak Preservation ( 4v), requires that taking something that you already believe as a supposition for the sake of argument should not lead you to reject any of your other beliefs under that supposition. The second, Weak Preservation ( 4w), says that under the same conditions, you should accept anything that you believe.

Although imposing the assumption of Cogency on LIS was not sufficient to guarantee the satisfaction of full Preservation ( 4), it turns out that it is sufficient to ensure that LIS will satisfy both of the weaker requirements, 4v and 4w. However, there is another way to guarantee that LIS will satisfy Very Weak Preservation: if the Lockean threshold exceeds φ−1, then LIS will satisfy 4v (even without the help of Cogency). These results are summarised in table 2 below.

Table 2

LIS and Some Variants of Preservation.

                                  4        4w        4v
+ Cogency                         ✗        ✓         ✓
t ∈ (1/2, φ−1] + Cogency          ✓        ✓         ✓

The import of these results will depend on how you regard 4v, 4w, 4, and Cogency. We regard 4v as eminently reasonable: it would seem very strange to believe both P and Q, but reject Q under the supposition that P. After all, that would mean that P’s certain truth would provide sufficient evidence to accept ¬Q, and that would seem to be ruled out by your concurrent beliefs that P and that Q. For the die-hard Lockeans who reject Cogency, this gives reason to maintain that the Lockean threshold must be sufficiently high (t > φ−1) so as to rule out this possibility. The import of the remaining results is up for debate. A Lockean who finds 4w plausible will be forced into adopting Cogency. However, this would be harder to motivate for a Lockean, since once we accept that rational belief need not require certainty, there is no obvious argument in favour of 4w. Still, proponents of AGM who find LIS attractive may take solace in the realisation that their preferred account can be reconciled with LIS through the acceptance of a sufficiently low threshold.15
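The arithmetic behind the Golden threshold for 4v can be sketched as follows. A violation requires c(S∧¬X) ≥ t·c(S) ≥ t² while also c(S∧¬X) ≤ 1−c(X) ≤ 1−t, which is jointly satisfiable only when t² ≤ 1−t, i.e. when t ≤ φ−1. The numbers below are our own illustrative instantiation at t = 0.6.

```python
# A Lockean violation of Very Weak Preservation (4v) requires t^2 <= 1 - t,
# i.e. t <= 1/phi ≈ 0.618.  Illustrative violation at t = 0.6:
t = 0.6
c = {('S', 'X'): 0.24, ('S', '¬X'): 0.36, ('¬S', 'X'): 0.36, ('¬S', '¬X'): 0.04}

c_S = c[('S', 'X')] + c[('S', '¬X')]        # 0.6: S is believed
c_X = c[('S', 'X')] + c[('¬S', 'X')]        # 0.6: X is believed
c_notX_given_S = c[('S', '¬X')] / c_S       # 0.6: ¬X is accepted under S, violating 4v

assert c_S >= t and c_X >= t and c_notX_given_S >= t

phi_inv = (5 ** 0.5 - 1) / 2                # ≈ 0.618
assert t <= phi_inv and t * t <= 1 - t      # the violation lives at or below the Golden threshold
assert not (0.7 * 0.7 <= 1 - 0.7)           # above it, the two bounds are incompatible
```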

Thus far, we have presented a number of results concerning LIS and the basic Gärdenfors postulates, 1 – 6, but have not addressed two remaining supplementary postulates, 7 and 8. Shear and Fitelson only mention these postulates in passing, since their primary concern was with the diachronic requirements governing single-step belief change rather than the dispositional requirements that provide bridges between different potential revisions. However, in the context of supposition, dispositional requirements are more obviously relevant. Accordingly, we will now complete the picture by reporting some new results establishing that the relationship between LIS and 3 and 4 carries over to their generalisations given by 7 and 8.

Proposition 1. LIS must satisfy  7. That is, the following is satisfied for any  B, S, S′, and  t ∈ (1/2, 1]:

B_{S∧S′} ⊆ Cn(B_S ∪ {S′})
Proof. Let X ∈ B_{S∧S′}, i.e. c(X|S∧S′) ≥ t. Then, letting c_S(·) := c(·|S), we get:

c_S(S′ ⊃ X) ≥ c_S(X|S′) = c(X|S∧S′) ≥ t.

Thus, c(S′ ⊃ X|S) ≥ t and so S′ ⊃ X ∈ B_S. From this we conclude X ∈ Cn(B_S ∪ {S′}). □

Proposition 2. In the absence of Cogency, LIS can violate  8 for any t ∈ (1/2, 1). That is, if t ∈ (1/2, 1), it is possible that:

¬S′ ∉ B_S, but B_{S∧S′} ⊉ Cn(B_S ∪ {S′})

Proof. Let c be any credence function satisfying the conditions below, where ε > 0 is arbitrarily small:

c(S∧S′∧X) = ε         c(S∧S′∧¬X) = 1−t         c(S∧¬S′∧X) = t−ε

It is simple to see that this case provides the basis for a counterexample to 8 for any threshold t ∈ (1/2, 1) in the absence of Cogency. □
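The counterexample can be checked numerically. The sketch below instantiates the displayed constraints (read as c(S∧S′∧X) = ε, c(S∧S′∧¬X) = 1−t, c(S∧¬S′∧X) = t−ε) at t = 0.9 and ε = 0.01, values of our own choosing.

```python
# Numerical check of the Proposition 2 counterexample at t = 0.9, ε = 0.01.
t, eps = 0.9, 0.01
c = {("S", "S'", "X"): eps,
     ("S", "S'", "¬X"): 1 - t,
     ("S", "¬S'", "X"): t - eps}
mass_S = sum(c.values())                                   # all mass lies on S-worlds

# S' ⊃ X fails only at the S∧S'∧¬X world, so it is accepted under S:
c_SpX_given_S = (c[("S", "S'", "X")] + c[("S", "¬S'", "X")]) / mass_S
# ¬S' falls short of the threshold, so S' is 'consistent' with the revised corpus:
c_negSp_given_S = c[("S", "¬S'", "X")] / mass_S
# Yet X is rejected under the conjoined supposition S∧S':
c_X_given_SSp = c[("S", "S'", "X")] / (c[("S", "S'", "X")] + c[("S", "S'", "¬X")])

assert c_SpX_given_S >= t - 1e-12
assert c_negSp_given_S < t
assert c_X_given_SSp < t
```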

Proposition 3. The twin requirements of Cogency and t ∈ (1/2, φ−1] are necessary and sufficient to guarantee that LIS satisfies  8.

Proof. Supposing Cogency, we let S′ be consistent with B_S and X ∈ Cn(B_S ∪ {S′}), and define c via the assignments below.

c(S∧S′∧X) = α       c(S∧S′∧¬X) = β       c(S∧¬S′∧X) = γ       c(S∧¬S′∧¬X) = δ

We start by showing that t ∈ (1/2, φ−1] only if X ∈ B_{S∧S′}, and hence that 8 is satisfied. For contradiction, suppose that t ∈ (1/2, φ−1], but c(X|S∧S′) < t. Since X ∈ Cn(B_S ∪ {S′}) implies S′ ⊃ X ∈ B_S, our assumptions imply

α/(α+β) < t, and      (1)

(α+γ+δ)/(α+β+γ+δ) ≥ t.      (2)

First, note that since S′ ⊃ X ∈ B_S, by Cogency  S′ ⊃ ¬X ∈ B_S would imply that (S′ ⊃ X)∧(S′ ⊃ ¬X) ∈ B_S. This is equivalent to ¬S′ ∈ B_S, thus contradicting our assumption that S′ is consistent with B_S. So, S′ ⊃ ¬X ∉ B_S (i.e.  c(S′ ⊃ ¬X|S) < t), which gives us

(β+γ+δ)/(α+β+γ+δ) < t.      (3)

Next, observe that S′ ∈ B_S would imply by Cogency that S′∧X ∈ B_S, since S′ ⊃ X ∈ B_S. But then c(X|S∧S′) ≥ c(X∧S′|S) ≥ t, which contradicts (1). So c(S′|S) < t, which implies that

(α+β)/(α+β+γ+δ) < t.      (4)
Taken together, 3 and 4 give us 5, which combined with 3 lets us infer 6.



Now, since t ∈ (1/2, φ−1], we can use the special fact about the Golden Ratio that t ≤ φ−1 iff t² ≤ 1−t to infer α/(α+β) ≥ t, which contradicts our assumption (1). Thus, our initial assumptions were inconsistent and we infer that assuming t ∈ (1/2, φ−1] together with Cogency suffices to guarantee that LIS satisfies 8.

To see that LIS can violate 8 for any t ∈ (φ−1, 1) (even under the assumption of Cogency), consider any credence function c satisfying the following constraints, where ε > 0 is arbitrarily small:


By construction, we have that c(S′ ⊃ X) > t and c(X|S∧S′) < t, which shows that X ∈ Cn(B_S ∪ {S′}) but X ∉ B_{S∧S′}, as desired. Note also that since c(S) = 1, B_S = B. Furthermore, it can be verified that B = Cn(S′ ⊃ X) holds for every (and only) t > φ−1, which establishes Cogency and confirms that S′ is consistent with B_S. □

This completes our assessment of the relationship between the theories of supposition provided by LIS and AGM. A full summary of the results from this section is given in table 3 below. In the next section, we turn our attention to the relationship between the subjunctive theories.

Table 3

LIS and the AGM Postulates.

4. Subjunctive Supposition

To begin, it will be worthwhile to see why AGM revision would be inappropriate to use as a theory of subjunctive supposition. Consider the following version of the widely discussed adaptation from Peppas (2008) of a classic case from Ginsberg (1986):

Philippa is looking through an open door into a room containing a table, a magazine and a book. One of the two items is on the table and the other is on the floor, but because of poor lighting, Philippa cannot distinguish which is which.

Now, imagine that Philippa thinks to herself, “Suppose that the book were on the floor.” Under this (subjunctive) supposition, what should she accept regarding the location of the magazine? Well, if some ‘local miracle’ occurred that resulted in the book being on the floor, this would not result in a change regarding the location of the magazine. Thus, her judgment regarding the magazine’s location in the suppositional context should remain unchanged from the categorical one, and she should accept that it is either on the table or the floor without accepting either individual disjunct. But, this is not what AGM would recommend. Let B and M respectively be the propositions ‘the book is on the floor’ and ‘the magazine is on the floor’. For simplicity, let Philippa’s beliefs include only B = Cn(B ↔ ¬M) to capture her belief that exactly one of the two is on the table. Then, since B is consistent with B, we get ¬M ∈ B_B, and so AGM revision would recommend that she accept that the magazine is not on the floor. This is clearly the wrong result.
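The contrast between the two recommendations can be sketched at the level of worlds, with belief sets represented by their sets of models. The ‘local miracle’ selection function below, which changes only the book’s position, is our own illustrative choice.

```python
# The book/magazine case, modelled over worlds (book_on_floor, mag_on_floor).
# Philippa's belief: exactly one of the two is on the floor.
W = [(True, False), (False, True), (True, True), (False, False)]
belief_worlds = {(True, False), (False, True)}
S = {w for w in W if w[0]}               # 'the book is on the floor'

def revise(worlds, prop):
    """AGM-style revision at the level of worlds: keep the prop-worlds
    among the belief worlds when there are any (the consistent case)."""
    keep = worlds & prop
    return keep if keep else prop

def update(worlds, prop, closest):
    """KM-style update: shift each belief world to its closest prop-world."""
    return {closest(w, prop) for w in worlds}

def closest(w, prop):
    # A plausible 'local miracle' selection: change only the book's position.
    return w if w in prop else (not w[0], w[1])

rev = revise(belief_worlds, S)           # {(True, False)}: magazine NOT on floor
upd = update(belief_worlds, S, closest)  # {(True, False), (True, True)}: undecided
print(rev, upd)
```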

Cases like these motivated computer science and artificial intelligence researchers to develop alternative belief change operations, known as updates.16 Katsuno and Mendelzon (1992) introduced postulates axiomatising their update operation in similar fashion to the AGM postulates for revision.17 These postulates are formulated below, where saying that B is complete means that B is a singleton (or equivalently that either XB or ¬XB for any sentence X ).

( 0) B_S = Cn(B_S) Closure
( 1) S ∈ B_S Success
( 2) If S ∈ B then B_S = B Stability
( 3) If ⊥ ∉ B and ⊬ ¬S, then ⊥ ∉ B_S Consistency Preservation
( 4) If S ≡ S′, then B_S = B_{S′} Extensionality
( 5) B_{S∧S′} ⊆ Cn(B_S ∪ {S′}) Chernoff
( 6) If S′ ∈ B_S and S ∈ B_{S′}, then B_S = B_{S′} Reciprocity
( 7) If B is complete, then B_{S∨S′} ⊆ Cn(B_S ∪ B_{S′}) Primeness
( 8) If B = B′ ∩ B″, then B_S = B′_S ∩ B″_S Compositionality

Some of these postulates are familiar from the AGM postulates, while some are new. Closure ( 0), Success ( 1), Extensionality ( 4), and Chernoff18 ( 5) are respectively identical to 1, 2, 6, and 7 from earlier. Stability ( 2) and Consistency Preservation ( 3) are each weakened versions of requirements familiar from AGM. Stability ( 2) says that whenever an agent takes one of their beliefs as a supposition, the set of suppositionally acceptable propositions should comprise exactly their beliefs. This is equivalent to 4w together with a version of 3 weakened to only apply when S ∈ B. Just as we think that 3 is unimpeachable, so too is its weakened version. On the other hand, 4w is not on such firm footing. We already saw that this can fail for LIS.19 Consistency Preservation ( 3) offers a weaker consistency requirement than is imposed by 5 and only applies when both the corpus and the supposition are each individually consistent.

The next two postulates are new. Reciprocity ( 6) corresponds to the widely discussed (CSO) axiom of conditional logics. This requirement says that if S′ is acceptable under the supposition that S and vice versa, then S and S′ generate the same suppositional judgments. Herzig (1998: 127–28) shows that, given 1, 5, and , 6 implies 2. Since these three postulates are relatively innocuous, any reservations about 2 carry over to 6. Primeness ( 7) can be seen as the requirement that when an opinionated agent supposes a disjunction, their suppositional judgements should satisfy one of its disjuncts. This principle seems appropriate when using a finite language (as in the present case), where we are guaranteed a witness for the truth of a disjunction. It may be less desirable when the language is infinite and there is no such guarantee.

This brings us to KM update’s characteristic postulate, Compositionality20 ( 8), which provides the basis for regarding update as an operation of ‘local belief change’. This is made perspicuous by considering the limiting case in which B = {w1, w2, …, wn} and Bi = {wi}, where we see that 8 implies that

B_S = (B1)_S ∩ (B2)_S ∩ ⋯ ∩ (Bn)_S
Thus, when an agent supposes that S, she should thereby accept each sentence that would be common to the suppositional judgements recommended for each of the opinionated (viz. complete) belief sets that are consistent with her beliefs. Just as we saw with imaging, the overall set of suppositional judgments is defined as a function of the suppositional judgments that would be given at each world consistent with the agent’s opinions. This point has been made in slightly different terms by Pearl (2000: 242). He observes a parallel between 8 and the fact—established by Gärdenfors (1988: 113)—that imaging “preserves mixtures”. That is, if a probability function Pr is a mixture of Pr′ and Pr″, then Pr_S is a mixture of Pr′_S and Pr″_S. Put more carefully, Gärdenfors’s result shows us that every imaging operator satisfies the condition that if Pr(X) = αPr′(X) + (1−α)Pr″(X), then Pr_S(X) = αPr′_S(X) + (1−α)Pr″_S(X). The structural similarity between this condition and 8 helps further reinforce the connection between update and imaging.
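Mixture preservation is just the linearity of imaging: each world sends its mass to its selected S-world, so imaging a mixture is the mixture of the imagings. A minimal numerical check, with an arbitrary selection function of our own choosing:

```python
# Gärdenfors's observation: imaging commutes with mixing.  If Pr = α·Pr' + (1−α)·Pr'',
# then imaging all three on S yields Pr_S = α·Pr'_S + (1−α)·Pr''_S.
# Toy model: four worlds, S = {0, 1}, with an arbitrary selection function.
S = {0, 1}
sigma = {0: 0, 1: 1, 2: 0, 3: 1}       # closest S-world for each world

def image(pr):
    out = {w: 0.0 for w in pr}
    for w, p in pr.items():
        out[sigma[w]] += p              # each world sends its mass to its closest S-world
    return out

pr1 = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}
pr2 = {0: 0.4, 1: 0.1, 2: 0.25, 3: 0.25}
a = 0.3
mix = {w: a * pr1[w] + (1 - a) * pr2[w] for w in pr1}

lhs = image(mix)
rhs = {w: a * image(pr1)[w] + (1 - a) * image(pr2)[w] for w in pr1}
assert all(abs(lhs[w] - rhs[w]) < 1e-12 for w in pr1)
```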

Lastly, observe that, as we saw with AGM, the KM postulates encode synchronic, diachronic, and dispositional coherence requirements. The synchronic requirements are given by 0 and 1, the diachronic ones by 2 and 3, and the dispositional requirements are found in the remaining postulates 4 – 8.

4.1. LSS and the KM Postulates

We now proceed to consider how LSS relates to the KM postulates from above. Beginning with the general case where no further constraints are imposed on LSS, we establish which of the KM postulates are satisfied by LSS. As recorded in the proposition below, LSS is guaranteed to satisfy five of the KM postulates: Success ( 1), Consistency Preservation ( 3), Extensionality ( 4), Chernoff ( 5), and Primeness ( 7).

Proposition 4. LSS must satisfy  1, 3, 4, 5 and  7. That is, each of the following is satisfied for any  B, S, S′  and  t ∈ (1/2, 1]:

  1. S ∈ B_S

  2. If  ⊥ ∉ B  and  ⊬ ¬S, then  ⊥ ∉ B_S

  3. If  S ≡ S′, then  B_S = B_{S′}

  4. B_{S∧S′} ⊆ Cn(B_S ∪ {S′})

  5. If  B  is complete, then  B_{S∨S′} ⊆ Cn(B_S ∪ B_{S′})

Proof. Proceeding sequentially:

  1. Simply recall that c_S(S) = 1 to infer S ∈ B_S and, thus, conclude that LSS must satisfy 1.             

  2. First, suppose that B and ¬S . Next, note that whenever B is consistent, if wB , then c(w)>1t . We prove the contrapositive by first supposing that BS is inconsistent, i.e. BS . That implies that for any wW , ¬wBS and hence S¬wB . But, since S is consistent, there is no w such that wS¬w for every wW , and therefore B is also inconsistent.              

  3. Suppose that SS . This implies that σ(w,S)S just in case σ(w,S)S . By Uniformity we get σ(w,S)=σ(w,S) and conclude BS=BS . So, LSS must satisfy 4.             

  4. To show that LSS must satisfy 5, first suppose XBYZ so that cYZ(X)t . Now, we show that if σ(w,YZ)X , then σ(w,Y)ZX . To do so, we assume that σ(w,YZ)X . Then, either σ(w,Y)Z or σ(w,Y)¬Z . In the first case, we may infer σ(w,Y)=σ(w,YZ) , and by our assumption that σ(w,Y)X , we conclude σ(w,Y)ZX . In the second case, σ(w,Y)¬Z and so σ(w,Y)ZX . So either way σ(w,Y)ZX as desired. Applying the definition of imaging gives us

    cYZ(X)=wW:σ(w,YZ)  Xc(w)  and  cY(ZX)=wW:σ(w,Y)  ZXc(w),

    which imply cY(ZX)cYZ(X)t . From this we may then infer ZXBY and thus conclude that XCn(BY{Z}) .             

  5. We begin by supposing that B is complete, which means that there is a unique world satisfying all propositions in B; call this wB. This implies that c(wB) ≥ t > 1/2. Now, let X ∈ B_{S∨S′} and infer c_{S∨S′}(X) ≥ t. Since c(wB) ≥ t and c_{S∨S′}(X) ≥ t, it must be that σ(wB, S∨S′) ⊨ X. Clearly σ(wB, S∨S′) must satisfy either S or S′. Assuming the former, we infer σ(wB, S) = σ(wB, S∨S′) ⊨ X, and thus c_S(X) ≥ t and so X ∈ B_S. The same reasoning suffices for the latter. Thus, we infer X ∈ Cn(B_S ∪ B_{S′}) to conclude that LSS must satisfy 7.             

Most of these results will not be unexpected. Success ( 1) should be validated by any plausible account of supposition, while Extensionality ( 4) will hold in any non-hyperintensional account like LSS. The generalisation of ( 3) given by Chernoff ( 5) holds in virtue of the fact that the probability of a material conditional cannot be less than the probability of its consequent. The satisfaction of Primeness ( 7) is intuitive, since if B is complete it should already decide either S or S and updating by their disjunction should not result in more propositions being accepted than by either disjunct. The only result that is remotely surprising is that LSS satisfies Consistency Preservation ( 3). Lockean accounts typically struggle to satisfy consistency requirements. So, it is interesting to note that LSS will not lead you to an inconsistent set of suppositional judgments when your beliefs are consistent.
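The fact underlying Chernoff here, that a material conditional is always at least as probable as its consequent, can be spot-checked by brute force over random credence assignments:

```python
import random

# Property check: c(S ⊃ X) >= c(X), since the X-worlds are a subset of the
# (¬S ∨ X)-worlds.  Worlds in the order S∧X, S∧¬X, ¬S∧X, ¬S∧¬X.
random.seed(1)
for _ in range(1000):
    raw = [random.random() for _ in range(4)]
    tot = sum(raw)
    sx, snx, nsx, nsnx = (r / tot for r in raw)
    c_X = sx + nsx
    c_mat = sx + nsx + nsnx          # S ⊃ X fails only at the S∧¬X world
    assert c_mat >= c_X - 1e-12
```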

We turn now to the remaining KM postulates: Closure ( 0), Stability ( 2), Reciprocity ( 6) and Compositionality ( 8). When no additional restrictions are imposed, LSS can violate each as shown below.

Proposition 5. LSS can violate  0, 2, 6, and  8. That is, each of the following is possible:

  1. B_S ≠ Cn(B_S)

  2. S ∈ B, but  B_S ≠ B

  3. S′ ∈ B_S  and  S ∈ B_{S′}, but  B_S ≠ B_{S′}

  4. B = B′ ∩ B″, but  B_S ≠ B′_S ∩ B″_S

Proof. Proceeding sequentially:

  1. To see that LSS can violate 0, simply recall that Lockean accounts generally permit violations of deductive closure, as demonstrated in the Lottery Paradox.             

  2. A counterexample showing that LSS can violate 2 for any t ∈ (1/2, 1) is generated by the assignments provided in the table below, where ε > 0 is arbitrarily small.

    W | φ | c(φ) | c_S(φ)
    w1 | S∧X | t−ε | 1−ε
    w2 | S∧¬X | ε | ε
    w3 | ¬S∧X | 0 | 0
    w4 | ¬S∧¬X | 1−t | 0

    σ(w1, S) = w1    σ(w2, S) = w2    σ(w3, S) = w2    σ(w4, S) = w1

    It is easy to see that S ∈ B (since c(S) = t), while X ∉ B but X ∈ B_S, so B_S ≠ B.             

  3. Our counterexample showing that LSS can violate 6 proceeds by assuming that W contains the following six possible worlds.

    w1SSX                  w2SS¬X           w3S¬SXw4S¬S¬X          w5¬SSX           w6¬SS¬X

    Now, let c be such that c(w3)=c(w4)=c(w5)=c(w6)=14 and select any σ such that

    σ(w3,S)=σ(w4,S)=w1          σ(w5,S)=σ(w6,S)=w2.

    This gives us cS(S)=cS(S)=1,  cS(X)=0 and cS(X)=1 , which implies that SBS and SBS , but XBS and XBS . Note that the choice of t played no role here and this suffices as a counterexample to the postulate for any t(12,1] .             

  4. To build a counterexample showing that LSS can violate 8, fix some threshold t , let ε>0 be arbitrarily small, and let n be such that tεn31t . Then where W={w1,,wn} , let σ(wi,wn1wn)=wn1 for in3 and σ(wn2,wn1wn)=wn . The credence functions c,c , and c are defined piecewise below.

    c(wi):={tεn3in31t+εi=n20otherwise      c(wi):={1i=n20otherwise      c(wi):={1ti=1ti=n20otherwise

    Let B , B , B be the Lockean belief sets corresponding to c,  c,  c , respectively. It is easy to see that B=B=B=BB={wn2} . Imaging each of these credence functions on wn1wn results in the following assignments.

    cwn1wn(wn1)=tε        cwn1wn(wn1)=0       cwn1wn(wn1)=1t     cwn1wn(wn)=1t+ε      cwn1wn(wn)=1          cwn1wn(wn)=t

    Thus, we see that BS={wn1,wn}BSBS={wn} .             

The first three of these results are expected. As Lockean accounts generally fail to require Cogency, we find that LSS similarly may violate 0. We also see that LSS can violate 2. This postulate is equivalent to the conjunction of 3 and 4. Recall that LIS violated the latter and we find similar behaviour with LSS. Next, the fact that LSS can violate 6 is somewhat obvious. The violation of 8 is somewhat more surprising. As we briefly discussed earlier, 8 is deeply connected to the idea that update proffers a form of ‘local belief change’; and, as we have mentioned, Lewis presents imaging as a method for updating credences by a local dynamics. But, as we will see in the next section, all is not lost.
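The Stability counterexample above can also be verified by implementing imaging directly. The sketch below reads the proof’s table as c(w1) = t−ε, c(w2) = ε, c(w3) = 0, c(w4) = 1−t and instantiates it at t = 0.8, ε = 0.01 (our own choice of values).

```python
# Instantiating the Stability counterexample: w1 ⊨ S∧X, w2 ⊨ S∧¬X,
# w3 ⊨ ¬S∧X, w4 ⊨ ¬S∧¬X, with σ mapping w3 ↦ w2 and w4 ↦ w1.
t, eps = 0.8, 0.01
c = {'w1': t - eps, 'w2': eps, 'w3': 0.0, 'w4': 1 - t}
sigma = {'w1': 'w1', 'w2': 'w2', 'w3': 'w2', 'w4': 'w1'}  # closest S-world

def image(cred_fn, sel):
    out = {w: 0.0 for w in cred_fn}
    for w, p in cred_fn.items():
        out[sel[w]] += p
    return out

c_S = image(c, sigma)
S = {'w1', 'w2'}
X = {'w1', 'w3'}
cred = lambda f, prop: sum(p for w, p in f.items() if w in prop)

assert cred(c, S) >= t            # S is believed outright ...
assert cred(c, X) < t             # ... X is not believed ...
assert cred(c_S, X) >= t          # ... yet X is accepted under the supposition S
```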

4.2. Closure under the Stalnaker Conditional Yields Convergence of LSS and KM

When we considered the relationship between the indicative theories given by LIS and AGM, we also saw divergences in the general case—most notably, LIS could violate AGM’s characteristic postulate 4. However, we also saw that the two could be made to converge so long as we assume Cogency and a sufficiently low Lockean threshold. We might then wonder whether there is a similar path towards convergence between LSS and KM.

As we will soon see, there is such a path. However, the requirements involved in establishing convergence between LSS and KM are different. In this case, neither restrictions on the Lockean threshold nor standard Cogency will suffice. Instead, we will augment Cogency with the additional requirement that B is closed under the Stalnaker conditional ( > ). But this will take some work since our language does not officially include >. To deal with this, we will augment our finite propositional language to the “flat fragment” of the language extended with the Stalnaker conditional. That is, we introduce > into the language’s signature, which only adds conditional sentences of the form X > Y, where X and Y are conditional-free. The statement of Cogency remains unchanged from earlier. However, the type of logical consequence used in the expression of its requirements (Cn) is richer. We let ‘Cogency⁺’ refer to the stronger requirement that results from imposing Cogency with the richer language. At this stage, there are two important observations to make. Firstly, it is well known that the probability of the Stalnaker conditional X > Y is given by the probability of Y after imaging on X, c_X(Y). Thus, the conditions under which Stalnaker conditionals are believed are clear: X > Y ∈ B iff c_X(Y) ≥ t. Second, observe that the Stalnaker conditional satisfies modus ponens, i.e.  X > Y, X ⊨ Y. This means that Cogency⁺ requires that X > Y ∈ B and X ∈ B imply Y ∈ B.
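To see the first observation in action, here is a toy computation (worlds are (X, Y) pairs, and the selection function is our own arbitrary choice) showing that the imaged probability c_X(Y), and hence the credence in X > Y, can come apart from the conditional probability c(Y|X):

```python
# The credence in a Stalnaker conditional X > Y equals the imaged probability
# c_X(Y), which can differ from the conditional probability c(Y|X).
c = {(True, True): 0.2, (True, False): 0.2,
     (False, True): 0.3, (False, False): 0.3}
sigma = {(False, True): (True, True), (False, False): (True, True)}
closest = lambda w: w if w[0] else sigma[w]   # X-worlds are their own closest X-world

c_X_img_Y = sum(p for w, p in c.items() if closest(w)[1])   # c_X(Y), i.e. c(X > Y)
c_Y_given_X = c[(True, True)] / (c[(True, True)] + c[(True, False)])

assert abs(c_X_img_Y - 0.8) < 1e-9
assert abs(c_Y_given_X - 0.5) < 1e-9
```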

Surprisingly, we find that in this richer environment where we have Cogency⁺, LSS satisfies all of the KM postulates. We have already shown that LSS will always satisfy 1, 4, 5, and 7; it is straightforward to see that Propositions 4 and 5 will carry over to this richer environment. So, it remains only to show that, given Cogency⁺, the remaining postulates are all satisfied.

Proposition 6. Assuming Cogency⁺, LSS must satisfy  0, 2, 6, and  8. That is, for any  c  and t ∈ (1/2, 1], if  B  satisfies Cogency⁺, then:

  1. B_S = Cn(B_S)

  2. If  S ∈ B  then  B = B_S

  3. If  S′ ∈ B_S  and  S ∈ B_{S′}, then  B_S = B_{S′}

  4. If  B = B′ ∩ B″, then  B_S = B′_S ∩ B″_S

Proof. As before, we proceed sequentially, where Cogency is taken as a standing assumption:

  1. It is an immediate consequence of Cogency that LSS satisfies 0.             

  2. Suppose that SB to show that BBS . Let XB and by Cogency infer SXB . This implies c(SX)t . Since imaging on S won’t lower the probability of any SX worlds, it follows that cS(X)t and thus XBS . For the other direction, let XBS so that cS(X)t and hence SXB . By Cogency , we get XB as desired and thus conclude that LSS now satisfies 2.             

  3. Suppose that SBS and SBS . This gives us cS(S)t and cS(S)t , from which we infer that SSB and SSB and hence SSB . Now, letting XBS , we infer SXB . By Uniformity and Cogency , SXB and SSB jointly entail SXB . Thus we infer XBS and hence BSBS . The same argument shows the converse. Thus, given Cogency , LSS will satisfy 6.

  4. To show that LSS will now satisfy 8, let B=BB , and suppose that B , B and B are all cogent , and satisfy LTt with respect to the credence functions c , c and c . Let wBS , wBSBS . This implies that S¬wB , S¬wB,B , and hence that wBB , wS¬w , i.e. wB , wS¬w . Since B is cogent, we have


    This implies that S¬wB , which is a contradiction. So wBS implies wBSBS , i.e.  BSBSBS . Conversely, let wBS , wBSBS . For argument’s sake, let wBS . This implies that S¬wB , S¬wB . and hence that wB , wS¬w , i.e. wB , wS¬w . Since B is cogent, we have


    This implies that S¬wB , which is a contradiction. So wBSBS implies wBS , i.e.  BSBSBS .             

The results established in this section are summarised below in table 4, where we see that once Cogency⁺ is imposed LSS satisfies all of the KM postulates. Perhaps the most important observation is that, in the presence of Cogency⁺, the quantitative norms of subjunctive supposition specified by LSS cohere perfectly with the qualitative norms provided by KM. This is in stark contrast to the vexed relationship between LIS and AGM, which falls short of perfect coherence even when all relevant cogency constraints are imposed.

Table 4

LSS and the KM Postulates.

5. LIS vs. KM and LSS vs. AGM

We have now compared the most prominent extant quantitative theories of indicative and subjunctive supposition to their qualitative counterparts, and identified conditions under which the respective qualitative and quantitative accounts cohere with one another. In this section, we turn to two further comparisons: (i) between the judgments given by LIS, which are based on our quantitative indicative theory, and the qualitative subjunctive theory based on KM update; and (ii) between those given by LSS, which are based on our quantitative subjunctive theory, and the qualitative indicative theory based on AGM revision.

Our strategy will be the same as before: for (i) we determine which of the KM postulates are satisfied by LIS, and for (ii) we determine which of the AGM postulates are satisfied by LSS. Of course, these comparisons are less conceptually salient than those in Sections 3 and 4. There is no reason to expect quantitative norms of subjunctive (indicative) supposition to cohere with qualitative norms of indicative (subjunctive) supposition. Nonetheless, there are still a couple of reasons why they are worth exploring. One is simply the obvious technical interest in completeness. But, there is a more persuasive reason to consider these comparisons. As we will see, the results of these comparisons will offer a certain dialectical benefit of reinforcing our understanding of the relative importance of certain postulates to indicative and subjunctive supposition.

5.1. LIS vs. KM

We begin by cataloguing the relationship between LIS and KM. In the next two propositions, we consider the general case and establish which of the KM postulates are universally satisfied by LIS and which can be violated. In proposition 7, we see that LIS must satisfy Success ( 1), Extensionality ( 4), and Chernoff ( 5).

Proposition 7. LIS must satisfy  1, 4 and  5. That is, each of the following is satisfied for any  B, S, S′, and  t ∈ (1/2, 1]:

  1. S ∈ B_S

  2. If  S ≡ S′, then  B_S = B_{S′}

  3. B_{S∧S′} ⊆ Cn(B_S ∪ {S′})

Proof. Since these principles are identical to 2, 6, and 7, respectively, and (as we saw in Section 3) LIS satisfies each of these postulates, LIS must then also satisfy 1, 4, and 5.             

Turning now to the postulates, Closure ( 0), Stability ( 2), Consistency Preservation ( 3), Reciprocity ( 6), Primeness ( 7), and Compositionality ( 8), the following proposition establishes that each can be violated by LIS.

Proposition 8. LIS can violate  0, 2, 3, 6, 7 and  8. That is, each of the following is possible:

  1. B_S ≠ Cn(B_S)

  2. S ∈ B, but  B_S ≠ B

  3. ⊥ ∉ B  and  ⊬ ¬S, but  ⊥ ∈ B_S

  4. S′ ∈ B_S  and  S ∈ B_{S′}, but  B_S ≠ B_{S′}

  5. B  is complete, but  B_{S∨S′} ⊈ Cn(B_S ∪ B_{S′})

  6. B = B′ ∩ B″, but  B_S ≠ B′_S ∩ B″_S

Proof. Proceeding sequentially:

  1. This is immediate since 0 is identical to 1, which LIS can violate.             

  2. Simply observe that 2 implies 4w, which can be violated by LIS.             

  3. To show that LIS can violate 3, consider the following counterexample. For arbitrary t ∈ (1/2, 1), let n be such that 1/(n−1) ≤ 1−t, let W = {w1, …, wn}, and let ε > 0 be arbitrarily small. Finally, let c be given by c(w1) = 1−ε and c(wi) = ε/(n−1) for i > 1. Then B = Cn({w1}), which is consistent. However, B_{¬w1} is inconsistent since for any w ∈ W,  ¬w ∈ B_{¬w1}.             

  4. To see that LIS can violate 6, first recall that LIS can violate 4w (so, it is possible that S,XB , but that there is some XB such that XBS ) and that LIS must satisfy (i.e. B=B ). Now, to find a counterexample to 6, simply find a counterexample to 4w and consider the two revisions: B and BS . By , we know that SB . And, it is trivial that BS . But, we also know that BBS and, thus, BBS .             

  5. For our counterexample to 7, set t = 17/20 and let c be defined as in the table below.


    It is straightforward to see that B, B_S, and B_{S′} all satisfy Cogency and constitute a violation of 7: first, note that B = {w4} and so B is complete; then observe that ¬(S∧S′) ∈ B_{S∨S′}, but ¬(S∧S′) ∉ Cn(B_S ∪ B_{S′})              

  6. For our counterexample to 8, fix a threshold t(12,1) and let c , c , and c be defined as in the table below, where ε>0 arbitrarily small.


    Since B = B′ = B″ = {w4}, we see that all three are complete (thus satisfying Cogency) and that B = B′ ∩ B″. Now, let S := X∨Y and inspect the table below.


    Here we find that B_S = {w2}, B′_S = {w1}, and B″_S = {w3}, and thus B_S ≠ B′_S ∩ B″_S.             
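Several of these counterexamples are easy to verify mechanically. For instance, the violation of 3 (item 3 above) can be instantiated at t = 0.8 with n = 6 worlds and ε = 0.01, values of our own choosing: conditionalizing on ¬w1 makes the remaining worlds equiprobable, so the negation of every world clears the threshold and the suppositional judgments are jointly unsatisfiable.

```python
# Instantiating the counterexample to Consistency Preservation ( 3) for LIS:
# t = 0.8, n = 6, ε = 0.01 (so that 1/(n−1) = 0.2 <= 1 − t).
t, n, eps = 0.8, 6, 0.01
c = {1: 1 - eps, **{i: eps / (n - 1) for i in range(2, n + 1)}}

# Conditionalize on ¬w1: the remaining worlds become equiprobable.
post = {i: c[i] / (1 - c[1]) for i in range(2, n + 1)}
assert abs(sum(post.values()) - 1) < 1e-9

# Under the supposition, the negation of EVERY remaining world clears the
# threshold, so the suppositional judgment set is jointly unsatisfiable.
assert all(1 - post[i] >= t - 1e-9 for i in post)
```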

Unsurprisingly, these results show that in general LIS may significantly diverge from the KM postulates. However, we might wonder whether additional constraints can be imposed to bring them closer together. Although we will see that they can become much closer in their behaviour, there is no obvious way to get LIS to satisfy all of the KM postulates. In the proposition below, we show that assuming Cogency recovers 0, 2, 3, and 6. Nonetheless, as foreshadowed in the proofs above for 7 and 8, Cogency is not sufficient to ensure that those two postulates are satisfied by LIS.

Proposition 9. Assuming Cogency, LIS must satisfy  0, 2, 3, and  6. That is, assuming Cogency, all of the following are satisfied for any  B , S , S , and  t(12,1] :

  1. B_S = Cn(B_S)

  2. If  S ∈ B, then B_S = B

  3. If  ⊥ ∉ B  and  ⊬ ¬S, then  ⊥ ∉ B_S

  4. If  S′ ∈ B_S  and  S ∈ B_{S′}, then B_S = B_{S′}

Proof. Taking Cogency as a standing assumption, we proceed sequentially:

  1. This is immediate from the assumption of Cogency.             

  2. Here, the satisfaction of 2 follows from its equivalence with the conjunction of 3 and 4w. As we saw earlier, LIS always satisfies 3, while Cogency suffices for LIS to satisfy 4w.             

  3. This is immediate from the assumption of Cogency.             

  4. Let B be cogent and let SBS , SBS and XBS . Since BS is cogent, it follows that SXBS , and hence that c(SX|S)t . It is easy to see that c(SX|S)t implies c(¬SX|S)t . Therefore, from SXBS , it follows that ¬SXBS . Now, from Cogency and SBS , it follows that XBS , and hence that BSBS . The other direction can be proved in analogous fashion.             

Interestingly, the following proposition demonstrates that by further adopting a sufficiently low threshold of t ∈ (1/2, φ−1], we are able to recover 7 (though this is insufficient to recover 8).

Proposition 10. Assuming Cogency, LIS must satisfy  7 just in case  t ∈ (1/2, φ−1].

Proof. Assume Cogency and that t ∈ (1/2, φ−1]. We begin by observing that 7 holds whenever S∨S′ is consistent with B: if B is cogent and complete, then S∨S′ is consistent with B iff S ∈ B or S′ ∈ B. But, since LIS satisfies 4w provided Cogency and t ∈ (1/2, φ−1], this means that B_{S∨S′} = B_S or B_{S∨S′} = B_{S′}. Either way, LIS satisfies 7. So, it remains to check the case where S∨S′ is inconsistent with B. For this case, let our algebra contain the following worlds:

w1 ⊨ S∧S′       w2 ⊨ S∧¬S′       w3 ⊨ ¬S∧S′        w4 ⊨ ¬S∧¬S′

Assuming the antecedent that B is complete (together with our assumption that SS is inconsistent with B ) gives us B={w4} , which in turn implies


Now, suppose for reductio that BSSCn(BSBS) . This implies that BSSBSBS . But, that can only be the case when w1BSS . Thus, we infer


Using 1 and simplifying, we get c(w1)12t+t2 . Recalling that tϕ1 iff t21t , we infer c(w1)23t . Plugging this value back into 1 gives us c(w2)+c(w3)2t2 . With 1 and 2, this gives us 2t21tt , which simplifies to t2+t2 . But, since t2+t1 iff tϕ1 , this contradicts our assumption that t(12,ϕ1] .             

At this stage, we would like to direct the reader’s attention to a few salient aspects of the results presented in this section. First, it is noteworthy that the conditions which ensure that LIS satisfies 7 are exactly the conditions which ensure LIS satisfies 4 and 8 (and, thus, all of the AGM postulates). On the face of it, this may seem surprising. However, those familiar with the literature may recall that the KM postulate 7 stands in a special relationship to the AGM postulates 7 and 8. Gärdenfors (1988: 57) showed that given the basic postulates, 1 – 6, the conjunction of the two supplementary postulates, 7 and 8, is equivalent to the ‘factoring’ condition stated below.

( V) Either (i) B_{A∨B} = B_A or (ii) B_{A∨B} = B_B or (iii) B_{A∨B} = B_A ∩ B_B Factoring

It is simple to see that V implies B_{S∨S′} ⊆ Cn(B_S ∪ B_{S′}) and, thus, as a corollary we see that taken together 1 – 8 imply 7. Since LIS satisfies all of the AGM postulates provided Cogency and t ∈ (1/2, φ−1], it follows that LIS satisfies 7 under the same conditions.

Secondly, it is worth noting explicitly that 8 is the only KM postulate that LIS can violate for any choice of Lockean threshold even under the Cogency assumption.21 This reinforces the already prevalent impression that 8 is in some sense the most distinctive and characteristic KM postulate when it comes to distinguishing between the kinds of belief change embodied by the KM and AGM postulates, respectively.

Finally, it is also worth making explicit the observation, entailed by the preceding analysis, that while there are certain (highly restrictive) conditions under which LIS perfectly coheres with the qualitative norms given by AGM belief revision, there are no similar conditions which ensure coherence of LIS with the qualitative norms given by the KM theory of belief update.

Table 5

LIS and the KM Postulates.

5.2. LSS vs. AGM

We turn now to the second ‘diagonal’ comparison between the theories featuring in table 1. Specifically, we focus now on identifying points of coherence and divergence between the quantitative norms of subjunctive supposition enshrined in LSS and the qualitative norms of indicative supposition encoded in the AGM postulates. Again, we begin with the most general case. Proposition 11 establishes which of the AGM postulates are universally satisfied by LSS, while Proposition 12 reports the divergences.

Proposition 11. LSS must satisfy ∗2, ∗3, ∗6, and ∗7. That is, each of the following is satisfied for any B, S, S′, and t ∈ (1/2, 1]:

  1. S ∈ B_S

  2. B_S ⊆ Cn(B ∪ {S})

  3. If S ≡ S′, then B_S = B_{S′}

  4. B_{S∧S′} ⊆ Cn(B_S ∪ {S′})

Proof. Proceeding sequentially:

  1. In Proposition 4, we saw that LSS satisfies ⋄1, which is identical to ∗2.

  2. Let X ∈ B_S. Then S > X ∈ B, which implies X ∈ Cn(B ∪ {S}), and thus LSS satisfies ∗3.

  3. In Proposition 4, we saw that LSS satisfies ⋄4, which is identical to ∗6.

  4. Let X ∈ B_{S∧S′}. Then c_{S∧S′}(X) ≥ t, i.e.


    Next, note that


    Furthermore, for any w ∈ W, σ(w, S) ⊨ ¬(S′ ⊃ X) iff σ(w, S) ⊨ S′ ∧ ¬X. This in turn entails, by Uniformity, that σ(w, S) = σ(w, S∧S′), and hence that σ(w, S∧S′) ⊨ ¬X. So c_S(¬(S′ ⊃ X)) ≤ c_{S∧S′}(¬X) ≤ 1 − t. So c_S(S′ ⊃ X) ≥ t and S′ ⊃ X ∈ B_S, as desired.
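Clause 4 can also be spot-checked computationally. In the sketch below (our own illustrative model, not the paper’s), worlds are truth-value triples for S, S′, and X; each world’s selection function is generated from a fixed similarity ranking, which guarantees Uniformity; and imaging shifts each world’s mass to its selected world. The check confirms that, for every proposition X with c_{S∧S′}(X) ≥ t, we also have c_S(S′ ⊃ X) ≥ t.

```python
from itertools import combinations, product

# Worlds are truth-value triples (S, S', X); c is a uniform illustrative prior.
WORLDS = list(product([True, False], repeat=3))
c = {w: 1.0 / len(WORLDS) for w in WORLDS}
t = 0.6  # illustrative Lockean threshold in (1/2, 1]

def sigma(w, P):
    """Selection via a fixed similarity ranking (w itself first), so that
    sigma(w, S) |= S' entails sigma(w, S) = sigma(w, S & S'): Uniformity."""
    for v in [w] + [u for u in WORLDS if u != w]:
        if v in P:
            return v

def image(cf, P):
    """Credence after imaging on proposition P (a set of worlds)."""
    cP = {w: 0.0 for w in WORLDS}
    for w, p in cf.items():
        cP[sigma(w, P)] += p
    return cP

def cr(cf, P):
    return sum(cf[w] for w in P)

S, S2 = {w for w in WORLDS if w[0]}, {w for w in WORLDS if w[1]}
c_S, c_SS2 = image(c, S), image(c, S & S2)

# *7 for LSS: X in B_{S∧S'} implies (S' ⊃ X) in B_S, for every proposition X.
for r in range(len(WORLDS) + 1):
    for X in map(set, combinations(WORLDS, r)):
        if cr(c_SS2, X) >= t:
            assert cr(c_S, (set(WORLDS) - S2) | X) >= t
print("no counterexample to *7 in this model")
```

Because any ranking-generated selection function satisfies Uniformity, the assertion inside the loop is exactly the inequality chain of the proof, so the exhaustive sweep over all 256 propositions must succeed.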

Proposition 12. LSS can violate ∗1, ∗4, ∗5, and ∗8. That is, each of the following is possible:

  1. B_S ≠ Cn(B_S)

  2. ¬S ∉ B, but B ⊈ B_S

  3. ⊬ ¬S, but B_S is inconsistent

  4. ¬S′ ∉ B_S, but Cn(B_S ∪ {S′}) ⊈ B_{S∧S′}

Proof. Proceeding sequentially:

  1. This is immediate from the fact that Lockean agents can violate closure requirements.             

  2. To show that LSS can violate Preservation for any threshold t ∈ (1/2, 1], even when we assume Cogency⋄, let t ∈ (1/2, 1] and suppose that σ(w4, S) = w1 and that c is as defined below.


    This yields the prior belief set B = Cn({¬X}) and the suppositional judgement set B_S = Cn({S}), both satisfying Cogency⋄.22 But then we see that ¬S ∉ B, ¬X ∈ B, and ¬X ∉ B_S, since c_S(¬X) = 1/2 < t.

  3. This is immediate from the fact that Lockean agents can violate consistency requirements.             

  4. To see this, let W contain the following worlds.

    w1 ⊨ ¬S ∧ S′ ∧ ¬X      w2 ⊨ ¬S ∧ ¬S′ ∧ ¬X       w3 ⊨ S ∧ S′ ∧ X       w4 ⊨ S ∧ ¬S′ ∧ X       w5 ⊨ S ∧ S′ ∧ ¬X

    Now, let c be such that c(w1) = c(w2) = 1/2, so we have B = Cn({¬S ∧ ¬X}). Now, let σ satisfy the conditions below.

    σ(w1, S) = σ(w1, S∧S′) = w3          σ(w2, S) = w4           σ(w2, S∧S′) = w5

    This gives us c_S(w3) = c_S(w4) = 1/2 and c_{S∧S′}(w3) = c_{S∧S′}(w5) = 1/2, which respectively yield B_S = Cn({S ∧ X}) and B_{S∧S′} = Cn({S ∧ S′}). All three belief sets, B, B_S, and B_{S∧S′}, are cogent. Thus, it is clear that (i) S′ is consistent with B_S, (ii) X ∈ Cn(B_S ∪ {S′}), and (iii) X ∉ B_{S∧S′}, which gives us the desired counterexample to ∗8.

Clearly, the violations of ∗1 and ∗5 noted in Proposition 12 are straightforwardly remedied by the assumption of Cogency⋄. However, the violation of AGM’s distinctive ∗4 postulate does not disappear under the Cogency⋄ assumption, and limiting the range of available Lockean thresholds doesn’t help either. Thus, just as ⋄8 is the one KM postulate that is universally violated by LIS (for all thresholds, and even given the relevant cogency assumption), ∗4 is the one AGM postulate that is universally violated by LSS (for all thresholds, and even given the relevant cogency assumption). Again, this reinforces the already prevalent impression that just as ⋄8 is the most characteristic norm of subjunctive supposition encoded in the KM postulates, ∗4 is the most characteristic norm of indicative supposition encoded in the AGM postulates.

Before concluding, we turn briefly to investigating whether, and under what conditions, LSS satisfies the weakenings of ∗4 discussed in Section 3.

Proposition 13. LSS can violate ∗4w for any Lockean threshold t ∈ (1/2, 1). That is, for any t ∈ (1/2, 1), it is possible to have S ∈ B even though B ⊈ B_S.

Proof. To see this, consider the following credence function, where ε>0 is arbitrarily small, and let σ(w3,S)=σ(w4,S)=w2 .


Then c(S), c(X) ≥ t and c_S(X) < t, which gives us S, X ∈ B but X ∉ B_S.
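Since the displayed credence function has been lost in this version, here is one concrete assignment consistent with the construction (worlds as (S, X) pairs, with σ(w3, S) = σ(w4, S) = w2 as in the text; the particular t and ε are our illustrative choices).

```python
# Worlds as (S, X) truth-value pairs; sigma(w3, S) = sigma(w4, S) = w2.
w1, w2, w3, w4 = (True, True), (True, False), (False, True), (False, False)

t, eps = 0.8, 0.05  # t arbitrary in (1/2, 1); eps arbitrarily small
c = {w1: t - eps, w2: eps, w3: eps, w4: 1 - t - eps}  # reconstructed credences

# Imaging on S sends the mass of w3 and w4 to w2; S-worlds keep theirs.
c_S = {w1: c[w1], w2: c[w2] + c[w3] + c[w4], w3: 0.0, w4: 0.0}

def cr(cf, P):
    return sum(cf[w] for w in P)

S, X = {w1, w2}, {w1, w3}

assert cr(c, S) >= t and cr(c, X) >= t  # S, X ∈ B
assert cr(c_S, X) < t                   # X ∉ B_S, so B ⊄ B_S although S ∈ B
print("*4w violated at t =", t)
```

Here c(S) = c(X) = t exactly, while imaging strips X of the mass it borrowed from w3, leaving c_S(X) = t − ε < t.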

Proposition 14. Assuming Cogency, LSS satisfies ∗4w. That is, S ∈ B entails B ⊆ B_S when we assume Cogency.

Proof. To see this, let S, X ∈ B. By Cogency, S ∧ X ∈ B and hence c(S ∧ X) ≥ t, which entails c_S(X) ≥ t and hence X ∈ B_S.

So, just as LIS can in general violate ∗4w but satisfies it in the presence of Cogency, LSS does the same. Turning to its weaker cousin (∗4v), where we saw some interesting behaviour from LIS with respect to the Golden Threshold, we also find some interesting threshold-related behaviour. Specifically, the proposition below establishes that just as LIS satisfies ∗4v when t > φ−1, LSS satisfies ∗4v when t > 2/3.

Proposition 15. LSS satisfies ∗4v for all and only Lockean thresholds t ∈ (2/3, 1]. That is, it is possible to have S, X ∈ B with ¬X ∈ B_S if and only if t ≤ 2/3.

Proof. Let t > 2/3 and assume that X, S ∈ B. By the assumption, we know that c(S), c(X) > 2/3, which implies c(S ∧ X) > 1/3. Since imaging by S does not decrease the probability of any S-worlds, we infer c_S(S ∧ X) > 1/3 and, thus, c_S(X) > 1/3, which in turn implies c_S(¬X) < 2/3 < t. So ¬X ∉ B_S, as desired. For the other direction, let t ≤ 2/3 and let W = {w1, w2, w3, w4} be as given below.

w1 ⊨ S ∧ X         w2 ⊨ S ∧ ¬X          w3 ⊨ ¬S ∧ X           w4 ⊨ ¬S ∧ ¬X

Since t ≤ 2/3, the credence function satisfying the conditions c(w1) = c(w2) = c(w3) = t/2 and c(w4) = 1 − 3t/2 is probabilistic. Since c(S), c(X) ≥ t, we have S, X ∈ B. Now suppose that σ(w3, S) = σ(w4, S) = w2. Then c_S(¬X) = c(w2) + c(w3) + c(w4) ≥ t. So ¬X ∈ B_S, which is a violation of ∗4v.
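The credence function and selection function in this direction of the proof are fully specified, so the counterexample can be verified directly (the encoding and the particular choice t = 0.6 ≤ 2/3 are ours).

```python
# Worlds as (S, X) pairs, as in the text; sigma(w3, S) = sigma(w4, S) = w2.
w1, w2, w3, w4 = (True, True), (True, False), (False, True), (False, False)

t = 0.6  # any t <= 2/3 works
c = {w1: t / 2, w2: t / 2, w3: t / 2, w4: 1 - 3 * t / 2}

# Imaging on S shifts the mass of w3 and w4 onto w2.
c_S = {w1: c[w1], w2: c[w2] + c[w3] + c[w4], w3: 0.0, w4: 0.0}

def cr(cf, P):
    return sum(cf[w] for w in P)

S, X = {w1, w2}, {w1, w3}
negX = {w2, w4}

assert min(c.values()) >= 0               # probabilistic since t <= 2/3
assert cr(c, S) >= t and cr(c, X) >= t    # S, X ∈ B
assert cr(c_S, negX) >= t                 # ¬X ∈ B_S: *4v fails
print("*4v violated at t =", t)
```

Note that c(w4) = 1 − 3t/2 is non-negative exactly when t ≤ 2/3, which is why this construction is unavailable above that threshold.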

The results established in this section are summarised in table 6 below.

Table 6

LSS and the AGM Postulates.

6. Conclusion and Future Work

Recall that one of the basic aims of this paper was to systematically evaluate the claim that ‘imaging is to KM as conditionalization is to AGM’ from the perspective of a Lockean theory of belief and supposition. Below is a summary of the most significant implications of our analysis for this evaluation; an overview of all of the results from this paper can be found in the appendix.

  1. Firstly, there is a significant sense in which our analysis has vindicated the popular analogy between the relationship of conditionalization and AGM, on the one hand, and the relationship of imaging and KM, on the other. Specifically, we have shown that while there are conditions—namely, t ∈ (1/2, φ−1] and Cogency—under which LIS coheres perfectly with AGM, there are similarly conditions—Cogency⋄—under which LSS coheres perfectly with KM. However, no combination of similar conditions suffices to establish coherence between LIS and KM or between LSS and AGM.

  2. We have also identified the diachronic postulates responsible for the divergences between LIS/LSS and AGM/KM: namely, Preservation (and its generalisation Subexpansion)/Compositionality. Apart from these postulates, imposing the synchronic requirements of Cogency/Cogency⋄ paves the way for perfect coherence between LIS/LSS and AGM/KM. This offers some formal justification for the intuitive claim that Preservation and Compositionality are the most distinctive diachronic norms of qualitative indicative and subjunctive suppositional reasoning, respectively.

  3. Finally, it is worth emphasising that in the presence of the relevant cogency assumptions, LIS/LSS actually coincide on every AGM/KM postulate other than Compositionality, Preservation, and its generalisation Subexpansion. Thus, the cogency assumptions largely obscure the most central differences between LIS and LSS when it comes to qualitative norms of suppositional judgement. In the absence of cogency assumptions, the differences between LIS and LSS are far greater.

One major open problem arising from our analysis is to find sets of qualitative suppositional reasoning norms that precisely axiomatise LIS and LSS, respectively. Such axiomatisations would allow us to pinpoint the qualitative norms that are characteristic of the suppositional reasoning practices of all Lockean agents, and would constitute potentially compelling competitors to the AGM/KM postulates, which have dominated the discussion of qualitative belief change norms ever since their formulation.

A. Summary of Results

Table 7

Summary of Results.


First and foremost, we would like to single out Hans Rott for his detailed feedback and encouragement, which greatly improved the quality and depth of this paper. We are also appreciative of the valuable feedback received from Francesco Berto, Liam Kofi Bright, Peter Brössel, Catrin Campbell-Moore, Jake Chandler, Jessica Collins, Vincenzo Crupi, James Delgrande, Zoe Drayson, Kenny Easwaran, Eduardo Fermé, Tyrus Fisher, Melissa Fusco, Konstantin Genin, Nina Gierasimczuk, Remco Heesen, Gabriele Kern-Isberner, Hanti Lin, Jason Konek, Tamar Lando, Pavlos Peppas, Krzysztof Mierzewski, Julien Murzi, Richard Pettigrew, Patricia Rich, Luis Rosa, Michal Sikorski, Shawn Standefer, and Michael Titelbaum. Lastly, we are grateful to audiences at MIT, the Rutgers Foundations of Probability Seminar, the Formal Epistemology Workshop in Torino, the Australasian Association of Philosophy Conference in Wollongong, the Foundations of Belief Change Workshop at the Pacific Rim International Conference on Artificial Intelligence in Fiji, and the University of Western Australia for their helpful discussions.


  1. Note that while individual propositions can be unproblematically identified with their corresponding truth sets, the same is not true for sets of propositions, since there can exist Γ ≠ Γ′ such that ‖Γ‖ = ‖Γ′‖.
  2. While their 1985 paper, cited above, was the first full characterisation of AGM’s revision operator, this work was the fusion of two independent projects. Alchourrón and Makinson (1981; 1982) had previously been investigating the derogation and revision of legal codes, while Gärdenfors (1978; 1981) had done considerable work on conditionals and belief change.
  3. It is worth mentioning that KM is not normally presented as a theory of subjunctive supposition. One of this paper’s main contributions is a novel argument for viewing the KM axioms as qualitative rationality norms for subjunctive supposition.
  4. Although this motivation for update as a distinct process from revision is prima facie plausible, it is only satisfactory for limited applications. Friedman and Halpern (1999) have persuasively argued that there are no deep differences between these two types of operations. In particular, they show that the apparent difference between revisions and updates can be recast as a relic of the chosen language. What may be described as a dynamically changing world in one language can be redescribed as a static world using appropriate temporal indices. It may be useful to retain the distinction between revision and update in areas like computer science where there is genuine import to the language in which a database management procedure is implemented. However, in epistemology, where questions are less bound to syntactic matters, other motivation is needed. Still, we see value in the distinction when these operations are understood in terms of supposition rather than belief change.
  5. The “epistemic”/“ontic” terminology was introduced in a series of papers by Lindström and Rabinowicz (1992b; 1992a; 1998) discussing indicative and subjunctive conditionals. It is widely acknowledged that the correspondence between indicative/subjunctive conditionals and epistemic/ontic conditionals is not perfect—there are a number of cases where the two come apart; see Rott (1999b). The same is true for supposition. Still, for the purposes of this paper, we will ignore these imperfections and rely on the indicative/subjunctive terminology to capture the epistemic/ontic distinction.
  6. Ahmed (2014) provides further explanation of the difference between evidential and causal decision theory from the perspective of an evidentialist, while Joyce (1999) does so from the point of view of a causalist.
  7. Note that the Uniformity condition can also be directly motivated in terms of subjunctive supposition (and without reference to subjunctive conditionals), since failures of Uniformity imply that it is sometimes rational to (i) believe X upon subjunctively supposing Y , (ii) believe Y upon subjunctively supposing X , (iii) believe Z upon subjunctively supposing X , (iv) believe ¬Z upon subjunctively supposing Y . Although this is less obviously bizarre than the problems that Uniformity violations create for the semantics of subjunctive conditionals, it is also a puzzling and intuitively irrational form of suppositional reasoning.
  8. For generalisations of Lewis’s imaging rule that allow for more than one closest world, see Gärdenfors (1982), Joyce (1999). For a generalisation of imaging to the context of partial supposition analogous to Jeffrey’s generalisation of Bayesian conditionalization, see Eva and Hartmann (2021).
  9. Further discussion can be found in Easwaran (2016), Leitgeb (2017), Dorst (2019), Douven and Rott (2018), Schurz (2019), and Jackson (2020).
  10. For present purposes, we assume that ⊢ is the classical consequence relation; however, this is strictly speaking more than is required. In the theory’s original formulation, ⊢ can be any consistent, compact, and supraclassical consequence relation satisfying modus ponens and the deduction theorem.
  11. While we embrace this commitment for present purposes, it should be acknowledged that there is room to disagree here. One might think that suppositional judgments should be hyperintensional due to considerations of topic-sensitivity or relevance. A recent discussion of these matters in the context of AGM is available in Berto (2019).
  12. The original formulation of these postulates does not include Preservation and, instead, includes the stronger Vacuity principle, requiring that if ¬S ∉ B, then Cn(B ∪ {S}) ⊆ B_S. However, Preservation implies Vacuity in the context of Closure and Success and is preferable for both aesthetic and conceptual reasons.
  13. Their investigations into the contrasting diachronic coherence requirements of Lockeanism and AGM explored a “Lockean revision” operation, which is formally identical to the operation characterising LIS. For an alternative presentation of their results and some discussion, see Genin (2019).
  14. For further results illustrating the significance of ϕ1 for conditional reasoning in Lockean agents, see Eva (2020).
  15. This is not the only way of reconciling Lockeanism with AGM. Building on his Stability Theory of Belief, Leitgeb (2013; 2017) has recently proposed a belief revision operator satisfying the Lockean thesis and all of the AGM postulates. However, that approach comes with certain definitive costs that have been discussed in the literature; see Titelbaum (2021) for further discussion of these issues.
  16. The first account of update was given by Winslett (1988) with her “Possible Models Approach”, which built on earlier work from Ginsberg (1986) and Ginsberg and Smith (1987; 1988). Notable subsequent offerings are given in Winslett (1990), Dalal (1988), Forbus (1989), Zhang and Foo (1996), and Herzig (1996). A systematic comparison of how these operations relate to the KM postulates, introduced below, is provided by Herzig and Rifi (1999).
  17. These postulates were originally stated in a more semantic formalism. For continuity with the AGM postulates, we provide them using an equivalent syntactic formulation.
  18. Following the terminology used in an unpublished manuscript by Jessica Collins, we adopt this alternative name for the Superexpansion postulate from AGM in honour of Herman Chernoff (1954) who proposed an analogous principle in the context of finite choice functions.
  19. Moreover, Herzig and Rifi (1999) show that this postulate is not satisfied by many of the competing update operators to KM update mentioned in footnote 16.
  20. While Katsuno and Mendelzon call this the “Disjunction Rule”, we prefer the terminology from Collins (1991), which we feel better captures the intuitive content of the postulate.
  21. To verify this, see the counterexample to ⋄8 provided in Proposition 8.
  22. Of course, these belief sets will also contain some Stalnaker conditionals, but we can define the selection functions to ensure the satisfaction of Cogency⋄.


1 Adams, Ernest W. (1970). Subjunctive and Indicative Conditionals. Foundations of Language, 6(1), 89–94.

2 Ahmed, Arif (2014). Evidence, Decision, and Causality. Cambridge University Press.

3 Alchourrón, Carlos E., Peter Gärdenfors, and David Makinson (1985). On the Logic of Theory Change: Partial Meet Contraction and Revision Functions. The Journal of Symbolic Logic, 50(2), 510–30.

4 Alchourrón, Carlos E. and David Makinson (1981). Hierarchies of Regulation and their Logic. In Risto Hilpinen (Ed.), New Studies in Deontic Logic: Norms, Actions, and the Foundations of Ethics (125–48). D. Reidel.

5 Alchourrón, Carlos E. and David Makinson (1982). On the Logic of Theory Change: Contraction Functions and Their Associated Revision Functions. Theoria, 48(1), 14–37.

6 Bayes, Thomas (1763). An Essay Towards Solving a Problem in the Doctrine of Chances. Philosophical Transactions of the Royal Society of London, 53, 370–418.

7 Berto, Francesco (2019). Simple Hyperintensional Belief Revision. Erkenntnis, 84(3), 559–75.

8 Chernoff, Herman (1954). Rational Selection of Decision Functions. Econometrica, 22(4), 422–43.

9 Collins, J. (1991). Belief Revision (Doctoral dissertation, Princeton University).

10 Dalal, Mukesh (1988). Investigations into a Theory of Knowledge Base Revision: Preliminary Report. In Howard E. Shrobe, Tom M. Mitchell, and Reid G. Smith (Eds.), AAAI’88: Proceedings of the Seventh AAAI National Conference on Artificial Intelligence (475–79). AAAI Press.

11 Dorst, Kevin (2019). Lockeans Maximize Expected Accuracy. Mind, 128(509), 175–211.

12 Douven, Igor and Hans Rott (2018). From Probabilities to Categorical Beliefs: Going beyond Toy Models. Journal of Logic and Computation, 28(6), 1099–124.

13 Easwaran, Kenny (2016). Dr. Truthlove or: How I Learned to Stop Worrying and Love Bayesian Probabilities. Noûs, 50(4), 816–53.

14 Eva, Benjamin (2020). The Logic of Conditional Belief. Philosophical Quarterly, 70(281), 759–79.

15 Eva, Benjamin and Stephan Hartmann (2021). The Logic of Partial Supposition. Analysis, 81(2), 215–24.

16 Foley, Richard (1993). Working without a Net: A Study of Egocentric Epistemology. Oxford University Press.

17 Forbus, Kenneth D. (1989). Introducing Actions into Qualitative Simulation. In Natesa S. Sridharan (Ed.), Proceedings of the 11th International Joint Conference on Artificial Intelligence (1273–78). Morgan Kaufmann.

18 Friedman, Nir and Joseph Y. Halpern (1999). Modeling Belief in Dynamic Systems, Part II: Revision and Update. Journal of Artificial Intelligence Research, 10(1), 117–67.

19 Gärdenfors, Peter (1978). Conditionals and Changes of Belief. In Ilkka Niiniluoto and Raimo Tuomela (Eds.), The Logic and Epistemology of Scientific Change (381–404). North Holland.

20 Gärdenfors, Peter (1981). An Epistemic Approach to Conditionals. American Philosophical Quarterly, 18(3), 203–11.

21 Gärdenfors, Peter (1982). Imaging and Conditionalization. The Journal of Philosophy, 79(12), 747–60.

22 Gärdenfors, Peter (1988). Knowledge in Flux: Modeling the Dynamics of Epistemic States. MIT Press.

23 Genin, Konstantin (2019). Full & Partial Belief. In Richard Pettigrew and Jonathan Weisberg (Eds.), The Open Handbook of Formal Epistemology (437–98). PhilPapers Foundation.

24 Ginsberg, Matthew L. (1986). Counterfactuals. Artificial Intelligence, 30(1), 35–79.

25 Ginsberg, Matthew L. and David E. Smith (1987). Possible Worlds and the Qualification Problem. In Kenneth D. Forbus and Howard E. Shrobe (Eds.), AAAI’87: Proceedings of the Sixth National Conference on Artificial Intelligence (212–17). Morgan Kaufmann.

26 Ginsberg, Matthew L. and David E. Smith (1988). Reasoning About Action I: A Possible Worlds Approach. Artificial Intelligence, 35(2), 165–95.

27 Herzig, Andreas (1996). The PMA Revisited. In Luigia Carlucci Aiello, Jon Doyle, and Stuart C. Shapiro (Eds.), Proceedings of the Fifth International Conference on Principles of Knowledge Representation and Reasoning (KR’96) (40–50). Morgan Kaufmann.

28 Herzig, Andreas (1998). Logics for Belief Base Updating. In Didier Dubois and Henri Prade (Eds.), Belief Change (189–231). Springer.

29 Herzig, Andreas and Omar Rifi (1999). Propositional Belief Base Update and Minimal Change. Artificial Intelligence, 115(1), 107–38.

30 Jackson, Elizabeth G. (2020). The Relationship between Belief and Credence. Philosophy Compass, 15(6), 1–13.

31 Joyce, James M. (1999). The Foundations of Causal Decision Theory. Cambridge University Press.

32 Katsuno, Hirofumi and Alberto O. Mendelzon (1992). On the Difference between Updating a Knowledge Base and Revising it. In Peter Gärdenfors (Ed.), Belief Revision (183–203). Cambridge University Press.

33 Keller, Arthur M. and Marianne Winslett Wilkins (1985). On the Use of an Extended Relational Model to Handle Changing Incomplete Information. IEEE Transactions on Software Engineering, 11(7), 620–33.

34 Kyburg, Henry E. (1961). Probability and the Logic of Rational Belief. Wesleyan University Press.

35 Leitgeb, Hannes (2013). The Review Paradox: On The Diachronic Costs of Not Closing Rational Belief under Conjunction. Noûs, 48(4), 781–93.

36 Leitgeb, Hannes (2017). The Stability Theory of Belief: How Rational Belief Coheres with Probability. Oxford University Press.

37 Levi, Isaac (1996). For the Sake of Argument. Cambridge University Press.

38 Lewis, David (1976). Probabilities of Conditionals and Conditional Probabilities. The Philosophical Review, 85(3), 297–315.

39 Lindström, Sten and Wlodek Rabinowicz (1992a). Belief Revision, Epistemic Conditionals and the Ramsey Test. Synthese, 91(2), 195–237.

40 Lindström, Sten and Wlodek Rabinowicz (1992b). The Ramsey Test Revisited. Theoria, 58(2–3), 131–82.

41 Lindström, Sten and Wlodek Rabinowicz (1998). Conditionals and the Ramsey Test. In Didier Dubois and Henri Prade (Eds.), Belief Change (147–88). Springer.

42 Nebel, Bernhard (1989). A Knowledge Level Analysis of Belief Revision. In Ronald J. Brachman, Hector J. Levesque, and Raymond Reiter (Eds.), Proceedings of the First International Conference on Principles of Knowledge Representation and Reasoning (KR’89) (301–11). Morgan Kaufmann.

43 Pearl, Judea (2000). Causality: Models, Reasoning, and Inference. Cambridge University Press.

44 Peppas, Pavlos (2008). Belief Revision. In Frank van Harmelen, Vladimir Lifschitz, and Bruce Porter (Eds.), Handbook of Knowledge Representation (317–59). Elsevier.

45 Rott, Hans (1999a). Coherence and Conservatism in the Dynamics of Belief Part I: Finding the Right Framework. Erkenntnis, 50(2/3), 387–412.

46 Rott, Hans (1999b). Moody Conditionals: Hamburgers, Switches, and the Tragic Death of an American President. In Jelle Gerbrandy, Maarten Marx, Maarten de Rijke, and Yde Venema (Eds.), JFAK. Essays Dedicated to Johan van Benthem on the Occasion of His 50th Birthday. Amsterdam University Press.

47 Rott, Hans (2001). Change, Choice and Inference: A Study of Belief Revision and Nonmonotonic Reasoning. Oxford University Press.

48 Russell, Bertrand (1904). Meinong’s Theory of Complexes and Assumptions II. Mind, 13(51), 336–54.

49 Schurz, Gerhard (2019). Impossibility Results for Rational Belief. Noûs, 53(1), 134–59.

50 Shear, Ted and Branden Fitelson (2019). Two Approaches to Belief Revision. Erkenntnis, 84(3), 487–518.

51 Stalnaker, Robert (1968). A Theory of Conditionals. In Nicholas Rescher (Ed.) Studies in Logical Theory (98–112). Blackwell.

52 Titelbaum, Michael G. (2021). The Stability of Belief: How Rational Belief Coheres with Probability, by Hannes Leitgeb. Mind, 130(519), 1006–17.

53 Winslett, Marianne (1988). Reasoning about Action Using a Possible Models Approach. In Howard E. Shrobe, Tom M. Mitchell, and Reid G. Smith (Eds.), AAAI-88: Proceedings of the Seventh National Conference on Artificial Intelligence (89–93). AAAI Press.

54 Winslett, Marianne (1990). Updating Logical Databases. Cambridge University Press.

55 Zhang, Yan and Norman Y. Foo (1996). Updating Knowledge Bases with Disjunctive Information. In William J. Clancey and Daniel S. Weld (Eds.), AAAI-96: Proceedings of the Thirteenth National Conference on Artificial Intelligence (562–68). AAAI Press.