Author: Benjamin Lennertz (Colgate University)
Elga (2010) argues that no plausible decision rule governs action with imprecise credences. I follow Moss (2015a) in claiming that the solution to Elga’s challenge is found in the philosophy of mind, not in devising a special new decision rule. Moss suggests that in decision situations that involve imprecise credences, we must identify with a precise credence, but she says little about identification. By reflecting on the common conception of identification and on what is necessary for Moss’s solution to succeed, I argue that identifying with a precise credence is fundamentally accepting (in the sense of Bratman 1992; Cohen 1989) a proposition about probabilities. The norm on action with imprecise credences is then a special case of the general norm on action and acceptance. I delineate a number of attractive features of this position.
How to Cite: Lennertz, B. (2022) “Imprecise Credences and Acceptance”, Ergo an Open Access Journal of Philosophy. 9(0). doi: https://doi.org/10.3998/ergo.2264
Suppose that a coin you know to be fair is about to be flipped. It is plausible that you ought to have a precise degree of confidence or credence in the proposition that it comes up heads. But in many cases, our credence is justifiably imprecise, mushy, or vague. Adam Elga describes such a case:
A stranger approaches you on the street and starts pulling out objects from a bag. The first three objects he pulls out are a regular-sized tube of toothpaste, a live jellyfish, and a travel-sized tube of toothpaste. To what degree should you believe that the next object he pulls out will be another tube of toothpaste? (2010: 1)
The natural thought is that one’s credence can permissibly be imprecise in such a scenario (Levi 1974; Jeffrey 1983; Joyce 2005; 2010; Sturgeon 2008).1
Some theorists disagree. They claim that it is impermissible to have imprecise credences, arguing that there is no plausible decision rule about how imprecise credences should license and forbid actions (Elga 2010; White 2009: 178–80; Dorr 2010: 198). Consider, for example, an imprecise credence of 10–80% in the proposition that it will rain tomorrow. Is someone who has that credence rational if they act as if rain tomorrow is both 10% and 80% probable? Doesn’t that seem impossible? Or maybe they must choose some particular value, n, in the interval and consistently act as if rain tomorrow is n probable? Then in what sense is their credence imprecise, rather than of precise degree n?2
Sarah Moss (2015a) suggests a picture of how to decide and act in these scenarios. In reasoning and making decisions with an imprecise credence, we must identify with some precisification of that credence. For instance, if you have a 10–80% credence that it will rain tomorrow, you must reason and decide as if you have some precise credence of degree between 10% and 80% in that proposition. But, on her picture, there is no general rule about which precisifications you can identify with. And it can be permissible to identify with different precisifications at different times.
I find Moss’s account insightful because it brings to light a different way of viewing the problem of acting with imprecise credences. It is natural to think that a response to this problem involves defending a decision rule that specifically governs imprecise credences. Joyce, for instance, says, “[w]hat you need is some decision rule that will tell you how to make choices when expected utility assessments are equivocal” (2010: 311). Moss, instead, inspires a response that involves reflection in the philosophy of mind. She doesn’t merely put forth a different decision rule; she ties decision to aspects of the reasoner’s state of mind—the precise credence they identify with—that aren’t necessitated by their imprecise credence.
Moss reasons to her position by analogy to moral dilemmas. Though this reasoning is suggestive and dialectically effective, it would be illuminating to give an explanation that is both more direct and more general. In order to do so, I will investigate the key notion in her account—identifying with a precise credence. I’ll argue that it is plausible that identifying with a precise credence, say of degree n in p, is accepting (in the sense of Bratman 1992; Cohen 1989) that the probability of p is n. Assimilating identification to the already explored notion of acceptance makes for a fuller and deeper Mossian response to the challenge to imprecise credences. My ground for this response is more direct than an analogical one. More importantly, my picture is more general, in that I don’t brutely posit Moss’s norm on action in scenarios where a person has an imprecise credence. Instead the norm falls out of a general account of the connection between acceptance and action.
I start by relaying Elga’s presentation of the challenge for imprecise credences. Then I present Moss’s solution. Next I make the case that identifying with a precise credence is accepting some probabilistic statement and show how this fits in a Mossian picture. This inspires a wider perspective, seeing Moss’s account of action with imprecise credences as a special case of a general picture of the connection between acceptance and action. The resulting picture allows for a satisfying answer to an objection to Moss’s account and for extensions into natural but rarely-discussed ways of having imprecise credences.
Elga’s argument that there is no plausible account that explains how imprecise credences license and forbid actions proceeds by eliminating candidate decision rules. I’ll focus on the rule most relevant to the view to be developed in this paper.
Suppose that you find yourself in the following betting situation (I’ll make the simplifying supposition throughout that you value money linearly and only value money). You know that you will be offered bets A and B in quick succession, so that you will gain no new evidence between the offerings.
Bet A: If it rains tomorrow, you lose $10. If it doesn’t rain, you win $15.
Bet B: If it rains tomorrow, you win $15. If it doesn’t rain, you lose $10.
Bets A and B are a great pair to go in for,3 since they guarantee you $5 no matter what happens. They comprise what Alan Hájek calls a Czech Book (2009). Now you might agree to only one of these bets. For instance, if you were certain it was going to rain tomorrow, you’d agree to Bet B and reject Bet A. Elga’s central claim is that it is irrational to reject both bets. He claims that any plausible decision rule, including one governing imprecise credences, must require you to agree to at least one of the bets (2010: 4).4
Any agent with a precise credence will not reject both bets if they follow the standard expected utility maximizing decision rule (it is permissible to perform an action that has highest expected utility according to one’s credences and utilities and it is impermissible to perform an action that does not). If your credence that it rains tomorrow is of degree less than 60%, then agreeing to Bet A is permissible and declining is impermissible, and if your credence is of degree greater than 60%, declining Bet A is permissible and agreeing to it is impermissible; both agreeing to and declining Bet A are permissible if your credence is of degree 60%. Similarly, if your credence that it rains tomorrow is of degree greater than 40%, then agreeing to Bet B is permissible and declining is impermissible, and if your credence is of degree less than 40%, declining Bet B is permissible and agreeing to it is impermissible; both agreeing to and declining Bet B are permissible if your credence is of degree 40%. Note that there is no degree of credence that makes declining both bets permissible. Having precise credences and following the expected utility maximizing rule allow you to straightforwardly and determinately make decisions in this case.5
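To make these thresholds explicit, here is a quick worked calculation (a sketch using the payoffs above, with n your precise degree of credence that it will rain tomorrow and utility measured in dollars):

\[
EV(\text{Bet A}) = n(-10) + (1-n)(15) = 15 - 25n, \qquad EV(\text{Bet B}) = n(15) + (1-n)(-10) = 25n - 10.
\]

Declining Bet A is permissible only if EV(Bet A) ≤ 0, that is, only if n ≥ 0.6; declining Bet B is permissible only if n ≤ 0.4. Since no degree of credence satisfies both conditions, declining both bets is never permitted by the expected utility maximizing rule.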
Things are murkier when we consider imprecise credences. An agent who always makes decisions on the basis of the expected utility maximizing rule won’t receive guidance in cases where they have imprecise credences, since there is no single probability to use in the calculations. So, many theorists devise special decision rules for this special situation. We’re going to focus on one such rule in this section. To understand it, let’s define the notion of a precisification of an imprecise credence. Let’s stipulate that an imprecise credence has an interval as its degree and a precise one has a particular real number as its degree. Then a precise credence, c′, is a precisification of an imprecise credence, c, if c and c′ have the same content and c′’s degree is a member of c’s degree.6 Consider the following decision rule:
Permissive: Suppose that c is an imprecise credence with content p. For any precisification of c, c′, it is permissible for you to act as if you have c′. It is impermissible to act as if you have a credence with content p that is not a precisification of c.
This is the natural first suggestion for a decision rule for imprecise credences. As Elga says, “If your evidence is so unspecific as to demand a widely spread-out probability function, it is natural that the requirements of rationality be correspondingly spread out” (2010: 5). Suppose that you have an imprecise credence of 10–80% that it will rain tomorrow and are offered the sequence of bets described above. Permissive says that it is permissible for you to go in for the sequence of bets—since the 10% precisification of your credence yields an expected value of $12.5 for Bet A (which is greater than the $0 alternative of rejecting) and the 80% precisification of your credence yields an expected value of $10 (>$0) for Bet B. That’s great because if you do agree to both bets, you’re guaranteed to make five bucks.
Permissive also has the consequence that there is a “whole range of bets such that for each one, it is rationally permitted that you accept it and also rationally permitted that you reject it” (Elga 2010: 5). This is true of Bet A and Bet B. Permissive allows you to reject Bet A based on the 80% precisification of your imprecise credence (EV(Bet A) = −$5 < $0) and then to reject Bet B based on the 10% precisification (EV(Bet B) = −$7.5 < $0). This is the situation that Elga claimed to be irrational. So, he thinks Permissive must be false.
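For concreteness, here are the expected values behind the last two paragraphs, computed at the endpoints of the 10–80% credence (a sketch using the same payoffs):

\[
\begin{aligned}
\text{at } 10\%: \quad & EV(\text{Bet A}) = 0.1(-10) + 0.9(15) = 12.5, \quad EV(\text{Bet B}) = 0.1(15) + 0.9(-10) = -7.5;\\
\text{at } 80\%: \quad & EV(\text{Bet A}) = 0.8(-10) + 0.2(15) = -5, \quad EV(\text{Bet B}) = 0.8(15) + 0.2(-10) = 10.
\end{aligned}
\]

So Permissive licenses agreeing to both bets (using the 10% precisification for Bet A and the 80% precisification for Bet B), but it equally licenses rejecting both (using the 80% precisification for Bet A and the 10% precisification for Bet B), which is the combination Elga deems irrational.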
Elga argues against other decision rules and the literature includes even more rules as attempted solutions, but Permissive is the one that will serve as our point of comparison. First, as Elga says, it is the natural candidate given the motivations for imprecise credences. Second, Moss’s rule is much like, though more restrictive than, Permissive.
Many theorists have responded to Elga’s problem (Joyce 2010; Bradley & Steele 2014; Chandler 2014; Sahlin & Weirich 2014; Sud 2014; Williams 2014; Rinard 2015), and many of these have done so by offering a decision rule that, for an imprecise credal state and a set of values, determines what a person may or should do in a choice situation.7 Moss’s response is different. While she does offer something like a decision rule, she realizes that this is also a problem in the philosophy of mind.8 What a person may do, according to her, depends on parts of their mental state other than their imprecise credal state and values. Moss agrees with Elga that someone with your imprecise credence that it will rain tomorrow must, on any particular occasion, act as if she has some precise credence. The credence she must act as if she has is the one that she identifies with. Her decision rule is:
Identify: Suppose that c is an imprecise credence with content p. For any precisification of c, c′, it is permissible for you to act as if you have c′ if and only if you rationally identify with c′. It is impermissible to act as if you have a credence with content p that is not a precisification of c.9
Identify is distinct from Permissive in content, but it isn’t clear whether it is distinct in extension. That depends on the constraints on which precisifications of your imprecise state you can rationally identify with. If there are no constraints, Identify makes the same predictions about rational action as Permissive. The more constraints there are, the more different the predictions of the two rules become. Moss often talks as if there are no constraints of rationality: “there is no rule of rationality saying that an agent cannot change which mental state she identifies with” (2015a: 673). But sometimes she is more moderate:
There are multiple readings of the claim that a rational agent may identify with different precise mental states. A weak reading says that changing what precise state you identify with is compatible with being rational, i.e. that it is sometimes rationally permissible for you to change what precise state you identify with. A stronger reading says that it is always rationally permissible for you to change what precise state you identify with. The weaker claim is all I need for my argument against Elga (2010). (Moss 2015a: 673)
I suspect that Moss supports the more extreme position that there are no rules of rationality in this realm.
Nonetheless, she does not take on the consequences of Permissive in its full generality. This is because she thinks it isn’t psychologically possible for you to identify with just any precise credence at just any time. Since the actions you may and may not perform are based on the precise credences you rationally identify with and since you can’t rationally identify with a precise credence that you can’t identify with, there will be, in some situations, precisifications of your imprecise credence that you are forbidden from acting in accordance with. That is, Permissive is extensionally inadequate. In particular, Moss thinks that it often isn’t possible to change what precise credence you identify with quickly or without much mental effort, or to go back and forth in quick succession. Given this, the kind of quick, effortless change that is required to reject both bets in Elga’s scenario won’t usually be possible. So, Identify predicts that, typically, rejecting both bets is irrational (Moss 2015a; 2015b: 186).10
It’s important to realize that the Mossian view makes the desired prediction in Elga’s scenario for substantive, not structural, reasons. According to Identify, the context and manner in which Elga’s bets are offered makes rejecting them both irrational. Moss’s theory “allows that subtle differences between cases may settle whether agents may reject good pairs of bets without learning” (2015a: 672). It is not the structural feature of being a sequence of choices that a person knows will leave them worse off. Indeed, Moss is clear that her theory does, at least sometimes, judge as rational sequences of actions that have the structure that Elga finds problematic (2015a: 673). We can see this in her analogy to moral dilemmas.11 Just as people who are stuck in moral dilemmas are not irrational for acting in one way sometimes and in another way at other times, people who are undecided between different positions in credal dilemmas are not irrational for acting in one way sometimes and in another at other times.12 Here’s Moss’s example:
suppose that your elderly mother must move in with family, either with you or with your sister in a distant city. There is a trusted psychologist who will soon make an expert recommendation about which living situation would make your mother happiest. From your perspective, the question seems impossible. There are many factors to consider, and nothing to decide the question. After agonizing, you conclude that your mother will likely end up being happiest with you. Just a few hours later, a friend offers you 5000 frequent flyer miles in exchange for five local bus tickets. But since your mother would enjoy using the bus to get around, it makes more sense for you to keep your tickets. The next morning, however, you start to feel differently. It is not that you have gotten relevant evidence. The question is just as intractable as before. It is just that you are having second thoughts about where your mother would be happiest. Of course, now you wish you had taken your friend up on the frequent flyer miles. By coincidence, another friend offers you six local bus tickets in exchange for 5000 miles. But since you figure that your mother will end up living with your sister, and since you plan on visiting them often, it makes more sense for you to keep your miles. (Moss 2015a: 669; see also 2015b: 185)
Agreeing to both trades would have netted you a bus ticket. Nonetheless, you first identified with a higher credence that your mother would be happier living with you and the next day identified with a lower credence in that proposition. So, you rejected both trades, and, it seems, did so rationally. Indeed, you would be rational to make these decisions, even if you knew all along that both offers were coming. As Moss observes, “To judge otherwise fails to recognize a simple and familiar way in which we can rationally change our minds” (2015a: 669).13
So far, Moss has motivated a rule like Identify with a particular example and described the contours of the view in a general way. But the most direct way to explain why a norm like Identify holds and to explore its consequences involves explaining what the key state of mind is. That is, Moss’s great insight is that understanding choices in Elga-style scenarios involves not just decision theory, but a good bit of philosophy of mind. Unfortunately, she doesn’t pursue this insight:
The main point of the present paper is that susceptibility to sure losses does not itself guarantee that you are irrational. This point does not depend on any particular psychological theory about what constitutes identifying with particular credences. … There is much that can be said about the nature of credal and moral dilemmas without spelling out necessary and sufficient conditions for the psychological states that determine what is rational for agents in dilemmas … (Moss 2015a: 673–74)
Much can be said. A solution to Elga’s problem is possible. But its plausibility depends on the “particular psychological theory about what constitutes identifying with particular credences.” Otherwise, it’s unclear whether there is anything in people’s psychologies that plays the role that is necessary for the solution to work—for Identify to justify your rejecting both bets in some situations while being distinct in extension from Permissive.14 In the next section, I’ll argue that there is a natural account of identifying with a precise credence that draws on an already-discussed attitude. This will help us see what a Mossian solution looks like and why it is promising, and it will allow for improvement on Moss’s treatment in important ways.
I’ll get at the notion of identifying with a precise credence by examining it in two ways. First, what is identifying in general? Second, what features would identification have to have in the particular case of identifying with a precise credence in order to get Moss’s desired result? Both of these ways of thinking support the position that identifying with a precise credence, say of degree n in a proposition, is accepting that the probability of that proposition is n. In the next section I’ll show how such a state of acceptance serves the right role in a solution to Elga’s challenge.
We say things like “my mom identifies with her Italian heritage”, “I identify with your plight”, and “Sally identifies with the position that her state of mind can change her body”. Things we identify with fit, in a certain way, with our way of seeing, thinking about, and being in the world.15 But identifying also has downstream effects. What we identify with influences our actions and mental states. As a result of their identifications, my mom might have an Italian flag bumper sticker, I might offer you lessons learned from my experiences, and Sally might meditate daily. We also can say things like, “I’m trying to identify with the feelings he’s having but it’s hard, and might even prove impossible for me to do so.” This suggests that identification is somewhat in our control, since we can try to do it, but that it is not totally in our control, since it can be hard or impossible to do. Finally, we can be rational in identifying with different, even conflicting things in different contexts. I can rationally identify with the city’s desire to build green, affordable housing while at the council meeting, and then identify with the sadness of a resident whose property is taken by eminent domain, as the bulldozers descend. So, our ordinary concept of identification is somewhat loose about what kinds of things we identify with, but these things affect how we think and act, and they can rationally change in different contexts.16
Some of these same features are central to Moss’s notion. Her requirements are that the precise credence that we identify with (i) can affect our decisions and actions, (ii) can change across contexts, (iii) can do so based on practical considerations, without a change in evidence, (iv) is partially up to us, and (v) need not be consistent with the precise credence we identify with in a different context. (i)–(v) are consistent with the features of identification just discussed. We said in the previous paragraph that (i), (ii), (iv), and (v) are true. (iii) adds the additional constraint that a change in identification can happen even without a change in evidence, focusing the account in a way that makes sense for identification in the doxastic realm.17
Is there an attitude that has these features? Yes, what is sometimes called acceptance, acceptance in an inquiry, or acceptance in a context.18 To accept a proposition is to take it for granted in an inquiry (Bratman 1992), take it as a premise in an inquiry (Cohen 1989), or commit oneself to acting on it (Wright 2004: 182). It has some similarities to belief; as Alonso says (he calls the attitude reliance): “relying on [accepting] p involves a disposition to, among other things, deliberate on the basis of p, plan on the basis of p, act on the basis of p, and draw conclusions from p” (2014: 166). But acceptance can come apart from belief. For instance, Cohen suggests that “for professional purposes a lawyer might accept that his client is not guilty even though he does not believe it” (1989: 369). Or a person might accept a proposition because of their close friendship with someone who claims that the proposition is true, even though the evidence (including that provided by the friend’s testimony) does not favor believing such a proposition (Cohen 1989: 369; Bratman 1992: 8). Bratman gives an example where subcontractors in a construction project give estimated price ranges for their work, and one accepts that the cost will fall at the top of each range in planning the project (1992: 6–7). A classic example is that of accepting Newton’s laws of motion for the purposes of calculating the landing spot of a medium-sized, short-range projectile, though the total evidence (including that supporting relativity) is inconsistent with Newtonian mechanics.1920
Comparing the features of acceptance in a context with those of identifying with a precise credence supports the following position:
Identification: Suppose n is a real number and p is a proposition. To identify with a precise credence of degree n in p is to accept that the probability of p is n.21
Bratman contrasts believing and accepting as follows:
Belief has four characteristic features: (a) it is … context-independent; (b) it aims at the truth of what is believed; (c) it is not normally in our direct voluntary control; and (d) it is subject to an ideal of agglomeration. In contrast, what one accepts/takes for granted (a*) can reasonably vary … across contexts; (b*) can be influenced by practical considerations that are not themselves evidence for the truth of what is accepted; (c*) can be subject to our direct voluntary control; and (d*) is not subject to the same ideal of agglomeration across contexts. So acceptance in a context is not belief. (1992: 9, asterisks are added to more easily distinguish the properties of belief from acceptance)22
(ii)–(v) of the central requirements of a theory of identifying with a precise credence match up with Bratman’s (a*)–(d*). That is, both acceptance and identifying with a precise credence can vary across contexts, are influenced by things other than evidence, are under our control to some degree, and need not be consistent across contexts. More basic, too, than any of these features is that acceptance is a practical attitude whose purpose is to play a role in reasoning and decision-making (specifically by fixing or amending part of the cognitive background against which reasoning and decision-making take place, which is different from the role other practical attitudes, like desire and intention, play). The same is true for identifying with precise credences, as stated in (i). Using language that sounds like that used to describe acceptance, Joyce echoes this idea: “In general, pragmatic sharpening is a matter of proceeding, for purposes of choosing actions, as if one’s credal state is a proper subset … of one’s actual credal state” (2010: 312). Overall, the role and features of identifying with a precise credence match with accepting a probability. Thus, Identification is plausible.23
Before moving on, I’ll say a bit more about the nature and structure of the key notion in Identification, accepting that the probability of p is n. I am ecumenical between a number of views. A flat-footed reading of accepting that the probability of p is n is that it is bearing an attitude of full acceptance to a proposition, that the probability of p is n. But many theorists reject that such propositions exist. This is related to a common objection to the hypothesis that a credence of degree n in p is a belief that the probability of p is n (Maher 1986: 367; Christensen 2004: 18–20; Staffel 2013: 3537; Moss 2018: 2).24 Indeed most epistemologists agree that when we talk about believing that the probability of p is n, we are really talking about having credence of degree n in p, not having a belief toward a proposition. We might, analogously, think that accepting that the probability of p is n is having some attitude of partial acceptance—one which relates to ordinary acceptance in the way that credence relates to belief. We could, if we wanted, call such a state an okay. Okaying is a degreed attitude toward an ordinary proposition; no notion of probability features in the content. But an okay has the major features of acceptance—context dependence, practicality, voluntariness, and exemption from norms across contexts.25 Finally, one might take a middle way between the accepting probabilistic propositions view and the partial acceptance view: accepting that the probability of p is n is accepting a probabilistic content, but such contents are probability spaces, not propositions. This is inspired by Moss’s own way of thinking of credences as beliefs with probabilistic, but non-propositional, contents (pursued after the work I’ve discussed in this paper; see Moss 2018 for guidance on how to develop the details of this strategy). What is important for our current inquiry is that we can understand accepting that the probability of p is n in any of these ways—accepting a probabilistic proposition, accepting a probability space, or partially accepting an ordinary proposition. The picture developed and advantages discussed in the following sections are compatible with all of these options.
Together with Identification, Identify yields that if you have an imprecise credence with content p, you can permissibly act as if you have a precise credence of degree n in p if and only if you rationally accept that the probability of p is n (provided that a credence of degree n in p is a precisification of your imprecise credence). Whether this is co-extensive with Permissive depends on what a person can rationally accept. There are two possible sources of restrictions. First, there might be constraints, of the sort that Moss discusses, on what a person is able to accept. Second, there might be constraints on which possible acceptances are rational. I’ll discuss each in turn as a way of motivating a key generalization—that given Identification, we need not posit Identify as an independent norm. Given the features of acceptance, a norm on acting with imprecise credences much like Identify follows from a general norm that governs the relationship between acceptance and action. As Rinard notes, many accounts of acting with imprecise credences rely on a special decision rule for dealing with Elga-style cases (2015: 3). But this isn’t so for my Mossian account. There is nothing ad hoc about its reply to Elga’s challenge; there is no need for an independent rational norm governing action while in the specific mental state of imprecise credence. Rather, this way of thinking is just a piece of the picture of the general relationship between mind and action.26
Let’s first look at constraints on what it is psychologically possible to accept. Even though there is something voluntary about what a person accepts, they cannot accept just anything they want. In most scenarios, I cannot accept that gravity doesn’t apply to me or that I will survive if I don’t breathe. I can imagine those things, but acceptance is not imagination. Acceptance provides a willingness to reason, make decisions, and act as if what is accepted is true; imagination does not (Bratman 1992: 9; Cohen 1989: 368). It seems impossible to reason, make decisions, or act as if gravity doesn’t apply to me or as if I will survive if I don’t breathe.
Given this, we might wonder why we would ever accept something that goes beyond or is in conflict with what we believe. Stalnaker offers one answer:
Accepting a certain false proposition may greatly simplify an inquiry, or even make possible an inquiry not otherwise possible, while at the same time it is known that the difference between what is accepted and the truth will have no significant effect on the answer to the particular question being asked. (1984: 93)
The example of accepting Newton’s laws of motion in answering questions about mid-sized, short-range projectiles is of this kind. A related kind of scenario is when one is unsure whether a proposition is true but needs to come down one way or another in order to come to a conclusion in one’s inquiry or decide how to act.2728 Think of, for instance, being unsure whether Ulaanbaatar is the capital of Mongolia on a game show but accepting that it is for the purposes of trying to win the prize. Or think of an interlocutor who simply won’t proceed with the inquiry until you grant an assumption about which you have no settled belief—for instance, that the keys might be in your office, even though you are pretty sure you checked.
Imprecise credences land us in situations much like these. In order to act on standard expected utility considerations, one needs to identify with some precise credence, that is, accept some precise probability.29 Sometimes it won’t matter which precise credence you identify with. For example, if you have a 10–80% credence that it will rain and are wearing your $3000 suede coat, then every precisification supports bringing an umbrella. So, for any degree between 10% and 80%, you can accept that the probability of rain is that degree. But in other cases, as in the betting scenario from Section 2, it does make a difference which precise credence you identify with. In these cases, accepting some precise probability in your decision-making is what allows you to, on standard expected utility grounds, decide or act in one way rather than another. According to my way of thinking, these are interesting new instances of the phenomenon discussed in the previous paragraph, where, for deliberation and action to proceed, it is necessary for a person to accept something that is different from or more precise than what is supported by their epistemic state. Nonetheless, as discussed above, there may be probabilistic claims that I’m not psychologically able to accept—like that it is 95% probable that I will survive without breathing or that it is 2% likely that if I jump off this bridge I will fall. I am not able to reason and act as if such things are the case.
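A quick sketch shows why the choice of precisification is idle in the umbrella case above. The dollar figures here are hypothetical illustrations, not given in the example: suppose a ruined coat costs you the equivalent of $3000 and carrying the umbrella costs the equivalent of $5, and let n be the precise probability of rain you accept:

\[
EV(\text{bring umbrella}) = -5, \qquad EV(\text{leave it home}) = -3000\,n.
\]

Bringing the umbrella has the higher expected value whenever n > 1/600, which every precisification between 10% and 80% satisfies; so here it makes no practical difference which probability you accept.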
The next question is whether there are possible cases of accepting that a proposition has some probability that are not rational. As we saw above, Moss suggests that “there is no rule of rationality saying that an agent cannot change which mental state she identifies with” (2015a: 673). But now that we have an account of what identifying with a precise credence is, Identification, we can investigate the issue more directly. It is plausible that there are rules that restrict which states of acceptance are rational for a person to enter. One example is the case of a person continually switching back and forth between accepting that rain is 10% probable and accepting that it is 80% probable in the course of deciding. Moss thinks this is impossible. She might be right, but I’m not sure that she is (such a case seems different from trying to accept that gravity doesn’t apply to me or that I will survive if I don’t breathe). Nonetheless, if such switching is possible, it is irrational. Though some changes may be allowed, frequent, repeated, effortless changes are fickle. They defeat the purpose of acceptance by leading to a person being too fractured to deliberate and act. This explains why acting in ways that the varying and frequently changing states would suggest—for instance, rejecting Bet A and Bet B repeatedly and in quick succession—is, even if possible, irrational.
Though the previous example is Moss’s focus (in light of Elga’s problem) and the one that explains why my account doesn’t imply Permissive in its full generality, the following example is important as well. Consider a case where a person with an imprecise credence of degree 10–80% that it will rain tomorrow accepts that the probability of rain tomorrow is 1%. It usually is irrational to accept that a proposition has a probability that falls outside the interval of one’s imprecise credence. This observation, together with Identification (that identifying with a precise credence in a proposition is accepting that that proposition has a precise probability), allows Identify to follow from a general thesis about the connection between acceptance and action:
Accept: It is permissible to act in accordance with what you rationally accept and impermissible to act otherwise.
Here is the reasoning: Suppose that Identification is true. Then according to Accept, it is rational to act in accordance with the precise credences that you rationally identify with and irrational to act otherwise. Earlier in the paragraph I suggested the following: in order to rationally identify with some precise credence—that is, accept some probability—about a proposition that you have an imprecise credence about, that precise credence must be a precisification of your imprecise credence; it cannot be something that falls outside the range. So, Identify follows:
Identify: Suppose that c is an imprecise credence with content p. For any precisification of c, c′, it is permissible for you to act as if you have c′ if and only if you rationally identify with c′. It is impermissible to act as if you have a credence with content p that is not a precisification of c.
It is striking that a norm about the connection between action and a rather specific sort of mental state, imprecise credence, follows from a norm about the connection between action and a more general sort of mental state, acceptance. This is where we’ve come to by running with Moss’s insight that Elga’s challenge isn’t just a plea for a new decision theory, but depends on understanding what’s going on in the decider’s mind.
Though Identify appears to follow from a more general principle, that doesn’t vindicate confidence in it, unless the general principle, Accept, is itself plausible. Different authors’ views of acceptance support Accept in different ways. For instance, Alonso says, “reliance [acceptance] constitutively aims at … providing cognitive guidance that is sensible or correct from the standpoint of relevant ends, values and so on” (2014: 169; see also 2016)—and this is so whether or not acceptance aims to track the truth (2016: 326–27). So, acting in accordance with what you accept would be rational and acting in other ways irrational. Additionally, here’s a general picture that undergirds Accept, which I have adapted from Bratman (1992: 10–11): A person’s reasoning and decision-making takes place against what Bratman calls a cognitive background, which is what the person takes for granted. Their belief/credal state is the default cognitive background. We tend to take for granted what is part of our default cognitive background. In this way our beliefs and credences play a central role in our practical reasoning and decision making. But in some contexts of reasoning and action, we adjust this default cognitive background by accepting propositions that we don’t believe or even disbelieve, or by removing beliefs or credences from this background. The literature has noted some scenarios of these sorts—we’ve seen ones involving defense lawyers, physics problems, game show guesses, and stubborn interlocutors. In these cases, it is what we accept, not what we believe, that rationalizes our decisions and actions—that is, that makes us present a certain argument before the court, calculate using Newton’s laws, make the guess we do for the grand prize, or agree to search our office again. In this sort of picture, part of what makes our actions rational or irrational is what Bratman calls our adjusted cognitive background, or what we accept. The norm Accept fits nicely with this picture.
The picture just sketched for the kinds of acceptances discussed in the literature applies just as well to cases of acting with imprecise credences. Our degrees of uncertainty in our default cognitive background can be imprecise in ways that restrict us from using them alone to make some decisions about how to act. In such cases the default background tells us some ways that we cannot act—like in ways that aren’t sanctioned by any precisification. But it doesn’t tell us a particular way to act, since different precise credences would result in different actions. In such cases, we can plump for or identify with one such credence. To do so is to adjust our cognitive background. It is to enter a state of acceptance. As we’ve seen, accepting propositions to enter this adjusted state is not only a rational way to reason and make decisions. In some cases, it is what we need to do to allow us to reason and decide on standard expected utility grounds. Our default cognitive background is not enough by itself to tell us how to reason, decide, and act; additional states of acceptance fill that gap. So, our total acceptance state (or adjusted cognitive background) is what makes our action rational or irrational. Again, this is represented in general in Accept, and in the particular case of imprecise credences in Identify.
This striking generalization, which assimilates the particular case of imprecise credences to that of the general connection between acceptance and action, makes a picture like that expressed in Identify more plausible. A Mossian reply to Elga’s objection is plausible not only for the reasons she gives—that is, from reflecting on cases and the analogy to moral dilemmas. It is also plausible based on its close relationship to a general account of rational action, given my picture of what is going on in a decider’s mind when they reason with imprecise credences.
It’s important to note that Identify follows from Accept and Identification only if it is always irrational to accept a probability that is outside of your imprecise interval.30 I think that it usually, but not always, is. So, I’ll argue that, in fact, we shouldn’t accept Identify in its full generality, as I’ve been suggesting in the past few pages. However, I’ll also suggest that this is the right result; Accept is much like Identify, and gets right the kinds of cases about which these views offer different predictions.
Such cases are the ones we’d expect from the literature on acceptance. One kind of case is when you need to accept something based on your interlocutors’ demands, in order for the deliberation to commence or proceed. For instance, suppose that your partner demands that you accept that the probability of the keys being in your office is 25% when your credence is 1–5%. Otherwise, they will refuse to help you look for your keys. If you think that their help might be crucial in succeeding in your inquiry about the location of the keys, it seems rational to acquiesce and accept that the probability of the keys being in your office is 25% (even while recognizing that doing so might have different consequences for the inquiry than accepting that the probability of the keys being in your office is, say, 2%). So, this is a case where it is rational to accept some probability that doesn’t correspond to a precisification of your imprecise credence. Nonetheless, the fact that you do accept it makes it rational for you to inquire and act according to it. Accept predicts this while Identify (assuming the bridge principle of Identification) does not.
Another example of this sort is a version of a Stalnaker-style case where you alter what you accept for ease of calculation. For instance, suppose that, because of some complex meteorological models, you are 51.073–54.924% confident of the proposition that it will rain tomorrow. In inquiring about whether or not to lug around a hefty umbrella on your day-long hike, it would be easier and wouldn’t yield any significant differences in your inquiry to accept that the probability of rain tomorrow is 50%. It seems rational for you to accept this and to reason and make decisions on its basis. Again, Accept but not Identify predicts this.
Cases like these speak in favor of Accept over Identify.31 Though Identify is attractive, Accept is a more general principle that (given Identification) agrees with Identify about its plausible predictions, makes better predictions in the kinds of cases we’ve just seen, and assimilates decision-making with imprecise credences to decision-making more generally.32
In closing this section, I want to expand on the general inspiration I’ve been taking from Moss—that how to understand decision and action with imprecise credences is not just a question for decision theorists, but is also a question for philosophers of mind. That insight notwithstanding, there must be some principle of decision in the background when I say things like, “For any precisification of c, c′, it is permissible for you to act as if you have c′ if you rationally identify with c′” (in Identify) or “It is permissible to act in accordance with what you rationally accept and impermissible to act otherwise” (in Accept). What is it for you to act as if you have (or in accordance with) one of these attitudes? I think any plausible decision rule could be applied, though they may yield different results. But the important point of Moss’s discussion (2015a: 673), which holds of my picture as well, is that standard expected utility maximization is one such rule—contrary to what many responses to Elga’s challenge in the literature have suggested (Bradley & Steele 2014; Chandler 2014; Joyce 2010; Sahlin & Weirich 2014; Sud 2014; Weatherson 1998). The picture of decision-making with imprecise credences presented here is consistent with standard expected utility maximization. On my picture, what allows for making sense of decision and action with imprecise credences is the person’s state of mind—and, in particular, what they accept—not their following some special decision rule for just this sort of occasion.
While the key notion involved in Moss’s solution is that of identifying with a precise credence, another important one is the notion of changing one’s mind. She says:
my theory predicts that we will be more likely to judge that an agent can rationally reject good pairs of bets insofar as we have evidence that she genuinely changes her mind between bets. Since an agent is unlikely to change her mind immediately or without any evidence of psychological effort, we are unlikely to judge that an agent can rationally reject good pairs of bets under those circumstances. For the same reason, we are unlikely to judge that an agent is acting rationally if she is acting in ways that are permissible only if she is relentlessly reconsidering her decisions. In addition, my theory correctly predicts that our leniency increases in proportion with our evidence that agents are genuinely changing their minds. (Moss 2015a: 675–76)
This is plausible, but it isn’t clear in this passage what she means by a change of mind. The following insightful reflection is important:
There is an intuitive sense in which imprecise agents make up their minds when they act, while there is an intuitive sense in which they do not. The theory I defend captures both senses. An agent identifies with a precise credal state when she acts, while multiple credal states continue to be represented by members of her representor. An agent is judged according to what some distinguished precise credence function recommends, while every member of her representor is eligible for that position of distinction. (Moss 2015a: 675)
This remark can be interpreted in non-technical language according to Identification. The way in which people with imprecise credences do change their minds when they deliberate about how to act is that they come to accept some precise probability. The way that they don’t is that they retain the imprecise credence throughout. Their evidence hasn’t changed, nor have their estimates about how the world is. But they can’t make decisions and act on those imprecise estimates in the standard way prescribed by expected utility maximization. So their mind changes in taking some more precise stand in the context of decision and action.
Moss’s own remarks don’t always reinforce the insight from the above quotation. This is the case when she responds to the following objection to her view: even if Moss is right about the changes in the precise credences we act as if we have, that is compatible with a view where we have only precise credences and they change in the way she suggests. She responds, “According to my theory, your action is governed by the precise credal state that you ‘identify with.’ According to the alleged alternative, your action is governed by the precise credal state that you ‘have.’ This is a distinction without a difference” (Moss 2015a: 680). But according to the account developed in this paper (and what I believe should be Moss’s account too), there is a difference. Though something does change according to her, to me, and to “the alleged alternative”, something relevant—your imprecise credence—remains the same on both her picture and mine, while nothing relevant does on the alternative picture. On the picture I advocate, there is a clear way to stop the slide to a behaviorist way of thinking of credences, which can run together how a person acts with their estimations of the truth. Moss can help herself to this idea too, but it is nicely explained by the antecedent distinction between what you accept and what you believe (in a wide sense including your credence). And this view has an advantage over “the alleged alternative” in rejecting these sorts of cases as counterexamples to the position that one’s epistemic state cannot rationally change without evidence—since what changes is your acceptance state, a practical attitude.
The notion of changing one’s mind plays a role in Seamus Bradley’s (2019) objection to Moss’s view. Remember that the structure of Moss’s view doesn’t rule out thinking that a person can act in a way that could only be rationalized by quick or repeated changes in the precise credence they act as if they have. But Moss argues substantively that we can’t change the credences that we identify with in this way. It is this substantive position that allows her to avoid Elga’s challenge—where Bet A and Bet B are offered in close succession. However, Bradley presents a case that pushes against this substantive position and aims to undermine the core of her reply to Elga. It is a slight variation on a famous case presented by Ellsberg (1961: 653–55) and can be used to motivate imprecise credences and decision rules like Permissive. Here is Bradley’s presentation (I take it that the uses of ‘risk’ and ‘ambiguity’ are clear from the example):
I have an urn that contains ninety marbles. Thirty marbles are red. The remainder are blue or yellow in some unknown proportion. We are going to consider some bets that win 1 utility if the event in question occurs and nothing otherwise. Consider a choice between a bet that wins if the marble drawn is red (I), versus a bet that wins if the marble drawn is blue (II). You might prefer I to II since I involves risk while II involves ambiguity. Now consider a choice between a bet that wins if the marble drawn is not blue (III) versus a bet that wins if the marble drawn is not red (IV). Now it is III that is ambiguous, while IV is unambiguous but risky, and thus IV might seem better to you if you preferred risky to ambiguous prospects. … Let the probabilities for red, blue and yellow marbles be r, b and y respectively. If you were an expected utility maximizer and preferred I to II, then r > b and a preference for IV over III entails that r + y < b + y. No numbers can jointly satisfy these two constraints. Therefore, no probability function is such that an expected utility maximizer with that probability would choose in the way described above. (2019: 23)
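The last step of the quoted reasoning can be made fully explicit: since the y terms cancel, the two preferences jointly require

\[
r > b \quad \text{and} \quad r + y < b + y \ \ (\text{equivalently, } r < b),
\]

and no probability assignment satisfies both, which is why no expected utility maximizer with a single precise credence can prefer I to II and IV to III.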
To mirror Elga’s problem, Bradley makes Ellsberg’s case diachronic by offering the bets successively, but with little time in between. Let’s call this the Ellsberg Problem. In such a case, one way to rationally choose bet I over II and bet IV over III is to bet as if you had a different precise credence for each choice—that is, to identify with one precise credence for the first wager and another for the second. Again, the structure of Moss’s view doesn’t rule this out, but if these bets were offered in quick succession, her substantive constraint against such switches would. And her constraint would also make problems for the persistence of people’s preferences in a case where these offers were repeated in an alternating fashion. This, Bradley thinks, is a problem since that substantive constraint is what allows Moss’s picture to predict the results that Elga says must be so in his decision scenario—that you ought not reject both Bet A and Bet B.
Moss does often talk as if a short amount of time wouldn’t be enough for a rational change of mind—in the identification sense—and that frequent switching between the credences you identify with would be irrational if not impossible. But neither time nor number of switches plays a fundamental role in my acceptance-based account. Rather the key role is played by a context or an inquiry. Once one enters into a different context or inquiry, it may be rational to change the precise credence one identifies with, to accept a different probability—even if those inquiries take place at the same time and even if they happen repeatedly. This follows from general features of how acceptance works. At a single time, one can rationally accept a proposition in one inquiry and not accept it in another. For instance, it is possible and rational to accept that the keys might be in your office while texting with your partner about where to look, and at the same time to not accept that while talking to your friend about how stubborn your partner can be. Likewise, one can rationally switch what one accepts repeatedly in different inquiries, even in quick succession and even if those changes happen in a stable, predictable way. For instance, you might accept Newton’s laws of motion when you are in your mechanics class at noon and Einsteinian principles when you are in your cosmology class at one. And you might do so every day. But there is no mystery about whether this is possible or if it is rational (see also Bratman 1992: 5–6). This doesn’t mean anything goes in the realm of acceptance. It wouldn’t be rational, say, to repeatedly switch from Newtonian to Einsteinian to Aristotelian laws in the course of solving a single problem in your mechanics class. The same is true of accepting probabilistic claims—that is, of identifying with precise credences. Different contexts of reasoning and decision making—no matter their temporal relationships—allow for rational changes in what precise credences a person identifies with, though it is irrational to make such changes in a single context of inquiry.
This allows for the following response to the Ellsberg Problem. The inquiries introduced by the bets in Ellsberg’s case are different enough for it to be rational for a person to prefer bet I to II but also prefer bet IV to III—by accepting that the probability of drawing a blue marble is lower in the context of the first choice and accepting that the probability of drawing a blue marble is higher in the context of the second. How are the inquiries different? They differ in the assumptions that they incline an inquirer toward given a fixed preference toward risk over ambiguity (or vice versa). The first is an inquiry where preferring risk to ambiguity leads to a conservative supposition of the probability of drawing a blue marble while the second is an inquiry where the same preference between risk and ambiguity leads to a liberal supposition of the probability of drawing a blue marble. That the same preference leads to different sorts of assumptions, even holding fixed one’s decided beliefs about frequencies, suggests that the inquiries are different in substantive ways. And this is so even if these choices are offered in quick succession and even if they are alternated repeatedly. Indeed, Ellsberg’s own words about this common strategy suggest decisions based on something like acceptance: “our subject does not actually expect the worst, but he chooses to act ‘as though’ the worst were somewhat more likely than his best estimates of likelihood would indicate” (1961: 667 emphasis in original). The different contexts of the choices in Ellsberg’s problem contrast with Elga’s Bet A and Bet B, where the bets are of the same kind and don’t introduce different considerations (in particular of risk vs. ambiguity) into the inquiry. Accepting different precise probabilities in that case seems fickle, rather than principled and, so, seems irrational.
Though my acceptance account grounds this response to the Ellsberg Problem, this sort of response is available to Moss even if she rejects Identification. After all, the constraints that create the problem for her account are not structural and inviolable, but are substantive and, as I read her, depend on nuances of the inquiry. For instance, in the long quotation I used to start this section, Moss doesn’t forbid the sorts of changes required. Rather she calls them unlikely to rationally occur. That is compatible with some cases being ones where quick, repeated, and systematic changes in what precise credence a person identifies with are rational. I submit, for the reasons presented above, that the Ellsberg Problem is such a case.
This discussion has a more general lesson: time is not the essential factor in the rationality or irrationality of the subject of such scenarios. Moss thinks her solution to Elga’s problem supports a research program in time-slice epistemology, where synchronic constraints are of central interest and force, and there are few or no diachronic constraints in epistemology (Moss 2015b; 2015a: 681; see also Hedden 2015). We’ve discovered a lesson that is different in two ways. First, it is contexts of inquiry, reasoning, or decision-making that are fundamental, not time. Second, we are dealing with a practical attitude, acceptance, not a purely epistemic one, like belief or credence. The interesting norms here are those of what we can call context-of-inquiry rationality, rather than time-slice epistemology.
There are two incidental but important advantages of realizing that a person can accept that some claim has a probability for the purposes of reasoning, decision-making, and action—that is, that they can identify with a precise credence. This position illuminates action in cases where our credences are imprecise in ways that are less precise than the interval-valued model used in this paper allows and in ways that are more precise than this model allows.
Sometimes our evidence is precise in neither its extent nor its endpoints. In such cases, our degrees of confidence may rationally be neither point-valued nor interval-valued. For instance, I might be pretty confident of some proposition, sort of confident of another, and extremely confident of a third (Sturgeon 2008: 158–60; Price 1986: 23). Agreeing that these credences are neither precise nor interval-valued introduces a number of theoretical obstacles, one of which is like the issue I’ve discussed in this essay.33 How do these credences license and forbid action? Norms on imprecise credences like Permissive and Identify don’t apply, since they govern interval-valued imprecision. But Accept helps illuminate such situations. As when a person has interval-valued imprecise credences, when a person has imprecise credences without a numerical measure, they may need to accept some probability in order to decide and act in ways that maximize expected utility. We saw that it is often irrational to accept probabilities that fall outside the range of one’s interval-valued credence. Something similar, though less precise, applies in non-numerical cases. If you are pretty confident in a proposition, it would often (though not always—see Section 5) be irrational to accept that that proposition is 2% likely, whereas it would make sense to accept that it is 65% likely. So, the position that action often goes by way of accepting claims about probabilities helps us understand action with imprecise credences, both interval-valued and non-numerical.
There are also imprecise credences that are, in a sense, more precise than the interval-valued picture allows. A person can have evidence that makes their confidence unsettled in a way that is not spread over an interval. Perhaps you have evidence that you know is strongly indicative of whether your favorite team won last night, but the evidence is complicated and you can’t figure out which way it points. For instance, suppose you know that the evidence makes it either 99% likely that they won or 1% likely that they won, but you can’t figure out which. It may be reasonable, in such a case, to represent the strength of your credence in the proposition that your favorite team won last night as {.01, .99}, neither a particular degree nor an interval. Though this is prima facie possible, Moss (in press) presents what she calls the problem of cheap evidence. The problem arises in a scenario where you will be offered the opportunity to guess whether your favorite team won last night, gaining $100 if your guess is correct and $0 if it is incorrect. Additionally, you have the option to pay $20 to phone a friend who you are sure knows the answer. So, your three options are (i) guess that your favorite team won, (ii) guess that your favorite team lost, or (iii) phone a friend (after which you know you will be offered options (i) and (ii) again).34 Paying $20 for the phone call seems to be the, or at least a, rational thing to do. But this isn’t so according to rules like Permissive or Identify: according to the 1% precisification of your credence, guessing that your favorite team lost without calling your friend maximizes your expected utility, and according to the 99% precisification, guessing that your favorite team won without calling your friend maximizes your expected utility. No precisification sanctions paying for the extra information, so doing so is predicted to be impermissible.
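A quick expected-utility calculation makes the problem concrete (treating dollars as utilities, an assumption made only for illustration). After phoning, you are certain which guess is correct, so the phoning option is worth $100 minus the $20 fee under either precisification:

\[
\begin{aligned}
\text{Under the } 0.99 \text{ precisification:}\quad & EU(\text{guess won}) = 0.99 \cdot 100 = 99, \quad EU(\text{phone}) = 100 - 20 = 80,\\
\text{Under the } 0.01 \text{ precisification:}\quad & EU(\text{guess lost}) = 0.99 \cdot 100 = 99, \quad EU(\text{phone}) = 100 - 20 = 80.
\end{aligned}
\]

Each precisification thus recommends making one of the guesses outright rather than paying for the call.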
My acceptance picture offers a way out of this troubling prediction. Each precisification of your imprecise credence supports a different decision. Just as in the cases discussed above, you need to take a practical stance that goes beyond what your epistemic state suggests. But in this case, it is natural to accept that the probability that your favorite team won is 50%, or something near there. Of course, you don’t think that this really is the probability that they won given your evidence, but it makes sense to split the difference and act as if it is (Moss 2015b: 192 makes a similar point). If you have this state of acceptance, then, according to Accept, you must act in accordance with it. So, you must call your friend for more information, since that is what maximizes your expected utility given what you accept. My picture predicts this happy result. However, two features of the case are worth noting. First, my picture does not predict that making either guess, rather than phoning for more information, is irrational, provided that you accept that the probability is 99% or 1%. And accepting one of these is not irrational either. This might be thought too permissive, but I think making one of the guesses is better described as risky or foolhardy than as irrational. Second, this is a case where the probability it is natural for you to accept is not one that comes from a precisification of your credence. Though this is out of line with Moss’s norm Identify, splitting the difference here is reasonable and in keeping both with her broader motivations for that view and with how rational acceptance can work.
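Continuing the same illustrative calculation, accepting a probability of 50% reverses the verdict:

\[
EU(\text{guess won}) = EU(\text{guess lost}) = 0.5 \cdot 100 = 50 \;<\; 80 = EU(\text{phone}),
\]

so, given what you accept, phoning your friend maximizes expected utility, just as Accept requires.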
In this paper I have developed an explanation, grounded in the philosophy of mind, of a Mossian picture of action under imprecise uncertainty, which also serves as a response to Elga’s objection. This picture draws on the not-purely-epistemic attitude of acceptance and grounds Moss’s reply in a more general account of the connection between acceptance and action. The discussion also reveals the kinds of situations in which it is possible and rational to have an attitude of acceptance that doesn’t match one’s epistemic attitudes (belief and credence). Unlike many of the examples discussed earlier, in which acceptance makes inquiry easier or more efficient, some cases of decision-making with imprecise credences are ones in which the person cannot, at least on the standard grounds of maximizing expected utility alone, decide without coming to adopt some acceptance state or other. Exploring these sorts of cases makes it even clearer that we have acceptance-like attitudes and that they play a central role at the intersection of our epistemic and practical lives.
Thanks to Shyam Nair and two anonymous reviewers from this journal for comments on earlier versions of this paper.
In all such strategies [that a person might use to make decisions with imprecise credences], the underlying credences do not change. An agent merely acts, in a specific decision-making context, as if she has more precise beliefs than she actually does. She might sharpen in different ways in different contexts, so that different C#’s are used for different decision problems. But, by adopting some strategy of this sort, she is able to make choices even when her beliefs are too imprecise to definitively recommend any utility maximizing action. (2010: 313)
One could generalize even more broadly by dropping the first sentence of Identify* and allowing that it applies even in cases that don’t involve imprecise credences. If one goes these ways, the discussion of acceptance doesn’t play a role in grounding our eventual theory, but it did play a valuable role in providing the scaffolding that let us see the cases that spurred these amendments to Identify.
Identify*: Suppose that c is an imprecise credence with content p. It is permissible for you to act as if you have a precise credence with content p, c′, if and only if you rationally identify with c′.
1 Alonso, Facundo M. (2014). What Is Reliance? Canadian Journal of Philosophy, 44(2), 163–83.
2 Alonso, Facundo M. (2016). Reasons for Reliance. Ethics, 126(2), 311–38.
3 Alston, William (1996). Belief, Acceptance, and Religious Faith. In Jeff Jordan and Daniel Howard-Snyder (Eds.), Faith, Freedom, and Rationality: Philosophy of Religion Today (3–27). Rowman & Littlefield Publishers.
4 Bradley, Seamus (2019). A Counterexample to Three Imprecise Decision Theories. Theoria, 85(1), 18–30.
5 Bradley, Seamus and Katie Siobhan Steele (2014). Should Subjective Probabilities Be Sharp? Episteme, 11(3), 277–89.
6 Bratman, Michael E. (1992). Practical Reasoning and Acceptance in a Context. Mind, 101(401), 1–16.
7 Bratman, Michael E. (1996). Identification, Decision, and Treating as a Reason. Philosophical Topics, 24(2), 1–18.
8 Bratman, Michael E. (2006). Thinking How to Live and the Restriction Problem. Philosophy and Phenomenological Research, 72(3), 707–13.
9 Carr, Jennifer Rose (2019). Subjective Probability and the Content/Attitude Distinction. In Tamar Szabó Gendler and John Hawthorne (Eds.), Oxford Studies in Epistemology, (Vol. 6, 35–57). Oxford University Press.
10 Carr, Jennifer Rose (2020). Imprecise Evidence without Imprecise Credences. Philosophical Studies, 177(9), 2735–58.
11 Chandler, Jake (2014). Subjective Probabilities Need Not be Sharp. Erkenntnis, 79(6), 1273–86.
12 Chang, Ruth (2002). The Possibility of Parity. Ethics, 112(4), 659–88.
13 Chang, Ruth (2005). Parity, Interval Value, and Choice. Ethics, 115(2), 331–50.
14 Chang, Ruth (2017). Hard Choices. Journal of the American Philosophical Association, 3(1), 1–21.
15 Christensen, David (2004). Putting Logic in its Place: Formal Constraints on Rational Belief. Oxford University Press.
16 Clarke, D. S. (1994). Does Acceptance Entail Belief? American Philosophical Quarterly, 31(2), 145–55.
17 Cohen, L. Jonathan (1989). Belief and Acceptance. Mind, 98(391), 367–89.
18 Dorr, Cian (2010). The Eternal Coin: A Puzzle about Self-Locating Conditional Credence. Philosophical Perspectives, 24(1), 189–205.
19 Elga, Adam (2010). Subjective Probabilities Should Be Sharp. Philosophers’ Imprint, 10(5), 2–11.
20 Ellsberg, Daniel (1961). Risk, Ambiguity, and the Savage Axioms. The Quarterly Journal of Economics, 75(4), 643–69.
21 Engel, Pascal (1999). Dispositional Belief, Assent, and Acceptance. Dialectica, 53(3–4), 211–26.
22 Eriksson, Lina and Alan Hájek (2007). What Are Degrees of Belief? Studia Logica, 86(2), 183–213.
23 Foley, Richard (1992). Working Without a Net: A Study of Egocentric Epistemology. Oxford University Press.
24 Frankfurt, Harry G. (1988a). Identification and Externality. In The Importance of What We Care About: Philosophical Essays (58–68). Cambridge University Press.
25 Frankfurt, Harry G. (1988b). Identification and Wholeheartedness. In The Importance of What We Care About: Philosophical Essays (159–76). Cambridge University Press.
26 Hájek, Alan (2009). Arguments For—Or Against—Probabilism? British Journal for the Philosophy of Science, 59(4), 793–819.
27 Hammond, Peter J. (1988). Orderly Decision Theory. Economics and Philosophy, 4(2), 292–97.
28 Hedden, Brian (2015). Options and Diachronic Tragedy. Philosophy and Phenomenological Research, 90(2), 423–51.
29 Holton, Richard (1994). Deciding to Trust, Coming to Believe. Australasian Journal of Philosophy, 72(1), 63–76.
30 Jeffrey, Richard (1983). Bayesianism With a Human Face. In John Earman (Ed.), Testing Scientific Theories (Vol. 10, 133–56). University of Minnesota Press.
31 Jeffrey, Richard (1987). Indefinite Probability Judgment: A Reply to Levi. Philosophy of Science, 54(4), 586–91.
32 Joyce, James M. (2005). How Probabilities Reflect Evidence. Philosophical Perspectives, 19(1), 153–79.
33 Joyce, James M. (2010). A Defense of Imprecise Credences in Inference and Decision Making. Philosophical Perspectives, 24(1), 281–323.
34 Kelly, Thomas (2002). The Rationality of Belief and Some Other Propositional Attitudes. Philosophical Studies, 110(2), 163–96.
35 Konek, Jason (2016). Probabilistic Knowledge and Cognitive Ability. Philosophical Review, 125(4), 509–87.
36 Lennertz, Benjamin (2021). Are Credences Thoughts about Probability? A Reply to the Inscrutable Evidence Argument. Unpublished manuscript.
37 Levi, Isaac (1974). On Indeterminate Probabilities. Journal of Philosophy, 71(13), 391–418.
38 Levi, Isaac (1980). The Enterprise of Knowledge: An Essay on Knowledge, Credal Probability, and Chance. MIT Press.
39 Maher, Patrick (1986). The Irrelevance of Belief to Rational Action. Erkenntnis, 24(3), 363–84.
40 Maher, Patrick (1990). Acceptance without Belief. PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, 1990, 381–92.
41 Mahtani, Anna (2018). Imprecise Probabilities and Unstable Betting Behaviour. Noûs, 52(1), 69–87.
42 Moon, Andrew (2017). Beliefs Do Not Come in Degrees. Canadian Journal of Philosophy, 47(6), 760–78.
43 Moon, Andrew and Elizabeth Jackson (2020). Credence: A Belief-First Approach. Canadian Journal of Philosophy, 50(5), 652–69.
44 Moss, Sarah (2015a). Credal Dilemmas. Noûs, 49(4), 665–83.
45 Moss, Sarah (2015b). Time-Slice Epistemology and Action Under Indeterminacy. In Tamar Szabó Gendler and John Hawthorne (Eds.), Oxford Studies in Epistemology (Vol. 5, 172–94). Oxford University Press.
46 Moss, Sarah (2018). Probabilistic Knowledge. Oxford University Press.
47 Moss, Sarah (in press). Global Constraints on Imprecise Credences: Solving Reflection Violations, Belief Inertia, and Other Puzzles. Philosophy and Phenomenological Research.
48 Perl, Caleb (2020). Might Moral Epistemologists Be Asking the Wrong Questions? Philosophy and Phenomenological Research, 100(3), 556–85.
49 Price, Huw (1986). Conditional Credence. Mind, 95(377), 18–36.
50 Proust, Joëlle (2012). The Norms of Acceptance. Philosophical Issues, 22(1), 316–33.
51 Rinard, Susanna (2015). A Decision Theory for Imprecise Probabilities. Philosophers’ Imprint, 15(7), 1–16.
52 Ross, Jacob (2006). Acceptance and Practical Reason (Doctoral dissertation). Rutgers University.
53 Sahlin, Nils-Eric and Paul Weirich (2014). Unsharp Sharpness. Theoria, 80(1), 100–3.
54 Seidenfeld, Teddy (1994). When Normal and Extensive Form Decisions Differ. In Dag Prawitz, Brian Skyrms, and Dag Westerståhl (Eds.), Logic, Methodology and Philosophy of Science (451–63). Elsevier.
55 Staffel, Julia (2013). Can There Be Reasoning With Degrees of Belief? Synthese, 190(16), 3535–51.
56 Stalnaker, Robert (1984). Inquiry. MIT Press.
57 Sturgeon, Scott (2008). Reason and the Grain of Belief. Noûs, 42(1), 139–65.
58 Sud, Rohan (2014). A Forward Looking Decision Rule for Imprecise Credences. Philosophical Studies, 167(1), 119–39.
59 Ullman-Margalit, Edna (1983). On Presumption. Journal of Philosophy, 80(3), 143–63.
60 van Fraassen, Bas C. (1980). The Scientific Image. Oxford University Press.
61 Velleman, J. David (2005). Identification and Identity. In Self to Self: Selected Essays (330–60). Cambridge University Press.
62 Watson, Gary (1975). Free Agency. Journal of Philosophy, 72(8), 205–20.
63 Watson, Gary (1987). Free Action and Free Will. Mind, 96(382), 154–72.
64 Weatherson, Brian (1998). Decision Making with Imprecise Probabilities. Unpublished manuscript. Retrieved from http://brian.weatherson.org/vdt.pdf
65 White, Roger (2009). Evidential Symmetry and Mushy Credence. In Tamar Szabó Gendler and John Hawthorne (Eds.), Oxford Studies in Epistemology (Vol. 3, 161–86). Oxford University Press.
66 Williams, J. Robert G. (2014). Decision-Making Under Indeterminacy. Philosophers’ Imprint, 14(4), 1–34.
67 Wright, Crispin (2004). Warrant for Nothing (and Foundations for Free)? Aristotelian Society Supplementary Volume, 78(1), 167–212.