Article

**Author:** Alejandro Pérez Carballo (University of Massachusetts, Amherst)

**How to Cite:** Pérez Carballo, A. (2023). “Generalized Immodesty Principles in Epistemic Utility Theory”, *Ergo: An Open Access Journal of Philosophy*, 10(0). doi: https://doi.org/10.3998/ergo.4661

According to one of the better-known constraints on epistemic utility functions, each probabilistically coherent credence function should be *immodest* in a particular sense: for any probabilistically coherent credence function $P$ and any alternative $Q \ne P$, the expected epistemic utility of $P$ relative to $P$ should be greater than that of $Q$ relative to $P$. This constraint, often known as *Strict Propriety*, is usually motivated by appealing to a combination of two independent claims. The first is a certain kind of admissibility principle: that any probabilistically coherent credence function can sometimes be epistemically rational.^{1} The second is an abstract principle linking epistemic utility and rationality: that an epistemically rational credence function should always expect itself to be epistemically better than any of its alternatives.^{2} If we assume, as most typically do, that the alternatives to any probabilistically coherent function are all and only those credence functions with the same domain, these two principles arguably entail Strict Propriety.

What happens if we enlarge the class of alternatives to include a wider range of probability functions, including some with a different domain? This would strengthen the principle linking epistemic utility and rationality: it would no longer suffice, for a credence function to be deemed epistemically rational, that it expects itself to be doing better, epistemically, than credence functions with the same domain. And this stronger principle would arguably give us a more plausible theory of epistemic rationality, at least on some ways of widening the range of alternatives. Suppose an agent with a credence function defined over a collection of propositions takes herself to be doing better, epistemically, than she would be by having any other credence function defined over the same collection of propositions. But suppose she thinks she would be doing better, epistemically, by having a credence function defined over a smaller collection of propositions—perhaps she thinks she would be doing better, epistemically, by not having certain defective concepts, and thus that she would be doing better, epistemically, by simply not having propositions with those concepts as constituents in the domain of her credence function. Such an agent would seem to be irrational in much the same way as an agent who thinks she would be doing better, epistemically, by assigning different credences to the propositions she assigns credence to.^{3}

Now, my interest here is not in the question of what the right principle linking epistemic utility and rationality is. Rather, I am interested in understanding how strong a principle we can consistently endorse: I am interested in the kinds of constraints on epistemic utility functions that come from different views on how epistemic utility and epistemic rationality are related to one another. So I start by considering the strongest version of a principle linking epistemic utility and rationality, one that says that an epistemically rational credence function should take itself to be doing better than any other credence function, regardless of its domain. As we will see, the resulting immodesty constraint is far too strong, in that, perhaps surprisingly, it cannot be satisfied by any reasonable epistemic utility function—that this is so is a consequence of the main results in this paper (Subsections 3.1–3.2).^{4}

I then consider different possible ways of weakening this principle, study the resulting constraints on epistemic utility functions and their relationship to one another, and establish a few characterization results for the class of epistemic utility functions satisfying these constraints (Subsection 3.3). Before concluding, I discuss (Section 4) how my results relate to recent work on the question whether epistemic utility theory is incompatible with imprecise, or ‘mushy’, credences.

Fix a collection $W$ of possible worlds and a finite partition $\pi $ of $W$ —a collection of pairwise disjoint, jointly exhaustive subsets of $W$, which we call *cells*.

I will say that a real-valued function $P$ defined over $\pi$ is *coherent* iff for each $s\in\pi$, $P(s)\in[0,1]$, and $\sum_{s\in\pi}P(s) = 1$. A coherent function over $\pi$ uniquely determines a probability function over the Boolean closure of $\pi$. Accordingly, and slightly abusing notation, I will refer to coherent functions over $\pi$ as *probability functions over* $\pi$.^{5}

Let $\mathcal{P}_{\pi}$ denote the collection of probability functions over $\pi$. An *epistemic utility function* (*for* $\pi$) is a function $\mathfrak{u}:\mathcal{P}_{\pi}\times W\to\overline{\mathbb{R}}:=\mathbb{R}\cup\{-\infty,\infty\}$ such that for each $P\in\mathcal{P}_{\pi}$, $\mathfrak{u}(P,\cdot):W\to\overline{\mathbb{R}}$ is $\pi$-measurable—where $f:W\to\overline{\mathbb{R}}$ is $\pi$-measurable iff for each $r\in\overline{\mathbb{R}}$, $\{w:f(w)=r\}$ is in the Boolean closure of $\pi$.^{6}

Throughout, I assume that epistemic utility functions are *bounded above* (for each $\mathfrak{u}$ there exists a finite $M$ such that $\mathfrak{u}<M$) and *truth-directed* in the following sense: for all $w\in W$, if (i) for any proposition $s$ in $\pi$, $P(s)$ is at least as close as $P'(s)$ is to the truth-value of $s$ in $w$,^{7} and (ii) for some proposition $s$ in $\pi$, $P(s)$ is strictly closer than $P'(s)$ is to $s$’s truth-value in $w$, then $\mathfrak{u}(P,w) > \mathfrak{u}(P',w)$.^{8}

By definition, for fixed $\mathfrak{u}$ and $Q\in\mathcal{P}_{\pi}$, $\mathfrak{u}(Q,\cdot)$ is a discrete random variable. Accordingly, for $P\in\mathcal{P}_{\pi}$ I will let $\mathbb{E}_{P}[\mathfrak{u}(Q)] := \mathbb{E}_{X\sim P}[\mathfrak{u}(Q,X)]$ denote the *expectation* of $\mathfrak{u}(Q)$ relative to $P$, so that

$\mathbb{E}_{P}[\mathfrak{u}(Q)] = \sum_{s\in\pi}P(s)\,\mathfrak{u}(Q,w_{s}),$

where $s\mapsto w_{s}$ is a *choice function*, in that for each $s\in\pi$, $w_{s}\in s$. The $\pi$-measurability of $\mathfrak{u}(Q,\cdot)$ ensures that our definition does not depend on the choice function. Indeed, I will simply write $\mathfrak{u}(Q,s)$ to denote $\mathfrak{u}(Q,w_{s})$, so that $\mathbb{E}_{P}[\mathfrak{u}(Q)] = \sum_{s\in\pi}P(s)\,\mathfrak{u}(Q,s)$.
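To fix ideas, here is a minimal computational sketch of these definitions. The encoding of partitions and credence functions as Python dictionaries is mine, not the article's, and the scoring function `u` is an arbitrary placeholder used only to exercise the definition of expectation:

```python
# A sketch, not part of the article: cells are frozensets of worlds, and a
# credence function is a dict from cells to real numbers.
from math import isclose

W = {1, 2, 3, 4}
pi = [frozenset({1, 2}), frozenset({3}), frozenset({4})]  # a partition of W

def is_coherent(P, pi):
    """P is coherent iff each P(s) lies in [0, 1] and the values sum to 1."""
    return all(0 <= P[s] <= 1 for s in pi) and isclose(sum(P[s] for s in pi), 1)

def expectation(P, u, Q, pi):
    """E_P[u(Q)] = sum over cells s of P(s) * u(Q, s); pi-measurability of
    u(Q, .) is what lets us evaluate u on cells rather than on worlds."""
    return sum(P[s] * u(Q, s) for s in pi)

P = {pi[0]: 0.5, pi[1]: 0.3, pi[2]: 0.2}
u = lambda Q, s: Q[s]  # placeholder score, only to exercise the definition

assert is_coherent(P, pi)
assert isclose(expectation(P, u, P, pi), 0.38)  # 0.5^2 + 0.3^2 + 0.2^2
```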

I will say that an epistemic utility function $\mathfrak{u}$ for $\pi $ is *proper* iff for each $P,Q\in {\mathcal{P}}_{\pi}$,

$\mathbb{E}_{P}[\mathfrak{u}(P)] \ge \mathbb{E}_{P}[\mathfrak{u}(Q)].$

I will say that $\mathfrak{u}$ is *strictly proper* iff for each $P\ne Q\in\mathcal{P}_{\pi}$, the above inequality is strict. (When $\mathfrak{u}$ is proper but not strictly proper I will sometimes say that $\mathfrak{u}$ is *weakly* proper.) A variety of characterization results can be found in the literature—see especially Gneiting and Raftery (2007).

Strictly proper epistemic utility functions have been the subject of considerable interest. In discussions of how to reward a forecaster’s predictions, strictly proper functions are of interest because they reward honesty—someone whose forecasts will be rewarded using a strictly proper epistemic utility function cannot expect to do better than by reporting her true credences (Brier 1950; Savage 1971). In general discussions of epistemic value, strictly proper functions are of interest because they incorporate a certain kind of *immodesty*—if your epistemic values are represented by a strictly proper epistemic utility function and you are rational, you will never expect any other credence function to be doing better, epistemically, than your own (Joyce 2009; Gibbard 2008; Horowitz 2014; Greaves & Wallace 2006; inter alia).^{9}

And in discussions of justifications of Probabilism—the requirement on degrees of belief functions that they satisfy the axioms of the probability calculus—strictly proper utility functions have played a starring role in a range of dominance results to the effect that probabilistic credences strictly dominate non-probabilistic credences and are never dominated by any other credence function (Joyce 1998; 2009; Leitgeb & Pettigrew 2010; Pettigrew 2016; Predd, Seiringer, Lieb, Osherson, Poor, & Kulkarni 2009).^{10}

One natural question to ask is how to generalize the framework of epistemic utility theory to allow for comparisons of probability functions defined over distinct algebras of propositions. And given such a generalization, an equally natural question is how to generalize the notion of (strict) propriety. Let me take each of these questions in turn.

Let $\Pi$ denote the collection of finite partitions of $W$. For $\pi,\pi'\in\Pi$, say that $\pi$ is a *refinement* of $\pi'$ iff for each $s\in\pi$ there is $s'\in\pi'$ such that $s\subseteq s'$. If $\pi$ is a refinement of $\pi'$, I will say that $\pi'$ is a *coarsening* of $\pi$. Of course, the refinement relation induces a partial ordering over $\Pi$, which I will denote by $\sqsubseteq$, where $\pi'\sqsubseteq\pi$ iff $\pi'$ is a refinement of $\pi$. In fact, the resulting partially ordered set constitutes a *lattice*, in that any subset $\mathcal{S}$ of $\Pi$ admits of an *infimum* (a coarsest partition that is a refinement of all elements of $\mathcal{S}$) and a *supremum* (a coarsening of each element of $\mathcal{S}$ that refines any other partition that coarsens each element of $\mathcal{S}$).
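The refinement ordering and the coarsest common refinement (the infimum of a pair of partitions under the ordering just defined) are easy to compute. The following sketch, with an encoding of my own choosing rather than anything from the article, may help fix ideas:

```python
# A sketch (encoding mine): partitions are sets of frozensets of worlds.
def refines(pi, pi2):
    """The ordering from the text: every cell of pi sits inside some cell
    of pi2, i.e., pi refines pi2."""
    return all(any(s <= s2 for s2 in pi2) for s in pi)

def common_refinement(pi, pi2):
    """Coarsest common refinement (the infimum of {pi, pi2}): the nonempty
    pairwise intersections of cells."""
    return {s & s2 for s in pi for s2 in pi2 if s & s2}

pi1 = {frozenset({0, 1}), frozenset({2, 3})}
pi2 = {frozenset({0, 2}), frozenset({1, 3})}
meet = common_refinement(pi1, pi2)  # here: the four singleton cells

assert refines(meet, pi1) and refines(meet, pi2)
assert not refines(pi1, pi2)  # pi1 and pi2 are incomparable
```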

Define now $\mathcal{P} := \cup_{\Pi}\mathcal{P}_{\pi}$, and, for a given $P\in\mathcal{P}$, let $\pi_{P}$ denote the domain of $P$. If $\pi'\sqsubseteq\pi$, $P\in\mathcal{P}_{\pi}$, and $Q\in\mathcal{P}_{\pi'}$, I will say that $Q$ is an *extension of* $P$ to $\pi'$ (and $P$ is a *restriction of* $Q$ to $\pi$) iff for each $s\in\pi$,

$\sum_{s'\subseteq s}Q(s') = P(s),$

where $s'$ ranges over elements of $\pi'$. I will say that $P$ is a *restriction of* $Q$ (and $Q$ an *extension of* $P$) iff $\pi_{P}\sqsupseteq\pi_{Q}$ and $P$ is a restriction of $Q$ to $\pi_{P}$.

A *generalized epistemic utility function* is a real-valued function $\mathfrak{u}$ defined over $\mathcal{P}\times W$ such that for each $\pi\in\Pi$, the restriction^{11} of $\mathfrak{u}$ to $\mathcal{P}_{\pi}\times W$ is a truth-directed epistemic utility function for $\pi$. I will say that a generalized epistemic utility function $\mathfrak{u}$ is *partition-wise proper* iff for each $\pi\in\Pi$, the restriction $\mathfrak{u}_{\pi}$ of $\mathfrak{u}$ to $\mathcal{P}_{\pi}\times W$ is proper. I will say that $\mathfrak{u}$ is (partition-wise) *strictly proper* iff $\mathfrak{u}_{\pi}$ is strictly proper for all $\pi\in\Pi$.

It is straightforward to define generalized epistemic utility functions that are partition-wise proper. For example, take the generalized version of the *Brier score*, defined by

$\mathfrak{b}(P,w)=-\sum_{s\in\pi_{P}}\big(P(s)-\mathbb{1}\{w\in s\}\big)^{2},$

where $\mathbb{1}\{w\in s\}$ equals $1$ if $w\in s$ and $0$ otherwise. It is easy to check that $\mathfrak{b}$ is a generalized epistemic utility function that is partition-wise strictly proper. Indeed, for any family $\{\mathfrak{u}_{\pi}:\pi\in\Pi\}$ of functions such that, for each $\pi$, $\mathfrak{u}_{\pi}$ is a strictly proper epistemic utility function for $\pi$, the function $\mathfrak{u}(P,w) = \mathfrak{u}_{\pi_{P}}(P,w)$ is a generalized epistemic utility function that is partition-wise strictly proper.
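As a sanity check, the following sketch (encoding mine) computes the Brier score on a fixed three-cell partition and verifies partition-wise strict propriety against a few alternatives to $P$:

```python
# A sketch, not part of the article: the generalized Brier score, with a
# credence function encoded as a dict from cells (frozensets) to reals.
def brier(P, w):
    """b(P, w) = -sum over s in dom(P) of (P(s) - 1{w in s})^2."""
    return -sum((p - (1.0 if w in s else 0.0)) ** 2 for s, p in P.items())

def expectation(P, score, Q):
    """E_P[score(Q)] for Q over the same partition as P (or a coarsening);
    any world inside a cell will do, by partition-measurability."""
    return sum(p * score(Q, next(iter(s))) for s, p in P.items())

cells = [frozenset({0}), frozenset({1}), frozenset({2})]
P = dict(zip(cells, (0.5, 0.3, 0.2)))

# Strict propriety on the fixed partition: P expects itself to do strictly
# better than any alternative Q != P over the same partition.
for qs in ((0.4, 0.4, 0.2), (1.0, 0.0, 0.0), (1/3, 1/3, 1/3)):
    Q = dict(zip(cells, qs))
    assert expectation(P, brier, P) > expectation(P, brier, Q)
```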

If we are working with a fixed partition and only considering probability functions defined over that partition, a strictly proper epistemic utility function for that partition ensures the kind of immodesty that is allegedly a feature of epistemic rationality (Lewis 1971). And in the context of elicitation, strictly proper epistemic utility functions for a given partition can be used to devise systems of penalties and rewards that ensure the kind of honest reporting of forecasts over that partition that has made epistemic utility functions, or *scoring rules*, central to a wide body of literature.^{12}

Once we relax the assumption that we are working with a fixed partition, however, partition-wise strict propriety does not suffice to ensure immodesty, nor to encourage honest reporting. To see why, first note that for any $Q$, our assumptions so far allow us to define the expectation of $\mathfrak{u}(Q)$ relative to any $P$ defined over a refinement of ${\pi}_{Q}$,^{13} and in fact, where ${P}_{Q}$ is the restriction of $P$ to the domain of $Q$, we have:

$\mathbb{E}_{P}[\mathfrak{u}(Q)] = \mathbb{E}_{P_{Q}}[\mathfrak{u}(Q)].$

(1)
We can now see, using the Brier score as our epistemic utility function, that any probability function that is not *maximally opinionated*—any probability function that assigns values other than 0 or 1 to some propositions—will assign a greater expected epistemic utility to a probability function other than itself.^{14} (Consequently, if we do not fix a partition but allow a forecaster to choose which partition to report her forecasts on, she will expect to do better by reporting a strict restriction of her credence function as long as her credence function is not maximally opinionated.^{15})

*Example* 1. Suppose $P$ is not maximally opinionated. Let ${\pi}^{*}$ be a coarsening of ${\pi}_{P}$ such that the restriction ${P}^{*}$ of $P$ is maximally opinionated. Note now that for $s\in {\pi}^{*}$ with ${P}^{*}(s)\ne 0$, $\mathfrak{b}({P}^{*},s)=0$, and hence that

$\mathbb{E}_{P}[\mathfrak{b}(P^{\ast})] = \mathbb{E}_{P^{\ast}}[\mathfrak{b}(P^{\ast})] = 0.$

Now let ${s}_{0}\in {\pi}_{P}$ be such that $P({s}_{0})\in (0,1)$, and let ${\pi}_{P}^{-}={\pi}_{P}\setminus \{{s}_{0}\}$. By definition,

$\mathfrak{b}(P,s_{0})=-(P(s_{0})-1)^{2}-\sum_{s\in\pi_{P}^{-}}P(s)^{2} < -\sum_{s\in\pi_{P}^{-}}P(s)^{2}\le 0,$

and thus

$\mathbb{E}_{P}[\mathfrak{b}(P)] < 0 = \mathbb{E}_{P}[\mathfrak{b}(P^{\ast})].$

□
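The calculation in Example 1 is easy to reproduce numerically. In the sketch below (encoding mine), $\pi^{*}$ is the trivial coarsening $\{W\}$, so the restriction $P^{*}$ is automatically maximally opinionated:

```python
# A numeric check of Example 1 (the Python encoding is mine, not the
# article's): a non-opinionated P expects the Brier score of its maximally
# opinionated restriction P* to exceed its own expected Brier score.
def brier(P, w):
    """b(P, w) = -sum over s in dom(P) of (P(s) - 1{w in s})^2."""
    return -sum((p - (1.0 if w in s else 0.0)) ** 2 for s, p in P.items())

def expectation(P, score, Q):
    # Well-defined here because dom(Q) coarsens dom(P); cf. equation (1).
    return sum(p * score(Q, next(iter(s))) for s, p in P.items())

W = frozenset({0, 1, 2})
P = {frozenset({0}): 0.5, frozenset({1}): 0.3, frozenset({2}): 0.2}
P_star = {W: 1.0}  # restriction of P to the trivial coarsening {W}

assert expectation(P, brier, P_star) == 0.0
assert expectation(P, brier, P) < 0.0  # so P 'prefers' its restriction P*
```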

An interesting question, then, is whether there are epistemic utility functions that capture the relevant kind of immodesty once we consider probability functions defined over any partition. In other words, the question is whether there are epistemic utility functions such that, for any probability function $P$, $P$ ‘takes itself’ to be doing better than any other $Q\ne P$ in terms of epistemic utility. But in order to answer this question, of course, we need to make clear what it is for some probability function to ‘take itself’ to be doing better than another in terms of epistemic utility. After all, we cannot just use the familiar notion of expectation here since, in general, for given $P,Q\in\mathcal{P}$, the expectation of $\mathfrak{u}(Q)$ relative to $P$ is not well-defined.

Before turning to this question, let me introduce a few more pieces of terminology. Fix $P$ and let $\pi $ be some partition of $W$. I will denote by ${[P]}_{\pi}$ the collection of all extensions of $P$ to the coarsest common refinement of ${\pi}_{P}$ and $\pi $—thus, each ${P}^{+}$ in ${[P]}_{\pi}$ will be an extension of $P$ whose domain refines both ${\pi}_{P}$ and $\pi $.^{16} Slightly abusing notation, for a given $Q$ and $P$, I will use ${[P]}_{Q}$ as shorthand for ${[P]}_{{\pi}_{Q}}$. (Note that if $\pi $ is a refinement of ${\pi}_{P}$, ${[P]}_{\pi}$ is just the set of extensions of $P$ to $\pi $, and that if $\pi $ is a coarsening of ${\pi}_{P}$, ${[P]}_{\pi}$ is just the singleton set of the restriction of $P$ to $\pi $.)

It will be convenient to also have at our disposal three different quantities which (albeit imperfectly) summarize some of the information about how $\mathfrak{u}(P)$ and $\mathfrak{u}(Q)$ compare relative to members of $[P]_{Q}$. First, define the *lower expectation*^{17} of $\mathfrak{u}(Q)$ relative to $P$, which I denote by $\underline{\mathbb{E}}_{P}[\mathfrak{u}(Q)]$, by

$\underline{\mathbb{E}}_{P}[\mathfrak{u}(Q)] := \inf_{P'\in[P]_{Q}}\mathbb{E}_{P'}[\mathfrak{u}(Q)].$

Similarly, define the *upper expectation* of $\mathfrak{u}(Q)$ relative to $P$, which I denote by $\overline{\mathbb{E}}_{P}[\mathfrak{u}(Q)]$, by

$\overline{\mathbb{E}}_{P}[\mathfrak{u}(Q)] := \sup_{P'\in[P]_{Q}}\mathbb{E}_{P'}[\mathfrak{u}(Q)].$

Finally, for $\alpha\in[0,1]$, we can define the *$\alpha$-expectation* of $\mathfrak{u}(Q)$ relative to $P$, which I denote by $\mathbb{E}_{P}^{\alpha}[\mathfrak{u}(Q)]$, by

$\mathbb{E}_{P}^{\alpha}[\mathfrak{u}(Q)] := \alpha\cdot\overline{\mathbb{E}}_{P}[\mathfrak{u}(Q)]+(1-\alpha)\cdot\underline{\mathbb{E}}_{P}[\mathfrak{u}(Q)].$

Intuitively, the lower expectation of $\mathfrak{u}\text{(}Q)$ relative to $P$ can be thought of as $P$’s worst-case estimate for the value of $\mathfrak{u}\text{(}Q)$; similarly, the upper expectation of $\mathfrak{u}\text{(}Q)$ relative to $P$ can be thought of as $P$’s best-case estimate for the value of $\mathfrak{u}\text{(}Q)$. (For a given $\alpha $, the $\alpha $-expectation of $\mathfrak{u}\text{(}Q)$ relative to $P$ is a weighted average of the two estimates.)
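In the simplest nontrivial case these quantities can be computed directly. In the sketch below (encoding mine), $P$ is defined over a two-cell partition and $Q$ over a three-cell refinement; the members of $[P]_{Q}$ form a one-parameter family, and since the expectation is linear in that parameter, the infimum and supremum are attained at the two endpoints:

```python
# A sketch (encoding mine): P lives on the partition {{0,1},{2}} with values
# (0.6, 0.4), and Q on the refinement {{0},{1},{2}}. Members of [P]_Q form a
# one-parameter family: mass x in [0, 0.6] on {0}, 0.6 - x on {1}, 0.4 on {2}.
def brier(P, w):
    return -sum((p - (1.0 if w in s else 0.0)) ** 2 for s, p in P.items())

def expectation(P, score, Q):
    return sum(p * score(Q, next(iter(s))) for s, p in P.items())

c0, c1, c2 = frozenset({0}), frozenset({1}), frozenset({2})
Q = {c0: 0.5, c1: 0.1, c2: 0.4}

def extension(x):
    """An element of [P]_Q: an extension of P = (0.6, 0.4) to Q's partition."""
    return {c0: x, c1: 0.6 - x, c2: 0.4}

# E_{P'}[b(Q)] is linear in x, so inf and sup are attained at the endpoints,
# i.e., at the two opinionated extensions x = 0 and x = 0.6.
vals = [expectation(extension(x), brier, Q) for x in (0.0, 0.6)]
lower, upper = min(vals), max(vals)
alpha_exp = 0.5 * upper + 0.5 * lower  # the alpha-expectation for alpha = 1/2
assert lower <= alpha_exp <= upper  # the ordering recorded in (3)
```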

Clearly,

$\underline{\mathbb{E}}_{P}[\mathfrak{u}(Q)] \le \overline{\mathbb{E}}_{P}[\mathfrak{u}(Q)],$

with equality if $\pi_{P}\sqsubseteq\pi_{Q}$, in which case

$\underline{\mathbb{E}}_{P}[\mathfrak{u}(Q)] = \mathbb{E}_{P}[\mathfrak{u}(Q)] = \overline{\mathbb{E}}_{P}[\mathfrak{u}(Q)].$

Also note that for any $\alpha\in[0,1]$,

$\text{If } \pi_{P}\sqsubseteq\pi_{Q}, \text{ then } \overline{\mathbb{E}}_{P}[\mathfrak{u}(Q)] = \underline{\mathbb{E}}_{P}[\mathfrak{u}(Q)] = \mathbb{E}_{P}^{\alpha}[\mathfrak{u}(Q)] = \mathbb{E}_{P}[\mathfrak{u}(Q)].$

(2)
Moreover, for any $\alpha\in[0,1]$, we have:

$\underline{\mathbb{E}}_{P}[\mathfrak{u}(Q)] = \mathbb{E}_{P}^{0}[\mathfrak{u}(Q)] \le \mathbb{E}_{P}^{\alpha}[\mathfrak{u}(Q)] \le \mathbb{E}_{P}^{1}[\mathfrak{u}(Q)] = \overline{\mathbb{E}}_{P}[\mathfrak{u}(Q)].$

(3)
Given all of these resources, we have two ways of formulating a generalized immodesty principle.^{18} Say that an epistemic utility function $\mathfrak{u}$ is *universally* $u$-proper iff for each $P\ne Q$,

$\mathbb{E}_{P}[\mathfrak{u}(P)] \ge \overline{\mathbb{E}}_{P}[\mathfrak{u}(Q)],$

and *strictly universally* $u$-proper iff the above inequality is always strict. Say that it is *universally* $l$-proper iff for each $P\ne Q$,

$\mathbb{E}_{P}[\mathfrak{u}(P)] \ge \underline{\mathbb{E}}_{P}[\mathfrak{u}(Q)],$

and *strictly universally* $l$-proper iff the above inequality is always strict. The two generalized immodesty principles I will consider are (*strict*) *universal* $u$-*propriety*—the claim that all epistemic utility functions must be (strictly) universally $u$-proper—and (*strict*) *universal* $l$-*propriety*—the claim that all epistemic utility functions must be (strictly) universally $l$-proper. My question will be whether there are any epistemic utility functions that satisfy any of these principles.

Before turning to this question, I want to spend some time explaining why these two principles stand out among other plausible generalizations as worthy of our attention. (Those who find $u$-propriety and $l$-propriety independently interesting are welcome to skip to the next section.)

One way to think about immodesty is as the claim that epistemic utility functions should make all coherent credence functions immodest in the following sense: an agent with that credence function will think her own credence function is *choice-worthy*—and perhaps uniquely so—among the alternative credence functions she could have, relative to that epistemic utility function. When the alternatives all have a well-defined expectation, and on the assumption that an option is choice-worthy if it maximizes expected utility, immodesty thus understood amounts to the claim that any epistemic utility function should be proper or strictly proper. So in order to formulate generalizations of immodesty to the case where alternative credence functions lack a well-defined expectation, we need to consider alternative ways of identifying when a credence function is choice-worthy among a given set of alternatives.

The literature on decision-making with imprecise probabilities contains a number of options we can make use of: rules for deciding between options whose outcomes depend on the state of the world when we do not have well-defined credences for each of the relevant states of the world.^{19} Each of them can be used to formulate a way to understand what it is for a credence function to take itself to be choice-worthy when the alternatives include all credence functions regardless of their domain, and accordingly to formulate a generalized immodesty principle.^{20}

First, we could say that $P$ takes itself to be choice-worthy iff its expectation is at least as great as that of any alternative relative to every member of $[P]_{Q}$:

$\text{For each } Q\ne P \text{ and each } P^{+}\in[P]_{Q},\ \mathbb{E}_{P^{+}}[\mathfrak{u}(P)] \ge \mathbb{E}_{P^{+}}[\mathfrak{u}(Q)]$

(4)
Alternatively, we could say that $P$ takes itself to be choice-worthy iff no alternative has greater expectation than $P$ relative to every member of $[P]_{Q}$:

$\text{For each } Q\ne P, \text{ there is } P^{+}\in[P]_{Q} \text{ such that } \mathbb{E}_{P^{+}}[\mathfrak{u}(P)] \ge \mathbb{E}_{P^{+}}[\mathfrak{u}(Q)]$

(5)
We could instead say that $P$ takes itself to be choice-worthy iff

$\text{For each } Q\ne P,\ \overline{\mathbb{E}}_{P}[\mathfrak{u}(P)] \ge \overline{\mathbb{E}}_{P}[\mathfrak{u}(Q)],$

(6)
Or that $P$ takes itself to be choice-worthy iff

$\text{For each } Q\ne P,\ \underline{\mathbb{E}}_{P}[\mathfrak{u}(P)] \ge \underline{\mathbb{E}}_{P}[\mathfrak{u}(Q)].$

(7)
We could also say that $P$ takes itself to be choice-worthy iff

$\text{For each } Q\ne P,\ \underline{\mathbb{E}}_{P}[\mathfrak{u}(P)] \ge \overline{\mathbb{E}}_{P}[\mathfrak{u}(Q)].$

(8)
Finally, we could say that $P$ takes itself to be choice-worthy iff for a given $\alpha \in (0,1)$,

$\text{For each } Q\ne P,\ \mathbb{E}_{P}^{\alpha}[\mathfrak{u}(P)] \ge \mathbb{E}_{P}^{\alpha}[\mathfrak{u}(Q)].$

(9)
For each of these ways of understanding what it is for $P$ to take itself to be choice-worthy, we could have a generalized version of weak propriety. Now, any objection to using one of the above principles—the details of how these principles are formulated in the more general decision-theoretic setting need not concern us here—can arguably be used to object to a particular way of making precise the fully general version of immodesty.^{21} But since it remains largely an open question whether any of the objections to the above principles are decisive, I want to remain neutral as to which is the best way of characterizing a fully general immodesty principle.

Fortunately, these generalizations are not logically independent of one another. To see that, start by fixing $\mathfrak{u}$ and noting that the supremum and infimum in the definitions of upper and lower expectations can be replaced with a maximum and a minimum. (This follows from the fact that $\{\mathbb{E}_{P^{+}}[\mathfrak{u}(Q)] : P^{+}\in[P]_{Q}\}$ is compact in $\mathbb{R}$.^{22}) It follows from this and the observation in (3) that (5) and (7) are equivalent to each other; that (4), (6), and (8) are equivalent to each other; and that (6) entails (7). As a result, (7) is weaker than all of (4), (5), (6), and (8). Further, since for a fixed $P$, any counterexample to (7) is itself a counterexample to (6), we have that the weakest form of immodesty we could hope for is given by (7): if $\mathfrak{u}$ does not satisfy (7) for all $P$, it cannot satisfy any of the other generalizations.

Similarly, it follows from these observations that (6) is the strongest generalization of immodesty from among those we have considered. In short, the most we can hope for when formulating a generalized immodesty principle is essentially the requirement that all epistemic utility functions satisfy (6)—that is, universal $u$-propriety; but at the very least, we want a generalized immodesty principle equivalent to the claim that all epistemic utility functions satisfy (7)—that is, universal $l$-propriety. The question now is whether there are epistemic utility functions satisfying either of these generalizations.

I begin by asking whether there are any universally $u$-proper epistemic utility functions. The answer, perhaps unsurprisingly, is *no*, at least if we restrict our attention to partition-wise strictly proper epistemic utility functions.

Say that an epistemic utility function $\mathfrak{u}$ is *downwards proper* iff for each $P$ and each $Q$ defined over a coarsening of ${\pi}_{P}$,

$\mathbb{E}_{P}[\mathfrak{u}(P)] \ge \mathbb{E}_{P}[\mathfrak{u}(Q)],$

and *strictly downwards proper* iff the above inequality is always strict. Say that $\mathfrak{u}$ is *upwards* $u$-proper iff for each $P$ and each $Q$ defined over a refinement of ${\pi}_{P}$,

$\mathbb{E}_{P}[\mathfrak{u}(P)] \ge \overline{\mathbb{E}}_{P}[\mathfrak{u}(Q)],$

and *strictly upwards* $u$-proper iff the above inequality is always strict.^{23}

Using these definitions we can make a few simple observations. First, and most clearly, (strict) downwards propriety and (strict) upwards $u$-propriety individually suffice for (strict) partition-wise propriety. Second, for partition-wise proper epistemic utility functions, (strict) downwards propriety (resp. (strict) upwards $u$-propriety) can be established by looking only at comparisons between credence functions and their restrictions (resp. extensions).

**Fact 3.1**. *Suppose* $\mathfrak{u}$ *is partition-wise proper. Then*:

(i) $\mathfrak{u}$ *is downwards proper iff for each* $P$ *and each restriction* $Q$ *of* $P$, $\mathbb{E}_{P}[\mathfrak{u}(P)] \ge \mathbb{E}_{P}[\mathfrak{u}(Q)]$; *and* $\mathfrak{u}$ *is strictly downwards proper iff* $\mathbb{E}_{P}[\mathfrak{u}(P)] > \mathbb{E}_{P}[\mathfrak{u}(Q)]$ *whenever* $Q$ *is a restriction of* $P$ *and* $Q\ne P$.

(ii) $\mathfrak{u}$ *is upwards* $u$-*proper iff for each* $P$ *and each extension* $Q$ *of* $P$, $\mathbb{E}_{P}[\mathfrak{u}(P)] \ge \overline{\mathbb{E}}_{P}[\mathfrak{u}(Q)]$; *and* $\mathfrak{u}$ *is strictly upwards* $u$-*proper iff* $\mathbb{E}_{P}[\mathfrak{u}(P)] > \overline{\mathbb{E}}_{P}[\mathfrak{u}(Q)]$ *whenever* $Q$ *is an extension of* $P$ *and* $Q\ne P$.

*Proof*. Only the right-to-left direction of each biconditional is non-trivial, and that of (i) follows immediately from (1) and the fact that if $Q$ is defined over a coarsening of $\pi_{P}$ and $P_{Q}$ is the restriction of $P$ to $\pi_{Q}$, partition-wise propriety ensures that $\mathbb{E}_{P_{Q}}[\mathfrak{u}(P_{Q})] \ge \mathbb{E}_{P_{Q}}[\mathfrak{u}(Q)]$.

For the right-to-left direction of (ii), simply note that for $P$ and $Q$ with $\pi_{Q}\sqsubseteq\pi_{P}$, our assumptions ensure that for each extension $P^{+}$ of $P$ to $\pi_{Q}$,

$\mathbb{E}_{P}[\mathfrak{u}(P)] \ge \mathbb{E}_{P^{+}}[\mathfrak{u}(P^{+})] \ge \mathbb{E}_{P^{+}}[\mathfrak{u}(Q)],$

which ensures $\mathbb{E}_{P}[\mathfrak{u}(P)] \ge \overline{\mathbb{E}}_{P}[\mathfrak{u}(Q)]$.

□

Say that an extension $Q$ of $P$ is *opinionated* iff for each $s\in\pi_{P}$ there is $s_{Q}\in\pi_{Q}$ with $s_{Q}\subseteq s$ and $Q(s_{Q}) = P(s)$—in other words, an extension is opinionated if for each cell of $\pi_{P}$, $Q$ assigns all of the probability $P$ assigns to it to a single one of its subsets in $\pi_{Q}$. In order to determine the value of the upper or lower expectation of any extension $Q$ of $P$, all we need to look at are the opinionated extensions of $P$ defined over $\pi_{Q}$.

**Fact 3.2**. *Fix an epistemic utility function* $\mathfrak{u}$, *a probability function* $P$, *and any* $Q$ *defined over a refinement of* $\pi_{P}$. *There are opinionated extensions* $P_{Q}^{+}$ *and* $P_{Q}^{-}$ *of* $P$ *defined over* $\pi_{Q}$ *such that*

$\overline{\mathbb{E}}_{P}[\mathfrak{u}(Q)] = \mathbb{E}_{P_{Q}^{+}}[\mathfrak{u}(Q)] \quad\text{and}\quad \underline{\mathbb{E}}_{P}[\mathfrak{u}(Q)] = \mathbb{E}_{P_{Q}^{-}}[\mathfrak{u}(Q)].$

*Proof*. For each $s\in\pi_{P}$, pick $s_{Q}^{+}, s_{Q}^{-}\in\pi_{Q}$, with $s_{Q}^{+}\subseteq s$ and $s_{Q}^{-}\subseteq s$, such that for all $t\in\pi_{Q}$ with $t\subseteq s$,

$\mathfrak{u}(Q,s_{Q}^{+}) \ge \mathfrak{u}(Q,t) \quad\text{and}\quad \mathfrak{u}(Q,s_{Q}^{-}) \le \mathfrak{u}(Q,t),$

and let $P_{Q}^{+}$ and $P_{Q}^{-}$ be the unique opinionated extensions of $P$ such that for all $s$, $P_{Q}^{+}(s_{Q}^{+}) = P(s)$ and $P_{Q}^{-}(s_{Q}^{-}) = P(s)$. Since any $P'\in[P]_{Q}$ satisfies $\mathbb{E}_{P'}[\mathfrak{u}(Q)] = \sum_{s\in\pi_{P}}\sum_{t\subseteq s}P'(t)\,\mathfrak{u}(Q,t)$, with $\sum_{t\subseteq s}P'(t) = P(s)$ for each $s$, this expectation lies between $\mathbb{E}_{P_{Q}^{-}}[\mathfrak{u}(Q)]$ and $\mathbb{E}_{P_{Q}^{+}}[\mathfrak{u}(Q)]$.

□
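For readers who want a concrete check, here is a minimal numerical sketch of Fact 3.2 (my illustration, not part of the paper's formal apparatus), using a Brier-style additive measure: $\mathbb{E}_{P^{+}}[\mathfrak{u}(Q)]$ is linear in the extension $P^{+}$, so it is maximized (resp. minimized) at an opinionated extension, and randomly sampled extensions never do better.

```python
import random

# Four worlds 0..3; coarse partition {0,1},{2,3}; fine partition of singletons.
fine = [{0}, {1}, {2}, {3}]
P = [0.6, 0.4]                # P over the coarse partition
Q = [0.4, 0.2, 0.3, 0.1]      # Q over the fine partition

def brier(cred, partition, w):
    """Brier-style additive accuracy of credences `cred` over `partition` at world w."""
    return -sum((c - (1 if w in s else 0)) ** 2 for c, s in zip(cred, partition))

# Utility of Q at the unique world of each fine cell.
u_Q = [brier(Q, fine, min(s)) for s in fine]

# Opinionated extensions of P: each coarse cell's mass goes to one fine cell inside it.
best_opinionated = max(
    P[0] * u_Q[a] + P[1] * u_Q[b]
    for a in (0, 1) for b in (2, 3)
)

# Randomly sampled (typically non-opinionated) extensions of P never do better.
random.seed(0)
for _ in range(1000):
    t0, t1 = random.random(), random.random()
    ext = [P[0] * t0, P[0] * (1 - t0), P[1] * t1, P[1] * (1 - t1)]
    exp_Q = sum(p * u for p, u in zip(ext, u_Q))
    assert exp_Q <= best_opinionated + 1e-12
```

Here `best_opinionated` plays the role of $\overline{\mathbb{E}}_{P}[\mathfrak{u}(Q)]$; replacing `max` with `min` would give the lower expectation of Fact 3.2.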

A consequence of the last two results is that for determining whether $\mathfrak{u}$ is upwards $u$-proper, we don’t really need to compute upper-expectations.

**Corollary 3.3**. *A partition-wise proper epistemic utility function* $\mathfrak{u}$ *is upwards* $u$-*proper (resp. strictly upwards* $u$-*proper) iff for each* $P$ *and each opinionated extension* $Q$ *of* $P$, ${\mathbb{E}}_{P}\text{[}\mathfrak{u}\text{(}P)]\ge {\mathbb{E}}_{Q}[\mathfrak{u}\text{(}Q)]\text{\hspace{0.17em}\hspace{0.17em}}(\mathrm{resp}.\text{\hspace{0.17em}}{\mathbb{E}}_{P}[\mathfrak{u}\text{(}P)]>{\mathbb{E}}_{Q}[\mathfrak{u}\text{(}Q)]\text{\hspace{0.17em}\hspace{0.17em}}\mathrm{when}\text{\hspace{0.17em}\hspace{0.17em}}Q\ne P.)$

*Proof*. The left-to-right direction follows immediately from (1) and the definition of upper expectation. For the right-to-left direction, take $P$ and fix $Q$ defined over a refinement of ${\pi}_{P}$. From Fact 3.2, we know that there is an opinionated extension ${P}_{Q}^{+}$ of $P$ to ${\pi}_{Q}$ such that ${\overline{\mathbb{E}}}_{P}[\mathfrak{u}\text{(}Q)]={\mathbb{E}}_{{P}_{Q}^{+}}[\mathfrak{u}\text{(}Q)]$.

But by assumption,

${\overline{\mathbb{E}}}_{P}[\mathfrak{u}\text{(}P)]\text{\hspace{0.17em}\hspace{0.17em}}={\mathbb{E}}_{P}[\mathfrak{u}\text{(}P)]\text{\hspace{0.17em}\hspace{0.17em}}\ge \text{\hspace{0.17em}\hspace{0.17em}}{\mathbb{E}}_{{P}_{Q}^{+}}[\mathfrak{u}\text{(}{P}_{Q}^{+})]$

(resp. the above inequality is strict when $Q\ne \text{}P$), and from partition-wise propriety we know that

${\mathbb{E}}_{{P}_{Q}^{+}}[\mathfrak{u}\text{(}{P}_{Q}^{+})]\ge {\mathbb{E}}_{{P}_{Q}^{+}}[\mathfrak{u}\text{(}Q)].$

We can thus conclude that ${\overline{\mathbb{E}}}_{P}[\mathfrak{u}\text{(}P)]\ge {\overline{\mathbb{E}}}_{P}[\mathfrak{u}\text{(}Q)]$ (resp. ${\overline{\mathbb{E}}}_{P}[\mathfrak{u}\text{(}P)]\text{\hspace{0.17em}\hspace{0.17em}}>\text{\hspace{0.17em}\hspace{0.17em}}{\overline{\mathbb{E}}}_{P}[\mathfrak{u}\text{(}Q)]$ when $Q\ne P$). From Fact 3.1, we conclude that $\mathfrak{u}$ is upwards $u$-proper (resp. strictly upwards $u$-proper).

□

**Corollary 3.4**. *A partition-wise proper epistemic utility function* $\mathfrak{u}$ *is upwards* $u$-*proper (resp. strictly upwards* $u$-*proper) iff for each* $P$ *and each extension* $Q$ of $P$, ${\mathbb{E}}_{P}[\mathfrak{u}\text{(}P)]\text{\hspace{0.17em}\hspace{0.17em}}\ge \text{\hspace{0.17em}\hspace{0.17em}}{\mathbb{E}}_{Q}[\mathfrak{u}\text{(}Q)]$ (resp. ${\mathbb{E}}_{P}[\mathfrak{u}\text{(}P)]\text{\hspace{0.17em}\hspace{0.17em}}>\text{\hspace{0.17em}\hspace{0.17em}}{\mathbb{E}}_{Q}[\mathfrak{u}\text{(}Q)]$ *when* $Q\ne P$.)

*Proof*. The right-to-left direction is an immediate consequence of Corollary 3.3. For the converse, simply note that if ${P}_{Q}^{+}$ is an opinionated extension of $P$ such that

${\overline{\mathbb{E}}}_{P}[\mathfrak{u}\text{(}Q)]\text{\hspace{0.17em}\hspace{0.17em}}={\mathbb{E}}_{{P}_{Q}^{+}}[\mathfrak{u}\text{(}Q)],$

the definition of upper expectation entails that ${\mathbb{E}}_{{P}_{Q}^{+}}[\mathfrak{u}(Q)]\ge {\mathbb{E}}_{Q}[\mathfrak{u}(Q)]$, so the left-to-right direction of Corollary 3.3 yields the desired result.

□

Now, a natural question to ask is whether there are epistemic utility functions that are both (strictly) upwards $u$-proper and (strictly) downwards proper. But this just turns out to be the question whether there are universally $u$-proper epistemic utility functions.

**Fact 3.5**. *An epistemic utility function is universally* $u$-*proper (resp. strictly universally* $u$-*proper) iff it is downwards proper (resp. strictly downwards proper) and upwards* $u$-*proper (resp. strictly upwards* $u$-*proper)*.

*Proof*. The left-to-right direction is immediate. For the right-to-left direction, suppose $\mathfrak{u}$ is downwards proper and upwards $u$-proper and fix $P\ne Q$. Let ${P}_{Q}$ be an arbitrary probability function in ${[P]}_{Q}$, so that ${P}_{Q}$ is an extension of $P$ to the coarsest partition that refines both ${\pi}_{P}$ and ${\pi}_{Q}$. From the fact that $\mathfrak{u}$ is upwards $u$-proper, together with Corollary 3.4, we know that

${\mathbb{E}}_{P}[\mathfrak{u}\text{(}P)]\ge {\mathbb{E}}_{{P}_{Q}}[\mathfrak{u}\text{(}{P}_{Q})].$

And since $\mathfrak{u}$ is downwards proper, we know that

${\mathbb{E}}_{{P}_{Q}}[\mathfrak{u}({P}_{Q})]\ge {\mathbb{E}}_{{P}_{Q}}[\mathfrak{u}(Q)].$

We thus have that for any ${P}_{Q}$ in ${[P]}_{Q}$, ${\mathbb{E}}_{P}[\mathfrak{u}\text{(}P)]\text{\hspace{0.17em}\hspace{0.17em}}\ge \text{}{\mathbb{E}}_{{P}_{Q}}[\mathfrak{u}\text{(}Q)]$, which entails ${\mathbb{E}}_{P}[\mathfrak{u}\text{(}P)]\text{\hspace{0.17em}\hspace{0.17em}}\ge \text{\hspace{0.17em}\hspace{0.17em}}{\overline{\mathbb{E}}}_{P}[\mathfrak{u}\text{(}Q)]$, as desired. If $\mathfrak{u}$ is both strictly upwards $u$-proper and strictly downwards proper, then for any $P\ne Q$ we know that ${P}_{Q}$ cannot equal both $P$ and $Q$, and thus either we have ${\mathbb{E}}_{P}[\mathfrak{u}\text{(}P)]>{\overline{\mathbb{E}}}_{P}[\mathfrak{u}\text{(}{P}_{Q})]$ or ${\mathbb{E}}_{{P}_{Q}}[\mathfrak{u}\text{(}{P}_{Q})]>{\mathbb{E}}_{{P}_{Q}}[\mathfrak{u}\text{(}Q)]$; either way, we can conclude that ${\mathbb{E}}_{P}[\mathfrak{u}\text{(}P)]>{\mathbb{E}}_{{P}_{Q}}[\mathfrak{u}\text{(}Q)]$, and thus that ${\mathbb{E}}_{P}[\mathfrak{u}\text{(}P)]>{\overline{\mathbb{E}}}_{P}[\mathfrak{u}\text{(}Q)]$, as desired.

□

And as announced above, there just are no strictly universally $u$-proper epistemic utility functions.

**Theorem 3.6**. *There are no strictly universally* $u$-*proper epistemic utility functions*.

*Proof*. Suppose $\mathfrak{u}$ is strictly downwards proper. Fix $P$ and ${\pi}^{*}\sqsubseteq {\pi}_{P}$ with ${\pi}^{*}\ne {\pi}_{P}$, and let ${P}^{*}$ be some extension of $P$ to ${\pi}^{*}$. Strict downwards propriety entails ${\mathbb{E}}_{{P}^{*}}[\mathfrak{u}({P}^{*})]>{\mathbb{E}}_{{P}^{*}}[\mathfrak{u}(P)]$. And combined with the definition of upper expectation and (1), this entails

${\overline{\mathbb{E}}}_{P}[\mathfrak{u}({P}^{*})]\ge {\mathbb{E}}_{{P}^{*}}[\mathfrak{u}({P}^{*})]>{\mathbb{E}}_{{P}^{*}}[\mathfrak{u}(P)]={\mathbb{E}}_{P}[\mathfrak{u}(P)]$

which shows that $\mathfrak{u}$ is not strictly upwards $u$-proper.

□

Finally, we can strengthen Theorem 3.6 if we restrict ourselves to the class of partition-wise strictly proper epistemic utility functions.

**Theorem 3.7**. *There are no universally* $u$-*proper epistemic utility functions that are strictly partition-wise proper*.

*Proof*. Let $\mathfrak{u}$ be strictly partition-wise proper, and suppose $\mathfrak{u}$ is downwards proper. Pick $P$ and let ${\pi}^{*}$ be a refinement of ${\pi}_{P}$ such that for all $s\in {\pi}_{P}\setminus {\pi}^{*}$, $P(s)\ne 0$. Let $Q$ be an extension of $P$ to ${\pi}^{*}$ that is not opinionated and pick ${s}_{0}\in {\pi}_{P}$ and ${s}_{0}^{\ast}\in {\pi}^{\ast}$ such that ${s}_{0}^{\ast}\subseteq {s}_{0}$ and $Q({s}_{0}^{\ast})\ne P({s}_{0})$. From Fact 3.2 and the fact that $\mathfrak{u}$ is downwards proper we know, again using (1) and the definition of upper expectation, that there is an opinionated extension ${P}_{Q}^{+}$ of $P$ defined over ${\pi}_{Q}$ such that

${\mathbb{E}}_{{P}_{Q}^{+}}[\mathfrak{u}\text{(}Q)]={\overline{\mathbb{E}}}_{P}[\mathfrak{u}\text{(}Q)]\ge \text{}{\mathbb{E}}_{Q}[\mathfrak{u}\text{(}Q)]\ge {\mathbb{E}}_{Q}[\mathfrak{u}\text{(}P)]={\mathbb{E}}_{P}[\mathfrak{u}\text{(}P)].$

Since by construction ${P}_{Q}^{+}\ne Q$, strict partition-wise propriety ensures that

${\mathbb{E}}_{{P}_{Q}^{+}}[\mathfrak{u}\text{(}{P}_{Q}^{+})]>{\mathbb{E}}_{{P}_{Q}^{+}}[\mathfrak{u}\text{(}Q)].$

Putting all of this together and using the definition of upper-expectation, we have that there is an extension ${P}_{Q}^{+}$ of $P$ such that

${\overline{\mathbb{E}}}_{P}[\mathfrak{u}\text{(}{P}_{Q}^{+})]\ge {\mathbb{E}}_{{P}_{Q}^{+}}[\mathfrak{u}\text{(}{P}_{Q}^{+})]>{\mathbb{E}}_{{P}_{Q}^{+}}[\mathfrak{u}\text{(}Q)]\ge {\mathbb{E}}_{P}[\mathfrak{u}\text{(}P)],$

which shows that $\mathfrak{u}$ is not upwards $u$-proper.

□

The next question to ask is whether there are any universally $l$-proper epistemic utility functions. If we require that epistemic utility functions be *continuous*,^{24} the answer to this question also turns out to be *no*—again, at least if we restrict ourselves to the class of strictly partition-wise proper epistemic utility functions.

Much like in the previous section, I will define an analogue of upwards $u$-propriety that relies on the lower expectation, rather than on the upper expectation, in the obvious way: $\mathfrak{u}$ is *upwards* $l$-proper iff for each $P$ and each $Q$ defined over a refinement of ${\pi}_{P}$,

${\underline{\mathbb{E}}}_{P}[\mathfrak{u}(P)]\ge {\underline{\mathbb{E}}}_{P}[\mathfrak{u}(Q)];$

$\mathfrak{u}$ is *strictly upwards* $l$-proper iff the above inequality is always strict.

Before asking whether there are strictly universally $l$-proper epistemic utility functions, we could ask whether there are any epistemic utility functions that are both strictly downwards proper and strictly upwards $l$-proper. If we restrict ourselves to the class of continuous epistemic utility functions, we can answer this question in the negative.^{25}

**Theorem 3.8**. *If* $\mathfrak{u}$ *is continuous and strictly downwards proper, then it is not upwards* $l$-*proper*.

So we can conclude that if $\mathfrak{u}$ is continuous, it is not strictly universally $l$-proper.

**Corollary 3.9**. *There are no continuous, strictly universally* $l$-*proper epistemic utility functions*.

□

*Proof of Theorem 3.8*. This result is a straightforward consequence of the following lemma (essentially due to Grünwald & Dawid 2004), a proof of which is in the appendix.

**Lemma 3.10**. *Suppose* $\mathfrak{u}$ *is continuous and partition-wise proper. For any* $P\in \mathcal{P}$ *and any* $\pi \sqsubseteq {\pi}_{P}$, *there is some* $\widehat{P}\in {[P]}_{\pi}$ *such that, for all* $Q\in {\mathcal{P}}_{\pi}$ *and all* ${P}^{*}\in {[P]}_{\pi}$,

${\underline{\mathbb{E}}}_{P}[\mathfrak{u}(Q)]\le {\underline{\mathbb{E}}}_{P}[\mathfrak{u}(\widehat{P})]={\mathbb{E}}_{\widehat{P}}[\mathfrak{u}(\widehat{P})]\le {\mathbb{E}}_{{P}^{\ast}}[\mathfrak{u}({P}^{\ast})].$

Suppose now $\mathfrak{u}$ is continuous and strictly downwards proper, fix $P$, and let $\pi \sqsubseteq {\pi}_{P}$ with $\pi \ne {\pi}_{P}$. Lemma 3.10 ensures that there is $\widehat{P}$ with

${\underline{\mathbb{E}}}_{P}[\mathfrak{u}(\widehat{P})]={\mathbb{E}}_{\widehat{P}}[\mathfrak{u}(\widehat{P})].$

And since $\mathfrak{u}$ is strictly downwards proper and $P$ is a restriction of $\widehat{P}$, we can conclude that

${\underline{\mathbb{E}}}_{P}[\mathfrak{u}(\widehat{P})]>{\mathbb{E}}_{\widehat{P}}[\mathfrak{u}(P)]={\mathbb{E}}_{P}[\mathfrak{u}(P)],$

which means $\mathfrak{u}$ is not upwards $l$-proper.

□

Before concluding this subsection, let me note two consequences of Lemma 3.10, which serve as counterparts to Corollary 3.4 and Fact 3.5.

**Fact 3.11**. *Suppose* $\mathfrak{u}$ *is partition-wise proper and continuous. Then* $\mathfrak{u}$ *is upwards* $l$-*proper (resp. strictly upwards* $l$*-proper) iff for each* $P$ *and each* $\pi \sqsubseteq {\pi}_{P}$ *there is* ${P}^{*}\in {[P]}_{\pi}$ *such that* ${\mathbb{E}}_{P}[\mathfrak{u}(P)]\ge {\mathbb{E}}_{{P}^{\ast}}[\mathfrak{u}({P}^{\ast})]$ (*resp*. ${\mathbb{E}}_{P}[\mathfrak{u}(P)]>{\mathbb{E}}_{{P}^{\ast}}[\mathfrak{u}({P}^{\ast})]$ *if* $\pi \ne {\pi}_{P}$).

*Proof*. From Lemma 3.10, we know that for each $P$ and each $\pi \sqsubseteq {\pi}_{P}$ there is $\widehat{P}\in {[P]}_{\pi}$ such that

$\underset{Q\in {\mathcal{P}}_{\pi}}{\mathrm{max}}\,{\underline{\mathbb{E}}}_{P}[\mathfrak{u}(Q)]={\underline{\mathbb{E}}}_{P}[\mathfrak{u}(\widehat{P})]={\mathbb{E}}_{\widehat{P}}[\mathfrak{u}(\widehat{P})]=\underset{Q\in {[P]}_{\pi}}{\mathrm{min}}\,{\mathbb{E}}_{Q}[\mathfrak{u}(Q)].$

The left-to-right direction now follows immediately (simply let ${P}^{*}=\widehat{P}$). For the right-to-left direction, simply note that for each $P$ and $\pi \sqsubseteq {\pi}_{P}$ we have ${P}^{*}\in {[P]}_{\pi}$ with ${\mathbb{E}}_{P}[\mathfrak{u}(P)]\ge {\mathbb{E}}_{{P}^{\ast}}[\mathfrak{u}({P}^{\ast})]$ (resp. ${\mathbb{E}}_{P}[\mathfrak{u}(P)]>{\mathbb{E}}_{{P}^{\ast}}[\mathfrak{u}({P}^{\ast})]$ if $\pi \ne {\pi}_{P}$). But of course,

${\mathbb{E}}_{{P}^{\ast}}[\mathfrak{u}({P}^{\ast})]\ge \underset{Q\in {[P]}_{\pi}}{\mathrm{min}}\,{\mathbb{E}}_{Q}[\mathfrak{u}(Q)]=\underset{Q\in {\mathcal{P}}_{\pi}}{\mathrm{max}}\,{\underline{\mathbb{E}}}_{P}[\mathfrak{u}(Q)],$

where the last equality follows from Lemma 3.10. We can thus conclude that $\mathfrak{u}$ is upwards $l$-proper (resp. strictly upwards $l$-proper).

□

**Fact 3.12**. *A continuous epistemic utility function* $\mathfrak{u}$ *is (strictly) universally* $l$-*proper iff it is (strictly) upwards* $l$-*proper and (strictly) downwards proper*.

*Proof*. Again, we only need to show the right-to-left direction, so fix $P\ne Q$. From Fact 3.11 and the fact that $\mathfrak{u}$ is continuous and upwards $l$-proper, we know that there is ${P}^{*}\in {[P]}_{Q}$ such that ${\mathbb{E}}_{P}[\mathfrak{u}\text{(}P)]\text{\hspace{0.17em}\hspace{0.17em}}\ge \text{\hspace{0.17em}\hspace{0.17em}}{\mathbb{E}}_{{P}^{\ast}}[\mathfrak{u}\text{(}{P}^{\ast})]$. But the fact that $\mathfrak{u}$ is downwards proper entails that ${\mathbb{E}}_{P\ast}[\mathfrak{u}\text{(}P\ast )]\text{\hspace{0.17em}\hspace{0.17em}}\ge \text{\hspace{0.17em}\hspace{0.17em}}{\mathbb{E}}_{{P}^{\ast}}[\mathfrak{u}\text{(}Q)]$, so that

${\mathbb{E}}_{P}[\mathfrak{u}(P)]\ge {\underline{\mathbb{E}}}_{P}[\mathfrak{u}(Q)],$

as desired. If $\mathfrak{u}$ is strictly upwards $l$-proper and strictly downwards proper, then repeat the above reasoning after first assuming ${\pi}_{Q}$ is neither a refinement nor a coarsening of ${\pi}_{P}$, so that ${P}^{*}$ is either different from $P$ or from $Q$.

□

We have seen that there are no strictly universally $u$-proper or $l$-proper epistemic utility functions. But we can easily find examples of downwards proper and upwards $u$-proper (and hence upwards $l$-proper) epistemic utility functions.

Say that an epistemic utility function $\mathfrak{u}$ is an *additive accuracy measure*^{26} iff there is a function $\text{u}:\text{\hspace{0.17em}\hspace{0.17em}}[0,1]\text{\hspace{0.17em}\hspace{0.17em}}\times \text{\hspace{0.17em}\hspace{0.17em}}\{0,1\}\to \text{}\overline{\mathbb{R}}$ such that

$\mathfrak{u}(P,w)={\displaystyle \sum _{s\in {\pi}_{P}}\text{u}}(P(s),\mathbb{1}\{w\in s\}).$

Say that a function $\text{u}:\text{\hspace{0.17em}\hspace{0.17em}}[0,1]\text{\hspace{0.17em}\hspace{0.17em}}\times \text{\hspace{0.17em}\hspace{0.17em}}\{0,1\}\to \text{}\overline{\mathbb{R}}$ is *proper* iff for all $x\ne y\in [0,1]$,

$x\cdot \text{u}(x,1)+(1-x)\cdot \text{u}(x,0)\ge x\cdot \text{u}(y,1)+(1-x)\cdot \text{u}(y,0),$

and say that $\text{u}$ is *strictly proper* iff the above inequality is always strict.

If $\mathfrak{u}$ is an additive accuracy measure, I will call $\text{u}$ its *local accuracy measure*. It is easy to see that an additive accuracy measure is partition-wise proper (resp. strictly proper) iff its local accuracy measure is proper (resp. strictly proper).

For a given local accuracy measure $\text{u}$ I will call ${\varphi}_{\text{u}}:\text{\hspace{0.17em}\hspace{0.17em}}[0,1]\to \text{}\overline{\mathbb{R}}$ its *self-expectation function*, where

${\varphi}_{\text{u}}(x)\text{}:=x\cdot \text{u}(x,1)+(1-x)\cdot \text{u}(x,0).$

The linearity of expectation ensures that if $\mathfrak{u}$ is an additive accuracy measure with local accuracy measure u,

${\mathbb{E}}_{P}[\mathfrak{u}(P)]={\displaystyle \sum _{s\in {\pi}_{P}}}{\varphi}_{\text{u}}(P(s)).$
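As a quick sanity check (mine, not the paper's), this identity can be verified numerically for the standard Brier local measure $\text{u}(x,i)=-(i-x)^{2}$, whose self-expectation function works out to ${\varphi}_{\text{u}}(x)=-x(1-x)$:

```python
import random

# Brier local accuracy u(x, i) = -(i - x)^2; its self-expectation function is
# phi(x) = x*u(x,1) + (1-x)*u(x,0) = -x(1-x).
u = lambda x, i: -(i - x) ** 2
phi = lambda x: x * u(x, 1) + (1 - x) * u(x, 0)

def exp_self_score(P):
    """E_P[u(P)] computed world-wise, for a partition whose cells are
    singletons indexed by range(len(P))."""
    total = 0.0
    for w, pw in enumerate(P):
        score = sum(u(ps, 1 if s == w else 0) for s, ps in enumerate(P))
        total += pw * score
    return total

random.seed(1)
xs = [random.random() for _ in range(5)]
P = [x / sum(xs) for x in xs]

# E_P[u(P)] agrees with the sum of phi over the cell probabilities.
assert abs(exp_self_score(P) - sum(phi(p) for p in P)) < 1e-9
assert abs(phi(0.5) - (-0.25)) < 1e-12   # phi(x) = -x(1-x)
```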

From this we can easily derive the following characterization result.^{27}

**Theorem 3.13**. *An additive accuracy measure* $\mathfrak{u}$ *with a proper local accuracy measure* $\text{u}$ *is downwards proper (resp. strictly downwards proper) iff its self-expectation function* ${\varphi}_{\text{u}}$ *is subadditive, in that for* $x,y\in [0,1]$, with $x+y\in [0,1]$

${\varphi}_{\text{u}}(x+y)\le {\varphi}_{\text{u}}(x)+{\varphi}_{\text{u}}(y),$

*(resp. strictly subadditive, in that the above inequality is always strict)*.


*Proof*. For the left-to-right direction, start by taking a three-celled partition $\pi =\{{s}_{0},{s}_{1},{s}_{2}\}$ and let ${\pi}^{*}=\{{s}_{0},{s}^{*}\}$ be a coarsening of $\pi $ (so that ${s}^{*}={s}_{1}\cup {s}_{2}$). Fix $x,y\in [0,1]$ with $x+y\in [0,1]$ and let $P$ be the unique probability function in ${\mathcal{P}}_{\pi}$ assigning $x$ to ${s}_{1}$ and $y$ to ${s}_{2}$. Let ${P}^{*}$ be the restriction of $P$ to ${\pi}^{*}$ and note that

${\mathbb{E}}_{P}[\mathfrak{u}(P)]={\displaystyle \sum _{s\in \pi}{\varphi}_{\text{u}}}(P(s))={\varphi}_{\text{u}}(1-(x+y))+{\varphi}_{\text{u}}(x)+{\varphi}_{\text{u}}(y)$

and

${\mathbb{E}}_{{P}^{\ast}}[\mathfrak{u}({P}^{\ast})]={\displaystyle \sum _{s\in {\pi}^{\ast}}{\varphi}_{\text{u}}}({P}^{\ast}(s))={\varphi}_{\text{u}}(1-(x+y))+{\varphi}_{\text{u}}(x+y).$

Since $\mathfrak{u}$ is downwards proper, we know that ${\mathbb{E}}_{{P}^{\ast}}[\mathfrak{u}\text{(}{P}^{\ast})]\text{\hspace{0.17em}\hspace{0.17em}}=\text{\hspace{0.17em}\hspace{0.17em}}{\mathbb{E}}_{P}[\mathfrak{u}\text{(}{P}^{\ast})]\text{\hspace{0.17em}\hspace{0.17em}}\le \text{\hspace{0.17em}\hspace{0.17em}}{\mathbb{E}}_{P}[\mathfrak{u}\text{(}P)]$, and hence that ${\varphi}_{\text{u}}(x+y)\le {\varphi}_{\text{u}}(x)+{\varphi}_{\text{u}}(y)$, as required.

For the converse, fix $P$ and $\pi \sqsupseteq {\pi}_{P}$ with $|{\pi}_{P}|-|\pi |=1$. In other words, $\pi $ is a coarsening of ${\pi}_{P}$ such that there are ${s}_{0},{s}_{1}\in {\pi}_{P}$ with ${s}_{0}\cup {s}_{1}={s}^{*}\in \pi $ and $\pi =\{{s}^{*}\}\cup ({\pi}_{P}\setminus \{{s}_{0},{s}_{1}\})$. Let ${P}^{*}$ be the restriction of $P$ to $\pi $, set ${x}_{0}=P({s}_{0})$, ${x}_{1}=P({s}_{1})$, and note that, letting $t$ range over elements of $\pi $,

${\mathbb{E}}_{P}[\mathfrak{u}\text{(}P)]={\displaystyle \sum _{s\in {\pi}_{P}}{\varphi}_{\text{u}}}(P(s))={\varphi}_{\text{u}}({x}_{0})+{\varphi}_{\text{u}}({x}_{1})+{\displaystyle \sum _{t\ne \text{}{s}^{\ast}}{\varphi}_{\text{u}}}(P(t))$

and

${\mathbb{E}}_{{P}^{\ast}}[\mathfrak{u}({P}^{\ast})]={\displaystyle \sum _{s\in \pi}{\varphi}_{\text{u}}}({P}^{\ast}(s))={\varphi}_{\text{u}}({x}_{0}+{x}_{1})+{\displaystyle \sum _{t\ne {s}^{\ast}}{\varphi}_{\text{u}}}({P}^{\ast}(t))={\varphi}_{\text{u}}({x}_{0}+{x}_{1})+{\displaystyle \sum _{t\ne {s}^{\ast}}{\varphi}_{\text{u}}}(P(t)).$

Since ${\varphi}_{\text{u}}$ is subadditive, we conclude that ${\mathbb{E}}_{{P}^{\ast}}[\mathfrak{u}\text{(}{P}^{\ast})]\text{\hspace{0.17em}\hspace{0.17em}}=\text{\hspace{0.17em}\hspace{0.17em}}{\mathbb{E}}_{P}[\mathfrak{u}\text{(}{P}^{\ast})]\text{\hspace{0.17em}\hspace{0.17em}}\le \text{\hspace{0.17em}\hspace{0.17em}}{\mathbb{E}}_{P}[\mathfrak{u}\text{(}P)]$. A simple inductive argument on the size of $\text{|}{\pi}_{P}\text{|}-\text{|}{\pi}_{Q}\text{|}$ shows that for any $P$ and any restriction $Q$ of $P$, ${\mathbb{E}}_{P}[\mathfrak{u}\text{(}P)]\text{\hspace{0.17em}\hspace{0.17em}}\ge \text{\hspace{0.17em}\hspace{0.17em}}{\mathbb{E}}_{P}[\mathfrak{u}\text{(}Q)]$, as required.

Parallel reasoning shows that, for proper $\mathfrak{u}$, strict subadditivity is equivalent to strict downwards propriety.

□

As we saw in Example 1, the (generalized) Brier score is not downwards proper, but the (generalized version of the) well-known *spherical score* (which, like the Brier score, is an additive accuracy measure) is strictly downwards proper.

*Example* 2. Define $\text{s}:[0,1]\text{\hspace{0.17em}\hspace{0.17em}}\times \text{\hspace{0.17em}\hspace{0.17em}}\{0,1\}\to \text{}\overline{\mathbb{R}}$ by

$\text{s}(x,i)\text{}:=\text{\hspace{0.17em}\hspace{0.17em}}\frac{|1-(i+x)|}{\sqrt{{x}^{2}+{(1-x)}^{2}}}$

and let

$\mathfrak{s}(P,w):={\displaystyle \sum _{s\in {\pi}_{P}}\text{s}}(P(s),\mathbb{1}\{w\in s\}).$

Clearly, the restriction of $\mathfrak{s}$ to any partition is just the familiar spherical score, which is strictly proper, so that $\mathfrak{s}$ is strictly partition-wise proper. But $\mathfrak{s}$ is also strictly downwards proper.

To see why, note that for any $x\in [0,1]$,

${\varphi}_{\text{s}}(x)=x\cdot \left(\frac{x}{\sqrt{{x}^{2}+{(1-x)}^{2}}}\right)+(1-x)\cdot \left(\frac{1-x}{\sqrt{{x}^{2}+{(1-x)}^{2}}}\right)=\sqrt{{x}^{2}+{(1-x)}^{2}}=\text{\hspace{0.17em}}\Vert \langle x,(1-x)\rangle \Vert ,$

where $\Vert \xb7\Vert $ is the Euclidean norm.

Since the Euclidean norm is a norm, it satisfies the triangle inequality, and thus for any $x,y\in [0,1]$ with $x+y\in [0,1]$,

${\varphi}_{\text{s}}(x)+{\varphi}_{\text{s}}(y)=\Vert \langle x,1-x\rangle \Vert +\Vert \langle y,1-y\rangle \Vert \ge \Vert \langle x+y,2-(x+y)\rangle \Vert >\Vert \langle x+y,1-(x+y)\rangle \Vert ={\varphi}_{\text{s}}(x+y),$

which means ${\varphi}_{\text{s}}$ is strictly subadditive and thus that $\mathfrak{s}$ is strictly downwards proper.

□
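The strict subadditivity just established can also be checked numerically. The sketch below (my illustration, not the paper's) evaluates ${\varphi}_{\text{s}}(x)=\sqrt{{x}^{2}+{(1-x)}^{2}}$ at random points, and contrasts it with the Brier self-expectation function ${\varphi}(x)=-x(1-x)$, which is superadditive (consistent with the fact, noted above, that the Brier score is not downwards proper):

```python
import math
import random

phi_spherical = lambda x: math.hypot(x, 1 - x)   # sqrt(x^2 + (1-x)^2)
phi_brier = lambda x: -x * (1 - x)               # self-expectation of -(i-x)^2

random.seed(2)
for _ in range(10_000):
    x = random.uniform(0.01, 0.99)
    y = random.uniform(0.0, 1 - x)
    # Spherical: strictly subadditive, hence strictly downwards proper.
    assert phi_spherical(x + y) < phi_spherical(x) + phi_spherical(y)
    # Brier: superadditive (phi(x+y) - phi(x) - phi(y) = 2xy >= 0),
    # so coarsening never looks worse by its own lights: not downwards proper.
    assert phi_brier(x + y) >= phi_brier(x) + phi_brier(y) - 1e-12
```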

We also need not look far to find an example of an upwards $u$-proper additive accuracy measure.

*Example* 3. Let

$\mathfrak{l}(P,w)=\mathrm{log}(P({[w]}_{P})),$

where ${[w]}_{P}$ is the unique $s\in {\pi}_{P}$ with $w\in s$. As is well known, $\mathfrak{l}$ is partition-wise proper. But it is also upwards $u$-proper. To see that, first note that for any $P$,

${\mathbb{E}}_{P}[\mathfrak{l}(P)]={\displaystyle \sum _{s\in {\pi}_{P}}P}(s)\cdot \text{\hspace{0.17em}}\mathfrak{l}(P,s)={\displaystyle \sum _{s\in {\pi}_{P}}P}(s)\mathrm{log}(P(s)).$

Fix now $P$, let ${P}^{*}$ be an opinionated extension of $P$, and for each $s\in {\pi}_{P}$ let ${s}^{*}$ denote the unique $t\in {\pi}_{{P}^{*}}$ with $t\subseteq \text{s}$ and ${P}^{*}(t)\ne 0$. Note now that

${\mathbb{E}}_{P}[\mathfrak{l}\text{(}P)]={\displaystyle \sum _{s\in {\pi}_{P}}P}(s)\cdot \mathrm{log}(P(s))={\displaystyle \sum _{s\in {\pi}_{P}}{P}^{\ast}}({s}^{\ast})\cdot \mathrm{log}({P}^{\ast}({s}^{\ast})).$

And of course,

${\displaystyle \sum _{s\in {\pi}_{P}}{P}^{\ast}({s}^{\ast})\cdot \mathrm{log}({P}^{\ast}({s}^{\ast}))}={\displaystyle \sum _{t\in {\pi}_{{P}^{\ast}}}{P}^{\ast}(t)\cdot \mathfrak{l}({P}^{\ast},t)}={\mathbb{E}}_{{P}^{\ast}}[\mathfrak{l}({P}^{\ast})].$

From Corollary 3.3 we conclude that $\mathfrak{l}$ is upwards $u$-proper, and thus upwards $l$-proper.

□
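A small numerical sketch of the computation above (mine, not the paper's): $\mathbb{E}_{P}[\mathfrak{l}(P)]=\sum_{s}P(s)\mathrm{log}(P(s))$ is unchanged by opinionated extensions (with the convention $0\cdot \mathrm{log}\,0=0$), while a non-opinionated, mass-splitting extension strictly lowers it, as upwards $u$-propriety requires.

```python
import math
import random

def exp_self_log(P):
    """E_P[l(P)] = sum_s P(s) * log P(s), with the convention 0 * log 0 = 0."""
    return sum(p * math.log(p) for p in P if p > 0)

random.seed(3)
xs = [random.random() for _ in range(4)]
P = [x / sum(xs) for x in xs]

# Opinionated extension: split each cell in two, all mass on the first half.
P_opinionated = [q for p in P for q in (p, 0.0)]
assert abs(exp_self_log(P) - exp_self_log(P_opinionated)) < 1e-12

# Non-opinionated extension: split each cell's mass evenly. Its self-expectation
# drops by exactly log 2, so E_P[l(P)] > E_{P*}[l(P*)], as upwards propriety demands.
P_split = [q for p in P for q in (p / 2, p / 2)]
assert abs(exp_self_log(P) - exp_self_log(P_split) - math.log(2)) < 1e-9
```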

Note that the log score is also an additive accuracy measure with local accuracy measure l, where

$\text{l}(x,i)=i\cdot \mathrm{log}(x).$

Note too that $\text{l}(0,0)=0$. Interestingly, any additive accuracy measure $\mathfrak{u}$ whose local accuracy measure $\text{u}$ satisfies $\text{u}(0,0)=0$ will be upwards $u$-proper, as the following makes clear.

**Theorem 3.14**. *An additive accuracy measure* $\mathfrak{u}$ *is upwards* $u$-*proper (resp. strictly upwards* $u$-*proper) iff* $\text{u}(0,0)\le 0$ (*resp*. $\text{u}(0,0)<0$), *where* $\text{u}$ *is* $\mathfrak{u}$’*s local accuracy measure*.

*Proof*. Suppose $\mathfrak{u}$ is an upwards $u$-proper additive accuracy measure with local accuracy measure $\text{u}$. Take a two-celled partition $\pi =\{{s}_{0},{s}_{1}\}$ of $W$. Let ${P}_{0}$ be the unique probability function in ${\mathcal{P}}_{\pi}$ that assigns probability $1$ to ${s}_{0}$ and let ${P}_{\top}$ be the unique probability function defined over the trivial partition $\{W\}$. Of course ${P}_{0}$ is an opinionated extension of ${P}_{\top}$, so the upwards $u$-propriety of $\mathfrak{u}$ and Fact 3.2 entail ${\mathbb{E}}_{{P}_{\top}}[\mathfrak{u}({P}_{\top})]\ge {\mathbb{E}}_{{P}_{0}}[\mathfrak{u}({P}_{0})]$. But ${\mathbb{E}}_{{P}_{\top}}[\mathfrak{u}({P}_{\top})]=\text{u}(1,1)$ and ${\mathbb{E}}_{{P}_{0}}[\mathfrak{u}({P}_{0})]=\text{u}(1,1)+\text{u}(0,0)$, and thus $\text{u}(0,0)\le 0$. A similar argument shows that if $\mathfrak{u}$ is strictly upwards $u$-proper, then $\text{u}(0,0)<0$.

To establish the other direction, fix $P$ and let ${P}^{*}$ be an opinionated extension of $P$. For each $s\in {\pi}_{P}$, let ${s}^{*}$ be the unique $t\in {\pi}_{{P}^{*}}$ such that $t\subseteq s$ and ${P}^{*}(t)\ne 0$, and let ${n}_{s}=|\{t\in {\pi}_{{P}^{*}}:t\subseteq s\}\text{|}$.

Note now that

${\mathbb{E}}_{{P}^{\ast}}[\mathfrak{u}({P}^{\ast})]={\displaystyle \sum _{t\in {\pi}_{{P}^{\ast}}}{\varphi}_{\text{u}}}({P}^{\ast}(t))={\displaystyle \sum _{s\in {\pi}_{P}}{\varphi}_{\text{u}}}({P}^{\ast}({s}^{\ast}))+{\displaystyle \sum _{s\in {\pi}_{P}}({n}_{s}-1)}\cdot \text{u}(0,0).$

And since clearly

${\displaystyle \sum _{s\in {\pi}_{P}}{\varphi}_{\text{u}}({P}^{\ast}({s}^{\ast}))}={\mathbb{E}}_{P}[\mathfrak{u}(P)],$

$\text{u}(0,0)\le 0$ (resp. $\text{u}(0,0)<0$) entails that ${\mathbb{E}}_{P}[\mathfrak{u}(P)]\ge {\mathbb{E}}_{{P}^{\ast}}[\mathfrak{u}({P}^{\ast})]$ (resp. ${\mathbb{E}}_{P}[\mathfrak{u}(P)]>{\mathbb{E}}_{{P}^{\ast}}[\mathfrak{u}({P}^{\ast})]$), and hence, using Fact 3.2, we can conclude that $\mathfrak{u}$ is upwards $u$-proper (resp. strictly upwards $u$-proper).

□
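A concrete instance of the decomposition in the proof above (my sketch, not the paper's): the Brier local measure $\text{u}(x,i)=-(i-x)^{2}$ has $\text{u}(0,0)=0$, so every opinionated extension has exactly the same self-expectation as the original function, and the generalized Brier score comes out (non-strictly) upwards $u$-proper.

```python
u = lambda x, i: -(i - x) ** 2                 # Brier local measure; u(0,0) = 0
phi = lambda x: x * u(x, 1) + (1 - x) * u(x, 0)

P = [0.5, 0.3, 0.2]
n_s = 3   # split each cell of pi_P into n_s subcells

# Opinionated extension: all of each cell's mass on its first subcell.
P_star = [q for p in P for q in [p] + [0.0] * (n_s - 1)]

E_P = sum(phi(p) for p in P)                   # E_P[u(P)]
E_P_star = sum(phi(q) for q in P_star)         # E_{P*}[u(P*)]

# E_{P*}[u(P*)] = E_P[u(P)] + sum_s (n_s - 1) * u(0,0); here u(0,0) = 0.
assert abs(E_P_star - (E_P + len(P) * (n_s - 1) * u(0, 0))) < 1e-12
assert abs(E_P - E_P_star) < 1e-12
```

Replacing $\text{u}$ with a shifted measure $\text{u}(x,i)-c$ for some $c>0$ (still proper, since the shift is constant) makes $\text{u}(0,0)=-c<0$, and the comparison becomes strict.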

Now, it is well-known^{28} that if $\text{u}$ is a proper (resp. strictly proper) local accuracy measure, the function ${\varphi}_{\text{u}}$ is convex (resp. strictly convex), in the sense that for each $\alpha \in (0,1)$ and $x,y\in [0,1]$,

${\varphi}_{\text{u}}(\alpha x+(1-\alpha )y)\le \alpha {\varphi}_{\text{u}}(x)+(1-\alpha ){\varphi}_{\text{u}}(y)$

(resp. the above inequality is always strict). And it is well-known (see, e.g., Bruckner 1962) that for a convex $f$ over $[0,1]$, $f(0)\le 0$ (resp. $f(0)<0$) iff $f$ is *superadditive* (resp. *strictly superadditive*), in the sense that for $x,y\in [0,1]$ with $x+y\in [0,1]$,

$f\left(x\text{}+\text{}y\right)\ge f\left(x\right)\text{}+f\left(y\right)$

(resp. the above inequality is always strict). Since by definition of ${\varphi}_{\text{u}}$, ${\varphi}_{\text{u}}(0)=\text{u}(0,0)$, we can put all these observations together with Theorem 3.14 to establish the following analogue of Theorem 3.13:

**Corollary 3.15**. *An additive accuracy measure* $\mathfrak{u}$ *is upwards* $u$-*proper (resp. strictly upwards* $u$-*proper) iff its local accuracy measure* $\text{u}$ *is proper and* ${\varphi}_{\text{u}}$ *is superadditive (resp. strictly superadditive)*.

□

To conclude this section, let me state one final characterization result, this time for the class of upwards $l$-proper additive accuracy measures.

**Theorem 3.16**. *A continuous, additive accuracy measure* $\mathfrak{u}$ *with a strictly proper local accuracy measure* $\text{u}$ *is upwards* $l$-*proper (resp. strictly upwards* $l$-*proper) iff for each* $z\in [0,1]$ *there are* $x,y\in [0,1]$ *with* $x+y=z$ *and* ${\varphi}_{\text{u}}(x+y)\ge {\varphi}_{\text{u}}(x)+{\varphi}_{\text{u}}(y)$ (resp. ${\varphi}_{\text{u}}(x+y)>{\varphi}_{\text{u}}(x)+{\varphi}_{\text{u}}(y)$).

*Proof*. We know from Fact 3.11 that $\mathfrak{u}$ is upwards $l$-proper iff for each $P$ and each $\pi \sqsubseteq {\pi}_{P}$ there is ${P}^{*}\in {[P]}_{\pi}$ such that

${\mathbb{E}}_{P}[\mathfrak{u}\text{(}P)]\ge {\mathbb{E}}_{{P}^{\ast}}[\mathfrak{u}\text{(}{P}^{\ast})],$

and that $\mathfrak{u}$ is strictly upwards $l$-proper iff the above inequality is always strict whenever $\pi \ne {\pi}_{P}$. For the left-to-right direction, assume $\mathfrak{u}$ is upwards $l$-proper (resp. strictly upwards $l$-proper). Given $z\in [0,1]$, take $P$ defined over a two-celled partition $\{{s}_{0},{s}_{1}\}$ with $P({s}_{0})=z$ and take a three-celled refinement $\pi =\{{s}_{0}^{0},{s}_{0}^{1},{s}_{1}\}$ of ${\pi}_{P}$. From Fact 3.11 we know that there is some ${P}^{*}\in {[P]}_{\pi}$ such that ${\mathbb{E}}_{P}[\mathfrak{u}(P)]\ge {\mathbb{E}}_{{P}^{\ast}}[\mathfrak{u}({P}^{\ast})]$ (resp. ${\mathbb{E}}_{P}[\mathfrak{u}(P)]>{\mathbb{E}}_{{P}^{\ast}}[\mathfrak{u}({P}^{\ast})]$). Let $x={P}^{\ast}({s}_{0}^{0})$ and $y={P}^{\ast}({s}_{0}^{1})$, and note that the above inequality entails that ${\varphi}_{\text{u}}(x+y)\ge {\varphi}_{\text{u}}(x)+{\varphi}_{\text{u}}(y)$ (resp. ${\varphi}_{\text{u}}(x+y)>{\varphi}_{\text{u}}(x)+{\varphi}_{\text{u}}(y)$).

For the right-to-left direction, start by fixing $P$ and $\pi \sqsubseteq {\pi}_{P}$ with $|\pi |-|{\pi}_{P}|=1$, and let ${s}^{*}\in {\pi}_{P}$ and ${s}_{0}^{\ast},{s}_{1}^{\ast}\in \pi $ be such that ${s}^{\ast}={s}_{0}^{\ast}\cup {s}_{1}^{\ast}$. Let $z=P({s}^{*})$ and fix $x,y\in [0,1]$ with $x+y=z$ such that ${\varphi}_{\text{u}}(z)\ge {\varphi}_{\text{u}}(x)+{\varphi}_{\text{u}}(y)$. Let ${P}^{*}$ be the unique extension of $P$ to $\pi $ that assigns probability $x$ to ${s}_{0}^{\ast}$ (and hence $y$ to ${s}_{1}^{\ast}$), and note that, letting $s$ range over ${\pi}_{P}$,

${\mathbb{E}}_{P}[\mathfrak{u}\text{(}P)]={\varphi}_{\text{u}}(x+y)+{\displaystyle \sum _{s\ne {s}^{\ast}}{\varphi}_{\text{u}}}(P(s))$

and

${\mathbb{E}}_{{P}^{\ast}}[\mathfrak{u}({P}^{\ast})]={\varphi}_{\text{u}}(x)+{\varphi}_{\text{u}}(y)+{\displaystyle \sum _{s\ne {s}^{\ast}}{\varphi}_{\text{u}}}(P(s)),$

whence ${\mathbb{E}}_{P}[\mathfrak{u}(P)]\ge {\mathbb{E}}_{{P}^{\ast}}[\mathfrak{u}({P}^{\ast})]$. A simple induction argument on the size of $|\pi |-|{\pi}_{P}|$ allows us to conclude that for each $\pi \sqsubseteq {\pi}_{P}$ there is ${P}^{*}$ with ${\mathbb{E}}_{P}[\mathfrak{u}(P)]\ge {\mathbb{E}}_{{P}^{\ast}}[\mathfrak{u}({P}^{\ast})]$, and thus that $\mathfrak{u}$ is upwards $l$-proper. Parallel reasoning shows that if for each $z\in [0,1]$ there are $x,y\in [0,1]$ with $x+y=z$ such that ${\varphi}_{\text{u}}(x+y)>{\varphi}_{\text{u}}(x)+{\varphi}_{\text{u}}(y)$, then $\mathfrak{u}$ is strictly upwards $l$-proper.

□
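Theorem 3.16 reduces upwards $l$-propriety to a superadditivity condition on ${\varphi}_{\text{u}}$, which is easy to check numerically for particular additive measures. By way of illustration only—the choice of the Brier score, and the closed form ${\varphi}_{\mathfrak{b}}(x)=-x(1-x)$ for its per-cell expected score, are assumptions of this sketch, not part of the theorem—the following checks the condition on a grid:

```python
# Numeric check of the superadditivity condition in Theorem 3.16, using
# the (negative) Brier score as an illustrative additive accuracy measure.
# For the Brier score, a cell with probability x contributes
# -x(1-x)^2 - (1-x)x^2 = -x(1-x) to the expected score.

import itertools

def phi(x):
    # Per-cell expected Brier score: phi(x) = -x(1-x).
    return -x * (1 - x)

grid = [i / 100 for i in range(101)]

# phi(x + y) >= phi(x) + phi(y) for all x, y with x + y <= 1?
ok = all(
    phi(x + y) >= phi(x) + phi(y) - 1e-12
    for x, y in itertools.product(grid, grid)
    if x + y <= 1
)
print(ok)  # True: the inequality holds everywhere on the grid

# Strict whenever x, y > 0, since phi(x+y) - phi(x) - phi(y) = 2xy.
strict = all(
    phi(x + y) > phi(x) + phi(y)
    for x, y in itertools.product(grid[1:], grid[1:])
    if x + y <= 1
)
print(strict)  # True
```

Since ${\varphi}_{\mathfrak{b}}(x+y)-{\varphi}_{\mathfrak{b}}(x)-{\varphi}_{\mathfrak{b}}(y)=2xy$, the inequality holds everywhere and is strict whenever $x,y>0$.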

Surprisingly, it follows from this that for additive accuracy measures, upwards $u$-propriety and upwards $l$-propriety coincide:

**Corollary 3.17**. *A continuous, additive accuracy measure* $\mathfrak{u}$ *with a strictly proper accuracy measure is upwards* $l$-*proper (resp. strictly upwards* $l$-*proper) iff it is upwards* $u$-*proper (resp. strictly upwards* $u$-*proper)*.

*Proof*. Apply Theorem 3.16 with $z=1$ to conclude that if $\mathfrak{u}$ is upwards $l$-proper (resp. strictly upwards $l$-proper), then ${\varphi}_{\text{u}}(0)\le 0$, since

${\varphi}_{\text{u}}(1)={\varphi}_{\text{u}}(1+0)\ge {\varphi}_{\text{u}}(1)+{\varphi}_{\text{u}}(0).$

Using Theorem 3.14, we conclude that $\mathfrak{u}$ is upwards $u$-proper. Strictly parallel reasoning shows that if $\mathfrak{u}$ is strictly upwards $l$-proper, then it is strictly upwards $u$-proper.

□

According to the standard, Bayesian picture we have been taking for granted, an agent’s epistemic state can be adequately represented with a single probability function. But many think this is a mistake: on their view, an agent’s epistemic state is best represented not with a single probability function but with a *set* thereof. This view can model any agent the more standard Bayesian picture can, equally well: simply identify each probability function with its singleton set. But it is, at least on the face of it, more flexible. It can, for example, represent the kind of epistemic state most of us are arguably in with respect to the proposition that the last person to arrive in Australia in the year 2000 was wearing a white shirt: a state that seems hard to represent by assigning any one number to that proposition.

Grant that proponents of this dissenting view are right—grant, in other words, that one can be in the kind of epistemic state that is better modeled with a set of probability functions than with a single probability function.^{29} An interesting question is whether it is ever epistemically *rational* to be in the kind of state that cannot be aptly represented with a unique probability function.

There has been much debate around this question, and it is not my purpose here to take a stance either way.^{30} But a family of related and interesting results that emerged from this debate bears some resemblance to the results established in this paper, and it is worth clarifying exactly how they differ from mine.^{31}

In the literature on epistemic utility theory, it is by and large taken for granted that something like the following principle captures an important relationship between epistemic utility and epistemic rationality:^{32}

Dominance: If for every world $w$, the epistemic utility of $P$ at $w$ is no greater than that of ${P}^{\prime}$ at $w$, and for some world $w$, the epistemic utility of $P$ at $w$ is strictly less than that of ${P}^{\prime}$ at $w$, then $P$ is epistemically irrational.

Much attention has thus been paid to the question of what kinds of reasonable epistemic utility functions can be defined that allow us to compare the epistemic utility of a ‘precise’ credence function at a world with that of an ‘imprecise’ one—here we think of sets of probability functions as ‘imprecise’ or ‘indeterminate’ credence functions, since for many propositions they do not determine a unique degree of credence.

For example, generalizing some results in Schoenfield (2017), Berger and Das (2020) have argued that for any imprecise credence function there is a precise credence function that is at least as accurate relative to any world—at least given some assumptions about what a measure of accuracy must be like. And this, at least if we think that epistemic utility functions should be measures of accuracy, arguably shows that no epistemic utility function can be strictly upwards $l$-proper, and *a fortiori* that no epistemic utility function can be strictly universally $l$-proper or strictly universally $u$-proper.

To see why, note that for a fixed $\pi $ and a refinement ${\pi}^{\prime}$ of $\pi $, we can identify any probability function $P$ defined over $\pi $ with an imprecise probability function defined over ${\pi}^{\prime}$—essentially, we identify $P$ with ${[P]}_{\pi}$ (see footnote 17). Berger and Das’s results can then be used to show that on any reasonable measure of accuracy, for any $P$ defined over $\pi $ there is some ${P}^{\prime}$ defined over ${\pi}^{\prime}$ that is at least as accurate as $P$ relative to any state of the world. Thus, if we identify epistemic utility functions with measures of accuracy satisfying their constraints, their result can be used to show that for any $\pi $, any refinement ${\pi}^{\prime}$ of $\pi $, and any $P$ defined over $\pi $, there is ${P}^{\prime}$ defined over ${\pi}^{\prime}$ such that for any $w$, the epistemic utility of ${P}^{\prime}$ at $w$ is at least that of $P$ at $w$. And this in turn would suffice to show that there are no strictly upwards $l$-proper epistemic utility functions.

Now, we can first observe that in a sense my results are more general, in that they do not make any substantive assumptions about epistemic utility functions—at most, I assume that epistemic utility functions are continuous and truth-directed.^{33} I do not, for instance, assume that epistemic utility is atomistic (I do not assume that the epistemic utility of a credence function at a world is determined by the utility of the individual credence assignments that make up that credence function at that world), nor that it is extensional (I do not assume that the epistemic utility of a credence function at a world is independent of the content of the propositions it assigns credence to).^{34}

But there is a more significant difference between my results and those from the literature on imprecise probability functions. The question at the center of impossibility results for imprecise probability functions takes as given a fixed partition and asks whether there are reasonable ways of measuring accuracy or epistemic utility *for that partition* that will sometimes have imprecise probability functions doing better than precise probability functions. And one assumption that everyone in the literature seems to take for granted—an assumption which seems perfectly natural given the presuppositions of the question—is that any reasonable measure of accuracy for a given partition $\pi $ should satisfy the following constraint:^{35}

Perfection: For any $w$, there is a credence function ${P}_{w}$ that has maximal epistemic utility at $w$ with respect to $\pi $: the epistemic utility at $w$ of any credence function, precise or not, defined over $\pi $ and different from ${P}_{w}$ is strictly less than that of ${P}_{w}$.

To get a handle on what Perfection says, it helps to focus on a simple case with a two-cell partition ${\pi}^{*}=\{{s}_{1},{s}_{2}\}$. Take some world $w\in {s}_{1}$ and consider the credence function ${P}_{w}$ that assigns $1$ to ${s}_{1}$ and $0$ to ${s}_{2}$. It seems natural to say that, with respect to ${\pi}^{*}$, any credence function different from ${P}_{w}$ has lower epistemic utility, at $w$, than ${P}_{w}$ has. In particular, relative to $w$, any (non-trivially) imprecise credence function—any set of probability functions defined over ${\pi}^{*}$ with more than one element—is worse, epistemically and with respect to ${\pi}^{*}$, than ${P}_{w}$. After all, relative to $w$, ${P}_{w}$ has a legitimate claim to being as good as it gets, epistemically *with respect to* ${\pi}^{*}$.
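The two-cell example can be made concrete with a small computation. Here the Brier score is used as the measure of epistemic utility with respect to ${\pi}^{*}$—an assumption of this sketch, since Perfection is meant to constrain any reasonable measure:

```python
# Illustration of Perfection in the two-cell case, with the Brier score
# standing in (by assumption, for this sketch) for epistemic utility
# with respect to pi* = {s1, s2}.

def brier_utility(p_s1, w_in_s1):
    """Negative Brier score of the credence (p_s1, 1 - p_s1) at a world."""
    t1 = 1.0 if w_in_s1 else 0.0  # truth-value of s1 at the world
    t2 = 1.0 - t1                 # truth-value of s2 at the world
    return -((p_s1 - t1) ** 2 + ((1 - p_s1) - t2) ** 2)

# At a world in s1, the 'omniscient' function P_w assigns 1 to s1.
u_pw = brier_utility(1.0, True)   # 0.0: as good as it gets w.r.t. pi*
# Every other precise credence over pi* does strictly worse at w.
others = [brier_utility(i / 100, True) for i in range(100)]
print(u_pw == 0.0 and all(u < u_pw for u in others))  # True
```

Any credence $p<1$ in ${s}_{1}$ scores $-2(1-p)^{2}<0$ at $w$, so ${P}_{w}$ is the unique maximizer, as Perfection requires in this case.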

Now, in this paper I have not trafficked in anything quite like the notion of epistemic utility relative to a partition. So it is not completely straightforward to translate Perfection into a constraint on the kind of epistemic utility functions we have been interested in. But there is a somewhat natural way to recast Perfection into a constraint on generalized epistemic utility functions in my sense. And once we see what that constraint amounts to, we will see both that it is not quite so plausible (as a constraint on generalized epistemic utility functions) and that my results do not depend on it.

Recall that in comparing the discussion of imprecise probability functions over a partition with my discussion of credence functions whose domain does not include elements of that partition, I identified a (precise) credence function defined over a coarsening ${\pi}^{\prime}$ of $\pi $ with an imprecise credence function defined over $\pi $. Hence, saying that $P$ is better, epistemically and relative to $w$, than any imprecise credence function defined over the same domain as $P$, entails that for any coarsening of $\pi $, $P$ is better, relative to $w$, than any other credence function defined over that coarsening. So in my framework, Perfection amounts to the claim that relative to any $\pi $, any non-trivial coarsening ${\pi}^{\prime}$ of $\pi $, and any $w$, there is some credence function ${P}_{w}$ defined over $\pi $ that is better, epistemically relative to $w$, than any credence function defined over ${\pi}^{\prime}$. Equivalently, in my framework Perfection amounts to the claim that for any partition, any world $w$, and any refinement of $\pi $, there is a credence function defined over that refinement that is better relative to $w$ than any credence function defined over $\pi $:

Refinement: For any $\pi $, any refinement ${\pi}^{\prime}$ of $\pi $, and any $w$, there is ${P}_{w}^{{\pi}^{\prime}}$ defined over ${\pi}^{\prime}$ such that for any $P$ defined over $\pi $, $\mathfrak{u}({P}_{w}^{{\pi}^{\prime}},w)>\mathfrak{u}(P,w)$.

Now, it should be clear that my results do not depend on anything like Refinement. After all, Refinement rules out as admissible any upwards $u$-proper epistemic utility function, whereas my assumptions are compatible with the admissibility of such epistemic utility functions. So, strictly speaking, this is another sense in which my results are more general. But it is worth highlighting that, whereas in the discussion of imprecise probability functions something like Refinement may well be uncontroversial, in the present context it is far from it.

The constraint imposed by Refinement is incompatible with thinking of some refinements as an unalloyed epistemic bad: if epistemic utility satisfies Refinement, there can be no proposition such that you are epistemically worse off, no matter what, when you come to form an opinion on that proposition. Whether it be a proposition about phlogiston, or about miasma, Refinement entails that it is always in principle possible to do better, epistemically, by forming a view on that proposition.

Of course, it may be that this is the right way to think about epistemic utility, but it is certainly not *obviously* the right way to think about it. One might, for example, think that there is an ideal language for theorizing about the world, and that the ideal epistemic state is the one that is maximally accurate with respect to propositions expressible in that ideal language and simply fails to even entertain hypotheses that cannot be formulated in that language. If that’s how we think about epistemic utility, we will want to reject Refinement.^{36}

At any rate, it is not my goal here to suggest that the right way to think about epistemic utility is incompatible with Refinement. But I do want to point out that it is yet another substantive assumption about epistemic utility that is required for the impossibility results mentioned above to go through. In contrast, my results make no substantive assumptions about epistemic utility. Rather, they establish that no matter *how* we think of epistemic utility, there are hard limits on the degree of immodesty we can expect to come from epistemic rationality.^{37}

In contexts where probability functions are stipulated to all be defined over a fixed domain, strictly proper epistemic utility functions arguably capture a certain kind of immodesty. Once we move on to contexts where probability functions are allowed to be defined over different domains, strictly proper epistemic utility functions do not capture the relevant sense of immodesty. My question was whether there was a way of characterizing immodesty in this general setting. I considered a variety of strong, generalized immodesty principles and showed that, under minimal assumptions, no epistemic utility function satisfies any of these stronger immodesty principles.

I also considered some very weak generalizations of strict propriety and showed that some of the familiar epistemic utility functions satisfy one or another of these weak immodesty principles. One interesting question left outstanding is how strong an immodesty principle can be imposed without ruling out every reasonable epistemic utility function. In particular, one interesting question is whether there are immodesty principles that distinguish among partitions—say, immodesty principles that say that for any partition of a certain kind, all credence functions defined over that partition take themselves to be doing better, in terms of epistemic utility, than any of their restrictions without thereby taking themselves to be worse than any of their extensions.

I have not, of course, argued that epistemic utility functions ought to satisfy any of these stronger immodesty principles. But it is at the very least not obvious that strict partition-wise propriety suffices to capture the sense in which epistemic rationality is said to be immodest. What else, if anything, suffices to capture that kind of immodesty is a question for some other time.

- Cf. Joyce (2009: 279) on ‘Minimal Coherence’. ⮭
- Cf. the principle ‘Immodest Dominance’ in Pettigrew (2016: 24). See also Joyce (2009: 280) on what he calls ‘Coherent Admissibility’. ⮭
- Cf. Pérez Carballo (2023: esp. §3.2). ⮭
- Previous work on related issues include Carr (2015), Pettigrew (2018), Talbot (2019). Unlike those authors, I make very minimal assumptions about the nature of epistemic utility—I do not assume, for example, that epistemic utility functions are simply measures of accuracy (my results are thus independent of whether we endorse the program of ‘accuracy first’ epistemology), nor that the epistemic utility of a credence function at a world is determined by the epistemic utility of individual credence assignments to propositions (my results do not rely an ‘atomistic’ conception—in the sense of Joyce 2009: § 5—of epistemic utility). ⮭
- Since I will be taking Probabilism for granted, we can work with these simplified definitions without loss of generality. ⮭
- The standard definition of $\pi $-measurability of course requires that the preimage of any open set in $\overline{\mathbb{R}}$ be in the Boolean closure of $\pi $. But since $\pi $ is finite, this simpler definition is equivalent to the standard one. ⮭
- I identify the truth-value of $s$ in $w$ with $1$ if $s$ is true in $w$ ($w\in s$) and $0$ otherwise. ⮭
- Arguably, epistemic utility functions would need to satisfy additional constraints to count as genuinely epistemic ways of comparing probability functions relative to a given state of the world. For a sense of the wide range of possible constraints, see Joyce (2009). ⮭
- Strictly speaking, immodesty alone is not enough to motivate something as strong as Strict Propriety. The assumption that epistemic rationality is immodest ensures at best that at any one time, an agent’s epistemic values *at that time* must be such as to render her current credence function immodest: it must judge that it is doing better, by the light of the agent’s current values, than any alternative credence function. In order to motivate Strict Propriety, we would need additional assumptions, e.g., that an agent’s epistemic values at a time should never by themselves rule out as irrational any coherent credal state, or that there is a single admissible epistemic utility function. ⮭
- See, however, Campbell-Moore and Levinstein (2021) for arguments that in the context of additive and truth-directed epistemic utility functions, weak propriety suffices; relatedly, see Nielsen (2022) for generalizations of the results of Predd et al. (2009) using a condition weaker than strict propriety. ⮭
- I’m using ‘restriction’ here in the standard way, where the restriction of a function $f$ defined over $X$ to some $Y\subseteq X$ just is a function $f{\upharpoonright}_{Y}$ whose domain is $Y$ and is such that for all $y\in Y$, $f{\upharpoonright}_{Y}(y)=f(y)$. The terminological ambiguity here is merely apparent: if $P$ is a credence function over $\pi $, ${\pi}^{\prime}\sqsupseteq \pi $, and $Q$ the restriction of $P$ to ${\pi}^{\prime}$, we can identify $P$ with a unique probability function defined over the smallest algebra ${\mathbb{A}}_{\pi}$ containing $\pi $ and $Q$ with the restriction of that probability function to the smallest algebra containing ${\pi}^{\prime}$, which of course is a subset of ${\mathbb{A}}_{\pi}$. ⮭
- Again, see Gneiting and Raftery (2007) and references therein. ⮭
- This is because our assumptions ensure that $\mathfrak{u}(Q,\text{\hspace{0.17em}}\xb7\text{\hspace{0.17em}})$ is measurable with respect to $P$, and thus that the expectation is well-defined. ⮭
- Cf. Carr (2015), Pettigrew (2018), and Pérez Carballo (2023). ⮭
- Note that the example below suffices to show that the normalized version of the Brier score, defined by ${\mathfrak{b}}^{*}(P,w)=-\frac{1}{\left|{\pi}_{P}\right|}\sum_{s\in {\pi}_{P}}{\left(P(s)-\mathbb{1}\{w\in s\}\right)}^{2}$, is also not downwards proper. ⮭
- Note that ${[P]}_{\pi}$ is always non-empty, since whenever $\mathbb{A}$ and ${\mathbb{A}}^{\prime}$ are finite Boolean algebras, and $\mathbb{A}\subseteq {\mathbb{A}}^{\prime}$, any probability function over $\mathbb{A}$ can be extended to ${\mathbb{A}}^{\prime}$. ⮭
- Note that we can think of ${[P]}_{\pi}$ as an *imprecise* probability function, most naturally identified with a *representor* in the sense of van Fraassen (1990), viz. a set of probability functions. In this case, we can think of ${[P]}_{\pi}$ as an imprecise probability function that assigns precise values to each member of ${\pi}_{P}$ and imprecise values to any other member of $\pi $. The definitions to follow can thus be seen as variants of the familiar definition of upper and lower expectation for imprecise probabilities (Gilboa 1987; Satia & Lave 1973), at least assuming that all representors are *convex*, in the sense that representors are closed under convex combinations (linear combinations with non-negative weights adding up to one). ⮭
- To anticipate, while I will focus on these two formulations, the reason is not that I think either one of them is the best way to generalize immodesty to allow for alternatives to a credence function with different domains. Rather, it is because these two principles stand at the extreme ends of a much larger family of plausible generalizations: one is stronger and the other is weaker than any other generalization. ⮭
- See, e.g., Troffaes (2007) for a recent overview of the relevant literature. For reasons that will emerge in Section 4, however, my concerns are somewhat orthogonal to questions animating the debate over the rationality of imprecise probability functions, so we do not want to take the analogies here too seriously. ⮭
- Specifically, (4), below, corresponds to the fourth preference ranking listed in §5.4.3 of Halpern (2003); (5) corresponds to using what is sometimes called the *Maximality* rule (e.g., Walley 1991); (6) to using the $\Gamma $-*Maximax* rule (e.g., Satia & Lave 1973); (7) to using the $\Gamma $-*Maximin* rule (e.g., Gilboa & Schmeidler 1989); (8) to using *Interval Dominance* (e.g., Ramoni & Sebastiani 2001); and (9) corresponds to using the so-called *Hurwicz Criterion* (Hurwicz 1951). I should note that the list above is incomplete. Some of the rules that have been discussed in the literature—for example, the so-called ‘Ellsberg’s rule’ (Ellsberg 1961) and *Minimax regret* (Savage 1951)—do not correspond to any of the principles above. As far as I can tell, whether an immodesty principle could be formulated using one of these other decision rules is an interesting question left open by anything I have to say. ⮭
- As an anonymous referee rightly points out, all of these principles violate the arguably unobjectionable principle of *weak dominance*—that any rational agent should prefer $A$ to $B$ if $A$ is never worse and sometimes strictly better than $B$. We can avoid these concerns by reformulating our principles as principles of choice from among non-weakly-dominated options (as in, e.g., Troffaes 2007). Even then, objections to each of the rules considered above remain, especially when dealing with sequential choice problems. For discussion, see, for example, Seidenfeld (1988) and Bradley (2018). ⮭
- It can also be seen as a direct consequence of Fact 3.2, below. ⮭
- A helpful mnemonic: for *downwards* propriety you compare by going *down* in size: you compare a credence function only with those defined over a smaller domain; for *upwards* propriety, you go *up* in size: you check only those with a *larger* domain. ⮭
- For a given $\mathfrak{u}$ and $\pi $, we can think of the restriction of $\mathfrak{u}$ to ${\mathcal{P}}_{\pi}\times W$ as a function ${f}_{\mathfrak{u}}^{\pi}:{\Delta}^{N-1}\times W\to \overline{\mathbb{R}}$, where $N=|\pi |$ and for each $n$, ${\Delta}^{n}$ is the standard $n$-simplex, that is, ${\Delta}^{n}:=\{x=\langle {x}_{1},\dots ,{x}_{n+1}\rangle \in {\mathbb{R}}^{n+1}:\sum {x}_{i}=1,\ \text{and for all}\ i\le n+1,\ {x}_{i}\ge 0\}$. (Simply fix an enumeration of $\pi $ and identify each $P\in {\mathcal{P}}_{\pi}$ with the vector $\langle P({s}_{1}),\dots ,P({s}_{N})\rangle $.) An epistemic utility function $\mathfrak{u}$ is *continuous* iff for each $\pi $ and $w$, the function $x\mapsto {f}_{\mathfrak{u}}^{\pi}(x,w)$ is continuous. ⮭
- In fact, something slightly weaker than the full continuity assumption may be all that is really needed—see Grünwald and Dawid (2004) for a more general result, especially their Theorem 6.2 together with their Corollary 4.2. ⮭
- I am thus implicitly assuming that accuracy measures satisfy what Joyce (2009: 273) calls ‘Extensionality’. Nothing hinges on this assumption, of course—you can simply read ‘additive accuracy measure’ as shorthand for ‘extensional, additive accuracy measure’. ⮭
- See Pérez Carballo (2023: Proposition 2), for a slightly more general result. ⮭
- The canonical reference here is Savage (1971: §4). See also Theorem 2 in Gneiting and Raftery (2007). ⮭
- Cf. Joyce (2010: 283): “It is rare, outside casinos, to find opinions that are anywhere near definite or univocal enough to admit of quantification. An agent with a precise credence for, say, the proposition that it will rain in Detroit next July 4th should be able to assign an exact ‘fair price’ to a wager that pays $100 if the proposition is true and costs $50 if it is false. The best most people can do, however, is to specify some vague range.” ⮭
- See, e.g., Levi (1974), Joyce (2005), White (2010). For a helpful overview of this vast body of literature, see Bradley (2019). ⮭
- See, e.g., Schoenfield (2017), Seidenfeld, Schervish, and Kadane (2012), Berger and Das (2020), Mayo-Wilson and Wheeler (2016), Konek (in press). ⮭
- I say ‘something like’ because the principle as stated is in need of clarification and arguably subject to a number of powerful objections. For one thing, we need to clarify whether the principle holds for any admissible measure of epistemic utility, or whether it needs to be understood as quantifying over all admissible ways of measuring epistemic utility—Schoenfield (2017) opts for the latter, in formulating a principle she calls ‘Permission’, but Pettigrew (2016) opts for the former (see especially his discussion of the well-known ‘Bronfman objection’ in ch. 5). (As stated, the principle is perhaps closest to what Joyce 2009 calls ‘Admissibility’.) For another, the principle might be subject to counterexamples in cases where *every* credence function suffers from a similar defect—if any credence function is dominated by another, say (if for any $P$ there is some ${P}^{\prime}$ that has always at least as much and sometimes more epistemic utility than $P$), we may think some dominated credence functions are rationally permissible, even if not rationally required (see again the formulation of ‘Permission’ in Schoenfield 2017). Other, weaker alternatives to Dominance include the principles Pettigrew (2016) calls ‘Undominated Dominance’ and ‘Immodest Dominance’, as well as the principle that Joyce (1998) relies on in his argument for Probabilism—the principle Pettigrew (2016) calls ‘Dominance’, which is weaker than what we are calling ‘Dominance’. ⮭
- An additional assumption, worth pointing out since it may go unnoticed, is that epistemic utility functions are real-valued. For a way of thinking about epistemic utility for imprecise probabilities that does without this assumption—a view on epistemic utility on which imprecise probabilities are only partially ranked in terms of epistemic utility relative to any world—see Seidenfeld et al. (2012). ⮭
- The results in Seidenfeld et al. (2012) and Schoenfield (2017) rely on similar assumptions. ⮭
- Cf. the principle Schoenfield calls ‘Boundedness’ (2017: 672), and what Berger and Das call ‘Local Boundedness’ (2020: 13), a principle which is implicitly assumed in Seidenfeld et al. (2012) (see, e.g., the proof of their Proposition 5 at p. 1256). ⮭
- Note that this view is incompatible with Extensionality—the thesis that the epistemic utility of a credence function at a world is independent of the content of the propositions it assigns credence to. Indeed, it may be that a commitment to Extensionality all but requires a commitment to Refinement. ⮭
- I should add that whereas my results rely on much weaker assumptions than those from the literature on imprecise credence functions, they are not stronger than them, since the conclusions they derive from their stronger assumptions are stronger than those we derive from my weaker assumptions. For instance, as mentioned above, Berger and Das show that, given their assumptions on epistemic utility functions, for any imprecise credence function there will be a precise credence function with the same domain that is as good, epistemically, as the imprecise credence function relative to any world. The analogous conclusion, in my framework, would be that for any credence function $P$ defined over some partition $\pi $, and any refinement ${\pi}^{\prime}$ of $\pi $, there is a credence function defined over ${\pi}^{\prime}$ that is as good, epistemically, as $P$ relative to any world. Without any additional assumptions on what epistemic utility functions are like, this cannot be guaranteed. (For one thing, without additional assumptions, we could have epistemic utility functions that make any credence function defined over $\pi $ dominate any credence function defined over ${\pi}^{\prime}$.) It is an interesting question, beyond the scope of this paper, what additional constraints on generalized epistemic utility functions are needed to establish this analogous result. ⮭
- My proof strategy follows some of the reasoning in the first five sections of Grünwald and Dawid (2004). ⮭
- See, e.g., (Mertens et al. 2015: Theorem i.1.1.) for a proof. ⮭
- See, e.g., (Kechris 1995: Theorem 17.22). ⮭

For helpful conversations, comments, and advice, I am grateful to Kenny Easwaran, Richard Pettigrew, Itai Sher, and Henry Swift. Special thanks to Chris Meacham who, in addition to indulging me in many conversations about the material in this paper, went through an earlier draft of the paper with great care. Last but not least, thanks are also due to two anonymous referees for this journal for their many generous and extremely helpful comments. Much of this paper was written while I was a fellow at the Center for Advanced Study in the Behavioral Sciences at Stanford University: I’m grateful to the Center for its financial support.

1 Berger, Dominik and Nilanjan Das (2020). Accuracy and Credal Imprecision. *Noûs*, 54 (3), 666–703. http://doi.org/10.1111/nous.12274

2 Bradley, Seamus (2018). A Counterexample to Three Imprecise Decision Theories. *Theoria*, 85 (1), 18–30. http://doi.org/10.1111/theo.12170

3 Bradley, Seamus (2019). Imprecise Probabilities. In Edward N. Zalta (Ed.), *The Stanford Encyclopedia of Philosophy* (Spring 2019 ed.). https://plato.stanford.edu/archives/spr2019/entries/imprecise-probabilities/

4 Brier, Glenn W. (1950). Verification of Forecasts Expressed in Terms of Probability. *Monthly Weather Review*, 78 (1), 1–3. http://doi.org/10.1175/1520-0493(1950)078<0001:vofeit>2.0.co;2

5 Bruckner, A. M. (1962). Tests for the Superadditivity of Functions. *Proceedings of the American Mathematical Society*, 13 (1), 126–30. http://doi.org/10.2307/2033788

6 Campbell-Moore, Catrin and Benjamin A. Levinstein (2021). Strict Propriety Is Weak. *Analysis*, 81 (1), 8–13. http://doi.org/10.1093/analys/anaa001

7 Carr, Jennifer (2015). Epistemic Expansions. *Res Philosophica*, 92 (2), 217–36. http://doi.org/10.11612/resphil.2015.92.2.4

8 Ellsberg, Daniel (1961). Risk, Ambiguity, and the Savage Axioms. *The Quarterly Journal of Economics*, 75 (4), 643–69. http://doi.org/10.2307/1884324

9 Gibbard, Allan (2008). Rational Credence and the Value of Truth. In Tamar Szábo Gendler and John Hawthorne (Eds.), *Oxford Studies in Epistemology* (Vol. 2, 143–64). Oxford University Press.

10 Gilboa, Itzhak (1987). Expected Utility with Purely Subjective Non-Additive Probabilities. *Journal of Mathematical Economics*, 16 (1), 65–88.

11 Gilboa, Itzhak and David Schmeidler (1989). Maxmin Expected Utility with Non-Unique Prior. *Journal of Mathematical Economics*, 18 (2), 141–53. http://doi.org/10.1016/0304-4068(89)90018-9

12 Gneiting, Tilmann and Adrian E. Raftery (2007). Strictly Proper Scoring Rules, Prediction, and Estimation. *Journal of the American Statistical Association*, 102 (477), 359–78. http://doi.org/10.1198/016214506000001437

13 Greaves, Hilary and David Wallace (2006). Justifying Conditionalization: Conditionalization Maximizes Expected Epistemic Utility. *Mind*, 115 (459), 607–32.

14 Grünwald, Peter D. and A. Philip Dawid (2004). Game Theory, Maximum Entropy, Minimum Discrepancy and Robust Bayesian Decision Theory. *The Annals of Statistics*, 32 (4), 1367–433.

15 Halpern, Joseph Y. (2003). *Reasoning about Uncertainty*. The MIT Press.

16 Horowitz, Sophie (2014). Immoderately Rational. *Philosophical Studies*, 167 (1), 41–56.

17 Hurwicz, Leonid (1951). The Generalized Bayes Minimax Principle: A Criterion for Decision Making Under Uncertainty. *Cowles Commission Discussion Paper* 335.

18 Joyce, James M. (1998). A Nonpragmatic Vindication of Probabilism. *Philosophy of Science*, 65 (4), 575–603.

19 Joyce, James M. (2005). How Probabilities Reflect Evidence. *Philosophical Perspectives*, 19 (1), 153–78.

20 Joyce, James M. (2009). Accuracy and Coherence: Prospects for an Alethic Epistemology of Partial Belief. In Franz Huber and Christoph Schmidt-Petri (Eds.), *Degrees of Belief* (263–97), Vol. 342 of Synthese Library. Springer Netherlands.

21 Joyce, James M. (2010). A Defense of Imprecise Credences in Inference and Decision Making. *Philosophical Perspectives*, 24 (1), 281–323. http://doi.org/10.1111/j.1520-8583.2010.00194.x

22 Kechris, Alexander (1995). *Classical Descriptive Set Theory*. Springer.

23 Konek, Jason (in press). Epistemic Conservativity and Imprecise Credence. *Philosophy and Phenomenological Research*.

24 Leitgeb, Hannes and Richard Pettigrew (2010). An Objective Justification of Bayesianism I: Measuring Inaccuracy. *Philosophy of Science*, 77 (2), 201–35. http://doi.org/10.1086/651317

25 Levi, Isaac (1974). On Indeterminate Probabilities. *The Journal of Philosophy*, 71 (13), 391–418. http://doi.org/10.2307/2025161

26 Lewis, David (1971). Immodest Inductive Methods. *Philosophy of Science*, 38 (1), 54–63. http://doi.org/10.1086/288339

27 Mayo-Wilson, Conor and Gregory Wheeler (2016). Scoring Imprecise Credences: A Mildly Immodest Proposal. *Philosophy and Phenomenological Research*, 93 (1), 55–78. http://doi.org/10.1111/phpr.12256

28 Mertens, Jean-François, Sylvain Sorin, and Shmuel Zamir (2015). *Repeated Games*. Econometric Society Monographs. Cambridge University Press.

29 Nielsen, Michael (2022). On the Best Accuracy Arguments for Probabilism. *Philosophy of Science*, 89 (3), 621–30. http://doi.org/10.1017/psa.2021.43

30 Pérez Carballo, Alejandro (2023). Downwards Propriety in Epistemic Utility Theory. *Mind*, 132 (525), 30–62. http://doi.org/10.1093/mind/fzac011

31 Pettigrew, Richard (2016). *Accuracy and the Laws of Credence*. Oxford University Press.

32 Pettigrew, Richard (2018). The Population Ethics of Belief: In Search of an Epistemic Theory X. *Noûs*, 52 (2), 336–72.

33 Predd, J. B., R. Seiringer, E. H. Lieb, D. N. Osherson, H. V. Poor, and S. R. Kulkarni (2009). Probabilistic Coherence and Proper Scoring Rules. *IEEE Transactions on Information Theory*, 55 (10), 4786–92.

34 Ramoni, Marco and Paola Sebastiani (2001). Robust Bayes Classifiers. *Artificial Intelligence*, 125 (1–2), 209–26. http://doi.org/10.1016/s0004-3702(00)00085-0

35 Satia, Jay K. and Roy E. Lave (1973). Markovian Decision Processes with Uncertain Transition Probabilities. *Operations Research*, 21 (3), 728–40.

36 Savage, Leonard J. (1951). The Theory of Statistical Decision. *Journal of the American Statistical Association*, 46 (253), 55–67. http://doi.org/10.1080/01621459.1951.10500768

37 Savage, Leonard J. (1971). Elicitation of Personal Probabilities and Expectations. *Journal of the American Statistical Association*, 66 (336), 783–801.

38 Schoenfield, Miriam (2017). The Accuracy and Rationality of Imprecise Credences. *Noûs*, 51 (4), 667–85.

39 Seidenfeld, Teddy (1988). Decision Theory Without “Independence” or Without “Ordering”. *Economics and Philosophy*, 4 (2), 267–90. http://doi.org/10.1017/s0266267100001085

40 Seidenfeld, Teddy, Mark J. Schervish, and Joseph B. Kadane (2012). Forecasting with Imprecise Probabilities. *International Journal of Approximate Reasoning*, 53 (8), 1248–61.

41 Sion, Maurice (1958). On General Minimax Theorems. *Pacific Journal of Mathematics*, 8 (1), 171–76. http://doi.org/10.2140/pjm.1958.8.171

42 Talbot, Brian (2019). Repugnant Accuracy. *Noûs*, 53 (3), 540–63.

43 Troffaes, Matthias C. M. (2007). Decision Making Under Uncertainty Using Imprecise Probabilities. *International Journal of Approximate Reasoning*, 45 (1), 17–29.

44 van Fraassen, Bas C. (1990). Figures in a Probability Landscape. In J. Michael Dunn and Anil Gupta (Eds.), *Truth or Consequences: Essays in Honor of Nuel Belnap* (345–56). Kluwer Academic.

45 Walley, Peter (1991). *Statistical Reasoning with Imprecise Probabilities*. Chapman and Hall.

46 White, Roger (2010). Evidential Symmetry and Mushy Credence. In Tamar Szabó Gendler and John Hawthorne (Eds.), *Oxford Studies in Epistemology* (Vol. 3, 161–86). Oxford University Press.

My proof of Lemma 3.10 will rely on a fundamental result in game theory, which I will simply state without proof.^{38} Before stating the result, I need to introduce some minimal background.

A *two-person, zero-sum game* (henceforth, a *game*) is a triple $\mathcal{G}=(A,B,f)$, where $A$ is the set of *pure strategies* for player I, $B$ is the set of pure strategies for player II, and $f:A\times B\to \overline{\mathbb{R}}$ is a *payoff function*. When player I chooses to play $a\in A$ and player II chooses to play $b\in B$, player I gets $f(a,b)$ from player II if $f(a,b)>0$ and gives player II $-f(a,b)$ if $f(a,b)<0$ (nothing is exchanged if $f(a,b)=0$, and let’s not bother to think of an ‘intuitive’ interpretation of a situation in which $f(a,b)$ is non-finite).

The *lower value* of the game, $\underline{V}$, is defined as

$\underline{V}:=\underset{a\in A}{\mathrm{sup}}\,\underset{b\in B}{\mathrm{inf}}\,f(a,b).$

This is the best payoff that player I can guarantee, since for each $a\in A$,

$\underset{b\in B}{\mathrm{inf}}\,f(a,b)$

is the payoff that player I is guaranteed by playing $a$. The *upper value* of the game, $\overline{V}$, is defined analogously as

$\overline{V}:=\underset{b\in B}{\mathrm{inf}}\,\underset{a\in A}{\mathrm{sup}}\,f(a,b).$

In general,

$\underline{V}\le \overline{V}.$

We say that $\mathcal{G}$ *has a value* iff

$\underline{V}=\overline{V}.$

If a game has a value, we say that player I has an *optimal strategy* iff there is ${a}^{*}\in A$ that achieves

$\underset{a\in A}{\mathrm{sup}}\,\underset{b\in B}{\mathrm{inf}}\,f(a,b).$

Similarly, we say that player II has an optimal strategy iff there is ${b}^{*}\in B$ achieving

$\underset{b\in B}{\mathrm{inf}}\,\underset{a\in A}{\mathrm{sup}}\,f(a,b).$

If the game has a value and both players have an optimal strategy, the pair of optimal strategies corresponds in an intuitive way to an *equilibrium* in the game—a pair of strategies such that neither player prefers unilaterally deviating from it. Such a pair of strategies is called a *saddle-point*—thus, a saddle point in a game $\mathcal{G}$ is a pair of strategies $({a}^{*},{b}^{*})$ such that for all $a\in A$, $b\in B$, $f(a,{b}^{*})\le f({a}^{*},{b}^{*})\le f({a}^{*},b)$.

Not all games have a value. Some of the foundational results in game theory allow us to characterize classes of games that have a value. I will be relying on one such result for the proof of Lemma 3.10.
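To see how the lower and upper values can come apart, it may help to compute them for a small finite game. The sketch below uses a hypothetical $2\times 2$ payoff matrix (a version of ‘matching pennies’, in which player I wins just in case the two choices match); the matrix and code are purely illustrative:

```python
import numpy as np

# Hypothetical payoff matrix: rows are player I's pure strategies, columns
# are player II's, and f[a, b] is the amount player II pays player I.
f = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

# Lower value: the best payoff player I can guarantee.
lower_value = f.min(axis=1).max()   # sup_a inf_b f(a, b)

# Upper value: the least player II can be sure to hold payments down to.
upper_value = f.max(axis=0).min()   # inf_b sup_a f(a, b)

print(lower_value, upper_value)     # -1.0 1.0
```

Since the lower value ($-1$) is strictly below the upper value ($1$), this game has no value, and hence no saddle-point, in pure strategies.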

Recall that a function $f$ on a vector space that takes values in $\overline{\mathbb{R}}$ is *convex* iff for each $\lambda \in (0,1)$, $\lambda f(x)+(1-\lambda )f(y)\ge f(\lambda x+(1-\lambda )y)$ whenever the term on the left hand side is well-defined. We say that $f$ is *concave* iff $-f$ is convex, and that $f$ is *affine* iff it is both convex and concave.

If $X$ is a topological space, we say that a function $f:X\to \overline{\mathbb{R}}$ is *upper-semi-continuous* (or *u.s.c*.) iff $f<\infty $ and, for each $r\in \mathbb{R}$, the set $\{x\in X:f(x)\ge r\}$ is closed in $X$. We say that $f$ is *lower-semi-continuous* (or *l.s.c*.) iff $f>-\infty $ and, for each $r\in \mathbb{R}$, the set $\{x\in X:f(x)\le r\}$ is closed in $X$. (Here we follow Mertens, Sorin, and Zamir 2015.) Of course, $f$ is u.s.c. iff $-f$ is l.s.c. The result below is essentially Sion’s minimax theorem (Sion 1958).^{39}

**Theorem A.1**. *Let* $A$ *and* $B$ *be two convex topological spaces and suppose* $f:A\times B\to \overline{\mathbb{R}}$ *is concave and u.s.c. on the first argument, and convex and l.s.c. on the second—that is, for any* $b\in B$ *and* $a\in A$, $f(x,b)$ *and* $-f(a,y)$ *are concave, u.s.c. functions of* $x$ *and* $y$ *(respectively). Then the game* $\mathcal{G}=(A,B,f)$ *has a value. If* $A$ *and* $B$ *are compact, then the game has a saddle-point*.

□

We can apply Theorem A.1 to show that, whereas many games of interest do not have a saddle-point, if we allow players to *randomize* their choice of strategy, the resulting game does have a saddle-point. Let me explain.

For any compact $X\subseteq {\mathbb{R}}^{n}$, let $\Delta (X)$ denote the space of all Borel probability functions on $X$—the space of all countably additive probability functions on $X$ whose domain is the smallest $\sigma $-algebra that contains all the open subsets of $X$. Of course, $\Delta (X)$ is a convex set, and from the fact that $X$ is a compact subset of Euclidean space, we know that $\Delta (X)$ is compact.^{40}

Now, fix $\mathcal{G}=(A,B,f)$, with $A\subseteq {\mathbb{R}}^{n}$ and $B\subseteq {\mathbb{R}}^{m}$ compact. We say that ${\mathcal{G}}^{\ast}=({A}^{\ast},{B}^{\ast},{f}^{\ast})$ is a *mixed extension* of $\mathcal{G}$ iff ${A}^{*}$ (resp. ${B}^{*}$) is a closed and convex subset of $\Delta (A)$ (resp. $\Delta (B)$) and, for any $\alpha \in {A}^{*}$, $\beta \in {B}^{*}$,

${f}^{\ast}(\alpha ,\beta )={\mathbb{E}}_{X\sim \alpha}\,{\mathbb{E}}_{Y\sim \beta}\,f(X,Y).$

Since each $a\in A$ (resp. $b\in B$) can be identified with the unique probability function ${\alpha}_{a}\in \Delta (A)$ (resp. ${\beta}_{b}\in \Delta (B)$) that assigns probability one to $\{a\}$ (resp. $\{b\}$), I will abuse notation and think of ${f}^{*}$ as also defined over elements of $A\times B$.

If $f$ is continuous and $f<\infty $, ${f}^{*}$ is concave (since linear) and u.s.c. on the first argument and convex (since linear) and l.s.c. on the second. And since any closed subset of a compact topological space is compact, we know that ${A}^{*}$ and ${B}^{*}$ are compact and convex topological spaces. So we can apply Theorem A.1 to show that any mixed extension ${\mathcal{G}}^{\ast}$ has a saddle-point.
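To illustrate, consider again a hypothetical ‘matching pennies’ payoff matrix: the pure game has no saddle-point, but its full mixed extension does, with both players randomizing uniformly. A minimal numerical sketch (the matrix and candidate strategies are illustrative assumptions):

```python
import numpy as np

# f[a, b] is what player II pays player I in the pure game.
f = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def f_star(alpha, beta):
    """Payoff of the mixed extension: E_{X ~ alpha} E_{Y ~ beta} f(X, Y)."""
    return alpha @ f @ beta

# Candidate saddle-point: both players randomize uniformly.
alpha_star = np.array([0.5, 0.5])
beta_star = np.array([0.5, 0.5])
value = f_star(alpha_star, beta_star)

# Saddle-point check: since f_star is linear in each argument, it suffices
# to check unilateral deviations to pure strategies.
pure = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
assert all(f_star(a, beta_star) <= value <= f_star(alpha_star, b)
           for a in pure for b in pure)
print(value)  # 0.0
```

So the mixed extension has value $0$, attained at the uniform pair, even though the underlying pure game has no value.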

**Corollary A.2**. *Suppose* $A\subseteq {\mathbb{R}}^{n}$ *and* $B\subseteq {\mathbb{R}}^{m}$ *are compact and* $f:A\times B\to \overline{\mathbb{R}}$ *is separately continuous. If* $f<\infty $, *then any mixed extension of* $\mathcal{G}=(A,B,f)$ *has a saddle-point*.

□

I can finally present the proof of Lemma 3.10.

*Proof of Lemma 3.10*. Suppose $\mathfrak{u}$ is a continuous epistemic utility function that is partition-wise strictly proper. Fix a probability function $P$ and a refinement $\pi $ of ${\pi}_{P}$. We define a game ${\mathcal{G}}_{P}=(A,B,f)$ as follows. First, let $N=|\pi |$, fix an enumeration $\{{s}_{i}\}$ of $\pi $, and let $A$ be the set of elements of ${\mathbb{R}}^{N}$ of the form $\langle Q({s}_{1}),\dots ,Q({s}_{N})\rangle $ for $Q\in {\mathcal{P}}_{\pi}$. Abusing notation, I will use $Q$, ${Q}^{\prime}$, etc. to denote elements of $A$, even though I will think of them as members of ${\mathbb{R}}^{N}$. Next let

$B:=\{Q\in A:Q({s}_{i})=1\text{ for some }i\}.$

Again abusing notation, I will use ${s}_{1},{s}_{2},\dots ,{s}_{N}$ to denote the elements of $B$ in the obvious way (with ${s}_{i}$ corresponding to that $Q\in A$ assigning probability $1$ to ${s}_{i}$). Finally, let

$f(Q,{s}_{i})=\mathfrak{u}(Q,{s}_{i}).$

Note that $A$ and $B$ are compact subsets of ${\mathbb{R}}^{N}$, and since $\mathfrak{u}$ is continuous and $\mathfrak{u}<\infty $, we know that any mixed extension of ${\mathcal{G}}_{P}$ has a saddle point.

Our next step is to define a particular mixed extension of ${\mathcal{G}}_{P}$ and apply Corollary A.2. Before doing so, however, let me make a couple of observations. First, any element of $\Delta (A)$ corresponds to a probability function over ${\mathcal{P}}_{\pi}$. I will use $\mu ,{\mu}^{\prime}$, etc. to denote elements of $\Delta (A)$, and will continue to abuse notation and use $Q$, ${Q}^{\prime}$, etc. to denote the element of $\Delta (A)$ that assigns probability $1$ to the corresponding element of ${\mathcal{P}}_{\pi}$. Second, any element of $\Delta (B)$ corresponds to a probability function over $\pi $. I will thus abuse notation and use $Q$, ${Q}^{\prime}$, etc. to denote elements of $\Delta (B)$.

Let now ${A}^{*}=\Delta (A)$ and ${B}^{*}={[P]}_{\pi}$, and note that ${B}^{*}$ is indeed a subset of $\Delta (B)$. Moreover, both ${A}^{*}$ and ${B}^{*}$ are closed and convex subsets of $\Delta (A)$ and $\Delta (B)$ (respectively), so that ${\mathcal{G}}_{P}^{\ast}=({A}^{\ast},{B}^{\ast},{f}^{\ast})$ is indeed a mixed extension of ${\mathcal{G}}_{P}$, with

${f}^{\ast}(\mu ,Q)={\mathbb{E}}_{X\sim \mu}\,{\mathbb{E}}_{Q}[\mathfrak{u}(X)],$

and accordingly ${f}^{\ast}({Q}^{\prime},Q)={\mathbb{E}}_{Q}[\mathfrak{u}({Q}^{\prime})]$.

From Corollary A.2, we know that our game ${\mathcal{G}}_{P}^{\ast}$ has a saddle point $({\mu}^{*},\widehat{P})$. We claim that this saddle point is in fact of the form ($\widehat{P}$, $\widehat{P}$). To see why, note that since ${\mu}^{*}$ is an optimal mixed strategy for player I, it follows that for any $Q\in {\mathcal{P}}_{\pi}$,

${f}^{\ast}(Q,\widehat{P})\le {f}^{\ast}({\mu}^{\ast},\widehat{P}),$

and thus that

${\mu}^{\ast}\left(\underset{Q\in {\mathcal{P}}_{\pi}}{\text{arg max}}\,{f}^{\ast}(Q,\widehat{P})\right)=1.$

But since $\mathfrak{u}$ is partition-wise strictly proper,

$\underset{Q\in {\mathcal{P}}_{\pi}}{\text{arg max}}\,{f}^{\ast}(Q,\widehat{P})=\left\{\widehat{P}\right\}.$
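This uniqueness claim is easy to check numerically for a concrete strictly proper utility. The sketch below uses a Brier-style utility $\mathfrak{u}(Q,{s}_{i})=-{\sum}_{j}{(Q({s}_{j})-{\delta}_{ij})}^{2}$ on a three-cell partition; the particular $\widehat{P}$ and the grid search are illustrative assumptions, not part of the proof:

```python
import itertools
import numpy as np

P_hat = np.array([0.5, 0.3, 0.2])   # an illustrative probability over {s_1, s_2, s_3}

def u(Q, i):
    """Brier-style utility of credence vector Q when cell s_i obtains."""
    truth = np.zeros(len(Q)); truth[i] = 1.0
    return -np.sum((Q - truth) ** 2)

def expected_u(P, Q):
    """Expected epistemic utility of Q, computed relative to P."""
    return sum(P[i] * u(Q, i) for i in range(len(P)))

# Grid of probability vectors over the three cells (step 0.01).
grid = [np.array([a, b, 100 - a - b]) / 100
        for a, b in itertools.product(range(101), repeat=2) if a + b <= 100]

best = max(grid, key=lambda Q: expected_u(P_hat, Q))
print(best)  # [0.5 0.3 0.2]
```

The grid point maximizing the expectation is $\widehat{P}$ itself, as strict propriety requires (indeed, for the Brier utility the expectation equals $-\Vert Q-\widehat{P}{\Vert}^{2}+\Vert \widehat{P}{\Vert}^{2}-1$, which is uniquely maximized at $Q=\widehat{P}$).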

Summing up, we have a saddle point of the form ($\widehat{P}$, $\widehat{P}$), and thus we know that for any ${P}^{*}\in {[P]}_{\pi}$ and any $Q\in {\mathcal{P}}_{\pi}$,

${f}^{\ast}(Q,\widehat{P})\le {f}^{\ast}(\widehat{P},\widehat{P})\le {f}^{\ast}(\widehat{P},{P}^{\ast}).$

In other words,

${\mathbb{E}}_{\widehat{P}}[\mathfrak{u}(Q)]\le {\mathbb{E}}_{\widehat{P}}[\mathfrak{u}(\widehat{P})]\quad \text{for all }Q\in {\mathcal{P}}_{\pi},$

(10)
and

${\mathbb{E}}_{\widehat{P}}[\mathfrak{u}(\widehat{P})]\le {\mathbb{E}}_{{P}^{\ast}}[\mathfrak{u}(\widehat{P})]\quad \text{for all }{P}^{\ast}\in {[P]}_{\pi}.$

(11)
But note that (11) entails both

${\mathbb{E}}_{\widehat{P}}[\mathfrak{u}(\widehat{P})]={\underline{\mathbb{E}}}_{P}[\mathfrak{u}(\widehat{P})],$

(12)
by definition (given that $\widehat{P}\in {[P]}_{\pi}$), and

${\mathbb{E}}_{\widehat{P}}[\mathfrak{u}(\widehat{P})]\le {\mathbb{E}}_{{P}^{\ast}}[\mathfrak{u}({P}^{\ast})]\quad \text{for all }{P}^{\ast}\in {[P]}_{\pi},$

(13)
since $\mathfrak{u}$ is partition-wise proper.

Hence, from (10), (12), and (13), we have that for any ${P}^{*}\in {[P]}_{\pi}$ and any $Q\in {\mathcal{P}}_{\pi}$,

${\mathbb{E}}_{\widehat{P}}[\mathfrak{u}(Q)]\le {\mathbb{E}}_{\widehat{P}}[\mathfrak{u}(\widehat{P})]={\underline{\mathbb{E}}}_{P}[\mathfrak{u}(\widehat{P})]\le {\mathbb{E}}_{{P}^{\ast}}[\mathfrak{u}({P}^{\ast})],$

as desired.

□