A familiar defense of Personalist or Subjective Bayesian theory is that, under a variety of sufficient conditions, asymptotically, with increasing shared evidence, investigators' credences become certain of the truth value of the hypothesis under investigation.

In this note we continue an old discussion of some familiar results about the asymptotics of Bayesian updating of credences.^{1}^{2} One such result, due to Doob, concerns asymptotic certainty.

In ^{3}

In

For ease of exposition, we use a continuing example throughout this note. Consider a Borel space of possible events based on the set of denumerable sequences of binary outcomes from flips of a coin of unknown bias using a mechanism of unknown dynamics. The sample space consists of denumerable sequences of 0s (tails) and 1s (heads). The nested data available to the Bayesian investigator are the growing initial histories of length
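The setup of the continuing example can be sketched in code. This is our illustrative rendering, not the paper's notation: the bias value, the seed, and the helper name `history` are assumptions, and only a finite prefix of a state is ever simulated.

```python
import random

# Sketch of the continuing example (our names, not the paper's):
# a state is a denumerable binary sequence, of which the investigator
# only ever sees a finite initial history. We simulate the first N
# flips of a coin whose bias is hidden from the investigator.
random.seed(0)
TRUE_BIAS = 0.7   # hypothetical true bias, unknown to the agent
N = 12

state_prefix = [1 if random.random() < TRUE_BIAS else 0 for _ in range(N)]

def history(state, n):
    """Initial history of length n: the data available after n flips."""
    return tuple(state[:n])

# The histories are nested: each one extends the previous one.
for n in range(1, N + 1):
    assert history(state_prefix, n)[:n - 1] == history(state_prefix, n - 1)
```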

For example, an hypothesis of interest

Consider the following, strong-law (countably additive) version of the Bayesian asymptotic approach to certainty, which applies to the continuing example of denumerable sequences of 0s and 1s.^{4} The assumptions for the result that we highlight below involve the

^{th} experiment. Form the infinite Cartesian product

Each ^{5} These finite sequences constitute the observed data.

We are concerned, in particular, with tracking the nested histories of the initial

That is, for

Bayes’ Rule for updating credences requires that
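Since the surrounding text introduces Bayes' Rule for these credences, a minimal numerical sketch may help. The discrete grid of bias values and the uniform prior are our illustrative assumptions, not the paper's model:

```python
# Minimal sketch of Bayes' Rule in the coin example (our illustrative
# grid prior, not the paper's): posterior is proportional to
# prior times likelihood, renormalized.

def update(prior, flips):
    """Posterior over bias values given a 0/1 history of flips."""
    heads = sum(flips)
    tails = len(flips) - heads
    post = {th: p * th**heads * (1 - th)**tails for th, p in prior.items()}
    total = sum(post.values())
    return {th: p / total for th, p in post.items()}

thetas = [0.1, 0.3, 0.5, 0.7, 0.9]
prior = {th: 1 / len(thetas) for th in thetas}
posterior = update(prior, [1, 1, 0, 1, 1, 1, 0, 1])  # 6 heads, 2 tails
assert posterior[0.7] > prior[0.7]  # data shift credence toward high bias
```

One coherence property of sequential updating is visible here: updating on a nested history in one step agrees with updating flip by flip.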

The result in question, which is a substitution instance of

For

In words, subject to the conditions above, the agent’s credences satisfy asymptotic certainty about the truth value of the hypothesis
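A simulation can illustrate this asymptotic certainty in the continuing example. Everything concrete here (the grid prior, the hypothesis that the bias is at least 1/2, and the true bias 0.7) is our assumption for illustration, not the paper's construction:

```python
import math
import random

# Illustrative Doob-style convergence (our construction): with a
# uniform prior over a finite grid of biases and the hypothesis
# H = "the bias is at least 1/2", the posterior probability of H
# tends to 1 along typical data sequences when the true bias is 0.7.
random.seed(1)
TRUE_THETA = 0.7
thetas = [0.1, 0.3, 0.5, 0.7, 0.9]

def posterior_of_H(flips):
    heads = sum(flips)
    tails = len(flips) - heads
    # Work in log space to avoid numerical underflow for long histories.
    log_w = {th: heads * math.log(th) + tails * math.log(1 - th)
             for th in thetas}
    m = max(log_w.values())
    w = {th: math.exp(lw - m) for th, lw in log_w.items()}
    return sum(w[th] for th in thetas if th >= 0.5) / sum(w.values())

flips = [1 if random.random() < TRUE_THETA else 0 for _ in range(3000)]
traj = [posterior_of_H(flips[:n]) for n in (10, 100, 1000, 3000)]
assert traj[-1] > 0.99  # asymptotic certainty about the true hypothesis
```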

To summarize: For each

^{6}

In other words, the

Next, we examine details of conditional probabilities given elements of the failure set, even when the agent’s credences are countably additive and the other assumptions in Doob’s result obtain. Specifically, consider the countably additive Bayesian agent’s conditional probabilities,

At the opposite pole from the veridical states, the states in

For deceptive states, the agent’s sequence of posterior probabilities also creates asymptotic certainty. This sense of certainty is introspectively indistinguishable to the investigator from the asymptotic certainty created by veridical states, where asymptotic certainty identifies the truth. Thus, to the extent that veridical states provide a defense of Bayesian learning—the observed histories
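A toy construction (ours, not the paper's) shows how a deceptive state can look. With prior credence 1/2 on each of bias 0.4 and bias 0.6, and hypothesis H that the limiting relative frequency of heads is 0.6, any state whose limiting frequency is strictly between 1/2 and 0.6 lies outside H, yet the likelihood ratio favors 0.6 without bound:

```python
import math

# Toy deceptive state (our construction, not the paper's).
# Prior: bias is 0.4 or 0.6, each with credence 1/2.
# Hypothesis H: "the limiting relative frequency of heads is 0.6".
# The deterministic state below has limiting frequency 0.55, so H is
# FALSE at it; but since 0.55 > 1/2, each stretch of data on average
# favors bias 0.6, and the posterior of H tends to 1: deception.

def flip(n):
    """n-th flip of a deterministic state with limiting frequency 0.55."""
    return int(math.floor(0.55 * (n + 1)) - math.floor(0.55 * n))

def posterior_H(n_flips):
    heads = sum(flip(i) for i in range(n_flips))
    tails = n_flips - heads
    log_odds = (heads - tails) * math.log(0.6 / 0.4)  # theta=0.6 vs 0.4
    return 1 / (1 + math.exp(-log_odds))

assert posterior_H(10000) > 0.999  # near-certainty in a FALSE hypothesis
```

At such a state the posterior trajectory is, flip by flip, the same kind of trajectory that a veridical state could produce, which is the indistinguishability at issue.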

When the failure set for an hypothesis

Less problematic than being deceptive, but nonetheless still challenging for a Bayesian account of objectivity, is a non-deceptive state

Then, with respect to hypothesis

Within the failure set for an hypothesis, the following partition of non-veridical states appears to us increasingly problematic for a defense of Bayesian methodology that seeks asymptotic credal certainty, driven by Bayesian learning, about the truth value of the hypothesis. In this list, we prioritize avoiding deception over obtaining veridicality:^{7}

(A) states that are intermittently veridical but not intermittently deceptive;

(B) states that are neither intermittently veridical nor intermittently deceptive;

(C) states that are both intermittently veridical and intermittently deceptive^{8};

(D) states that are intermittently deceptive but not intermittently veridical;

(E) states that are deceptive.
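A state that is both intermittently veridical and intermittently deceptive can be sketched concretely. The two-point prior and the doubling-block construction below are our illustrative assumptions, not the paper's:

```python
import math

# Toy illustration (our construction): with prior credence 1/2 on each
# of bias 0.4 and bias 0.6, and hypothesis H tied to bias 0.6, a state
# built from doubling blocks (1 head, 2 tails, 4 heads, 8 tails, ...)
# drives the posterior of H above 1 - eps and below eps infinitely
# often: intermittently veridical AND intermittently deceptive.

LOG_LR = math.log(0.6 / 0.4)  # per-flip log-likelihood-ratio step

def block_state(n_blocks):
    flips = []
    for k in range(n_blocks):
        flips += [1 - (k % 2)] * (2 ** k)  # heads-block, then tails-block
    return flips

def sigmoid(z):
    """Numerically safe logistic function for large |z|."""
    if z >= 0:
        return 1 / (1 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1 + ez)

def posterior_H_trajectory(flips):
    post, log_odds = [], 0.0
    for x in flips:
        log_odds += LOG_LR if x == 1 else -LOG_LR
        post.append(sigmoid(log_odds))
    return post

traj = posterior_H_trajectory(block_state(12))
assert max(traj) > 0.999 and min(traj) < 0.001  # visits both poles
```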

We find it helpful to illustrate these categories within the continuing example of sequences of binary outcomes. Consider the set of denumerable, binary sequences:

First, if

Next, consider an hypothesis that is logically independent of each finite dimensional rectangular event, an hypothesis that is an element of the

Next, we apply these findings to a recent debate about what

In a (2016) reply to Belot’s analysis, A. Elga focuses on the premise of countable additivity in Doob’s result. Countable additivity is required in neither

With their Propositions 1 and 2, Nielsen and Stewart point out that there exists a class of merely finitely additive credences (with the cardinality of the continuum) such that each credence function in this class assigns unconditional positive probability (even probability 1) to each comeager set. Then, such a credence displays

Below, we show that the

First, we argue that this sense of “modesty” is mistaken when deception is not a null event, regardless of whether the ^{9} In such cases, the investigator’s credences are called

Let

To put this analysis in behavioral terms, suppose the Bayesian investigator faces a sequence of decisions. These decisions might be practical, with cardinal utilities that reflect economic, legal, or ethical consequences. Or, these decisions might be cognitive, with epistemically motivated utilities, e.g., preferring true hypotheses to false ones, or preferring more informative to less informative hypotheses. Or, these might form a mixed sequence of decisions, some practical and some cognitive. Suppose each decision in this sequence rides on the probability of one specific hypothesis
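The behavioral point can be sketched abstractly. The threshold decision rule, the utilities, and the credence trajectory below are all our illustrative assumptions:

```python
# Behavioral sketch (ours): each period the agent picks the act that
# maximizes expected utility given the current credence in H.
# Utilities: "bet on H" pays 1 if H is true, else 0;
# "bet against H" pays 1 if H is false, else 0.
# So expected utility favors betting on H exactly when credence > 1/2.

def choose(credence_in_H):
    return "bet on H" if credence_in_H > 0.5 else "bet against H"

def realized_utility(act, H_is_true):
    return int((act == "bet on H") == H_is_true)

# A deceptive credence trajectory: it converges to 1 although H is false.
trajectory = [n / (n + 1) for n in range(1, 50)]
H_is_true = False

utilities = [realized_utility(choose(c), H_is_true) for c in trajectory]
# From the second period on the credence exceeds 1/2, so every later
# decision earns utility 0.
assert all(u == 0 for u in utilities[1:])
```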

One example in which the conditions of this analysis hold was given by ^{10}

A large class of examples of this kind arises by using Proposition 1 of Nielsen and Stewart. Here is how Proposition 1 applies to the continuing example of the Borel space, ^{11} Fix

Nielsen and Stewart’s Proposition 1 establishes that

Under

^{12} That is, for each sequence

^{13}

Specifically, the failure set ^{14} QED

We emphasize that certainty with deception is indistinguishable from certainty that is veridical. In the context of Result 2, the investigator cannot tell whether the observed history

We have argued above that a credence

In summary, it is our view that assigning positive probability to non-veridical states is not sufficient for creating an epistemically modest credence, because categories (D) or (E) may have positive prior probability as well. Indeed, in the continuing example, each probability

When the failure set for an hypothesis is deceptive and not null, that is in conflict with an attitude of epistemic modesty about learning that hypothesis.

Regarding the asymptotics of Bayesian certainties, e.g., Doob’s result, neither of Nielsen and Stewart’s concepts of

We thank two anonymous referees for their constructive feedback. Research for this paper was supported by NSF grant DMS-1916002.

To model changes in personal probability when learning evidence

We use the language of events to express these conditions. Let

Deceptive credence is a worse situation for empiricists than what

But if we are empiricists [pragmatists], if we believe that no bell in us tolls to let us know for certain when truth is in our grasp, then it seems a piece of idle fantasticality to preach so solemnly our duty of waiting for the bell.

It is not merely that the investigator fails to know when, e.g., her/his future credences for an hypothesis remain forever within epsilon of the value 1. With deceptive credences, the agent conflates asymptotic certainty of true statements with asymptotic certainty of false statements. The two cases become indistinguishable!

See, also, Theorem 2, Section IV of

See

For ease of exposition, where the context makes evident the hypothesis

We note in passing that the categories may be further refined by considering sojourn times for events that are required to occur infinitely often. Also, the categories may be expanded to include,

Our understanding is that case (C) satisfies the conditions for what

Moreover, when credences are merely finitely additive, the investigator may design an experiment to ensure deceptive Bayesian reasoning. For discussion see

By contrast, in

Existence of such 0–1 finitely additive probabilities is a non-constructive consequence (using the Axiom of Choice) of the fact that the comeager sets form a filter: they have the finite intersection property and are closed under supersets.
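The filter argument in this footnote can be spelled out. The following is a standard sketch under the stated assumptions, not a quotation of any proof in the text:

```latex
\[
\mathcal{F} \;=\; \{A \subseteq \Omega : A \text{ is comeager}\}
\]
is a (proper) filter: $\Omega \in \mathcal{F}$ and $\emptyset \notin \mathcal{F}$;
$A, B \in \mathcal{F} \Rightarrow A \cap B \in \mathcal{F}$ (finite
intersection property); and $A \in \mathcal{F},\ A \subseteq B \Rightarrow
B \in \mathcal{F}$ (closure under supersets). By the Axiom of Choice,
$\mathcal{F}$ extends to an ultrafilter $\mathcal{U} \supseteq \mathcal{F}$.
Setting
\[
P(A) \;=\;
\begin{cases}
1 & \text{if } A \in \mathcal{U},\\
0 & \text{otherwise,}
\end{cases}
\]
yields a finitely additive $0$--$1$ probability with $P(A) = 1$ for every
comeager $A$.
```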

Note well that

More generally, if

Similarly, Result 2 applies to each hypothesis

When either

The Corollary to Result 1 establishes that the same phenomenon occurs when Nielsen and Stewart’s Prop. 2 is generalized to include finitely additive credences that assign positive probability to each finite initial history and positive (but not necessarily probability 1) credence to each comeager set of sequences.

Here, we discuss and illustrate categories (A)–(D) of failure sets using the continuing example. Restrict the exchangeable “prior” probability

The set of veridical states for this credence and hypothesis includes each sequence where,

either

or, either ^{15}

The non-veridical states (the failure set)

Then

In order to illustrate the other three categories of non-veridical states, (A), (B), and (D), the following adaptation of the previous construction suffices. Depending upon which category is to be displayed, consider a state

We illustrate category (A) using the same hypothesis

and infinitely often

In this appendix we consider Nielsen and Stewart’s Proposition 2, and related approaches for creating a

(i)

and (ii)

Nielsen and Stewart’s Proposition 2 asserts that, however

We do not know whether the conclusion of

However, there is a second issue that tells against the technique of Proposition 2 for creating _{1} values in

With respect to the continuing example,

As above, let the hypothesis of interest be ^{16} On what basis do Nielsen and Stewart dismiss the deceptiveness of

Propositions 1 and 2 do not exhaust the varieties of finitely additive probabilities that assign positive probability to each comeager set in

Let

But