Suppose you are 40% confident that Candidate X will win in the upcoming election. Then you read a column projecting 80%. If you and the columnist are equally well informed and competent on this topic, how should you revise your opinion in light of theirs? Should you perhaps split the difference, arriving at 60%?
Plenty has been written on this topic.1 Much less studied, however, is the question of what comes next. Once you’ve updated your opinion about Candidate X, how should your other opinions change to accommodate this new view? For example, how should you revise your expectations about other candidates running for other seats? Or your confidence that your preferred party will win a majority?
A natural response is: by Jeffrey conditionalizing (Jeffrey, 1965).2 When you change your probability for $A$ from $P(A)$ to $P'(A)$, Jeffrey conditionalization adjusts your other opinions as follows:3
$$P'(B) = P(B \mid A)\,P'(A) + P(B \mid \neg A)\,P'(\neg A).$$
In our example, $A$ is the proposition that Candidate X will win their election, and $B$ is any other proposition, e.g. that your party will win a majority. If you split the difference with the columnist, then $P'(A) = 0.6$. So you plug this number into Jeffrey’s equation and, together with your existing opinions about $B$ given $A$ and $B$ given $\neg A$, it determines your new probability that your party will win a majority.
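For concreteness, here is a minimal sketch of this update in Python. The function name and the illustrative conditional credences are our own assumptions, not values from the example.

```python
def jeffrey_update(p_B_given_A, p_B_given_notA, new_p_A):
    """Jeffrey conditionalization on the partition {A, ~A}:
    P'(B) = P(B|A) * P'(A) + P(B|~A) * P'(~A)."""
    return p_B_given_A * new_p_A + p_B_given_notA * (1 - new_p_A)

# Hypothetical conditional credences, with B = "my party wins a majority".
# Splitting the difference with the columnist gives P'(A) = 0.6.
print(jeffrey_update(p_B_given_A=0.9, p_B_given_notA=0.3, new_p_A=0.6))  # 0.66
```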
Now suppose you read a different column, about another candidate running for a different seat. In light of the opinion expressed there, you update your confidence in the relevant proposition $C$ to some new probability $P''(C)$. Then you apply Jeffrey conditionalization again, to update your opinions on other matters accordingly:
$$P''(B) = P'(B \mid C)\,P''(C) + P'(B \mid \neg C)\,P''(\neg C).$$
A natural thought now is that the order shouldn’t matter here. Which column you read first is irrelevant. Either way, you have the same total information in the end, so your ultimate opinions should be the same.
This requirement is known as commutativity, and we will show that it strongly favours one particular way of merging your 40% with the columnist’s 80%. Rather than splitting the difference to give 60%, you should use another formula: “upco”, also known as “multiplicative pooling.” Given some neutral assumptions, this is the only way of combining probabilities that ensures Jeffrey conditionalization delivers the same final result, no matter which opinion you encounter first. And the difference between upco and difference-splitting can be striking: upco combines 40% and 80% to give a new credence of about 73%, rather than 60%.
But let’s first address the elephant in the room: why not simply conditionalize? You’ve learned that the columnist is 80% confident X will win, so shouldn’t you just conditionalize on the fact that they hold that opinion? Well, you should, if you can. But the “just conditionalize” answer still isn’t fully satisfactory, for two reasons.
First, it’s incomplete. After all, you may not have the prior credences, conditional and unconditional, that conditionalizing requires.4 Perhaps you just haven’t given the columnist’s opinion and its evidential weight much thought until now. Second, even if you have the relevant priors, the computations needed to conditionalize can be very demanding, especially if you are using Bayes’ Theorem for a large partition. It’s much easier to apply a simple formula like splitting the difference, and then Jeffrey conditionalize on the result. Indeed, this corresponds to a natural and intuitive way to break the problem up into two pieces: (i) how should I revise my opinion about Candidate X’s prospects, and (ii) how should my other views change in light of the first change?
What’s more, this two-step analysis is actually equivalent to conditionalization in many cases. Suppose the columnist’s opinion about Candidate X is only relevant to other matters insofar as it’s relevant to whether X wins or not. More precisely, suppose that conditional on X winning, other matters are independent of the columnist’s opinion (and likewise conditional on X not winning). Then, revising all your opinions by conditionalization is equivalent to the two-step process of first revising your opinion about $A$ by conditionalization, and then revising your remaining opinions by Jeffrey conditionalization.5
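Here is a small numerical illustration of that equivalence in Python. The joint distribution is our own construction, built so that $\{A, \neg A\}$ screens off $B$ from $E$, where $E$ is the proposition that the columnist holds the opinion they in fact do.

```python
from itertools import product

# Hypothetical ingredients (our own numbers): a prior on A, and conditional
# probabilities of B and of E given A / not-A. Building the joint this way
# makes B and E independent conditional on A and on not-A, i.e. the
# partition {A, ~A} screens off B from E.
p_A = 0.4
p_B_given = {True: 0.9, False: 0.3}   # P(B | A), P(B | ~A)
p_E_given = {True: 0.7, False: 0.2}   # P(E | A), P(E | ~A)

joint = {}
for a, b, e in product([True, False], repeat=3):
    pa = p_A if a else 1 - p_A
    pb = p_B_given[a] if b else 1 - p_B_given[a]
    pe = p_E_given[a] if e else 1 - p_E_given[a]
    joint[(a, b, e)] = pa * pb * pe

def prob(pred):
    return sum(p for w, p in joint.items() if pred(*w))

# Route 1: conditionalize everything on E directly.
p_B_given_E = prob(lambda a, b, e: b and e) / prob(lambda a, b, e: e)

# Route 2: conditionalize A on E, then Jeffrey conditionalize B on {A, ~A}.
p_A_given_E = prob(lambda a, b, e: a and e) / prob(lambda a, b, e: e)
jeffrey = p_B_given[True] * p_A_given_E + p_B_given[False] * (1 - p_A_given_E)

print(p_B_given_E, jeffrey)  # the two routes agree (0.72 and 0.72 here)
```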
For multiple reasons then, we would like to know how your opinion about Candidate X might be combined with the columnist’s, such that the result can be sensibly plugged into Jeffrey conditionalization. We’ll show that one way of performing this combination is uniquely privileged.
1. Upco Ensures Jeffrey Pooling Commutes
Splitting the difference between two opinions is known as linear pooling. The formula is just the familiar arithmetic mean:
$$P'(A) = \frac{P(A) + Q(A)}{2},$$
where $P(A)$ is your prior opinion about $A$, before reading any columns, and $Q(A)$ is the columnist’s probability. In our example $P(A) = 0.4$ and $Q(A) = 0.8$, so $P'(A) = 0.6$.
But we’ll see that commutativity instead favours upco, also known as multiplicative pooling:
$$P'(A) = \frac{P(A)\,Q(A)}{P(A)\,Q(A) + P(\neg A)\,Q(\neg A)}. \qquad (1)$$
If $P(A) = 0.4$ and $Q(A) = 0.8$, then $P'(A) = \frac{0.4 \times 0.8}{0.4 \times 0.8 + 0.6 \times 0.2} = \frac{0.32}{0.44} \approx 0.73$, significantly larger than the $0.6$ recommended by linear pooling.
These two formulas are examples of pooling rules, functions that take two probabilities $P(A)$ and $Q(A)$ and return a new probability $P'(A)$. Two more examples come from the other notions of ‘mean’ included in the classical trio of Pythagorean means: the geometric and harmonic means (Genest and Zidek, 1986). And there are many more, too many to name.
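To fix ideas, here are minimal Python sketches of a few pooling rules for a single proposition $A$; the function names are ours, and the geometric rule is written in its renormalized form.

```python
def linear_pool(p, q):
    """Split the difference: the arithmetic mean of the two probabilities."""
    return (p + q) / 2

def upco(p, q):
    """Multiplicative pooling ("upco") for a proposition A,
    with P(~A) = 1 - p and Q(~A) = 1 - q."""
    return (p * q) / (p * q + (1 - p) * (1 - q))

def geometric_pool(p, q):
    """Geometric mean of the two probabilities, renormalized over {A, ~A}."""
    num = (p * q) ** 0.5
    return num / (num + ((1 - p) * (1 - q)) ** 0.5)

print(linear_pool(0.4, 0.8))     # 0.6
print(upco(0.4, 0.8))            # ~0.727
print(geometric_pool(0.4, 0.8))  # ~0.62, between the other two
```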
Our question is how these various rules behave when coupled with Jeffrey conditionalization. Suppose we begin with $P$, fix some pooling rule $f$, and use the following two-step procedure for responding to $Q$’s opinion about $A$.
Jeffrey Pooling:

- Step 1. Apply pooling rule $f$ to $P(A)$ and $Q(A)$ to obtain $P'(A)$:
  $$P'(A) = f\big(P(A), Q(A)\big).$$
- Step 2. Revise all other credences by Jeffrey conditionalization:
  $$P'(B) = P(B \mid A)\,P'(A) + P(B \mid \neg A)\,P'(\neg A).$$
We will call this Jeffrey pooling with $Q$ on $A$ using $f$. But that’s a mouthful, so we’ll often leave some of these parameters implicit when context permits. We’ll say that $f$ ensures Jeffrey pooling commutes if, for any $P$, $Q$, and $R$, Jeffrey pooling with $Q$ on $A$ and then Jeffrey pooling the result with $R$ on $C$ has the same final result as Jeffrey pooling with $R$ on $C$ and then Jeffrey pooling the result with $Q$ on $A$.
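As a sketch, the whole two-step procedure can be written as a small Python function over the four-cell space generated by $A$ and $C$. The dictionary representation, the function name, and the example numbers are our own devices.

```python
def jeffrey_pool_with(f, P, source_prob, prop="A"):
    """Step 1: pool P(prop) with the source's probability via rule f.
    Step 2: Jeffrey conditionalize, i.e. rescale within prop and within
    its negation so that proportions inside each half are preserved."""
    prior = sum(v for cell, v in P.items() if prop in cell)
    new = f(prior, source_prob)
    return {cell: v * (new / prior if prop in cell else (1 - new) / (1 - prior))
            for cell, v in P.items()}

linear = lambda p, q: (p + q) / 2
# Hypothetical prior over the four cells, chosen so that P(A) = 0.4.
P = {("A", "C"): 0.10, ("A", "~C"): 0.30, ("~A", "C"): 0.25, ("~A", "~C"): 0.35}
print(jeffrey_pool_with(linear, P, 0.8))  # P'(A) = 0.6, spread rigidly over the cells
```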
Upco ensures that Jeffrey pooling commutes, as long as the necessary operations are defined. Zeros can gum up the works in two ways. First, if $P(A) = 0$ and $Q(A) = 1$ or vice versa, then Step 1 fails: upco cannot be applied, because its denominator is $0$. Second, the conditional probabilities used in Step 2 need to be defined, so $P(A)$ cannot be either $0$ or $1$. For a subsequent update on $C$ to have defined conditional probabilities as well, we also need the updated probability of $C$ to be non-extreme.
To avoid these difficulties, we will temporarily make the simplifying assumption that $P$ is regular, i.e. that it assigns positive probability to $A \wedge C$, $A \wedge \neg C$, $\neg A \wedge C$, and $\neg A \wedge \neg C$. This ensures no problematic zeros arise when Jeffrey pooling on $A$ and then $C$, or vice versa. In the Appendix we show that this assumption can be dropped; the result we are about to present holds whenever the relevant Jeffrey pooling operations are defined, even if $P$ is not regular.
If $P$ is regular, then upco is sufficient to make Jeffrey pooling commutative. We attribute this result to Field (1978) for reasons that will become clear in Section 3.
Theorem 1 (Field). Upco ensures that Jeffrey pooling commutes for any regular $P$, and any $Q$ and $R$.
In the Appendix we generalize this result to pooling over countable partitions, i.e. to cases where we don’t just hear $Q$’s opinion about $A$, but about every element in a countable partition.
An example makes clear why Theorem 1 is true. Recall the case we opened with, where $P(A) = 0.4$ and $Q(A) = 0.8$. Let’s further suppose that a second source $R$ reports some probability $R(C)$ for $C$, and that $P$ distributes its credence in some definite way over the four cells $A \wedge C$, $A \wedge \neg C$, $\neg A \wedge C$, and $\neg A \wedge \neg C$.
According to Theorem 1, the agent’s final opinions will be the same whether they Jeffrey pool with $Q$ first and $R$ second, or vice versa, provided they use upco for the first step in Jeffrey pooling.
Begin with the case where the agent pools with $Q$ first. Step 1 of Jeffrey pooling combines $P(A) = 0.4$ with $Q(A) = 0.8$ via upco, to yield $P'(A) = \frac{0.4 \times 0.8}{0.4 \times 0.8 + 0.6 \times 0.2} = \frac{8}{11}$. For Step 2, the key is to observe that the relative proportions of $A \wedge C$ and $A \wedge \neg C$ must be preserved—this is Jeffrey conditionalization’s oft-noted “rigidity”. So the $\frac{8}{11}$ assigned to $A$ must be divided $P(A \wedge C)$-to-$P(A \wedge \neg C)$ between $A \wedge C$ and $A \wedge \neg C$. Similarly, the $\frac{3}{11}$ assigned to $\neg A$ gets divided $P(\neg A \wedge C)$-to-$P(\neg A \wedge \neg C)$ between $\neg A \wedge C$ and $\neg A \wedge \neg C$. The posterior that results is therefore proportional to
$$Q(A)\,P(A \wedge C)\ :\ Q(A)\,P(A \wedge \neg C)\ :\ Q(\neg A)\,P(\neg A \wedge C)\ :\ Q(\neg A)\,P(\neg A \wedge \neg C).$$

Now we pool with $R$ using similar reasoning. Applying upco to the updated probability of $C$ and $R(C)$ gives the new probability for $C$, and Jeffrey conditionalization again divides it up proportionally, so the cells inside $C$ pick up a factor of $R(C)$ and the cells inside $\neg C$ a factor of $R(\neg C)$:
$$Q(A)\,R(C)\,P(A \wedge C)\ :\ Q(A)\,R(\neg C)\,P(A \wedge \neg C)\ :\ Q(\neg A)\,R(C)\,P(\neg A \wedge C)\ :\ Q(\neg A)\,R(\neg C)\,P(\neg A \wedge \neg C).$$

In the case where the agent pools with $R$ first and $Q$ second, parallel calculations pass through a different intermediate distribution, but they end with a final distribution having exactly these same proportions.
As Theorem 1 claimed, the ultimate posterior is the same either way.
This convergence may seem magical, but its inevitability emerges if we look past the normalizing constants to the relative proportions. We began with the proportions $P(A \wedge C) : P(A \wedge \neg C) : P(\neg A \wedge C) : P(\neg A \wedge \neg C)$. Then we multiplied the first two entries by $Q(A)$ and the last two by $Q(\neg A)$, since the new probabilities of $A$ and $\neg A$ are proportional to $P(A)\,Q(A)$ and $P(\neg A)\,Q(\neg A)$. This gave us the relative proportions of the first posterior, up to a common factor. Then, because the next round of pooling treats $C$ in just the same way, we multiplied the first and third entries by $R(C)$ and the second and fourth by $R(\neg C)$, again up to a common factor.

Updating in the opposite order, we began again with $P(A \wedge C) : P(A \wedge \neg C) : P(\neg A \wedge C) : P(\neg A \wedge \neg C)$, but multiplied the first and third entries by $R(C)$ and the second and fourth by $R(\neg C)$ first, and only then multiplied the first two entries by $Q(A)$ and the last two by $Q(\neg A)$.

In both cases the final proportions had to be the same because, ultimately, all we did was multiply the values $P(A \wedge C)$, $P(A \wedge \neg C)$, $P(\neg A \wedge C)$, and $P(\neg A \wedge \neg C)$ by the values $Q(A)\,R(C)$, $Q(A)\,R(\neg C)$, $Q(\neg A)\,R(C)$, and $Q(\neg A)\,R(\neg C)$, respectively (then divide the results by a common factor). We can think of this as multiplying by $Q(A)$, $Q(A)$, $Q(\neg A)$, $Q(\neg A)$ first and by $R(C)$, $R(\neg C)$, $R(C)$, $R(\neg C)$ second, or the other way around. The commutativity of Jeffrey pooling with upco follows from the commutativity of multiplication.
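To make this concrete, here is a self-contained check in Python. Since the specific prior table and value of $R(C)$ are not essential to the point, the cell values and $R(C)$ below are simply assumed for illustration; $P(A) = 0.4$ and $Q(A) = 0.8$ are the values from the running example.

```python
def upco(p, q):
    return p * q / (p * q + (1 - p) * (1 - q))

def jeffrey_pool(P, source_prob, prop):
    """Two-step Jeffrey pooling with upco on proposition prop ('A' or 'C')."""
    prior = sum(v for cell, v in P.items() if prop in cell)
    new = upco(prior, source_prob)
    return {cell: v * (new / prior if prop in cell else (1 - new) / (1 - prior))
            for cell, v in P.items()}

P = {("A", "C"): 0.10, ("A", "~C"): 0.30,     # P(A) = 0.40, as in the text
     ("~A", "C"): 0.25, ("~A", "~C"): 0.35}   # (cell values are hypothetical)
Q_A, R_C = 0.8, 0.7                           # Q(A) from the text; R(C) assumed

one_way = jeffrey_pool(jeffrey_pool(P, Q_A, "A"), R_C, "C")
other_way = jeffrey_pool(jeffrey_pool(P, R_C, "C"), Q_A, "A")
print(all(abs(one_way[c] - other_way[c]) < 1e-12 for c in P))  # True
```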
2. Only Upco Ensures Jeffrey Pooling Commutes
While upco ensures that Jeffrey pooling commutes, linear pooling doesn’t; nor do geometric and harmonic pooling. Indeed, among the pooling rules that boast four plausible properties—properties the rules just named all share—upco is the only one that ensures this. As we will indicate in the course of introducing these properties, we don’t think they will be desirable in all situations. But we do claim that they are desirable in a great many important ones. And in those cases, upco is the only rule that delivers.
The first property is monotonicity: if we fix $Q(A) = 1/2$, then as $P(A)$ increases, so does the pooled probability $P'(A)$. This is a familiar feature of linear pooling, and upco has it too.6 Notice that this is also a feature of conditionalization in many cases. For any proposition $E$, conditionalization sets $P'(A) = P(A \mid E)$, which Bayes’ theorem renders
$$P'(A) = \frac{P(E \mid A)\,P(A)}{P(E \mid A)\,P(A) + P(E \mid \neg A)\,P(\neg A)}.$$
If the likelihood terms $P(E \mid A)$ and $P(E \mid \neg A)$ stay fixed as $P(A)$ changes, then $P'(A)$ increases with $P(A)$.7
The second property our argument will rely on is uniformity preservation: if $P(A) = Q(A) = 1/2$, then $P'(A) = 1/2$ too. Crudely put, two empty heads are no better than one. A bit less crudely, if neither party has an opinion about the question at hand, then combining their opinions doesn’t change this. There are conceivable cases where this feature would be undesirable. For example, the fact that both parties are so far ignorant about a question could indicate a conspiracy to keep everyone in the dark. But such cases are the exception rather than the rule.
Third is continuity: in nearly all cases, if we fix $P(A)$ and let $Q(A)$ approach a value $q$, then the pool of $P(A)$ and $q$ should be the limit of the pools of $P(A)$ and $Q(A)$ as $Q(A)$ approaches $q$. Nearly all? Yes, because we have to ensure that all of the pools just mentioned are defined. So we restrict to cases in which, as $Q(A)$ approaches $q$, the pool of $P(A)$ and $Q(A)$ is always defined, and the pool of $P(A)$ and $q$ is as well.

To illustrate continuity, fix $P(A)$ and consider what happens in linear pooling as $Q(A)$ decreases to $0$. As $Q(A)$ gets smaller, the value of $\frac{P(A) + Q(A)}{2}$ gets closer and closer to $\frac{P(A)}{2}$. And, indeed, that is the value the pool takes when $Q(A)$ finally does reach $0$. There is no sudden jump in the value of $P'(A)$ when $Q(A)$ finally hits $0$.
As with uniformity preservation, there are conceivable cases where this feature would not be appropriate. These might arise if we were to think that some probabilities have a particular significance. For instance, a Lockean might think there is a probabilistic threshold beyond which you count as believing the proposition to which you assign the probability, but below which you don’t. And they might think that sudden change in doxastic status should be reflected in our pooling rule—perhaps your probability gains more weight when it suddenly becomes a belief. We’ll assume this isn’t the case.
Our fourth property is symmetry: swapping the values of $P(A)$ and $Q(A)$ makes no difference to $P'(A)$. This is perhaps the most restrictive feature, since exceptions are commonplace. When one party is more competent or better informed than the other, it matters who holds which opinion. Frequently we will want to give more “weight” to $P(A)$ than to $Q(A)$, or vice versa, in which case exchanging their values should make a difference.
But our argument only concerns cases where this is not so: cases where the two parties are equally competent and well informed on the topic.8 When e.g. one party has more information, upco may not be appropriate (although in some cases it will be appropriate even then).
There are asymmetrically weighted versions of the various pooling rules we’ve mentioned, which may be appropriate to such cases. But we won’t address these cases here. If we can show that upco is specially suited when symmetry is appropriate, that will be a significant step forward. Not to mention a strong indicator that a weighted version of upco would be the way to go in some asymmetric cases.
Finally, there’s an assumption implicit in the very idea of a pooling rule, which we should pause to examine. Since a pooling rule is a function of $P(A)$ and $Q(A)$ and nothing else, we are assuming from the outset that $P(A)$ and $Q(A)$ are the only factors relevant to $P'(A)$. But other of $Q$’s opinions could be relevant, such as their opinion about what evidence $Q(A)$ is based on. Even the fact that it’s an opinion about the proposition $A$, and not some other proposition, could be relevant. Someone might be competent on the topic of $A$ but incompetent on the topic of $C$. In which case you might apply one formula when faced with their opinion about $A$, but use another should they opine about $C$.
So there is a tacit fifth assumption here, which we might call extensionality. By assuming extensionality, however, we are not assuming that there is one pooling rule appropriate to all circumstances, regardless of your background beliefs or the content of the question under discussion. On the contrary, different rules will be suited to different circumstances. But the question we are asking is: which rules are suited to circumstances where the above four conditions hold, Jeffrey conditionalization is appropriate, and the order in which sources are consulted should not matter.
In answer to this question, we offer the following result.
Theorem 2. Among the monotonic, continuous, uniformity preserving, and symmetric pooling rules, only upco ensures that Jeffrey pooling commutes for any regular $P$, and any $Q$ and $R$.
As we noted in connection with Theorem 1, upco ensures Jeffrey pooling commutes even when $P$ is not regular, provided the relevant operations are defined. But Theorem 2 tells us no other pooling rule can claim this feature, even if we restrict our attention to regular $P$.
It’s important to appreciate what this result does not say: it does not tell us that rules like linear pooling never commute. It is possible to get lucky with linear pooling and encounter two sources where the order doesn’t matter. For example, suppose $P$ already agrees with $Q$ about $A$, and agrees with $R$ about $C$. Then, linear pooling will keep $P$’s opinion fixed throughout. Whichever order they encounter $Q$ and $R$ in, their opinion at the end will be the same as when they started. But Theorem 2 tells us this can’t be counted on to hold generally; only upco is commutative regardless of the particulars of $P$, $Q$, and $R$.
It’s also important to recognize that there are cases where the order should matter. For example, imagine you’re interviewing pundits instead of reading pre-written opinion columns. And pundit $Q$ can be counted on for a serious opinion if you consult them first, but they’ll be so insulted if you talk to $R$ first that they’ll lose their cool and adopt wild views. Then it really matters what order you hear their opinions in.
But again, we do not mean to argue that upco is always the best rule. Rather, we aim to show that upco is the only rule that will serve in all cases where the assumptions we’ve laid out are reasonable. And one of those assumptions is that the order shouldn’t matter.
That completes our argument for upco. We now turn to locating Theorems 1 and 2 in the context of existing work on Jeffrey conditionalization and commutativity. In Section 3, we show a surprising and illuminating connection with an early result due to Field (1978). Then, in Section 4, we explain how Wagner’s (2002) theorems relate.
3. Testimony of the Senses
Field (1978) was the first to identify conditions that make Jeffrey conditionalization commutative. How does his discovery fit with our results, especially Theorem 1?
Field discusses cases where sensory experience, rather than another person’s opinion, prompts the shift from $P$ to $P'$. He assumes that each experience has an associated proposition $A$ and number $x > 0$, where $x$ reflects how strongly the experience speaks in favour of $A$.9
Field’s proposal is that we should respond to sense experience by the following two-step procedure.
Field Updating:

- Step 1. Update $P(A)$ to $P'(A)$ using $x$ as follows:
  $$P'(A) = \frac{x\,P(A)}{x\,P(A) + P(\neg A)}. \qquad (2)$$
- Step 2. Update other credences by Jeffrey conditionalization:
  $$P'(B) = P(B \mid A)\,P'(A) + P(B \mid \neg A)\,P'(\neg A).$$
We will call this procedure Field updating on $(A, x)$. Field shows that his procedure is commutative: Field updating on $(A, x)$ and then $(C, y)$ has the same result as Field updating on $(C, y)$ followed by $(A, x)$.
This may sound familiar. And if you squint, you might see that Field’s Equation (2) is actually the same as upco’s Equation (1). It’s just that $x$ is on the odds scale from $0$ to $\infty$, rather than the probability scale from $0$ to $1$. To convert from odds to probabilities, we can divide through by $1 + x$, in both the numerator and the denominator:
$$P'(A) = \frac{\frac{x}{1+x}\,P(A)}{\frac{x}{1+x}\,P(A) + \frac{1}{1+x}\,P(\neg A)}. \qquad (3)$$
And this is the same as Equation (1), where $Q$’s probabilities are $Q(A) = \frac{x}{1+x}$ and $Q(\neg A) = \frac{1}{1+x}$.
So, formally speaking, Field updating is the same thing as Jeffrey pooling with upco. And Theorem 1 is just a restatement of Field’s classic result.
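A quick numerical check of this equivalence; the prior and the strength value below are arbitrary choices of ours.

```python
def field_update(p_A, x):
    """Field's Equation (2): combine the prior with an odds-scale strength x."""
    return x * p_A / (x * p_A + (1 - p_A))

def upco(p, q):
    """Upco / multiplicative pooling, Equation (1)."""
    return p * q / (p * q + (1 - p) * (1 - q))

p_A, x = 0.4, 4.0      # arbitrary prior and odds-scale strength
q_A = x / (1 + x)      # the "naive" probability corresponding to odds x
print(field_update(p_A, x), upco(p_A, q_A))  # both ~0.727
```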
This formal parallel suggests two helpful heuristics for thinking about Field’s way of responding to sensory experience.
First, we might think of Equation (3) as pooling your prior opinion with a “naive” opinion proposed by your sensory system. Notice that, when $P(A) = 1/2$, Equation (3) delivers $P'(A) = \frac{x}{1+x}$. So if you have no prior opinion about $A$, you will defer to your sensory system’s proposal, $\frac{x}{1+x}$. We can thus think of $x$ as the odds your sensory system recommends based on the experience alone, absent any prior information.
However, when you do have a prior opinion about $A$, the naive recommendation has to be merged with it. Field’s proposal is to use upco to combine the naive recommendation with your prior opinion, which makes updates commutative under Jeffrey conditionalization. Indeed, Theorem 2 shows that Field’s proposal is the only way to do this using a monotonic, continuous, uniformity preserving, and symmetric pooling rule.
A second way of understanding Field’s proposal exploits a formal analogy between upco and Bayes’ theorem. Notice that Equation (3) just is Bayes’ theorem, if we think of the terms $\frac{x}{1+x}$ and $\frac{1}{1+x}$ not as unconditional probabilities, but as likelihoods. That is, imagine we are calculating $P(A \mid E)$ for some proposition $E$. If the likelihoods are $P(E \mid A) = \frac{x}{1+x}$ and $P(E \mid \neg A) = \frac{1}{1+x}$, then Equation (3) is just Bayes’ theorem.
What is the proposition $E$ here? Let $E$ describe all epistemically relevant features of the experience prompting the update. The original motivation for Jeffrey conditionalization was that you may not be able to represent $E$ at the doxastic level—or maybe you can, but you don’t have any priors involving $E$, because it’s too subtle or specific. So you can’t conditionalize, because $P(A \mid E)$ is undefined.
But we can extend $P$ to a compatible distribution $\hat{P}$ that does encompass $E$, by stipulating
$$\hat{P}(E \mid A) = \frac{x}{1+x} \qquad\text{and}\qquad \hat{P}(E \mid \neg A) = \frac{1}{1+x}.$$
Then Equation (3) becomes conditionalization via Bayes’ theorem:
$$P'(A) = \frac{\hat{P}(E \mid A)\,\hat{P}(A)}{\hat{P}(E \mid A)\,\hat{P}(A) + \hat{P}(E \mid \neg A)\,\hat{P}(\neg A)} = \hat{P}(A \mid E).$$
So this interpretation conceives of Field’s proposal as conditionalizing on the ineffable but epistemically essential qualities of sensory experience, by relying on the sensory system to do the effing and the expecting—i.e. to represent the experience’s epistemically relevant features, and supply the likelihood values Bayes’ theorem requires.
4. Wagner’s Theorems
There’s also an important connection between our Theorem 2 and a classic result about Jeffrey conditionalization due to Wagner (2002).
Wagner analyzes Jeffrey conditionalization in terms of “Bayes factors.” When we update a probability distribution from $P$ to $P'$, the Bayes factor of $A$ is the ratio of its new odds to its old odds:10
$$\frac{P'(A)/P'(\neg A)}{P(A)/P(\neg A)}.$$
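For instance, a small helper of ours computes the Bayes factor of the upco update from the running example.

```python
def bayes_factor(p_old, p_new):
    """Ratio of the new odds of A to the old odds of A."""
    return (p_new / (1 - p_new)) / (p_old / (1 - p_old))

# Upco-ing P(A) = 0.4 with Q(A) = 0.8 moved us to P'(A) = 8/11 (~0.73).
# The resulting Bayes factor is 4, i.e. Q(A)'s odds 0.8/0.2 (cf. Section 3).
print(bayes_factor(0.4, 8 / 11))  # 4.0 (up to rounding)
```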
Crudely put, Wagner’s insight is that Jeffrey conditionalization commutes when, and pretty much only when, the Bayes factors are consistent regardless of the order. This needs some explaining.
Suppose two agents begin with the same prior distribution, $P$. Then they update as in Figure 1. That is, one does a Jeffrey conditionalization update on $A$ that yields a Bayes factor of $\beta$, followed by another on $C$ that yields a Bayes factor of $\gamma$. The second agent starts with a Jeffrey conditionalization update on $C$ that yields the Bayes factor $\gamma'$, then does a second on $A$ that yields the Bayes factor $\beta'$. At the end of this process, we label their posteriors $P_1$ and $P_2$, respectively.
Wagner’s first result is that the two agents will end up with the same ultimate posterior if the Bayes factors for their respective updates on $A$ are the same, and likewise for their updates on $C$. As before we will assume regularity to ensure everything is defined.11
Theorem 3 (Wagner). In the schema of Figure 1, if $P$ is regular, then $\beta = \beta'$ and $\gamma = \gamma'$ together imply $P_1 = P_2$.
Loosely speaking, Bayes factor “consistency” is sufficient for Jeffrey conditionalization updates to commute.
Field updating produces exactly this sort of consistency. We can verify with a bit of algebra that a given input value $x$ always yields the same Bayes factor. In fact, solving for $x$ in Equation (2) we find that $x$ just is the Bayes factor:
$$x = \frac{P'(A)/P'(\neg A)}{P(A)/P(\neg A)}.$$
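In a bit more detail, Equation (2) says the new odds are just the old odds scaled by $x$, so the ratio of new odds to old odds is $x$ itself:
$$\frac{P'(A)}{P'(\neg A)} = \frac{x\,P(A)}{P(\neg A)}, \qquad\text{hence}\qquad \frac{P'(A)/P'(\neg A)}{P(A)/P(\neg A)} = x.$$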
So we can think of Field’s Theorem 1 as a corollary of Wagner’s Theorem 3.
But, crucially for us here, Wagner also shows that this kind of Bayes factor consistency is necessary for commutativity, in almost every case. Exceptions are possible, for example if $A$ and $C$ are the same proposition. But our regularity assumption precludes this since $A \wedge \neg C$ can’t have positive probability if $A = C$. In fact, regularity suffices to rule out all exceptions.12
Theorem 4 (Wagner). In the schema of Figure 1, if $P$ is regular then $P_1 = P_2$ implies $\beta = \beta'$ and $\gamma = \gamma'$.
Does this theorem mean that only Field’s Equation (2) can make Jeffrey conditionalization commute? No, other rules can also consistently yield the same Bayes factor for the same value of $x$.
One silly example is the “stubborn” rule, which just ignores $x$ and always keeps $P'(A) = P(A)$. Substituting this rule into Step 1 of Field updating makes the Bayes factor $1$ for all updates. And, trivially, updating this way is commutative: if you never change your mind, the order in which you encounter various sensory experiences won’t make any difference to your final opinion.
A less trivial example—call it “upsidedownco”—replaces Field’s Equation (2) with
$$P'(A) = \frac{P(A)}{P(A) + x\,P(\neg A)}.$$
Doing a bit of algebra to isolate the Bayes factor, we find that this implies
$$\frac{P'(A)/P'(\neg A)}{P(A)/P(\neg A)} = \frac{1}{x}.$$
So the same value of $x$ always results in the same Bayes factor. By Theorem 3 then, this variation on Field updating is also commutative.
However, both of these alternate rules violate the conditions we laid out in Section 2. Specifically, they violate symmetry. The stubborn rule is plainly not symmetric, since it privileges $P(A)$ and neglects the $x$ proposed by experience entirely. And upsidedownco increases as $P(A)$ increases, yet decreases as $x$ increases.
So Wagner’s Theorem 4 is not, by itself, enough to secure Field’s proposed Equation (2). Or, returning now to the social interpretation of upco and Equation (1), Wagner’s result doesn’t secure our Theorem 2. But with the help of further conditions like symmetry, we can rule out alternatives like the stubborn rule and upsidedownco. And this is exactly how our proof of Theorem 2 proceeds. We pick up where Wagner’s result leaves off, using the four conditions of Section 2 to rule out any option but upco.
5. Conclusion
No way of combining probabilities is best for all purposes. For some purposes, there are even impossibility results showing that no pooling rule will get you everything you want.13 But for some purposes, we can identify a single pooling rule that is the only one that will do. If your purpose is to combine your probability with an epistemic peer’s and Jeffrey conditionalize on the result, and you want to be assured of commutativity, then upco is the only monotonic, continuous, uniformity preserving, and symmetric game in town.
6. Appendix: Theorems & Proofs
Here we generalize and prove Theorems 1, 2 and 4. We don’t prove Theorem 3, proving Theorem 1 directly instead, for simplicity. Readers interested in a proof of Theorem 3 can consult Wagner (2002, Theorem 3.1).
6.1 Pooling Operators
In the main text we discussed pooling rules, which combine $P(A)$ and $Q(A)$ into a new probability for $A$. Since the probability of $\neg A$ is implied by the probability of $A$, these rules effectively combine probabilities over a two-cell partition, $\{A, \neg A\}$. For partitions with more than two elements, we need to extend this definition.
Definition 1 (Pooling operator). A pooling operator $F$ takes a countable partition $\mathcal{X}$ and two probability functions $P$ and $Q$ defined on an agenda that includes $\mathcal{X}$, and returns a partial probability function $F_{\mathcal{X}}(P, Q)$ defined just on $\mathcal{X}$.
Upco generalizes to countable partitions in the obvious way.
Definition 2 (Upco on countable partitions). Suppose $\mathcal{X}$ is a countable partition, and $P$ and $Q$ are probability functions defined on an agenda that includes $\mathcal{X}$. Suppose further that $P(X)\,Q(X) > 0$ for at least one element $X$ of $\mathcal{X}$. Then the upco of $P$ and $Q$ over $\mathcal{X}$, denoted $P \ast_{\mathcal{X}} Q$, assigns to each $X \in \mathcal{X}$
$$(P \ast_{\mathcal{X}} Q)(X) = \frac{P(X)\,Q(X)}{\sum_{Y \in \mathcal{X}} P(Y)\,Q(Y)}.$$
Notice that upco is undefined if there is no $X$ such that $P(X)\,Q(X) > 0$. That is, upco is defined only when $P$ and $Q$ have overlapping support on $\mathcal{X}$. The support of a probability function on a partition is the set of those events from that partition to which it assigns positive probability. In symbols, we write $\mathrm{supp}_{\mathcal{X}}(P) = \{X \in \mathcal{X} : P(X) > 0\}$. In this notation, $P \ast_{\mathcal{X}} Q$ is defined just in case $\mathrm{supp}_{\mathcal{X}}(P) \cap \mathrm{supp}_{\mathcal{X}}(Q) \neq \emptyset$. What’s more, when it is defined, the support of the upco of $P$ and $Q$ is the intersection of their individual supports. More formally,
$$\mathrm{supp}_{\mathcal{X}}(P \ast_{\mathcal{X}} Q) = \mathrm{supp}_{\mathcal{X}}(P) \cap \mathrm{supp}_{\mathcal{X}}(Q).$$
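A minimal Python sketch of this generalized formula, with dictionaries standing in for the restrictions of $P$ and $Q$ to the partition; the representation and the example numbers are ours.

```python
def upco_partition(P, Q):
    """Upco of P and Q over a finite partition, with P and Q given as dicts
    mapping each cell to its probability. Defined only when supports overlap."""
    weights = {X: P[X] * Q[X] for X in P}
    Z = sum(weights.values())
    if Z == 0:
        raise ValueError("upco undefined: P and Q have disjoint supports")
    return {X: w / Z for X, w in weights.items()}

# Example: a three-cell partition, with made-up probabilities.
P = {"X1": 0.5, "X2": 0.3, "X3": 0.2}
Q = {"X1": 0.2, "X2": 0.2, "X3": 0.6}
print(upco_partition(P, Q))  # {'X1': ~0.357, 'X2': ~0.214, 'X3': ~0.429}
```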
We now extend the definition of Jeffrey pooling to countable partitions, and introduce more compact notation.
Definition 3 (Jeffrey pooling). Let $\mathcal{X}$ be a countable partition, and let $P$ and $Q$ be probability functions such that (i) $P \ast_{\mathcal{X}} Q$ is defined and (ii) $\mathrm{supp}_{\mathcal{X}}(P \ast_{\mathcal{X}} Q) \subseteq \mathrm{supp}_{\mathcal{X}}(P)$. The Jeffrey pool of $P$ and $Q$ on $\mathcal{X}$, denoted $P \circledast_{\mathcal{X}} Q$, is the probability function defined by
$$(P \circledast_{\mathcal{X}} Q)(B) = \sum_{X \in \mathcal{X}} P(B \mid X)\,(P \ast_{\mathcal{X}} Q)(X).$$
Note that restriction (ii) is required to ensure that $P(B \mid X)$ is defined for every $X$ where $(P \ast_{\mathcal{X}} Q)(X)$ is positive. This ensures that $P \circledast_{\mathcal{X}} Q$ is defined and a probability function.

Notice that, since the support of the upco of $P$ and $Q$ is the overlap of their individual supports, this condition is automatically satisfied if the upco of $P$ and $Q$ is defined: $\mathrm{supp}_{\mathcal{X}}(P \ast_{\mathcal{X}} Q) = \mathrm{supp}_{\mathcal{X}}(P) \cap \mathrm{supp}_{\mathcal{X}}(Q) \subseteq \mathrm{supp}_{\mathcal{X}}(P)$. So, if $P \ast_{\mathcal{X}} Q$ is defined, so is $P \circledast_{\mathcal{X}} Q$.
6.2 Field’s Sufficiency Theorem
We now state and prove the general version of Theorem 1: upco ensures that Jeffrey pooling commutes, given compatible priors.
Theorem 5 (Field). If $(P \circledast_{\mathcal{X}} Q) \circledast_{\mathcal{Y}} R$ and $(P \circledast_{\mathcal{Y}} R) \circledast_{\mathcal{X}} Q$ are defined, then
$$(P \circledast_{\mathcal{X}} Q) \circledast_{\mathcal{Y}} R = (P \circledast_{\mathcal{Y}} R) \circledast_{\mathcal{X}} Q.$$
Proof: The proof generalizes the example from Section 1. Intuitively, the key idea is that $P \circledast_{\mathcal{X}} Q$ just multiplies the outcomes within a cell $X$ by $Q(X)$, and renormalizes. More formally, if $B \subseteq X$ for some $X \in \mathcal{X}$, then
$$(P \circledast_{\mathcal{X}} Q)(B) = c\,P(B)\,Q(X),$$
where $c$ is a normalizing constant, the same for every such $B$.

So take an arbitrary proposition $B$, and consider for each $X \in \mathcal{X}$ and $Y \in \mathcal{Y}$ the proposition $B \wedge X \wedge Y$. If one of $P(B \wedge X \wedge Y)$, $Q(X)$, or $R(Y)$ is zero, then
$$\big((P \circledast_{\mathcal{X}} Q) \circledast_{\mathcal{Y}} R\big)(B \wedge X \wedge Y) = 0 = \big((P \circledast_{\mathcal{Y}} R) \circledast_{\mathcal{X}} Q\big)(B \wedge X \wedge Y).$$
If on the other hand $P(B \wedge X \wedge Y)$, $Q(X)$, and $R(Y)$ are all positive, then
$$(P \circledast_{\mathcal{X}} Q)(B \wedge X \wedge Y) = c_1\,P(B \wedge X \wedge Y)\,Q(X),$$
where $c_1$ is a normalizing constant independent of $X$ and $Y$, and thus
$$\big((P \circledast_{\mathcal{X}} Q) \circledast_{\mathcal{Y}} R\big)(B \wedge X \wedge Y) = c_2\,P(B \wedge X \wedge Y)\,Q(X)\,R(Y),$$
where $c_2$ is another normalizing constant independent of $X$ and $Y$. Similarly,
$$(P \circledast_{\mathcal{Y}} R)(B \wedge X \wedge Y) = c_3\,P(B \wedge X \wedge Y)\,R(Y), \qquad \big((P \circledast_{\mathcal{Y}} R) \circledast_{\mathcal{X}} Q\big)(B \wedge X \wedge Y) = c_4\,P(B \wedge X \wedge Y)\,R(Y)\,Q(X),$$
where $c_3$ and $c_4$ are again normalizing constants independent of $X$ and $Y$.

This shows that the probabilities $(P \circledast_{\mathcal{X}} Q) \circledast_{\mathcal{Y}} R$ and $(P \circledast_{\mathcal{Y}} R) \circledast_{\mathcal{X}} Q$ assign to the various $B \wedge X \wedge Y$ have the same proportions. And by a parallel argument, the same is true for the various $\neg B \wedge X \wedge Y$. So the two distributions have the same proportions over the partition $\{B \wedge X \wedge Y : X \in \mathcal{X},\, Y \in \mathcal{Y}\} \cup \{\neg B \wedge X \wedge Y : X \in \mathcal{X},\, Y \in \mathcal{Y}\}$, hence must be identical on this partition. Since $B$ is a union of elements from this partition, they must assign it the same probability. But $B$ was arbitrary. ☐
6.3 Wagner’s Necessity Theorem
Wagner identifies an almost necessary condition for Jeffrey conditionalization updates to commute. Note that here we are concerned with Jeffrey conditionalization in general: the shift from $P$ to $P'$ needn’t be driven by a pooling rule; it could be prompted by anything. Wagner’s theorem concerns any transition from $P$ to $P'$ that can be described in terms of Jeffrey’s formula.
Definition 4 (Jeffrey conditionalization). We say that $P'$ comes from $P$ by Jeffrey conditionalization on the partition $\mathcal{X}$ if $\mathrm{supp}_{\mathcal{X}}(P') \subseteq \mathrm{supp}_{\mathcal{X}}(P)$ and
$$P'(B) = \sum_{X \in \mathcal{X}} P(B \mid X)\,P'(X) \quad\text{for every proposition } B.$$
We will assume that $P$ is regular on the partitions involved; Wagner assumes something weaker, but we only need the result for regular $P$. Informally, the result says that, for Jeffrey updates of a regular prior to commute, the Bayes factors on each partition must match.
Theorem 6 (Wagner). Let $\mathcal{X}$ and $\mathcal{Y}$ be countable partitions such that $P$ is regular on $\{X \wedge Y : X \in \mathcal{X},\, Y \in \mathcal{Y}\}$. Let $P_1'$ come from $P$ by Jeffrey conditionalization on $\mathcal{X}$, $P_1$ from $P_1'$ by Jeffrey conditionalization on $\mathcal{Y}$, $P_2'$ from $P$ by Jeffrey conditionalization on $\mathcal{Y}$, and $P_2$ from $P_2'$ by Jeffrey conditionalization on $\mathcal{X}$. If $P_1 = P_2$, then
$$\frac{P_1'(X)/P_1'(X')}{P(X)/P(X')} = \frac{P_2(X)/P_2(X')}{P_2'(X)/P_2'(X')} \qquad\text{and}\qquad \frac{P_2'(Y)/P_2'(Y')}{P(Y)/P(Y')} = \frac{P_1(Y)/P_1(Y')}{P_1'(Y)/P_1'(Y')}$$
for all $X, X'$ in $\mathcal{X}$, and all $Y, Y'$ in $\mathcal{Y}$.
Proof. By the rigidity of Jeffrey conditionalization, for all $X \in \mathcal{X}$ and $Y \in \mathcal{Y}$:
$$P_1'(Y \mid X) = P(Y \mid X), \qquad P_1(X \mid Y) = P_1'(X \mid Y),$$
$$P_2'(X \mid Y) = P(X \mid Y), \qquad P_2(Y \mid X) = P_2'(Y \mid X).$$
Coupling the first two equations, and the last two, we get:
$$P_1(X \wedge Y) = P_1(Y)\,\frac{P_1'(X)\,P(Y \mid X)}{P_1'(Y)}, \qquad (4)$$
$$P_2(X \wedge Y) = P_2(X)\,\frac{P_2'(Y)\,P(X \mid Y)}{P_2'(X)}. \qquad (5)$$
Now take any $X, X'$ in $\mathcal{X}$ and any $Y$ in $\mathcal{Y}$. Using Equation (4), we can analyze our first Bayes factor as follows:
$$\frac{P_1'(X)/P_1'(X')}{P(X)/P(X')} = \frac{P_1(X \wedge Y)}{P_1(X' \wedge Y)} \cdot \frac{P(Y \mid X')}{P(Y \mid X)} \cdot \frac{P(X')}{P(X)} = \frac{P_1(X \wedge Y)}{P_1(X' \wedge Y)} \cdot \frac{P(X' \wedge Y)}{P(X \wedge Y)}.$$
Parallel reasoning with Equation (5) gives:
$$\frac{P_2(X)/P_2(X')}{P_2'(X)/P_2'(X')} = \frac{P_2(X \wedge Y)}{P_2(X' \wedge Y)} \cdot \frac{P(X' \wedge Y)}{P(X \wedge Y)}.$$
Since $P_1 = P_2$, the two right-hand sides are equal, so the Bayes factors over $\mathcal{X}$ are identical. The identity of the Bayes factors over $\mathcal{Y}$ follows similarly. ☐
6.4 Our Theorem
Here we use Wagner’s theorem to show the general form of Theorem 2: upco is the only monotonic, uniformity preserving, continuous, symmetric, and extensional pooling operator capable of ensuring that Jeffrey pooling commutes.
Our strategy: first prove that any pooling operator with these features, and which ensures Jeffrey pooling commutes for regular probability functions, must agree with upco when the pooled functions are regular. Then we’ll appeal to continuity to show that any pooling operator that agrees with upco on the regular functions agrees with upco everywhere it’s defined.
We begin by defining terms:
Definition 5 (Uniform). A distribution $P$ is uniform over $\mathcal{X}$ if $P(X) = 1/|\mathcal{X}|$ for all $X$ in $\mathcal{X}$.
Definition 6 (Uniformity preservation). A pooling operator $F$ is uniformity preserving if $F_{\mathcal{X}}(P, Q)$ is uniform over $\mathcal{X}$ whenever $P$ and $Q$ are uniform over $\mathcal{X}$.
Notice that we must set the infinite case aside now, because uniform distributions don’t exist over countably infinite partitions.
Definition 7 (Monotonicity). A pooling operator $F$ is monotone if, when $U$ is uniform over $\mathcal{X}$, $P(X) > P^*(X)$ implies $F_{\mathcal{X}}(P, U)(X) > F_{\mathcal{X}}(P^*, U)(X)$.
Note that this is a very restricted form of monotonicity, since it only concerns the case where one argument is uniform.
Definition 8 (Symmetry). A pooling operator $F$ is symmetric if $F_{\mathcal{X}}(P, Q) = F_{\mathcal{X}}(Q, P)$ for all $\mathcal{X}$, $P$, and $Q$.
Definition 9 (Continuity). A pooling operator $F$ is continuous if
$$\lim_{i \to \infty} F_{\mathcal{X}}(P_i, Q_i) = F_{\mathcal{X}}\Big(\lim_{i \to \infty} P_i,\ \lim_{i \to \infty} Q_i\Big)$$
whenever $F_{\mathcal{X}}(P_i, Q_i)$ is defined for each $i$ and $F_{\mathcal{X}}(\lim_i P_i, \lim_i Q_i)$ is defined.

The restriction avoids ruling out operators like geometric pooling and upco from the get go, since there are sequences $(P_i, Q_i)$ such that $F_{\mathcal{X}}(P_i, Q_i)$ is defined for each $i$, but $F_{\mathcal{X}}(\lim_i P_i, \lim_i Q_i)$ is not defined.
Definition 10 (Extensionality). A pooling operator $F$ is extensional if, given partitions $\mathcal{X} = \{X_1, \ldots, X_n\}$ and $\mathcal{Y} = \{Y_1, \ldots, Y_n\}$ of equal size, $P(X_i) = P^*(Y_i)$ for all $i$ and $Q(X_i) = Q^*(Y_i)$ for all $i$ imply $F_{\mathcal{X}}(P, Q)(X_i) = F_{\mathcal{Y}}(P^*, Q^*)(Y_i)$ for all $i$.
The main work in establishing the theorem of this section is showing that the pooling operator must treat uniform distributions as “neutral.” That is, pooling any distribution with a uniform distribution just returns the original distribution. We now use the conditions just defined, together with commutativity for Jeffrey pooling, to derive this feature in the case in which the function pooled with the uniform one is regular.
Lemma 7. Suppose that Jeffrey pooling with the pooling operator $F$ commutes for any finite partitions $\mathcal{X}$ and $\mathcal{Y}$ and any $P$, $Q$, and $R$ such that $P$, $Q$, and $R$ are regular. Then, if the pooling operator is uniformity preserving, monotonic, symmetric, continuous, and extensional, it must treat uniform distributions as neutral. That is, for $U$ uniform over $\mathcal{X}$ and $Q$ regular on $\mathcal{X}$, $F_{\mathcal{X}}(Q, U)(X) = Q(X)$ for all $X$ in $\mathcal{X}$.
Proof: Let and be finite partitions of size , let be uniform over , and let be positive for every element of . Define as follows, where :
Observe that , so is uniform over and over . Note for later that
, , and are regular, so Theorem 6 gives the following Bayes factor identity for all :
(6)Since is uniform over , the denominator on the left is . And since is also uniform over , uniformity preservation implies that the numerator is also . Also, for all by the definition of Jeffrey pooling. So Equation (6) reduces to
Since this holds for all , the distributions and have the same relative proportions over , hence must actually be the same distribution. That is, for all :
Using symmetry to move to the left, and then substituting for on grounds of extensionality, this becomes:
(7)Now, by definition the right hand side, is:
So in the limit as goes to , assigns over the same values assigns over . Let be this distribution that approaches, i.e. is a copy over of the assignments makes over :
for all . By continuity we have for all :
The last identity here is the one we need.
Now suppose for a contradiction that for some . Then there must be an for which . Since copies , this implies . Thus we have:
And this contradicts monotonicity. By extensionality, the partition doesn’t matter, since is uniform over both and . So increasing the value of the non-uniform input should increase the corresponding output.
This shows that for all . Since was uniform over and regular, extensionality then implies that for any uniform over and regular on , for all , as desired. ☐
We now show that only upco has the five features defined above, and makes Jeffrey pooling commutative.
Theorem 8. Suppose that Jeffrey pooling with the pooling operator $F$ commutes for any finite partitions $\mathcal{X}$ and $\mathcal{Y}$ and any compatible $P$, $Q$, and $R$. Then, if the pooling operator is uniformity preserving, monotonic, symmetric, continuous, and extensional, it must be upco.
Proof: We begin by proving that, if Jeffrey pooling with $F$ commutes for all regular $P$, $Q$, and $R$, then the pooling operator must agree with upco on regular functions. Then we show that any continuous operator that agrees with upco on the regular functions must be upco.
Let and be finite partitions of size , and define as in the proof of Lemma 7. Let and be positive everywhere on , and let mimic on the distribution of on , i.e. for all .
, , and are regular, so by Theorem 6 Equation (6) holds, with in place of . By Lemma 7, for all , so in this case Equation (6) reduces to
(8)But
So
Thus, by continuity and Equation (8):
Now observe that this is the same ratio delivered by upco:
So and have the same relative proportions, hence must be the same distribution.
This shows if and are both regular on .
Finally, suppose one or the other or both of and is not regular on , but is defined. Then there are sequences and of regular probability functions such that and . And so, by continuity,
This completes the proof. ☐
Notes
- For some background see Christensen (2007, 2009), Elga (2007), Kelly (2010), Dietrich and List (2016), and Easwaran et al. (2016). ⮭
- See Wagner (2011) and Easwaran et al. (2016) for some prior discussion of this proposal. See also Roussos (2021) for a related model. ⮭
- That is, you retain your credences in $B$ conditional on $A$ and on $\neg A$, and you use your new unconditional credences in $A$ and $\neg A$, together with the Law of Total Probability, to calculate your new credence in $B$. ⮭
- Bayesian writers often assume priors for all propositions an agent might learn. But here we are addressing the part of the Bayesian tradition where this assumption is relaxed; see e.g. Jeffrey (1983) and Easwaran et al. (2016). ⮭
- Formally, if the partition $\{A, \neg A\}$ screens off $B$ from $E$, and we let $P'(\cdot) = P(\cdot \mid E)$, then
$$P'(B) = P(B \mid A)\,P'(A) + P(B \mid \neg A)\,P'(\neg A).$$
To see why, first recall what it means for $\{A, \neg A\}$ to screen off $B$ from $E$:
$$P(B \mid A \wedge E) = P(B \mid A) \qquad\text{and}\qquad P(B \mid \neg A \wedge E) = P(B \mid \neg A).$$
Then apply the law of total probability to $P(B \mid E)$, and substitute $P(B \mid A)$ for $P(B \mid A \wedge E)$ and $P(B \mid \neg A)$ for $P(B \mid \neg A \wedge E)$:
$$P(B \mid E) = P(B \mid A \wedge E)\,P(A \mid E) + P(B \mid \neg A \wedge E)\,P(\neg A \mid E) = P(B \mid A)\,P'(A) + P(B \mid \neg A)\,P'(\neg A).$$ ⮭
- In fact this property holds for any fixed value of $Q(A)$ other than $0$ and $1$. But we only need the minimal assumption that it holds for $Q(A) = 1/2$. ⮭
- The derivative with respect to $P(A)$ of $\frac{P(E \mid A)\,P(A)}{P(E \mid A)\,P(A) + P(E \mid \neg A)\,P(\neg A)}$ is positive if $P(E \mid A)$ and $P(E \mid \neg A)$ are both positive. ⮭
- See Elga (2007) for a defense of the idea that the views of peers should be given “equal weight”. See Fitelson and Jehle (2009) for some formal background on articulating the view. ⮭
- Field actually works with a log-scaled version of $x$. He then reformulates Jeffrey conditionalization using exponentials, to invert the logs. We’ve removed these scaling features to make the connection with upco more transparent. ⮭
- Usually, Bayes factors are used to compare two competing models, $H_1$ and $H_2$, in light of some data, $D$. The Bayes factor is defined as the ratio of likelihoods, $P(D \mid H_1)/P(D \mid H_2)$. Using Bayes’ theorem, this can be rewritten
$$\frac{P(D \mid H_1)}{P(D \mid H_2)} = \frac{P(H_1 \mid D)/P(H_2 \mid D)}{P(H_1)/P(H_2)}.$$
Wagner is applying the same idea, with $A$ in the role of $H_1$ and $\neg A$ in the role of $H_2$. Except that there is no data being conditioned on; instead, the posterior probabilities in the numerator are arrived at by Jeffrey conditionalization. ⮭
- Wagner uses a milder assumption than regularity, but for simplicity we’ll continue to assume $P$ is regular. ⮭
- Wagner shows that a weaker assumption will do, but again we’ll continue to assume regularity for simplicity. ⮭
- Aczél and Wagner (1980) and McConway (1981) formulated two properties and showed that only linear pooling boasts both: Eventwise Independence says that the pool’s probability for a proposition is a function only of the individuals’ probabilities for that proposition, while Unanimity Preservation says that, when all the individuals assign the same probability to a proposition, the pool assigns that too. But then Laddaga (1977) and Lehrer and Wagner (1983) noted that linear pooling does not boast the property of Independence Preservation, which says that, when all the individuals take two propositions to be independent, the pool should too. Together, these results provide an impossibility theorem: no pooling rule satisfies Eventwise Independence, Unanimity Preservation, and Independence Preservation. ⮭
References
Aczél, J. and Carl G. Wagner. 1980. “A Characterization of Weighted Arithmetic Means.” SIAM Journal on Algebraic Discrete Methods 1(3):259–260.
Christensen, David. 2007. “Epistemology of Disagreement: The Good News.” The Philosophical Review 116(2):187–217.
Christensen, David. 2009. “Disagreement as Evidence: The Epistemology of Controversy.” Philosophy Compass 4(5):756–67.
Dietrich, Franz and Christian List. 2016. Probabilistic Opinion Pooling. In Oxford Handbook of Philosophy and Probability, ed. Alan Hájek and Christopher Hitchcock. Oxford University Press pp. 519–42.
Easwaran, Kenny, Luke Fenton-Glynn, Christopher Hitchcock and Joel D. Velasco. 2016. “Updating on the Credences of Others: Disagreement, Agreement, and Synergy.” Philosophers’ Imprint 16(11):1–39.
Elga, Adam. 2007. “Reflection and Disagreement.” Noûs 41(3):478–502.
Field, Hartry. 1978. “A Note on Jeffrey Conditionalization.” Philosophy of Science 45(3):361–7.
Fitelson, Branden and David Jehle. 2009. “What is the ‘Equal Weight View’?” Episteme 6(3):280–293.
Genest, Christian and James V. Zidek. 1986. “Combining Probability Distributions: A Critique and an Annotated Bibliography.” Statistical Science 1(1):114–135.
Jeffrey, Richard C. 1965. The Logic of Decision. New York: University of Chicago Press.
Jeffrey, Richard C. 1983. Bayesianism With A Human Face. In Testing Scientific Theories, ed. John Earman. University of Minnesota Press pp. 133–156.
Kelly, Thomas. 2010. Peer Disagreement and Higher Order Evidence. In Social Epistemology: Essential Readings, ed. Alvin I. Goldman and Dennis Whitcomb. Oxford University Press. pp. 183–217.
Laddaga, Robert. 1977. “Lehrer and the Consensus Proposal.” Synthese 36(4):473–77.
Lehrer, Keith and Carl Wagner. 1983. “Probability Amalgamation and the Independence Issue: A Reply to Laddaga.” Synthese 55(3):339–346.
McConway, K. J. 1981. “Marginalization and Linear Opinion Pools.” Journal of the American Statistical Association 76(374):410–414.
Roussos, Joe. 2021. “Expert Deference as a Belief Revision Schema.” Synthese 199(1–2):3457–84.
Wagner, Carl. 2002. “Probability Kinematics and Commutativity.” Philosophy of Science 69(2):266–78.
Wagner, Carl. 2011. “Peer Disagreement and Independence Preservation.” Erkenntnis 74(2):277–88.