<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2 20120330//EN" "http://jats.nlm.nih.gov/publishing/1.2/JATS-journalpublishing1.dtd">
<!--<?xml-stylesheet type="text/xsl" href="article.xsl"?>-->
<article article-type="research-article" dtd-version="1.2" xml:lang="en" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<front>
<journal-meta>
<journal-id journal-id-type="issn">1533-628X</journal-id>
<journal-title-group>
<journal-title>Philosophers&#8217; Imprint</journal-title>
</journal-title-group>
<issn pub-type="epub">1533-628X</issn>
<publisher>
<publisher-name>Michigan Publishing</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3998/phimp.3416</article-id>
<article-categories>
<subj-group>
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Resolute and Correlated Bayesians</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Babic</surname>
<given-names>Boris</given-names>
</name>
<email>babic917@gmail.com</email>
<xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Gaba</surname>
<given-names>Anil</given-names>
</name>
<email>anil.gaba@insead.edu</email>
<xref ref-type="aff" rid="aff-2">2</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Tsetlin</surname>
<given-names>Ilia</given-names>
</name>
<email>ilia.tsetlin@insead.edu</email>
<xref ref-type="aff" rid="aff-2">2</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Winkler</surname>
<given-names>Robert L.</given-names>
</name>
<email>rwinkler@duke.edu</email>
<xref ref-type="aff" rid="aff-3">3</xref>
</contrib>
</contrib-group>
<aff id="aff-1"><label>1</label>University of Hong Kong</aff>
<aff id="aff-2"><label>2</label>INSEAD</aff>
<aff id="aff-3"><label>3</label>Duke University, Fuqua School of Business</aff>
<pub-date publication-format="electronic" date-type="pub" iso-8601-date="2025-07-25">
<day>25</day>
<month>07</month>
<year>2025</year>
</pub-date>
<pub-date pub-type="collection">
<year>2025</year>
</pub-date>
<volume>25</volume>
<elocation-id>14</elocation-id>
<history>
<date date-type="received" iso-8601-date="2022-09-28">
<day>28</day>
<month>09</month>
<year>2022</year>
</date>
<date date-type="accepted" iso-8601-date="2024-08-18">
<day>18</day>
<month>08</month>
<year>2024</year>
</date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2025, The authors</copyright-statement>
<copyright-year>2025</copyright-year>
<license license-type="open-access" xlink:href="https://creativecommons.org/licenses/by-nc-nd/4.0/">
<license-p>This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. <uri xlink:href="https://creativecommons.org/licenses/by-nc-nd/4.0/">https://creativecommons.org/licenses/by-nc-nd/4.0/</uri>.</license-p>
</license>
</permissions>
<self-uri xlink:href="https://www.philosophersimprint.org/024004/phimp/article/10.3998/phimp.3416/"/>
<abstract>
<p>This paper suggests a new normative approach for combining beliefs. We call it the evidence-first method. Instead of aggregating credences alone, as the prevailing approaches do, we focus on eliciting a group&#8217;s full probability distribution on the basis of the evidence available to its members. This is an altogether different way of combining beliefs. The method has four main benefits: (1) it captures the weight, or resilience, of a group&#8217;s belief; (2) it is sensitive to correlation among its individuals; (3) it is commutative under updating; and (4) it can be seen as a generalization of weighted averaging and likelihood ratio approaches. More broadly, it encourages an overall rethinking of the belief combination problem away from aggregating bare credences and toward appropriately combining evidence.</p>
</abstract>
<kwd-group>
<kwd>aggregation</kwd>
<kwd>pooling</kwd>
<kwd>credence</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec>
<title>1. Introduction</title>
<p>In this project we reframe the belief aggregation problem as an evidence combination problem. We explain that the focus on combining credences alone, as is the case in prevailing approaches, ignores the individual evidential states giving rise to those credences. As a result, traditional approaches fail to capture the multitude of individual evidential states which can lead to the same group credences. This occurs when we fail to account for dependence among individuals and the resilience of their beliefs. Such omissions are not innocuous: they can underdetermine both the group belief and its updating strategy.</p>
<p>We present an approach that allows one to focus instead on appropriately combining evidence, and in particular taking into account any overlaps in information. Once the evidence is properly captured, we will show, a full group distribution can be uniquely established on its basis. From this distribution, we can derive point estimates, intervals, and predictions. We call this the evidence-first method, in part to distinguish our approach from prevailing rules for combining beliefs, which may more accurately be described as credence-first.</p>
<p>To understand what we mean by this distinction &#8211; between combining evidence and combining credences &#8211; consider an example: Ahmed has observed 100 coin tosses, 30 of which were heads. His estimate that the next toss will land on heads is 0.3. Beatrice has observed 10 tosses, 3 of which were heads. Her estimate for heads on the next toss is also 0.3. Ahmed and Beatrice have identical probability estimates but very different information states, which we will characterize in terms of Joyce (<xref ref-type="bibr" rid="B26">2005</xref>)&#8217;s notion of resilience. This is not a distinction without a difference: which evidential state the probability is based on can have a profound effect on how the group responds to and acts on new information. In the above example, ordinary averaging suggests a group belief of 0.3, but it does not say whether this belief corresponds to 30/100 heads, 3/10 heads, or 33/110 heads. Each alternative would lead to very different updating behavior. Matters become even more complicated when some of the tosses that form the basis of Ahmed and Beatrice&#8217;s evidence were observed in common &#8211; i.e., when their information is overlapping and their estimates are correlated &#8211; a situation that is ubiquitous in real life (<xref ref-type="bibr" rid="B30">Lindley, 1983</xref>). Thus, when we seek to combine Ahmed and Beatrice&#8217;s beliefs we need to know, first, the evidence they correspond to and, second, the extent of its overlap. The prevailing approaches in the literature are not equipped with the tools to answer these questions. The evidence-first method can answer them.</p>
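To make the updating point concrete, here is a minimal numeric sketch. It assumes Ahmed&#8217;s and Beatrice&#8217;s tosses do not overlap, and the follow-up scenario (ten further tosses, all heads) is a hypothetical of ours:

```python
from fractions import Fraction

# Ahmed: 30 heads in 100 tosses; Beatrice: 3 heads in 10 tosses.
# Their credences (valences) coincide at 3/10 = 0.3 ...
assert Fraction(30, 100) == Fraction(3, 10)

# ... and ordinary averaging of the two credences also gives 0.3,
# discarding the sample sizes entirely.
average = (Fraction(30, 100) + Fraction(3, 10)) / 2

# Evidence-first: pool the (non-overlapping) counts before estimating.
pooled = Fraction(30 + 3, 100 + 10)           # 33/110 = 3/10

# The difference surfaces under updating. Observe 10 further tosses, all heads:
updated_pooled = Fraction(33 + 10, 110 + 10)  # 43/120, roughly 0.358
updated_small = Fraction(3 + 10, 10 + 10)     # 13/20 = 0.65 -- what the update
                                              # would be if the group's belief
                                              # rested only on Beatrice's 10 tosses
```

The candidate evidential states behind the same 0.3 credence thus come apart as soon as new data arrive, which is exactly the underdetermination that bare averaging cannot resolve.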
<p>The paper proceeds as follows. In Section (2), we provide a brief background and motivate the general project. In Section (3), we reframe the belief aggregation problem. We explain that its usual formulation &#8211; combining a list of credence functions into a group credence function &#8211; is underspecified. Without capturing the size or weight of each member&#8217;s evidence &#8211; i.e., the resilience of their credences &#8211; the group belief fails to take into account the full texture of information available to its members. In Section (4), we develop the evidence-first method. While this gets a little bit technical, the basic idea is very simple and intuitive: it is just a matter of properly accounting for everyone&#8217;s evidence &#8211; do not leave anything out, do not overcount. Consider the Ahmed and Beatrice example above. If they did not observe any tosses in common, then the group&#8217;s evidence consists of 110 tosses, 33 of which were heads. If they observed some tosses in common, we must appropriately subtract these. We then explain how the evidence, together with each person&#8217;s prior, fixes a unique group distribution that captures the probabilities (valence), their resilience (weight or sharpness) and the dependence among individuals (correlation). In Section (5), we provide a fully worked example of our approach. And in Section (6), we explain how this approach can be generalized to provide normative guidance even in cases where we do not have full access to the individuals&#8217; underlying evidence.</p>
</sec>
<sec>
<title>2. Background</title>
<p>We are interested in how beliefs from multiple individuals ought to be combined to form a group belief. This problem can manifest in several different ways. The first and most literal is when we inquire into the opinions of a collective, taken as one agent (<xref ref-type="bibr" rid="B32">List and Pettit, 2011</xref>). This may occur when the collective is the subject of reactive attitudes like praise or blame (<xref ref-type="bibr" rid="B41">Strawson, 1962</xref>). For example, Amnesty International may blame Shell for human rights abuses in Nigeria without necessarily singling out a corrupt set of individuals to bear responsibility. A judge may decide that a corporation entered into a contract even if no particular set of individuals explicitly thus intended. Indeed, the legal notion of corporate personhood requires that we impute agency to corporate entities. As Chief Justice Marshall states in a well-known case before the US Supreme Court, &#8220;The great object of an incorporation is to bestow the character and properties of individuality on a collective and changing body.&#8221;<xref ref-type="fn" rid="n1">1</xref></p>
<p>The second is when an individual needs to combine multiple sources of counsel or advice. For instance, Alibaba co-founder Lucy Peng is deliberating whether to purchase a small but promising venture. She solicits advice from three different domain experts on whether the company will turn a profit in five years. After obtaining their estimates, she must combine them into one prediction about profitability which represents her own credence. The third is when a group must act. For example, Tesla&#8217;s board of directors must decide whether to remove its founder Elon Musk as the company&#8217;s CEO. Before they can make a decision, they need to combine their individual beliefs about the wisdom of doing so.</p>
<p>The prevailing rules in the aggregation scholarship use measures of central tendency to identify a group&#8217;s belief. Moss (<xref ref-type="bibr" rid="B33">2011</xref>) and Pettigrew (<xref ref-type="bibr" rid="B35">2019</xref>), for example, defend ordinary averaging whereas Russell et al. (<xref ref-type="bibr" rid="B39">2015</xref>) and Dietrich (<xref ref-type="bibr" rid="B10">2019</xref>) champion geometric averaging. Dietrich and List (<xref ref-type="bibr" rid="B11">2016</xref>) discuss the multiplicative rule, which is a special case of the latter. We will explain how measures of central tendency can arise naturally from considerations of evidential symmetry under our approach. However, depending on the underlying evidential states, our approach may or may not coincide with any form of averaging. Meanwhile, Easwaran et al. (<xref ref-type="bibr" rid="B13">2016</xref>) use the product of odds ratios, which is somewhat closer to our approach.<xref ref-type="fn" rid="n2">2</xref> Indeed, we will explain that the rule they develop is equivalent to our method under special circumstances (independent signals and a uniform prior).<xref ref-type="fn" rid="n3">3</xref></p>
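For a binary proposition, the odds-product idea can be illustrated numerically. The following is a minimal sketch in the spirit of the rule just mentioned, under the special circumstances noted above (independent signals, uniform prior); the function name `pool_odds` is our label, not the authors&#8217;:

```python
def pool_odds(credences):
    """Pool binary credences by multiplying their odds, then converting back
    to a probability. A sketch of an odds-product rule; assumes independent
    evidence and a uniform prior."""
    odds = 1.0
    for p in credences:
        odds *= p / (1.0 - p)
    return odds / (1.0 + odds)

# Two agents at 0.6 and 0.8: pooled odds are 1.5 * 4 = 6, i.e. probability 6/7.
pooled = pool_odds([0.6, 0.8])  # roughly 0.857, more extreme than either input
```

Note how agreement pushes the pooled credence beyond every individual credence, something no ordinary or geometric average can do.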
<p>While the prevailing aggregation methods in Bayesian epistemology largely focus on measures of central tendency, there are some views closer to ours which can be found in the logic of belief revision literature. For example, Williamson (<xref ref-type="bibr" rid="B45">2019</xref>) argues that the group distribution should be the distribution which maximizes Shannon information entropy subject to the constraints imposed by the evidence of each of the agents. Our approach is in the spirit of Williamson&#8217;s, as we too start from the motivating idea that the content which should be combined is the agents&#8217; evidence.</p>
<p>However, Williamson does not explore situations of evidential overlap. In this project, those are the most interesting situations, and the ones that we spend the most time developing. When evidential bases are independent, aggregation is easier, and even the prevailing averaging rules yield intuitive results. It is particularly in cases of overlap where things get tricky, and our approach attempts to address them. In that sense, one can think about our project as constructively building on Williamson (<xref ref-type="bibr" rid="B45">2019</xref>)&#8217;s. However, we combine probability distributions in a different way &#8211; we do not use maximum entropy methods. In that sense, our project is doing something different, though in the same spirit.</p>
<p>When we consider cases of evidential overlap, the aggregation problem becomes particularly interesting. Our approach requires that the individuals in the group can share evidence with each other and determine which bits are overlapping and which bits are not. For example, Williamson considers a case where we have two doctors making a prognosis about a patient&#8217;s cancer, where one doctor has clinical evidence and the other doctor has molecular evidence. This is a nice case for our project as well, but it is arguably an easy case. Here the doctors can share their evidence and there is no overlap. We can, however, modify the example so that there is some shareable overlapping evidence. For example, perhaps both doctors physically examined the patient (measuring their temperature, blood pressure, etc.). Both the original example and this modification are ideal use cases for our model, because here the overlapping evidence can be easily identified. However, imagine a case where a forecaster must combine two analysts&#8217; predictions, without knowledge of the underlying evidence that the predictions were based on. In this case, we cannot aggregate evidence since we do not know what it is or the extent to which it overlaps among the analysts who made the predictions. We consider this situation in Section 6 of the paper, and we explain that even though in such cases our model cannot provide a recipe, so to speak, for combining credences, it can be used as a normative benchmark for which combinations are reasonable and which are not.</p>
</sec>
<sec>
<title>3. Group Credence is not Reducible to Valence</title>
<p>The argument in this section is straightforward. The prevailing combination rules &#8211; those which rely on one type of averaging or another &#8211; fail to capture an important aspect of the aggregation problem&#8217;s information structure: namely, the weight or mass of the group members&#8217; credences, which we call their resilience and define more carefully below.</p>
<p>Let <inline-formula><mml:math id="Eq001"><mml:mrow><mml:mi>X</mml:mi><mml:mo lspace="0.278em" rspace="0.278em">:</mml:mo><mml:mrow><mml:mi mathvariant="normal">&#x03A9;</mml:mi><mml:mo stretchy='false'>&#x2192;</mml:mo><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow></mml:mrow></mml:math></inline-formula> be a random variable defined on an underlying probability space <inline-formula><mml:math id="Eq003"><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi mathvariant="normal">&#x03A9;</mml:mi><mml:mo>,</mml:mo><mml:mi class="MJX-tex-caligraphic" mathvariant="script">F</mml:mi><mml:mo>,</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></inline-formula>. Let <inline-formula><mml:math id="Eq004"><mml:mrow><mml:mi>c</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>x</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="Eq005"><mml:mrow><mml:mi>C</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>x</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> be the individual and group credence functions for <inline-formula><mml:math id="Eq006"><mml:mi>X</mml:mi></mml:math></inline-formula>. Then ordinary and geometric averaging may be defined as follows.</p>
<list list-type="simple">
<list-item><p><bold>Ordinary Averaging:</bold></p>
<p><disp-formula id="FD1"><mml:math id="Eq007"><mml:mrow><mml:mrow><mml:mrow><mml:mi>C</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>x</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo rspace="0.111em">=</mml:mo><mml:mrow><mml:munderover><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>n</mml:mi></mml:munderover><mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x2062;</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>x</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:mrow><mml:mo lspace="0em">.</mml:mo></mml:mrow></mml:math></disp-formula></p>
</list-item>
<list-item><p><bold>Geometric Averaging:</bold></p>
<p><disp-formula id="FD2"><mml:math id="Eq008"><mml:mrow><mml:mrow><mml:mrow><mml:mi>C</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>x</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo rspace="0.111em">=</mml:mo><mml:mrow><mml:munderover><mml:mo movablelimits="false">&#x220F;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>n</mml:mi></mml:munderover><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x2062;</mml:mo><mml:msup><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>x</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:msup></mml:mrow></mml:mrow></mml:mrow><mml:mo lspace="0em">.</mml:mo></mml:mrow></mml:math></disp-formula></p>
</list-item>
</list>
<p>Simple (ordinary/geometric) averaging is obtained from (ordinary/geometric) averaging by setting all weights <inline-formula><mml:math id="Eq009"><mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>/</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow></mml:math></inline-formula>. The so-called multiplicative mean is obtained from the geometric mean by setting <inline-formula><mml:math id="Eq010"><mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:math></inline-formula>. Some authors also suggest normalizing the result, so that the group credence is given by the above equations multiplied by their normalizing factor. All of these rules share the following important property:</p>
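Stated as code, the two averaging rules look as follows. This is a minimal sketch; the function names and the optional normalization flag are our own framing of the rules just defined:

```python
import math

def ordinary_average(profiles, weights=None):
    """Weighted arithmetic mean of credence profiles (lists of P(x) values)."""
    w = weights or [1 / len(profiles)] * len(profiles)
    return [sum(wi * p[j] for wi, p in zip(w, profiles))
            for j in range(len(profiles[0]))]

def geometric_average(profiles, weights=None, normalize=True):
    """Weighted geometric mean, optionally renormalized to sum to one."""
    w = weights or [1 / len(profiles)] * len(profiles)
    pooled = [math.prod(p[j] ** wi for wi, p in zip(w, profiles))
              for j in range(len(profiles[0]))]
    if normalize:
        total = sum(pooled)
        pooled = [x / total for x in pooled]
    return pooled

# Two credence profiles over (White, Blue), as in the urn example below:
profiles = [[0.6, 0.4], [0.8, 0.2]]
arith = ordinary_average(profiles)  # arithmetic mean: 0.7 for White, 0.3 for Blue
geom = geometric_average(profiles)  # roughly 0.710 for White, 0.290 for Blue
```

Setting all weights to 1/n recovers the simple means, and setting every weight to 1 in the geometric case gives the multiplicative rule mentioned above.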
<disp-quote>
<p><bold>Credence profile sufficiency.</bold> An individual&#8217;s list of probability assignments, which Dietrich (<xref ref-type="bibr" rid="B10">2019</xref>) calls their credence profile, is a sufficient statistic for summarizing their doxastic contribution to the group&#8217;s belief.</p>
</disp-quote>
<p>For example, suppose we are interested in identifying a group&#8217;s probability that the next ball to be drawn from a certain urn will be white.<xref ref-type="fn" rid="n4">4</xref> The urn contains blue and white balls in unknown proportion. Credence profile sufficiency says that what we need from every individual is a list containing the probability that the next ball to be drawn is white, and the probability that the next ball to be drawn is blue. Or, equivalently, their point estimates of the proportion. For example, A&#8217;s list for (White, Blue) might be (0.6, 0.4). We will argue that the credence profile is not enough for identifying the group belief because there can be many group credences that map back to the same credence profile depending on the underlying members&#8217; evidence. This becomes particularly clear when the group undergoes a learning experience, in which case the group update is often underdetermined as well.</p>
<p>Joyce (<xref ref-type="bibr" rid="B26">2005</xref>), following Skyrms (<xref ref-type="bibr" rid="B40">1980</xref>), distinguishes between the valence, on the one hand, and resilience (mass or weight), on the other, of a credence function. Valence, Joyce says, &#8220;is a matter of which way, and how decisively, the relevant data points&#8221; (p. 159). Meanwhile, the &#8220;size or weight of the evidence has to do with how much relevant information the data contains, irrespective of which way it points&#8221; (p. 159). We refer to the latter as its resilience. And we refer to people with more/less resilient credence functions as more/less resolute, for short.<xref ref-type="fn" rid="n5">5</xref></p>
<p>Just as a vector has a direction and a magnitude, so too does a credence have a valence and a resilience. The valence refers to its direction, as an estimate of a proposition&#8217;s truth value or an event&#8217;s likelihood of occurring. A credence of 0.9 that it will rain has a strong valence in favor of rain. By defining a group&#8217;s credence as a function of the credence profile &#8211; i.e., the list of individual credence functions &#8211; the prevailing approaches essentially combine individual valences into a group valence. But in doing so they neglect weight or resilience. This is an especially glaring omission when <italic>combining</italic> credences where we want to pool every individual&#8217;s full contribution, which may vary from person to person, depending on their level of expertise or background information.</p>
<p>To characterize resilience, and harness it in support of a general model for combining beliefs, as we do in Section (4), we first need to develop a basic language for describing the magnitude of evidence reflected by one&#8217;s credence function. This is a dimension of the person&#8217;s doxastic state that is not captured by the credence profile. Accordingly, we will describe below a natural Bayesian approach for modeling exchangeable data which will allow us to explain these ideas more clearly. While the next two subsections may seem unduly technical, we spell things out carefully because doing so will allow us to substantially simplify the core material in Sections (4)-(6).</p>
<sec>
<title>3.1 Characterizing Resilience</title>
<p>Suppose again we have two people, Ahmed and Beatrice. They will each draw <inline-formula><mml:math id="Eq011"><mml:mi>n</mml:mi></mml:math></inline-formula> balls from an urn with replacement. The urn contains white and blue balls, with <inline-formula><mml:math id="Eq012"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula> as the unknown proportion of white balls. In a draw of <inline-formula><mml:math id="Eq013"><mml:mi>n</mml:mi></mml:math></inline-formula> balls, let <inline-formula><mml:math id="Eq014"><mml:mi>r</mml:mi></mml:math></inline-formula> be the number of white balls and, hence, <inline-formula><mml:math id="Eq015"><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x2013;</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:math></inline-formula> the number of blue balls. Each person&#8217;s credences may be about <inline-formula><mml:math id="Eq016"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula>, or they may be about the probability that the next ball to be drawn will be white or blue &#8211; i.e., predictions on <inline-formula><mml:math id="Eq017"><mml:mover accent='true'><mml:mi>X</mml:mi><mml:mo>&#x007E;</mml:mo></mml:mover></mml:math></inline-formula>, where <inline-formula><mml:math id="Eq018"><mml:mrow><mml:mover accent='true'><mml:mi>X</mml:mi><mml:mo>&#x007E;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:math></inline-formula> represents a white ball and <inline-formula><mml:math id="Eq019"><mml:mrow><mml:mover accent='true'><mml:mi>X</mml:mi><mml:mo>&#x007E;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula> represents a blue ball.</p>
<p>Predicting the next ball and estimating <inline-formula><mml:math id="Eq020"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula> correspond to different practical problems. For example, in the context of Covid-19, a doctor might be interested in the probability that the next patient she sees is positive (predicting the color of the next ball) whereas a policymaker in her city may be more interested in the proportion of the population that is positive (estimating <inline-formula><mml:math id="Eq021"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula>). As long as one uses a proper scoring rule, the Bayesian logic follows a similar structure for either task.</p>
<p>Accordingly, it is not enough to have a &#8220;credence&#8221; over the space of outcomes because there are many different distributions for <inline-formula><mml:math id="Eq022"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula> which correspond to the same predictions about which ball will be drawn. Thus, we first need to specify a full distribution for <inline-formula><mml:math id="Eq023"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula>. Once we have that, we can make point estimates, interval estimates, and predictions about <inline-formula><mml:math id="Eq024"><mml:mover accent='true'><mml:mi>X</mml:mi><mml:mo>&#x007E;</mml:mo></mml:mover></mml:math></inline-formula>.</p>
<p>In a draw of <inline-formula><mml:math id="Eq025"><mml:mi>n</mml:mi></mml:math></inline-formula> balls from an urn that contains only white and blue balls, the data are generated according to a Bernoulli process with the following likelihood function:</p>
<disp-formula id="FD3"><label>(1)</label><mml:math id="Eq026"><mml:mrow><mml:mrow><mml:mrow><mml:mi>f</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo fence="false">&#x007C;</mml:mo><mml:mrow><mml:mi>&#x03B8;</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:msup><mml:mi>&#x03B8;</mml:mi><mml:mi>r</mml:mi></mml:msup><mml:mo>&#x2062;</mml:mo><mml:msup><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mi>&#x03B8;</mml:mi></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x2013;</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:mrow><mml:mo lspace="0em">.</mml:mo></mml:mrow></mml:math></disp-formula>
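As a quick sanity check on (1), the likelihood can be evaluated over a grid of candidate proportions; the counts here (30 whites in 100 draws) echo the hypothetical numbers from the introduction:

```python
def likelihood(theta, r, n):
    """Bernoulli-process likelihood from equation (1):
    theta**r * (1 - theta)**(n - r)."""
    return theta ** r * (1 - theta) ** (n - r)

# With r = 30 whites in n = 100 draws, the likelihood over a grid of
# candidate proportions peaks at theta = r/n = 0.3.
grid = [t / 100 for t in range(1, 100)]
best = max(grid, key=lambda theta: likelihood(theta, 30, 100))
assert abs(best - 0.3) < 1e-9
```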
<p>Now we need to identify prior beliefs regarding <inline-formula><mml:math id="Eq027"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula>. In the Bayesian approach, a good candidate prior for <inline-formula><mml:math id="Eq028"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula> when data is generated according to (1) is the so-called beta distribution: it is a flexible distribution with two parameters, <inline-formula><mml:math id="Eq029"><mml:mrow><mml:mi>&#x03B1;</mml:mi><mml:mo>&#x2265;</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="Eq030"><mml:mrow><mml:mi>&#x03B2;</mml:mi><mml:mo>&#x2265;</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula>, that accommodates a wide variety of information states regarding a Bernoulli process, and it arises naturally in the context of modeling binary exchangeable data (<xref ref-type="bibr" rid="B31">Lindley and Phillips, 1976</xref>). An early development of this model can be found in Johnson (<xref ref-type="bibr" rid="B24">1924</xref>). It was later applied by Carnap (<xref ref-type="bibr" rid="B4">1950</xref>, <xref ref-type="bibr" rid="B5">1952</xref>), in his construction of the continuum of inductive methods, and by de Finetti (<xref ref-type="bibr" rid="B9">1937</xref>), in his refinement of Laplace&#8217;s Rule of Succession.</p>
<p>Let <inline-formula><mml:math id="Eq031"><mml:mrow><mml:mi>&#x03C0;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x03B8;</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> be the prior probability density for <inline-formula><mml:math id="Eq032"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula>, where</p>
<disp-formula id="FD4"><label>(2)</label><mml:math id="Eq033"><mml:mrow><mml:mrow><mml:mi>&#x03C0;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x03B8;</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mi>f</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>&#x03B8;</mml:mi><mml:mo fence="false">&#x007C;</mml:mo><mml:mrow><mml:mi>&#x03B1;</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x03B2;</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mi>B</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x03B1;</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mfrac><mml:mo>&#x2062;</mml:mo><mml:msup><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>&#x03B1;</mml:mi><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo>&#x2062;</mml:mo><mml:msup><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mi>&#x03B8;</mml:mi></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x03B2;</mml:mi><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mrow></mml:math></disp-formula>
<p>is a beta density function with <inline-formula><mml:math id="Eq034"><mml:mrow><mml:mrow><mml:mi>B</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x03B1;</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x0393;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x03B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>&#x2062;</mml:mo><mml:mi mathvariant="normal">&#x0393;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>/</mml:mo><mml:mi mathvariant="normal">&#x0393;</mml:mi></mml:mrow><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>&#x03B1;</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x03B2;</mml:mi></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="Eq035"><mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x0393;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>n</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>!</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>. 
The core of this distribution, its kernel, is given by <inline-formula><mml:math id="Eq036"><mml:mrow><mml:msup><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>&#x03B1;</mml:mi><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo>&#x2062;</mml:mo><mml:msup><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mi>&#x03B8;</mml:mi></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x03B2;</mml:mi><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:math></inline-formula>. The combinatorial term in front is a normalizing constant. The mean of a beta distribution is given by <inline-formula><mml:math id="Eq037"><mml:mrow><mml:mrow><mml:mtext>E</mml:mtext><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>[</mml:mo><mml:mi>&#x03B8;</mml:mi><mml:mo stretchy='false'>]</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mi>&#x03B1;</mml:mi><mml:mo>/</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>&#x03B1;</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x03B2;</mml:mi></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:math></inline-formula>. This will be an important quantity in the material to follow, as will <inline-formula><mml:math id="Eq038"><mml:mi>&#x03B1;</mml:mi></mml:math></inline-formula> and <inline-formula><mml:math id="Eq039"><mml:mi>&#x03B2;</mml:mi></mml:math></inline-formula>. With the likelihood in (1) and the prior in (2), the posterior density for <inline-formula><mml:math id="Eq040"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula> is</p>
<disp-formula id="FD5"><label>(3)</label><mml:math id="Eq041"><mml:mrow><mml:mrow><mml:mrow><mml:mi>&#x03C0;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>&#x03B8;</mml:mi><mml:mo fence="false">&#x007C;</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mi>f</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>&#x03B8;</mml:mi><mml:mo fence="false">&#x007C;</mml:mo><mml:mrow><mml:mrow><mml:mi>&#x03B1;</mml:mi><mml:mo>+</mml:mo><mml:mi>r</mml:mi></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mrow><mml:mi>&#x03B2;</mml:mi><mml:mo>+</mml:mo><mml:mi>n</mml:mi></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x221D;</mml:mo><mml:mrow><mml:msup><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mrow><mml:mi>&#x03B1;</mml:mi><mml:mo>+</mml:mo><mml:mi>r</mml:mi></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo>&#x2062;</mml:mo><mml:msup><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mi>&#x03B8;</mml:mi></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:mrow><mml:mi>&#x03B2;</mml:mi><mml:mo>+</mml:mo><mml:mi>n</mml:mi></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mi>r</mml:mi><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mrow><mml:mo lspace="0em">.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>The posterior distribution is of the same kernel form as the prior distribution. This is because a beta distribution is <italic>conjugate</italic> to the Bernoulli process. This means that if we start with a beta prior for <inline-formula><mml:math id="Eq042"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula>, and update via Bayes&#8217; Rule with data from a Bernoulli process, our posterior will likewise be beta but with updated parameters.</p>
<p>Such a model lends itself to a very intuitive interpretation.<xref ref-type="fn" rid="n6">6</xref> The parameters of the updated beta distribution (the posterior distribution) are given by the sum of <inline-formula><mml:math id="Eq043"><mml:mi>&#x03B1;</mml:mi></mml:math></inline-formula> and the number of white balls, <inline-formula><mml:math id="Eq044"><mml:mi>r</mml:mi></mml:math></inline-formula>, together with the sum of <inline-formula><mml:math id="Eq045"><mml:mi>&#x03B2;</mml:mi></mml:math></inline-formula> and the number of blue balls, <inline-formula><mml:math id="Eq046"><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x2013;</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:math></inline-formula>. As a result, the parameters <inline-formula><mml:math id="Eq047"><mml:mi>&#x03B1;</mml:mi></mml:math></inline-formula> and <inline-formula><mml:math id="Eq048"><mml:mi>&#x03B2;</mml:mi></mml:math></inline-formula> can be interpreted as pseudo observations or pseudo counts upon which the prior beliefs are based. For instance, to say that one has a beta(2, 2) prior for the proportion <inline-formula><mml:math id="Eq049"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula> of white balls in the urn is equivalent to assuming that prior to making the actual draws, that person observed two balls of each color.</p>
<p>Bayesian updating is very simple and intuitive within a beta-Bernoulli model. If we start with a beta(7, 3) prior, and we observe 4 out of 10 white balls, our posterior for <inline-formula><mml:math id="Eq050"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula> would be beta(7+4, 3+6). A beta(1, 1) prior for <inline-formula><mml:math id="Eq051"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula> is the uniform or flat prior. This would also be the maximum entropy prior for a proportion.</p>
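<p>The updating arithmetic in this paragraph can be checked in a few lines of code. The following is a minimal Python sketch (the helper name <monospace>update_beta</monospace> is ours, not a library function) of the conjugate update in (3):</p>

```python
def update_beta(alpha, beta, r, n):
    """Conjugate update: a beta(alpha, beta) prior plus r white balls in n draws."""
    return alpha + r, beta + (n - r)

print(update_beta(7, 3, 4, 10))  # beta(7, 3) prior, 4 of 10 white -> (11, 9)
print(update_beta(1, 1, 4, 10))  # flat beta(1, 1) prior, same evidence -> (5, 7)
```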
<p>If we want to formulate a credence about <inline-formula><mml:math id="Eq052"><mml:mover accent='true'><mml:mi>X</mml:mi><mml:mo>&#x007E;</mml:mo></mml:mover></mml:math></inline-formula> (the color of the next ball to be drawn), we need the predictive distribution. Assuming that the draws are conditionally independent given <inline-formula><mml:math id="Eq053"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula>, this is given by:</p>
<disp-formula id="FD6"><label>(4)</label><mml:math id="Eq054"><mml:mtable columnspacing="0pt" displaystyle="true" rowspacing="0.0pt"><mml:mtr><mml:mtd columnalign="right"><mml:mrow><mml:mtext>P</mml:mtext><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mover accent='true'><mml:mi>X</mml:mi><mml:mo>&#x007E;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo fence="false">&#x007C;</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mtd><mml:mtd columnalign="left"><mml:mrow><mml:mi/><mml:mo rspace="0.111em">=</mml:mo><mml:mrow><mml:msubsup><mml:mo>&#x222B;</mml:mo><mml:mn>0</mml:mn><mml:mn>1</mml:mn></mml:msubsup><mml:mrow><mml:mtext>P</mml:mtext><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mover accent='true'><mml:mi>X</mml:mi><mml:mo>&#x007E;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo fence="false">&#x007C;</mml:mo><mml:mi>&#x03B8;</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>&#x2062;</mml:mo><mml:mi>&#x03C0;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>&#x03B8;</mml:mi><mml:mo fence="false">&#x007C;</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo lspace="0em">&#x2062;</mml:mo><mml:mrow><mml:mo rspace="0em">&#x1D451;</mml:mo><mml:mi>&#x03B8;</mml:mi></mml:mrow></mml:mrow></mml:mrow></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd/><mml:mtd columnalign="left"><mml:mrow><mml:mrow><mml:mi/><mml:mo>=</mml:mo><mml:mrow><mml:mtext>E</mml:mtext><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>[</mml:mo><mml:mrow><mml:mi>&#x03B8;</mml:mi><mml:mo 
fence="false">&#x007C;</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy='false'>]</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>&#x03B1;</mml:mi><mml:mo>+</mml:mo><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x03B1;</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:mo>+</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mfrac></mml:mrow><mml:mo lspace="0em">.</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Huttegger (<xref ref-type="bibr" rid="B19">2017a</xref>, <xref ref-type="bibr" rid="B20">b</xref>) refers to this expression as the Generalized Rule of Succession and shows that this form of the predictive probability follows from several modest assumptions about the structure of the data-generating process, in particular exchangeability, which will be satisfied throughout. From a decision-theoretic perspective, the posterior mean minimizes expected squared error loss. The important point for us is that when the problem is fully specified, each person will have a full distribution over <inline-formula><mml:math id="Eq055"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula>. They will then base their prediction on the mean of that distribution.</p>
<p>We can now put this model to its first use and capture the notion of resilience for a credence function. Suppose A starts with a beta(1, 1) prior and B starts with a beta(10, 10) prior regarding <inline-formula><mml:math id="Eq056"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula>, the proportion of white balls in the urn. They both obtain equivalent, non-overlapping evidence: namely, each draws 10 balls, 6 of which are white. Using (3), we can determine that their posteriors will become beta(7, 5) and beta(16, 14), respectively. Using (4), we conclude that A&#8217;s probability that the next ball to be drawn is white moves from 0.5 to 0.58 whereas B&#8217;s moves from 0.5 to 0.53. B is much more resolute in her prior, even though the actual credal value (the valence) is the same for both. The resilience of a credence function for <inline-formula><mml:math id="Eq057"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula>, therefore, corresponds to the size of <inline-formula><mml:math id="Eq058"><mml:mi>&#x03B1;</mml:mi></mml:math></inline-formula> and <inline-formula><mml:math id="Eq059"><mml:mi>&#x03B2;</mml:mi></mml:math></inline-formula>.</p>
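<p>A minimal Python sketch of this comparison (helper names are ours), applying the update in (3) and the predictive rule in (4) to A&#8217;s and B&#8217;s priors:</p>

```python
def posterior(alpha, beta, r, n):
    return alpha + r, beta + (n - r)        # update rule (3)

def predictive(alpha, beta):
    return alpha / (alpha + beta)           # predictive probability (4)

for prior in [(1, 1), (10, 10)]:            # A's and B's priors
    post = posterior(*prior, 6, 10)         # each sees 6 white balls in 10 draws
    print(round(predictive(*prior), 2), "->", round(predictive(*post), 2))
# prints 0.5 -> 0.58, then 0.5 -> 0.53
```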
<disp-quote>
<p><bold>Resilience.</bold> Let <inline-formula><mml:math id="Eq060"><mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>&#x03B1;</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x03B2;</mml:mi></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mi>A</mml:mi></mml:msub></mml:math></inline-formula> denote the size of the sum of <inline-formula><mml:math id="Eq061"><mml:mi>&#x03B1;</mml:mi></mml:math></inline-formula> and <inline-formula><mml:math id="Eq062"><mml:mi>&#x03B2;</mml:mi></mml:math></inline-formula> in A&#8217;s beta distribution for <inline-formula><mml:math id="Eq063"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula>. If <inline-formula><mml:math id="Eq064"><mml:mrow><mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>&#x03B1;</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x03B2;</mml:mi></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mi>A</mml:mi></mml:msub><mml:mo>&gt;</mml:mo><mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>&#x03B1;</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x03B2;</mml:mi></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mi>B</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> then A&#8217;s distribution for <inline-formula><mml:math id="Eq065"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula> is more resilient.</p>
</disp-quote>
<p>The higher <inline-formula><mml:math id="Eq066"><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>&#x03B1;</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x03B2;</mml:mi></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></inline-formula> the more resolute the person will be about her credences. Keep in mind that after we make observations, <inline-formula><mml:math id="Eq067"><mml:mi>r</mml:mi></mml:math></inline-formula> contributes to our new <inline-formula><mml:math id="Eq068"><mml:mi>&#x03B1;</mml:mi></mml:math></inline-formula>, <inline-formula><mml:math id="Eq069"><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x2013;</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:math></inline-formula> contributes to our new <inline-formula><mml:math id="Eq070"><mml:mi>&#x03B2;</mml:mi></mml:math></inline-formula>, and <inline-formula><mml:math id="Eq071"><mml:mi>n</mml:mi></mml:math></inline-formula> contributes to <inline-formula><mml:math id="Eq072"><mml:mrow><mml:mi>&#x03B1;</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x03B2;</mml:mi></mml:mrow></mml:math></inline-formula>. As a result, the preceding definition captures Joyce&#8217;s dictum that resilience corresponds to the weight of one&#8217;s data &#8211; i.e., to <inline-formula><mml:math id="Eq073"><mml:mi>n</mml:mi></mml:math></inline-formula>.</p>
<p>As the above example comparing a beta(1, 1) prior against a beta(10, 10) prior illustrates, it is possible to have equal valence and unequal resilience. This is why the credence profile sufficiency assumption is problematic. When we combine valences, there are many degrees of resilience compatible with the ensuing group credence. Which level of resilience we then impute to the group will later affect how it responds to new evidence. As a result, credence is not reducible to valence. Without capturing resilience we fail to specify every individual&#8217;s full quantification of uncertainty.</p>
</sec>
<sec>
<title>3.2 Synchronic Resilience</title>
<p>One might wonder whether the only way resilience reveals itself is under learning experiences &#8211; diachronically, so to speak. This is not the case. We can illustrate the difference between resolute and irresolute agents synchronically as well.</p>
<p>The point estimate for <inline-formula><mml:math id="Eq074"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula>, <inline-formula><mml:math id="Eq075"><mml:mover accent='true'><mml:mi>&#x03B8;</mml:mi><mml:mo>^</mml:mo></mml:mover></mml:math></inline-formula>, is the same as the predictive probability for a single draw, <inline-formula><mml:math id="Eq076"><mml:mrow><mml:mtext>P</mml:mtext><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mover accent='true'><mml:mi>X</mml:mi><mml:mo>&#x007E;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>. But instead of just producing the value we think is most likely &#8211; which we know to be false anyway, since <inline-formula><mml:math id="Eq077"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula> is continuous &#8211; we can instead produce an interval estimate for <inline-formula><mml:math id="Eq078"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula>. In Bayesian inference, a <inline-formula><mml:math id="Eq079"><mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mi>&#x03B3;</mml:mi></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mn>100</mml:mn><mml:mo>%</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> credible interval <inline-formula><mml:math id="Eq080"><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>a</mml:mi><mml:mo>,</mml:mo><mml:mi>b</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></inline-formula> satisfies:</p>
<disp-formula id="FD7"><label>(5)</label><mml:math id="Eq081"><mml:mrow><mml:mrow><mml:mrow><mml:mtext>P</mml:mtext><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>a</mml:mi><mml:mo>&lt;</mml:mo><mml:mrow><mml:mrow><mml:mi>&#x03B8;</mml:mi><mml:mo lspace="0em">&#x2062;</mml:mo><mml:mrow><mml:mo fence="true" rspace="0em">&lt;</mml:mo><mml:mi>b</mml:mi><mml:mo stretchy='false'>&#x007C;</mml:mo></mml:mrow><mml:mo>&#x2062;</mml:mo><mml:mi>r</mml:mi></mml:mrow><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo rspace="0.111em">=</mml:mo><mml:mrow><mml:msubsup><mml:mo>&#x222B;</mml:mo><mml:mi>a</mml:mi><mml:mi>b</mml:mi></mml:msubsup><mml:mrow><mml:mi>&#x03C0;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>&#x03B8;</mml:mi><mml:mo fence="false">&#x007C;</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo lspace="0em">&#x2062;</mml:mo><mml:mrow><mml:mo rspace="0em">&#x1D451;</mml:mo><mml:mi>&#x03B8;</mml:mi></mml:mrow></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mi>&#x03B3;</mml:mi></mml:mrow></mml:mrow><mml:mo lspace="0em">.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>Consider an example: A&#8217;s credences are beta(2, 2) whereas B&#8217;s credences are beta(100, 100). Both estimate the proportion of white balls to be 0.5, and both estimate that the probability of the next ball being white is 0.5. Their probabilities (valences) are identical, as are their predictions about the next ball. However, A&#8217;s 95% credible interval is <inline-formula><mml:math id="Eq082"><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>0.1</mml:mn><mml:mo>,</mml:mo><mml:mn>0.9</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></inline-formula> whereas B&#8217;s 95% credible interval is <inline-formula><mml:math id="Eq083"><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>0.43</mml:mn><mml:mo>,</mml:mo><mml:mn>0.57</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></inline-formula>. This is a dramatic difference in uncertainty around the prediction. A is very open-minded, whereas B is quite dogmatic.</p>
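<p>These intervals can be checked numerically. The sketch below (function names are ours; a library routine such as <monospace>scipy.stats.beta.interval</monospace> would serve equally well) computes equal-tailed credible intervals as in (5) by bisecting a Simpson-rule approximation of the beta CDF. It recovers roughly (0.1, 0.9) for beta(2, 2) and (0.43, 0.57) for beta(100, 100):</p>

```python
from math import exp, lgamma, log

def beta_pdf(x, a, b):
    """Beta density f(x | a, b); normalized via log-gamma for numerical stability.
    Assumes a, b > 1, so the density vanishes at the endpoints."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    log_norm = lgamma(a + b) - lgamma(a) - lgamma(b)
    return exp(log_norm + (a - 1) * log(x) + (b - 1) * log(1 - x))

def beta_cdf(x, a, b, steps=1000):
    """P(theta < x) by Simpson's rule (steps must be even); simple, not fast."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    h = x / steps
    total = beta_pdf(0.0, a, b) + beta_pdf(x, a, b)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * beta_pdf(i * h, a, b)
    return total * h / 3

def quantile(q, a, b):
    lo, hi = 0.0, 1.0
    for _ in range(40):                      # bisection on the monotone CDF
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if beta_cdf(mid, a, b) < q else (lo, mid)
    return (lo + hi) / 2

def credible_interval(a, b, level=0.95):
    tail = (1 - level) / 2
    return quantile(tail, a, b), quantile(1 - tail, a, b)

print(credible_interval(2, 2))      # approximately (0.1, 0.9)
print(credible_interval(100, 100))  # approximately (0.43, 0.57)
```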
<p>Indeed, diachronic and synchronic resilience are related. As <inline-formula><mml:math id="Eq084"><mml:mi>&#x03B1;</mml:mi></mml:math></inline-formula> and <inline-formula><mml:math id="Eq085"><mml:mi>&#x03B2;</mml:mi></mml:math></inline-formula> increase, the variance of the distribution, given here by <inline-formula><mml:math id="Eq086"><mml:mrow><mml:msup><mml:mi>&#x03C3;</mml:mi><mml:mn>2</mml:mn></mml:msup><mml:mo>=</mml:mo><mml:mrow><mml:mrow><mml:mi>&#x03B1;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mi>&#x03B2;</mml:mi></mml:mrow><mml:mo>/</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>&#x03B1;</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x03B2;</mml:mi></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mn>2</mml:mn></mml:msup><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>&#x03B1;</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:math></inline-formula>, decreases. This is to be expected &#8211; intuitively, large <inline-formula><mml:math id="Eq087"><mml:mi>n</mml:mi></mml:math></inline-formula> implies that it is hard to change the distribution with extra information (diachronic), while small <inline-formula><mml:math id="Eq088"><mml:msup><mml:mi>&#x03C3;</mml:mi><mml:mn>2</mml:mn></mml:msup></mml:math></inline-formula> implies that the current estimate is tight (synchronic). So when we increase <inline-formula><mml:math id="Eq089"><mml:mi>&#x03B1;</mml:mi></mml:math></inline-formula> and <inline-formula><mml:math id="Eq090"><mml:mi>&#x03B2;</mml:mi></mml:math></inline-formula> we tighten the variance. 
Therefore, for a given mean (point estimate), increasing <inline-formula><mml:math id="Eq091"><mml:mi>&#x03B1;</mml:mi></mml:math></inline-formula> and <inline-formula><mml:math id="Eq092"><mml:mi>&#x03B2;</mml:mi></mml:math></inline-formula> ordinarily shrinks the width of the credible interval. This is easiest to illustrate if we approximate a beta prior with a normal distribution,<xref ref-type="fn" rid="n7">7</xref> where intervals are symmetric. In this case, a 95% credible interval simplifies to <inline-formula><mml:math id="Eq093"><mml:mrow><mml:mi>&#x03BC;</mml:mi><mml:mo>&#x00B1;</mml:mo><mml:mrow><mml:mn>1.96</mml:mn><mml:mo>&#x2062;</mml:mo><mml:mi>&#x03C3;</mml:mi></mml:mrow></mml:mrow></mml:math></inline-formula>, and in our example,</p>
<disp-formula id="FD8"><mml:math id="Eq094"><mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:mi>&#x03BC;</mml:mi><mml:mo>=</mml:mo><mml:mi>&#x03B1;</mml:mi><mml:mo>/</mml:mo><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x03B1;</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>,</mml:mo><mml:mo>&#x00A0;</mml:mo><mml:mtext>and</mml:mtext></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mi>&#x03C3;</mml:mi><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x03B1;</mml:mi><mml:mi>&#x03B2;</mml:mi><mml:mo>/</mml:mo><mml:mo stretchy='false'>(</mml:mo><mml:msup><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x03B1;</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mn>2</mml:mn></mml:msup><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x03B1;</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>/</mml:mo><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>We can see that as <inline-formula><mml:math id="Eq095"><mml:mi>&#x03B1;</mml:mi></mml:math></inline-formula> and <inline-formula><mml:math id="Eq096"><mml:mi>&#x03B2;</mml:mi></mml:math></inline-formula> increase, <inline-formula><mml:math id="Eq097"><mml:mi>&#x03C3;</mml:mi></mml:math></inline-formula> decreases, and so for a given <inline-formula><mml:math id="Eq098"><mml:mi>&#x03BC;</mml:mi></mml:math></inline-formula> the length of the credible interval shrinks.</p>
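<p>A small Python sketch of this approximation (the function name is ours): applied to the beta(100, 100) distribution, it reproduces B&#8217;s interval of (0.43, 0.57) from the earlier example. Note that the normal approximation is most accurate when <inline-formula><mml:math id="Eq091a"><mml:mi>&#x03B1;</mml:mi></mml:math></inline-formula> and <inline-formula><mml:math id="Eq092a"><mml:mi>&#x03B2;</mml:mi></mml:math></inline-formula> are reasonably large.</p>

```python
from math import sqrt

def normal_approx_interval(alpha, beta, z=1.96):
    """mu +/- z * sigma, using the beta mean and standard deviation above."""
    s = alpha + beta
    mu = alpha / s
    sigma = sqrt(alpha * beta / (s ** 2 * (s + 1)))
    return mu - z * sigma, mu + z * sigma

lo, hi = normal_approx_interval(100, 100)
print(round(lo, 2), round(hi, 2))  # prints 0.43 0.57
```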
</sec>
</sec>
<sec sec-type="methods">
<title>4. The Evidence-First Method</title>
<p>We now present the evidence-first method. In Section (4.1), we describe the approach informally when there is no shared evidence. In Section (4.2), we develop the idea mathematically for the general case where evidence is overlapping. In Sections (4.3)-(4.4), we show that in the special case with minimal overlapping evidence, we recover the ordinary (weighted) averaging rule. And in the special case with a uniform prior and no overlap, we recover the rule articulated in Easwaran et al. (<xref ref-type="bibr" rid="B13">2016</xref>), which they call Upco (Section 4.5).</p>
<sec>
<title>4.1 A Simple Example</title>
<p>Suppose we have an urn with blue and white balls in unknown proportion <inline-formula><mml:math id="Eq099"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula>, and two people, <inline-formula><mml:math id="Eq100"><mml:mi>A</mml:mi></mml:math></inline-formula> and <inline-formula><mml:math id="Eq101"><mml:mi>B</mml:mi></mml:math></inline-formula>, each of whom holds a uniform beta(1, 1) prior for <inline-formula><mml:math id="Eq102"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula>. They each draw 10 balls, independently, and observe 4 and 7 white balls, respectively, with no overlap. What should the group distribution be?</p>
<p>First, since both <inline-formula><mml:math id="Eq103"><mml:mi>A</mml:mi></mml:math></inline-formula> and <inline-formula><mml:math id="Eq104"><mml:mi>B</mml:mi></mml:math></inline-formula> approach the problem with a uniform prior, the group prior before any observations are made should be uniform as well.<xref ref-type="fn" rid="n8">8</xref> Second, and more importantly, we have to make explicit the group&#8217;s shared evidence. We have 4+7=11 distinct white balls and 6+3=9 distinct blue balls, for a total of 20 balls. We can think of the group as accomplishing a division of labor &#8211; assigning 10 draws for A to handle, and 10 draws for B to handle. They each do their job and come back to combine the evidence. The group distribution is therefore beta(12, 10). This is a full distribution for the unknown proportion <inline-formula><mml:math id="Eq105"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula>, from which we can derive any statistic of interest. For instance, the group&#8217;s (posterior mean) point estimate for <inline-formula><mml:math id="Eq106"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula> is <inline-formula><mml:math id="Eq107"><mml:mrow><mml:mrow><mml:mn>12</mml:mn><mml:mo>/</mml:mo><mml:mn>22</mml:mn></mml:mrow><mml:mo>=</mml:mo><mml:mn>0.54</mml:mn></mml:mrow></mml:math></inline-formula>. The median is <inline-formula><mml:math id="Eq108"><mml:mn>0.55</mml:mn></mml:math></inline-formula>. The 95% credible interval is <inline-formula><mml:math id="Eq109"><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>0.34</mml:mn><mml:mo>,</mml:mo><mml:mn>0.74</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></inline-formula>. We now have a full representation of the group&#8217;s uncertainty.</p>
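<p>The bookkeeping in this no-overlap case is simply the addition of counts to the shared prior. A minimal Python sketch (the helper name is ours):</p>

```python
def pool(prior, observations):
    """Evidence-first pooling with no overlap: add each person's counts to the shared prior."""
    a, b = prior
    for white, blue in observations:
        a, b = a + white, b + blue
    return a, b

group = pool((1, 1), [(4, 6), (7, 3)])  # A saw 4 white/6 blue; B saw 7 white/3 blue
print(group)                            # (12, 10), i.e., beta(12, 10)
print(group[0] / sum(group))            # the group's posterior mean, 12/22
```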
<p>By contrast, if we combine credences through simple ordinary averaging, for instance, we might pool the two point estimates from A&#8217;s beta(5, 7) and B&#8217;s beta(8, 4) distributions, which would be <inline-formula><mml:math id="Eq110"><mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>0.41</mml:mn><mml:mo>+</mml:mo><mml:mn>0.66</mml:mn></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>/</mml:mo><mml:mn>2</mml:mn></mml:mrow><mml:mo>=</mml:mo><mml:mn>0.53</mml:mn></mml:mrow></mml:math></inline-formula>. But notice that on ordinary averaging approaches we do not have the full distribution, using instead only the probabilities as described by the credence profile. As a result, it would be impossible to determine whether person A&#8217;s 0.41 estimate came from a beta(5, 7) distribution, a beta(10, 14) distribution, a beta(20, 28) distribution, or any other beta distribution which satisfies <inline-formula><mml:math id="Eq111"><mml:mrow><mml:mrow><mml:mi>&#x03B1;</mml:mi><mml:mo>/</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>&#x03B1;</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x03B2;</mml:mi></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mn>0.41</mml:mn></mml:mrow></mml:math></inline-formula>. Because all of these distributions are compatible with the reported probability, we also cannot say how the group should update if it makes additional observations. For example, if its prior distribution is beta(5, 7) and it observes 2 white balls, it should move to beta(7, 7) and a 0.5 estimate of <inline-formula><mml:math id="Eq112"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula>. But if its prior distribution is beta(20, 28) and it observes 2 white balls, it should move to beta(22, 28) and an estimate of 0.44 for <inline-formula><mml:math id="Eq113"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula>. 
Likewise, the 95% credible interval in the beta(5, 7) case is (0.17, 0.69) whereas in the beta(20, 28) case it is (0.28, 0.55).</p>
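<p>This identifiability problem is easy to exhibit in code. In the sketch below (helper name ours), beta(5, 7) and beta(20, 28) report the same valence, yet respond differently to two further white balls:</p>

```python
def mean(a, b):
    return a / (a + b)

priors = [(5, 7), (20, 28)]
assert mean(*priors[0]) == mean(*priors[1])  # identical reported probability, 5/12

for a, b in priors:
    updated = (a + 2, b)                     # observe 2 more white balls
    print(round(mean(*updated), 2))          # prints 0.5, then 0.44
```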
<p>Further, it is well known that ordinary averaging is not commutative with respect to updating: updating and combining does not always give the same result as combining then updating.<xref ref-type="fn" rid="n9">9</xref> Our approach, by comparison, does not have this problem, and we establish and discuss this for the general case in Theorem 1 below. Indeed, it is easy to see that this will be true because we are simply summing up the number of observations in each category. Since addition is commutative, so too is the evidence-first method.</p>
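<p>The commutativity point can be seen in a few lines: because pooling and updating both reduce to adding counts, any order of operations yields the same group distribution. A sketch (helper name ours), reusing the running example:</p>

```python
def add(*pairs):
    """Componentwise sum of (white, blue) count pairs."""
    return tuple(map(sum, zip(*pairs)))

prior = (1, 1)
evidence_a, evidence_b = (4, 6), (7, 3)
new_data = (2, 0)                        # two further white balls

combine_then_update = add(add(prior, evidence_a, evidence_b), new_data)
update_then_combine = add(add(prior, new_data), evidence_a, evidence_b)
print(combine_then_update == update_then_combine)  # True: both are (14, 10)
```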
</sec>
<sec>
<title>4.2 The Core Idea</title>
<p>The preceding examples are particularly easy to handle because each person receives independent signals &#8211; no balls are observed together. But it is rarely the case that a group of people approach a problem with mutually exclusive private information. When some balls are observed in common, the key is to capture overlapping evidence appropriately.<xref ref-type="fn" rid="n10">10</xref> This way, everyone contributes exactly their evidence, and only their evidence, to the group belief. We now generalize the above idea and mathematically formulate the evidence-first method. This model captures both dependence and resilience.</p>
<p>We use the case of two people and two categories for maximal simplicity. Extending to <inline-formula><mml:math id="Eq114"><mml:mi>n</mml:mi></mml:math></inline-formula> people and <inline-formula><mml:math id="Eq115"><mml:mi>k</mml:mi></mml:math></inline-formula> categories is straightforward; but since the number of parameters grows quickly, in both <inline-formula><mml:math id="Eq116"><mml:mi>k</mml:mi></mml:math></inline-formula> and <inline-formula><mml:math id="Eq117"><mml:mi>n</mml:mi></mml:math></inline-formula>, it risks burying our message in the details. In a problem with two categories (two colors of balls), for two people, we need to know six quantities/parameters: the number of white balls each observed, the number of blue balls each observed, and the number of each color of balls observed in common.</p>
<p>Suppose we have two people, <inline-formula><mml:math id="Eq118"><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn></mml:mrow></mml:mrow></mml:math></inline-formula>, who will estimate the probability <inline-formula><mml:math id="Eq119"><mml:mi>p</mml:mi></mml:math></inline-formula> that the next ball drawn from the urn will be white (label it as a success). Each person has her own full subjective distribution over <inline-formula><mml:math id="Eq120"><mml:mi>p</mml:mi></mml:math></inline-formula>, which is a beta distribution with parameters <inline-formula><mml:math id="Eq121"><mml:msub><mml:mi>r</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="Eq122"><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>r</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula>:</p>
<disp-formula id="FD9"><label>(6)</label><mml:math id="Eq123"><mml:mrow><mml:mrow><mml:mrow><mml:msub><mml:mi>&#x03C0;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x221D;</mml:mo><mml:mrow><mml:msup><mml:mi>p</mml:mi><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo>&#x2062;</mml:mo><mml:msup><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mi>p</mml:mi></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>r</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mrow><mml:mo lspace="0em">.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>Thus, for person <inline-formula><mml:math id="Eq124"><mml:mi>i</mml:mi></mml:math></inline-formula>, the probability that the next ball will be white corresponds to <inline-formula><mml:math id="Eq125"><mml:mrow><mml:mrow><mml:msub><mml:mtext>E</mml:mtext><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>[</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>]</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:msub><mml:mi>r</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:msub><mml:mi>n</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mfrac></mml:mrow></mml:math></inline-formula>. Moreover, <inline-formula><mml:math id="Eq126"><mml:msub><mml:mi>n</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:math></inline-formula> captures the notion of resilience described above &#8211; the larger <inline-formula><mml:math id="Eq127"><mml:msub><mml:mi>n</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:math></inline-formula>, the more resolute person <inline-formula><mml:math id="Eq128"><mml:mi>i</mml:mi></mml:math></inline-formula> is that the probability is close to <inline-formula><mml:math id="Eq129"><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:math></inline-formula>.</p>
<p>In order to combine these two people&#8217;s opinions/credences, we now need to model their shared information structure &#8211; which must reflect the way each came to their subjective probability distributions and any overlap in their evidence. Our model is as follows: Every person starts with a beta prior with parameters <inline-formula><mml:math id="Eq130"><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="Eq131"><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:math></inline-formula>. Typically, these parameters will be small, like <inline-formula><mml:math id="Eq132"><mml:mrow><mml:mn>0</mml:mn><mml:mo>&#x2264;</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>&#x2264;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="Eq133"><mml:mrow><mml:mn>0</mml:mn><mml:mo>&#x2264;</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>&#x2264;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:math></inline-formula>. Such a prior is proper (i.e., there exists a normalizing constant) if <inline-formula><mml:math id="Eq134"><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>&gt;</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="Eq135"><mml:mrow><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>&gt;</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula>. This is often, but not always, the case. We will consider some improper priors below.</p>
<p>Each person will observe a few draws from this urn. This is their evidence. More specifically, <inline-formula><mml:math id="Eq136"><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:math></inline-formula> is the number of successes observed by both people, <inline-formula><mml:math id="Eq137"><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:math></inline-formula> is the number of failures observed by both people, <inline-formula><mml:math id="Eq138"><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:math></inline-formula> is the number of successes observed only by person <inline-formula><mml:math id="Eq139"><mml:mi>i</mml:mi></mml:math></inline-formula>, and <inline-formula><mml:math id="Eq140"><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:math></inline-formula> is the number of failures observed only by person <inline-formula><mml:math id="Eq141"><mml:mi>i</mml:mi></mml:math></inline-formula>. Now we can specify <inline-formula><mml:math id="Eq142"><mml:msub><mml:mi>r</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="Eq143"><mml:msub><mml:mi>n</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:math></inline-formula>. 
In particular, <inline-formula><mml:math id="Eq144"><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="Eq145"><mml:mrow><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>r</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow></mml:mrow></mml:math></inline-formula>.</p>
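The mapping from observed counts to each person's beta parameters can be sketched in a few lines of code. This is a minimal illustration in our own notation, not code from the paper; the function name is ours, and the uniform beta(1, 1) ur-prior is assumed only for the example.

```python
# alpha0, beta0: shared ur-prior counts; alpha_c, beta_c: successes/failures
# observed by both people; alpha_i, beta_i: person i's private observations.
def individual_params(alpha_i, beta_i, alpha_c, beta_c, alpha0=1.0, beta0=1.0):
    """Return (r_i, n_i), the parameters of person i's beta distribution pi_i(p)."""
    r_i = alpha_i + alpha_c + alpha0        # total "successes" behind i's opinion
    n_i = r_i + beta_i + beta_c + beta0     # total pseudo-sample size (resilience)
    return r_i, n_i

# Illustration: 6 private white draws, 2 private blue draws, nothing in common.
r1, n1 = individual_params(alpha_i=6, beta_i=2, alpha_c=0, beta_c=0)
p1 = r1 / n1   # probability the next ball is white: E_1[p] = r_1 / n_1
```

The larger `n_i`, the more resolute the opinion, exactly as in the text.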
<p>The group credence function, which we will denote by <inline-formula><mml:math id="Eq146"><mml:mrow><mml:mi mathvariant="normal">&#x03A0;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> (i.e., small <inline-formula><mml:math id="Eq147"><mml:mi>&#x03C0;</mml:mi></mml:math></inline-formula> for the individual distribution, large <inline-formula><mml:math id="Eq148"><mml:mi mathvariant="normal">&#x03A0;</mml:mi></mml:math></inline-formula> for the group distribution), then corresponds to a situation when all of this information is combined. Therefore, it is a beta distribution with parameters <inline-formula><mml:math id="Eq149"><mml:mrow><mml:msup><mml:mi>r</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>=</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>r</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="Eq150"><mml:mrow><mml:mrow><mml:msup><mml:mi>n</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>&#x2013;</mml:mo><mml:msup><mml:mi>r</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>r</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>r</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo 
stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow></mml:mrow></mml:math></inline-formula>:</p>
<disp-formula id="FD10"><label>(7)</label><mml:math id="Eq151"><mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x03A0;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>p</mml:mi><mml:mo fence="false">&#x007C;</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x221D;</mml:mo><mml:mrow><mml:msup><mml:mi>p</mml:mi><mml:mrow><mml:msup><mml:mi>r</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo>&#x2062;</mml:mo><mml:msup><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mi>p</mml:mi></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x2013;</mml:mo><mml:msup><mml:mi>r</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mrow><mml:mo lspace="0em">.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>This implies that the group probability is <inline-formula><mml:math id="Eq152"><mml:mrow><mml:msup><mml:mi>p</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>=</mml:mo><mml:mfrac><mml:msup><mml:mi>r</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:msup><mml:mi>n</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup></mml:mfrac></mml:mrow></mml:math></inline-formula> and the new sample size (resilience) of the group is equal to <inline-formula><mml:math id="Eq153"><mml:msup><mml:mi>n</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup></mml:math></inline-formula>. The group probability <inline-formula><mml:math id="Eq154"><mml:msup><mml:mi>p</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup></mml:math></inline-formula> can be expressed (using <inline-formula><mml:math id="Eq155"><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x2062;</mml:mo><mml:msub><mml:mi>n</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="Eq156"><mml:mrow><mml:msup><mml:mi>n</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>=</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow></mml:mrow></mml:math></inline-formula>) as</p>
<disp-formula id="FD11"><label>(8)</label><mml:math id="Eq157"><mml:mrow><mml:mrow><mml:msup><mml:mi>p</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>=</mml:mo><mml:mfrac><mml:msup><mml:mi>r</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:msup><mml:mi>n</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup></mml:mfrac><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mrow><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x2062;</mml:mo><mml:msub><mml:mi>n</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x2062;</mml:mo><mml:msub><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mrow><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mo lspace="0em">.</mml:mo></mml:mrow></mml:math></disp-formula>
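Equation (8) is straightforward to compute once the individual parameters are in hand. A minimal sketch (function and variable names are ours; the specific counts are illustrative):

```python
def combine(r1, n1, r2, n2, alpha_c, beta_c, alpha0, beta0):
    """Group beta parameters per equation (7): pool all counts, subtracting the
    shared ur-prior and common observations so they are not double counted."""
    r_star = r1 + r2 - alpha_c - alpha0
    n_star = n1 + n2 - alpha_c - alpha0 - beta_c - beta0
    return r_star, n_star

# Example: beta(7, 3) and beta(3, 7) opinions built on a shared beta(1, 1)
# ur-prior, with no common observations.
r_star, n_star = combine(7, 10, 3, 10, alpha_c=0, beta_c=0, alpha0=1, beta0=1)
p_star = r_star / n_star   # the group probability of equation (8)
```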
<p>This final quantity, in (8), is what the group uses as its probabilistic estimate. This is the reported probability &#8211; the valence. However, unlike approaches which focus on deriving the probability alone, we derive the full group distribution, (7), and the two are related because <inline-formula><mml:math id="Eq158"><mml:mrow><mml:msup><mml:mi>p</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>=</mml:mo><mml:mrow><mml:mtext>E</mml:mtext><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>[</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>]</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:math></inline-formula>. In sum, equations (6)-(8) describe the evidence-first method. We now state a useful theorem about this method.</p>
<disp-quote>
<p><bold>Theorem 1 (Update Commutativity).</bold> Let <inline-formula><mml:math id="Eq159"><mml:mrow><mml:msub><mml:mi>&#x03C0;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> be <inline-formula><mml:math id="Eq160"><mml:mi>i</mml:mi></mml:math></inline-formula>&#8217;s prior distribution for <inline-formula><mml:math id="Eq161"><mml:mi>p</mml:mi></mml:math></inline-formula>, for <inline-formula><mml:math id="Eq162"><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn></mml:mrow></mml:mrow></mml:math></inline-formula>. Let <inline-formula><mml:math id="Eq163"><mml:mrow><mml:mi mathvariant="normal">&#x03A0;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> be the group prior, derived using (7). 
Let <inline-formula><mml:math id="Eq164"><mml:mrow><mml:msub><mml:mi>&#x03C0;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>p</mml:mi><mml:mo fence="false">&#x007C;</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> be <inline-formula><mml:math id="Eq165"><mml:mi>i</mml:mi></mml:math></inline-formula>&#8217;s posterior distribution for <inline-formula><mml:math id="Eq166"><mml:mi>p</mml:mi></mml:math></inline-formula>, obtained from <inline-formula><mml:math id="Eq167"><mml:mrow><mml:msub><mml:mi>&#x03C0;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> via Bayes&#8217; Rule, after learning new information <inline-formula><mml:math id="Eq168"><mml:mi>r</mml:mi></mml:math></inline-formula> and <inline-formula><mml:math id="Eq169"><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x2013;</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:math></inline-formula>, and let <inline-formula><mml:math id="Eq170"><mml:mrow><mml:mi mathvariant="normal">&#x03A0;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>p</mml:mi><mml:mo fence="false">&#x007C;</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> be the group posterior, obtained from <inline-formula><mml:math id="Eq171"><mml:mrow><mml:mi mathvariant="normal">&#x03A0;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> via Bayes&#8217; Rule, also after learning 
<inline-formula><mml:math id="Eq172"><mml:mi>r</mml:mi></mml:math></inline-formula> and <inline-formula><mml:math id="Eq173"><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x2013;</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:math></inline-formula>. Finally, with slight abuse of notation, let <inline-formula><mml:math id="Eq174"><mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x03A0;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo fence="false">&#x007C;</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow></mml:math></inline-formula> be the group posterior obtained if we first update <inline-formula><mml:math id="Eq175"><mml:mrow><mml:msub><mml:mi>&#x03C0;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> to <inline-formula><mml:math id="Eq176"><mml:mrow><mml:msub><mml:mi>&#x03C0;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>p</mml:mi><mml:mo fence="false">&#x007C;</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> and then combine <inline-formula><mml:math id="Eq177"><mml:mrow><mml:msub><mml:mi>&#x03C0;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>p</mml:mi><mml:mo fence="false">&#x007C;</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> using (7). Then,</p>
<disp-formula id="FD12"><label>(9)</label><mml:math id="Eq178"><mml:mrow><mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x03A0;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo fence="false">&#x007C;</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mi mathvariant="normal">&#x03A0;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>p</mml:mi><mml:mo fence="false">&#x007C;</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mrow><mml:mo lspace="0em">.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>Proof in the Appendix.</p>
</disp-quote>
<p>Thus, unlike ordinary averaging (weighted or simple), our approach is update commutative. The proof of this theorem is straightforward. Any distribution in our framework is characterized simply by the number of successes and failures, with probability given by the proportion of successes. So when we combine the distributions, we just count the total number of successes and failures. The only wrinkle is that we must avoid double counting trials observed by both people. When we &#8220;combine and then update&#8221;, we first count the total number of successes and failures in the priors, and then add the new successes and failures. When we &#8220;update and then combine&#8221;, we first count successes and failures for each agent, and then count the total.</p>
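The counting argument behind Theorem 1 can be checked numerically. In the sketch below (our own encoding; the specific counts are illustrative), updating a beta opinion on binomial data just adds the new counts, and when both agents see the same new draws, those draws enter the second combination as common evidence:

```python
def combine(r1, n1, r2, n2, alpha_c, beta_c, alpha0, beta0):
    # Equation (7): pool counts, subtracting shared prior and common evidence.
    return (r1 + r2 - alpha_c - alpha0,
            n1 + n2 - alpha_c - alpha0 - beta_c - beta0)

def update(r_i, n_i, r_new, n_new):
    # Bayes' rule for a beta distribution and binomial data: add the new counts.
    return r_i + r_new, n_i + n_new

# Priors built on a shared beta(1, 1) ur-prior, with no common evidence yet.
prior1, prior2 = (7, 10), (3, 10)
r_new, n_new = 4, 6   # both agents observe the same 6 new draws, 4 successes

# Combine, then update.
g = combine(*prior1, *prior2, alpha_c=0, beta_c=0, alpha0=1, beta0=1)
combined_then_updated = update(*g, r_new, n_new)

# Update, then combine: the new draws are now COMMON evidence.
post1 = update(*prior1, r_new, n_new)
post2 = update(*prior2, r_new, n_new)
updated_then_combined = combine(*post1, *post2,
                                alpha_c=r_new, beta_c=n_new - r_new,
                                alpha0=1, beta0=1)

assert combined_then_updated == updated_then_combined  # Theorem 1
```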
<p>There is another aspect of the model worth flagging. We suppose our individuals have a common diffuse/uniform prior, since we use <inline-formula><mml:math id="Eq179"><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="Eq180"><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:math></inline-formula> instead of <inline-formula><mml:math id="Eq181"><mml:msubsup><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn><mml:mi>i</mml:mi></mml:msubsup></mml:math></inline-formula> and <inline-formula><mml:math id="Eq182"><mml:msubsup><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn><mml:mi>i</mml:mi></mml:msubsup></mml:math></inline-formula>. This assumption is not essential and it can ultimately be dropped, but because some readers may find it problematic, we explain this modeling choice. To understand why we make this assumption, consider two different cases. First, suppose A and B have no prior information, and they adopt a uniform distribution on the basis of something like the Laplacean principle of indifference.<xref ref-type="fn" rid="n11">11</xref> That is, both are completely ignorant before making the relevant observations and their prior is a true flat ur-prior.<xref ref-type="fn" rid="n12">12</xref> Thus, <inline-formula><mml:math id="Eq183"><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="Eq184"><mml:mrow><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:math></inline-formula>. Suppose we now want to combine these priors before updating on any information. What should the group distribution be? 
Our model implies that it should be beta<inline-formula><mml:math id="Eq185"><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></inline-formula> and not beta<inline-formula><mml:math id="Eq186"><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></inline-formula>. This is intentional. We do not want two truly ignorant individual priors to sum up to an informative group prior. Another way to put this is that combining ignorance with ignorance should not lead to wisdom or confidence, just as 0 + 0 = 0.</p>
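The beta(1, 1)-not-beta(2, 2) claim follows directly from (7). In this sketch (our own encoding), two ignorant beta(1, 1) opinions built on a shared beta(1, 1) ur-prior combine back to beta(1, 1):

```python
def combine(r1, n1, r2, n2, alpha_c, beta_c, alpha0, beta0):
    r_star = r1 + r2 - alpha_c - alpha0
    n_star = n1 + n2 - alpha_c - alpha0 - beta_c - beta0
    return r_star, n_star

# A beta(1, 1) opinion has r_i = 1 and n_i = 2; no data has been observed,
# so there is no common evidence (alpha_c = beta_c = 0).
group = combine(1, 2, 1, 2, alpha_c=0, beta_c=0, alpha0=1, beta0=1)
# group is (1, 2), i.e. beta(1, 1): ignorance plus ignorance is ignorance.
```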
<p>Second, suppose A and B do have concrete prior information. For example, A has previously drawn balls from an urn in a game of chance at the Ringling Brothers Circus, whereas B has done so at the Barnum &amp; Bailey Circus. They now find themselves together at the Ramos Brothers Circus, having to make predictions about an urn neither has previously encountered. But they happen to know that all three circuses keep a similar house edge, so the proportions cannot vary too widely. Suppose they start with beta(7, 3) and beta(3, 7) priors for the proportion of white balls in the urn at the Ramos Brothers Circus. They then observe 4 draws, two of which are white and two of which are blue. What should the group distribution be?</p>
<p>To answer this, we must make clear the sequence of updating. What happened here is that both people updated on two sets of observations/two experiments &#8211; first, independently, at the Ringling Brothers/Barnum &amp; Bailey Circus, and second, at the Ramos Brothers Circus, together. Thus, we need to determine what their ur-prior was before both sets of observations. Suppose again it was beta(1, 1).<xref ref-type="fn" rid="n13">13</xref> This implies that they each observed 8 balls at the first circus, which is why the sum of <inline-formula><mml:math id="Eq187"><mml:mi>&#x03B1;</mml:mi></mml:math></inline-formula> and <inline-formula><mml:math id="Eq188"><mml:mi>&#x03B2;</mml:mi></mml:math></inline-formula> for both is 10 before the second circus.</p>
<p>Which approach they use to set their ur-prior does not matter as long as there is a shared understanding of what rationality calls for in the absence of information. Accordingly, our assumption is like a weak version of the common prior familiar from microeconomic theory.<xref ref-type="fn" rid="n14">14</xref> It is &#8220;weak&#8221; in the sense that we do not assume rationality writ large requires universal agreement about ur-priors. We simply assume that the members of the group agree in this regard. Importantly, however, they can pick any starting point, and our assumption that a uniform prior corresponds to an ignorant or uninformative distribution is merely illustrative.</p>
<p>Given this specification of the problem &#8211; beta(1, 1) ur-priors, followed by (6,2) and (2,6) white/blue observations alone at the first circus, followed by (2, 2) white/blue observations at the Ramos Brothers Circus &#8211; the group posterior distribution becomes beta(11, 11). We subtract only the initial <inline-formula><mml:math id="Eq189"><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:math></inline-formula> from the ur-prior and <italic>not</italic> the (6, 2) / (2,6) observations made at the first circus, since these are ordinary independent observations. If they started with beta(0, 0) ur-priors, the group posterior would be beta(12, 12). The message is that we must be clear both about how the prior is selected before observations are made, and about what evidence is available to each person, both individually and jointly. In short, the only burden our framework imposes is that when modeling common information, we have to be careful to model it via <inline-formula><mml:math id="Eq190"><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="Eq191"><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:math></inline-formula> and not <inline-formula><mml:math id="Eq192"><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="Eq193"><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:math></inline-formula>.<xref ref-type="fn" rid="n15">15</xref> To further illuminate our model, we will look at several cases where (8) takes a simple form.</p>
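The circus bookkeeping can be reproduced in a few lines. This is our own sketch of the specification above (helper names are ours): a beta(1, 1) ur-prior, private first-circus draws of (6, 2) and (2, 6), and common second-circus draws of (2, 2).

```python
def individual_params(alpha_i, beta_i, alpha_c, beta_c, alpha0, beta0):
    # Person i's beta parameters: r_i successes out of n_i total counts.
    r_i = alpha_i + alpha_c + alpha0
    n_i = r_i + beta_i + beta_c + beta0
    return r_i, n_i

def combine(r1, n1, r2, n2, alpha_c, beta_c, alpha0, beta0):
    # Equation (7): subtract only the ur-prior and the common observations.
    return (r1 + r2 - alpha_c - alpha0,
            n1 + n2 - alpha_c - alpha0 - beta_c - beta0)

shared = dict(alpha_c=2, beta_c=2, alpha0=1, beta0=1)
r1, n1 = individual_params(6, 2, **shared)   # A's posterior: beta(9, 5)
r2, n2 = individual_params(2, 6, **shared)   # B's posterior: beta(5, 9)
r_star, n_star = combine(r1, n1, r2, n2, **shared)
# (r_star, n_star - r_star) is (11, 11): the group posterior is beta(11, 11).
```

Note that the (6, 2) and (2, 6) first-circus draws are never subtracted: they are independent observations, not common evidence.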
</sec>
<sec>
<title>4.3 Limited Evidential Overlap</title>
<p>In our approach, the simple case where there is no evidential overlap corresponds to what Dietrich and Spiekermann (<xref ref-type="bibr" rid="B12">2013</xref>) would describe as a set of opinions which are common cause conditionally independent. Let us examine this kind of situation. Suppose</p>
<disp-formula id="FD13"><label>(10)</label><mml:math id="Eq194"><mml:mtable columnspacing="0pt" displaystyle="true"><mml:mtr><mml:mtd columnalign="right"><mml:mrow><mml:mrow><mml:mfrac><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:mfrac><mml:mo>&#x226A;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>,</mml:mo></mml:mrow></mml:mtd><mml:mtd><mml:mrow><mml:mo>&#x2009;&#x2009;&#x2009;&#x2009;&#x2009;&#x2009;&#x2009;&#x2009;&#x2009;&#x2009;&#x2009;&#x2009;&#x2009;&#x2009;&#x2009;&#x2009;&#x2009;&#x2009;&#x2009;&#x2009;&#x2009;&#x2009;&#x2009;&#x2009;&#x2009;&#x2009;&#x2009;</mml:mo></mml:mrow></mml:mtd><mml:mtd columnalign="right"><mml:mrow><mml:mrow><mml:mfrac><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mrow><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x2062;</mml:mo><mml:msub><mml:mi>n</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x2062;</mml:mo><mml:msub><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:mrow></mml:mfrac><mml:mo>&#x226A;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo lspace="0em">.</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>This is the case if both people start with completely ignorant beta(0, 0) priors and observe no information in common. That is, <inline-formula><mml:math id="Eq195"><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula>. Then, from (8),</p>
<disp-formula id="FD14"><label>(11)</label><mml:math id="Eq196"><mml:mtable columnspacing="0pt" displaystyle="true" rowspacing="0.0pt"><mml:mtr><mml:mtd columnalign="right"><mml:msup><mml:mi>p</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup></mml:mtd><mml:mtd columnalign="left"><mml:mrow><mml:mi/><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mrow><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x2062;</mml:mo><mml:mfrac><mml:msub><mml:mi>n</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x2062;</mml:mo><mml:mfrac><mml:msub><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:mfrac></mml:mrow></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:mfrac></mml:mrow></mml:
mfrac></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd/><mml:mtd columnalign="left"><mml:mrow><mml:mi/><mml:mo>=</mml:mo><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x2062;</mml:mo><mml:mfrac><mml:msub><mml:mi>n</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x2062;</mml:mo><mml:mfrac><mml:msub><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:mfrac></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mrow><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x2062;</mml:mo><mml:msub><mml:mi>n</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x2062;</mml:mo><mml:msub><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:mrow></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd/><mml:mtd 
columnalign="left"><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:mtd></mml:mtr><mml:mtr><mml:mtd/><mml:mtd columnalign="left"><mml:mrow><mml:mrow><mml:mi/><mml:mo>&#x2248;</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x2062;</mml:mo><mml:mfrac><mml:msub><mml:mi>n</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x2062;</mml:mo><mml:mfrac><mml:msub><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:mfrac></mml:mrow></mml:mrow></mml:mrow><mml:mo lspace="0em">.</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>In this case, we recover the ordinary weighted averaging rule, as defended in Moss (<xref ref-type="bibr" rid="B33">2011</xref>) and Pettigrew (<xref ref-type="bibr" rid="B35">2019</xref>), among others, where the weights are determined by <inline-formula><mml:math id="Eq197"><mml:msub><mml:mi>n</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="Eq198"><mml:msub><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:math></inline-formula>, the total number of each person&#8217;s observations &#8211; i.e., their resilience. This is intuitive, and indeed consistent with Pettigrew&#8217;s (<xref ref-type="bibr" rid="B35">2019</xref>) defense of ordinary weighted averaging, because the more resolute of the two people carries greater weight in the group credence function. As Pettigrew suggests, it appears reasonable for the weights of an aggregation function to reflect expertise &#8211; so that more knowledgeable members exert more influence on the group&#8217;s belief. It is also consistent with the interpretation given to the weights in ordinary weighted averaging in Romeijn (<xref ref-type="bibr" rid="B37">2024</xref>). Romeijn interprets the weights in terms of the truth conduciveness that one agent assigns to the other, which can also be thought of in terms of the trust placed in them.<xref ref-type="fn" rid="n16">16</xref> Accordingly, not only do we recover the ordinary weighted averaging rule, but in doing so we also provide a principled way to assign the weights in that rule: namely, by using them to encode resilience.</p>
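<p>The approximation in the derivation above can be checked numerically. Collecting terms in that derivation gives the exact rule <italic>p</italic>* = (<italic>p</italic><sub>1</sub><italic>n</italic><sub>1</sub> + <italic>p</italic><sub>2</sub><italic>n</italic><sub>2</sub> &#8211; <italic>a</italic>)/(<italic>n</italic><sub>1</sub> + <italic>n</italic><sub>2</sub> &#8211; <italic>a</italic> &#8211; <italic>b</italic>), where <italic>a</italic> and <italic>b</italic> are our shorthand for &#x03B1;<sub>c</sub> + &#x03B1;<sub>0</sub> and &#x03B2;<sub>c</sub> + &#x03B2;<sub>0</sub>. A minimal sketch with illustrative parameter values:</p>

```python
# Exact combined credence, collecting terms from the derivation above:
# p* = (p1*n1 + p2*n2 - a) / (n1 + n2 - a - b),
# where a = alpha_c + alpha_0 and b = beta_c + beta_0 (our shorthand).
def combined(p1, n1, p2, n2, a, b):
    return (p1 * n1 + p2 * n2 - a) / (n1 + n2 - a - b)

# Resilience-weighted average, with weights n1/(n1+n2) and n2/(n1+n2).
def weighted_average(p1, n1, p2, n2):
    return p1 * n1 / (n1 + n2) + p2 * n2 / (n1 + n2)

# Illustrative values: small prior and common-evidence terms relative to n1 + n2.
p1, n1, p2, n2 = 0.7, 30, 0.4, 10
exact = combined(p1, n1, p2, n2, a=1, b=1)   # 24/38, about 0.632
approx = weighted_average(p1, n1, p2, n2)    # 25/40 = 0.625
```

<p>With <italic>a</italic> = <italic>b</italic> = 0 the two quantities coincide exactly; as the prior and common evidence grow relative to <italic>n</italic><sub>1</sub> + <italic>n</italic><sub>2</sub>, they come apart.</p>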
</sec>
<sec>
<title>4.4 Equal Resilience</title>
<p>Consider the case where <inline-formula><mml:math id="Eq199"><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:math></inline-formula>. Here things become even more straightforward: if the common information is small, we combine individual credences by ordinary simple averaging:</p>
<disp-formula id="FD15"><label>(12)</label><mml:math id="Eq200"><mml:mrow><mml:mrow><mml:msup><mml:mi>p</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mn>2</mml:mn></mml:mfrac></mml:mrow><mml:mo lspace="0em">.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>This is intuitive. When resilience is equal, the weights in the ordinary averaging rule ought to be equal. It can also be motivated on grounds similar to those above: if we have a group of equally knowledgeable agents, it is reasonable to assign them equal weights.</p>
<p>But according to (8), the prior parameters (<inline-formula><mml:math id="Eq201"><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="Eq202"><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:math></inline-formula>) and the number of successes and failures observed by both people (<inline-formula><mml:math id="Eq203"><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="Eq204"><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:math></inline-formula>) can change this formula. From (8),</p>
<disp-formula id="FD16"><label>(13)</label><mml:math id="Eq205"><mml:mrow><mml:mrow><mml:msup><mml:mi>p</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2062;</mml:mo><mml:mi>n</mml:mi></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:mo>&#x2062;</mml:mo><mml:mi>n</mml:mi></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mo lspace="0em">.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>Without loss of generality, suppose that <inline-formula><mml:math id="Eq206"><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x2264;</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula>. Then <inline-formula><mml:math id="Eq207"><mml:mrow><mml:mn>0</mml:mn><mml:mo>&#x2264;</mml:mo><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>&#x2264;</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x2062;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="Eq208"><mml:mrow><mml:mn>0</mml:mn><mml:mo>&#x2264;</mml:mo><mml:mrow><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>&#x2264;</mml:mo><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2062;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow></mml:math></inline-formula>. 
The case where <inline-formula><mml:math id="Eq209"><mml:mrow><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="Eq210"><mml:mrow><mml:mrow><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula> corresponds to a situation where the prior and the body of common evidence contribute nothing, and as a result, <inline-formula><mml:math id="Eq211"><mml:mrow><mml:msup><mml:mi>p</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mn>2</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="Eq212"><mml:mrow><mml:msup><mml:mi>n</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>=</mml:mo><mml:mrow><mml:mn>2</mml:mn><mml:mo>&#x2062;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow></mml:math></inline-formula>. But now consider two further cases, where either <inline-formula><mml:math id="Eq213"><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> or <inline-formula><mml:math id="Eq214"><mml:mrow><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> is at a maximum or a minimum.</p>
<p><bold>Case 1. <inline-formula><mml:math id="Eq215"><mml:mrow><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x2062;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow></mml:math></inline-formula></bold>, <inline-formula><mml:math id="Eq216"><mml:mrow><mml:mrow><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula>. Then, by (13), <inline-formula><mml:math id="Eq217"><mml:mrow><mml:msup><mml:mi>p</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x2062;</mml:mo><mml:mi>n</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:mo>&#x2062;</mml:mo><mml:mi>n</mml:mi></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x2062;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mfrac><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mrow><mml:mn>2</mml:mn><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:mfrac></mml:mrow></mml:math></inline-formula>; <inline-formula><mml:math id="Eq218"><mml:mrow><mml:msup><mml:mi>n</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>=</mml:mo><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>2</mml:mn><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2062;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow></mml:math></inline-formula>. 
So not only is the combined resilience now less than <inline-formula><mml:math id="Eq219"><mml:mrow><mml:mn>2</mml:mn><mml:mo>&#x2062;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:math></inline-formula>, but <inline-formula><mml:math id="Eq220"><mml:msup><mml:mi>p</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup></mml:math></inline-formula> is different from the average of individual probabilities. Even if <inline-formula><mml:math id="Eq221"><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mi>p</mml:mi></mml:mrow></mml:math></inline-formula>, the combined probability is still <inline-formula><mml:math id="Eq222"><mml:mrow><mml:msup><mml:mi>p</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>=</mml:mo><mml:mfrac><mml:mi>p</mml:mi><mml:mrow><mml:mn>2</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mi>p</mml:mi></mml:mrow></mml:mfrac><mml:mo>&lt;</mml:mo><mml:mi>p</mml:mi></mml:mrow></mml:math></inline-formula>. This is because in this case successes are observed by both people together, while failures are observed by each person separately.</p>
<p><bold>Case 2. <inline-formula><mml:math id="Eq223"><mml:mrow><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula></bold>, <inline-formula><mml:math id="Eq224"><mml:mrow><mml:mrow><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2062;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow></mml:math></inline-formula>. Then, by (13), <inline-formula><mml:math id="Eq225"><mml:mrow><mml:msup><mml:mi>p</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2062;</mml:mo><mml:mi>n</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:mo>&#x2062;</mml:mo><mml:mi>n</mml:mi></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2062;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>+</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:mfrac></mml:mrow></mml:math></inline-formula>; 
<inline-formula><mml:math id="Eq226"><mml:mrow><mml:msup><mml:mi>n</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>=</mml:mo><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>+</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2062;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow></mml:math></inline-formula>. As in the previous case, the combined probability is again different from what simple averaging would suggest.</p>
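<p>Both boundary cases can be verified numerically against (13). In the sketch below (illustrative values; <italic>a</italic> and <italic>b</italic> abbreviate &#x03B1;<sub>c</sub> + &#x03B1;<sub>0</sub> and &#x03B2;<sub>c</sub> + &#x03B2;<sub>0</sub>):</p>

```python
# Combined credence from (13): p* = ((p1 + p2)*n - a) / (2*n - a - b),
# with a = alpha_c + alpha_0 and b = beta_c + beta_0 (our shorthand).
def p_star(p1, p2, n, a, b):
    return ((p1 + p2) * n - a) / (2 * n - a - b)

p1, p2, n = 0.3, 0.6, 10

# Case 1: a = p1*n, b = 0; (13) reduces to p2 / (2 - p1).
case1 = p_star(p1, p2, n, a=p1 * n, b=0)

# Case 2: a = 0, b = (1 - p2)*n; (13) reduces to (p1 + p2) / (1 + p2).
case2 = p_star(p1, p2, n, a=0, b=(1 - p2) * n)

# Neither case reduces to simple averaging.
simple = (p1 + p2) / 2
```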
</sec>
<sec>
<title>4.5 A Closer Look at Priors</title>
<p>We now examine the impact of the prior distributions. Assume <inline-formula><mml:math id="Eq227"><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="Eq228"><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mi>d</mml:mi></mml:mrow></mml:math></inline-formula>. Let <inline-formula><mml:math id="Eq229"><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mn>2</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula>, which would be the combined probability under ordinary simple averaging. Then,</p>
<disp-formula id="FD17"><label>(14)</label><mml:math id="Eq230"><mml:mtable columnspacing="0pt" displaystyle="true" rowspacing="7pt"><mml:mtr><mml:mtd columnalign="right"><mml:msup><mml:mi>p</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup></mml:mtd><mml:mtd columnalign="left"><mml:mrow><mml:mi/><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2062;</mml:mo><mml:mi>n</mml:mi></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:mo>&#x2062;</mml:mo><mml:mi>n</mml:mi></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mrow><mml:mn>2</mml:mn><mml:mo>&#x2062;</mml:mo><mml:mi>d</mml:mi></mml:mrow></mml:mrow></mml:mfrac></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd/><mml:mtd columnalign="left"><mml:mrow><mml:mi/><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mfrac><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mn>2</mml:mn></mml:mfrac><mml:mo>&#x2013;</mml:mo><mml:mfrac><mml:mi>d</mml:mi><mml:mrow><mml:mn>2</mml:mn><mml:mo>&#x2062;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mfrac></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mfrac><mml:mi>d</mml:mi><mml:mi>n</mml:mi></mml:mfrac></mml:mrow></mml:mfrac></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd/><mml:mtd 
columnalign="left"><mml:mrow><mml:mi/><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mrow><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mfrac><mml:mi>d</mml:mi><mml:mi>n</mml:mi></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo>&#x2062;</mml:mo><mml:mfrac><mml:mi>d</mml:mi><mml:mi>n</mml:mi></mml:mfrac></mml:mrow></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mfrac><mml:mi>d</mml:mi><mml:mrow><mml:mn>2</mml:mn><mml:mo>&#x2062;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mfrac></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mfrac><mml:mi>d</mml:mi><mml:mi>n</mml:mi></mml:mfrac></mml:mrow></mml:mfrac></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd/><mml:mtd columnalign="left"><mml:mrow><mml:mi/><mml:mo>=</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo>&#x2013;</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2062;</mml:mo><mml:mfrac><mml:mi>d</mml:mi><mml:mi>n</mml:mi></mml:mfrac><mml:mo>&#x2062;</mml:mo><mml:mfrac><mml:mi>n</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x2013;</mml:mo><mml:mi>d</mml:mi></mml:mrow></mml:mfrac></mml:mrow></mml:mrow></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd/><mml:mtd 
columnalign="left"><mml:mrow><mml:mrow><mml:mi/><mml:mo>=</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo>&#x2013;</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2062;</mml:mo><mml:mfrac><mml:mi>d</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x2013;</mml:mo><mml:mi>d</mml:mi></mml:mrow></mml:mfrac></mml:mrow></mml:mrow></mml:mrow><mml:mo lspace="0em">.</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>This highlights an important feature of our model, which we call extremization (<xref ref-type="bibr" rid="B29">Lichtendahl et al., 2021</xref>) and which Easwaran et al. (<xref ref-type="bibr" rid="B13">2016</xref>) call synergy: the group belief can lie outside the interval formed by the lowest and highest individual beliefs. Examining the last line in (14), we can see that <inline-formula><mml:math id="Eq231"><mml:msup><mml:mi>p</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup></mml:math></inline-formula> extremizes away from <inline-formula><mml:math id="Eq232"><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac></mml:math></inline-formula> whenever the second summand is not <inline-formula><mml:math id="Eq233"><mml:mn>0</mml:mn></mml:math></inline-formula>. That is, the group credence extremizes unless <inline-formula><mml:math id="Eq234"><mml:mrow><mml:mi>d</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math id="Eq235"><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math id="Eq236"><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula>, or <inline-formula><mml:math id="Eq237"><mml:mrow><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>r</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula>. 
Meanwhile, adopting a uniform prior in the above case would correspond to a situation where <inline-formula><mml:math id="Eq238"><mml:mrow><mml:mi>d</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:math></inline-formula>, and for small <inline-formula><mml:math id="Eq239"><mml:mi>n</mml:mi></mml:math></inline-formula> the extremization can be quite substantial. Note also that extremization will occur even if <inline-formula><mml:math id="Eq240"><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula>.</p>
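<p>The extremization effect in (14) is easy to verify numerically. A minimal sketch with illustrative values, writing <italic>d</italic> and <italic>n</italic> as above:</p>

```python
# Closed form from the last line of (14): p* = pa + (pa - 1/2) * d / (n - d).
def p_star(pa, d, n):
    return pa + (pa - 0.5) * d / (n - d)

# A uniform prior corresponds to d = 1; with small n the effect is substantial.
pa, d, n = 0.8, 1, 5
extremized = p_star(pa, d, n)   # 0.8 + 0.3 * (1/4) = 0.875, pushed away from 1/2

# Cross-check against the first line of (14), ((p1+p2)*n - d) / (2*n - 2*d),
# taking p1 = p2 = pa so that extremization occurs even with identical credences.
direct = (2 * pa * n - d) / (2 * n - 2 * d)
```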
<p>Extremization is not possible under ordinary averaging rules, where the group belief must lie in the convex hull of the set of individual beliefs. But we agree with Easwaran et al. (<xref ref-type="bibr" rid="B13">2016</xref>) that extremizing can be rational, especially in cases where, as here, the common evidence is small. Consider a more realistic scenario. A company&#8217;s executive committee is predicting whether the company will break even next year. It consists of the heads of marketing, finance, and operations. All three independently report that the company has a 97% probability of breaking even. Given that each of these executives is coming from a different area of the company, and is likely basing their forecast on largely independent evidence, it is particularly plausible that the group credence should be above 0.97. Indeed, if the credence remains at 0.97, as ordinary averaging requires, we are likely throwing away information (see also <xref ref-type="bibr" rid="B6">Christensen, 2011</xref>).</p>
<p>Finally, note that under a uniform prior, i.e., where <inline-formula><mml:math id="Eq241"><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mi>d</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:math></inline-formula>, the posterior distribution is proportional to the likelihood. Therefore, if we combine two distributions, and we assume that each person started with a uniform prior and received independent signals, i.e., <inline-formula><mml:math id="Eq242"><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula>, then the combined posterior will be proportional to the product of their individual distributions. In such a case, the individual distribution of person <inline-formula><mml:math id="Eq243"><mml:mi>i</mml:mi></mml:math></inline-formula> is beta with <inline-formula><mml:math id="Eq244"><mml:msub><mml:mi>r</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="Eq245"><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>r</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula>, and the combined distribution is beta with <inline-formula><mml:math id="Eq246"><mml:mrow><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>r</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math 
id="Eq247"><mml:mrow><mml:mrow><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>r</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>r</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:math></inline-formula>, which is what we would get by multiplying their individual distributions:</p>
<disp-formula id="FD18"><label>(15)</label><mml:math id="Eq248"><mml:mtable columnspacing="0pt" displaystyle="true" rowspacing="0.0pt"><mml:mtr><mml:mtd/><mml:mtd columnalign="left"><mml:mrow><mml:msup><mml:mi>p</mml:mi><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo>&#x2062;</mml:mo><mml:msup><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mi>p</mml:mi></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>r</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo>&#x2062;</mml:mo><mml:msup><mml:mi>p</mml:mi><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo>&#x2062;</mml:mo><mml:msup><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mi>p</mml:mi></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>r</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd/><mml:mtd columnalign="left"><mml:mrow><mml:mrow><mml:mi/><mml:mo>=</mml:mo><mml:mrow><mml:msup><mml:mi>p</mml:mi><mml:mrow><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>r</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo>&#x2062;</mml:mo><mml:msup><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mi>p</mml:mi></mml:mrow><mml:mo 
stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:mrow><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>r</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>r</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mrow><mml:mo lspace="0em">.</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Therefore, with independent signals under uniform priors we recover the so-called Upco rule from Easwaran et al. (<xref ref-type="bibr" rid="B13">2016</xref>) for updating on the credences of others. Upco is derived as the product of odds ratios &#8211; for one person, the odds ratio is <inline-formula><mml:math id="Eq249"><mml:mrow><mml:mi>p</mml:mi><mml:mo>/</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mi>p</mml:mi></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>, for another it is <inline-formula><mml:math id="Eq250"><mml:mrow><mml:mi>q</mml:mi><mml:mo>/</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mi>q</mml:mi></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>, so the product is <inline-formula><mml:math id="Eq251"><mml:mrow><mml:mrow><mml:mi>q</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mi>p</mml:mi></mml:mrow><mml:mo>/</mml:mo><mml:mrow><mml:mo stretchy='false'>[</mml:mo><mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mi>p</mml:mi></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mi>q</mml:mi></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy='false'>]</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>, and after normalization we obtain Upco.</p>
<p>But our method produces a combined distribution for <inline-formula><mml:math id="Eq252"><mml:mi>p</mml:mi></mml:math></inline-formula>, and the expected value of that distribution is the probability that the next ball drawn is white. Upco, on the other hand (as defined in <xref ref-type="bibr" rid="B13">Easwaran et al., 2016, p. 3</xref>), applies directly to the probabilities of the next ball, which are not sensitive to considerations of resilience. Thus, even under uniform priors and independent signals, our approach coincides with Upco only at the level of the combined distribution, not the resulting predictions. For example, if <inline-formula><mml:math id="Eq253"><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>r</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mn>100</mml:mn></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="Eq254"><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mn>1000</mml:mn></mml:mrow></mml:math></inline-formula>, then our combined probability will still be around 10%; with Upco, combining <inline-formula><mml:math id="Eq255"><mml:mrow><mml:mi>p</mml:mi><mml:mo>=</mml:mo><mml:mi>q</mml:mi><mml:mo>=</mml:mo><mml:mrow><mml:mn>10</mml:mn><mml:mo>%</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> yields a group probability of about <inline-formula><mml:math id="Eq256"><mml:mrow><mml:mn>1</mml:mn><mml:mo>%</mml:mo></mml:mrow></mml:math></inline-formula>.</p>
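<p>The numerical comparison in this paragraph can be reproduced directly from (15) and the odds-ratio definition of Upco:</p>

```python
# Our combined distribution under uniform priors and independent signals,
# per (15): Beta(r1 + r2 - 1, (n1 - r1) + (n2 - r2) - 1).
r1 = r2 = 100
n1 = n2 = 1000
alpha = r1 + r2 - 1                    # 199
beta = (n1 - r1) + (n2 - r2) - 1       # 1799
predictive = alpha / (alpha + beta)    # 199/1998, still about 10%

# Upco applied directly to the point probabilities p = q = 10%:
# product of odds ratios, then normalization.
p = q = 0.10
upco = (p * q) / (p * q + (1 - p) * (1 - q))   # about 1%
```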
</sec>
</sec>
<sec>
<title>5. A Worked Example: Hiring a Netflix Developer</title>
<p>To get a better feel for the evidence-first method, consider an extended and more realistic example.</p>
<disp-quote>
<p><bold>Netflix.</bold> Netflix is interested in hiring an original series developer. This will be a full-time employee whose job is to bring new pitches, specs, etc., to the streaming service. The search committee consists of two members, Ahmed and Beatrice. The shortlist of competing candidates consists of individuals with significant prior experience in developing shows. The committee considers a show successful if it runs for one season or more and turns a profit. Of interest, then, is the developer&#8217;s probability of creating a successful show. They are now considering a well-known developer named Jean Marscome.</p>
</disp-quote>
<p>Suppose Ahmed (A) and Beatrice (B) report probabilities of 0.7 and 0.3, respectively, for JM&#8217;s success. These are their naked probabilities &#8211; or valences &#8211; and we now know that this is not enough to appropriately combine their beliefs. Rather, we should elicit the members&#8217; individual evidence on which these predictions are based and piece together their full distributions, from which we then derive the group distribution and make predictions about JM.</p>
<p>Thus we first need to know their priors for <inline-formula><mml:math id="Eq257"><mml:mi>p</mml:mi></mml:math></inline-formula>. Suppose A and B agree that in the absence of information one should apply the principle of indifference and they both adopted a uniform prior. In fact, suppose that this is standard Netflix company policy in the context of recruiting: before any information on a candidate is obtained, the hiring committee must treat the candidate&#8217;s probability of success as uniform on <inline-formula><mml:math id="Eq258"><mml:mrow><mml:mo stretchy='false'>[</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy='false'>]</mml:mo></mml:mrow></mml:math></inline-formula>. This is not an unwise policy &#8211; it may be enforced to avoid favoritism among job candidates.</p>
<p>Next, suppose they each lay their cards on the table. A is familiar with 8 of JM&#8217;s shows, 6 of which were successful, and 2 of which were unsuccessful. Meanwhile, B is also familiar with 8 of JM&#8217;s shows, but 2 of them were successful and 6 of them were unsuccessful. We can now account for the resilience of their credences, because we have the weight of their evidence. But we still need to untangle potential dependencies.</p>
<p>Finally, A and B list the JM shows they are familiar with, identifying each as a success or failure. As part of this exercise, we learn that they have 2 shows in common, both of which were failures. We now know all six parameters (<inline-formula><mml:math id="Eq259"><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula>). From these, we can determine <inline-formula><mml:math id="Eq260"><mml:mi>n</mml:mi></mml:math></inline-formula> and <inline-formula><mml:math id="Eq261"><mml:mi>r</mml:mi></mml:math></inline-formula>, compute the individual distributions and predictions, and derive the group distribution and prediction. <xref ref-type="table" rid="T1">Table (1)</xref> summarizes the above.</p>
<table-wrap id="T1">
<label>Table 1</label>
<caption><p>Individual and group beliefs in Netflix example.</p></caption>
<table>
<tr>
<td align="left" valign="top"></td>
<td align="left" valign="top">Ahmed</td>
<td align="left" valign="top">Beatrice</td>
<td align="left" valign="top">Group (Netflix)</td>
</tr>
<tr>
<td align="left" valign="top"><inline-formula><mml:math id="Eq262"><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq263"><mml:mn>1</mml:mn></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq264"><mml:mn>1</mml:mn></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq265"><mml:mn>1</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td align="left" valign="top"><inline-formula><mml:math id="Eq266"><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq267"><mml:mn>1</mml:mn></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq268"><mml:mn>1</mml:mn></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq269"><mml:mn>1</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td align="left" valign="top"><inline-formula><mml:math id="Eq270"><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq271"><mml:mn>6</mml:mn></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq272"><mml:mn>2</mml:mn></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq273"><mml:mn>8</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td align="left" valign="top"><inline-formula><mml:math id="Eq274"><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq275"><mml:mn>0</mml:mn></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq276"><mml:mn>4</mml:mn></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq277"><mml:mn>4</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td align="left" valign="top"><inline-formula><mml:math id="Eq278"><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq279"><mml:mn>0</mml:mn></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq280"><mml:mn>0</mml:mn></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq281"><mml:mn>0</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td align="left" valign="top"><inline-formula><mml:math id="Eq282"><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq283"><mml:mn>2</mml:mn></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq284"><mml:mn>2</mml:mn></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq285"><mml:mn>2</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td align="left" valign="top"><inline-formula><mml:math id="Eq286"><mml:mi>n</mml:mi></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq287"><mml:mn>8</mml:mn></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq288"><mml:mn>8</mml:mn></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq289"><mml:mrow><mml:mrow><mml:mrow><mml:mn>8</mml:mn><mml:mo>+</mml:mo><mml:mn>8</mml:mn></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:mn>2</mml:mn></mml:mrow><mml:mo>=</mml:mo><mml:mn>14</mml:mn></mml:mrow></mml:math></inline-formula></td>
</tr>
<tr>
<td align="left" valign="top"><inline-formula><mml:math id="Eq290"><mml:mrow><mml:mi>r</mml:mi><mml:mo>=</mml:mo><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:mrow></mml:mrow></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq291"><mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>+</mml:mo><mml:mn>6</mml:mn><mml:mo>+</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mo>=</mml:mo><mml:mn>7</mml:mn></mml:mrow></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq292"><mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>+</mml:mo><mml:mn>2</mml:mn><mml:mo>+</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mo>=</mml:mo><mml:mn>3</mml:mn></mml:mrow></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq293"><mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>+</mml:mo><mml:mn>8</mml:mn><mml:mo>+</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mo>=</mml:mo><mml:mn>9</mml:mn></mml:mrow></mml:math></inline-formula></td>
</tr>
<tr>
<td align="left" valign="top"><inline-formula><mml:math id="Eq294"><mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x2013;</mml:mo><mml:mi>r</mml:mi></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:mrow></mml:mrow></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq295"><mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>+</mml:mo><mml:mn>0</mml:mn><mml:mo>+</mml:mo><mml:mn>2</mml:mn></mml:mrow><mml:mo>=</mml:mo><mml:mn>3</mml:mn></mml:mrow></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq296"><mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>+</mml:mo><mml:mn>4</mml:mn><mml:mo>+</mml:mo><mml:mn>2</mml:mn></mml:mrow><mml:mo>=</mml:mo><mml:mn>7</mml:mn></mml:mrow></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq297"><mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>+</mml:mo><mml:mn>4</mml:mn><mml:mo>+</mml:mo><mml:mn>2</mml:mn></mml:mrow><mml:mo>=</mml:mo><mml:mn>7</mml:mn></mml:mrow></mml:math></inline-formula></td>
</tr>
<tr>
<td align="left" valign="top">Full distribution for <inline-formula><mml:math id="Eq298"><mml:mi>p</mml:mi></mml:math></inline-formula></td>
<td align="left" valign="top">beta<inline-formula><mml:math id="Eq299"><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>7</mml:mn><mml:mo>,</mml:mo><mml:mn>3</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></inline-formula></td>
<td align="left" valign="top">beta<inline-formula><mml:math id="Eq300"><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>7</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></inline-formula></td>
<td align="left" valign="top">beta<inline-formula><mml:math id="Eq301"><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>9</mml:mn><mml:mo>,</mml:mo><mml:mn>7</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></inline-formula></td>
</tr>
<tr>
<td align="left" valign="top">Prediction of JM&#8217;s success, i.e., <inline-formula><mml:math id="Eq302"><mml:mrow><mml:mtext>E</mml:mtext><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>[</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>]</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq303"><mml:mrow><mml:mrow><mml:mn>7</mml:mn><mml:mo>/</mml:mo><mml:mn>10</mml:mn></mml:mrow><mml:mo>=</mml:mo><mml:mn>0.7</mml:mn></mml:mrow></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq304"><mml:mrow><mml:mrow><mml:mn>3</mml:mn><mml:mo>/</mml:mo><mml:mn>10</mml:mn></mml:mrow><mml:mo>=</mml:mo><mml:mn>0.3</mml:mn></mml:mrow></mml:math></inline-formula></td>
<td align="left" valign="top"><inline-formula><mml:math id="Eq305"><mml:mrow><mml:mrow><mml:mn>9</mml:mn><mml:mo>/</mml:mo><mml:mn>16</mml:mn></mml:mrow><mml:mo>=</mml:mo><mml:mn>0.5625</mml:mn></mml:mrow></mml:math></inline-formula></td>
</tr>
</table>
</table-wrap>
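<p>For readers who wish to verify the arithmetic, the table&#8217;s entries can be reproduced in a few lines of code. This is an illustrative sketch only, not part of the formal apparatus; the function names are our own.</p>

```python
from fractions import Fraction

def beta_params(a0, ai, ac, b0, bi, bc):
    # r = alpha_0 + alpha_i + alpha_c and n - r = beta_0 + beta_i + beta_c
    return a0 + ai + ac, b0 + bi + bc

def prediction(r, s):
    # E[p] for a beta(r, s) distribution
    return Fraction(r, r + s)

# Ahmed: uniform prior, 6 individual successes, 2 shared failures
ra, sa = beta_params(1, 6, 0, 1, 0, 2)          # beta(7, 3)
# Beatrice: uniform prior, 2 individual successes, 4 individual failures, 2 shared failures
rb, sb = beta_params(1, 2, 0, 1, 4, 2)          # beta(3, 7)
# Group: one copy of the prior, all individual evidence, one copy of the shared evidence
rg, sg = beta_params(1, 6 + 2, 0, 1, 0 + 4, 2)  # beta(9, 7)

print(prediction(ra, sa), prediction(rb, sb), float(prediction(rg, sg)))
# 7/10 3/10 0.5625
```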
<p>We highlight several points. First, the group prediction is not a simple average of the individual predictions. It is tilted upward because there are overlapping failures and no overlapping successes. By comparison, the geometric mean of Russell et al. (<xref ref-type="bibr" rid="B39">2015</xref>) and Dietrich (<xref ref-type="bibr" rid="B10">2019</xref>) would produce a prediction of 0.46 without normalization, since <inline-formula><mml:math id="Eq306"><mml:mrow><mml:msup><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>0.7</mml:mn><mml:mo lspace="0.222em" rspace="0.222em">&#x00D7;</mml:mo><mml:mn>0.3</mml:mn></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>/</mml:mo><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:mn>0.46</mml:mn></mml:mrow></mml:math></inline-formula>, putting more weight on Beatrice, and <inline-formula><mml:math id="Eq307"><mml:mn>0.5</mml:mn></mml:math></inline-formula> with normalization.</p>
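<p>The geometric-mean comparison can be checked directly. In the sketch below (illustrative only), the normalization divides by the sum of the geometric means for success and for failure.</p>

```python
p_a, p_b = 0.7, 0.3
geo = (p_a * p_b) ** 0.5                      # unnormalized geometric mean of the success credences
geo_fail = ((1 - p_a) * (1 - p_b)) ** 0.5     # geometric mean of the failure credences
normalized = geo / (geo + geo_fail)           # renormalized over {success, failure}
print(round(geo, 2), round(normalized, 2))    # 0.46 0.5
```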
<p>Notice, also, the effect of resilience. If, after combining their beliefs into a group distribution, the members were to watch three of JM&#8217;s shows together, all of them failures, they would update to a group distribution of beta(9, 10), and the prediction of JM&#8217;s success would drop from 0.56 to 0.47. Now suppose we double all the values in the original example, so that the group belief is beta(18, 14) before they watch any shows together. Again they watch three shows, all failures, thereby updating the group distribution to beta(18, 17). Now the prediction drops only from 0.56 to 0.51. Because such a group is more resolute in its estimate of JM&#8217;s success, it responds less to three failures than the group in the original case. This is a facet of the situation that current approaches in the literature are not equipped to handle.</p>
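<p>Because updating a beta distribution on binomial data simply adds the observed counts to its parameters, the resilience comparison is easy to reproduce. The following is a sketch; the helper name is our own.</p>

```python
def update(r, s, successes, failures):
    # Bayes' Rule for a beta(r, s) prior with binomial data: add the counts
    return r + successes, s + failures

# original group belief beta(9, 7), then three jointly observed failures
r1, s1 = update(9, 7, 0, 3)        # beta(9, 10)
# doubled, more resolute group belief beta(18, 14), same three failures
r2, s2 = update(18, 14, 0, 3)      # beta(18, 17)

print(round(r1 / (r1 + s1), 2))    # 0.47: a sizable drop from 0.56
print(round(r2 / (r2 + s2), 2))    # 0.51: a smaller drop from 0.56
```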
<p>Finally, our approach streamlines certain distinctions often made in the aggregation literature. Dietrich (<xref ref-type="bibr" rid="B10">2019</xref>), for example, argues that there are &#8220;different types of group Bayesianism, depending on the kind of information on which one requires groups to conditionalize [public, semi private, private]&#8221; (p. 721). This tripartite distinction is a necessary byproduct of the assumption that the credence profile is a sufficient summary statistic of individual beliefs. Our approach, by comparison, does not require a multitude of Bayesianisms. There is only one way to be Bayesian, namely, by passing whatever is learnt through Bayes&#8217; Rule. Public information consists of balls observed by every member of the group. Semi-private information consists of balls observed by two or more, but not all, members of the group. Private information consists of balls observed by only one member of the group. To further drive the reader&#8217;s intuition, we include in <xref ref-type="fig" rid="F1">Figure (1)</xref> a visual representation of the Netflix situation.</p>
<fig id="F1">
<label>Figure 1</label>
<caption>
<p>A and B&#8217;s combined credences about JM. The white balls are successes and the blue balls are failures. The points inside the box are the common uniform priors. The points outside the box but inside the intersection constitute shared evidence. And the points outside the intersection constitute each person&#8217;s individual evidence.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="phimp-3416_babic-g1.png"/>
</fig>
</sec>
<sec>
<title>6. Scope and Applicability</title>
<p>One might wonder whether the range of aggregation problems within the scope of our prescriptive approach is too limited. As in the Netflix example, our framework may at first blush seem to call for conditions that are often unmet in real life: namely, the individuals in the group should be able to have a conversation and reveal their total evidence. But what about cases where individual credences do not arise from such a clean and well-specified process?</p>
<p>While we think the requisite conditions are not as unattainable as they may first appear, it is nonetheless true that sometimes we cannot so neatly disclose our evidence. Indeed, there may be times when a decision maker is faced with the unenviable task of combining bare individual forecasts (which may have been compiled for them by someone else, made a long time ago, or made by experts who are now inaccessible, etc.). In short, one might suggest that the credence profile sufficiency assumption, common in the literature, is not so much a desideratum of the belief combination problem as a description of the hard reality in which beliefs must be combined.<xref ref-type="fn" rid="n17">17</xref></p>
<p>Even in such cases, however, our approach remains valuable as a normative benchmark for evaluating the rationality of group beliefs. To understand how, suppose we really are in a situation where we have to aggregate credences without access to the full evidence set, or perhaps to any evidence at all. In such cases, we can apply the framework we suggest in reverse. Instead of using this approach as a recipe for how to reach a specific group distribution, we can instead identify a range of permissible group credences which are consistent with our best estimate about the plausible evidence histories of the individual members.</p>
<p>For example, suppose that in the case of drawing marbles from urns, we have A and B&#8217;s predictions that the next marble will be white. We also know that they observed some of the same draws, but we are not sure how many they observed in common. Suppose that we have no further information. In this situation, we have to make some assumptions about their resiliences, and about the extent of their evidential overlap. There are many evidence histories compatible with their predictions. Accordingly, we can construct upper and lower bounds on what the rationally permissible group prediction should be. The normative guidance that our approach produces in this case is not as specific as in the Netflix example, but that is to be expected since the information structure is now far more impoverished.</p>
<p>To make this more precise, consider how we would make such evaluative judgments without knowing the actual evidence histories. First, we need to estimate individual resiliences, giving more weight to sharper distributions. If we have no basis on which to estimate these, we might start by assuming that everyone&#8217;s resiliences are the same (for similar reasons that would motivate the Laplacean Principle of Indifference). Next, we have to estimate the overlap in their evidence. This will depend on our assessment of the number of evidence histories consistent with the individual credences. But helpfully, our results from Section (4) provide some general bounds on what is permissible.</p>
<p>If we go back to Cases 1 and 2 (Section 4.4), there we have equal resiliences leading to simple averaging, such that <inline-formula><mml:math id="Eq308"><mml:mrow><mml:msup><mml:mi>p</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>=</mml:mo><mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>/</mml:mo><mml:mn>2</mml:mn></mml:mrow></mml:mrow></mml:math></inline-formula>, but only if the common information is small. We can now reconsider the extreme cases of overlap to see what happens. Assuming again, without loss of generality, that <inline-formula><mml:math id="Eq309"><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&lt;</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula>, in Case 1 <inline-formula><mml:math id="Eq310"><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:math></inline-formula> is at a maximum and, by (13), <inline-formula><mml:math id="Eq311"><mml:mrow><mml:msup><mml:mi>p</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>=</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>/</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>2</mml:mn><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:math></inline-formula>. 
In Case 2, <inline-formula><mml:math id="Eq312"><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:math></inline-formula> is at a maximum and by (13), <inline-formula><mml:math id="Eq313"><mml:mrow><mml:msup><mml:mi>p</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>=</mml:mo><mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>/</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>+</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:math></inline-formula>. This means that</p>
<disp-formula id="FD19"><label>(16)</label><mml:math id="Eq314"><mml:mrow><mml:mrow><mml:mfrac><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mrow><mml:mn>2</mml:mn><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:mfrac><mml:mo>&lt;</mml:mo><mml:msup><mml:mi>p</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>&lt;</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>+</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mo lspace="0em">.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>To better understand this inequality, we plot these bounds on the group credence <inline-formula><mml:math id="Eq315"><mml:msup><mml:mi>p</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup></mml:math></inline-formula> in <xref ref-type="fig" rid="F2">Figure (2)</xref>, below. The plot depicts <inline-formula><mml:math id="Eq316"><mml:msup><mml:mi>p</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup></mml:math></inline-formula> (z-axis) as a function of <inline-formula><mml:math id="Eq317"><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula> (x-axis) and <inline-formula><mml:math id="Eq318"><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:math></inline-formula> (y-axis). We can see that the bounds on <inline-formula><mml:math id="Eq319"><mml:msup><mml:mi>p</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup></mml:math></inline-formula> are very tight at the extremes and widest near middling values. This is to be expected: when the valences of the individuals&#8217; predictions take middling values, there are many possible evidence histories consistent with the group&#8217;s credence &#8211; i.e., many different ways the evidence could overlap among the group&#8217;s members. In such cases, our approach is at its most permissive. It allows the group belief to be anywhere between these wide bounds. However, as the individual credences become sharper in their valence &#8211; i.e., closer to 0 or 1 &#8211; our approach narrows the range of rationally permissible credences the group could adopt.</p>
<fig id="F2">
<label>Figure 2</label>
<caption>
<p>Plot depicting bounds on the group credence, <inline-formula><mml:math id="Eq320"><mml:msup><mml:mi>p</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup></mml:math></inline-formula>, as a function of A and B&#8217;s credences, <inline-formula><mml:math id="Eq321"><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="Eq322"><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:math></inline-formula>, respectively, depending on different estimates about the degree of evidential overlap among the group&#8217;s members. All axes range from <inline-formula><mml:math id="Eq323"><mml:mn>0</mml:mn></mml:math></inline-formula> to <inline-formula><mml:math id="Eq324"><mml:mn>1</mml:mn></mml:math></inline-formula>.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="phimp-3416_babic-g2.png"/>
</fig>
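<p>As a sketch of how the bounds in (16) might be applied in practice, the following evaluates their endpoints for a given pair of credences, assuming equal resiliences and that the first credence is the smaller one. The function name is our own.</p>

```python
def group_bounds(p1, p2):
    # Endpoints of (16), assuming equal resiliences and p1 < p2:
    # maximal overlap in successes gives the lower bound (Case 1),
    # maximal overlap in failures gives the upper bound (Case 2).
    return p2 / (2 - p1), (p1 + p2) / (1 + p2)

lo, hi = group_bounds(0.3, 0.7)
print(round(lo, 3), round(hi, 3))  # 0.412 0.588: the simple average 0.5 lies strictly inside
```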
<p>We can further illuminate the normative constraints that our approach imposes by considering a few special cases of (16). Consider, first, the case where the individuals&#8217; credences are identical <inline-formula><mml:math id="Eq325"><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></inline-formula>.</p>
<p>Letting <inline-formula><mml:math id="Eq326"><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mi>p</mml:mi></mml:mrow></mml:math></inline-formula>, (16) reduces to,</p>
<disp-formula id="FD20"><label>(17)</label><mml:math id="Eq327"><mml:mrow><mml:mrow><mml:mfrac><mml:mi>p</mml:mi><mml:mrow><mml:mn>2</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mi>p</mml:mi></mml:mrow></mml:mfrac><mml:mo>&lt;</mml:mo><mml:msup><mml:mi>p</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>&lt;</mml:mo><mml:mfrac><mml:mrow><mml:mn>2</mml:mn><mml:mo>&#x2062;</mml:mo><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>+</mml:mo><mml:mi>p</mml:mi></mml:mrow></mml:mfrac></mml:mrow><mml:mo lspace="0em">.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>We can now plot these bounds as a two-dimensional slice of the above plot, as follows.</p>
<fig id="F3">
<label>Figure 3</label>
<caption>
<p>Plot depicting bounds on the group credence, <inline-formula><mml:math id="Eq380"><mml:msup><mml:mi>p</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup></mml:math></inline-formula>, as a function of A and B&#8217;s credences, <inline-formula><mml:math id="Eq381"><mml:mi>p</mml:mi></mml:math></inline-formula>, depending on different estimates about the degree of evidential overlap among the group&#8217;s members.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="phimp-3416_babic-g3.png"/>
</fig>
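<p>Even when the two credences coincide, (17) leaves a band of permissible group credences, and its width peaks at middling values. A quick numerical check (illustrative only):</p>

```python
# Evaluate the endpoints of (17) at a few shared credences p
for p in (0.2, 0.5, 0.8):
    lower = p / (2 - p)      # maximal overlap in successes
    upper = 2 * p / (1 + p)  # maximal overlap in failures
    print(p, round(lower, 3), round(upper, 3))
# 0.2 0.111 0.333
# 0.5 0.333 0.667
# 0.8 0.667 0.889
```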
<p>Consider two further cases. When <inline-formula><mml:math id="Eq328"><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula>,</p>
<disp-formula id="FD21"><label>(18)</label><mml:math id="Eq329"><mml:mrow><mml:mrow><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac><mml:mo lspace="0.170em">&#x2062;</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>&lt;</mml:mo><mml:msup><mml:mi>p</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>&lt;</mml:mo><mml:mfrac><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mrow><mml:mn>1</mml:mn><mml:mo>+</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mo lspace="0em">.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>And when <inline-formula><mml:math id="Eq330"><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:math></inline-formula>,</p>
<disp-formula id="FD22"><label>(19)</label><mml:math id="Eq331"><mml:mrow><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mn>2</mml:mn><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:mfrac><mml:mo>&lt;</mml:mo><mml:msup><mml:mi>p</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>&lt;</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn><mml:mo>+</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mn>2</mml:mn></mml:mfrac></mml:mrow><mml:mo lspace="0em">.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>By comparison, the simple average is <inline-formula><mml:math id="Eq332"><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>/</mml:mo><mml:mn>2</mml:mn></mml:mrow></mml:math></inline-formula> when <inline-formula><mml:math id="Eq333"><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="Eq334"><mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>+</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>/</mml:mo><mml:mn>2</mml:mn></mml:mrow></mml:math></inline-formula> when <inline-formula><mml:math id="Eq335"><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:math></inline-formula>. Thus, simple averaging here corresponds to only one of many compatible evidence histories.</p>
<fig id="F4">
<label>Figure 4</label>
<caption>
<p>Plot depicting bounds on the group credence, <inline-formula><mml:math id="Eq376"><mml:msup><mml:mi>p</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup></mml:math></inline-formula>, as a function of A and B&#8217;s credences, <inline-formula><mml:math id="Eq377"><mml:mi>p</mml:mi></mml:math></inline-formula>, depending on different estimates about the degree of evidential overlap among the group&#8217;s members. In the left panel, <inline-formula><mml:math id="Eq378"><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></inline-formula>, corresponding to (18), and in the right panel, <inline-formula><mml:math id="Eq379"><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:math></inline-formula>, corresponding to (19).</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="phimp-3416_babic-g4.png"/>
</fig>
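<p>The comparison with simple averaging can be made concrete with a small check. This is a sketch; 0.6 is an arbitrary illustrative value and the function names are our own.</p>

```python
def bounds_p1_zero(p2):
    # (18): with p1 = 0, the group credence lies in (p2 / 2, p2 / (1 + p2))
    return p2 / 2, p2 / (1 + p2)

def bounds_p2_one(p1):
    # (19): with p2 = 1, the group credence lies in (1 / (2 - p1), (1 + p1) / 2)
    return 1 / (2 - p1), (1 + p1) / 2

lo18, hi18 = bounds_p1_zero(0.6)
print(round(lo18, 3), round(hi18, 3))  # 0.3 0.375: the simple average 0.3 is the lower endpoint

lo19, hi19 = bounds_p2_one(0.6)
print(round(lo19, 3), round(hi19, 3))  # 0.714 0.8: the simple average 0.8 is the upper endpoint
```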
<p>The point, in short, is that there are two ways to make use of the evidence-first method. The first is where the individuals are able to share their total evidence with each other and discern the degree of its overlap. In such cases, our approach offers essentially a step-by-step recipe for getting to a full group distribution. The guiding idea is that we should use all available information, in the spirit of Good (<xref ref-type="bibr" rid="B14">1967</xref>), combined in a systematic way that is particularly careful not to double count overlapping evidence.<xref ref-type="fn" rid="n18">18</xref> Moreover, the aggregation method we propose is maximally fine-grained, or informative, in the sense that we identify the full distribution. Using that distribution, the group can then pull out any statistic of interest &#8211; such as a mean, a confidence interval, or any quantile.</p>
<p>The second is where the individuals do not know either what the evidence is or the extent of its overlap (or both). In such cases, they can use the individual predictions to construct upper and lower bounds on the rationally permissible group credence. So even though in this case we cannot tell the group where, precisely, to move to, we can tell the group which credences to avoid. This is similar to how Joyce (<xref ref-type="bibr" rid="B25">1998</xref>), for example, views the normative role of the accuracy-dominance framework. In that framework, if an agent has incoherent credences, we cannot tell her which coherent credences, precisely, she should adopt. But we can tell her that there are many credences which accuracy-dominate her own, and therefore that she should not remain where she is. Thus, our approach serves as a normative guide in both cases. However, the more information we have about the aggregation problem, the more firmly the evidence-first method delivers its standards of rationality.</p>
</sec>
<sec>
<title>7. Concluding Remarks</title>
<p>We have presented a general and flexible evidence-driven framework for combining beliefs. The method&#8217;s core virtues are that the group belief is update commutative and reflects the full range of information available to the group&#8217;s individual members while simultaneously taking into account any overlaps in their evidence. Beyond these technical virtues, our hope is to encourage a general rethinking of the belief combination problem: from a question of how to combine numerical credences to a question of how to identify and appropriately combine evidence.</p>
</sec>
</body>
<back>
<sec>
<title>Appendix</title>
<disp-quote>
<p><bold>Theorem 1 (Update Commutativity).</bold> Let <inline-formula><mml:math id="Eq336"><mml:mrow><mml:msub><mml:mi>&#x03C0;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> be <inline-formula><mml:math id="Eq337"><mml:mi>i</mml:mi></mml:math></inline-formula>&#8217;s prior distribution for <inline-formula><mml:math id="Eq338"><mml:mi>p</mml:mi></mml:math></inline-formula>, for <inline-formula><mml:math id="Eq339"><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn></mml:mrow></mml:mrow></mml:math></inline-formula>. Let <inline-formula><mml:math id="Eq340"><mml:mrow><mml:mi mathvariant="normal">&#x03A0;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> be the group prior, derived using (7). 
Let <inline-formula><mml:math id="Eq341"><mml:mrow><mml:msub><mml:mi>&#x03C0;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>p</mml:mi><mml:mo fence="false">&#x007C;</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> be <inline-formula><mml:math id="Eq342"><mml:mi>i</mml:mi></mml:math></inline-formula>&#8217;s posterior distribution for <inline-formula><mml:math id="Eq343"><mml:mi>p</mml:mi></mml:math></inline-formula>, obtained from <inline-formula><mml:math id="Eq344"><mml:mrow><mml:msub><mml:mi>&#x03C0;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> via Bayes&#8217; Rule, after learning new information <inline-formula><mml:math id="Eq345"><mml:mi>r</mml:mi></mml:math></inline-formula> and <inline-formula><mml:math id="Eq346"><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x2013;</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:math></inline-formula>, and let <inline-formula><mml:math id="Eq347"><mml:mrow><mml:mi mathvariant="normal">&#x03A0;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>p</mml:mi><mml:mo fence="false">&#x007C;</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> be the group posterior, obtained from <inline-formula><mml:math id="Eq348"><mml:mrow><mml:mi mathvariant="normal">&#x03A0;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> via Bayes&#8217; Rule, also after learning 
<inline-formula><mml:math id="Eq349"><mml:mi>r</mml:mi></mml:math></inline-formula> and <inline-formula><mml:math id="Eq350"><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x2013;</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:math></inline-formula>. Finally, with slight abuse of notation, let <inline-formula><mml:math id="Eq351"><mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x03A0;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo fence="false">&#x007C;</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow></mml:math></inline-formula> be the group posterior obtained if we first update <inline-formula><mml:math id="Eq352"><mml:mrow><mml:msub><mml:mi>&#x03C0;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> to <inline-formula><mml:math id="Eq353"><mml:mrow><mml:msub><mml:mi>&#x03C0;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>p</mml:mi><mml:mo fence="false">&#x007C;</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> and then combine <inline-formula><mml:math id="Eq354"><mml:mrow><mml:msub><mml:mi>&#x03C0;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>p</mml:mi><mml:mo fence="false">&#x007C;</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> using (7). Then,</p>
<disp-formula id="FD23"><label>(20)</label><mml:math id="Eq355"><mml:mrow><mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x03A0;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo fence="false">&#x007C;</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mi mathvariant="normal">&#x03A0;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>p</mml:mi><mml:mo fence="false">&#x007C;</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mrow><mml:mo lspace="0em">.</mml:mo></mml:mrow></mml:math></disp-formula>
</disp-quote>
<p>Proof: Suppose, first, we update and then combine. We know that each person&#8217;s prior is given by:</p>
<disp-formula id="FD24"><mml:math id="Eq356"><mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:msub><mml:mi>&#x03C0;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x007E;</mml:mo><mml:mtext>beta</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>,</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:msub><mml:mi>&#x03C0;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x007E;</mml:mo><mml:mtext>beta</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Using Bayes&#8217; Rule, we obtain the following individual posteriors:</p>
<disp-formula id="FD25"><mml:math id="Eq357"><mml:mrow><mml:mtable><mml:mtr><mml:mtd><mml:mrow><mml:msub><mml:mi>&#x03C0;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x007C;</mml:mo><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x007E;</mml:mo><mml:mtext>beta</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>,</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mrow><mml:msub><mml:mi>&#x03C0;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x007C;</mml:mo><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x007E;</mml:mo><mml:mtext>beta</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>.</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:math></disp-formula>
<p>where <inline-formula><mml:math id="Eq358"><mml:mi>d</mml:mi></mml:math></inline-formula> is now expressed in terms of <inline-formula><mml:math id="Eq359"><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:math></inline-formula>, <inline-formula><mml:math id="Eq359a"><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:math></inline-formula>, <inline-formula><mml:math id="Eq360"><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:math></inline-formula>, and <inline-formula><mml:math id="Eq361"><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:math></inline-formula>. Combining these, we get the following group distribution:</p>
<disp-formula id="FD26"><mml:math id="Eq362"><mml:mtable columnspacing="0pt" displaystyle="true" rowspacing="0.0pt"><mml:mtr><mml:mtd><mml:mrow><mml:mi mathvariant="normal">&#x03A0;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>p</mml:mi><mml:mo fence="false">&#x007C;</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mtd><mml:mtd columnalign="left"><mml:mrow><mml:mi/><mml:mo>&#x223C;</mml:mo><mml:mrow><mml:mtext>beta</mml:mtext><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:mrow><mml:mn>2</mml:mn><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:mrow><mml:mn>2</mml:mn><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo 
stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd/><mml:mtd columnalign="left"><mml:mrow><mml:mi/><mml:mo>=</mml:mo><mml:mrow><mml:mtext>beta</mml:mtext><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd/><mml:mtd columnalign="left"><mml:mrow><mml:mi/><mml:mo>=</mml:mo><mml:mrow><mml:mtext>beta</mml:mtext><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:msup><mml:mi>r</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>,</mml:mo><mml:mrow><mml:msup><mml:mi>n</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>&#x2013;</mml:mo><mml:msup><mml:mi>r</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd/><mml:mtd columnalign="left"><mml:mrow><mml:mrow><mml:mi/><mml:mo>=</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mi 
mathvariant="normal">&#x0393;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:msup><mml:mi>n</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x0393;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:msup><mml:mi>r</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>&#x2062;</mml:mo><mml:mi mathvariant="normal">&#x0393;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:msup><mml:mi>n</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mfrac><mml:mo lspace="0.330em">&#x2062;</mml:mo><mml:msup><mml:mi>p</mml:mi><mml:mrow><mml:msup><mml:mi>r</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo>&#x2062;</mml:mo><mml:msup><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2013;</mml:mo><mml:mi>p</mml:mi></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:msup><mml:mi>n</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>&#x2013;</mml:mo><mml:msup><mml:mi>r</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mrow><mml:mo>,</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <inline-formula><mml:math id="Eq363"><mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x0393;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>n</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x2013;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>!</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>.</p>
<p>Suppose, next, we first combine and then update, as in Equation (7). We know that each person&#8217;s prior is given by:</p>
<disp-formula id="FD27"><mml:math id="Eq364"><mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:msub><mml:mi>&#x03C0;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x007E;</mml:mo><mml:mtext>beta</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>,</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:msub><mml:mi>&#x03C0;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x007E;</mml:mo><mml:mtext>beta</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Combining these distributions, we obtain:</p>
<disp-formula id="FD28"><mml:math id="Eq365"><mml:mtable columnspacing="0pt" displaystyle="true" rowspacing="0.0pt"><mml:mtr><mml:mtd><mml:mrow><mml:mi mathvariant="normal">&#x03A0;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mtd><mml:mtd columnalign="left"><mml:mrow><mml:mi/><mml:mo>&#x223C;</mml:mo><mml:mrow><mml:mtext>beta</mml:mtext><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:mo>&#x2062;</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:mo>&#x2062;</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>&#x2013;</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd/><mml:mtd columnalign="left"><mml:mrow><mml:mrow><mml:mi/><mml:mo>=</mml:mo><mml:mrow><mml:mtext>beta</mml:mtext><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mrow><mml:mo lspace="0em">.</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>If we now update this group distribution, we get the following group posterior:</p>
<disp-formula id="FD29"><mml:math id="Eq366"><mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x03A0;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>p</mml:mi><mml:mo fence="false">&#x007C;</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x223C;</mml:mo><mml:mrow><mml:mtext>beta</mml:mtext><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mrow><mml:mo lspace="0em">.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>Note that <inline-formula><mml:math id="Eq367"><mml:mrow><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:msup><mml:mi>r</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="Eq368"><mml:mrow><mml:mrow><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:msup><mml:mi>n</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>&#x2013;</mml:mo><mml:msup><mml:mi>r</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup></mml:mrow></mml:mrow></mml:math></inline-formula>. Hence,</p>
<disp-formula id="FD30"><mml:math id="Eq369"><mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x03A0;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>p</mml:mi><mml:mo fence="false">&#x007C;</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x223C;</mml:mo><mml:mrow><mml:mtext>beta</mml:mtext><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:msup><mml:mi>r</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>,</mml:mo><mml:mrow><mml:msup><mml:mi>n</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup><mml:mo>&#x2013;</mml:mo><mml:msup><mml:mi>r</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mrow><mml:mo lspace="0em">.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>As a result,</p>
<disp-formula id="FD31"><mml:math id="Eq370"><mml:mtable columnspacing="0pt" displaystyle="true"><mml:mtr><mml:mtd><mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x03A0;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mi>p</mml:mi><mml:mo fence="false">&#x007C;</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:mtd><mml:mtd columnalign="left"><mml:mrow><mml:mrow><mml:mi/><mml:mo>=</mml:mo><mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x03A0;</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mrow><mml:mo fence="false">&#x007C;</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:mrow></mml:mrow><mml:mo lspace="0em">.</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p><styled-content style="text-align: right; display: block">&#x25A1;</styled-content></p>
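<p>The identity in (20) can also be checked numerically. The sketch below is ours and merely illustrative: the parameter values are hypothetical, and the combination rule (7) is encoded in a simplified form as &#8220;add the two beta parameter vectors and subtract the shared component once,&#8221; with the newly observed data treated as evidence shared by both individuals. Under these assumptions, updating then combining and combining then updating produce the same group posterior.</p>

```python
# Numerical check of Theorem 1 (update commutativity) in the beta-Bernoulli
# setting. Person i's prior is beta(a0 + ai, b0 + bi), where (a0, b0) is the
# common ur-prior and (ai, bi) is private evidence. The combination rule is
# modeled as: add the two parameter vectors and subtract the shared component
# once, so overlapping evidence is not double counted.

def combine(dist1, dist2, shared):
    """Combine two beta distributions, counting shared parameters once."""
    (a1, b1), (a2, b2), (ac, bc) = dist1, dist2, shared
    return (a1 + a2 - ac, b1 + b2 - bc)

def update(dist, r, n):
    """Bayes' Rule for a beta prior after r successes in n trials."""
    a, b = dist
    return (a + r, b + n - r)

# Hypothetical parameters (for illustration only).
a0, b0 = 1, 1          # common ur-prior
a1, b1 = 3, 2          # person 1's private evidence
a2, b2 = 4, 1          # person 2's private evidence
r, n = 5, 9            # newly observed shared data: 5 successes in 9 trials

prior1, prior2 = (a0 + a1, b0 + b1), (a0 + a2, b0 + b2)

# Path A: update each individual, then combine. The shared component of the
# posteriors is the common prior plus the jointly observed data.
post1, post2 = update(prior1, r, n), update(prior2, r, n)
path_a = combine(post1, post2, (a0 + r, b0 + (n - r)))

# Path B: combine the priors (shared component: the common prior alone),
# then update the group distribution on the same data.
group_prior = combine(prior1, prior2, (a0, b0))
path_b = update(group_prior, r, n)

print(path_a, path_b)  # both orders yield the same beta parameters
```

<p>Both paths reproduce the count in the proof above: every parameter appears exactly once in the group posterior, whichever operation comes first.</p>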
</sec>
<sec>
<title>Acknowledgments</title>
<p>We would like to thank Frederick Eberhardt, Christopher Hitchcock, Jan-Willem Romeijn, and Jonathan Weisberg for their valuable comments, as well as audiences at the UC Irvine Formal Epistemology Workshop, University of Bristol, and the California Institute of Technology.</p>
</sec>
<sec>
<title>Funding</title>
<p>Babic&#8217;s work was supported by a grant from the Social Sciences and Humanities Research Council of Canada (SSHRC), Insight Grant (number 435-2022-0325); a grant from the Government of Hong Kong, SAR China, University Grants Committee, General Research Fund (Project Code 17616324); and a grant from the HKU Musketeers Foundation Institute of Data Science, HKU100 Fund.</p>
</sec>
<fn-group>
<fn id="n1"><p><italic>Providence Bank v. Billings</italic>, 29 U.S. 514 (1830).</p></fn>
<fn id="n2"><p>Kinney (<xref ref-type="bibr" rid="B27">2020</xref>) moves away from averaging and uses stacking, which is a particular case of ensembles from machine learning. This is a different spirit of aggregation, but it would be hard to apply in cases where there is not much data or when, as often, the future is expected to be significantly different from the past &#8211; i.e., where so-called concept drift occurs (<xref ref-type="bibr" rid="B43">Widmer and Kubat, 1996</xref>).</p></fn>
<fn id="n3"><p>Throughout this project, we use evidence, information, data, and signal interchangeably. In the mathematical portions, it will be unambiguous what the evidence is.</p></fn>
<fn id="n4"><p>We use ball-and-urn examples throughout. While these are not the most exciting, they are flexible, the evidence is unambiguous (i.e., observed balls) and they allow us to neatly describe various group learning scenarios. In Section (5), we use a more realistic example to illustrate our approach.</p></fn>
<fn id="n5"><p>For Joyce, resilience is to be understood in terms of the extent to which a person&#8217;s credences change under new data. But resilience is not a purely diachronic concept &#8211; we will explain that it can also be captured from a time-slice-centric perspective, to borrow an expression from Moss (<xref ref-type="bibr" rid="B34">2015</xref>) and Hedden (<xref ref-type="bibr" rid="B18">2015</xref>).</p></fn>
<fn id="n6"><p>See also Babic et al. (<xref ref-type="bibr" rid="B3">2024</xref>) for a discussion of the beta Bernoulli model and its interpretation.</p></fn>
<fn id="n7"><p>This approximation follows from the Central Limit Theorem and is reasonably accurate if <inline-formula><mml:math id="Eq372"><mml:mrow><mml:mi>&#x03B1;</mml:mi><mml:mo>&gt;</mml:mo><mml:mn>5</mml:mn></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="Eq373"><mml:mrow><mml:mi>&#x03B2;</mml:mi><mml:mo>&gt;</mml:mo><mml:mn>5</mml:mn></mml:mrow></mml:math></inline-formula>.</p></fn>
<fn id="n8"><p>We explain and argue for this in more detail in Section (4.2), below. For now the exposition is informal to motivate the reader&#8217;s intuition.</p></fn>
<fn id="n9"><p>Russell et al. (<xref ref-type="bibr" rid="B39">2015</xref>) (Fact 4), Dietrich (<xref ref-type="bibr" rid="B10">2019</xref>) (Theorem 2), and Pettigrew (<xref ref-type="bibr" rid="B35">2019</xref>) (Theorem 3).</p></fn>
<fn id="n10"><p>Clemen (<xref ref-type="bibr" rid="B7">1987</xref>) proposes a similar approach but we expand on this work in several directions: by proving that the method is commutative under updating, by explaining when it is equivalent to weighted averaging, and by connecting it to likelihoodist approaches in philosophy.</p></fn>
<fn id="n11"><p>Laplace (<xref ref-type="bibr" rid="B28">1814</xref>). For recent defenses, see White (<xref ref-type="bibr" rid="B42">2010</xref>) and Pettigrew (<xref ref-type="bibr" rid="B35">2019</xref>). For the case of a proportion, the flat prior is also the maximum entropy prior (<xref ref-type="bibr" rid="B21">Jaynes, 1957a</xref>, <xref ref-type="bibr" rid="B22">b</xref>).</p></fn>
<fn id="n12"><p>By ur-prior, we refer to the stylized prior that an agent may hold before observing any evidence whatsoever &#8211; the Lewisian superbaby (<xref ref-type="bibr" rid="B16">H&#225;jek, ms</xref>).</p></fn>
<fn id="n13"><p>The flat beta(1, 1) prior is merely illustrative, though Babic (<xref ref-type="bibr" rid="B2">2019</xref>) argues it can be considered maximally safe under certain loss functions. They may instead have adopted maximally ignorant beta(0, 0) distributions, the so-called Haldane priors (<xref ref-type="bibr" rid="B36">Robert, 2007</xref>). Or perhaps, due to symmetry considerations such as those articulated in Zabell (<xref ref-type="bibr" rid="B46">2005</xref>), they adopt the invariant Jeffreys&#8217; prior, which in this case corresponds to a beta(1/2, 1/2) distribution (<xref ref-type="bibr" rid="B23">Jeffreys, 1946</xref>). Notice that all of these methods agree on one thing: namely, that <inline-formula><mml:math id="Eq374"><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="Eq375"><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:math></inline-formula> are very small, and in all three cases just a little bit of information leads to similar predictions. Our approach is compatible with any assumption about how to represent true ignorance, as long as that assumption is made explicit, so that we know which part of the resulting distribution is informed by the evidence and which part by prior commitments.</p></fn>
<fn id="n14"><p>This assumption is most notably associated with Harsanyi (<xref ref-type="bibr" rid="B17">1987</xref>) and Aumann (<xref ref-type="bibr" rid="B1">1987</xref>).</p></fn>
<fn id="n15"><p>In this sense, our weak common prior assumption might be described as a local or group-level impermissivism about the requirements of rationality with respect to an ur-prior. While one can find many defenses of both objectivism in the selection of priors (e.g., Williamson (<xref ref-type="bibr" rid="B44">2010</xref>)) and uniqueness at large (such as Greco and Hedden (<xref ref-type="bibr" rid="B15">2016</xref>)), we do not need to assume such a strong position, as even the weak/local impermissivist assumption is ultimately a modeling choice and may be relaxed.</p></fn>
<fn id="n16"><p>Truth conduciveness, following its meaning in the Condorcet jury theorems, implies that it is more probable that the person believes (i.e. &#8216;votes for&#8217;) a proposition if it is true than if it is false. See Romeijn and Atkinson (<xref ref-type="bibr" rid="B38">2011</xref>).</p></fn>
<fn id="n17"><p>Thanks to Frederick Eberhardt for raising this consideration.</p></fn>
<fn id="n18"><p>This is particularly important because if the individual members of the group are even slightly correlated, then the incremental value of additional members (i.e., of additional information) fades away rather quickly. For example, see <xref ref-type="fig" rid="F1">Figure (1)</xref> of Clemen and Winkler (<xref ref-type="bibr" rid="B8">1985</xref>).</p></fn>
</fn-group>
<ref-list>
<ref id="B1"><mixed-citation publication-type="journal"><string-name><surname>Aumann</surname>, <given-names>R. J.</given-names></string-name> (<year>1987</year>). <article-title>Correlated Equilibrium as an Expression of Bayesian Rationality</article-title>. <source>Econometrica</source> <volume>55</volume>(<issue>1</issue>), <fpage>1</fpage>&#8211;<lpage>18</lpage>.</mixed-citation></ref>
<ref id="B2"><mixed-citation publication-type="journal"><string-name><surname>Babic</surname>, <given-names>B.</given-names></string-name> (<year>2019</year>). <article-title>A Theory of Epistemic Risk</article-title>. <source>Philosophy of Science</source> <volume>86</volume>(<issue>3</issue>), <fpage>522</fpage>&#8211;<lpage>550</lpage>.</mixed-citation></ref>
<ref id="B3"><mixed-citation publication-type="journal"><string-name><surname>Babic</surname>, <given-names>B.</given-names></string-name>, <string-name><given-names>A.</given-names> <surname>Gaba</surname></string-name>, <string-name><given-names>I.</given-names> <surname>Tsetlin</surname></string-name>, and <string-name><given-names>R. L.</given-names> <surname>Winkler</surname></string-name> (<year>2024</year>). <article-title>Noisy Stereotypes</article-title>. <source>British Journal for the Philosophy of Science</source> <volume>75</volume>(<issue>1</issue>), <fpage>153</fpage>&#8211;<lpage>177</lpage>.</mixed-citation></ref>
<ref id="B4"><mixed-citation publication-type="book"><string-name><surname>Carnap</surname>, <given-names>R.</given-names></string-name> (<year>1950</year>). <source>Logical Foundations of Probability</source>. <publisher-loc>Chicago</publisher-loc>: <publisher-name>University of Chicago Press</publisher-name>.</mixed-citation></ref>
<ref id="B5"><mixed-citation publication-type="book"><string-name><surname>Carnap</surname>, <given-names>R.</given-names></string-name> (<year>1952</year>). <source>The Continuum of Inductive Methods</source>. <publisher-loc>Chicago</publisher-loc>: <publisher-name>University of Chicago Press</publisher-name>.</mixed-citation></ref>
<ref id="B6"><mixed-citation publication-type="journal"><string-name><surname>Christensen</surname>, <given-names>D.</given-names></string-name> (<year>2011</year>). <article-title>Disagreement, Question-Begging and Epistemic Self-Criticism</article-title>. <source>Philosophers&#8217; Imprint</source> <volume>11</volume>(<issue>6</issue>), <fpage>1</fpage>&#8211;<lpage>22</lpage>.</mixed-citation></ref>
<ref id="B7"><mixed-citation publication-type="journal"><string-name><surname>Clemen</surname>, <given-names>R. T.</given-names></string-name> (<year>1987</year>). <article-title>Combining Overlapping Information</article-title>. <source>Management Science</source> <volume>33</volume>(<issue>3</issue>), <fpage>373</fpage>&#8211;<lpage>380</lpage>.</mixed-citation></ref>
<ref id="B8"><mixed-citation publication-type="journal"><string-name><surname>Clemen</surname>, <given-names>R. T.</given-names></string-name> and <string-name><given-names>R. L.</given-names> <surname>Winkler</surname></string-name> (<year>1985</year>). <article-title>Limits for the Precision and Value of Information from Dependent Sources</article-title>. <source>Operations Research</source> <volume>33</volume>(<issue>2</issue>), <fpage>427</fpage>&#8211;<lpage>442</lpage>.</mixed-citation></ref>
<ref id="B9"><mixed-citation publication-type="journal"><string-name><surname>De Finetti</surname>, <given-names>B.</given-names></string-name> (<year>1937</year>). <article-title>La pr&#233;vision: ses lois logiques, ses sources subjectives</article-title>. <source>Annales de l&#8217;institut Henri Poincar&#233;</source> <volume>7</volume>(<issue>1</issue>), <fpage>1</fpage>&#8211;<lpage>68</lpage>.</mixed-citation></ref>
<ref id="B10"><mixed-citation publication-type="journal"><string-name><surname>Dietrich</surname>, <given-names>F.</given-names></string-name> (<year>2019</year>). <article-title>A Theory of Bayesian Groups</article-title>. <source>Nous</source> <volume>53</volume>(<issue>3</issue>), <fpage>708</fpage>&#8211;<lpage>736</lpage>.</mixed-citation></ref>
<ref id="B11"><mixed-citation publication-type="book"><string-name><surname>Dietrich</surname>, <given-names>F.</given-names></string-name> and <string-name><given-names>C.</given-names> <surname>List</surname></string-name> (<year>2016</year>). <chapter-title>Probabilistic Opinion Pooling</chapter-title>. In <string-name><given-names>C.</given-names> <surname>Hitchcock</surname></string-name> and <string-name><given-names>A.</given-names> <surname>H&#225;jek</surname></string-name> (Eds.), <source>Oxford Handbook of Probability and Philosophy</source>, Chapter 25, pp. <fpage>519</fpage>&#8211;<lpage>542</lpage>.</mixed-citation></ref>
<ref id="B12"><mixed-citation publication-type="journal"><string-name><surname>Dietrich</surname>, <given-names>F.</given-names></string-name> and <string-name><given-names>K.</given-names> <surname>Spiekermann</surname></string-name> (<year>2013</year>). <article-title>Independent Opinions? On the Causal Foundations of Belief Formation and Jury Theorems</article-title>. <source>Mind</source> <volume>122</volume>(<issue>487</issue>), <fpage>655</fpage>&#8211;<lpage>685</lpage>.</mixed-citation></ref>
<ref id="B13"><mixed-citation publication-type="journal"><string-name><surname>Easwaran</surname>, <given-names>K.</given-names></string-name>, <string-name><given-names>L.</given-names> <surname>Fenton-Glynn</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Hitchcock</surname></string-name>, and <string-name><given-names>J. D.</given-names> <surname>Velasco</surname></string-name> (<year>2016</year>). <article-title>Updating on the Credences of Others: Disagreement, Agreement and Synergy</article-title>. <source>Philosophers&#8217; Imprint</source> <volume>16</volume>(<issue>11</issue>), <fpage>1</fpage>&#8211;<lpage>39</lpage>.</mixed-citation></ref>
<ref id="B14"><mixed-citation publication-type="journal"><string-name><surname>Good</surname>, <given-names>I.</given-names></string-name> (<year>1967</year>). <article-title>On the Principle of Total Evidence</article-title>. <source>The British Journal for the Philosophy of Science</source> <volume>17</volume>(<issue>4</issue>), <fpage>319</fpage>&#8211;<lpage>321</lpage>.</mixed-citation></ref>
<ref id="B15"><mixed-citation publication-type="journal"><string-name><surname>Greco</surname>, <given-names>D.</given-names></string-name> and <string-name><given-names>B.</given-names> <surname>Hedden</surname></string-name> (<year>2016</year>). <article-title>Uniqueness and Metaepistemology</article-title>. <source>Journal of Philosophy</source> <volume>113</volume>(<issue>8</issue>), <fpage>365</fpage>&#8211;<lpage>395</lpage>.</mixed-citation></ref>
<ref id="B16"><mixed-citation publication-type="journal"><string-name><surname>H&#225;jek</surname>, <given-names>A.</given-names></string-name> (ms). <article-title>Staying Regular</article-title>. <italic>Unpublished Manuscript</italic>.</mixed-citation></ref>
<ref id="B17"><mixed-citation publication-type="journal"><string-name><surname>Harsanyi</surname>, <given-names>J.</given-names></string-name> (<year>1987</year>). <article-title>Games with Incomplete Information Played by Bayesian Players (Parts I, II, III)</article-title>. <source>Management Science</source> <volume>14</volume>(<issue>5</issue>), <fpage>159</fpage>&#8211;<lpage>182</lpage>, 320&#8211;334, 486&#8211;502.</mixed-citation></ref>
<ref id="B18"><mixed-citation publication-type="journal"><string-name><surname>Hedden</surname>, <given-names>B.</given-names></string-name> (<year>2015</year>). <article-title>Time-Slice Rationality</article-title>. <source>Mind</source> <volume>124</volume>(<issue>494</issue>), <fpage>449</fpage>&#8211;<lpage>491</lpage>.</mixed-citation></ref>
<ref id="B19"><mixed-citation publication-type="journal"><string-name><surname>Huttegger</surname>, <given-names>S. M.</given-names></string-name> (<year>2017a</year>). <article-title>Inductive Learning in Small and Large Worlds</article-title>. <source>Philosophy and Phenomenological Research</source> <volume>95</volume>(<issue>1</issue>), <fpage>90</fpage>&#8211;<lpage>116</lpage>.</mixed-citation></ref>
<ref id="B20"><mixed-citation publication-type="book"><string-name><surname>Huttegger</surname>, <given-names>S. M.</given-names></string-name> (<year>2017b</year>). <source>The Probabilistic Foundations of Rational Learning</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>.</mixed-citation></ref>
<ref id="B21"><mixed-citation publication-type="journal"><string-name><surname>Jaynes</surname>, <given-names>E. T.</given-names></string-name> (<year>1957a</year>). <article-title>Information Theory and Statistical Mechanics. I</article-title>. <source>Physical Review</source> <volume>106</volume>(<issue>4</issue>), <fpage>620</fpage>&#8211;<lpage>630</lpage>.</mixed-citation></ref>
<ref id="B22"><mixed-citation publication-type="journal"><string-name><surname>Jaynes</surname>, <given-names>E. T.</given-names></string-name> (<year>1957b</year>). <article-title>Information Theory and Statistical Mechanics. II</article-title>. <source>Physical Review</source> <volume>108</volume>(<issue>2</issue>), <fpage>171</fpage>&#8211;<lpage>190</lpage>.</mixed-citation></ref>
<ref id="B23"><mixed-citation publication-type="journal"><string-name><surname>Jeffreys</surname>, <given-names>H.</given-names></string-name> (<year>1946</year>). <article-title>An Invariant Form for the Prior Probability in Estimation Problems</article-title>. <source>Proceedings of the Royal Society of London: Series A</source> <volume>186</volume>(<issue>1007</issue>), <fpage>453</fpage>&#8211;<lpage>461</lpage>.</mixed-citation></ref>
<ref id="B24"><mixed-citation publication-type="book"><string-name><surname>Johnson</surname>, <given-names>W.</given-names></string-name> (<year>1924</year>). <source>Logic: Part III, The Logical Foundations of Science</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>.</mixed-citation></ref>
<ref id="B25"><mixed-citation publication-type="journal"><string-name><surname>Joyce</surname>, <given-names>J. M.</given-names></string-name> (<year>1998</year>). <article-title>A Nonpragmatic Vindication of Probabilism</article-title>. <source>Philosophy of Science</source> <volume>65</volume>(<issue>4</issue>), <fpage>575</fpage>&#8211;<lpage>603</lpage>.</mixed-citation></ref>
<ref id="B26"><mixed-citation publication-type="journal"><string-name><surname>Joyce</surname>, <given-names>J. M.</given-names></string-name> (<year>2005</year>). <article-title>How Probabilities Reflect Evidence</article-title>. <source>Philosophical Perspectives</source> <volume>19</volume>(<issue>1</issue>), <fpage>153</fpage>&#8211;<lpage>178</lpage>.</mixed-citation></ref>
<ref id="B27"><mixed-citation publication-type="journal"><string-name><surname>Kinney</surname>, <given-names>D.</given-names></string-name> (<year>2020</year>). <article-title>Why Average When You Can Stack: Better Methods for Generating Accurate Group Credences</article-title>. <source>Manuscript</source>.</mixed-citation></ref>
<ref id="B28"><mixed-citation publication-type="book"><string-name><surname>Laplace</surname>, <given-names>P. S.</given-names></string-name> (<year>1814</year>). <source>Th&#233;orie Analytique des Probabilit&#233;s</source>. <publisher-loc>Paris</publisher-loc>: <publisher-name>Courcier</publisher-name>.</mixed-citation></ref>
<ref id="B29"><mixed-citation publication-type="journal"><string-name><surname>Lichtendahl</surname>, <given-names>K. C.</given-names></string-name>, <string-name><given-names>Y.</given-names> <surname>Grushka-Cockayne</surname></string-name>, <string-name><given-names>V. R.</given-names> <surname>Jose</surname></string-name>, and <string-name><given-names>R. L.</given-names> <surname>Winkler</surname></string-name> (<year>2021</year>). <article-title>Extremizing and Anti-Extremizing in Bayesian Ensembles of Binary-Event Forecasts</article-title>. <source>Operations Research</source> <volume>70</volume>(<issue>5</issue>), <fpage>2597</fpage>&#8211;<lpage>3033</lpage>.</mixed-citation></ref>
<ref id="B30"><mixed-citation publication-type="journal"><string-name><surname>Lindley</surname>, <given-names>D.</given-names></string-name> (<year>1983</year>). <article-title>Reconciliation of Probability Distributions</article-title>. <source>Operations Research</source> <volume>31</volume>(<issue>5</issue>), <fpage>866</fpage>&#8211;<lpage>880</lpage>.</mixed-citation></ref>
<ref id="B31"><mixed-citation publication-type="journal"><string-name><surname>Lindley</surname>, <given-names>D. V.</given-names></string-name> and <string-name><given-names>L.</given-names> <surname>Phillips</surname></string-name> (<year>1976</year>). <article-title>Inference for a Bernoulli Process (A Bayesian View)</article-title>. <source>The American Statistician</source> <volume>30</volume>(<issue>3</issue>), <fpage>112</fpage>&#8211;<lpage>119</lpage>.</mixed-citation></ref>
<ref id="B32"><mixed-citation publication-type="book"><string-name><surname>List</surname>, <given-names>C.</given-names></string-name> and <string-name><given-names>P.</given-names> <surname>Pettit</surname></string-name> (<year>2011</year>). <source>Group Agency: The Possibility, Design, and Status of Corporate Agents</source>. <publisher-loc>Oxford</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="B33"><mixed-citation publication-type="journal"><string-name><surname>Moss</surname>, <given-names>S.</given-names></string-name> (<year>2011</year>). <article-title>Scoring Rules and Epistemic Compromise</article-title>. <source>Mind</source> <volume>120</volume>(<issue>480</issue>), <fpage>1053</fpage>&#8211;<lpage>1069</lpage>.</mixed-citation></ref>
<ref id="B34"><mixed-citation publication-type="book"><string-name><surname>Moss</surname>, <given-names>S.</given-names></string-name> (<year>2015</year>). <chapter-title>Time-Slice Epistemology and Action Under Indeterminacy</chapter-title>. In <string-name><given-names>J.</given-names> <surname>Hawthorne</surname></string-name> and <string-name><given-names>T. S.</given-names> <surname>Gendler</surname></string-name> (Eds.), <source>Oxford Studies in Epistemology</source>, Volume <volume>5</volume>, pp. <fpage>172</fpage>&#8211;<lpage>194</lpage>. <publisher-loc>Oxford</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="B35"><mixed-citation publication-type="book"><string-name><surname>Pettigrew</surname>, <given-names>R.</given-names></string-name> (<year>2019</year>). <chapter-title>On the Accuracy of Group Credences</chapter-title>. In <string-name><given-names>T. S.</given-names> <surname>Gendler</surname></string-name> and <string-name><given-names>J.</given-names> <surname>Hawthorne</surname></string-name> (Eds.), <source>Oxford Studies in Epistemology</source>, Chapter 6, pp. <fpage>137</fpage>&#8211;<lpage>160</lpage>. <publisher-loc>Oxford</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="B36"><mixed-citation publication-type="book"><string-name><surname>Robert</surname>, <given-names>C. P.</given-names></string-name> (<year>2007</year>). <source>The Bayesian Choice: From Decision Theoretic Foundations to Computational Implementation</source> (<edition>2</edition> ed.). <publisher-loc>New York</publisher-loc>: <publisher-name>Springer</publisher-name>.</mixed-citation></ref>
<ref id="B37"><mixed-citation publication-type="journal"><string-name><surname>Romeijn</surname>, <given-names>J.</given-names></string-name> (<year>2024</year>). <article-title>An Interpretation of Weights in Linear Opinion Pooling</article-title>. <source>Episteme</source> <volume>21</volume>(<issue>1</issue>), <fpage>19</fpage>&#8211;<lpage>33</lpage>.</mixed-citation></ref>
<ref id="B38"><mixed-citation publication-type="journal"><string-name><surname>Romeijn</surname>, <given-names>J.</given-names></string-name> and <string-name><given-names>D.</given-names> <surname>Atkinson</surname></string-name> (<year>2011</year>). <article-title>A Condorcet Jury Theorem for Unknown Jury Competence</article-title>. <source>Politics, Philosophy and Economics</source> <volume>10</volume>(<issue>3</issue>), <fpage>237</fpage>&#8211;<lpage>262</lpage>.</mixed-citation></ref>
<ref id="B39"><mixed-citation publication-type="journal"><string-name><surname>Russell</surname>, <given-names>J. S.</given-names></string-name>, <string-name><given-names>L.</given-names> <surname>Buchak</surname></string-name>, and <string-name><given-names>J.</given-names> <surname>Hawthrone</surname></string-name> (<year>2015</year>). <article-title>Groupthink</article-title>. <source>Philosophical Studies</source> <volume>172</volume>(<issue>5</issue>), <fpage>1287</fpage>&#8211;<lpage>1309</lpage>.</mixed-citation></ref>
<ref id="B40"><mixed-citation publication-type="book"><string-name><surname>Skyrms</surname>, <given-names>B.</given-names></string-name> (<year>1980</year>). <source>Causal Necessity</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Yale University Press</publisher-name>.</mixed-citation></ref>
<ref id="B41"><mixed-citation publication-type="journal"><string-name><surname>Strawson</surname>, <given-names>P.</given-names></string-name> (<year>1962</year>). <article-title>Freedom and Resentment</article-title>. <source>Proceedings of the British Academy</source> <volume>48</volume>, <fpage>1</fpage>&#8211;<lpage>25</lpage>.</mixed-citation></ref>
<ref id="B42"><mixed-citation publication-type="book"><string-name><surname>White</surname>, <given-names>R.</given-names></string-name> (<year>2010</year>). <chapter-title>Evidential symmetry and mushy credence</chapter-title>. In <string-name><given-names>T. S.</given-names> <surname>Gendler</surname></string-name> and <string-name><given-names>J.</given-names> <surname>Hawthorne</surname></string-name> (Eds.), <source>Oxford Studies in Epistemology</source>, Volume <volume>3</volume>, Chapter 7, pp. <fpage>161</fpage>&#8211;<lpage>186</lpage>. <publisher-loc>Oxford</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="B43"><mixed-citation publication-type="journal"><string-name><surname>Widmer</surname>, <given-names>G.</given-names></string-name> and <string-name><given-names>M.</given-names> <surname>Kubat</surname></string-name> (<year>1996</year>). <article-title>Learning in the Presence of Concept Drift and Hidden Contexts</article-title>. <source>Machine Learning</source> <volume>23</volume>(<issue>1</issue>), <fpage>60</fpage>&#8211;<lpage>101</lpage>.</mixed-citation></ref>
<ref id="B44"><mixed-citation publication-type="book"><string-name><surname>Williamson</surname>, <given-names>J.</given-names></string-name> (<year>2010</year>). <source>In Defense of Objective Bayesianism</source>. <publisher-loc>Oxford</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="B45"><mixed-citation publication-type="journal"><string-name><surname>Williamson</surname>, <given-names>J.</given-names></string-name> (<year>2019</year>). <article-title>Aggregating Judgments by Merging Evidence</article-title>. <source>Journal of Logic and Computation</source> <volume>19</volume>(<issue>3</issue>), <fpage>461</fpage>&#8211;<lpage>473</lpage>.</mixed-citation></ref>
<ref id="B46"><mixed-citation publication-type="book"><string-name><surname>Zabell</surname>, <given-names>S. L.</given-names></string-name> (<year>2005</year>). <source>Symmetry and its Discontents: Essays on the History of Inductive Probability</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>.</mixed-citation></ref>
</ref-list>
</back>
</article>