Article

Believing in Shmeliefs

Author
  • Neil Levy orcid logo (Macquarie University, University of Oxford)

Abstract

People report believing weird things: that the Earth is flat, that senior Democrats are subjecting kidnapped children to abuse, and so on. How can people possibly believe things like this? Some philosophers have recently argued for a surprising answer: people don’t believe these things at all. Rather, they mistake their imaginings for beliefs. They are shmelievers, not believers. In this paper, I consider the prospects for this kind of explanation. I argue that some belief reports are simply insincere, and that much of the evidence for shmeliefs can be explained by the content of the beliefs reported, rather than by the attitude people take to them. But some reported beliefs are good candidates for being shmeliefs. I consider how shmeliefs are acquired and sustained, and why they might be harmful despite not being seriously believed.

How to Cite:

Levy, N., (2024) “Believing in Shmeliefs”, Ergo an Open Access Journal of Philosophy 11: 18. doi: https://doi.org/10.3998/ergo.6158

People profess to believe all kinds of odd things. They profess to believe that the COVID vaccine is designed to kill much of the population, that initiatives to discourage driving are part of a plan to lock us in walled ghettos, that aliens regularly visit our planet to kidnap and experiment upon unsuspecting individuals, and that the movements of incredibly distant celestial bodies influence how our week is likely to go. These claims strike many observers as genuinely bizarre: not only at odds with the scientific consensus and the views of accredited experts, but at odds with the way we know the world works. How, we might wonder, can anyone seriously believe these things?

There are two major ways of responding to this query in the published literature. On the most influential view, people come to hold bizarre beliefs due to motivated reasoning (Bardon 2019), perhaps coupled with reduced cognitive capacity (Pennycook & Rand 2019). A second response, gaining in popularity recently, dissolves the puzzle by pointing out that beliefs that strike some agents as bizarre may be rationally held by other agents, with different background beliefs. One way to cash this thought out is in subjective Bayesian terms: while a belief might be bizarre for me, given my priors, it might be reasonable for you, given yours (Poth & Dolega 2023). We might worry about a regress here – wouldn’t it take bizarre priors to rationalize believing things like that? – but the worry can often be headed off. In particular, we might point to the role of testimony in rationalizing beliefs. Testimony from sources that agents consider authoritative can play a very significant role in justifying beliefs that might strike outsiders as bizarre (Levy 2021). Religious beliefs might be justified for one agent for broadly the same reasons as a scientific theory is justified for another: on the basis of testimony from sources that each regards as authoritative.
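
To make the subjective Bayesian point concrete, here is a minimal illustrative calculation; the numbers are my own, chosen purely for illustration and not drawn from Poth & Dolega:

$$\Pr(h \mid e) \;=\; \frac{\Pr(e \mid h)\,\Pr(h)}{\Pr(e \mid h)\,\Pr(h) + \Pr(e \mid \neg h)\,\Pr(\neg h)}$$

With the likelihoods held fixed at $\Pr(e \mid h) = 0.8$ and $\Pr(e \mid \neg h) = 0.1$, an agent whose prior is $\Pr(h) = 0.5$ ends up with a posterior of roughly $0.89$, while an agent whose prior is $\Pr(h) = 0.01$ ends up with a posterior of roughly $0.07$. Same evidence, same updating rule, very different conclusions: what looks bizarre relative to one set of priors can be an ordinary Bayesian response relative to another.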

These responses, separately or combined, almost certainly explain some cases of apparently bizarre beliefs.1 When beliefs that strike us as bizarre are supported by testimony from sources that might plausibly be taken to be authoritative by those who espouse them, and when conflicting testimony is pre-empted (Begby 2020), they may be held rationally. For example, those people who think that the risks of the COVID vaccine are greater than the benefits may adopt or maintain that belief because they take seriously the testimony of the minority of genuine experts who make this claim, and regard the mainstream as fatally compromised. Equally, when a belief is very widely held, as is belief in astrology, that very fact provides genuine (higher-order) evidence in its favor. Even combined, however, these considerations do not seem to explain all cases of bizarre beliefs.

Some bizarre beliefs are neither widely held nor espoused by sources that are reasonably regarded as authorities. Whereas the belief that the side-effects of the COVID vaccine are more dangerous than the disease it protects against is supported by testimony from genuine experts (Griffin 2021), the belief that the COVID vaccine contains microchips that will be activated by 5G networks is not. In the first case, testimony comes from physicians and scientists (albeit a small minority of each); in the second, it stems from social media and has been amplified almost exclusively by those who aren’t plausibly regarded as experts, even by the lights of those who repeat it (Sriskandarajah 2021). Moreover, whatever role might be played by its widespread adoption within some subculture in rationalizing its maintenance, that role cannot explain why it is adopted in the first place. An even clearer case, I suggest, is the professed belief in a flat Earth: this belief is held in the face of a consensus that is very deep and extremely broad, and in the face of evidence that is available to anyone who cares to look. Whether a belief strikes us as bizarre is an unreliable guide to whether it was acquired rationally, but we may justifiably wonder how beliefs like these could be sincerely held.

One possibility that has attracted attention recently is that people don’t believe these things at all. Or rather, there are far fewer true believers about than we might have thought. Instead, people may be engaging in signalling (Ganapini 2021; Mercier 2020), or reporting an attitude they mistake for a belief (Dieguez 2022; Levy 2022; Munro 2023), or reporting some other kind of attitude altogether (Van Leeuwen 2023). On these views, people are sincere in their reports, but they’re mistaken about what they believe. In this paper, I’ll assess the prospects for these non-doxastic explanations of bizarre belief reports.

I’ll argue that accounts in this family succeed in explaining some cases, but that such cases are not as common as the proponents of these views suggest. They underestimate or overlook the prevalence of wholly insincere belief reports, on the one hand, and of sincere reports of genuine beliefs, on the other. I aim to delineate these states from one another – to the extent that such delineations can be made – and to identify the best candidates for such states. In Section 1, I will briefly survey extant accounts of the nature of belief, focusing on those accounts that seem to have the resources to explain belief/behavior mismatches. I do so to demonstrate the attractiveness of what I will call the shmelief account. A shmelief is a representation of a state of affairs that has its origins in play: in playfully imagining what it would be like for the world to be like that. Such playful imagining is a normal part of our cognitive lives, but it can give rise to shmeliefs when internal and external conditions allow the person to become sufficiently absorbed in their fantasy that they lose track of the fact that it is merely imaginary, and mistake it for a belief. In Section 2, I will show that some apparent candidates for being shmeliefs are better explained by insincere report. In Section 3, I will argue that other candidates are genuine beliefs. Section 4 gives an account of how shmeliefs are acquired, while Section 5 aims to explain how they are maintained, as well as accounting for the distinctive ways in which they may be epistemically harmful despite not being believed.

1. Beliefs, Shmeliefs, and Other Doxastic Animals

Beliefs have a distinctive functional role: they are, in Ramsey’s (2013) famous metaphor, maps by which we steer. We form beliefs to keep track of how things are in the world, because accurate representations are needed to achieve our goals. This functional role entails two properties: genuine beliefs update in response to evidence, and they cause behavior when their content is relevant (and salient to agents). If the landscape has changed (perhaps there’s been a rock fall), a good map will represent that change, and the navigator will use that information to alter course.

These properties make what I will call recalcitrance cases puzzling. Yet such cases are apparently common. Recalcitrance cases come in two kinds: belief/behavior mismatches, when agents fail to act consistently with their professed belief, in circumstances which seem to call for such action, and failures to abandon or modify professed belief in the face of persuasive evidence that seems to call for such update. Of course, such failures can often be explained by inattention, lapses, cognitive load, and so forth. But when they occur regularly and are sustained over time, some more systematic explanation seems to be called for.

The literature on self-deception is a rich source of such cases and of theories that aim to explain them. Self-deceived agents, like the man who professes to believe his wife is faithful in the face of compelling evidence to the contrary, manifest a failure to respond to evidence, even to evidence they would regard as persuasive were their case described to them with identifying information removed. They may also manifest a failure to act consistently with their professed belief; indeed, some philosophers hold that such inconsistency is necessary for an agent to count as self-deceived. The agent who acts consistently with their belief report is not self-deceived, but self-deluded, they hold, with the difference consisting in how complete the deception is (see Funkhouser 2019 for discussion).

One influential view accounts for self-deception, with its (allegedly) distinctive recalcitrance, by partitioning the mind, and compartmentalizing belief (Davidson 2004). The agent might genuinely believe p, the proposition they report believing, while also believing not-p; since the beliefs are compartmentalized, the conflict is not detected by the agent. Further, compartmentalization can explain both kinds of recalcitrance: if one mental state drives verbal report and the other (some) behavior, the dissociation between them is predictable. Similarly, failures to respond to evidence might reflect the fact that one of the compartmentalized states is encapsulated from reasoning systems. Compartmentalization accounts are doxastic accounts of self-deception, inasmuch as they hold that the professed claim is a genuine belief. Other accounts of self-deception offer non-doxastic explanations of the professed claim; for example, it might be an avowal (Audi 1982), or the product of pretense (Gendler 2007).

Accounts of belief developed primarily for other purposes may also explain self-deception, as well as other recalcitrance cases. Eric Mandelbaum and colleagues have developed an account of belief that might be seen as a far more empirically-informed successor to the compartmentalization account (Bendaña & Mandelbaum 2021; Porot & Mandelbaum 2021). On this account, we form beliefs indiscriminately and effortlessly, regardless of how well supported they are and even in the face of direct conflict with our existing beliefs. The mind is fragmented, containing multitudes of conflicting beliefs. Rejection of beliefs takes effort, and we often lack the time and cognitive resources to exert that effort. We also won’t make that effort if doing so would threaten our sense of ourselves as good and rational agents (Mandelbaum and colleagues postulate what they call the psychological immune system to explain departures from Bayesian rationality, but their account could easily be recast in Bayesian terms). On this account, both kinds of recalcitrance are easily explained (if anything, it is the consistency and rationality we routinely exhibit that needs explaining).

We’ve already mentioned signalling accounts. On doxastic signalling accounts, the resulting beliefs are genuinely held, but not primarily in response to evidence. Rather, they are held to signal group belonging or commitment to some ideology (Funkhouser 2017; Sterelny 2015). Since the function of the state requires sensitivity to non-epistemic properties, this account predicts recalcitrance to evidence, though not belief/behavior mismatches. Other signalling accounts are non-doxastic; the state that has the function of signalling belonging or commitment is not a belief. Such accounts are able to explain both kinds of recalcitrance.2 Eric Schwitzgebel’s (2010; 2002) phenomenal dispositional account can also explain such cases. On his account, a belief is constituted by dissociable dispositions, to act and feel in certain ways. Since these dispositions are dissociable (though they tend to cluster together), we can expect cases of ‘in-between’ beliefs, where the person has the disposition to assert that p but to act somewhat inconsistently with it.

This sketch of the landscape is by no means exhaustive.3 Given this array, why think we need yet another account? The shmelief account is motivated, in very important part, by recalcitrance cases; it must earn its keep by doing a better job of explaining such cases, or by incurring lower costs elsewhere. One advantage of the shmelief account is that it does not confront the problem of explaining how people could possibly believe things like that (for example, that the Earth is flat, in the face of absolute scientific consensus and easily available evidence). The account does not face this problem, since it denies that shmelievers believe what they profess to believe. Of course, this is an advantage it shares with other non-doxastic accounts, such as the Ganapini/Mercier signalling account or Gendler’s pretense account (2007). The second advantage is one it alone has: it explains the etiology and the maintenance of these states better than rivals, as well as how they resist introspective detection as imaginative states rather than genuine beliefs. Shmeliefs arise through a distinctive kind of imaginative play, and their content makes them resistant to introspective unmasking.

The shmelief account has been developed most extensively by Sebastian Dieguez (2022). On his view, many of those who report believing conspiracy theories, and almost all of those who report believing that the Earth is flat, are engaged in a kind of epistemic cosplay (see also Blanchard 2023). In his metaphor, they’re fans of a view, not believers in it. Somewhat similar views have been defended, even more recently, by Marianna Ganapini (2022) and Daniel Munro (2023), and a closely related view has been developed very extensively by Neil Van Leeuwen (2023; 2014). There are important differences as well as commonalities between them. Most relevantly, whereas Dieguez and Ganapini maintain that shmelievers are unaware that they fail to believe what they report, Munro and Van Leeuwen maintain that shmelievers typically are aware “at some level” (Van Leeuwen 2023: 134) that they do not believe what they report. Since their awareness is tacit, though (indeed, Van Leeuwen maintains it is sufficiently recessive that under some conditions it may be occluded altogether), I will treat these theories together.4

In the rest of the paper, I will put forward a distinctive shmelief account, one centred around imaginative play. But before I develop this account, I will argue that there are fewer shmeliefs around than many of their proponents suggest. Some claims that strike observers as absurd are genuinely believed. Some such claims are not believed, but they’re not shmelieved either. I will go on to suggest that we need a new account of the origins and maintenance of shmeliefs, because they have a distinctive etiology and resist introspective unveiling in distinctive ways.

2. Explaining Away Shmeliefs: Sincerity

The true believer takes the belief attitude to a proposition or content. The shmeliever takes a different, more imaginative, attitude toward the same content, but mistakes their attitude for a belief (or has, at most, a recessive awareness that they don’t believe what they report). A shmelief is sincerely reported. But many belief reports are not sincere, and the attitudes reported are not candidates for shmelief.

Some proportion of respondents to questions take the opportunity to express their support for their side of politics or debates, and may respond insincerely to do so (Bullock & Lenz 2019; Hannon 2021). Much of the evidence for this ‘expressive responding’ or ‘partisan cheerleading’ is somewhat equivocal. The standard manipulation compares the responses of partisans given a financial motivation for accuracy to partisans without such a motivation, but the increase in accuracy might reflect the desire to gain the money by reporting what the respondent (rightly) thinks the experimenter regards as true, rather than by reporting a sincere belief. But Schaffner and Luks provide extremely strong evidence for insincerity (Schaffner & Luks 2018; see Ross & Levy 2023 for a replication).

Schaffner and Luks gave their participants unlabelled photos of the Trump and Obama inaugurations, and simply asked ‘which depicts a larger crowd?’ 15% of self-reported Trump voters chose the Trump inauguration photo, despite it being plain the crowd was smaller. There’s little likelihood they were responding sincerely. Generalizing the thought, we should be suspicious of the sincerity of many reports of bizarre beliefs, like flat Earth beliefs or belief in the QAnon conspiracy. These reported attitudes don’t cause much consequential behavior or update in response to evidence, simply because they’re not genuinely believed.

Sometimes, poll respondents and experimental participants report insincerely for a different reason: to engage in trolling (Lopez & Hillygus 2018). Typical opinion surveys and social science experiments aren’t especially interesting for participants, and respondents can’t be relied upon to be motivated by the truth (often their motivation is to earn a little money). It’s unsurprising if some of them answer randomly or make their own fun by reporting bizarre beliefs. The blogger Scott Alexander (2013) has jokingly identified what he calls the lizardman constant: the proportion of people who will reliably answer “yes” to the question “do lizardmen rule the world?” Alexander thinks that response is far more likely to reflect some sort of trolling than a sincere stab at answering the question. He estimates the constant at around 4%. I doubt there’s any such constant. Rather, the proportion of trolling responses is probably a function of the question asked and the population sampled: in some cases, it is very much higher than 4%.

There’s also another reason to doubt that many survey responses report sincere beliefs, though they may not be actually insincere. The majority of responses to questions probing political or conspiratorial beliefs are constructed on the fly: the respondent has no antecedent opinion about the matter. They’re not motivated to reflect deeply; rather, they’re motivated to move on quickly. They therefore answer by simply giving a response (typically, by choosing an option from a set presented to them) that seems to match their opinions, given what’s salient to them right now (Zaller & Feldman 1992). Often, that will produce a partisan response. The person might have no opinion about whether Hillary Clinton gave uranium to Russia – this may well be the first time they’ve heard the suggestion. Their answer reflects whether they think this is the sort of thing she’d do. For those without strong partisan leanings, responses might well differ from time to time, depending on what occurs to them in the moment. In that case, the reported attitude will be too ephemeral to guide behavior.

Of course, many reports of genuine beliefs are likewise generated on the fly. Unless we’ve thought about the question previously, and recall the belief we formed on that occasion, belief reports are generated in significant part by probing what we’re committed to.5 Most of our beliefs are dispositional: they are constituted by the entailments and implications of our attitudes (perhaps inter alia; see Aronowitz 2023). It’s tricky, to say the least, to settle how stable dispositions must be for the reports they cause to count as beliefs. One worry is that the question might itself be seen as suggesting an answer: the respondent might think they’re being asked whether they believe that Hillary Clinton gave uranium to Russia because there’s a serious suggestion that she did. Polls and surveys may thus produce beliefs that the person would not otherwise have had. If the person’s dispositions didn’t entail any answer to the question until (what they see as) further information was provided by the question, the belief reported was not one they possessed, even dispositionally, before they were asked. A particular subgroup might reliably report the belief when probed, yet its members might fail to act consistently with it, because being probed is a condition of its being elicited.

3. Explaining Away Shmeliefs: Content

The factual belief fallacy (Van Leeuwen 2018) is the fallacy of building an account of human psychology that overlooks the great mass of our ordinary beliefs and how they function. Philosophers commit the fallacy when they develop accounts of belief that take religious and ideological attitudes as their exemplars. That might be a mistake, but there’s an opposite error: the factual belief fallacy fallacy. This is the mistake of thinking that all genuine beliefs must closely resemble mundane beliefs. There’s good reason to expect that many genuine beliefs will be less responsive to evidence and cause behavior less broadly than mundane beliefs, with the differences between these beliefs and mundane beliefs explained by their content, rather than by the kind of attitude the person takes to that content.6

There’s nothing in the idea of a map that requires its content to be clear and distinct. If the territory that is mapped is topographically complex, a good map should mirror this, and then it will take effort and attention to guide our behavior in its light. Moreover, some maps are not good maps, without ceasing to be maps at all. The cliched treasure map of pirate fiction is hard to interpret because it’s so sparse and vague (“30 paces from the big palm tree”). It’s still a map: it represents a territory. Beliefs, too, may be complex, such that it takes effort to guide our behavior by them, and they may be vague. These features can explain why they don’t result in the behaviors and inferences characteristic of mundane beliefs, which are clear and distinct.

Laypeople tend to have sparse and vague maps when it comes to scientific and theological claims. As a consequence, it may require effort for them to govern their behavior and inference by reference to these representations, and even then the belief will be implicated only in those inferential transitions that the person grasps. Many people who believe that the theory of evolution is true can only govern a limited range of behaviors in the light of that belief. They might take the fact that a candidate for political office is a young Earth creationist as counting against them, but they might not see what’s wrong with claims about, say, viruses that are implicitly teleological or Lamarckian. Dieguez argues that propositions like “the unconscious is structured like a language, or that God is three persons in one” are vague, “not to say totally elusive,” and “strictly speaking lack any natural relation to the world and our way of interacting with reality” (2022: 90–91; translations mine), and that therefore they are not believed. That’s a mistake: their indistinct content is compatible with them counting as beliefs.

Van Leeuwen (Van Leeuwen 2014; 2023) cites evidence from the cognitive science of religion that seems to show that adherents to major religions (and some much less widespread ones too) don’t act consistently with their religious beliefs or don’t update their religious beliefs in response to evidence. Much of this experimental work concerns the prosocial behaviors called for by religions. Malhotra (2010) provides evidence that Christian believers are more likely to donate to charity – that is, to bid in an auction for charitable purposes – on Sundays than on other days of the week. Similarly, Xygalatas (2012) provides evidence that religious Hindus play economic games more altruistically when they play the game in the vicinity of a temple, while Duhaime (2015) found that Muslims donated more to charity in the immediate aftermath of the call to prayer than at other times of the day. Other experiments look to inference. Harris and Giménez (2005) and Astuti and Harris (2008) found that Catholic children and Vezo children and adults (respectively) were more likely to attribute an afterlife to recently dead people when they were described as dying in a context in which religion was salient. Barrett (1999) found that Christians understand religious narratives in ways that presuppose that God is not omnipresent and omniscient, despite professing to believe these things. He also found that people are unlikely to pray for genuine miracles: they’re far more likely to pray that someone’s cancer remits than that their severed leg grows back (Barrett 2001). But these recalcitrance cases, too, may be better explained by the content of the beliefs reported, rather than by the attitude the person takes to them.

Much of this evidence shows only that the religious guide their behavior by their reported beliefs when they’re salient. That’s good evidence these attitudes are not deeply embedded in the cognitive system, such that they cause behavior and inference automatically. It’s common to think that such embedding is a criterion for belief (Stich 1978), but that’s a mistake. Complex and counterintuitive representations resist the kind of embedding in cognitive systems that would allow them to drive inference automatically. If we make such embedding criterial for belief, we’ll turn out to believe only simple propositions. Even scientists might not count as believing in much of science: even experts don’t automatically guide their inference by reference to the science when they’re under cognitive load (Shtulman & Harrington 2016).

The experimental evidence also seems to suggest that religious representations compete with conflicting representations: hence theological incorrectness, where people’s intuitive concepts of God trump their theology (Slone 2004). Again, though, I don’t see a strong motivation for making the absence of conflict a criterion for belief. It’s not difficult to generate conflicting representations in ordinary people: think of visual illusions, or, indeed, movies. I think it’s fairly obvious that I don’t believe that the zombies will devour me, even though my heart rate accelerates and I startle at a sudden movement.

Van Leeuwen and Dieguez both fall victim to the factual belief fallacy fallacy in another way: in taking what Van Leeuwen (2014) calls “vulnerability to special authority” as a marker of shmelief.7 It’s true that mundane beliefs, like Dieguez’s paradigm “there is a beer in the fridge”, are acquired and updated through direct sense perception, rather than on someone’s say-so. But authority independence is not criterial for genuine belief. To function as a map, a belief must maintain some systematic relation to the world it maps, but that relation needn’t and often can’t be direct. The world is a big and complex place, and to find out about it – to find out things I urgently need to know – I must rely on others. I can’t just look and see why my car is making that funny noise, or whether that pain is a reason to worry. I must consult the experts. The beliefs I acquire in this sort of way are surely genuine (the doctor’s testimony might make a huge difference to how my life goes from then on). These attitudes update easily and automatically (suppose I found out my doctor had confused my x-rays with those of another patient). They’re maps. But they’re authority-dependent.8

The same sorts of considerations apply to conspiratorial beliefs and the like. Like scientific and religious beliefs, conspiracy theories, even bizarre ones, are often acquired on the basis of testimony from sources the agent regards as authoritative. Dieguez denies this, because he thinks that those who accept these theories know that those who promote them are not epistemic authorities, but this is a mistake. It’s true, as he notes (2022: 103), that they respond to factors like the prestige of the agents, but prestige carries information. Prestige is a marker of success, and there’s a reliable (if defeasible) link between being prestigious and having true beliefs (Levy 2021). The fact that a person has achieved success indicates that the map by which they have steered themselves is accurate (or more accurate than average, at any rate). Relying on testimony from the prestigious is relying on people one may reasonably take to be epistemically accurate.

It’s worth adding that using our assessment of the improbability of a hypothesis as a guide to its status as an attitude is an enterprise that needs careful handling. Above, we noted that Schaffner and Luks (2018) is a rare example of a wholly persuasive empirical demonstration of expressive responding: it strains credulity beyond breaking point to think that respondents who picked the Trump photo were reporting their genuine beliefs. But this kind of example is rare because what is held to be plausible can vary dramatically from person to person. At the same time, the prior probability of actual occurrences is sometimes quite low. What’s the prior probability of the official story of the 9/11 tragedy – that 19 men would successfully coordinate to hijack 4 planes, crashing two of them into the Twin Towers and bringing them down? It’s not obvious, to me at least, that that story is more intuitively plausible than the controlled demolition alternative urged by many conspiracy theorists (after all, there have been many false flag operations in US history, and Operation Northwoods was only one of several Department of Defense or CIA proposals to commit terrorist attacks against US interests to justify aggression against foreign powers). I don’t see any particular reason to think that conspiracy theories, generally, can’t be acquired through normal evidential mechanisms.

However they’re acquired, we ought to expect genuine conspiracy beliefs often to cause little behavior and exhibit a restricted vulnerability to evidence, for precisely the same sorts of reasons as religious beliefs: they’re acquired on the basis of testimony and their implications for behavior and inference aren’t obvious. Believing Catholics may outsource inference and behavior to priests and theologians: the trinity means and entails what they take it to mean and entail. Believing laypeople may outsource inference and behavior to scientists: the theory of evolution means and entails what they take it to mean and entail. Similarly, the believing conspiracy adherent may outsource inference and behavior to those they regard as authoritative: 5G has the risks that they tell us it has.

Candidates for the category of shmeliefs must be sincerely reported attitudes that were not unduly elicited by probes, that are relatively stable over time, and which often don’t cause behavior or update in the face of evidence when their implications for behavior and update are salient and plain to the person reporting the attitude. Sincere reports of religious, scientific, and conspiracy beliefs often don’t satisfy these conditions: there are fewer candidates for shmeliefs around than Van Leeuwen and Dieguez think. Religious beliefs are typically not good candidates for shmeliefs. The trinity belief doesn’t qualify, because it’s not obvious that it has a content – for the layperson – that’s relevant to how they should update in everyday life. Given the opacity of the idea to them, they can only update by ascending semantically (“that’s heresy, because it denies the trinity (whatever that is)”). An omniscience belief is a better candidate, since its implications are (somewhat) easier to grasp, but the failures of cognitive governance it exhibits might better be explained by competition from a conflicting intuitive representation. The same goes for all the other examples of religious credences: afterlife beliefs, the apparent context dependence of pro-social behavior, and so on.9

Many conspiratorial beliefs are also not good candidates, because the people who espouse them are trolling or engaging in expressive responding, or because their failures to exhibit the full functional profile of beliefs are better explained by content rather than attitude. All that said, we can be fairly confident that some flat Earth believers, QAnon adherents, and so on, are sincere and that their failure to act and infer consistently with their attitude is not explained by its content. It is obvious that the Pizzagate conspiracy theory, sincerely believed, called for action to free the children. Some conspiracy theorists and the like are simply sincere believers: they might, for instance, put their flat Earth belief to the test (see Weill 2022 for several cases). But others seem to be sincere shmelievers: while they take themselves to be believers – they may report that they were believers after exiting QAnon, for example (for cases, see Dickson 2020 and Venkataramakrishnan & Murphy 2021) – there are features of their behavior and their attitudes toward evidence that indicate that they were never, quite, believers.

4. The Road to Shmeliefs

Dieguez argues that shmeliefs are acquired effortfully. It’s not despite the fact that a shmelief is epistemically flawed that it’s acquired: it’s because of this fact. This, again, is a contrast with mundane beliefs: I acquire the belief that it’s raining because I’m presented with strong and immediate evidence for the proposition (water falling from the sky), and I update my belief as soon as the evidence supports doing so. All this happens automatically and effortlessly. But – according to Dieguez – I can’t acquire the belief that Donald Trump is working to expose widespread Satanism at the highest levels of government by way of exposure to immediate evidence, because there is no such evidence. Instead, everything points the other way. I must work to acquire the belief.

This, again, is an instance of the factual belief fallacy fallacy. Dieguez infers from the fact that mundane beliefs are typically acquired via sense perception – an effortless and automatic means of belief acquisition – to the conclusion that representations that are acquired via other routes probably aren’t acquired effortlessly and automatically. Dieguez is right that shmeliefs aren’t acquired on the basis of immediate evidence that supports them. If they were acquired in this sort of way, they’d simply be beliefs. But it doesn’t follow that they’re acquired effortfully. Not only is it a mistake to think that if representations are not acquired on the basis of immediate evidence, they can’t be acquired effortlessly and automatically, it’s also a mistake to think that effortlessness and automaticity must bundle together. Shmeliefs are acquired effortlessly, but they are not acquired automatically. We fall into shmeliefs, rather than acquiring them effortfully, but the route from first entertaining the representation to its coming to be a shmelief is long.

Being a shmeliever is akin to being under the spell of a representation: one has somehow to prevent oneself, or be prevented, from realizing that one doesn’t believe the representation. The effortful acquisition hypothesis makes it difficult to understand how one could fall under the spell of the representation. Bernard Williams influentially argued that we can’t believe at will, because my awareness that I had willed myself to believe a proposition would undermine its status as a belief (1973). I can’t simultaneously believe that this is a good map and that I have chosen to believe it is a good map; the latter entails that there’s no reason to think there’s a systematic relationship between the map and the world. Analogously, effortful acquisition would tend to undermine my shmeliefs. For me to shmelieve that p, I must take myself to believe that p, but were I aware that I had effortfully acquired the representation, I couldn’t bring myself to mistake it for a belief.10 Perhaps more importantly, I think there’s a more plausible route to the acquisition of shmeliefs: via a kind of play (Levy 2022).

Conspiracy theories are fun. Many non-believers (and non-shmelievers) spend a lot of time consuming them. Think of the large number of conspiracy theory themed entertainments, from The X-Files to The Americans to Homeland and well beyond. They have a satisfying narrative arc, they pit Davids against the Goliaths of big government or big pharma, and they may enchant the world, making it a place of secrets (and we the privileged ones, who peek behind the curtain). I think a common route into the acquisition of conspiracy-themed shmeliefs is via a kind of playful imaginative immersion: the person starts out playing “QAnon”, because it’s fun and because it has the added epistemic benefit of painting us – our side of politics – as the heroes and them as the enemies. Playing QAnon is a little like playing cops and robbers, with us as the heroic cops. The shmeliever loses track of the fact they’re playing, and drifts into shmelieving.

This account, which makes play central to shmelief, explains many of the characteristic features of at least some conspiracy theories – certainly not all such theories, but perhaps the features characteristic of those that are widely taken to be paradigmatic. They’re highly gamified, often turning on the need to solve puzzles. QAnon is the exemplar here, but it’s common for shmelievers to find ‘evidence’ in all kinds of weird places and in all kinds of weird forms (think, for example, of pointing to the names of COVID variants as evidence of a plot, on the grounds that they can be rearranged to spell out ‘media control’). It’s hard to construe this charitably as an attempt to adduce evidence. Rather, it’s wordplay, with the emphasis on the ‘play’. Playfulness is also apparent in how shmelievers present themselves in person and online. They share memes and jokes. They wear costumes (think of the QAnon Shaman), carry colorful banners, bedeck their cars with stickers and wear identifying t-shirts. But they’re not expressive responders: they’re not insincere. Not all of them, at any rate: there are cosplayers and trolls who tag along and create memes “for the lulz” (Kunzru 2020), who may be conscious they’re playing. But it’s not plausible that everyone is insincere. Passions run too high, and too many people are willing to pay too high a cost for their shmeliefs (think of the ‘big lie’ shmelievers who invaded the US Capitol). It’s unlikely that their shmeliefs would have motivated a long series of consequential actions had they retained consciousness that they weren’t beliefs. Dieguez compares conspiracy theorists to fans or hobbyists, and the comparison is insightful. But better yet is to see them as gamers. They’re playing conspiracy theory, and have lost track of the fact that it’s an imaginative exercise.

The phenomenon of imaginative immersion is familiar. Most of us can recall childhood experiences of becoming absorbed in imaginative play. While we’re so immersed, we lose track to some degree of the fact that we’re imagining, perhaps because our attention is elsewhere (Kampa 2018) or because we’re temporarily unconscious of the generative rules that govern our play (Chasid 2021). While we’re immersed, we behave consistently with our pretense. But under ordinary circumstances, not even children become so absorbed in their play as to lose track of the way the world really is. They’re easily brought back to reality: in fact, taking the pretense too seriously seems to shatter the illusion. They’re happy to play “tea time” with play dough cookies, but jarred back to reality if the experimenter takes a bite from a cookie (Weisberg 2013).

Whereas the mildest reminders that this is pretense break the spell for the child, the shmeliever is immersed to a degree that’s highly unusual. On the other hand, they’re not so immersed as to count as believers. They continue to treat evidence playfully, and they show some implicit awareness that it’s pretense: they are slow to act on their shmeliefs, and may even take care to avoid generating evidence against them. It’s relatively easy to see how a shmeliever might fall into their shmeliefs. Play is pleasurable, especially when you’re playing “hero.” Once you’re playing, you can easily become absorbed.

One advantage of the shmelief account developed here over the related views developed elsewhere – for example, non-doxastic accounts of the nature of self-deception – is that it explains this distinctive etiology. Shmeliefs arise by imaginative absorption (that’s not to say that analogous states might not arise in different ways, including, perhaps, some of the ways that these other theorists have suggested). It’s also worth pointing to an important difference between the shmelief account and non-doxastic accounts of self-deception. On the views developed by Audi (1982) and Gendler (2007), self-deceived agents do not believe the proposition they profess to believe, but they do believe a conflicting proposition. The man who self-deceptively reports “my wife is faithful” actually believes that she is having an affair. They suggest this conflicting belief might actually sustain the avowed claim, motivating a distinctive pattern of avoidance of evidence. On the account developed here, agents need not possess any belief about the state of affairs their ostensible belief concerns. Such a belief is certainly not needed to explain a pattern of evidence avoidance, because shmelievers often don’t avoid evidence. They may obsessively devour it, as further (pretend) evidence for how deep the conspiracy goes.

5. Maintaining Shmeliefs: The Motte and Bailey

As Porcher (2015) and Wei (2020) each note, it’s somewhat mysterious how agents engaged in pretense could fail to be aware of this fact for very long. Pointing to the phenomenon of imaginative immersion might go some way to solving that problem, but the repeated and vociferous reminders of reality that shmelievers confront require something more. Why are they so resistant to snapping out of their play? Shmelievers are peculiarly resistant to awakening from their playful immersion, I want to suggest, for several reasons. First, shmeliefs receive social support that most playful imaginings do not: your shmeliefs are endorsed, validated and propped up by the community you identify with. These supports might take the form of broadcasts and talkback radio, but they’re probably most effective when they’re personalised: when they engage you. This may happen in person or online. Second, there’s the absence of pushback from the world: the available evidence is consistent with the shmelief. The play dough cookie doesn’t taste the way a cookie should; tasting it provides evidence that we’re pretending. But shmeliefs are notoriously self-sealing (Napolitano 2021); because they typically postulate conspiracies, they predict that authorities will deny them, cover their tracks and plant misleading evidence.

Of course, the shmeliever will receive plenty of pushback, but it comes for the most part from people she doesn’t trust, thinks of as ‘in on it,’ or as ‘sheeple.’ Nevertheless, even the most unreflective shmeliever is probably sometimes prompted to reflect on her shmelief and on whether it’s well supported. Were this an ordinary episode of pretense, it would dissipate upon such probing: the person would come to see that it was imagination all along. But shmeliefs are resistant to introspection, for an interesting reason. Shmeliefs have beliefs at their core, beliefs that are (by the agent’s lights) well supported by evidence. The structure of shmeliefs makes it all too easy for introspection to lead to the conclusion that the shmelief is justified, because its belief core is well supported.

I suggest that shmeliefs bear something of the same sort of relationship to beliefs as baileys have to mottes. A motte-and-bailey castle consisted of a desirable area of land, the bailey, surrounding or next to a raised and fortified motte. The bailey was lightly defended by a ditch, which couldn’t withstand a serious assault. When they faced such an attack, the defenders could retreat to the motte, which gave the advantage of high ground and stone walls. A motte-and-bailey doctrine is an argumentative strategy (Shackel 2005), with the bailey corresponding to a big, exciting claim, and the motte to a far more defensible but much less exciting claim. Suppose the theorist defends the proposition that reality is socially constructed. This proposition can be given an exciting interpretation: one according to which Ramses II could not have died of tuberculosis, because the tubercle bacillus cannot be said to have existed prior to its 1882 discovery.11 But it can also be given a much less exciting and much more defensible interpretation: we understand ourselves in ways that are shaped by the concepts of our historically specific culture. The person deploys the motte-and-bailey argument when they equivocate between the radical and the more mundane interpretation, relying on the latter and the evidence for it when challenged. Here, of course, the exciting claim is the desirable bailey, while the much more defensible weak interpretation is the motte.

Now, think about how things look from the perspective of someone deploying a motte-and-bailey argument. Perhaps prompted by criticism, they might pause to ask themselves whether they really believe that reality is socially constructed. A few moments’ careful reflection might assure the theorist that they really believe it. “After all, there’s lots of evidence that self-understanding differs in all sorts of interesting ways across history and cultures. And think of institutions like money: definitely socially constructed”. Having satisfied themselves that they do believe reality is socially constructed, they feel themselves within their epistemic rights to carry on defending claims like science is true here and now, but magic is equally true elsewhere and elsewhen (itself a motte-and-bailey claim, equivocating between ‘true’ and ‘taken to be true’).

I suggest that shmeliefs are, together with genuine beliefs, constituents of motte-and-bailey structures, and this fact helps the shmeliever sustain both attitudes. Because shmeliefs are constituents of motte-and-bailey structures, the shmeliever can satisfy themselves of the epistemic bona fides of their attitude when reflective, because the motte is defensible: having satisfied themselves, they can then fall back into their shmelieving. Here, the bailey is the big, exciting claim: the election was stolen; senior Democrats are killing children in Satanic rituals; the COVID vaccine is designed to kill people. The motte is the defensible core: several people have been charged with illegal voting; some Democrats are openly atheists; pharmaceutical companies have engaged in manipulation of trials in the past and are making large profits from vaccines. When the person probes their beliefs, these kinds of claims are at the forefront of their minds (what else could they adduce as evidence for their shmelief, after all?), and they find them satisfactory. They’re then free to return to shmelieving.

If something like this is right, though, then the shmeliever is also a believer. It’s because the shmeliever is also a believer that they’re able to assure themselves of their epistemic responsibility. The beliefy core of a shmelief also helps to explain why shmeliefs can be expected to guide some actions, even when the person is attentive and reflective: for example, why they might gather evidence, defend the shmelief online and associate with other shmelievers. It’s notable, too, that the facts that justify the belief also genuinely justify the shmelief (albeit weakly), which further allows the person to reassure themselves. Dieguez suggests that shmeliefs are a “metacognitive disaster”, alienating the person from their own mental life (2022: 161). If I’m right, the manner in which shmeliefs resist unmasking via introspection is essential to their maintenance.

This account might also help to explain some of the harms of shmeliefs. A motte-and-bailey doctrine, like the castle it’s modelled on, has the motte protecting the bailey. Here, too, the beliefy core protects the exciting shmelief. But – unlike the familiar cases – the shmelief also protects the belief. The shmeliever is entranced by and concentrates on the exciting shmelief. She may therefore spend little time examining her genuine belief that climate change is not a threat to the world because she’s too absorbed in imagining that there’s a worldwide conspiracy of scientists to fabricate evidence for climate change, in order to impose socialism. Shmelievers are too busy playing “the terrific two-step of triviality” (Holbo 2007), hopping between the strong but absurd claim and the weak and plausible claim, to bring their beliefs into focus, and attempts to argue with them founder as they shift from belief to shmelief and back again. The belief causes consequential behavior in the usual way, and the shmelief helps protect it from careful examination.12

Perhaps this framework can be extended to other problematic doxastic states, such as monothematic delusions (given that they, too, exhibit only some of the properties of beliefs). Perhaps, for example, they, too, have a motte-and-bailey structure, albeit one kept in place by the deficits that traumatic brain injury involves. We’ve seen that religious beliefs are typically not good candidates for shmeliefs, but perhaps some combine a core genuine belief (perhaps a belief in a creator, sustained by the thought that the universe could not have come from nothing) with a shmelief in specific doctrines. Monothematic delusions, for their part, will count as shmeliefs if they are acquired through imaginative engagement (perhaps generated in response to the distinctive phenomenology that the lesions involved in delusions give rise to), are protected from unmasking as pretense by a motte-and-bailey structure, but give rise to recalcitrance in distinctive ways, betraying the fact that they are not processed in the ways characteristic of beliefs.

Conclusion

I’ve argued that shmeliefs are real, but that they’re less common than thinkers like Dieguez and Van Leeuwen suggest. Lots of genuine beliefs depart, in interesting ways, from the paradigms of mundane beliefs that guide their discussion of belief. It may be true that researchers too often neglect these mundane beliefs in their theorizing, but these theorists make the opposite mistake, of centring everything around them. Even genuine shmeliefs have beliefs at their core. There are many more beliefs, and many fewer shmeliefs, than proponents of shmelief accounts have thought up till now.

But shmeliefs are not a marginal phenomenon. They are not uncommon, and they play an important role in protecting false beliefs. They protect the false beliefs of individual shmelievers, preventing them from clearly focusing on their own genuine beliefs and the evidence for them. They also protect false beliefs at a social level, hijacking conversations and debates about, say, climate change, and sending them down rabbit holes of allegations of fraud. Whole groups of people, many of whom are believers and not shmelievers at all, may engage in the terrific two-step of triviality, defending the motte by shifting attention to the bailey and back again. It is in the interests of those who want to avoid genuine debates to promote shmeliefs, and perhaps such promotion is a deliberate strategy on the part of a new generation of merchants of doubt. Whereas the old merchants of doubt manufactured doubt through distraction and by focusing on genuine, but peripheral, uncertainties (Oreskes & Conway 2011), the new merchants of doubt may promote shmeliefs instead. That may explain why certain media outlets give them airtime and engage in ‘just asking questions’, and why, for example, the Republican party and its donors have not sought the removal of fringe elements, like Marjorie Taylor Greene.13

Perhaps the recognition that shmeliefs are quite common should lead to new ways of addressing false beliefs. Fact checking and debate can be expected to have little effect on shmelievers and on groups that contain and promote shmelievers. We might do better by expressing genuine incredulity: c’mon, I know you don’t believe that! Incredulity may be more effective, and more honest, than rebuttal. Perhaps we need to rouse people from their immersion in shmelief, before we can actually begin to address their genuine concerns.

Notes

  1. While the motivated reasoning account and the Bayesian updating accounts clearly postulate distinct mechanisms, and have different implications for the rationality of beliefs, it’s important to note that it is very difficult to tease these accounts apart empirically. Apparent cases of motivated belief formation may be explained by attributing to agents likelihood functions that rationalize the belief (Coppock 2023).
  2. Williams (2022) presents a semi-doxastic account of signalling for the purpose of manifesting ingroup commitment that is especially relevant here. On his account, groups may coalesce around absurd claims because they’re seen as absurd by outsiders: by committing publicly to an absurd belief, agents signal their willingness to incur reputational costs in the eyes of outsiders and therefore make defection more difficult. He argues that their function requires agents to treat them as genuine beliefs in many contexts, and therefore departures from the functional profile of belief will be limited for these sorts of signals.
  3. Indeed, not only is it a merely partial survey of those, broadly functionalist, theories of belief that promise to explain recalcitrance cases, it completely ignores non-functionalist accounts, like Declan Smithies’ (forthcoming) conviction account. It sets these accounts aside because they are not intended to explain recalcitrance. Readers sympathetic to such accounts might bracket the word ‘belief’ and assess the view offered here as a theory concerning how agents may acquire and commit to representations that play different functional roles.
  4. There are, of course, other significant differences across these theories. Ganapini emphasises narratives: she argues that when a narrative strikes us as plausible, this fact is evidence not that the narrative is itself true (it may, after all, be explicitly presented as fictional) but that the world actually works in the sorts of ways that are presupposed by the narrative. For example, the Pizzagate narrative is plausible to those who circulate it because it presupposes that Democrats are morally depraved. Ganapini’s account seems to better explain why some theories rather than others circulate and are enjoyed by those in particular milieus than how they come to be mistaken for shmeliefs; her account and mine complement each other. Munro argues that conspiracy theories and cults are both attractive because they postulate ‘secret knowledge’, or, at minimum, an esoteric epistemology, with its own rules for decoding events. I think that’s false with regard to conspiracy theories: contemporary conspiracy theorists are all too ready to share their theories and their evidence. Their methods of decoding texts and events are already familiar to all of us. Finding hidden anagrams or acronyms, or hints of a plan in popular movies, is not really an esoteric epistemology: it’s a game.
  5. This isn’t an alternative to the transparency account famously outlined by Evans (1982). Evans argues that we form beliefs by looking to the world, not ourselves: asked whether we believe that inflation will rise, we don’t probe our mental states but think about market forces and the economy. However, our perception of market forces and the economy is very significantly shaped by our attitudes, especially but not only by our Bayesian priors.
  6. The extent to which mundane beliefs always have the properties Van Leeuwen and Dieguez attribute to them may be questioned. Egan (2008) points out that cases in which we fail to draw obvious inferences from our mundane beliefs, or in which we guide our behavior by conflicting beliefs in different circumstances, are far from rare.
  7. Note that Van Leeuwen abandons vulnerability to special authority as a marker of shmelief in the definitive statement of his view (2023).
  8. Dieguez cites the illusion of explanatory depth as evidence that scientific attitudes are shmeliefs. We think we understand how flush toilets and toasters work, but we’re unable to explain the mechanism when asked. This fact shows that the content of the relevant attitudes is far from rich and detailed, but again, facts about content don’t entail facts about attitude. In fact, the best account of the illusion of explanatory depth explains it as arising not because people mistake shmeliefs for beliefs, but because they adaptively outsource to others for the content of their beliefs (Keil & Wilson 2000; Levy 2021). We do this all the time. Experts (or those we take to be experts) tell us that p (e.g. that evolution is true, that E = mc², that God is three persons in one) and we accept the claim, depending on others to provide it with more detailed content.
  9. I’m not claiming that no religious or scientific representations are shmeliefs, or even that shmeliefs may not be common among such states. My claim is epistemic, not metaphysical: we should expect to see failures of cognitive governance and evidential vulnerability among complicated or esoteric representations that are authority-dependent, so the fact that actual representations exhibit these properties is not good evidence that the representation is a shmelief.
  10. We can, of course, know that we have voluntarily acquired a belief and regard it as a belief, so long as we take it to be accurate now. What we can’t do is to think that the belief is now sustained voluntarily and that it is also an accurate representation of the world. Dieguez might maintain, therefore, that shmeliefs are acquired effortfully but that subsequently the shmeliever comes to be absorbed by their shmeliefs. That’s a coherent position, but my view is more parsimonious, since it cuts out the effortful acquisition stage and postulates only the imaginative absorption we know to be a common feature of imaginative engagement.
  11. This claim is attributed to Bruno Latour (2000). I make no claims concerning whether the attribution is fair.
  12. Ganapini (2022) also recognizes how something like shmeliefs may protect the core commitments of those who espouse them. On her account, for example, people may accept narratives about Democrat vote-rigging not because they genuinely believe the narratives, but because these narratives help sustain their genuine belief that Democrats are untrustworthy.
  13. Indeed, the ongoing Dominion litigation against Fox News has unearthed internal messages that reveal that network figures reported and amplified claims about the presidential election they knew to be false (Luscombe 2023).

Acknowledgements

I’m very grateful to two reviewers for Ergo, as well as the Area Editor, for very helpful comments. I am also grateful to Rob Ross, Basil Müller and Marianna Ganapini, as well as audiences at the Institut Jean Nicod and at Monash University for valuable feedback. Work on this paper was supported by grants from the John Templeton Foundation (grant #62631) and the Arts and Humanities Research Council (AH/W005077/1).

References

Alexander, Scott (2013). Lizardman’s Constant Is 4%. Slate Star Codex. Retrieved from https://slatestarcodex.com/2013/04/12/noisy-poll-results-and-reptilian-muslim-climatologists-from-mars/

Aronowitz, Sara (2023). A Planning Theory of Belief. Philosophical Perspectives, 37(1), 5–17.

Astuti, Rita, and Paul Harris (2008). Understanding Mortality and the Life of the Ancestors in Rural Madagascar. Cognitive Science, 32, 713–740.  http://doi.org/10.1080/03640210802066907

Audi, Robert (1982). Self-Deception, Action, and Will. Erkenntnis, 18(2), 133–158.

Bardon, Adrian (2019). The Truth About Denial: Bias and Self-Deception in Science, Politics, and Religion. Oxford University Press.

Barrett, Justin L. (1999). Theological Correctness: Cognitive Constraint and the Study of Religion. Method & Theory in the Study of Religion, 11(4), 325–339.  http://doi.org/10.1163/157006899X00078

Barrett, Justin L. (2001). How Ordinary Cognition Informs Petitionary Prayer. Journal of Cognition and Culture, 1(3), 259–269.  http://doi.org/10.1163/156853701753254404

Begby, Endre (2020). Evidential Preemption. Philosophy and Phenomenological Research, 102(3), 515–530.  http://doi.org/10.1111/phpr.12654

Bendaña, Joseph, and Eric Mandelbaum (2021). The Fragmentation of Belief. In Cristina Borgoni, Dirk Kindermann, & Andrea Onofri (Eds.), The Fragmented Mind, 78–107.  http://doi.org/10.1093/oso/9780198850670.003.0004

Blanchard, Joshua (2023, June 7). Playing the Expert: ‘Doing Your Own Research’ as Epistemic Cosplay. Retrieved 22 July 2023, from The Applied Epistemology Project at the University of North Carolina Chapel Hill, https://aep.unc.edu/2023/06/07/playing-the-expert-doing-your-own-research-as-epistemic-cosplay/

Bullock, John G., and Gabriel Lenz (2019). Partisan Bias in Surveys. Annual Review of Political Science, 22(1), 325–342.  http://doi.org/10.1146/annurev-polisci-051117-050904

Chasid, Alon (2021). Imaginative immersion, regulation, and doxastic mediation. Synthese, 199, 7083–7106.  http://doi.org/10.1007/s11229-021-03055-1

Coppock, Alexander (2023). Persuasion in Parallel: How Information Changes Minds about Politics. Retrieved from https://press.uchicago.edu/ucp/books/book/chicago/P/bo181475008.html

Davidson, Donald (2004). Problems of Rationality. Retrieved from https://philpapers.org/rec/DAVPOR

Dickson, E. J. (2020, September 23). Former QAnon Followers Explain What Drew Them In – And Got Them Out. Rolling Stone. Retrieved from https://www.rollingstone.com/culture/culture-features/ex-qanon-followers-cult-conspiracy-theory-pizzagate-1064076/

Dieguez, Sebastian (2022). Croiver: Pourquoi les croyances ne sont pas ce que l’on croit. Retrieved from https://livre.fnac.com/a17212225/Sebastian-Dieguez-Croiver

Duhaime, Erik P. (2015). Is the call to prayer a call to cooperate? A field experiment on the impact of religious salience on prosocial behavior. Judgment and Decision Making, 10(6), 4.

Egan, Andy (2008). Imagination, Delusion, and Self-Deception. In Tim Bayne & Jordi Fernandez (Eds.), Delusion and Self-Deception: Affective and Motivational Influences on Belief Formation (Macquarie Monographs in Cognitive Science) (263–280). Psychology Press.

Evans, Gareth (1982). The Varieties of Reference (1st edition). Oxford University Press.

Funkhouser, Eric (2017). Beliefs as Signals: A New Function for Belief. Philosophical Psychology, 30(6), 809–831.  http://doi.org/10.1080/09515089.2017.1291929

Funkhouser, Eric (2019). Self-Deception. Routledge.

Ganapini, Marianna (2021). The Signaling Function of Sharing Fake Stories. Mind and Language.

Ganapini, Marianna (2022). Absurd Stories, Ideologies, and Motivated Cognition. Philosophical Topics, 50(2), 21–39.

Gendler, Tamar Szabó (2007). Self-Deception as Pretense. Philosophical Perspectives, 21(1), 231–258.  http://doi.org/10.1111/j.1520-8583.2007.00127.x

Griffin, Drew, Rob Kuznia, Scott Bronstein, and Curt Devine (2021, October 20). They take an oath to do no harm, but these doctors are spreading misinformation about the Covid vaccine. Retrieved 27 February 2023, from CNN website: https://www.cnn.com/2021/10/19/us/doctors-covid-vaccine-misinformation-invs/index.html

Hannon, Michael (2021). Disagreement or Badmouthing? The Role of Expressive Discourse in Politics. In Elizabeth Edenberg & Michael Hannon (Eds.), Political Epistemology. Retrieved from https://www.academia.edu/40013480/Political_Disagreement_or_Partisan_Cheerleading_The_Role_of_Expressive_Discourse_in_Politics

Harris, Paul, and Marta Giménez (2005). Children’s Acceptance of Conflicting Testimony: The Case of Death. Journal of Cognition and Culture, 5, 143–164.  http://doi.org/10.1163/1568537054068606

Holbo, John (2007, April 11). When I hear the word culture … aw, hell with it. Retrieved from https://crookedtimber.org/2007/04/11/when-i-hear-the-word-culture-aw-hell-with-it/

Kampa, Samuel (2018). Imaginative Transportation. Australasian Journal of Philosophy, 96(4), 683–696.  http://doi.org/10.1080/00048402.2017.1393832

Keil, Frank C., and Robert A. Wilson (2000). The Shadows and Shallows of Explanation. In Frank C. Keil & Robert A. Wilson (Eds.), Minds and Machines (137–159). MIT Press.

Kunzru, Hari (2020, March 26). For the Lulz. New York Review of Books, 67(5), 4–8.

Latour, Bruno (2000). On the Partial Existence of Existing and Non-existing Objects. In Lorraine Daston (Ed.), Biographies of Scientific Objects (247–269). Retrieved from http://www.bruno-latour.fr/node/223.html

Levy, Neil (2021). Bad Beliefs: Why They Happen to Good People. Oxford University Press.

Levy, Neil (2022). Conspiracy theories as serious play. Philosophical Topics, 50(2), 1–19.

Lopez, Jesse, and D. Sunshine Hillygus (2018). Why So Serious?: Survey Trolls and Misinformation (SSRN Scholarly Paper No. ID 3131087).  http://doi.org/10.2139/ssrn.3131087

Luscombe, Richard (2023, February 17). Fox News hosts thought Trump’s election fraud claims were ‘total BS’, court filings show. The Guardian. Retrieved from https://www.theguardian.com/media/2023/feb/17/fox-news-hosts-dominion-lawsuit-trump-election-fraud-tucker-carlson-sean-hannity-laura-ingraham

Malhotra, Deepak (2010). (When) are religious people nicer? Religious salience and the “Sunday Effect” on pro-social behavior. Judgment and Decision Making, 5(2), 138–143.

Mercier, Hugo (2020). Not Born Yesterday: The Science of Who We Trust and What We Believe. Princeton University Press.

Munro, Daniel (2023). Cults, Conspiracies, and Fantasies of Knowledge. Episteme, 1–22.  http://doi.org/10.1017/epi.2022.55

Napolitano, M. Giulia (2021). Conspiracy Theories and Evidential Self-Insulation. In Sven Bernecker, Amy K. Flowerree, & Thomas Grundmann (Eds.), The Epistemology of Fake News (1st ed., 82–106).  http://doi.org/10.1093/oso/9780198863977.003.0005

Oreskes, Naomi, and Erik M. Conway (2011). Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. A&C Black.

Pennycook, Gordon, and David G. Rand (2019). Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition, 188, 39–50.  http://doi.org/10.1016/j.cognition.2018.06.011

Porcher, José Eduardo (2015). Is Self-Deception Pretense? Manuscrito, 37, 291–332.  http://doi.org/10.1590/S0100-60452015005000002

Porot, Nicolas, and Eric Mandelbaum (2021). The science of belief: A progress report. Wiley Interdisciplinary Reviews: Cognitive Science, 12(2), e1539.  http://doi.org/10.1002/wcs.1539

Poth, Nina, and Krzysztof Dolega (2023). Bayesian belief protection: A study of belief in conspiracy theories. Philosophical Psychology, 1–26.  http://doi.org/10.1080/09515089.2023.2168881

Ramsey, Frank Plumpton (2013). The Foundations of Mathematics and Other Logical Essays (R. B. Braithwaite, Ed.). Martino Fine Books.

Ross, Robert M., and Neil Levy (2023). Expressive Responding in Support of Donald Trump: An Extended Replication of Schaffner and Luks (2018). Collabra: Psychology, 9(1), 68054.  http://doi.org/10.1525/collabra.68054

Schaffner, Brian F., and Samantha Luks (2018). Misinformation or Expressive Responding? What an Inauguration Crowd Can Tell Us about the Source of Political Misinformation in Surveys. Public Opinion Quarterly, 82(1), 135–147.

Shackel, Nicholas (2005). The Vacuity of Postmodernist Methodology. Metaphilosophy, 36(3), 295–320.  http://doi.org/10.1111/j.1467-9973.2005.00370.x

Shtulman, Andrew, and Kelsey Harrington (2016). Tensions between science and intuition across the lifespan. Topics in Cognitive Science, 8(1), 118–137.

Slone, D. Jason (2004). Theological Incorrectness: Why Religious People Believe What They Shouldn’t. Oxford University Press.

Smithies, Declan (forthcoming). Belief as a Feeling of Conviction. In Eric Schwitzgebel & Jonathan Jong (Eds.), The Nature of Belief. Retrieved from https://philpapers.org/rec/SMIBAA-8

Sriskandarajah, Ike (2021, June 5). Where did the microchip vaccine conspiracy theory come from anyway? The Verge. Retrieved from https://www.theverge.com/22516823/covid-vaccine-microchip-conspiracy-theory-explained-reddit

Sterelny, Kim (2015). Content, Control and Display: The Natural Origins of Content. Philosophia, 43(3), 549–564.  http://doi.org/10.1007/s11406-015-9628-0

Stich, Stephen P. (1978). Beliefs and Subdoxastic States. Philosophy of Science, 45(December), 499–518.  http://doi.org/10.1086/288832

Van Leeuwen, Neil (2014). Religious credence is not factual belief. Cognition, 133(3), 698–715.  http://doi.org/10.1016/j.cognition.2014.08.015

Van Leeuwen, Neil (2018). The Factual Belief Fallacy. Contemporary Pragmatism, 15(3), 319–343.  http://doi.org/10.1163/18758185-01503004

Van Leeuwen, Neil (2023). Religion as Make-Believe: A Theory of Belief, Imagination, and Group Identity. Harvard University Press.

Venkataramakrishnan, Siddharth, and Hannah Murphy (2021, April 14). Quitting QAnon: why it is so difficult to abandon a conspiracy theory. Financial Times. Retrieved from https://www.ft.com/content/5715176a-03b3-4ee9-a857-c50298ffe9da

Wei, Xintong (2020). The role of pretense in the process of self-deception. Philosophical Explorations, 23(1), 1–14.  http://doi.org/10.1080/13869795.2020.1711960

Weill, Kelly (2022). Off the Edge: Flat Earthers, Conspiracy Culture, and Why People Will Believe Anything. Algonquin Books.

Weisberg, Deena Skolnick (2013). Distinguishing Imagination from Reality. In Marjorie Taylor (Ed.), The Oxford Handbook of the Development of Imagination (75–93). Oxford University Press.

Williams, Bernard (1973). Deciding to Believe. In Bernard Williams (Ed.), Problems of the Self (136–51). Cambridge University Press.

Williams, Daniel (2022). Signalling, commitment, and strategic absurdities. Mind & Language, 37(5), 1011–1029.  http://doi.org/10.1111/mila.12392

Xygalatas, Dimitris (2012). Effects of religious setting on cooperative behavior: A case study from Mauritius. Religion, Brain and Behavior, 3, 91–102.  http://doi.org/10.1080/2153599X.2012.724547

Zaller, John, and Stanley Feldman (1992). A Simple Theory of the Survey Response: Answering Questions versus Revealing Preferences. American Journal of Political Science, 36(3), 579.  http://doi.org/10.2307/2111583