
Assertion is weak

Authors: Matthew Mandelkern & Kevin Dorst

Abstract

Recent work has argued that belief is weak: the level of rational credence required for belief is relatively low. That literature has contrasted belief with assertion, arguing that the latter requires an epistemic state much stronger than (weak) belief---perhaps knowledge or even certainty. We argue that this is wrong: assertion is just as weak as belief. We first present a variety of new arguments for this, and then show that the standard arguments for stronger norms are not convincing. Finally, we sketch an alternative picture on which the fundamental norm of assertion is to say what you believe, but both belief and assertion are weak. To help make sense of this, we propose that both belief and assertion involve navigating a tradeoff between accuracy and informativity, and so it can make sense to believe/say something you only have weak evidence for, if it is informative enough.

Keywords: norms of assertion

How to Cite: Mandelkern, M. & Dorst, K. (2022) “Assertion is weak”, Philosophers' Imprint. 22(0). doi: https://doi.org/10.3998/phimp.1076

1 Introduction

Assertion is weak: you can rationally assert p even when you aren’t sure of p, don’t know p, and have low rational credence in p. That, anyway, is the contention of this paper—one that goes against prevailing philosophical orthodoxy.

Our arguments build on arguments of Hawthorne et al. 2016 (henceforth HRS), who influentially argued that belief is weak in the corresponding sense. If they are right, what are the broader ramifications? The most common conclusion is that belief plays a less important role in epistemology and philosophy of language than many have thought. Instead, it’s a stronger notion (full belief, certainty, etc.), not necessarily corresponding to the ordinary usage of ‘believe’ or ‘think’, that plays a central philosophical role (e.g. Stalnaker 1984, 2006; Williamson 2000, fc.). Indeed, HRS themselves argue that, while belief is weak, assertion is strong: it requires more than mere belief.

But an alternative reaction is available: perhaps belief—the weak attitude that our ordinary usage of ‘think’ and ‘believe’ corresponds to—should continue to play a central philosophical role, despite being weak. We argue for this conclusion with respect to assertion: assertion is closely tied to belief, but both are weak.

Our claim that assertion is weak goes against a large literature that argues that assertion is strong, requiring knowledge or justified (strong) belief.1 Despite many disagreements, there’s a near consensus in this literature that assertion is strong, in the sense that most, and perhaps all, of the following are false:

We think all of these claims are true—that is what we mean when we say that assertion is weak. But we’ll be happy if we only convince you that some of them are, for we’ll still have succeeded in convincing you that assertion is much weaker than many have thought.

Our plan is to argue that assertion is weak (§§2–3) and then respond to the two most influential arguments that assertion is strong—namely, that it’s often natural to follow up an assertion with ‘How do you know?’ (§4), and that Moorean sentences like ‘It’s raining but I don’t know it’ are always infelicitous (§5). We’ll conclude by sketching a positive theory of assertion, on which you should assert what you (weakly!) believe about the question under discussion.

2 The Weakness of Belief

We start by summarizing the case for the weakness of belief from Hawthorne et al. 2016, which we’ll build on.2 When HRS say that belief is weak, they mean that ⌜Tim believes p⌝ or ⌜Tim thinks p⌝ can be true even if Tim is rational and has relatively little evidence for, and relatively low credence in, p.3 This is of course fully consistent with there being other, stronger doxastic states that philosophers, or ordinary speakers, can get at with terms like ‘outright belief’ or ‘full belief’.4

The best way to motivate the weakness of belief is simply to look at examples. Start with a standard lottery case. Suppose there’s a fair 2,000-ticket lottery in your workplace. Miriam buys one ticket. As has long been noted (Kyburg 1964; Levi 1967), it seems perfectly rational to think that Miriam will not win, even though your credence that Miriam will not win is less than 1.

The pattern generalizes. Suppose that John is coming for dinner and is bringing takeout. Liam knows he likes Indian best, then Chinese, then Thai, and finally Italian. More precisely, imagine that Liam thinks it’s 55% likely that John will get Indian, 25% likely he’ll get Chinese, 15% likely he’ll get Thai, and 5% likely he’ll get Italian. If we ask Liam what he thinks John will bring, it seems fine for Liam to respond:

This is so even though Liam’s credence that John will bring Indian is far less than 1.

Even more strikingly, it seems you can sometimes believe things that have probability less than 1/2. Suppose that in the lottery, 1,500 of your coworkers (Miriam, Mark, Marge,…) bought one ticket each. Claire, meanwhile, bought the remaining 500 tickets. Although you’re by no means obligated to think Claire will win, if you’re asked, ‘Who do you think will win?’ or ‘How do you think the lottery will play out?’, it’s perfectly natural to reply:

This, despite the fact that your rational credence that Claire will win is only 1/4.5

In addition to such intuitions about cases, HRS also motivate the weakness of belief by noting that it generally sounds incoherent to assert that you have some doxastic inclination toward a claim but then deny belief in it:

By contrast, when ‘thinks’ is replaced with an uncontroversially strong attitude predicate like ‘know’ or ‘is sure’, the result is perfectly coherent. This suggests a close connection between uncontroversially weak attitudes like thinking p is probable, on the one hand, and believing p, on the other.

3 The Weakness of Assertion

HRS claim not just that belief is weak, but that belief is weaker than assertion, arguing against Entitlement Equality, which says, roughly, that belief and assertion have the same standards. More carefully, here’s our understanding of Entitlement Equality (slightly altered from HRS, with the goal of having a clearer target). Let your epistemic situation include everything that’s potentially relevant to what you are (epistemically) warranted in believing or asserting.6 Entitlement Equality says that an epistemic situation entitles you to believe p iff it entitles you to assert p.7

HRS’s central argument against Entitlement Equality comes from contrasts like these:

The most natural explanation for these contrasts is that assertion requires knowledge or complete confidence, while belief does not. Natural though it is, it’s wrong. As we’ll argue (§5), strong norms of assertion can’t explain the range of Moorean data. And whatever does explain them is consistent with the weakness of assertion.

But hold that thought. Before rebutting the core arguments that assertion is strong (§§4–5), we will give three positive arguments that it’s weak.

3.1 Cases

Our first argument is simple: people make weak assertions all the time, without any apparent infelicity or irrationality. This is not to say that weak assertions can’t be challenged—they can be—but, as we discuss in §3.1.2, that doesn’t show they’re irrational.

We will give a few cases that illustrate the point, and then explore some potential responses.

3.1.1 Argument

Start with Miriam. Recall that Miriam has one ticket in a fair 2,000-ticket lottery. Consider:

  • (7)

    1. [Lucy:] Miriam, are you going to start saving for retirement this year?

    2. [Miriam:] No. I entered the lottery instead.

    3. [Lucy:] That’s nuts. You’re not going to win the lottery!

Lucy’s response seems perfectly reasonable to us, despite Lucy’s rational credence in her assertion being less than 1, and despite the fact that she can’t know that it’s true (per orthodoxy, at least).

Likewise, recall that Liam is 55% confident that John will bring Indian, 25% confident of Chinese, 15% confident of Thai, and 5% confident of Italian. Consider:

  • (8)

    1. [Mark:] What will John bring for dinner?

    2. [Liam:] He’ll bring Indian. That’s his favorite, after all.

This just seems… ordinary. To bring this out, imagine how much weirder it would be for Liam to say ‘He’ll bring Chinese’—after all, Liam thinks Indian is a much more likely option. By contrast, (8b) is unremarkable, despite his fairly low credence in it.

Or recall Claire. She’s bought 500 of the 2,000 lottery tickets, while 1,500 coworkers each have one of the remaining tickets. Consider:

  • (9)

    1. [Lucy:] Who do you think is going to win the lottery?

    2. [Jill:] It’ll be Claire. She bought 500 tickets!

Jill’s assertion, again, seems okay—despite the fact that her rational credence is only 1/4.8

For another case, consider the 2021 Democratic mayoral primary in New York City. Thirteen candidates were on the ballot, and extrapolating outcomes from polls was complicated by the fact that the outcome was decided by ranked-choice voting, meaning it depended not just on voters’ preferred candidates but potentially on all of their conditional preferences. Plausibly, then, one’s rational credence in any particular outcome should have been relatively low. Nonetheless, it seems that there would be nothing wrong, normatively speaking, with a conversation like this:

  • (10)

    1. Eric Adams will win. He’s the establishment choice.

    2. No! It will be Maya Wiley. The progressives have coalesced around her.

    3. I disagree. Wall Street will make sure Ray McGuire wins.

More examples:
  • (11)

    1. Who’s gonna win the race this afternoon?

    2. Slippery Pete’s gonna win this one.

  • (12)

    1. How many people are coming to the party tomorrow?

    2. 17. (That’s how many people RSVP’d.)

  • (13)

    1. What will the weather be next Tuesday?

    2. It’s going to rain.

These exchanges are unremarkable, despite the fact that in normal circumstances it’s common ground that you can’t reasonably be sure (or claim to know) which horse will win, that all people who RSVP’d will come, and so on.

Our examples so far have been future-directed. This is unsurprising: we think the standards of assertion vary with context (§6), and contexts where we are talking about the future are ones where it’s usually common ground that little certainty is to be had, and hence where the standards for assertion will naturally be taken to be relatively low. Nevertheless, in some contexts, past-directed weak assertions are also fine.9 For a simple example, consider the mayoral primary case above. After the votes are in but before they are counted, we can imagine the very same argument going on in the past tense. (‘Who won?’ ‘Eric Adams, for sure.’ ‘No, Maya Wiley.’ Etc.) Or consider:

  • (14)

    1. [Child:] How did the dinosaurs go extinct?

    2. [Parent:] A meteor 20 miles wide struck the Earth, and the debris heated up the atmosphere to hundreds of degrees!

  • (15) [Discussing the Great Seattle Fire of 1889:]

    1. I wonder what it looked like.

    2. Well obviously it was cloudy. (It’s Seattle, after all.) So I bet the fire was eerily reflecting off the clouds.

If you’d like a recipe, variants on preface-paradox cases (Makinson 1965) can be used to generate natural cases of weak assertion. Suppose you’ve gotten 98 ‘yes’ RSVPs to the Met Gala. It’s a hot ticket, so for each ‘yes’, your rational credence that that guest will come is very high (perhaps you even know they’ll come). Nevertheless, it’s unlikely that all of them will show up. (If each guest has a 1% chance of not showing up, and the chances are mutually independent, it’s only 37% likely that all 98 will come.) But still, the following exchange seems fine:

  • (16)

    1. How many guests will be at the Met Gala this year?

    2. Ninety-eight.
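
To make the parenthetical probability claim above concrete, here is the calculation, using the case’s own figures (a 1% no-show chance per guest, with the chances mutually independent):

\[ \Pr(\text{all 98 guests come}) = 0.99^{98} \approx 0.374. \]

So although your credence in each individual conjunct (‘this guest will come’) is 0.99, your rational credence in the conjunction is only about 37%; yet answering ‘Ninety-eight’ still seems fine.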

Now, some think that you can know in cases like these. Most significantly, Holguín (2021) argues that knowledge is often very weak, so that all of the asserted propositions above are in fact known. This of course runs counter to orthodoxy in epistemology (e.g., Williamson 2000). Still, if you’re convinced that knowledge can be weak, then you might agree that assertion is weak in senses (i) and (iii)–(v) above, but not in sense (ii): assertion still requires (weak!) knowledge.

If so, then you agree with us that assertion is much weaker than it’s standardly taken to be. Still, we think even this weak view of assertion is too strong: there are cases where you can assert p even when you manifestly don’t know p even in a weak sense. For instance, suppose you are on a game show and you’re asked what year the Seven Years’ War started. You say:

  • (17) I don’t know. I know it was in the 1700s, but I don’t know more than that.

The host replies:
  • (18) Okay, well you’re out of lifelines, so you’ll just have to guess.

Even though it’s common ground that you don’t know the answer, it’s perfectly fine for you to go on to say your best guess—indeed, this seems like what you should do in this situation:
  • (19) Ok, hm. The war started in 1760.

This is a fine thing to say. But clearly, by any standards, weak or strong, you don’t know that the war started in 1760—you just said as much! So assertion does not require knowledge even in a very weak sense.

So our first argument is simply that weak assertions are perfectly acceptable in many contexts. No doubt some will be unconvinced; so before moving on to our next argument, we’d like to address a few reasons for skepticism.

3.1.2 Replies

To begin, you might think that while our judgments about these cases are right, these cases are simply not what philosophers have had in mind when they have talked about assertion. There are lots of other speech acts—predictions, guesses, speculations, etc.—which can be weak, and the cases we have given here are instances of those speech acts (see Benton and van Elswyk 2020). But assertion proper is strong.

On the one hand, we have no objection to finer-grained taxonomies of speech acts, and we have no attachment to the word ‘assertion’ to describe the category of speech act we have been investigating; call them ‘sayings’ if you’d prefer. But we think that by far the most natural way of drawing these finer-grained distinctions is normative. For instance, we might say that assertions are distinguished by being governed by such-and-such strong norms. But if this is simply stipulated as a taxonomical criterion, then this is no way of defending the claim that assertion is strong. If, on the other hand, this is being argued as a substantive claim, then some independent criterion must be given to distinguish assertions proper from the speech acts we have been considering so far. But we know of no such criterion.

Still, you might think that there is no reason to lump together all these speech acts—roughly, all communicative uses of declarative sentences. But we think they in fact constitute an interestingly unified class of speech acts. For even if we can draw finer-grained distinctions, there are two rather striking normative facts which bind together all communicative uses of declarative sentences. First, all speech acts across this class are subject to a kind of anti-Moorean constraint: conjoining any of them with expressions of uncertainty (⌜p, but I don’t know p⌝) leads to striking infelicity (see §5). Second, across this whole class, it is generally unacceptable to say something unless you think it is true (see §6).

In short: we have a clear object of study (communicative uses of declarative sentences), delineated in non-normative terms, which are subject to unified norms. Our project is figuring out what those norms are, and our central claim is that they are weak. This is consistent with other projects which fine-grain that class of speech acts; but unless that fine-graining is done non-normatively, that is no way at all of saving the claim that assertion is strong.

Another worry: you might accept that people say things like this all the time, but doubt the evidential value of that observation—perhaps they’re just violating the norms? Taken too generally, this reply also undermines the methodology of the arguments for strong norms of assertion. Those arguments are built on patterns about what people say in everyday conversations. Either we take those patterns seriously, or we don’t. If we do, then data like the above about the ordinariness of weak assertions must be taken seriously, too. If we don’t, then we can dismiss this weak-assertion data—but, on pain of cherry-picking, we must also dismiss the data that support strong norms of assertion. We are not sure where that kind of methodological skepticism would land, vis-à-vis the norms of assertion, and we will not pursue that question. Rather, in line with the existing literature, we’ll continue to take seriously how people speak, criticize each other, and respond to criticism, as evidence about what the norms of assertion are.

But, even if you agree with this methodological point, you might still think that in these particular cases, people simply are violating the norms of assertion. Aside from the general arguments for strong norms which we will discuss below, however, we can only see one reason for thinking this: namely, that pushback is possible in these cases. That is, we can almost invariably criticize weak assertions by raising counter-possibilities. For instance, when Lucy tells Miriam she’s not going to win the lottery, Miriam can reply: ‘You don’t know that! My ticket is as good as anyone else’s. I could certainly win.’

Does this show that Lucy’s assertion was unwarranted? No. Aside, perhaps, from mathematical or logical contexts, it’s always possible to raise counter-possibilities, as we know from the literature on skepticism (Lewis 1970, 1996; DeRose 1995). If Miriam says ‘My car is parked around the block’, you can push back: ‘Are you absolutely certain? What if it was stolen? What if it was towed? What if you’re a brain in a vat being prodded by scientists?’ Most think that the possibility of this kind of conversational move doesn’t show that Miriam shouldn’t have said what she said. So the felicity of pushback doesn’t show that weak assertions are unwarranted.

Moreover—as usual—pushback against the pushback is possible. In the lottery case, Lucy can say ‘No, you’re not going to win, Miriam. You can’t latch onto marginal possibilities like that’. In the car case, Miriam can say: ‘Well, that’s very unlikely, so I’m going to stick with my original claim: my car is parked around the block.’ Patterns in pushback, and in pushback against pushback, don’t show an asymmetry between weak assertions and strong ones. Of course, the stronger your evidence for p, the more ammunition you’ll have to push back against pushback. But that doesn’t show that there is a categorical distinction.

3.2 Elicitation

Our second argument that assertion is weak is based on how we elicit assertions.

3.2.1 Argument

Our key observation is that asking someone what they think about some subject matter X is a standard way to elicit outright assertions about X.

For example, consider a primary election with five viable candidates. Marta is a close watcher of politics. She doesn’t know what will happen—no one does. Still, the following exchange is perfectly natural:

  • (20)

    1. [Latif:] What do you think will happen in the race?

    2. [Marta:] Joe will win.

Note that while this case is future-directed, that is inessential. If voting has concluded but we don’t know the outcome, Latif can ask: ‘What do you think happened in the race?’ and Marta can reply: ‘Joe won. [He’s the union candidate, after all!]’

Likewise, coming back to the case of John getting takeout: Liam doesn’t know what John will do. Still, it’s perfectly fine to ask:

  • (21) What do you think John will bring for dinner?

And it’s perfectly natural for Liam to respond:
  • (22) He’ll bring Indian.

This is so even though Liam doesn’t know that John will bring Indian. What this adds to the cases above: we think that even the most hardened partisan of strong assertion has to admit that asserting what you think about X is a reasonable reaction to a question about what you think about X! This is just the most ordinary kind of information exchange. But if you agree that these exchanges are unobjectionable—and you agree that these are assertions (more on this in a moment)—then you agree that assertion is weak.

By contrast, it’s actually somewhat weird to elicit assertions by asking someone what they are sure or certain of (Turri 2010b). This is especially pronounced in circumstances like the election, where it’s common ground that certainty is not to be had: ‘What are you certain of about the election?’ is, of course, a coherent question, but it’s not the normal way to invite someone to make assertions about it, and will probably elicit completely uninteresting claims—that the election will take place, etc.—which fall far short of what someone might be prepared to say in response to the question of what they think about the election.

In sum: if the norm of assertion is strong, it’s puzzling why it’s so natural to elicit assertions with questions of the form ‘What do you think?’.

3.2.2 Replies

A natural response to this argument is to deny that the above responses to ‘What do you think…’ questions ((20b) and (22)) are asserting what they appear to assert. When you ask someone what they think about p—the response goes—you’re asking about their beliefs; so when they respond by saying p, they are not asserting p at all. Instead, what they are asserting is ⌜I believe p⌝, with ‘I believe’ simply elided (cf. Benton and van Elswyk 2020). So e.g., (20b) is not an assertion that Joe will win, but an assertion about Marta’s beliefs: ‘I think that Joe will win’. If so, the above exchanges don’t tell us anything at all about the strength of assertion.

In fact, you might be inclined to generalize this response: perhaps all our cases which appear to be weak assertions of p are really assertions of ⌜I think that p⌝ or ⌜Probably p⌝, with the hedge simply unpronounced.

But this is wrong. We have two arguments against it. The first is that Moorean continuations are acceptable after overtly hedged assertions, but not after unhedged ones. Compare:

  • (23)

    1. [Latif:] Who do you think will win?

    2. [Marta:] I think that Joe will win, but I don’t know.

    3. [Marta:] # Joe will win, but I don’t know.

(23c) is as unacceptable as any Moorean assertion (a fact we’ll return to in §5). But if (23c) conveyed the same content as (23b)—with ‘I think that’ simply unpronounced—this would be inexplicable. So ‘Joe will win’ in this context does not have the same content as ‘I think that Joe will win’.

This is enough to vitiate this response, we think. For completeness, we have a second argument that ellipsis of the proposed kind simply is not possible. To see this, suppose that, instead of (20a), Marta is asked a third-personal belief question like (24):

  • (24) What does Mark think will happen in the race?

It’s strange to respond to (24) with (25):
  • (25) ??Joe will win.

(25) doesn’t seem responsive to the question (24): instead, it seems to be simply an assertion that Joe will win. In other words, the parse ‘Mark thinks that Joe will win’ seems unavailable for (25).10 This suggests that the parallel parse ‘I think that Joe will win’ is likewise unavailable for (20b), and hence that (20b) is not an elided belief ascription after all.

Ellipsis is possible in cases like this, but only with an overt complementizer. Hence, (26) is easily interpreted as an answer to (24), with the form ‘Mark thinks that Joe will win’, and it is not easily interpreted as an assertion about who will win:

  • (26) That Joe will win.

These contrasts suggest that elision of the proposed kind is not available for assertions without an overt complementizer. If this is right, it provides a second argument that the deflationary response we are considering is wrong: a claim like (20b) is an assertion about who will win, not about Marta’s beliefs after all.

A second potential reply notes that elicitation with ‘What do you think?’ isn’t always possible. In particular, when you think A knows about X, it is weird to ask what they think about X. If A saw what happened at the game, it is strange to ask A ‘Who do you think won?’ But this is readily explained on standard theories of anti-presupposition: since ‘knows’ is a presuppositional competitor of ‘thinks’, ⌜S thinks p⌝ leads to the inference that ⌜S knows p⌝ is false (Percus 2006); and this inference projects out of questions, since presuppositions project out of questions. So ‘What do you think about X?’ leads to the inference that the addressee doesn’t know about X, and so it will be peculiar to ask what A thinks about X if it is common ground that A knows about X. (This also explains why ‘What do you think?’ elicitations are more natural in future-directed contexts, where we often don’t know. But, as we saw, they can also be used to elicit assertions about the past if you think the addressee doesn’t know.)

3.3 Attitude Reports

Our final argument is based on what kinds of attitude reports are licensed by assertions.

3.3.1 Argument

First, when A asserts p, what are we inclined to conclude? The strong assertion story says that (ceteris paribus—assuming norms are satisfied) we can conclude that A is sure that p, or certain that p, or takes themselves to know p (depending on the strong norm theory). The weak assertion story says that we can only conclude that A thinks that p. And the prediction of the weak story is correct.

To see this, suppose that Ezra overhears this exchange:

  • (27) [Mark, on the phone with Liam:] What will John bring for dinner? [Listens.] Okay, thanks. [Hangs up.]

Suppose that, unbeknownst to Ezra, Liam said ‘John will bring Indian’. Now Ezra asks Mark what he found out from Liam. Given what Liam said, Mark can naturally report any of the following:
  • (28)

    1. John will bring Indian.

    2. Liam said that John will bring Indian.

    3. Liam thinks that John will bring Indian.

By contrast, Liam’s assertion does not naturally license the following reports:
  • (29)

    1. Liam is certain that John will bring Indian.

    2. Liam is sure that John will bring Indian.

    3. Liam takes himself to know that John will bring Indian.

It just doesn’t seem that, simply from Liam’s assertion that John will bring Indian, we are entitled to conclude that Liam’s attitude toward that proposition is strong in any sense. By contrast, we do seem entitled to conclude that Liam believes the proposition. Generally speaking, if A asserts p, you can report that A said p, or that they think p; but it is very strange, just on the basis of their assertion, to report that they are sure or certain or take themselves to know p. This is expected if assertion is weak, but surprising if it’s strong.

Another consideration is this. The most natural way for Ezra to ask Mark what Mark has found out is to say:

  • (30) [Ezra:] What does Liam think John’ll bring?

Now, Ezra could instead ask one of the following:
  • (31)

    1. [Ezra:] What does Liam know John’ll bring?

    2. [Ezra:] What is Liam sure John’ll bring?

These are coherent questions. But they do not seem to be ordinary ways to simply find out what Liam said to Mark; instead, they feel like questions about a special subset of what Liam might have said—namely, the things that Liam indicated he was maximally sure about. Generalizing: if you know A was asked about X, and you want to know what they said about X, a normal way to find out is to ask what A thinks; asking what A is sure of, or takes themself to know, about X, is not a normal way to do so. This is exactly what we would expect if assertion were weak, and exactly the opposite of what we’d expect if it were strong.

A final related point builds on the observations by HRS about the difficulty of interpreting sentences like (3), repeated here:

  • (3) ??Claire will probably win, but it’s not as if I think she’ll win.

Things seem parallel with respect to assertion. Compare:11
  • (32)

    1. ??Claire will probably win, but I’m not saying she’ll win.

    2. Claire will probably win, but I’m not sure she’ll win.

  • (33)

    1. ??I think Claire will win, but I’m not saying she’ll win.

    2. I think Claire will win, but I don’t know she’ll win.

The ‘sure’ and ‘know’ variants sound much better than the ‘say’ variants, which feel difficult to interpret. But if saying p requires knowledge or certainty of p, the ‘say’ versions should be easily interpretable, like the ‘sure’/‘know’ versions.

3.3.2 Replies

We’ll reply to three potential concerns.

First, on the last data point: a potential confound is that the bad variants in (32) and (33) are mixing attitude verbs with speech-act reports. But this does not in general lead to infelicity; witness:

  • (34)

    1. I want Claire to win, but I’m not saying she will.

    2. I consider it possible that Claire will win, but I’m not saying she will.

Since (34a) and (34b) are fine, the natural explanation of the infelicities in (32) and (33) is that if you’re in a position to think that Claire will win, you’re also in a position to say as much. In other words, assertion is weak.12

Second, in ordinary circumstances, you can infer from Liam saying ‘I think John will bring Indian’ that Liam doesn’t know what John will bring, while you can’t infer this about Ezra if he says outright ‘John will bring Indian’. However, this has a straightforward explanation which is compatible with the weakness of assertion: as we discussed above, ⌜I think p⌝ leads to an anti-presupposition that ⌜I know p⌝ is false (and presumably also a scalar implicature that ⌜I’m sure p⌝ is false as well, as Bach (2008) discusses). By contrast, p alone does not, since ⌜I {am sure / know} p⌝ is not (on standard theories) an alternative to p alone (Katzir 2007). Thus Liam’s statement gives rise to the inference that he isn’t sure and doesn’t know that p, while Ezra’s statement doesn’t. But this is all perfectly compatible with assertion being weak.

Third, in the case of both belief and assertion, these incoherent sequences can be improved by pulling ‘will’ out of the contraction and focusing it: ‘Claire will probably win, but I’m not saying she will win’ is at least improved. But this supports rather than undermines our point: if assertion were strong, focusing ‘say’ should bring out exactly the intended contrast, just as focusing ‘know’ does in ‘Claire will probably win, but I don’t know she will’. The fact that we can’t bring out these contrasts by focusing ‘say’, and need to focus ‘will’ instead, supports our claim that ‘say’ is not strong. Focusing ‘will’ improves things by making salient the contrast between ‘probably’ and ‘will’: our point is not that asserting ‘Claire will probably win’ and ‘Claire will win’ amount to the same thing (they obviously don’t), but rather that the gap between them can’t be brought out simply by emphasizing ‘say’, while the corresponding gap with a strong predicate like ‘knows’ can be brought out that way.

4 ‘How Do You Know?’

This completes our positive argument that assertion is weak. We’ll now turn to rebutting the most prima facie compelling arguments that assertion is strong. We’ll start by addressing the argument from ‘How do you know?’; then, in §5, we’ll explore Moorean utterances.

A common argument for the knowledge norm is the fact that when someone makes an assertion, it’s usually fine to reply ‘How do you know?’ Since ⌜How q?⌝ presupposes q, ⌜How do you know p?⌝ presupposes that you know p; so the felicity of this reply suggests that the very act of asserting p communicates that you know p (Williamson 2000, 252–253).

But this conclusion is not true in general.13 While ‘How do you know?’ is often a natural response to assertions, it isn’t always. Consider:

Liam’s assertion is fine. But it would be mad for Mark to reply to (36d) with (37). After all, Liam just said he didn’t know! So: even if ⌜How do you know p?⌝ is often a natural reply to an assertion of p, it isn’t always. And so our use of ‘How do you know?’ does not show that assertion in general requires knowledge.

In fact, further reflection on the naturalness of various responses to assertions provides further evidence that assertion requires belief but not, in general, knowledge (or certainty). Compare the responses in (38):

(38a) and (38b) are perfectly ordinary responses to Liam’s assertion—intuitively, the questioner is trying to ascertain Liam’s level of confidence in what he said. Of course, if you like the idea that there are strong norms of assertion, you might think that these responses are simply double-checking that Liam was complying with those norms. But that leaves it puzzling why (38c) is so weird: advocates of strong norms of assertion will also agree (a fortiori) that there is a belief norm of assertion. So why can’t we use (38c) to similarly double-check that Liam is following this norm of assertion?

By contrast, if there’s no knowledge or certainty norm, but there is a belief norm, the contrast is easy to explain. (38c) feels aggressive or redundant, because the norms of assertion already require Liam to believe what he says;14 whereas (38a) and (38b) are perfectly reasonable attempts to ascertain more information about Liam’s mental state, since assertion doesn’t require knowledge or certainty.15

Nevertheless, you might wonder: if assertion is weak, why is ‘How do you know’ such a natural follow-up in so many cases? Well, of course, it is perfectly consistent with the weakness of assertion that in many particular contexts, we do expect people to only say what they know, and the naturalness of ‘How do you know?’ in those contexts may well show just that (see §6 for more discussion). But that doesn’t mean that assertion requires knowledge in general.

Two further points. The first is that, even insofar as assertion often goes with knowledge, the relevant sense may be very weak (see, again, Holguín 2021). This plausibly is all that’s expected when people casually ask ‘How do you know?’, which in many contexts is, intuitively, just a request for you to state your reasons for belief (cf. McKinnon 2012). Again, though, let us emphasize that we don’t think that assertion in general requires even this weak attitude, since there is nothing whatever wrong with denying knowledge tout court, and then going on to assert, as in (36).

Second, if the norms of assertion are weak in general, we will constantly be negotiating and exploring the strength of commitment that we expect or can assume in any particular context. Obviously sometimes you should only say what you know, or are sure of. When under oath, you’d be ill-advised to just say whatever you think! And plausibly, using presuppositional questions like ‘How do you know?’ is one effective way to negotiate the local standards (using a question like this shows that I expect that, in this context, we are only saying what we know, in at least some sense of ‘know’). In §6, we’ll briefly sketch a theory which accounts for this flexibility.

Upshot: on reflection, judgments about follow-ups to assertions show that assertion is weak, not strong.

5 Moorean Sentences

We turn, finally, to what we see as the best argument for defenders of strong assertion: Moorean sentences. Recall the contrast in (5):

The obvious explanation of this contrast is that asserting p requires more evidence than merely believing p: assertion requires knowledge, which is why (5b) is infelicitous—its content is impossible to know. By contrast, belief doesn’t require knowledge, which is why (5a) is fine. We’ll call this story (which HRS go in for) the strong assertion story (SAS) about Moorean sentences. Our central goal in this section is to argue that the SAS is wrong. We make two arguments for this point, and then briefly offer a positive account of Moorean sentences.

5.1 Guessing Contexts

First observation: it’s sometimes the case that (1) you may assert ⌜I don’t know p⌝ (since you don’t know p); (2) you may assert p (since assertion is weak); but (3) you still can’t assert ⌜p but I don’t know p⌝. In these cases, the SAS cannot explain what is wrong with such an assertion.

This comes out clearly in contexts where you are explicitly asked to guess. Recall the game show scenario, where you are asked what year the Seven Years’ War started. You don’t know, and you can say as much: ‘All I know is that it started in the 1700s—I really don’t know more than that’. The host insists you guess, and you say:

Again, this is a fine assertion.

Alternatively, suppose that, in the same context, the game show host, instead of asking you to take a guess, prompts you by asking, ‘Do you know whether the war started in 1760?’ In response to that prompt, it seems like you can—you should—say:

But what you can’t do is assert a Moorean conjunction which puts these two assertions together, as in (43a), or the subsequent variants:16

This is surprising. After all, it’s common ground that you don’t know when the war started. It’s completely fine to say that you don’t know or that you aren’t sure. It’s also completely fine to say your guess. What you can’t do, apparently, is combine these speech acts: you can’t simultaneously say p while explicitly admitting that you are less than certain of p.

The SAS can’t account for this. It says that what’s wrong with Moore sentences is that you are flouting the norm of assertion. But it is clearly normatively fine to assert (41), even though it’s common ground that you don’t know it. So what’s wrong with asserting it, and also admitting that you don’t know it? Even those who like strong assertion—and so think there is something normatively wrong with (41)—should agree that there’s a striking contrast between (41) and the Moorean variants in (43). The SAS can’t account for this contrast. After all, it says that the same thing is wrong with both assertions. This shows that the SAS does not yield a general account of Moorean sentences.

There could be different explanations of these phenomena. Perhaps the SAS explains Moorean infelicity in cases of non-guessing assertions, and something else does so in guessing cases. But this seems unlikely—more plausible is that whatever explains the infelicity in guessing cases also explains it in the others. One story you might tell along these lines is that the infelicity of the assertions in (43) arises from the irrelevance of the Moorean conjunct. But, first of all, if relevance explained the badness of these Moorean cases, then there is no reason to think it wouldn’t account for all Moorean phenomena—in which case the SAS would be otiose, and we would still conclude that Moorean sentences provide no evidence for strong norms. But second, the relevance story is not plausible. For adding conjuncts about your mental state is generally fine, as long as those conjuncts don’t express uncertainty:

But then the relevance explanation is wrong: if it is relevant that you are sure that p or know p, then it’s also relevant that you are not sure that p, or don’t know p.

5.2 Certainty

Our second argument against the SAS is that it can’t explain the infelicity of Moore sentences involving attitude verbs stronger than ‘know’, like the following:

In general, for any operator U such that U(p) expresses a lack of maximal certainty in p, ⌜p and U(p)⌝ appears unassertable. If the SAS were to explain such cases, it would have to say that you can only assert p if you’d bet your life on p, if you’re absolutely certain of p, if you can absolutely, infallibly rule out every possibility in which p is false, and so on.

But these norms are, on the face of it, absurd. We’re almost never in such strong epistemic positions about the things we ordinarily talk about—where the butternut squash is, whether Miriam will win the lottery, whether your car is still out back. Yet this ordinary fact of life does not stop us from speaking. If Liam tells you he left his car around the corner, and you respond by asking how he could possibly be willing to bet his life on this, you aren’t demonstrating mastery of the norms of assertion.

The SAS thus faces a dilemma. On the one hand, it may simply not try to explain these Moorean infelicities. But then we need an alternate explanation; and again, that alternate explanation will presumably extend to the basic cases with ‘know’, making the SAS otiose. On the other hand, the SAS may try to explain these cases—but if it does so, it will have to say patently absurd things about the strength of assertion.

Others have pointed to this concern and responded in various ways.17 Williamson 2000 suggests that the infelicity stems from the fact that we resist allowing the contextual standards for knowledge to come apart from those for certainty. Relatedly, Stanley 2008, Clarke 2018, Dorst 2019, and Beddor 2020b suggest that these patterns support a certainty (or credence-one) norm of assertion, but argue that the relevant kind of ‘certainty’ is not overly demanding because it’s context-sensitive (cf. Moss 2019).

These replies don’t seem very promising. First, it’s not at all clear how we’d extend this story to the full range of constructions we saw above. For instance, it’s not plausible that ⌜I’d bet my life that p⌝ has any default, context-sensitive reading on which it’s entailed by ⌜I know that p⌝.

Second, it is fine to explicitly stand by p when you assert it but aren’t maximally certain:

But, again, things look very different if you admit that you don’t actually believe what you said. These contrasts, again, suggest that belief has a different relation to assertion than certainty does: it’s easy to defend assertions that you aren’t sure of, but not ones you don’t believe.

Third, if the norm of assertion is maximally strong, exchanges like the following should be perfectly natural:

But Ezra’s responses here are comically inapt.

Defenders of the SAS may, again, appeal to a form of contextualism in response. Suppose that whenever you ask someone about certainty (or whether they are willing to bet their life on something, or whatever), you raise the stakes for those things. If someone asserts p in context c1, they must be absolutely certain of p, relative to the certainty/sureness-standards of context c1. But simply asking them if they are certain of p moves you to a new context, c2, with higher certainty/sureness-standards, relative to which they might not be certain.

We think this is implausible. First, we don’t know of any other kind of context-sensitive language where the contextual standards are invariably changed simply by using that language. This position would also make the norms of assertion strangely ineffable: you’re supposed to be (absolutely) certain of whatever you say, but relative to standards that we can’t (in a stable way) talk about, since talking about them ipso facto raises the standards.

More pointedly, if the standards for ‘certain’ can be changed so easily across assertions, it’s unclear why they can’t also change within a single assertion in the same way. But clearly they can’t—if they could, then ⌜p, but I’m not absolutely certain of it⌝ would be fine, since ‘absolutely certain’ would be interpreted relative to higher standards for certainty than the first conjunct.

Finally, there is direct evidence against the view that merely asking someone whether they are certain about something is enough to raise the stakes for certainty ascriptions. Consider:

If merely asking about certainty were enough to raise the stakes, then Catherine’s ‘No’ here would be interpreted in a different context (with respect to certainty stakes) than her first assertions, and so this should be perfectly coherent. But in fact, in this case, it seems like Catherine has reversed herself, in a way totally different than, for example, (46). So it just doesn’t look like the stakes for certainty ascriptions can be raised that easily, let alone (as the present account would have to have it) that they must.18

In sum: if the SAS were to account for the full range of Moorean sentences, it would have to say that the norms of assertion are maximally strong. But that is patently implausible. Once again, the SAS falls short.

5.3 An Alternative

Our main goal in this section is negative: to argue that the SAS does not explain Moorean phenomena, and hence that Moorean phenomena do not show that norms of assertion are strong.

We want to briefly say something positive about what the explanation could be. This idea is separable from the rest of the picture we are pushing, and we are not sure it’s right; indeed, we’re confident more needs to be said. But we want to give some sense of how we might go about accounting for Moorean phenomena in a way consistent with the weakness of assertion.

Start with an observation from Silk 2015, 2022 and Mandelkern 2021: Moorean infelicities extend to commands. There’s something wrong with giving an order to do p while also asserting that p might not happen:

Obviously there is no norm which says that you may give an order only if you know it’ll be obeyed: if you’re being kidnapped, there’s nothing infelicitous about commanding ‘Let me go!!’ Thus the analogue of the SAS in this case is a non-starter. So what explains (51)? Mandelkern suggests that there’s a pretense norm along the following lines:

Imperative Posturing: In performing the speech act of giving an order, act as if you are absolutely certain you will be obeyed.

Mandelkern suggests that something similar might be applied to Moorean phenomena in general (attributing the suggestion to Daniel Rothschild).19 That is the idea we’d like to explore here—that there is a norm along the following lines:

Epistemic Posturing: In performing the speech act of asserting p, act as if you are absolutely certain of p.

This norm would account for the basic contrasts which motivate HRS to reject Entitlement Equality: while, per this norm, you can’t ever assert a sentence like ⌜p but I’m not sure that p⌝, it’s consistent with it to assert a sentence with the form ⌜I think p but I’m not sure that p⌝, since such a sentence does not amount to an assertion of p. This norm also accounts for the various data points that the SAS misses. It applies equally to explicit guessing contexts as to any other context of assertion, which means that it will account for the infelicity of Moorean assertions in those contexts. And Epistemic Posturing says you must adopt a pretense of maximal certainty, which means that it will account for Moorean sentences with ‘certain’, ‘bet my life’, and so on.

Since Epistemic Posturing only applies within single speech acts, there is, according to it, no need to maintain a pretense of absolute certainty before or after an assertion. Moreover, since the norm requires a pretense—not actual certainty—it doesn’t license the conclusion that the speaker is absolutely certain of what they said; all it licenses is the conclusion that the speaker was (in the moment of assertion) acting as if they were certain of p (more on this in a moment).20

And, of course, this approach is consistent with thinking that assertion is weak in general: according to this norm, it’s fine to assert p when you are less than certain of p, provided that (while making the speech act) you adopt a pretense of certainty.

To help bring out the considerations in favor of Epistemic Posturing, let us contrast it with a subtly different, broadly Stalnakerian approach discussed in Hawthorne et al. 2016, which we think is prima facie attractive but on reflection less appealing than Epistemic Posturing.21 This approach maintains that there is a strong norm of assertion, but that we’re often happy to simply pretend we’re satisfying it.

You might be inclined to think this is a way of saving a knowledge norm, and hence something like the SAS. But we aren’t convinced that much gets saved, either (i) of the idea that the relevant norm is a knowledge norm, or (ii) of the idea that it behaves like an ordinary norm at all. First: for this reply to account for the certainty data above, we’d need the norm in question to be a norm of absolute certainty, not merely knowledge. That is, this view won’t account for Moore sentences with ‘I’m not certain’, ‘I wouldn’t bet my life’, and so on unless the norm says to assert p only if you are absolutely certain that p. And, attractive as the knowledge norm has been to many, few have been inclined to defend a norm of absolute certainty.

But suppose we did take on board a certainty norm, and then say that it can be satisfied by pretense. That would be a strange kind of norm indeed. In general norms can be reasonably violated in various ways when they are outweighed by other considerations. But when a norm is reasonably violated, there is not in general a requirement that we pretend we are satisfying the norm—we might do so, but we certainly need not. So for this approach to work, it would have to say that there is a certainty norm of assertion, but it’s a special kind of norm: it’s a norm that we can reasonably violate in many cases, but even when we do so, we must continue to pretend that we are satisfying it. But that is a very idiosyncratic picture; and it’s not clear, in the end, how much is being saved of the standard strong-norm picture.

Maybe you’re still attracted to that view. If so, we have another, more empirical argument against it. Epistemic Posturing is a norm specifically tied to speech acts, while the picture we are considering now is about satisfying a norm by pretense more generally. But as we have seen, the infelicity of Moorean sentences is extremely local to the speech act in question. While it is unacceptable to assert ⌜p but I’m not certain that p⌝, it is fine to assert p while, in a prior or subsequent speech act, acknowledging that you aren’t certain of p. We saw this in the game-show case above, where it is perfectly fine to say you don’t know p, and then in a subsequent speech act assert p. Compare also ‘The war started in 1760. I don’t know if that’s right, but it’s my best guess!’ which is completely fine.22 By putting the expression of uncertainty in a subsequent, separate speech act, the assertion is saved. But clearly there is no conversation-level pretense here that the speaker is certain that the war started in 1760: she says explicitly that she isn’t!

Thus it looks like the ban on admitting uncertainty about p while asserting p is incredibly local to the speech act of assertion: outside of that speech act, there is not a general, conversation-level pretense that what was asserted is certain.

This is accounted for by our proposal, but not by the proposal under discussion. We could elaborate that proposal by saying that it is local to speech acts. But then that proposal collapses into ours.23

Epistemic Posturing is deeply puzzling. Why would a norm like this exist? We don’t know, and clearly an explanation is needed: indeed, as it stands, Epistemic Posturing is nothing more than a compact description of the observations we’ve drawn out. But if there is indeed a Posturing norm on orders, it’s relatively unsurprising that a similar norm would exist for assertions, even if more explanation of both is needed.

Before considering the range of Moorean infelicities, we assumed, with most philosophers, that the SAS was right. But as far as we can tell, the cases we explored above suggest it is not. Epistemic Posturing does a better job of explaining Moorean phenomena. With that said, let us emphasize that, as far as the broader goals of this paper go, we’re very open to other explanations of Moorean sentences. Our central point is the negative one: since the SAS does not work, Moorean phenomena do not provide good evidence for strong norms of assertion.

6 Conclusion

Assertion is weak. When we focus on the everyday-discourse patterns in (1) what we say, (2) how we get other people to say things, and (3) how we report what they say, we find that they are inconsistent with assertion being strong. Moreover, the best arguments for strong assertion—from ‘How do you know?’ and Moorean sentences—turn out to be surprisingly unconvincing.

This is mostly a negative paper: our central goal has been to argue against a prevailing family of views about norms of assertion. But having said what we think assertion is not, we’d like to close by saying a bit about what we think assertion is.

First, to reprise a point from §3.1.2: while we are open to fine-grained taxonomies, we think that (what we have been calling) assertions are an interestingly unified class, since they are all, rather surprisingly, subject to an anti-Moorean norm. On top of that, we think assertions are also all governed by the norm: say the strongest thing you believe about the question under discussion. In concluding, we’ll say a bit more about how to think about the norm.

That norm is superficially familiar enough. But we think belief is weak, so Entitlement Equality comes out true: both belief and assertion are weak. Moreover, we think they are both weak in a very distinctive way. In particular, we’re sympathetic to a theory of weak belief developed in Holguín (2022). Holguín argues that what it is to believe p, relative to a given question, is for p to be entailed by your best guess about that question. He argues that this gives rise to a variety of interesting patterns concerning what you can and can’t reasonably believe.

We think that picture is right. Moreover, we think a natural Jamesian (1897) idea from Levi 1967, which we develop further in Dorst and Mandelkern fc., helps explain these patterns in both guesses and beliefs. The basic idea is that in forming your best guess, you are implicitly navigating a tradeoff: you want your guess to be accurate but also informative. The concern for accuracy provides reason to guess something you’re confident is true, since true guesses are better than false ones. But the concern for informativity provides reason to guess something that narrows down the space of alternatives, since more informative (true) guesses are better than less informative ones. The optimal way to balance this tradeoff varies with your level of epistemic risk aversion.

In the limiting case, when accuracy is all we care about, it makes sense to guess only what you’re absolutely certain of (‘John will bring something for dinner’). But in normal contexts, it often makes sense to guess something more specific (‘John will bring Indian’), since the value of an informative guess outweighs the risk that it might be false.
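
To illustrate how such a tradeoff might be balanced, here is a toy formalization (for illustration only, not the official model of Levi 1967 or of Dorst and Mandelkern fc.). Suppose the value of guessing an answer p to the takeout question is its probability plus a weighted measure of its informativity, where informativity is (as one simple choice) the fraction of complete answers ruled out, and the hypothetical weight λ encodes how much informativity matters relative to accuracy:

\[ V(p) = \Pr(p) + \lambda \cdot I(p), \qquad I(p) = \frac{\#\{\text{complete answers ruled out by } p\}}{\#\{\text{complete answers}\}}. \]

With Liam’s credences (Indian 0.55, Chinese 0.25, Thai 0.15, Italian 0.05), we get V(Indian) = 0.55 + 0.75λ, V(Indian or Chinese) = 0.80 + 0.50λ, and V(John will bring something) = 1. When λ is close to zero, the trivially safe guess wins; once λ > 1, ‘Indian’ maximizes the value, despite a credence of only 0.55. Varying λ is one way to model the contextual variation in epistemic risk aversion discussed below.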

Our proposal: what’s true for belief and guessing is also true for assertion.24 While we can’t make a detailed case for this here, we will briefly spell out some virtues of this proposal (we develop the idea more, with different arguments for its applicability to assertion, in Dorst and Mandelkern fc.).

First, it explains the connection between assertion and truth. Since true guesses/beliefs are better than false ones, likewise for assertions: both aim at truth (cf. Marsili 2018, 2021).

Second, it explains the connection between assertion and the question under discussion (Roberts 2012). More informative guesses/beliefs are better than less informative ones, and informativity, on our picture, is question-relative. Thus the same goes for assertions: both aim at informativity, relative to the question being considered.

Third, it explains why assertion is weak: assertion is weak because relatively improbable things can also be your best guess, provided they are sufficiently informative. In the right context—for example, prompted by ‘Who do you think will win the lottery?’—‘Claire will win’ is an acceptable assertion because, if true, it’s very informative.

But, finally, guessers have to figure out how to optimize the tradeoff between informativity and accuracy in a given situation. That flexibility helps explain how and why the norms of assertion may vary: in some, epistemically risk-averse, contexts you can only say (think) what you are certain of; in other, epistemically permissive, contexts you can say (think) whatever you’re inclined toward, even if it’s relatively improbable. This variation is explained by changes in epistemic risk aversion—a free parameter that can be negotiated and adjusted by conversational participants.25

We think this flexibility helps explain why many have thought assertion is strong: they’ve focused on contexts in which accuracy is more heavily weighted than informativity. Nevertheless, we think it’s clear that many contexts are not like this. When you ask a grocery clerk where you can find the butternut squash, you’re not worried about them saying something that’s not totally certain—you just want a helpful pointer.

This variation explains why a lack of knowledge or certainty is sometimes treated as good grounds for complaint, sometimes not. It also makes sense of why reactions like ‘Are you sure?’ or ‘Do you know that?’ are natural responses, whereas ‘Do you think that?’ is much more aggressive: the former can be used to negotiate the level of evidence required for assertion, while the latter suggests that the speaker may have violated assertion’s fundamental norm.

More generally, we think this theory gives an intuitive overall picture of what assertion is, and why it is weak. Assertion is weak because we do not use assertions simply to transmit knowledge or certainties, but also to form and communicate pictures of the world that go beyond our certainties: we aim not only to avoid error, but also to acquire and share an informative picture of the world. For that reason, much of what we say involves coordinating on a picture that goes beyond what we are certain of. Limited agents like us can’t afford to stick to our certainties; we often must take a stand that goes beyond them. Assertion is how we coordinate on that stand.

Acknowledgments

We’re very grateful to reviewers at Sinn und Bedeutung and Philosophers’ Imprint, audiences at the New York Philosophy of Language Workshop and NYU Philosophy, and Bob Beddor, Matthew Benton, Sam Carter, Roger Clarke, Alexander Dinges, Cian Dorr, Diego Feinmann, John Hawthorne, Ben Holguín, Cameron Domenico Kirk-Giannini, Manfred Krifka, Harvey Lederman, Annina Loets, Neri Marsili, Dilip Ninan, Daniel Rothschild, Ginger Schultheis, Levi Spectre, Robert Stalnaker, Andreas Stokke, and Julia Zakkou for very helpful discussion.

Notes

  1. For examples, see Unger 1975; Williamson 2000, Ch. 11; Douven 2006; Lackey 2007, 2011, 2016; Stone 2007; Brown 2008, 2011; Levin 2008; Bach 2008; Adler 2009; Kvanvig 2009; Turri 2010a, b, 2011, 2016a, b; Benton 2011, 2012, 2016a, b; DeRose 2000; Maitra 2011; McKinnon 2012; McKinnon and Turri 2013; Kneer 2018, 2021; Benton and van Elswyk 2020; Willard-Kyle 2020. While we don’t know of any explicit arguments that assertion is weak in all the senses we will spell out, this view is very much in the spirit of Stalnaker’s framework, in which assertions aim to update a common ground which can track weak attitudes of acceptance (Stalnaker 2002). Our view is slightly different: on our view, assertion does, as a rule, go by belief—it’s just that belief is itself a weak notion. And our focus is on norms of assertion, rather than the related question of how assertions update the common ground (see §5.3 for discussion). The most important precedent for our view is Oppy 2007, which unfortunately only came to our attention as this paper was going to press, preventing us from giving it a full discussion. Short story: Oppy also uses prediction- and lottery-cases—as well as considerations about the ways weak norms can be supplemented—to suggest that the norm of assertion is belief, understood as being weaker than certainty (or knowledge). But he doesn’t seem to think that belief (or assertion) is as weak as we do—for instance, he does not defend (iv) or (v) below.
  2. See also Kahneman and Tversky 1982; Windschitl and Wells 1998; Dorst 2019; and Holguín 2022.
  3. We’ll move freely between ‘think’ and ‘believe’, following the recent literature; we’ll also assume the people in our examples are rational, and so match their credences and beliefs to their evidence. When we say credence, we mean rational credence, a.k.a. epistemic probability.
  4. See e.g., Williamson 2000; Stalnaker 2006; Buchak 2014, and Staffel 2016.
  5. Some are happy with (1) but resist (2). If you’re among them, then you still agree with us that belief can be quite weak—we just disagree about how weak it can reasonably be.
  6. It certainly includes your evidence and rational credences, perhaps also the question under discussion (Roberts 2012; Holguín 2021), facts about normality (Smith 2016; Goodman and Salow 2018; Carter 2022), pragmatic stakes (Weatherson 2005; Ganson 2008), and epistemic risk aversion (Dorst and Mandelkern fc., and §6 below).
  7. Some are skeptical that there’s a unified norm of assertion (Stone 2007; Levin 2008; Sosa 2009; McKinnon 2015), and others have suggested that there are various assertion-like speech acts with varying norms (Turri 2010a; Cappelen 2011; Pagin 2016; Zakkou 2021). We are inclined to disagree—there is at least one unifying feature of such speech acts, namely that Moorean utterances are ruled out across the board (see §3.1.2). Nevertheless, our negative arguments are mostly consistent with these positions.
  8. Some people agree with intuitions up to here, but think that (9b) is not acceptable. If you are one of them, then you still agree with most of our claims about the weakness of assertion—that is, you’re on board with Theses (i)–(iv), but not (v); and, again, we’re happy to have taken you that far away from the orthodoxy.
  9. Here we deviate from authors who take the speech act of prediction to have its own (weaker) normative standards which differ from assertion (Benton 2012; Benton and Turri 2014; Cariani 2020; Ninan 2021a). Some may want to characterize all of our examples, including past-directed ones, as predictions, saving ‘assertion’ for something stronger; see §3.1.2 for discussion.
  10. Interestingly, in response to related questions like ‘Who does Mark think will win?’, a bare NP ‘Joe’ seems fine. Likewise, such bare-NP responses can be okay in Moorean contexts—for example, it’s fine to respond to ‘Who do you think will win the race?’ with ‘Joe, but I’m not sure he will’. These two contrasts suggest that ellipsis of the kind in question is indeed possible for bare NPs, but not for full clauses.
  11. ‘Assert’ is stilted, so we’ll default to using ‘say’—but we think the results are the same using either verb.
  12. A related observation that we think supports our argument is an intriguing contrast: As Yalcin (2010) observes, following Windschitl and Wells (1998), ⌜Probably p⌝ does not seem to have the simple truth-conditions we would expect, namely, that p is more probable than not. For instance, imagine a five-way election with probabilities Joe, 45%; Bernie, 25%; Elizabeth, 15%; Pete, 10%; Kamala, 5%. When asked ‘Who’s gonna win the election?’, it’s natural (though not required) to respond with ‘Probably Joe’. What this brings out, we think, is that saying ⌜Probably p⌝ is a natural way to communicate that p is your guess about the QUD, i.e. what you think about it (see Dorst and Mandelkern fc.). Combined with our claim below that asserting is guessing, this explains the contrast between (35a) and (35b): in the former, you express that your guess is that John will bring Indian, but then refuse to say that he will; while in the latter you aren’t necessarily guessing that John will bring Indian at all.
  13. See Stone 2007; Kvanvig 2009; Turri 2010b; McKinnon 2012, and Kneer 2018 for related responses.
  14. ‘Do you think that?’ improves with focus on ‘that’ or ‘think that’, whereas it doesn’t seem to work at all with focus just on ‘think’, suggesting that insofar as it can be used as a rejoinder, it is being used to query the content of the assertion rather than the attitude in question.
  15. Of course, as Diego Feinmann has pointed out to us, ‘How do you know?’ is a much more natural reply to a bare assertion than to an explicit attitude claim; there is a contrast here. However, as we’ve seen (§3.3.2), there’s a simple explanation for this: ⌜I think p⌝ implicates that the speaker is not sure of (and doesn’t know) p, while p alone lacks such an implicature.
  16. This observation goes back at least to Williams 1994, and has been discussed in the subsequent literature on prediction (e.g. Benton 2012; Cariani 2020). van Elswyk (2021) takes this point to be evidence for his view that declaratives like ‘The war started in 1760’ by default come with an unpronounced parenthetical ‘I know’ (compare similar proposals in Chierchia 2006; Alonso-Ovalle and Menéndez-Benito 2010; Meyer 2013). However, as far as we can tell, the present data tell against such a view: in an exchange like the present one, it is completely unacceptable to say ‘Ok, hm. I know that the war started in 1760’, since you’ve just said you don’t know. On van Elswyk’s view, though, this is what (41) means.
  17. E.g. Williamson 2000; Stanley 2008; Sosa 2009; Clarke 2018; Dorst 2019; Moss 2019; Beddor 2020a, b.
  18. It won’t do for that account merely to say that it’s easy to raise the standards for certainty; we agree about that. We even think that just mentioning skeptical possibilities might be enough to do so (DeRose 1992; Lewis 1996). But this isn’t enough: the account we’re targeting must say that simply using ‘certainty’ invariably raises the standards. And that’s wrong. As Cross 2010 points out, simply saying things like ‘You might not have hands’ is not a good way to move to a skeptical context. Instead, you have to use the right sort of intensifier (‘Are you absolutely sure…?’), or raise a salient counter-possibility (‘What if you’re a brain in a vat?’), etc.
  19. The closest proposal to this in the existing literature that we know of is in Condoravdi and Lauer 2011, which argues that an assertion of p is a public commitment to act as if you believe that p is true. Our proposal is similar to this, though with belief strengthened to certainty, and restricted to single speech acts. For other ideas in the neighborhood, compare Ninan’s (2021b) suggestion that to represent someone as believing p is to represent them as disposed to act as if they know that p, and Lauer’s (2012) suggestion that pragmatic slack is to be explained in terms of pretense norms.
  20. If in asserting ‘John will bring Indian’ you adopt a pretense of certainty in this claim, why can’t you likewise say ‘John will bring Indian; I’m certain of it’? Presumably because (you know that) you’re not certain of it, and you shouldn’t assert things you know are false.
  21. Thanks to Dilip Ninan and Daniel Rothschild for helpful discussion of this point. Yet another approach would be a ‘loose talk’ kind of knowledge norm (Moss 2019)—we think this would be subject to the same worries. Generally speaking, while there are superficial similarities between weak assertion and loose talk (on which see Lasersohn 1999), they are very different phenomena (most obviously: in cases of loose talk, you are asserting something which you know to be false when interpreted literally, as when you say ‘John is six feet tall’ when in fact he is just shy of six feet; whereas in the cases we have focused on, you never say something you know to be false). In fact, cases of loose talk constitute, to our knowledge, the best kind of prima facie counterexample to the norm that you should only say what you think (though of course they may not be genuine counterexamples, if what is asserted is in fact the proposition that, say, John is roughly six feet tall).
  22. Thanks to John Hawthorne for this example. A tricky class of cases comes from hedges: cases like ‘The war—I’m just guessing here, I don’t know—started in 1760’, which are fine (thanks again to John Hawthorne). We see two ways of dealing with this: (i) say that the asserted content here is ‘My guess is that the war started in 1760’; (ii) distinguish two speech acts, the speech act of asserting that the war started in 1760, and the speech act of hedging it (cf. Krifka 2014; Benton and van Elswyk 2020). Both options would make the goodness of these cases consistent with Epistemic Posturing.
  23. You might be worried that our approach doesn’t explain why you can’t believe Moorean assertions. But that’s misguided—you can believe them, since belief is weak! Since you can say ⌜I think (p, but I don’t know p)⌝, presumably you can also think ⌜p, but I don’t know p⌝. (Of course, you can’t be certain of Moorean sentences, simply because ‘certain’ is strong in some relevant sense.)
  24. Need assertion follow the same accuracy-informativity tradeoff as belief? Our core idea would be preserved if assertions instead always care at least as much about accuracy as guesses/beliefs do. But it’s natural to think the two will go precisely together. Admittedly, some high-stakes contexts might suggest that assertions must be more sensitive to accuracy than beliefs. When asked, ‘I really need to deposit a check; is the bank open?’, if you’re not sure, then it’s infelicitous to reply with ‘It’s open’, but it’s fine to reply with ‘I think it’s open’. However, this can be explained by the implicature story from §3.3.2: in such contexts, it’s important to report whether you know, and ‘I think it’s open’ conveys that you don’t know (via implicature), whereas ‘It’s open’ leaves open that you do. Another question helpfully pressed by Cian Dorr is whether belief really matters at all: if two people have the same credences, could their assertions have normatively different statuses? As far as we can tell, the answer is yes. If you think Miriam won’t win the lottery, it’s fine to say she won’t. But if (having the same credence) you suspend judgment about whether she’ll win or not, it seems very odd to say she won’t.
  25. For other approaches to the flexibility of the norms of assertion, see for example Levin 2008; Sosa 2009; McKinnon 2015; Pagin 2016; Benton and van Elswyk 2020.

References

Adler, J. E., 2009. ‘Another Argument for the Knowledge Norm’. Analysis, 69(3):407–411.

Alonso-Ovalle, Luis and Menéndez-Benito, Paula, 2010. ‘Modal Indefinites’. Natural Language Semantics, 18:1–31.

Bach, Kent, 2008. ‘Applying Pragmatics to Epistemology’. Philosophical Issues, 18:68–88.

Beddor, Bob, 2020a. ‘Certainty in Action’. Philosophical Quarterly, 70(281):711–737.

Beddor, Bob, 2020b. ‘New Work for Certainty’. Philosophers’ Imprint, 20(8):1–25.

Benton, Matthew A., 2011. ‘Two More for the Knowledge Account of Assertion’. Analysis, 71(4):684–687.

Benton, Matthew A., 2012. ‘Assertion, Knowledge and Predictions’. Analysis, 72(1):102–105.

Benton, Matthew A., 2016a. ‘Expert Opinion and Second-Hand Knowledge’. Philosophy and Phenomenological Research, 92(2):492–508.

Benton, Matthew A., 2016b. ‘Gricean Quality’. Noûs, 50(4):689–703.

Benton, Matthew A. and Turri, John, 2014. ‘Iffy Predictions and Proper Expectations’. Synthese, 191(8):1857–1866.

Benton, Matthew A. and van Elswyk, Peter, 2020. ‘Hedged Assertion’. In Sanford Goldberg, ed., The Oxford Handbook of Assertion, 245–263. Oxford University Press.

Brown, Jessica, 2008. ‘The Knowledge Norm for Assertion’. Philosophical Issues, 18(1):89–103.

Brown, Jessica, 2011. ‘Fallibilism and the Knowledge Norm for Assertion and Practical Reasoning’. In Jessica Brown and Herman Cappelen, eds., Assertion: New Philosophical Essays, 153–174. Oxford University Press.

Buchak, Lara, 2014. ‘Belief, Credence, and Norms’. Philosophical Studies, 169(2):285–311.

Cappelen, Herman, 2011. ‘Against Assertion’. In Jessica Brown and Herman Cappelen, eds., Assertion: New Philosophical Essays, 21–48. Oxford University Press.

Cariani, Fabrizio, 2020. ‘On Predicting’. Ergo, 7(11):339–361.

Carter, Sam, 2022. ‘Degrees of Assertability’. Philosophy and Phenomenological Research, 104(1):19–49.

Chierchia, Gennaro, 2006. ‘Broaden Your Views: Implicatures of Domain Widening and the ‘Logicality’ of Language’. Linguistic Inquiry, 37:535–590.

Clarke, Roger, 2018. ‘Assertion, Belief, and Context’. Synthese, 195(11):4951–4977.

Condoravdi, Cleo and Lauer, Sven, 2011. ‘Performative Verbs and Performative Acts’. In Ingo Reich, ed., Sinn und Bedeutung, volume 15, 149–164. Saarland University Press, Saarbrücken, Germany.

Cross, Troy, 2010. ‘Skeptical Success’. Oxford Studies in Epistemology, 3:35–62.

DeRose, Keith, 1992. ‘Contextualism and Knowledge Attributions’. Philosophy and Phenomenological Research, 52(4):913–29.

DeRose, Keith, 1995. ‘Solving the Skeptical Problem’. The Philosophical Review, 104(1):1–52.

DeRose, Keith, 2000. ‘Now You Know It, Now You Don’t’. The Proceedings of the Twentieth World Congress of Philosophy, 5:91–106.

Dorst, Kevin, 2019. ‘Lockeans Maximize Expected Accuracy’. Mind, 128(509):175–211.

Dorst, Kevin and Mandelkern, Matthew, fc. ‘Good Guesses’. Philosophy and Phenomenological Research, To appear.

Douven, Igor, 2006. ‘Assertion, Knowledge and Rational Credibility’. Philosophical Review, 115(4):449–485.

van Elswyk, Peter, 2021. ‘Representing Knowledge’. Philosophical Review, 130(1):97–143.

Ganson, Dorit, 2008. ‘Evidentialism and Pragmatic Constraints on Outright Belief’. Philosophical Studies, 139:441–458.

Goodman, Jeremy and Salow, Bernhard, 2018. ‘Taking a Chance on KK’. Philosophical Studies, 175(1):183–196.

Hawthorne, John, Rothschild, Daniel, and Spectre, Levi, 2016. ‘Belief is weak’. Philosophical Studies, 173(5):1393–1404.

Holguín, Ben, 2021. ‘Knowledge by Constraint’. Philosophical Perspectives, 35(1):275–302.

Holguín, Ben, 2022. ‘Thinking, Guessing, and Believing’. Philosophers’ Imprint, 22(1):1–34.

James, William, 1897. The Will to Believe. Longmans, Green, and Co.

Kahneman, Daniel and Tversky, Amos, 1982. ‘Variants of uncertainty’. Cognition, 11:143–157.

Katzir, Roni, 2007. ‘Structurally-Defined Alternatives’. Linguistics and Philosophy, 30(6):669–690.

Kneer, Markus, 2018. ‘The Norm of Assertion: Empirical Data’. Cognition, 177:165–171.

Kneer, Markus, 2021. ‘Norms of Assertion in the United States, Germany, and Japan’. Proceedings of the National Academy of Sciences of the United States of America, 118(37):1–3.

Krifka, Manfred, 2014. ‘Embedding Illocutionary Acts’. In T. Roeper and M. Speas, eds., Recursion: Complexity in Cognition, volume 43 of Studies in Theoretical Psycholinguistics, 59–87. Springer.

Kvanvig, Jonathan L., 2009. ‘Assertion, Knowledge, and Lotteries’. In Duncan Pritchard and Patrick Greenough, eds., Williamson on Knowledge, 140–160. Oxford University Press.

Kyburg, Henry E., 1964. ‘Recent Work in Inductive Logic’. American Philosophical Quarterly, 1(4):249–287.

Lackey, Jennifer, 2007. ‘Norms of Assertion’. Noûs, 41(4):594–626.

Lackey, Jennifer, 2011. ‘Assertion and Isolated Second-Hand Knowledge’. In Jessica Brown and Herman Cappelen, eds., Assertion: New Philosophical Essays, 251–276. Oxford University Press.

Lackey, Jennifer, 2016. ‘Assertion and Expertise’. Philosophy and Phenomenological Research, 92(2):509–517.

Lasersohn, Peter, 1999. ‘Pragmatic Halos’. Language, 75(3):522–551.

Lauer, Sven, 2012. ‘On the Pragmatics of Pragmatic Slack’. In Proceedings of Sinn und Bedeutung, volume 16, 389–401.

Levi, Isaac, 1967. Gambling with Truth. The MIT Press.

Levin, Janet, 2008. ‘Assertion, Practical Reason, and Pragmatic Theories of Knowledge’. Philosophy and Phenomenological Research, 76(2):359–384.

Lewis, David, 1970. ‘General Semantics’. Synthese, 22:18–67.

Lewis, David, 1996. ‘Elusive Knowledge’. Australasian Journal of Philosophy, 74(4):549–567.

Maitra, Ishani, 2011. ‘Assertion, Norms, and Games’. In Jessica Brown and Herman Cappelen, eds., Assertion: New Philosophical Essays, 277–296. Oxford University Press.

Makinson, D. C., 1965. ‘The Paradox of the Preface’. Analysis, 25:205–207.

Mandelkern, Matthew, 2021. ‘Practical Moore Sentences’. Noûs, 55(1):39–61.

Marsili, Neri, 2018. ‘Truth and Assertion: Rules Versus Aims’. Analysis, 78(4):638–648.

Marsili, Neri, 2021. ‘Truth: The Rule or the Aim of Assertion?’ Episteme, 1–7.

McKinnon, Rachel, 2012. ‘How Do You Know That “How Do You Know?” Challenges a Speaker’s Knowledge?’ Pacific Philosophical Quarterly, 93(1):65–83.

McKinnon, Rachel, 2015. Norms of Assertion: Truth, Lies, and Warrant. Palgrave-Macmillan.

McKinnon, Rachel and Turri, John, 2013. ‘Irksome Assertions’. Philosophical Studies, 166(1):123–128.

Meyer, Marie-Christine, 2013. Ignorance and Grammar. Ph.D. thesis, Massachusetts Institute of Technology.

Moss, Sarah, 2019. ‘Full Belief and Loose Speech’. Philosophical Review, 128(3):255–291.

Ninan, Dilip, 2021a. ‘Assertion, Evidence, and the Future’. Manuscript, Tufts University.

Ninan, Dilip, 2021b. ‘Knowing, Believing, and Acting as if You Know’. Behavioral and Brain Sciences, To appear.

Oppy, G., 2007. ‘Norms of Assertion’. In D. Greimann and G. Siegwart, eds., Truth and Speech Act: Studies in the Philosophy of Language, 226–249. Routledge.

Pagin, Peter, 2016. ‘Problems With Norms of Assertion’. Philosophy and Phenomenological Research, 93(1):178–207.

Percus, Orin, 2006. ‘Antipresuppositions’. In A. Ueyama, ed., Theoretical and Empirical Studies of Reference and Anaphora: Toward the establishment of generative grammar as an empirical science, 52–73. Japan Society for the Promotion of Science.

Roberts, Craige, 2012. ‘Information Structure: Towards an Integrated Formal Theory of Pragmatics’. Semantics and Pragmatics, 5(6):1–69.

Silk, Alex, 2015. ‘What Normative Terms Mean and Why It Matters for Ethical Theory’. In M. Timmons, ed., Oxford Studies in Normative Ethics, volume 5, 296–325. Oxford University Press.

Silk, Alex, 2022. ‘Weak and Strong Necessity Modals’. In B. Dunaway and D. Plunkett, eds., Meaning, Decision, and Norms: Themes from the Work of Allan Gibbard, 203–245. Maize Books: Michigan Publishing, University of Michigan.

Smith, Martin, 2016. Between Probability and Certainty: What Justifies Belief. Oxford University Press.

Sosa, David, 2009. ‘Dubious Assertions’. Philosophical Studies, 146(2):269–272.

Staffel, Julia, 2016. ‘Beliefs, Buses and Lotteries: Why Rational Belief Can’t Be Stably High Credence’. Philosophical Studies, 173:1721–1734.

Stalnaker, Robert, 1984. Inquiry. Cambridge University Press.

Stalnaker, Robert, 2002. ‘Common Ground’. Linguistics and Philosophy, 25:701–721.

Stalnaker, Robert, 2006. ‘On the Logics of Knowledge and Belief’. Philosophical Studies, 128(1):169–199.

Stanley, Jason, 2008. ‘Knowledge and Certainty’. Philosophical Issues, 18:33–55.

Stone, Jim, 2007. ‘Contextualism and Warranted Assertion’. Pacific Philosophical Quarterly, 88:92–113.

Turri, John, 2010a. ‘Epistemic Invariantism and Speech Act Contextualism’. Philosophical Review, 119(1):77–95.

Turri, John, 2010b. ‘Prompting Challenges’. Analysis, 70(3):456–462.

Turri, John, 2011. ‘The Express Knowledge Account of Assertion’. Australasian Journal of Philosophy, 89(1):37–45.

Turri, John, 2016a. Knowledge and the Norm of Assertion: An Essay in Philosophical Science. Open Book Publishers.

Turri, John, 2016b. ‘The Point of Assertion Is To Transmit Knowledge’. Analysis, 76(2):130–136.

Unger, Peter, 1975. Ignorance: A Case for Scepticism. Oxford University Press.

Weatherson, Brian, 2005. ‘Can We Do Without Pragmatic Encroachment?’ Philosophical Perspectives, 19(1):417–443.

Willard-Kyle, Christopher, 2020. ‘Being in a Position to Know is the Norm of Assertion’. Pacific Philosophical Quarterly, 101(2):328–352.

Williams, John N., 1994. ‘Moorean Absurdity and the Intentional ‘Structure’ of Assertion’. Analysis, 54(3):160–166.

Williamson, Timothy, 2000. Knowledge and its Limits. Oxford University Press.

Williamson, Timothy, fc. ‘Knowledge, Credence, and the Strength of Belief’. In Amy Flowerree and Baron Reed, eds., Expansive Epistemology: Norms, Action, and the Social World. Routledge, to appear.

Windschitl, P. D. and Wells, G. L., 1998. ‘The Alternative-Outcomes Effect’. Journal of Personality and Social Psychology, 75(6):1411–1423.

Yalcin, Seth, 2010. ‘Probability Operators’. Philosophy Compass, 5(11):916–937.

Zakkou, Julia, 2021. ‘On Proper Presupposition’. Manuscript, Bielefeld University.