Research Article

When Virtues are Vices: 'Anti-Science' Epistemic Values in Environmental Politics

  • Daniel J. Hicks (University of California, Merced)


Since at least the mid-2000s, political commentators, environmental advocates, and scientists have raised concerns about an “anti-science” approach to environmental policymaking in conservative governments in the US and Canada. This paper explores and resolves a paradox surrounding at least some uses of the “anti-science” epithet. I examine two cases of such “anti-science” environmental policy, both of which involve appeals to epistemic values that are widely endorsed by both scientists and philosophers of science. It seems paradoxical to call an appeal to epistemic values “anti-science.” I develop an analysis that, I argue, can resolve this paradox. This analysis is a version of the “aims approach” to science and values, drawing on ideas from axiology and virtue ethics. I characterize the paradox in terms of conflicts or tensions between epistemic and pragmatic aims, and argue that there is a key asymmetry between them: epistemic aims are valuable, in part, because they are useful for pursuing pragmatic aims. Thus, when epistemic and pragmatic aims conflict, epistemic aims need to be reconceptualized in order to reconcile them to pragmatic aims. When this is done, in the “anti-science” cases, the epistemic values are scientific vices rather than virtues. Thus the “anti-science” epithet is apt.

Keywords: epistemic values, values in science, anti-science, aims of science, aims approach, environmental politics, air pollution, multiple comparisons

How to Cite:

Hicks, D. J., (2022) “When Virtues are Vices: 'Anti-Science' Epistemic Values in Environmental Politics”, Philosophy, Theory, and Practice in Biology 14: 12. doi:



Published on
30 Apr 2022
Peer Reviewed

1 Introduction

Since at least the mid-2000s, political commentators, environmental advocates, and scientists have raised concerns about an “anti-science” approach to policymaking in conservative governments in the US and Canada, including the G.W. Bush and Trump administrations and Harper government. By the 2010s, these concerns about “anti-science” were sufficiently widespread to prompt mass protests, including the Death of Evidence protest against the Harper government in 2012 (Nature 2012) and the March for Science in April 2017. The latter event saw an estimated one million participants across 600 sites, including an estimated 100,000 participants at the primary site in Washington, DC (March for Science 2017). A survey of March for Science participants indicated that major motivations for participation included concerns about the Trump administration (despite the event being nominally non-partisan), the environment, and “facts, alt-facts, and misperceptions” (Ross et al. 2018, table 2).

This paper explores and resolves a paradox surrounding at least some uses of the “anti-science” epithet. I examine two cases, one that has been labeled “anti-science” by environmental advocates, and another that might plausibly also be so labeled. Both cases involve appeals to epistemic values that are widely endorsed by both scientists and philosophers of science. It seems paradoxical to call an appeal to epistemic values “anti-science.”

I develop an analysis that, I argue, can resolve this paradox. This analysis is a version of the “aims approach” to science and values (Elliott and McKaughan 2014; Intemann 2015; Franco 2019), drawing explicitly on ideas from axiology and virtue ethics. I characterize the paradox in terms of conflicts or tensions between epistemic and pragmatic aims, and argue that there is a key asymmetry between them: epistemic aims are valuable, in part, because they are useful for pursuing pragmatic aims. Because of this asymmetry, when epistemic and pragmatic aims conflict, epistemic aims need to be reconceptualized in order to reconcile them to pragmatic aims. When this is done, in the “anti-science” cases, the epistemic values are scientific vices rather than virtues. Thus the “anti-science” epithet is apt.

I proceed as follows. Section 2 presents the cases that motivate me here. One case concerns multiple comparisons and the control of error rates in air pollution epidemiology. The other case involves appeals to open science and the recent replication crisis in psychology and biomedical research. I characterize these cases in terms of conflicts between epistemic and pragmatic values, and highlight how it is paradoxical to call them “anti-science” when they appeal to epistemic values. I suggest that we can resolve the paradox by reflecting on the aims of fields such as environmental epidemiology. In section 3 I introduce a general conception of science as a social practice and an account of the relationship between aims and virtues in scientific research. I present two rival “views” of the aims of fields such as environmental epidemiology. I argue that, if we view the aims of science as narrowly or purely epistemic, we reach an impasse, confronted by conflicting epistemic values and no good way to decide between them. An alternative view, which allows science to have both epistemic and pragmatic aims, is more promising, but requires additional resources to resolve apparent conflicts between epistemic and pragmatic values. In section 4 I introduce these resources, from Henry Richardson’s account of intrinsic and instrumental goods and from Julia Annas’s account of the unity of the virtues, and then apply them to the arguments in question.

I spend the rest of this introduction discussing some terminology.

In recent work in philosophy of science, epistemic values are often understood, following McMullin (1982) and Steel (2010), as truth-promoting features of scientific practices, their products (whether understood as theories, hypotheses, or models), and specific scientific activities (such as designing studies, analyzing data, or drawing conclusions from findings). I will use the term pragmatic values in a parallel way, indicating features that promote practical, nonepistemic goods, with human health as the primary example.1

These definitions do not imply that pragmatic values are not epistemic values. A feature of scientific practice can be both an epistemic and a pragmatic value, if it promotes both truth and the pragmatic goods at hand. So this pair of terms does not pick out a distinction, but instead two ways to evaluate a feature of scientific practice. We can ask whether a feature is truth-promoting; and we can ask whether it promotes certain practical, non-epistemic goods. Note that both kinds of values are, in the first instance, features of practices, products, and activities, rather than attitudes that people take towards these features. (Compare McMullin’s 1982 distinction between evaluation and valuing.) Given certain epistemic goods, a feature can be an epistemic value (can promote some of those goods) even if no one takes a positive attitude towards it. (And likewise for pragmatic values, given certain pragmatic goods.)

In this paper I am interested in cases in which actual, uncontroversial epistemic values—things that everyone agrees are truth-promoting—are labeled “anti-science.” Other cases where it is not clear whether the features are truth-promoting—such as the use of uncertainty factors in regulatory toxicology (Steel 2015b, ch. 8)—can be philosophically interesting, but the “anti-science” label doesn’t carry the same air of paradox. So, throughout this paper, I will generally take it for granted that the things I label as epistemic values are indeed epistemic values (at least generally and for the most part in contemporary scientific practice), and that this claim is not controversial. For instance, I take it for granted that adopting practices that prevent us from drawing false conclusions would be truth-promoting, and that everyone concerned agrees that they would be truth-promoting. (And likewise for pragmatic values, such as a precautionary attitude promoting human health.)

Next, throughout this paper I generally use the terms virtues and scientific virtues as synonyms. In section 3 I define the virtues of a practice as features of the practice that promote its aims, and so scientific virtues are features of science that promote its aims. As a preliminary logical point, these definitions alone do not entail that epistemic values (things that are truth-promoting) are scientific virtues (things that are aims-promoting). And indeed my overall thesis is that, in some cases, epistemic values are scientific vices (things that are aims-frustrating).2

The contemporary scholarly literature on “anti-science” generally focuses on individual attitudes towards things like climate change, genetically modified foods, and vaccines (for example, Washburn and Skitka 2018; Lobato and Zimmerman 2019). The physicist and historian of science Gerald Holton used the term to refer to a “countervision of the world” that “has as its historical function nothing less than the delegitimation of (conventional) science in its widest sense,” which includes a more-or-less Enlightenment notion of human progress (Holton 1993, 152). Neither sense applies to the kind of “anti-science” that interests me here. First, I am not interested here in whether the individuals who put forward the arguments that I examine accept or reject, say, the scientific consensus on climate change. With respect to environmental policymaking, their individual attitudes are less important than the content of the arguments that they publish.3 Second, the individuals who put forward these arguments are often practicing scientists, and their arguments appeal to epistemic values that are widely accepted by practicing scientists. So it is highly implausible that the function of these arguments is the wholesale delegitimization of science.4

Instead of these conceptions of “anti-science,” in this paper I conceptualize “anti-science” as things that frustrate the pursuit of the aims of science. When these things are in some sense “internal” to science, “anti-science” is synonymous with scientific vices. But “anti-science” might also include “external” impediments, in some sense. (The target of Holton’s concern might be one example here.) In this paper I am interested only in “internal” impediments, and so for the rest of the paper I use “anti-science” as synonymous with scientific vices.

2 Epistemic Values and Environmental Politics

In this section, I introduce my examples of appeals to epistemic values that, I think, are “anti-science.” The first example is a recent argument, put forward primarily by a single author working with various coauthors in a series of papers. The second example is a strategy that has been deployed several times since the 1990s, including in a recently proposed rule at US EPA. Because of the extended history, the second example is somewhat longer than the first.

All of the examples concern environmental policy controversies, and specifically environmental public health and environmental epidemiology. Throughout the paper I use environmental epidemiology as my main example of a scientific field. Similar examples can be found in other policy-relevant scientific fields, such as macroeconomics and conservation biology. I make no claims about whether and to what extent my analysis would apply to non-policy-relevant fields of science.

I present the two examples in sections 2.1–2.2, characterizing them in terms of a conflict or tension between epistemic and pragmatic values. In section 2.3 I develop the thought that there is something paradoxical about calling these arguments “anti-science,” then sketch my approach to resolving this paradox as a segue to the next section.

2.1 Air pollution and control of error rates

My first example comes from the work of S. Stanley Young, a statistician with ties to, among other organizations, two conservative think tanks, the Heartland Institute and the Hoover Institution; the pharmaceutical industry; several North American universities; the American Statistical Association; and the American Association for the Advancement of Science (“Who We Are: S. Stanley Young,” n.d.). In a series of recent papers funded in part by the American Petroleum Institute, Young and collaborators have criticized work in environmental epidemiology on the hazards of air pollution, especially particulate matter (PM) (Young 2017a; Young and Kindzierski 2019; Young, Acharjee, and Das 2019).

Young levels three related but distinguishable arguments. Here I consider just one, which appeals to the idea, from frequentist hypothesis testing, that corrections must be made for multiple testing to avoid inflating the false positive rate.5

Recall that frequentist hypothesis tests are designed around the control of error rates (Mayo 1996).6 The conventional 0.05 threshold for statistical significance corresponds to a false positive rate of 5%, that is, when a test with this threshold is used correctly, the probability of rejecting a true null hypothesis is 5%. However, when multiple tests are conducted in the context of a single study, the probability that at least one result is a false positive (assuming that all test results are statistically significant) is greater than 5%. For example, if two tests are conducted and both have p = .05 (observed), the probability that at least one result is a false positive is 1 − 0.95² = 9.75%. This is called a multiple comparisons scenario, and statisticians have developed various ways of correcting for multiple comparisons to control the error rate.
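The arithmetic behind this family-wise error rate can be made explicit with a short calculation (sketched here in Python, purely for illustration; the function name is my own, not standard statistical terminology in code form):

```python
# Probability of at least one false positive among m independent tests,
# each conducted at significance threshold alpha.
def familywise_error_rate(m, alpha=0.05):
    return 1 - (1 - alpha) ** m

# One test at the conventional 0.05 threshold:
print(round(familywise_error_rate(1), 4))   # 0.05

# Two tests, as in the example above: 1 - 0.95**2 = 0.0975
print(round(familywise_error_rate(2), 4))   # 0.0975, i.e., 9.75%
```

As the exponent m grows, this probability approaches 1, which is why corrections for multiple comparisons shrink the per-test threshold.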

Young does not argue that the air pollution research he targets has incorrectly conducted multiple comparisons without correction. Rather, he argues that the studies could have conducted alternative analyses, and that this requires the same kind of correction. For example, for one study (Koken et al. 2003), Young observes that the study includes 5 different health outcomes, 6 different predictors (variables of interest, i.e., pollutants), 5 different lags (values of predictors from previous days in the time series), and 5 different covariates. Young calculates that there are 150 = 5 × 6 × 5 different “questions” that could be asked in this study (is predictor x with lag t correlated with outcome y?); 32 = 2⁵ different “models” that could be used to answer each question (one for each subset of covariates included as controls); and all together that the “search space” for this study is at least 4800 = 150 × 32 “question-model” pairs (Young 2017a, table 1, 178). Young argues that the p-values reported in environmental epidemiology studies should be corrected for these (potential) multiple comparisons. For the studies scrutinized in Young (2017a) and Young, Acharjee, and Das (2019), the “search space” is generally on the order of 10⁴ to 10⁶.
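Young's tally, and the effect a Bonferroni-style correction over this “search space” would have on the significance threshold, can be reproduced directly (a Python sketch of my own, for illustration only):

```python
# Young's (2017a) "search space" tally for Koken et al. (2003):
outcomes, predictors, lags, covariates = 5, 6, 5, 5

questions = outcomes * predictors * lags   # 150 outcome-predictor-lag combinations
models = 2 ** covariates                   # 32 subsets of the 5 covariates
search_space = questions * models
print(search_space)                        # 4800 "question-model" pairs

# A Bonferroni correction divides the threshold by the number of comparisons,
# so each individual test would need an extremely small p-value:
alpha = 0.05
print(alpha / search_space)                # roughly 1e-05
```

For the larger “search spaces” of 10⁴ to 10⁶ comparisons, the corrected per-test threshold would fall to roughly 10⁻⁶ to 10⁻⁸, which is the source of the demanding evidential burden discussed below.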

Based on this line of argument, Young and his collaborators have concluded that “causality of PM10/PM2.5 [particles smaller than 10 or 2.5 microns, respectively] on heart attacks is not supported” (50), “There is no convincing evidence of an effect of PM2.5 on all-cause mortality” (Young and Kindzierski 2020), and indeed that “regulation of PM2.5 should be abandoned altogether” (Young 2017b). Young’s potential multiple comparisons argument exemplifies the epistemic value of skepticism, in the sense of requiring strong evidence before accepting a substantive claim.

However, in this case, skepticism imposes a demanding burden: some rough calculations assuming Bonferroni corrections for multiple comparisons suggest that, for one paper targeted by Young (Koken et al. 2003), collecting sufficient data to satisfy this demand would take well over a century. And this study has a relatively small “search space.” Thus, satisfying the epistemic value of skepticism—and, in the meantime, accepting the claim that PM does not cause heart attacks—would effectively block air pollution regulation. Skepticism contrasts with a precautionary attitude, a willingness to accept a claim and take action accordingly, based on relatively provisional, weaker evidence (Steel 2015b, 9–10).7 A precautionary attitude is a pragmatic value: it leads us to take action earlier to address possible threats to human health (ch. 4).

Thus, this case gives us an example of a tension or conflict between the epistemic value of skepticism and the pragmatic value of precaution.

2.2 Open science and regulatory delay

My second example is a strategy of appealing to transparency, open data, and open science to delay environmental regulation. The strategy appears to have originated in the tobacco industry in the 1990s (Lerner 2017) and has been adopted by several industries whose products pose environmental threats to public health. I first discuss a case in which this strategy was used to attack an individual study. I then discuss some more general efforts, in the US Congress and US Environmental Protection Agency (EPA), which would have imposed open science requirements on all science used in US EPA’s regulatory decision-making.

2.2.1 The Six Cities Study

The Six Cities Study (Dockery et al. 1993) was a prospective cohort study of the effects of air pollution in 6 US cities from 1974–77 through 1991. In line with several other epidemiological studies conducted around the same time (Dockery 2009, 259–60), the Six Cities Study indicated that particulate matter (PM)—especially very small particles, smaller than 2.5 micrometers—was a major threat to human health. In 1997, these findings were cited to support more stringent regulation of PM emissions (Kaiser 1997). Opponents of the regulation—including industry groups, such as the American Petroleum Institute (API), and allied scientists8 and politicians—responded in part by calling for the underlying data to be publicly available (467). Dockery et al. did not make the data public, but did provide it to Health Effects Institute (HEI) (467), a research center created by US EPA and the automotive industry to provide something like a politically neutral review of controversial regulatory science (Jasanoff 1992, 212). The HEI reanalysis was extensive—manual checks were done to ensure data integrity and numerous alternative statistical models were constructed—and ultimately supported the Six Cities Study’s major findings (Krewski et al. 2003). Nonetheless, critics continue to complain that the Six Cities data are not publicly available (Dunn 2004; Schipani 2018; Van Doren 2018; Dinerstein 2019; Milloy 2019).

2.2.2 Strengthening Transparency in Regulatory Science

From 2014 through 2017, Republican legislators in the US Congress repeatedly introduced a bill—variously titled the HONEST Act and Secret Science Reform Act—that would have required “all scientific and technical information relied on to support [regulation by US EPA] … [be] publicly available online in a manner that is sufficient for independent analysis and substantial reproduction of research results” (Schweikert 2014). None of these bills were passed by the US Senate. Then, in April 2018—a little more than a year after Trump assumed office—US EPA promulgated a draft rule, “Strengthening Transparency in Regulatory Science” (henceforth Strengthening Transparency), that stated that “When promulgating significant regulatory actions, the Agency shall ensure that dose response data and models underlying pivotal regulatory science are publicly available in a manner sufficient for independent validation” (US EPA 2018, 18769). (The final version of this rule was adopted in the last few days of the Trump administration in January 2021, vacated on procedural grounds the next month, and abandoned by the Biden administration.)

All of these efforts prompted substantial backlash from environmental advocates, who argued that broad data availability requirements would undermine the evidence base for environmental policymaking and connected Strengthening Transparency in particular to a general pattern of “anti-science” in the Trump administration (Schwartz 2018; Friedman 2019).

However, both advocates of Strengthening Transparency and indeed the text of the draft rule itself defended the proposal by appealing to the burgeoning movement for open science, and specifically the epistemic values of “independence, objectivity, transparency, clarity, and reproducibility” (US EPA 2018, 18769), associated with the open science movement. Specifically, open science has been widely promoted as a solution to the replication crisis in psychology and biomedicine (Gelman 2011; Nosek et al. 2015; Munafò et al. 2017; Moher et al. 2018; Tackett et al. 2019), and Strengthening Transparency includes both direct and indirect references to these arguments (including direct citations to Ioannidis 2005; Munafò et al. 2017). So the “anti-science” label may seem to be inapt: the rule is appealing to epistemic values that are embraced by many scientists. (Despite the appeals to open science, several proponents of open science objected to Strengthening Transparency: Berg et al. 2018; Gelman 2018; Ioannidis 2018; Wagner, Fisher, and Pascual 2018; Nosek 2019.)

2.2.3 Transparency and timeliness

Open science initiatives—whether strict requirements, suggestions, or moralistic exhortations— promote the epistemic value of transparency, enabling the critical scrutiny of the data and analysis used to support claims that many philosophers and scientists regard as key to science’s epistemic achievements (Longino 1990). Working as a filter, Strengthening Transparency’s open science requirements would plausibly help avoid some falsehoods, either by preventing some studies that happen to have false conclusions from being included in a policy evaluation or by enabling other analysts to discover data or analysis errors.

However, Leonelli and collaborators (Leonelli 2016b, 2016a; Bezuidenhout et al. 2017; Leonelli 2018) have pointed out that open data requires specialized infrastructure and know-how, as well as ongoing maintenance (to ensure that data remains readable, archives are uncorrupted, and code can be run). In a field such as environmental epidemiology, where this infrastructure and know-how are still being developed, the kind of sharp open science requirement imposed by Strengthening Transparency can be expected to significantly slow research production and dissemination (Hicks 2021). Strict open science requirements, without the necessary infrastructure, sacrifice the pragmatic value of timeliness, that is, research that can inform near-term policy decisions.9 Without this research, policymakers are likely to overlook some environmental threats to human health.

So Strengthening Transparency gives us another example of an apparent conflict between epistemic and pragmatic values.

2.3 Anti-science epistemic values?

Like the critics of Strengthening Transparency, I am strongly inclined to say that something has gone wrong when open science is used to argue against environmental regulation. Young’s concerns about control of error rates in environmental epidemiology have not been widely discussed, but I expect the environmentalist critics of Strengthening Transparency would say that his argument goes wrong in the same general way. Both of these arguments are “anti-science,” even as they appeal to widely shared epistemic values. How can we make sense of this thought? In this subsection, I consider two approaches, one from Karen Kovaka and the other from the literature on “normatively inappropriate dissent.” I then sketch my approach.

In an analysis of climate denial, Kovaka (2021) observes a common argument pattern:

  1. In order for us to accept its results, science must meet the standards for good science.

  2. Climate science does not meet these standards.

  3. So we should not accept the results of climate science. (2364)

Kovaka considers the possibility that the standards in question “are widely accepted …, shared by accepters, deniers, experts, and non-experts.” But she rejects this as “implausible” (2364), and argues that “pro-science science-denial” is more likely to be based on common misconceptions about science, including the value-free ideal (2365), the idea of “a universal, step-by-step scientific method” (2366), and “the mythic treatment that scientists such as Galileo, Mendel, and Darwin receive in science textbooks” (2368).

Swapping out climate science for environmental epidemiology, the argument pattern that Kovaka identifies also seems to fit Young’s critique and the various calls for publicly-accessible data. But, while Kovaka’s diagnosis of common misconceptions might be apt for climate denial, I don’t think it will work for the cases that I am interested in here. Control of error rates and open science are widely-accepted by both philosophers of science and scientists themselves.

Another recent line of work has concerned the concept of “normatively inappropriate dissent” (Biddle and Leuschner 2015; Melo-Martín and Intemann 2018; Miller 2021). It might be thought that our two examples of “anti-science epistemic values” can be covered by this concept. Both Young and proponents of Strengthening Transparency do appear to be dissenting from the scientific mainstream, in ways that their critics think are inappropriate. I consider two accounts of normatively inappropriate dissent, both based on the concept of inductive or epistemic risk. First, Biddle and Leuschner (2015) give a set of four jointly sufficient conditions for dissent to be normatively inappropriate. However, one of their conditions—which Miller (2021) labels Standards—is that “The dissenting research that constitutes the objection violates established conventional standards” (Biddle and Leuschner 2015, 273). Again, in the two case studies of interest in this paper, the dissenters are appealing to established conventional standards. So Standards is not satisfied. Because Biddle and Leuschner’s criteria are jointly sufficient, not individually necessary, their account does not logically entail that this dissent is not normatively inappropriate. But this just means that their account does not have much to say about these two cases.

Second, Miller (2021) offers a revised version of the criteria from Biddle and Leuschner, to address criticisms from Melo-Martín and Intemann (2018). Miller’s account involves three individually necessary criteria for normatively appropriate dissent; so, any case of dissent that violates at least one criterion is inappropriate. Based on concerns about excessively lax conventional standards, Miller replaces Standards with a criterion he calls Workable Evidential Thresholds: “The dissent adopts evidential thresholds within a range that allows researchers to make meaningful knowledge claims and that is not knowingly likely to rule out a priori the attainment of certain empirical results” (Miller 2021, 925). Both Young and proponents of Strengthening Transparency present themselves as supporting evidential standards that allow researchers to make meaningful knowledge claims, so this clause seems satisfied. Do they “rule out a priori the attainment of certain empirical results,” such as conclusions about the hazards of air pollution? Miller’s criterion is ambiguous here. On the one hand, the proposals from these dissenters would significantly delay drawing such conclusions. But, on the other hand, they would not make drawing such conclusions impossible. So whether or not this dissent is appropriate depends on the importance of timeliness. From a point of view that understands the aim of science in narrowly epistemic terms, timeliness is likely to seem less important than skepticism and transparency, and so this dissent might seem appropriate. I discuss this, and a rival point of view of the aims of science, below.

One of Miller’s (2021) other criteria, Fairness, might seem more promising: “The inductive risks that would stem from accepting or adopting the dissent, and the respective harms and benefits that they entail, would be fairly and justly distributed among relevant members of or groups in society” (921). The mainstream view is that air pollution, and other environmental hazards, are highly unjustly distributed (Buzzelli 2017). But the evidence for this unjust distribution uses much the same methods as the evidence for the hazards of air pollution (333ff). So assessments of the inductive risk of, say, Young’s potential multiple comparisons argument depend on whether we find his argument plausible. From Young’s point of view, it’s reasonable to think that distributive environmental justice research is highly unreliable, we should remain skeptical about how air pollution (if it turns out to be hazardous) is distributed, and so the inductive risks of adopting more stringent research standards are not necessarily unjustly distributed.

I suggest taking another approach to analyzing our cases of “anti-science epistemic values.” The underlying issue in these cases can be understood as a tension between epistemic and pragmatic aims of science. On the one hand, control of error rates and open science are—both generally and in the particular cases of environmental epidemiology presented above—put forward as epistemic values that promote the aim of accepting true claims about the world. On the other hand, environmental epidemiology also has pragmatic aims. Environmental epidemiology is valuable because it is useful for doing things in the world. Specifically, it is useful for designing regulation to protect human health. If, on the account that I am proposing, it makes sense to say that environmental regulation is (the? a? among the?) aims of science (in general? certain fields?), then it seems to make sense to say that things (anything? including epistemic values?) that work against environmental regulation are “anti-science.” Such things would be scientific vices, rather than virtues—they would frustrate, rather than promote, the aims of science. So (perhaps) epistemic values can be vices.

As the parentheses suggest, developing this account will require some work. I begin that work in the next section.

3 The Aims of Science

In this section, I develop an account of the aims of science. While the ultimate goal of the paper is to resolve the paradox of “anti-science epistemic values,” this section can only offer a more precise diagnosis of the paradox.

First, to conceptualize the aims or goals of science, I conceptualize science itself as a practice, a complex, collaborative, socially organized, goal-oriented, sustained activity (Hicks 2012; Hicks and Stapleford 2016). The aims of a practice give it an evaluative structure. Aims are typically characterized in terms of goods—objects or states of affairs that practitioners regard as valuable without further justification. Aims provide the raison de faire for a practice: they are the reason why practitioners engage in the practice, but also the reason why society more broadly values and supports the practice (for example, by providing it with resources, such as research funding and the special protections of academic freedom). Virtues are excellences, properties that promote the aims of the practice. In the science and values literature, epistemic values are typically understood as virtues of abstract objects produced by scientists (say, simple theories, hypotheses, models). But we can also understand them as habits or patterns of behavior of scientists (say, pursuing simple theories, hypotheses, models). Virtues are valuable—virtues are virtues—because and insofar as they promote the aims of the practice at hand. Among other implications, this means that practitioners can be mistaken: a habit that they take to be a virtue might in fact be a vice, if they believe that the habit promotes the aims of the practice but it actually frustrates them.

The aims of a practice have a different evaluative standing from its valuable accidents or byproducts. Consider the aim of knowledge and the accident of esteem or prestige for a researcher and their institution. Both knowledge and esteem are valuable to researchers: knowledge because it is the aim of research and esteem because it functions as a form of “credit” or “capital” within the scientific community (Latour and Woolgar 1979, ch. 5). Success in the pursuit of knowledge tends to lead to further esteem, and esteem is useful for acquiring the resources to pursue further knowledge. But of course this connection between knowledge and esteem is very noisy and imperfect. Critically, the kinds of cases in which aims and accidents can diverge are evaluated differently. Research that pursues knowledge while sacrificing prestige—say, focusing on an especially complex but intellectually valuable project for years without publishing, as in the standard narrative of Andrew Wiles’s development of the first proof of Fermat’s Last Theorem—is still virtuous. But research that pursues prestige while sacrificing knowledge—say, engaging in research misconduct to produce numerous high-profile but deeply flawed studies, as in the case of Brian Wansink (Reardon 2018; see also Smaldino and McElreath 2016)—is vicious. To put the point generally, aims (and their associated virtues) (should) take priority over accidents (and the habits that promote them).

Environmental epidemiology clearly has epistemic aims, namely, knowledge concerning environmental threats to human health, especially from pollution and related social processes. This knowledge also has practical value, namely, for protecting human health and the environment. In terms of the distinction in the last paragraph, should we understand this practical value as an aim of environmental epidemiology, or merely an accident?

Hicks (2012) poses this question more generally, in terms of a contrast between two “views” of science. According to the narrow view, science has only epistemic aims, and the practical value of science is a mere accident. On this view, the sole aim of environmental epidemiology is to produce knowledge concerning environmental threats to human health. The fact that this knowledge is useful for preventing or mitigating such threats is a happy accident, but not by itself a good reason for an individual to pursue a career in environmental epidemiology or for society to support environmental epidemiologists as a field. By contrast, on the broad view, science can have both epistemic and pragmatic aims. On this view, the utility of the knowledge produced by environmental epidemiology is not a mere accident, but instead a key part of the reason for pursuing environmental epidemiology.

In the remainder of this section, I develop two analyses of the examples from section 2, one each for the narrow and broad views. These analyses try to develop the thought that there’s something wrong with the way these arguments appeal to epistemic values—and thus that these arguments are “anti-science.” I argue that substantial complications arise for both views. While the narrow view seems to be trapped by its complications, in the next section I argue that the broad view can avail itself of resources from virtue ethics to address its complications.

3.1 The narrow view

The narrow view seems to provide us with a simple account of the relationship between epistemic aims and pragmatic goods: the sole aim of science is knowledge, and so any pragmatic goods that science produces are mere accidents. Thus, when there is a conflict between them, pragmatic values should be sacrificed for the sake of epistemic values. To return to the example of Strengthening Transparency, on the narrow view, open science requirements promote transparency, an epistemic value, and so promote the epistemic aims of environmental epidemiology. While it may be regrettable that requiring open science would delay protective regulation and result in preventable harms to human health and the environment, this is an unhappy accident. Similarly, for the conflict between the epistemic value of skepticism and the pragmatic value of precaution in Young’s concerns about potential multiple comparisons in air pollution epidemiology, the narrow view seems to imply directly that precaution should be sacrificed for the sake of maintaining skepticism. The epistemic aims of environmental epidemiology take priority over any merely accidental practical application. Doing otherwise would be just like engaging in research misconduct for the sake of prestige.

This simple analysis becomes more complicated when we recognize that science has a plurality of epistemic aims.10 Magnus (2013) draws on William James to distinguish the aim of believing the truth from the aim of not believing falsehoods. In these terms, skepticism can be understood as an epistemic value because it promotes the epistemic aim of not believing falsehoods. But precaution can likewise be understood as an epistemic value because it promotes the epistemic aim of believing truth. As Fraser (2020) puts it, “Sure, if you only believe things for which you have good evidence, you are less likely to have false beliefs. But you will also miss out on a lot of true ones.” On this analysis, Young’s potential multiple comparisons argument would protect us from believing falsehoods, but would also prevent us from believing true claims. So it raises a conflict between two epistemic values. Steel (2016) makes a related point, arguing that the problem with climate skeptics is that they delay knowledge. As Steel sees it, the tension is between modestly-secure knowledge now and highly-secure knowledge in the future. We can apply this to the conflict between transparency—securing our knowledge claims in the future, after other scientists have scrutinized a study’s data and analysis code—and timely but less secure knowledge claims that can be used by policymakers now. In the debate surrounding Strengthening Transparency, both timeliness and transparency can be understood as epistemic values.

This analysis is plausible, and so far fits neatly within the narrow view of the aims of science. But it doesn’t seem to provide any resources for resolving the tensions between these values. Should we choose believing truth or avoiding error, knowledge now or knowledge later? Notably, Magnus, Fraser, and Steel all make their points to introduce arguments that what are usually called “non-epistemic values” can legitimately inform the judgments we use to resolve these kinds of tensions. Magnus presents William James’ observation as an early version of an inductive risk argument (Magnus 2013). Fraser argues that “epistemic fear of missing out” among conspiracy theorists leads to a lack of shared belief that is necessary for democratic public deliberation. And Steel argues for a generalized version of inductive risk, “from risk of error to risk of ignorance” (Steel 2016, 705) where the problem with ignorance is that “prolonged suspension of judgment … leads to serious difficulties if policy decisions are expected to be ‘science based’ in the sense that the course of action chosen is justified by scientific knowledge” (706–7). That is, Magnus, Fraser, and Steel identify conflicting epistemic values, and go on to argue that the conflict should be resolved by appeal to non-epistemic considerations.

At this point, it seems to me that the narrow view has followed Magnus, Fraser, and Steel into a box canyon. The narrow view can recognize that science has a variety of epistemic aims, but this recognition does not help us resolve tensions between different aims. Magnus, Fraser, and Steel provide a way forward. But their routes appeal to what the narrow view regards as accidents or by-products of scientific research. It may be appropriate to bring in these kinds of considerations, if there really is no way to move forward on epistemic grounds. But this kind of route should be taken only with great reluctance and care. Specifically, on the narrow view, pragmatic considerations should not influence our conception of the epistemic aims of science—to allow this would be to risk “corrupting” or “infecting” the practice.11
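The multiple comparisons worry that runs through this subsection can be made concrete with a toy calculation. The 5% significance threshold and the numbers of tests below are illustrative assumptions, not figures drawn from Young’s actual arguments:

```python
# Toy illustration of the multiple comparisons problem: if each of k
# independent hypothesis tests has a 5% chance of a false positive, the
# chance of at least one false positive across the whole family of tests
# grows rapidly with k. (All numbers here are illustrative.)

def familywise_error_rate(k, alpha=0.05):
    """Probability of at least one false positive among k independent tests."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 20, 100):
    print(f"{k:3d} tests: {familywise_error_rate(k):.3f}")
```

On these toy numbers, twenty unadjusted tests carry roughly a 64% chance of at least one spurious “discovery.” This is the grain of truth in the skeptical appeal; what the narrow view cannot tell us is how much believed truth is worth sacrificing to guard against it.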

3.2 The broad view

Like the narrow view, the broad view takes science to have epistemic aims, plausibly including some combination of accepting true claims, rejecting false claims, and achieving understanding, perhaps among others. The broad view adds that science also has pragmatic aims.12 For example, epidemiologists and public health researchers regularly characterize their fields in terms of pragmatic aims, usually some version of protecting or promoting human health:

Stated another way, the objective of epidemiology, as long-argued by many leading epidemiologists, and as underscored in the “Ethics Guidelines” issued by the American College of Epidemiology in 2000, is to create knowledge relevant to improving population health and preventing unnecessary suffering, including eliminating health inequities. (Krieger 2011, 31, citations to 10 works removed; see also Krieger 2015, 5–6)

As epidemiology is one of the essential disciplines of public health, its major aim is to contribute to fulfilment of the definition of public health as “a science and art to promote health and prevent disease by organized effort of society” (Gulis and Fujino 2015, 179; see also US EPA 2017; Nature Communications 2018; Fernández Pinto and Hicks 2019)

In light of these self-characterizations, I propose that the primary pragmatic aim of environmental epidemiology is protecting human health. This means that the benefits that environmental epidemiological research produces for protecting human health are not accidents or by-products of that research. Rather, they are among the primary reasons for pursuing that research. If environmental epidemiology is producing research that helps protect human health, then it is being done well (flourishing). At the same time, if environmental epidemiology is not producing (or is prevented from producing) research that helps protect human health, then it is not being done well (degenerating).

On the broad view, a scientific field typically promotes its pragmatic aims by achieving its epistemic aims. Environmental epidemiology protects human health by producing knowledge of the distribution and causes of environmental-related injury and disease. In this relationship, epistemic aims are proximate or direct, while pragmatic aims are distal or indirect. In their everyday work, scientists will often attend exclusively to epistemic aims, but pragmatic aims provide a broader context and purpose for these everyday activities.

Borrowing a pattern from action theory (Boesch 2019, 2311–12; MacIntyre 1973; and compare the “significance graphs” of Kitcher 2001, ch. 6), consider the following exchange of questions and answers about the research activity of an environmental epidemiologist:

Q: Why are you writing code in R?

A1: To fit a linear regression model.

Q: Why are you fitting a linear regression model?

A2: To understand how air pollution exposure is correlated with race.

Q: Why are you trying to understand how air pollution is correlated with race?

A3: To reduce racial disparities in asthma.

Using an analogous example, Boesch (2019) points out that the answers (A1-A3) “are not separate actions, but rather separate descriptions of one and the same action” (2312). In this example, it’s not that the scientist is currently (in some “core” or “epistemic” phase, stage, or context) engaged in fitting a model, then later will pursue the epistemic aim of understanding air pollution and race, and only subsequently (in a distinct “applied” or “practical” phase, stage, or context) will pursue the pragmatic aim of reducing racial health disparities. Rather, the activity of fitting the model is (part of) the activity of understanding air pollution and race, which is (part of) the activity of reducing racial health disparities (Hicks 2014, 3289). Fitting the model is pursuing both epistemic and pragmatic aims. And so it’s appropriate to evaluate this activity (and the resulting model) in terms of whether and to what extent it promotes both epistemic and pragmatic aims.

This action-theoretic pattern is focused on individual actors. As noted in section 1, I am not focused on individual actors here, because I think in most cases the available evidence is inadequate to make claims about individuals’ intentions.13 For example, alternatives to A3 might include:

  • to publish a high-impact paper;

  • to get tenure;

  • to get my next grant;

  • to win the Nobel prize in medicine;

  • to delay regulation (because it would harm my industry funder’s profits);

  • to delay regulation (because I’m opposed to regulation on philosophical grounds);

  • to delay regulation (because environmentalism is a socialist conspiracy); and/or

  • to accelerate regulation (because it’s one of the goals of my nonprofit funder).

A research community is composed of individual researchers, with diverse material interests. Even members of a close-knit community—such as the lab of a single faculty member and their graduate students—might take the series of why questions in discordant directions. So why think that something like the entire field of environmental epidemiology could have a remotely coherent set of aims?14

Consider the quotations from epidemiologists above. I suggest that these statements of the pragmatic aims of epidemiology and public health are best read normatively, rather than descriptively. That is, the authors are not claiming that all or even most individual epidemiologists would agree that promoting human health is the (primary) aim of epidemiology. Rather, I suggest that these authors are claiming that promoting human health should be regarded as the aim of epidemiology.

This does not mean that epidemiologists will all agree on a specific list of aims for their field, especially at a fine level of detail. Krieger (2011), a textbook in “epidemiological theory,” is organized historically as a series of debates about conceptualizations of public health—from miasma to germ theory to the social determinants of health. These debates all presumed that epidemiology should promote “health”; the disagreement was over how this key concept should be conceptualized. Similarly, one can be a pluralist about the conceptualization of human health in epidemiology while still maintaining that promoting human health (understood generically or ambiguously) is the overarching pragmatic aim of the field.

This also does not mean that every individual epidemiologist must have the exclusive intention to promote human health. There is no necessary tension between working to promote human health and working to get tenure. Tenure could even be seen as a means to promote human health, by giving researchers secure positions from which to do their work. Indeed, an epidemiologist might even work for an industry sponsor to design changes to production processes that preserve profits while also protecting human health.

But job security and industry profits are not—that is, should not be regarded as—aims of epidemiology. They are mere accidents. So if an epidemiologist sacrifices the protection of human health for the sake of getting tenure or their funder’s profits, then they have sacrificed the pragmatic aim of epidemiology for the sake of mere accidents. And so they have acted viciously.

The analysis of the last several paragraphs does not apply to the arguments presented in section 2. If we take those arguments at face value, they are not cases in which there’s a tension between the aims of environmental epidemiology and mere accidents, such as industry profit. Rather—again, taken at face value—they are cases of conflicts between epistemic and pragmatic aims. Skepticism and transparency promote the epistemic aims of environmental epidemiology, while precaution and timeliness promote the pragmatic aims. On the broad view, the conflict between epistemic and pragmatic values in these cases are conflicts between epistemic and pragmatic aims.

3.3 Conflicting aims and “anti-science” epistemic values

The arguments in section 2 appeal to epistemic values, yet are “anti-science.” This seems paradoxical: how can appeals to epistemic values—skepticism and transparency, things that promote the aims of science—be “anti-science”? My development of the narrow and broad views of the aims of science suggests a more precise diagnosis. Both arguments draw on values that would (I assume) promote certain aims of science, but in ways that require sacrificing other aims. On the narrow view these other aims are epistemic; on the broad view they are pragmatic. In either case, the paradox comes from the underlying conflict between aims.

Thus, in order to maintain—or deny—that these arguments are “anti-science,” we need to resolve the conflict between the relevant aims. We need to be able to say that skepticism and transparency are scientific vices, not virtues. This requires some additional conceptual tools.

4 Virtue Ethics and Conflict Resolution

I suggest that virtue ethics can provide us with the necessary conceptual tools to resolve the apparent conflicts between epistemic and pragmatic aims. In this section, I first introduce the concept of instrumentally valuable intrinsic goods, then argue that epistemic goods can be both valuable for their own sake and instrumentally valuable for promoting the pragmatic aims. Next I draw on Julia Annas’s defense of the unity of the virtues, to argue that there is an important role for moral imagination and practical wisdom (phronesis) in reconceptualizing virtues to resolve apparent conflicts between them. Finally I illustrate how this exercise of practical wisdom can resolve the two pairs of tensions between epistemic and pragmatic values identified in our cases of “anti-science” epistemic values. In each case, after the tensions have been resolved, I argue that the original version of the epistemic value can be recognized as a vice rather than a virtue, which supports the charge that the original arguments are indeed “anti-science.”

4.1 Instrumental and Intrinsic Goods

I have supposed that the aims of a practice are, at least for practitioners,15 intrinsic goods, valuable for their own sake and without need for further justification. Axiology (theory of value) traditionally distinguishes intrinsic goods from instrumental goods, which are valuable because they are useful for promoting some other goods. Havstad (2021) develops something like this distinction in her review of Brown (2020). On Brown’s Deweyan account, inquiry “begins in doubt or perplexity” arising from a material breakdown in established habits or routines (25); Havstad proposes that science can also be “spurred by positive sensations like curiosity, wonder, adventure, and delight” (Havstad 2021). Brown’s emphasis on doubt and perplexity reflects the pragmatist emphasis on the instrumental value of scientific research, while Havstad’s contrasting aesthetic responses correspond to science as intrinsically valuable.

Often these categories are treated as mutually exclusive: an instrumental good is not intrinsically valuable, and vice versa. But this does not follow from the definitions. As Richardson puts it, “it is also coherent to say that we pursue something both for its own sake and for the sake of something else” (Richardson 2002, 101).

Consider the activity of coding, writing computer software. While not everyone finds coding an intrinsically valuable activity, many do (including myself). We enjoy the intellectual challenge of explicating a complicated task as a precise sequence of computational steps. Coding is done in many different contexts (practices), with different aims, which can have different standards for good code. One context with unusual standards is the game of code golf. The aim of code golf is typically to produce a given output using source code with the fewest characters. This is a purely intrinsic good, with no instrumental value. Good code golf code is often extremely difficult for humans to understand. For example, the following line is a complete program in the Befunge language, which does the calculation 45² − 11 and then prints the target result, “2014,” using 4 characters (Justin 2015): 'ߞ.@

By contrast, for most practicing software engineers coding is both intrinsically valuable and useful for some other purpose. Good code is an instrumentally valuable intrinsic good. Importantly, producing useful software is often a complex, highly collaborative activity. Some popular open source projects might have hundreds or even thousands of contributors. As an extreme example, there are approximately 12,000 official contributors to the GitHub repository for the open source Linux kernel as of 2021-12-15, and anyone with a GitHub account can submit a “pull request” to propose changes to the code. In this radically collaborative context (Winsberg, Huebner, and Kukla 2014), it’s essential that software engineers write code that potential collaborators are able to understand relatively quickly and easily. For this reason, software engineers are taught to follow style guides and use techniques such as “self-documenting code” to produce highly readable code, while code that is difficult to read is derided as “smelly” or “messy.” In this context good code golf code would be bad code. While good code is an intrinsic good, the standards for good code depend on the context, in ways that correspond to differences in the instrumental value of the code.
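The two standards of “good code” can be illustrated with a small, hypothetical Python example. Both definitions compute the same thing (a sum of squares); the names and the task are invented purely for illustration:

```python
# Golf-style definition: as terse as possible. A virtue in code golf,
# a vice in collaborative software engineering.
f = lambda l: sum(x * x for x in l)

# Self-documenting style: written to be read by future collaborators.
def sum_of_squares(values):
    """Return the sum of the squares of the given numbers."""
    return sum(value * value for value in values)

# The two definitions are extensionally equivalent:
assert f([1, 2, 3]) == sum_of_squares([1, 2, 3]) == 14
```

What differs between the two definitions is not what they compute but how well each promotes the further aims of the practice in which the code is embedded.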

I suggest that scientific knowledge often has a similar status, valuable both for its own sake as well as because it is useful. That is, I suggest that the knowledge produced by environmental epidemiology is both intrinsically and instrumentally valuable. If a line of research has good prospects for producing new knowledge, it does not require further justification to claim that the research is valuable. But it can be given further justification by showing that the knowledge it will produce has good prospects for being useful for protecting human health.

Further, I claim that there is an asymmetry between environmental epidemiology’s epistemic and pragmatic aims. Knowledge of the distribution and health effects of pollutants is useful for protecting human health, but human health is not useful for producing knowledge of the distribution and health effects of pollutants. While human health is instrumentally valuable for many other goods, from the perspective of environmental epidemiology it is a non-instrumental or purely intrinsic good.16

Richardson notes that “When we pursue x for the sake of y, … we judge it appropriate or acceptable to regulate the manner and extent of pursuit of x by reference to y” (Richardson 2002, 101). That is, if we do x in order to do y, then we should do x in a way that tends to promote y, and we do x well only if we have, by so doing, also (contributed to) doing y well. When we write code that is purely intrinsically valuable (as in code golf), we don’t need to take into account whether it promotes any further goods. But most practicing software engineering is producing code for some further purpose. Because code golf’s extremely succinct style generally frustrates these further purposes, this style is unacceptable in everyday software engineering. In these contexts, good golf code is bad code; or, in terms of virtues and vices, an extremely succinct style can be a virtue in code golf but a vice in everyday software engineering. In terms of science, when we pursue the epistemic aims of science for the sake of some pragmatic aims, then it is appropriate to regulate the manner of the epistemic pursuits by reference to the pragmatic aims. In other words, when the epistemic aims of science are instrumentally valuable for promoting some pragmatic aims, then epistemic virtues must also be pragmatic virtues. And so epistemic values that are pragmatic vices—things that help us avoid false belief but frustrate the protection of human health, for example—are actually epistemic vices—they frustrate, not promote, the epistemic aims of science.

To be clear, on an alternative version of the broad view, fields such as environmental epidemiology have two independent sets of aims, in the sense that we can achieve one set (say, the epistemic aims) without achieving the other. Janet Kourany seems to have this kind of view (2010, 62). By contrast, here I am proposing that the aims are interdependent. When and insofar as the epistemic aims of science are valuable because (along with any intrinsic value) they are useful for achieving the pragmatic aims of science, the epistemic aims are not achieved if they are not useful in this way. Apparent epistemic achievement is merely apparent unless it is also pragmatic achievement. This means that things that appear to promote epistemic achievements while frustrating pragmatic achievements do not actually promote epistemic achievements. In this way, vices can be mistaken for virtues.17

Now consider an objection.18 Suppose it is December 2021 (the beginning of the Omicron wave of Covid-19 in the US), I am attending a social gathering tomorrow, and I want to take a Covid-19 test. I might use a faster (15-minute) but less accurate (80%, say) at-home antigen test, or a more accurate (95%, say) but slower (48-hour) PCR test. The slower test is pragmatically useless, and so my argument in the last few paragraphs implies that it is also epistemically useless. But, so the objection goes, it seems like the results of the PCR test are still epistemically valuable.

Some sense of value for the PCR test might come from pragmatic value on a longer time scale. Suppose I take both tests, the home test is negative, and the PCR test is positive. Because of its higher accuracy, the PCR test is regarded as authoritative, and I can conclude that the home test gave me a false negative. I can then let the other attendees know that I exposed them to Covid-19, which will hopefully reduce further downstream cases. Here the epistemic features of the PCR test contribute to its pragmatic value, just on a longer time frame than the initial decision to attend the social gathering. PCR tests in general are also pragmatically valuable for making these kinds of decisions, even if the pragmatic value of this particular test is limited. If I had gone to get my Covid-19 test a few days earlier, all else being equal a PCR test would have been more pragmatically valuable than a home test, because of the greater accuracy.
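The sense in which the more accurate test carries greater epistemic weight can be made concrete with a toy Bayesian calculation. Here I treat the 80% and 95% accuracy figures as both sensitivity and specificity, and assume a 10% prior probability of infection; all of these numbers are purely illustrative:

```python
def positive_predictive_value(prior, sensitivity, specificity):
    """P(infected | positive test result), computed via Bayes' theorem."""
    true_positives = prior * sensitivity
    false_positives = (1 - prior) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# Illustrative figures: 10% prior infection rate; 80% home antigen test;
# 95% PCR test (accuracy used for both sensitivity and specificity).
ppv_home = positive_predictive_value(0.10, 0.80, 0.80)  # about 0.31
ppv_pcr = positive_predictive_value(0.10, 0.95, 0.95)   # about 0.68
```

On these toy numbers, a positive PCR result warrants roughly twice the credence of a positive home test, which is why the PCR result is treated as authoritative when the two disagree. But this epistemic advantage does nothing to restore the slower test’s pragmatic value for tomorrow’s gathering.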

Some readers might insist that the PCR test has some purely intrinsic epistemic value, completely independent from any pragmatic value on any time scale. I can imagine some readers feeling some sense of wonder or awe at the scientific accomplishment of making the PCR process incredibly inexpensive and reliable over just a few decades, or perhaps at the high degree of accuracy of the PCR test. (In personal communication, Havstad says that she would not have these feelings herself if she were in this situation.) I think these feelings seem more appropriately directed at PCR tests in general, not at this particular PCR test, even if they are prompted by contemplating this particular test.

We might also say that the epistemic aims of the particular test have been achieved but in a compromised way. Insofar as the epistemic aims of the test are intrinsically valuable, and so have standards independent of any pragmatic aims, they have been achieved; but because they are also instrumentally valuable, and they have failed to contribute to those further uses, they are flawed or incomplete.

In general, the 48-hour turnaround time and high accuracy of PCR tests are impressive epistemic achievements. But, given my needs in this particular situation, the 48-hour turnaround time makes this particular PCR test seriously epistemically deficient, even with its high accuracy. Even if we want to say its epistemic aims have been achieved, they have only been achieved in a compromised, flawed, or incomplete way. So this kind of case shows how epistemic virtues in one context can be epistemic vices in another.

4.2 Disunity and reconciliation

In virtue ethics, the unity of the virtues is the view that—at least for the “cardinal” or primary moral virtues—having one virtue, such as benevolence, requires having all of the others, such as courage and temperance. The view has been controversial, in part because it seems highly plausible that someone could be both kind and cowardly, for example. I take no position here on whether the unity of the virtues is true. Instead, I suggest that Annas’s (2011) defense of the unity of the virtues can provide some guidance when we are faced with a practical dilemma that appears to require choosing between conflicting virtues.

The crux of Annas’s argument is a recognition that a virtue must be both stable and context-sensitive. On the one hand, it must be stable, in the sense that it “involves more than the activity performed in the situations in which it is first learned: it involves something on the person’s part which can also be shown in entirely different activity in a different context” (84). Comforting a child with a scraped knee and granting a paper extension to an undergraduate student are both acts of benevolence. If we say someone has a benevolent character, we are saying that we can rely on them to do these kinds of actions in a variety of contexts. On the other hand, the specific actions used to “do benevolence” are quite different across contexts. The specific things that you do to comfort a child (giving a hug, gently applying a bandage, perhaps giving a cookie) have little or no superficial resemblance to the specific things you do to grant a paper extension (writing an email, adding a paragraph to a syllabus, perhaps changing a due date in a learning management system). So if we say someone has a benevolent character, we are also saying that we can rely on them to act differently in different situations, according to differences in what benevolence requires.

Further, the stability of a virtue typically requires its context-sensitivity. Someone who responds to an extension request by offering a hug, bandage, and cookie isn’t exemplifying a benevolent character. As an epistemological point, insofar as our conceptualization of a virtue is tied too closely to the specific ways that virtue is exemplified in certain contexts, we will be unable to recognize that this same virtue is exemplified in different ways in other contexts.

The next key move in Annas’s argument is to claim that the context-sensitivity of a virtue—and so also its stability—often (or perhaps typically) requires the exercise of other virtues, even ones that are conceptually distinct.

The compassionate person might well need courage to insist that a victim be treated properly, or to stand up to a bully on someone else’s behalf. If he [sic] lacks courage, his compassion will be flawed too; victims can’t rely on it, and others generally can’t rely on him to be compassionate in appropriate circumstances. In general a virtue which is unreliable in its exercise because of facts about the person (rather than external circumstances) is a compromised virtue. (Annas 2011, 88)

Consider the paper-extension request. At least in North America, the culture of higher education is often highly suspicious of students. We instructors are encouraged to think of students as lazy and mendacious. In this cultural context, granting paper extensions—especially explicitly adopting a policy that invites extension requests and grants them freely—requires being prepared to defend the policy if challenged by employers or colleagues. That is, exemplifying benevolence in this way requires also exemplifying courage. Generalizing, as we move between different situations, we may need to clarify or reconceptualize virtues, to ensure that they are context-sensitive in a way that maintains their stability.

Again, Annas is primarily interested in the theoretical question of whether exemplifying one of the primary moral virtues requires also exemplifying the others. Here I am interested in the practical question of how to reconcile a conflict between two apparent virtues of a specific practice, such as the apparent scientific virtues of skepticism and precaution, or transparency and timeliness.

One response to such a conflict would be to maintain the independence of the virtues. Transparency and timeliness are conceptually independent, and so we would expect both happy cases in which they agree and unhappy cases in which they disagree. When they disagree, we need to pick one, or bring in some other kind of consideration to decide how to move forward. (Compare Neurath’s (1983) concept of the auxiliary motive—which might resolve such dilemmas with a coin flip—and the way the narrow view seemed required to bring in “non-epistemic” values above.)

Another response to apparent conflict, inspired by the doctrine of the unity of the virtues, would see reconciliation between these virtues as a practical challenge. It may not be obvious how we might reconcile these seemingly-conflicting virtues. But perhaps we can do it if we try.

Consider an apparent conflict in paper-extension cases between benevolence and justice. We might have a conception of justice as procedural fairness and formal equality, according to which justice is exemplified by announcing paper deadlines well in advance and holding students responsible for managing their various responsibilities. On this conception of justice, giving some students additional time on the paper because they failed to manage their responsibilities well would be ad hoc and unjust to the rest of the class. Thus there appears to be a conflict between the virtues of justice (having the same expectations for all students) and benevolence (granting exemptions).

Deliberating on this apparent conflict, we might learn that, today, traditional-aged higher education students (18–25) may have little or no control over their work schedules (Lambert 2008; Henly and Lambert 2014) and are significantly more likely than other age groups to experience debilitating depressive episodes (Lipari 2018, fig. 46).19 These sorts of factors may mean that many higher education students are much more likely to face unexpected challenges than their instructors did as students. At the same time, we also might realize that many students have not had explicit instruction or guidance in time management, and so it is unreasonable to expect them to have mastered this challenging skill; and that students might reasonably choose to misrepresent the reasons for their time management challenges in order to preserve their privacy. Altogether, we might conclude that we need to modify our conception of justice in our assignment schedules, to be more flexible and more accommodating of unexpected challenges in our students’ lives. According to this modified conception of justice, different students face different challenges and so it would actually be unjust to hold all students to the same rigid assignment schedule. Instead, the assignment schedule should be flexible, to accommodate unexpected complications. Now benevolence and justice agree rather than conflict: both virtues direct us to adopt a generous extension policy. Put more generally, we respond to a seeming disunity between benevolence and justice by reconceptualizing justice in a way that allows benevolence and justice to be unified. I will call this activity of reconceptualizing one virtue, in order to resolve a seeming conflict with another virtue, reconciliation. Note that reconciliation will often involve the exercise of creativity (Brown 2020) and judgment (phronesis). There is no deterministic procedure for reconciling an apparent conflict between two virtues.

After this reconciliation, we can recognize that the previous conception of justice was not just mistaken, but actually vicious. This is in part because of the way the first conception of justice was context-insensitive, namely, the way it neglected differences between traditional-aged higher education students today and otherwise similar students decades ago. Because of this context-insensitivity, the procedural and formal conception of justice actually frustrated the pursuit of both benevolence and justice.

4.3 Resolving the paradox

In the last two subsections, I’ve developed two conceptual tools: the concept of intrinsically-valuable instrumental goods, with a corresponding asymmetry between the epistemic and pragmatic aims of some scientific fields; and the reconciliation of one virtue to another as a way to ensure that virtues are appropriately sensitive to context. Here I apply these tools to the apparent conflicts between epistemic and pragmatic values: between transparency and timeliness, and between skepticism and precaution. Reconciliation suggests that, when confronted by this kind of apparent conflict between a pair of epistemic and pragmatic values, we can remove the conflict by reconceptualizing one. The asymmetry between epistemic and pragmatic aims suggests that the epistemic value is the one that should be reconceptualized.

Let us begin with transparency and timeliness. The epistemic goods promoted by open science are valuable on their own, but also must be valuable for the way they promote the pragmatic aims of protecting human health and the environment. Specifically, the pursuit of these goods should not delay research just because a field like environmental epidemiology doesn’t (yet) have the infrastructure and know-how that is required for open science to be relatively time-efficient. This means that open science initiatives, to be virtuous, must include infrastructure development, training for both students and professional researchers, expanded research funding (to cover additional researcher time and maintenance expenses), and ready access to data management and informatics experts. These efforts must continue over a substantial period of time—perhaps a decade or two—in order for the vast majority of research groups in the field to be able to do open science efficiently. Until that time, a sharp open science requirement such as Strengthening Transparency is almost certain to delay research, and hence be vicious.

This proposal expands the conception of transparency in open science, from a single action performed by an individual researcher—“release your data”—to a complex collaborative activity, involving significant infrastructure and the ongoing work of different kinds of specialists (Levin et al. 2016; Bezuidenhout et al. 2017; Elliott and Resnik 2019). That is, the previous paragraph reconceptualizes transparency from an individual virtue to a social one (compare Longino’s (1990) social reconceptualization of objectivity), in a way that reconciles it with the pragmatic virtue of timeliness. After this reconciliation, from the perspective of the social conception of transparency, the completely individualistic conception of transparency—the way transparency is implicitly understood by Strengthening Transparency—is recognized as a vice, not a virtue, in two ways. First, the completely individualistic conception conflicts with the pragmatic aims of environmental public health; and second, it fails to recognize that individual researchers can only exemplify transparency when the appropriate infrastructure is available.

Next, skepticism and precaution. To reconcile skepticism and precaution, we might apply skepticism to skepticism itself. Recall that Young’s argument was not that error rates in air pollution studies are actually inflated by multiple comparisons across different model specifications, but rather that they might be. A skeptical attitude towards this skeptical argument would point out that the mere possibility of a problem is not yet very compelling evidence of an actual problem. A special line of research funding might be developed to examine the way alternative model specifications might make a difference to the conclusions of air pollution studies (perhaps using the method of “multiverse analysis,” which attempts to fit every alternative model specification; Steegen et al. 2016). Until this kind of research has produced evidence of an actual problem, the pragmatic aim of protecting human health and the environment recommends a precautionary attitude—namely, accepting the conclusion that air pollution is hazardous.20 This skeptical attitude—directed towards hasty claims of problems with air pollution research—supports, rather than undermines, a precautionary attitude, and thus promotes both the epistemic and pragmatic aims of environmental epidemiology.21
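The multiverse idea invoked here—fit the same exposure–outcome model under every defensible combination of analytic choices and check whether the estimate of interest is stable—can be sketched in a few lines. The following is a minimal illustration on synthetic data; the variable names, the toy data-generating process, and the use of plain least squares are my assumptions for illustration, not anything drawn from the air pollution literature or from Steegen et al.’s own examples.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: outcome y, exposure x, two optional covariates.
n = 500
x = rng.normal(size=n)            # stand-in for a standardized exposure measure
z1 = rng.normal(size=n)           # optional covariate 1 (e.g., a confounder candidate)
z2 = rng.normal(size=n)           # optional covariate 2
y = 0.5 * x + 0.3 * z1 + rng.normal(size=n)  # true exposure coefficient = 0.5

covariates = {"z1": z1, "z2": z2}

def exposure_coefficient(y, columns):
    """OLS fit; return the coefficient on the exposure (column 1)."""
    X = np.column_stack(columns)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]  # column 0 is the intercept, column 1 is x

# The "multiverse": one model specification per subset of optional covariates.
results = {}
for k in range(len(covariates) + 1):
    for subset in itertools.combinations(covariates, k):
        cols = [np.ones(n), x] + [covariates[name] for name in subset]
        results[subset or ("none",)] = exposure_coefficient(y, cols)

for spec, coef in results.items():
    print(f"adjusting for {spec}: exposure coefficient = {coef:.3f}")

# If the estimate barely moves across specifications, the worry that model
# choice inflates error rates has little bite for this (toy) dataset.
spread = max(results.values()) - min(results.values())
print(f"spread across specifications: {spread:.3f}")
```

The point of the exercise is evidential rather than rhetorical: rather than arguing from the mere possibility that some specification would change the result, one enumerates the specifications and reports how much the conclusion actually varies.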

5 Conclusion

In this paper, I’ve attempted to resolve an apparent paradox. Opponents of environmental regulation have sometimes made arguments that appeal to epistemic values. Environmental advocates and others have explicitly called some of these arguments “anti-science,” and likely would apply this label to the other arguments as well. But how can an appeal to epistemic values be “anti-science”?

My answer turns on the idea that fields such as environmental epidemiology have both epistemic and pragmatic aims, and that the epistemic aims are valuable in part because they are useful for promoting the pragmatic aims. Drawing on ideas from axiology and virtue ethics, I’ve shown that we can reconceptualize the epistemic values at work in the arguments in question, reconciling the apparent conflict between epistemic and pragmatic values. Once this is done, it is clear that the original, un-reconceptualized appeals to epistemic values frustrate, rather than promote, both the epistemic and pragmatic aims of science. And so these original, un-reconceptualized epistemic values are scientific vices rather than virtues. There is thus no paradox in labeling these values “anti-science.”


Thanks to members of the UC Davis Philosophy of Biology Lab and attendees of the UT Dallas 2020 Online Symposium on Values in Medicine, Science, and Technology for discussion of early versions of this paper. Thanks to Matt Brown for suggesting a paper further developing my version of the “aims approach.”


    © 2022 Author(s)

    This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license, which permits anyone to download, copy, distribute, display, or adapt the text without asking for permission, provided that the creator(s) are given full credit.

  1. Douglas (2013) also uses these terms, but in a different way. Douglas distinguishes between, on the one hand, epistemic or cognitive values that function as “minimal criteria”—necessary conditions for a theory to be true, such as internal consistency and empirical adequacy—and, on the other hand, “desiderata” such as “scope, simplicity, and (potential) explanatory power” that are not logically necessary for truth but can “give us [some epistemic] assurance” because “we are more likely to hone in on the truth with the presence of these values than in their absence” (800). Douglas refers to these as “strategic or pragmatic values” (800). Note that pragmatic values, in Douglas’ sense, are epistemic values, in Steel’s sense: simplicity can be truth-promoting, even if it is not a necessary (or sufficient) condition for truth. Indeed, Douglas relates her use of “pragmatic values” to Steel’s notion of “extrinsic epistemic values” (Douglas 2013, 800; Steel 2010, 15). On my use, “pragmatic values” contrast with (though do not logically exclude) epistemic values. The fact that a theory would promote human health does not give an assurance that we are more likely to hone in on the truth. [^]
  2. Thanks to Jim Griesemer for suggesting that I clarify this distinction between values and virtues. [^]
  3. Hicks (2014) and Melo-Martín and Intemann (2018) have argued that, often, there is too little evidence to make reliable claims about individuals’ intentions. Quill Kukla (writing as Rebecca Kukla) and coauthors have raised questions about the concept of authorship in extended research collaborations (Kukla 2012; Winsberg, Huebner, and Kukla 2014). These same points frequently apply to individuals’ beliefs. For example, the proposed rule Strengthening Transparency (section 2.2.2) was most likely written as a collaborative effort involving a dozen or more individuals in various positions at US EPA, borrowing language that several Congressional staff had used in various iterations of the HONEST and Secret Science bills, and responding to suggested revisions from other individuals in the White House and other executive branch agencies. With the readily-available evidence, it would be at best extremely difficult to identify these individuals, much less relate particular sections of the text to their individual intentions or beliefs, especially on topics such as the hazards of PM 2.5 that are not explicitly addressed in the text of Strengthening Transparency. [^]
  4. Dunlap and McCright (McCright et al. 2013; Dunlap 2014) have argued that climate skepticism involves a simultaneous rejection of “impact” or regulatory science and a defense of “production” or industry science, rather than a rejection of science in general or as such. [^]
  5. Hicks (2022) gives a technical critique of a second method, based on graphs of distributions of p-values. [^]
  6. Philosopher of statistics Deborah Mayo—who has defended this link between evidence, frequentist hypothesis testing, and the control of error rates—has hosted guest posts by Young on her blog (Young 2013b, 2013a, 2014). Young also visited a course taught by Mayo on “Statistics and Scientific Integrity,” where he presented a version of the argument that I consider here (Mayo 2014a, 2014b). [^]
  7. Note that I am talking about acceptance rather than belief, and specifically accepting a claim to inform policymaking rather than committing to the truth of the claim (Elliott and Willmes 2013; Steel 2013; McKaughan and Elliott 2015; Steel 2015a; Brown 2015; Lacey 2015; Potochnik 2015; Franco 2017). [^]
  8. S. Stanley Young is among the API-funded scientists who have made this argument, using the epithet “trust me science” (Committee on SST 2012; Young 2013b). [^]
  9. Open science requirements might still introduce some delay even with the appropriate infrastructure and know-how. For example, there probably isn’t any way to automate the production of a data codebook, which can take a substantial amount of time even when done by an experienced researcher. So, while infrastructure and know-how can reduce the tension between timeliness and transparency, they probably can’t completely eliminate it. Thanks to an anonymous reviewer for encouraging me to clarify this. [^]
  10. Thanks to Heather Douglas, P.D. Magnus, Greg Lusk, and Matt Brown for making versions of the argument in this paragraph. [^]
  11. The Science Wars of the 1990s can be read as an extended debate over the narrow view, the value-free ideal, and related concepts. An interesting empirical project would be to analyze the metaphors used by either side of this debate. The analysis I’m giving here implies that we would expect defenders of the narrow view/value-free ideal to use disease and contamination metaphors more than their critics do. [^]
  12. In this paper I’m using “science” as a generic term, not a universal, with environmental epidemiology as the prototype. Some scientific fields, such as cosmology, might not have pragmatic goods, much less pragmatic aims. Any such fields are outside of the scope of this paper. [^]
  13. Boesch (2019) does not seem to address this problem, but does develop a set of individual-level ideas that seem to fit well with my community-level analysis: in order for the researcher’s activity to count as producing a representation, the series of questions and answers must ultimately appeal to “some scientific aim,” which must be “recognized or accepted by the broader scientific community as a scientific aim” and which might be “[a] nonepistemic aim[], including things like mitigation or practical implementation” (Boesch 2019, 2314). Boesch (2019) draws heavily on Anscombe (2000), who offers an account of intention on which “actions are not explained in terms of internal, individual mental states … [but instead] are to be explained in terms of the wider … [social] practice” (Boesch 2019, 2309). The current paper might be read as developing an account of science as the kind of social practice that can support such an account of intention. [^]
  14. Thanks to P.D. Magnus for raising this question. [^]
  15. My analysis throughout this paper generally only applies “internally,” to the standpoint of practitioners, namely, environmental epidemiologists. Other, “external” standpoints—say, the fossil fuels industry—can have very different ideas about whether the goods produced by a practice are intrinsically valuable, instrumentally valuable, or indeed perhaps not even valuable at all (Hicks 2014). In this sense, my analysis in this paper is relativist. [^]
  16. Note that my claims here are about justificatory relationships, not causal ones. Human health is a causal prerequisite for the production of epidemiological knowledge, in the sense that if all environmental epidemiologists were too unhealthy to work then knowledge would not be produced. But the further production of epidemiological knowledge does not provide a good reason for promoting human health. [^]
  17. Another useful comparison might be to the “epistemic constraint” view of Steel (2017). This view is based on “a distinction … between the final, long-term aims of science and the means by which science attempts to achieve those aims … namely, by advancing knowledge” (58). Based on this distinction, I think Steel would agree that some of the aims of science are pragmatic, and that scientific knowledge is instrumentally valuable for promoting these pragmatic aims. However, Steel claims that advancing knowledge is the only (legitimate) means that science uses to promote these pragmatic aims. Parker offers a slightly more hedged version of this thought: “Sometimes, the stated purposes for which models are used are practical …. Even in these cases, however, the intended contribution of the model is often epistemic: it is expected that the model’s serving one or more epistemic purposes will, in the context of a more extended activity, facilitate the achievement of the practical purpose” (Parker 2020, 460, my emphasis; see also Parker 2021). Steel argues that (his version of) this view entails the eponymous epistemic constraint, namely, that science “should not operate in ways that are incompatible with basic criteria for advancing empirical knowledge” (Steel 2017, 58). I’m not certain I would agree with this argument, in part because I am more inclined to Parker’s hedged version and in part because I’m not sure what “basic criteria” are left once we recognize the incredible pluralism of scientific practices. Even the concept of empirical adequacy has value-laden complexities (Longino 1995). I’m not sure how Steel would react to my argument (below) that epistemic values should be reconciled to pragmatic aims. Without some “basic criteria” to constrain reconciliation, perhaps he’d worry that my view collapses into an anything-goes relativism? 
On the other hand, I agree that epistemic values promote an intrinsic good of science, and so science should exemplify epistemic values. There may not be much substantive difference between myself, Steel, and Parker here. [^]
  18. I thank an anonymous reviewer for suggesting this kind of case. [^]
  19. This paragraph was written in January 2020. Some studies have found that mental health issues in US university students increased during the first few months of the Covid-19 pandemic (Browning et al. 2021; Copeland et al. 2021); a grey literature survey of students at about 400 US colleges and universities reports that 41% had major or moderate depression, 34% had an anxiety disorder, 23% had non-suicidal self-injury in the past year, and 52% received mental health treatment (Eisenberg et al., n.d., note that this report is subtitled “2021 Winter/Spring Data Report” but is not dated and does not appear to indicate when this survey was fielded). And students have likely been impacted by dramatic swings in employment in the restaurant industry and the emergence of the “essential worker” category. So it’s plausible that the factors identified in this paragraph are even more serious two years later. [^]
  20. Why can’t the narrow view give a similar response to this conflict? We saw above that both skepticism and precaution can be understood in purely epistemic terms. But note that we might also try to resolve the conflict by reconciling precaution to skepticism, accepting Young’s skeptical concerns about air pollution research unless and until further evidence is available. So we have two basic options—we can accept the air pollution research until it’s shown to be problematic, or we can accept the criticism until it’s shown that there’s actually nothing to worry about—that parallel the original conflict between precaution and skepticism. The narrow view doesn’t seem to provide any resources for helping us choose between the epistemic goods of avoiding error and believing truth. [^]
  21. An anonymous reviewer raises the concern that this argument doesn’t reconceptualize skepticism, but instead merely makes the point that skepticism “as already conceptualized” doesn’t entail Young’s view. My primary concern in this paper is to explain how arguments such as Young’s are vicious by arguing that we should think about skepticism differently in light of the pragmatic aims of science. It’s not so important to me whether this counts as “reconceptualization” in some strict sense. I will note that skeptical arguments usually don’t lead to the conclusion that we should (provisionally) accept a first-level claim such as “air pollution is hazardous.” So at least I’m using skepticism in an unusual way. [^]

Literature cited

Annas, Julia. 2011. Intelligent Virtue. Oxford University Press.

Anscombe, Gertrude Elizabeth Margaret. 2000. Intention. Harvard University Press. Google Books: _D1xjNXFT8cC.

Berg, Jeremy, Campbell Philip, Kiermer Veronique, Raikhel Natasha, and Sweet Deborah. 2018. “Joint Statement on EPA Proposed Rule and Public Availability of Data.” Science 360, no. 6388 (May 4, 2018): eaau0116.

Bezuidenhout, Louise M., Leonelli Sabina, Kelly Ann H., and Rappert Brian. 2017. “Beyond the Digital Divide: Towards a Situated Approach to Open Data.” Science and Public Policy 44, no. 4 (August 1, 2017): 464–475.

Biddle, Justin B., and Leuschner Anna. 2015. “Climate Skepticism and the Manufacture of Doubt: Can Dissent in Science Be Epistemically Detrimental?” European Journal for Philosophy of Science 5, no. 3 (June 19, 2015): 261–278.

Boesch, Brandon. 2019. “The Means-End Account of Scientific, Representational Actions.” Synthese 196, no. 6 (June): 2305–2322.

Brown, Matthew. 2015. “John Dewey’s Pragmatist Alternative to the Belief-Acceptance Dichotomy.” Studies in History and Philosophy of Science Part A 53 (October): 70.

Brown, Matthew. 2020. Science and Moral Imagination: A New Ideal for Values in Science. University of Pittsburgh Press, October 27, 2020. Google Books: M4ZozQEACAAJ.

Browning, Matthew H. E. M., Larson Lincoln R., Sharaievska Iryna, Rigolon Alessandro, McAnirlin Olivia, Mullenbach Lauren, Cloutier Scott, et al. 2021. “Psychological Impacts from COVID-19 among University Students: Risk Factors across Seven States in the United States.” PLOS ONE 16, no. 1 (January 7, 2021): e0245327.

Buzzelli, Michael. 2017. “Air Pollution and Respiratory Health: Does Better Evidence Lead to Policy Paralysis?” In The Routledge Handbook of Environmental Justice, edited by Holifield, Ryan, Chakraborty Jayajit, and Walker Gordon, 327–37. Routledge, September 14, 2017. Google Books: Sa41DwAAQBAJ.

Copeland, William E., McGinnis Ellen, Bai Yang, Adams Zoe, Nardone Hilary, Devadanam Vinay, Rettew Jeffrey, et al. 2021. “Impact of COVID-19 Pandemic on College Student Mental Health and Wellness.” Journal of the American Academy of Child & Adolescent Psychiatry 60, no. 1 (January 1, 2021): 134–141.e2.

Dinerstein, Chuck. 2019. “Revisiting ‘Harvard Six Cities’ Study.” American Council on Science and Health, December 4, 2019. Accessed January 28, 2020.

Dockery, Douglas W. 2009. “Health Effects of Particulate Air Pollution.” Annals of Epidemiology 19, no. 4 (April 1, 2009): 257–263.

Dockery, Douglas W., Pope C. Arden, Xu Xiping, Spengler John D., Ware James H., Fay Martha E., Ferris Benjamin G., et al. 1993. “An Association between Air Pollution and Mortality in Six U.S. Cities.” New England Journal of Medicine 329, no. 24 (December 9, 1993): 1753–1759.

Douglas, Heather. 2013. “The Value of Cognitive Values.” Philosophy of Science 80 (5): 796–806.

Dunlap, Riley E. 2014. “Clarifying Anti-Reflexivity: Conservative Opposition to Impact Science and Scientific Evidence.” Environmental Research Letters 9 (2): 021001.

Dunn, John. 2004. “EPA Junk Science on Air Pollution Deaths.” American Council on Science and Health, December 22, 2004. Accessed January 28, 2020.

Eisenberg, Daniel, Lipson Sarah Ketchen, Heinze Justin, Zhou Sasha, Talaski Amber, Patterson Akilah, Foge Sarah, et al. n.d. The Healthy Minds Study: 2021 Winter/Spring Data Report. Healthy Minds Network. Accessed January 4, 2021.

Elliott, Kevin C., and McKaughan Daniel J.. 2014. “Nonepistemic Values and the Multiple Goals of Science.” Philosophy of Science 81 (1): 1–21. JSTOR: info/10.1086/674345.

Elliott, Kevin C., and Resnik David B.. 2019. “Making Open Science Work for Science and Society.” Environmental Health Perspectives 127, no. 7 (July): 075002.

Elliott, Kevin C., and Willmes David. 2013. “Cognitive Attitudes and Values in Science.” Philosophy of Science 80 (5): 807–817. JSTOR: info/10.1086/673719.

Fernández Pinto, Manuela, and Hicks Daniel J.. 2019. “Legitimizing Values in Regulatory Science.” Environmental Health Perspectives 127, no. 3 (March 14, 2019): 035001.

Franco, Paul L. 2017. “Assertion, Nonepistemic Values, and Scientific Practice.” Philosophy of Science 84, no. 1 (January): 160–180.

Franco, Paul L. 2019. “Speech Act Theory and the Multiple Aims of Science.” Philosophy of Science 86, no. 5 (December 1, 2019): 1005–1015.

Fraser, Rachel. 2020. “Epistemic FOMO: A Review of Conspiracy Theories by Quassim Cassam (Polity Press, 2019) and A Lot of People Are Saying by Russell Muirhead and Nancy L. Rosenblum (Princeton University Press, 2020).” Cambridge Humanities Review 16 (Autumn): 15–28.

Friedman, Lisa. 2019. “E.P.A. to Limit Science Used to Write Public Health Rules.” The New York Times: Climate, November 11, 2019.

Gelman, Andrew. 2011. “Ethics and Statistics: Open Data and Open Methods.” Chance 24 (4): 51–53.

Gelman, Andrew. 2018. “Proposed New EPA Rules Requiring Open Data and Reproducibility.” Statistical Modeling, Causal Inference, and Social Science, April 25, 2018. Accessed March 8, 2020.

Gulis, Gabriel, and Fujino Yoshihisa. 2015. “Epidemiology, Population Health, and Health Impact Assessment.” Journal of Epidemiology 25, no. 3 (March 5, 2015): 179–180.

Havstad, Joyce C. 2021. “Book Review.” Studies in History and Philosophy of Science Part A (April): S0039368121000406.

Henly, Julia R., and Lambert Susan J.. 2014. “Unpredictable Work Timing in Retail Jobs: Implications for Employee Work–Life Conflict.” ILR Review 67, no. 3 (July 1, 2014): 986–1016.

Hicks, Daniel J. 2012. “Scientific Practices and Their Social Context.” PhD diss., U. of Notre Dame.

Hicks, Daniel J. 2014. “A New Direction for Science and Values.” Synthese 191, no. 14 (September): 3271–3295.

Hicks, Daniel J. 2021. “Open Science, the Replication Crisis, and Environmental Public Health,” July 30, 2021. Accessed July 31, 2021.

Hicks, Daniel J. 2022. “The P Value Plot Does Not Provide Evidence against Air Pollution Hazards.” Environmental Epidemiology 6, no. 2 (April): e198.

Hicks, Daniel J., and Stapleford Thomas A.. 2016. “The Virtues of Scientific Practice: MacIntyre, Virtue Ethics, and the Historiography of Science.” Isis 107, no. 3 (September 7, 2016): 449–472.

Holton, Gerald James. 1993. Science and Anti-science. Harvard University Press. Google Books: 7JiHiUHrsgsC.

Intemann, Kristen. 2015. “Distinguishing between Legitimate and Illegitimate Values in Climate Modeling.” European Journal for Philosophy of Science 5, no. 2 (May): 217–32.

Ioannidis, John P. A. 2005. “Why Most Published Research Findings Are False.” PLoS Med 2, no. 8 (August 30, 2005): e124.

Ioannidis, John P. A. 2018. “All Science Should Inform Policy and Regulation.” PLOS Medicine 15, no. 5 (May 3, 2018): e1002576.

Jasanoff, Sheila. 1992. “Science, Politics, and the Renegotiation of Expertise at EPA.” Osiris 7, no. 1 (January 1, 1992): 194–217.

Justin. 2015. “Answer to ‘Produce the Number 2014 without Any Numbers in Your Source Code’.” Code Golf Stack Exchange, April 8, 2015. Accessed December 15, 2021.

Kaiser, Jocelyn. 1997. “Showdown Over Clean Air Science.” Science 277, no. 5325 (July 25, 1997): 466–469.

Kitcher, Philip. 2001. Science, Truth, and Democracy. Oxford Studies in Philosophy of Science. Oxford ; New York: Oxford University Press.

Koken, Petra J. M., Piver Warren T., Ye Frank, Elixhauser Anne, Olsen Lola M., and Portier Christopher J.. 2003. “Temperature, Air Pollution, and Hospitalization for Cardiovascular Diseases among Elderly People in Denver.” Environmental Health Perspectives 111, no. 10 (August 1, 2003): 1312–1317.

Kourany, Janet A. 2010. Philosophy of Science after Feminism. Studies in Feminist Philosophy. Oxford, New York: Oxford University Press.

Kovaka, Karen. 2021. “Climate Change Denial and Beliefs about Science.” Synthese 198, no. 3 (March): 2355–2374.

Krewski, Daniel, Burnett Richard, Goldberg Mark, Hoover B. Kristin, Siemiatycki Jack, Jerrett Michael, Abrahamowicz Michal, et al. 2003. “Overview of the Reanalysis of the Harvard Six Cities Study and American Cancer Society Study of Particulate Air Pollution and Mortality.” Journal of Toxicology and Environmental Health, Part A 66, nos. 16–19 (January 1, 2003): 1507–1552.

Krieger, Nancy. 2011. Epidemiology and the People’s Health: Theory and Context. New York: Oxford University Press.

Krieger, Nancy. 2015. “Public Health, Embodied History, and Social Justice: Looking Forward.” International Journal of Health Services 45, no. 4 (October 1, 2015): 587–600.

Kukla, Rebecca. 2012. “‘Author TBD’: Radical Collaboration in Contemporary Biomedical Research.” Philosophy of Science 79, no. 5 (December 1, 2012): 845–858.

Lacey, Hugh. 2015. “‘Holding’ and ‘Endorsing’ Claims in the Course of Scientific Activities.” Studies in History and Philosophy of Science Part A 53 (October): 89–95.

Lambert, Susan J. 2008. “Passing the Buck: Labor Flexibility Practices That Transfer Risk onto Hourly Workers.” Human Relations 61, no. 9 (September 1, 2008): 1203–1227.

Latour, Bruno, and Woolgar Steve. 1979. Laboratory Life: The Construction of Scientific Facts. Beverly Hills: Sage Publications.

Leonelli, Sabina. 2016a. Data-Centric Biology: A Philosophical Study. University of Chicago Press. Google Books: 0sJADQAAQBAJ.

Leonelli, Sabina. 2016b. “Open Data: Curation Is Under-Resourced.” Nature 538, no. 7623 (October 6, 2016): 41.

Leonelli, Sabina. 2018. “The Time of Data: Timescales of Data Use in the Life Sciences.” Philosophy of Science 85, no. 5 (December 1, 2018): 741–754.

Lerner, Sharon. 2017. “Republicans Are Using Big Tobacco’s Secret Science Playbook to Gut Health Rules.” The Intercept, February 5, 2017. Accessed January 6, 2021.

Levin, Nadine, Leonelli Sabina, Weckowska Dagmara, Castle David, and Dupré John. 2016. “How Do Scientists Define Openness? Exploring the Relationship Between Open Science Policies and Research Practice.” Bulletin of Science, Technology & Society (September 30, 2016).

Lipari, Rachel N. 2018. “Key Substance Use and Mental Health Indicators in the United States: Results from the 2018 National Survey on Drug Use and Health,” 82.

Lobato, Emilio J. C., and Zimmerman Corinne. 2019. “Examining How People Reason about Controversial Scientific Topics.” Thinking & Reasoning 25, no. 2 (April 3, 2019): 231–255.

Longino, Helen E. 1990. Science as Social Knowledge: Values and Objectivity in Scientific Inquiry. Princeton, N.J: Princeton University Press.

Longino, Helen E. 1995. “Gender, Politics, and the Theoretical Virtues.” Synthese 104 (3): 383–397.

MacIntyre, Alasdair. 1973. “Ideology, Social Science, and Revolution.” Comparative Politics 5 (3): 321–342. JSTOR: 421268.

Magnus, P. D. 2013. “What Scientists Know Is Not a Function of What Scientists Know.” Philosophy of Science 80 (5): 840–849. JSTOR: 10.1086/673718.

March for Science. 2017. “The Science Behind the March for Science Crowd Estimates.” March for Science, May 16, 2017. Accessed June 9, 2020.

Mayo, Deborah. 1996. Error and the Growth of Experimental Knowledge. Science and Its Conceptual Foundations. Chicago: University of Chicago Press.

Mayo, Deborah. 2014a. “Phil 6334 Visitor: S. Stanley Young, ‘Statistics and Scientific Integrity’.” Error Statistics Philosophy, April 23, 2014. Accessed January 22, 2020.

Mayo, Deborah. 2014b. “Reliability and Reproducibility: Fraudulent p-Values through Multiple Testing (and Other Biases): S. Stanley Young (Phil 6334: Day#13).” Error Statistics Philosophy, April 27, 2014. Accessed January 22, 2020.

McCright, Aaron M, Dentzman Katherine, Charters Meghan, and Dietz Thomas. 2013. “The Influence of Political Ideology on Trust in Science.” Environmental Research Letters 8, no. 4 (November 13, 2013): 044029.

McKaughan, Daniel J., and Elliott Kevin C. 2015. “Cognitive Attitudes and Values in Science.” Studies in History and Philosophy of Science Part A.

McMullin, Ernan. 1982. “Values in Science.” PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1982 (2): 3–28.

Melo-Martín, Inmaculada de, and Intemann Kristen. 2018. The Fight Against Doubt: How to Bridge the Gap Between Scientists and the Public. Oxford University Press, July 2, 2018. Google Books: U2hiDwAAQBAJ.

Miller, Boaz. 2021. “When Is Scientific Dissent Epistemically Inappropriate?” Philosophy of Science 88, no. 5 (December 1, 2021): 918–928.

Milloy, Steven. 2019. “Milloy Presentation to EPA CASAC Re Claim That PM Kills,” October 22, 2019. Accessed January 28, 2020.

Moher, David, Naudet Florian, Cristea Ioana A., Miedema Frank, Ioannidis John P., and Goodman Steven N. 2018. “New Principles for Assessing Scientists.” Issues in Science and Technology, November 27, 2018. Accessed January 1, 2019.

Munafò, Marcus R., Nosek Brian A., Bishop Dorothy V. M., Button Katherine S., Chambers Christopher D., du Sert Nathalie Percie, Simonsohn Uri, et al. 2017. “A Manifesto for Reproducible Science.” Nature Human Behaviour 1, no. 1 (January 2017): 1–9.

Nature. 2012. “Death of Evidence.” Nature 487, no. 7407 (July): 271–272.

Nature Communications. 2018. “Epidemiology Is a Science of High Importance.” Nature Communications 9, no. 1 (May 7, 2018): 1–2.

Neurath, Otto. 1983. “The Lost Wanderers of Descartes and the Auxiliary Motive: On the Psychology of Decision.” In Philosophical Papers, 1913–1946, 1–12. Dordrecht-Holland: D. Reidel Publishing Company.

Nosek, B. A., Alter G., Banks G. C., Borsboom D., Bowman S. D., Breckler S. J., Buck S., et al. 2015. “Promoting an Open Research Culture.” Science 348, no. 6242 (June 26, 2015): 1422–1425.

Nosek, Brian. 2019. Testimony of Brian A. Nosek, Ph.D., November 13, 2019. Accessed June 11, 2020.

Parker, Wendy. 2020. “Model Evaluation: An Adequacy-for-Purpose View.” Philosophy of Science 87 (3): 457–477.

Parker, Wendy. 2021. “Science and Values in Climate Risk Management Speaker Series: Wendy Parker.” Rock Ethics Institute, June 11, 2021.

Potochnik, Angela. 2015. “The Diverse Aims of Science.” Studies in History and Philosophy of Science Part A 53 (October): 71–80.

Reardon, Sara. 2018. “University Says Prominent Food Researcher Committed Academic Misconduct.” Nature (September 21, 2018).

Richardson, Henry. 2002. Democratic Autonomy. Oxford University Press.

Ross, Ashley D., Struminger Rhonda, Winking Jeffrey, and Wedemeyer-Strombel Kathryn R.. 2018. “Science as a Public Good: Findings From a Survey of March for Science Participants.” Science Communication 40, no. 2 (April 1, 2018): 228–245.

Schipani, Vanessa. 2018. “Debate Over EPA’s ‘Transparency’ Rule,” May 14, 2018. Accessed January 28, 2020.

Schwartz, Joel. 2018. “‘Transparency’ as Mask? The EPA’s Proposed Rule on Scientific Data.” New England Journal of Medicine (August 29, 2018).

Schweikert, David. 2014. “H.R. 4012: Secret Science Reform Act of 2014.” 113th US Congress (2013–2014), November 20, 2014. Webpage. Accessed January 29, 2020.

Smaldino, Paul E., and McElreath Richard. 2016. “The Natural Selection of Bad Science.” Royal Society Open Science 3, no. 9 (September): 160384.

Committee on SST (United States House of Representatives Committee on Science, Space, and Technology). 2012. “Energy and Environment Subcommittee Hearing - Fostering Quality Science at EPA: Perspectives on Common Sense Reform – Day II,” February 3, 2012. Accessed January 27, 2020.

Steegen, Sara, Tuerlinckx Francis, Gelman Andrew, and Vanpaemel Wolf. 2016. “Increasing Transparency Through a Multiverse Analysis.” Perspectives on Psychological Science 11, no. 5 (September 1, 2016): 702–712.

Steel, Daniel. 2010. “Epistemic Values and the Argument from Inductive Risk.” Philosophy of Science 77 (1): 14–34. JSTOR: 10.1086/650206.

Steel, Daniel. 2013. “Acceptance, Values, and Inductive Risk.” Philosophy of Science 80 (5): 818–828. JSTOR: 10.1086/673936.

Steel, Daniel. 2015a. “Acceptance, Values, and Probability.” Studies in History and Philosophy of Science Part A 53 (October): 81–88.

Steel, Daniel. 2015b. Philosophy and the Precautionary Principle: Science, Evidence, and Environmental Policy. Cambridge University Press.

Steel, Daniel. 2016. “Climate Change and Second-Order Uncertainty: Defending a Generalized, Normative, and Structural Argument from Inductive Risk.” Perspectives on Science 24, no. 6 (June 28, 2016): 696–721.

Steel, Daniel. 2017. “Qualified Epistemic Priority.” In Current Controversies in Values and Science, edited by Elliott, Kevin C. and Steel Daniel, 49–63. New York and London: Routledge.

Tackett, Jennifer L., Brandes Cassandra M., King Kevin M., and Markon Kristian E.. 2019. “Psychology’s Replication Crisis and Clinical Psychological Science.” Annual Review of Clinical Psychology 15 (1): 579–604.

US EPA. 2017. “Our Mission and What We Do.” US EPA, March 22, 2017. Accessed June 16, 2020.

US EPA. 2018. “Strengthening Transparency in Regulatory Science.” Federal Register 83 (April 30, 2018): 18768–18774.

Van Doren, Peter. 2018. “EPA and Data Transparency.” Cato Institute, April 2, 2018. Accessed January 28, 2020.

Wagner, Wendy, Fisher Elizabeth, and Pascual Pasky. 2018. “Whose Science? A New Era in Regulatory ‘Science Wars’.” Science 362, no. 6415 (November 9, 2018): 636–639.

Washburn, Anthony N., and Skitka Linda J.. 2018. “Science Denial Across the Political Divide: Liberals and Conservatives Are Similarly Motivated to Deny Attitude-Inconsistent Science.” Social Psychological and Personality Science 9, no. 8 (November 1, 2018): 972–980.

“Who We Are: S. Stanley Young.” n.d. Heartland Institute. Accessed January 20, 2020.

Winsberg, Eric, Huebner Bryce, and Kukla Rebecca. 2014. “Accountability and Values in Radically Collaborative Research.” Studies in History and Philosophy of Science 46 (June): 16–23.

Young, S. Stanley. 2013a. “S. Stanley Young: More Trouble with ‘Trouble in the Lab’ (Guest Post).” Error Statistics Philosophy, November 16, 2013. Accessed January 22, 2020.

Young, S. Stanley. 2013b. “S. Stanley Young: Scientific Integrity and Transparency.” Error Statistics Philosophy, March 12, 2013. Accessed January 20, 2020.

Young, S. Stanley. 2014. “S. Stanley Young: Are There Mortality Co-Benefits to the Clean Power Plan? It Depends. (Guest Post).” Error Statistics Philosophy, December 13, 2014. Accessed May 27, 2018.

Young, S. Stanley. 2017a. “Air Quality Environmental Epidemiology Studies Are Unreliable.” Regulatory Toxicology and Pharmacology 86 (June): 177–180.

Young, S. Stanley. 2017b. Suggestions for EPA, May 26, 2017. Accessed July 28, 2020.

Young, S. Stanley, Acharjee Mithun Kumar, and Das Kumer. 2019. “The Reliability of an Environmental Epidemiology Meta-Analysis, a Case Study.” Regulatory Toxicology and Pharmacology 102 (March 1, 2019): 47–52.

Young, S. Stanley, and Kindzierski Warren. 2019. “Combined Background Information for Meta-Analysis Evaluation,” January 15, 2019. Accessed January 22, 2020. arXiv: 1808.04408 [stat].

Young, S. Stanley, and Kindzierski Warren. 2020. “PM2.5 and All-Cause Mortality,” October 31, 2020. Accessed December 9, 2020. arXiv: 2011.00353 [stat].