
The Notion and Assessment of ‘Predatory’ in Scholarly Publishing

Authors
  • Teresa Schultz (University of Nevada, Reno)
  • Leila Belle Sterman (Montana State University)
  • Joshua Neds-Fox (Wayne State University)
  • Matt Ruen (Grand Valley State University)
  • Brianne Selman (University of Winnipeg)
  • Stephanie Towery (Texas State University)

Abstract

The notion of predatory publishing as a foil for “traditional” publishing encourages a binary differentiation between subscription publishing and all other forms of scholarly discourse. By leaning into the familiar, publishers and those seeking to maintain control, profit, and prestige in the publishing ecosystem label all other forms as “other” or “predatory” and conflate innovation with scam.

Keywords: assessment, predatory, evaluation

How to Cite:

Schultz, T., Sterman, L. B., Neds-Fox, J., Ruen, M., Selman, B., & Towery, S. (2023) “The Notion and Assessment of ‘Predatory’ in Scholarly Publishing”, The Journal of Electronic Publishing 26(1). doi: https://doi.org/10.3998/jep.3681


Published on 2023-05-09

The Problem

The notion of predatory publishing as a foil for “traditional” publishing encourages a binary differentiation between subscription publishing and all other forms of scholarly discourse. By leaning into the familiar, publishers and those seeking to maintain control, profit, and prestige in the publishing ecosystem label all other forms as “other” or “predatory” and conflate innovation with scam.

Open access (OA) is made possible by the distributive power of the internet. Unfortunately, the same digital, networked landscape that makes multiple types of production easier has also enabled the proliferation of many varieties of publication. Just as vanity publishing was a catch-all used to deride monographs that didn’t fit the perceived standard of publishing (Laquintano 2013), predatory is a non-specific term used to warn authors about a publication without offering analysis or education. Lacking a definition, the concept of predatory journals is used as a threat to scholarship that corrals authors into the safety of large, commercial, “traditional” publishers.

The term predatory is often applied to publications that promise peer review but do not perform it, and to those that impose article processing charges (APCs) without peer review or prior notice. The idea of predatory publishers and journals is well documented in the scholarly literature, yet there is no agreed-upon definition of what actually constitutes a predatory journal or publisher. A scoping review attempting to create a definition found 334 articles that mention predatory journals; of those, only 38 actually provide “relevant empirically derived characteristics of predatory journals” (Cobey et al. 2018), and the authors did not synthesize a single overarching definition. Another group attempted to provide a definitive definition through a Delphi-like project bringing together scholars whom they deemed important stakeholders (Grudniewicz et al. 2019). They eventually decided that “predatory journals and publishers are entities that prioritize self-interest at the expense of scholarship and are characterized by false or misleading information, deviation from best editorial and publication practices, a lack of transparency, and/or the use of aggressive and indiscriminate solicitation practices” (Grudniewicz et al. 2019, 211). There is no evidence that the broader scholarly communication community has accepted this illuminating but non-instructive definition. We could suggest another definition for predatory publishing: if all scholarly publishing is extractive in some way, then predatory publishing is publishing that extracts without the consent of the author, whether of time, money, knowledge, or expertise. While this may serve as an interesting discussion point, it offers no actionable criteria, and so we continue to create lists and checklists to identify predatory publishers without more deeply engaging the authors trying to navigate academic publishing.

Journal watchlists attempt to deter authors from predation, with Jeffrey Beall’s list the first and likely the best known. A large number of lists have appeared since his was shuttered in 2017, each displaying the issues inherent to all such lists; for example, lists rely on the false binary of “good” and “bad” journals, ignoring the complexities of each journal, its authors, and their circumstances. Definitions, lists, and criteria have proliferated, with some research advocating for greater transparency (Grudniewicz et al. 2019), the facilitation of a more nuanced decision-making process (Koerber et al. 2020, 7), or checklists of criteria that produce a ratings score for publications (Teixeira da Silva et al. 2021).

Transparency is critical to the assessment and evaluation of publishing venues. A lack of peer review or formal validation is often cited as a predatory indicator, yet existing lists do little to evaluate or document this practice (Strinzel et al. 2019). Agnes Grudniewicz and colleagues allude to the difficulty of evaluating peer review in their definition of predatory journals: “We are not saying that peer review is unimportant, only that it is currently impossible to assess” (2019, 212). This is a common thread in evaluating journals: lists use indicators that flag a journal as predatory without a nuanced review of its actual practices. Saurabh Khanna and John Willinsky also note this, stating that “without access to a journal’s editorial processes, Beall and Cabells rely on proxies for ‘probable threats’ to scholarly integrity, such as unprofessional websites, incomplete mastheads, exaggerated claims, and email spamming” (2022, 1). Unclear criteria and a lack of transparency in how they are applied produce lists that encourage decisions made without consideration. Beall often did not explain which criteria journals or publishers violated before adding them to his list, expecting readers to trust his “expertise.” A tool that assesses predatoriness with criteria sheltered from view and critique “is not a tool of any real use or value at all since such tools themselves cannot be properly vetted” (Teixeira da Silva et al. 2021, 8592).

Clarity and nuance are crucial to a meaningful journal evaluation process, and some have attempted to move beyond the yes-or-no list approach. There are tools that aim to assess journals through an academic crowdsourced model (QOAM 2022) and proposals for a ranking system similar to those used to determine a person’s credit rating (Teixeira da Silva et al. 2021). These and other ranking systems do attempt to acknowledge the complexity involved in scholarly journals and in decisions about where to publish, but ultimately they fall prey to some of the problems of more binary lists: they continue to suggest definitive answers to questions that are subjective, interconnected, and evolving.

Furthermore, these evaluative tools and assessments are not applied to all journals equally. Notably missing are subscription-based journals and journals that levy APCs from the comfort of a prestige brand (e.g., Nature Communications and Scientific Reports). Predatory definitions and criteria often ignore other potentially predatory practices, such as artificial selectivity, color charges for fully digital publications, the promise of publicity or prestige, cascading journals spun off from a prestige brand, an emphasis on only extremely novel research, the refusal to publish null results, and so on. Many well-respected journals evade the predatory label (Siler 2020) even while their publishing companies engage in extractive practices such as charging excessive publishing fees or forcing libraries to pay for little-used journals through Big Deal subscription packages.

This evasion is reinforced by lists that only assess open access journals. When Beall’s criteria were applied to a number of respected, subscription-based library and information science journals, many of them failed, demonstrating the hypocrisy of using Beall’s list as an assessment tool only for OA journals (Olivarez et al. 2018). Many popular journals, including the New England Journal of Medicine and The Lancet, do not provide details of their editorial board members, something that Beall would have seen as a clear predatory indicator for OA journals (Kratochvíl et al. 2020).

With tens of thousands of journals publishing millions of academic articles a year, any list will be inherently incomplete. Many evaluation criteria lack transparency and trade the complex, subjective nature of “quality” for more readily available signals of practice; some, such as Cabells’ lists, are also subscription-based, creating inequity of access to this knowledge.

Tools such as Think. Check. Submit. encourage authors to perform more in-depth, qualitative critiques of scholarly journals and publishers. Researchers often turn to their scholarly communications librarian for help with this type of work. However, this work largely goes unseen and unrecognized by anyone other than the initial requester, and evaluations are likely duplicated as others must perform their own journal assessments.

Our Solution

We set out to create an openly accessible, transparent evaluation tool that engages with the nuance of publishing circumstances and creates a clear record of the assessment. Without redefining or seeking to categorize journals, we hope to provide information in a format that allows authors to make considered choices and librarians to record the labor they likely already perform. Working with the inherent humor of meta-analysis, we created Reviews: The Journal of Journal Reviews (RJJR). RJJR will publish nuanced, context-centered reviews of scholarly journals based on available, observable evidence. The “Journal Reviews”—peer-reviewed evaluations of journals across disciplines, subscription models, and regions—will offer researchers an alternative tool for evaluating unfamiliar publications while also modeling contextual evaluation. We are dedicated to the process of journal evaluation as an educational tool as well as a resource for the community.

The submission process for RJJR is much like that of other academic journals. Authors can submit their Journal Review as they might submit research to a typical scholarly journal. Peer reviewers will then review the Journal Review through an open process—both the Journal Review and the peer review will be publicly available. We know that many librarians and others are already doing the work of evaluating journals to support researchers, and by providing a platform for public sharing, we hope RJJR can elevate these products of librarian labor into tangible, citable, and reusable units of publication.

We recognize that conversation is a central part of scholarship and that Journal Reviews may become outdated, miss important facts, or benefit from further information. To enable publishers or editors to participate collegially, we have created “Responses.” These editorially reviewed Responses will add timely context to Journal Reviews and transparently display the conversation as an additional point of assessment for authors seeking to publish their work. Additionally, we anticipate that as the publishing ecosystem evolves, journals will change, so updated reviews will be both desired and needed. We aim to create a place for scholars and journals to have a conversation, as we recognize that values and methods change and nothing is static.

A key element of RJJR is a rubric for journal evaluation that encourages context-centered reviews of journals, referencing established ethical publishing guidelines such as the Committee on Publication Ethics’ “Principles of Transparency and Best Practice in Scholarly Publishing” (Committee on Publication Ethics 2022). The rubric is organized into several areas of evaluation, including the transparency of the journal, the journal’s policies and observable actions, the people connected to the journal, the material published by the journal, and the journal’s relationships to indexes and professional societies (Neds-Fox et al. 2022). The rubric, which will guide the authorship of Journal Reviews, peer reviews, and editorial review, provides a framework to encourage objective observation of available evidence and to avoid subjective value judgments about a journal. This rubric is openly available, so that it might be properly vetted itself, and is licensed for reuse, so others may modify it for their own purposes.

Importantly, Journal Reviews do not carry a numeric rating, a ranked status, an overt acceptability decision, or other factors that would suggest a decision for a potential author. We believe that, to modify some of Ranganathan’s Five Laws, “Every author their journal. Every journal their scholarship.” Journals necessarily differ in scope, level of care, services, review processes, and the scholarship that results. With the global proliferation of journals, RJJR aims to help authors make the right choice for their work rather than serve as an arbiter of value. Our goal is to “save the time of the author” while also, through peer-reviewed publication, crediting librarians for work they are already doing and reducing the replication of that work.

Instead of reinforcing the binary of “good” and “bad” journals, the RJJR founding editorial board hopes to enrich the landscape of journal information and provide another tool to assist authors in journal assessment and evaluation. We hope that RJJR will reinforce the practice of ethical publication while broadening access to this highly specific knowledge base. Finally, just as we hope to help document the evolving nature of the journal publishing landscape, we expect that Reviews: The Journal of Journal Reviews will change, grow, and adapt to fit the needs of authors in the future.

(RJJR is in development, to be hosted at the Texas Digital Library, and welcomes readers interested in contributing to the journal or participating in peer review to contact the editors at ReviewsJournal@protonmail.com.)

Bibliography

Cobey, Kelly D., Manoj M. Lalu, Becky Skidmore, Nadera Ahmadzai, Agnes Grudniewicz, and David Moher. 2018. “What Is a Predatory Journal? A Scoping Review.” F1000Research, August 23. https://doi.org/10.12688/f1000research.15256.2.

Committee on Publication Ethics (COPE). 2022. “Principles of Transparency and Best Practice in Scholarly Publishing.” Last updated December 15. https://doi.org/10.24318/cope.2019.1.12.

Grudniewicz, Agnes, David Moher, Kelly D. Cobey, Gregory L. Bryson, Samantha Cukier, Kristiann Allen, Clare Ardern, et al. 2019. “Predatory Journals: No Definition, No Defence.” Nature 576, no. 7786 (December): 210–212. https://doi.org/10.1038/d41586-019-03759-y.

Khanna, Saurabh, and John Willinsky. 2022. “What Those Responsible for Open Infrastructure in Scholarly Communication Can Do about Possibly Predatory Practices.” SciELO Preprints, March 14. https://doi.org/10.1590/SciELOPreprints.3474.

Koerber, Amy, Jesse C. Starkey, Karin Ardon-Dryer, R. Glenn Cummins, Lyombe Eko, and Kerk F. Kee. 2020. “A Qualitative Content Analysis of Watchlists vs Safelists: How Do They Address the Issue of Predatory Publishing?” Journal of Academic Librarianship 46, no. 6 (November): 102236. https://doi.org/10.1016/j.acalib.2020.102236.

Kratochvíl, Jiří, Lukáš Plch, Martin Sebera, and Eva Koriťáková. 2020. “Evaluation of Untrustworthy Journals: Transition from Formal Criteria to a Complex View.” Learned Publishing 33, no. 3: 308–322. https://doi.org/10.1002/leap.1299.

Laquintano, Timothy. 2013. “The Legacy of the Vanity Press and Digital Transitions.” Journal of Electronic Publishing 16, no. 1. https://doi.org/10.3998/3336451.0016.104.

Neds-Fox, Joshua, Matthew Ruen, Teresa Schultz, Brianne Selman, Leila Sterman, and Stephanie Towery. 2022. Rubric for Reviews: The Journal of Journal Reviews. https://digital.library.txstate.edu/handle/10877/16433.

Olivarez, Joseph D., Stephen Bales, Laura Sare, and Wyoma vanDuinkerken. 2018. “Format Aside: Applying Beall’s Criteria to Assess the Predatory Nature of Both OA and Non-OA Library and Information Science Journals.” College & Research Libraries 79, no. 1. https://doi.org/10.5860/crl.79.1.52.

QOAM. 2022. “Bona Fide Journals.” Last updated September 23. https://www.qoam.eu/bfj.

Siler, Kyle. 2020. “There Is No Black and White Definition of Predatory Publishing.” Impact of Social Sciences (blog), May 13. https://blogs.lse.ac.uk/impactofsocialsciences/2020/05/13/there-is-no-black-and-white-definition-of-predatory-publishing/.

Strinzel, Michaela, Anna Severin, Katrin Milzow, and Matthias Egger. 2019. “Blacklists and Whitelists to Tackle Predatory Publishing: A Cross-Sectional Comparison and Thematic Analysis.” mBio 10, no. 3. https://doi.org/10.1128/mBio.00411-19.

Teixeira da Silva, Jaime A., Daniel J. Dunleavy, Mina Moradzadeh, and Joshua Eykens. 2021. “A Credit-like Rating System to Determine the Legitimacy of Scientific Journals and Publishers.” Scientometrics 126, no. 10: 8589–8616. https://doi.org/10.1007/s11192-021-04118-3.

Teixeira da Silva, Jaime A., and Panagiotis Tsigaris. 2020. “Issues with Criteria to Create Blacklists: An Epidemiological Approach.” Journal of Academic Librarianship 46, no. 1 (January): 102070. https://doi.org/10.1016/j.acalib.2019.102070.

Yeates, Stuart. 2017. “After Beall’s ‘List of Predatory Publishers’: Problems with the List and Paths Forward.” Information Research 22, no. 4 (December). http://www.informationr.net/ir/22-4/rails/rails1611.html.