Reflecting on the last 30 years of the Journal of Electronic Publishing (JEP) as the current co-editors, we find ourselves tracing the role of the journal editor over time. It is difficult to reflect backward without making sweeping claims that may or may not be grounded in truth: that journal editing used to be more about gatekeeping (was it?) and ensuring rejection rates stayed high (for everyone?) and thereby preserving prestige (always?). What we can speak to with confidence, however, is our own commitment to editorial practice as curation and community building—to taking a values-based approach to editing this journal that balances inclusion with interest, collaboration with timeliness. We remain grateful to the editors who came before us and steered the JEP ship in such a way that we can enter this fourth decade of the journal with our value-laden commitment at the helm.
JEP began as a free, online journal initiated from within the academic publishing trade, with an editorial focus on curation and experimentation. Colin Day (then director of the University of Michigan Press) started JEP in 1995 to “bring together all the interesting papers he had been reading and hearing presented about scholarly electronic publishing” alongside original refereed publications, interesting links, and news items. Yet the journal was also a testbed, a vehicle for experimentation to “test new ideas about the way an electronic journal might operate” (Day, quoted in Turner 1998). Judith Axler Turner, who took over from Colin Day as editor in 1997, further explored this in her first issue for JEP, asking editors of electronic-only, peer-reviewed scholarly journals “to write articles explaining what they were doing, why, and how it was working” (1998). For Day and Turner, the journal was primarily aimed at publishers, in addition to librarians, academics, and the generally interested; yet they saw what they were doing as a “broader, more intellectual approach to the universe of issues that cluster around the topic of electronic publishing.” As Day explained, “we do not seek how-to articles but we do seek how-to-think-about articles” (quoted in Turner 1998). Turner and Maria Bonn, JEP’s editor in the mid-2010s, continued this focus while exploring “many of the urgent topics of the contemporary publishing conversation,” mainly in relation to the economics and politics of publishing, and while experimenting with the “expressive possibilities of the media and methods that our authors use to communicate their work” (Bonn 2018).
As JEP’s current editors, we want to open out further from the now decades-long JEP focus on “how-to-think-about publishing.” To do so, we are taking a value-driven approach that aims to explore how, as editors, we can position the journal in such a way that it can play a more active, interventionist role in shaping the future of scholarly publishing—all in service of contributing to a more equitable academic publishing world. For us, community building and experimentation are key, tying in with how Bonn positioned JEP as “a kind of publishing laboratory, experimenting with the affordances of the Web” (Bonn 2018). We also follow in the footsteps of another previous JEP editor, Shana Kimball, who established the tradition of producing themed issues and working directly with guest special issue editors to make connections to the broader field of publishing scholarship. In doing so, the journal positions itself as a nodal point within this wider discursive community.
In 2024, one year into our tenure as the new co-editors of JEP, we asked the following set of questions: What do we want the future of scholarly publishing to be? How do we take an active role in shaping that future, instead of reacting to corporate-led decisions and initiatives that dominate the current scholarly communication landscape? Now that we are two and a half years in, these questions have become more pressing than ever. The year and a half between publishing those questions and writing this piece has demonstrated just how much “corporate-led decisions and initiatives” can shape scholarly communication. Yes, we are gesturing to AI. No, we will not develop yet another think piece on whether or not AI is coming for our jobs/destroying the social contract/ruining people’s brains. We do, however, want to reflect on how specific corporate-led decisions are starting to integrate AI into journals and journal editing, and to share our own thoughts on this relatively recent development.
Like many journals, over the past 18 months we have seen abstracts come in that appear to have been AI generated or at least AI assisted. In part, we can identify AI-generated abstracts because we exclusively publish special issues with specific themes, as noted above. Since we are not accepting general submissions for the time being, authors need to write proposals in response to a very specific prompt. We can also spot AI-generated work because we read everything that gets submitted to the journal (with our own eyes!). And then we, or our special issue guest editors, work with each author on an individual basis to develop their piece. We hand-select peer reviewers based on the subject of each article and on who we think has the subject matter expertise to provide a thorough, critical, and constructive review. We then ensure that each author responds to these reviews appropriately.
But we understand that other journals, and other editors, are struggling more with the glut of AI-generated submissions—especially megajournals that see hundreds, if not thousands, of submissions a year. This, of course, raises all sorts of editorial concerns: Who is the author (or authors)? Who will respond to the peer reviews? (The LLM that wrote the paper or the human author who submitted it?) And perhaps most crucially, will the citations be correct or confabulated?
Beyond the excessive, resource-intensive crawling of existing open access journals and scholarly content by the bots of AI companies—overwhelming servers, disrupting access, and sometimes even leading to platform outages—AI tools and generative technologies are increasingly being introduced into or integrated within existing applications at various stages of the research and publishing workflow (Hellman 2025; Kaebnick et al. 2024; Khalifa and Albadawy 2024; Weinberg 2025). Our concern here lies mostly with how these tools are predominantly positioned as instruments of efficiency, promising time savings and improved productivity. As editors, we do not consider efficiency and productivity to be the most “urgent topic” (cf. Bonn 2018) in academic publishing. There is already an immense amount of research being published worldwide every single day; accelerating that output further is hardly the field’s most pressing need. We are much more interested in tools and technologies that could promote increased care, engagement, interaction, and collaboration in academic research, publishing, and communication.
We see a clear use for tools that assist with various editorial processes, including as aids for editing, proofing, and translation. JEP, for instance, runs on the journal management system Janeway, relies on various tools, technologies, and platforms throughout the publishing process, and has always been keen to experiment with these. But, like many others (Bender and Hanna 2025; Crawford 2021), we remain very concerned about the biases (related to gender, race, geography, language, culture, etc.) and inaccuracies AI tools introduce or further cement and perpetuate. These biases currently come to the fore most dangerously where automation is (uncritically) introduced, such as using AI for preliminary editorial screening of abstracts, for automated reviews of articles, or to identify and locate potential reviewers for manuscripts (Doskaliuk et al. 2025). At what cost do these proposed efficiencies come, and are they really efficiencies if they introduce noise?
When we think of JEP and our editorial practice in service of it, we are apprehensive about how AI tools are currently being developed and implemented by large corporate players. There is a lack of clarity and transparency around who owns and controls these tools and their parent companies. We remain cautious about the various risks their uncritical implementation can introduce, from questions of accuracy and reliability to ethical concerns related to transparency, accountability, data privacy, and the proprietary nature of many of these tools (which poses cost and access issues) (Kingsley 2023; Lo 2025; Singley 2025). Of course, we also have grave concerns related to the environmental impacts of the data centers that fuel these technologies, concerns that were in part explored in our recent special issue on “Publishing and Climate Justice” (Adema 2025). We also worry about the sector’s rapid (over)reliance on these tools: What will be the consequences of introducing these new, costly, predominantly proprietary AI writing, editing, and proofreading tools? Why is academia once again deepening its reliance on proprietary systems and software throughout the research and publishing workflow (Pooley 2024)?
After years of academic focus on the value of care (as espoused in JEP 28.1; see Rogers 2025), we are witnessing in real time a run toward replacing carework with automation. Careful editorial labor is being replaced with AI labor in the interest of profit, speed, and increased functionality. Instead of the crisis in peer review being addressed by giving scholars more time, as part of their academic roles, to take part in publishing processes as both editors and reviewers (Adema and Moore 2023), more and more of these processes are being outsourced to automation—be prepared to be inundated by even more spambots in the future!
We support experiments with creating more ethical AI models (e.g., those based on small language models) and look forward to seeing how they respond to the concerns listed above. In a similar vein, we are also somewhat disappointed that so many in the publishing sector seem more interested in using AI tools to create efficiencies within existing systems and further boost productivity than in exploring how these tools could enable new, innovative forms of (posthuman) research creation and communication (Amerika 2022; Hall 2025; Leib 2023; Zylinska 2020). How could these technologies potentially benefit the way that we currently engage with and gather around texts? What is our entanglement as scholars and editors with the media and technologies that we use to communicate our research? What is the nature of agency within AI-driven knowledge production?
Perhaps most importantly, we are disquieted that the scholarly publishing industry’s infatuation with AI simply distracts from what are arguably much more urgent matters. There are pressing issues at hand, including the ongoing commercialization of the knowledge ecosystem; the precarity and inequities of academic labor; attacks on free speech and growing attempts at censorship of critical research content (Singley 2025); and the ongoing, devastating impacts of the climate crisis, exacerbated by the industry’s excessive data use and connections to fossil fuel industries.
For us, these many concerns provide yet another argument for scaling small and keeping relationality and community at the heart of knowledge production (Adema and Moore 2021). Here at JEP we do not need AI tools to streamline or automate the peer review process, to generate abstracts, or to summarize articles. That said, as noted in our special issue on “Multilingual Publishing and Scholarship” (JEP 27.1; see Arbuckle et al. 2024), we have used an AI tool (DeepL) to do the first round of automated translation of an article, which is then human-checked by the editor, author, and copy editor. It is not that we are Luddites, opposed to the use of technology; we simply like to do editorial work. We like to read the submissions from our current and prospective authors. We like to find the right reviewers to engage with the work, and we like to collaborate to bring a piece to life, together. Does this take time? Yes. But, well, this is why we are here. To take the time. Or rather, to dedicate our time to the ongoing discursive and community building project that is JEP. In saying this, we do not want to argue for human exceptionalism, or for editing as an exclusively human or liberal humanist endeavor, nor are we denying the crucial and formative role played by various nonhuman technological agencies (including bots) in the creation and distribution of research. What we want to highlight and celebrate is the value of the editorial role, and of the journal as a relational format, as it has historically developed and will continue to develop into unknown future forms. Following Gary Hall (2025), we want to leave the question of AI and its future, as well as our judgment about it, more open, while recognizing what we currently value as editors, including our co-constitutive and entangled relationship with technology.
As we reflect on the 30th anniversary of JEP, we look to its future. We will continue our editorial work to help improve arguments, to curate content, and to build conversations and collaborations around that content in an ethical way with the communities involved in its creation. We want to remain open to the opportunities new tools and technologies can bring to digital academic publishing while remaining aware of some of the things that have worked against it (e.g., ongoing commercialization and inequities). We will (continue to) write and publish with machines in a way that serves the research needs and values of our fields and scholarly communities.
In doing so, we take inspiration from our fellow editors and co-conspirators who promote editing with care (positioning editing predominantly as carework) and support community building around a journal, press, collective, or publishing project (Snyder et al. 2023; Kiesewetter 2024). We look to the experiences shared in our “On Gathering” special issue (Rogers 2025); mediastudies.press’s working groups with authors; and punctum books’ pre- and post-publication author groups that make writing and publishing a less lonely and solitary endeavor.1 We will continue to carry the JEP torch into its next decade, with an ongoing focus on curating the scholarly archive, publishing output from important conferences in the field, and experimenting with new forms of publishing and writing.
***
In this celebratory special issue dedicated to JEP’s 30th anniversary, we are pleased to share additional reflections from three current members of JEP’s editorial board: Chérifa Boukacem-Zeghmouri (Université Claude Bernard Lyon 1), John W. Maxwell (Simon Fraser University), and Jefferson Pooley (University of Pennsylvania). Boukacem-Zeghmouri provides a historical reflection on JEP’s trajectory over the past few decades and the ways in which her own work has intersected with the journal’s over time. She offers these insights in both French and English. Maxwell telescopes into the Books in Browsers moment, an era JEP engaged with by publishing proceedings from that conference. And Pooley considers the predominance of textuality in scholarly communication, a reality reflected in JEP’s own historical corpus, which is primarily although not exclusively text-based. He invites media scholars to more fully embrace different modes of communicating their work. Thank you to these authors and to all the JEP authors, editors, peer reviewers, and readers—as well as the tireless Michigan Publishing team—for their contributions to the discursive community that is JEP: past, present, and future.
Notes
References
Adema, Janneke, ed. 2025. “Publishing and Climate Justice.” Special issue, Journal of Electronic Publishing 28 (1). https://journals.publishing.umich.edu/jep/issue/386/info/.
Adema, Janneke, and Samuel A. Moore. 2021. “Scaling Small; Or How to Envision New Relationalities for Knowledge Production.” Westminster Papers in Communication and Culture 16 (1): 1. https://doi.org/10.16997/wpcc.918.
Adema, Janneke, and Samuel A. Moore. 2023. “‘Just One Day of Unstructured Autonomous Time’: Supporting Editorial Labour for Ethical Publishing Within the University.” New Formations 2023 (110–111): 8–27. https://doi.org/10.3898/NewF:110-111.01.2024.
Amerika, Mark. 2022. My Life as an Artificial Creative Intelligence. Stanford University Press. https://www.sup.org/books/media-studies/my-life-artificial-creative-intelligence.
Arbuckle, Alyssa, Janneke Adema, and Élika Ortega, eds. 2024. “Multilingual Publishing and Scholarship.” Special issue, Journal of Electronic Publishing 27 (1). https://journals.publishing.umich.edu/jep/issue/262/info/.
Bender, Emily M., and Alex Hanna. 2025. The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want. HarperCollins.
Bonn, Maria. 2018. “An Editorial Farewell.” Journal of Electronic Publishing 21 (1). https://doi.org/10.3998/3336451.0021.104.
Crawford, Kate. 2021. The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.
Doskaliuk, Bohdana, Olena Zimba, Marlen Yessirkepov, Iryna Klishch, and Roman Yatsyshyn. 2025. “Artificial Intelligence in Peer Review: Enhancing Efficiency While Preserving Integrity.” Journal of Korean Medical Science 40 (7): e92. https://doi.org/10.3346/jkms.2025.40.e92.
Hall, Gary. 2025. Masked Media: What It Means to Be Human in the Age of Artificial Creative Intelligence. Open Humanities Press. https://www.openhumanitiespress.org/books/titles/masked-media/.
Hellman, Eric. 2025. “AI Bots Are Destroying Open Access.” Go To Hellman (blog), March 21, 2025. https://go-to-hellman.blogspot.com/2025/03/ai-bots-are-destroying-open-access.html.
Kaebnick, Gregory E., David Christopher Magnus, Audiey Kao, et al. 2024. “Editors’ Statement on the Responsible Use of Generative AI Technologies in Scholarly Journal Publishing.” American Journal of Bioethics 24 (3): 5–8. https://doi.org/10.1080/15265161.2023.2292437.
Khalifa, Mohamed, and Mona Albadawy. 2024. “Using Artificial Intelligence in Academic Writing and Research: An Essential Productivity Tool.” Computer Methods and Programs in Biomedicine Update 5 (January): 100145. https://doi.org/10.1016/j.cmpbup.2024.100145.
Kiesewetter, Rebekka. 2024. “Experiments Towards Editing Otherwise.” Culture Machine: Journal of Culture and Theory 23.
Kingsley, Danny. 2023. “AI and Publishing: Moving Forward Requires Looking Backward.” Digital Science, August 15, 2023. https://www.digital-science.com/blog/2023/08/ai-and-publishing-moving-forward-requires-looking-backward/.
Leib, Robert. 2023. Exoanthropology: Dialogues with AI. punctum books. https://doi.org/10.53288/0398.1.00.
Lo, Leo S. 2025. “Generative AI and Open Access Publishing: A New Economic Paradigm.” Library Trends 73 (3): 160–76.
Pooley, Jefferson. 2024. “Large Language Publishing: The Scholarly Publishing Oligopoly’s Bet on AI.” KULA: Knowledge Creation, Dissemination, and Preservation Studies 7 (1): 1–11. https://doi.org/10.18357/kula.291.
Rogers, Katina L., ed. 2025. “On Gathering: Exploring Collective and Embodied Modes of Scholarly Communication and Publishing.” Special issue, Journal of Electronic Publishing 28 (1). https://journals.publishing.umich.edu/jep/issue/336/info/.
Singley, Jay. 2025. “‘We Couldn’t Generate an Answer for Your Question.’” ACRLog (blog), July 21, 2025. https://acrlog.org/2025/07/21/we-couldnt-generate-an-answer-for-your-question/.
Snyder, Livy Onalee, Eileen A. Fradenburg Joy, and Vincent W. J. van Gerven Oei. 2023. “Encounters at the End of the Book: Caring for Authors and Exchanging Ideas.” punctum books (blog), September 1, 2023. https://punctumbooks.pubpub.org/pub/encounters-at-the-end-of-the-book/release/1.
Turner, Judith Axler. 1998. “Mickey, Judy, Colin, and Me.” First Monday 3 (1). https://firstmonday.org/ojs/index.php/fm/article/download/571/492?inline=1.
Weinberg, Michael. 2025. Are AI Bots Knocking Cultural Heritage Offline? The GLAM-E Lab. https://glamelab.org/products/are-ai-bots-knocking-cultural-heritage-offline/.
Zylinska, Joanna. 2020. AI Art: Machine Visions and Warped Dreams. Open Humanities Press. https://www.openhumanitiespress.org/books/titles/ai-art/.
