
Multilingual Scholarly Publishing and Artificial Intelligence Translation Tools: Weighing Social Justice and Climate Justice

Author
  • Lynne Bowker (Université Laval)

Abstract

The use of English as a lingua franca for scholarly publishing has created inequities and is leading to a social justice movement to develop a more multilingual scholarly publishing ecosystem. However, implementing multilingualism is complex, and researchers and publishers are investigating the potential of AI translation tools for supporting linguistic diversity. At the same time, the climate justice movement is beginning to reveal some of the environmental and human costs associated with AI tools, which are embedded in an extractivist supply chain. This paper examines the intersection of multilingual scholarly publishing and AI translation tools to consider the benefits and drawbacks of this application of AI through the lenses of social justice and climate justice. Finally, I put forward the position that, in the pursuit of the ideal situation where no language is left behind in the scholarly publishing ecosystem, the climate costs currently outweigh the social benefits. 

Keywords: AI translation tools, machine translation, multilingual scholarly publishing, linguistic diversity, social justice, climate justice, environmental impact

How to Cite:

Bowker, L. (2025) “Multilingual Scholarly Publishing and Artificial Intelligence Translation Tools: Weighing Social Justice and Climate Justice”, The Journal of Electronic Publishing 28(2). doi: https://doi.org/10.3998/jep.7100

Published on
2025-09-02

Peer Reviewed

The term social justice is commonly applied to movements that seek fairness, equity, and inclusion, or similar goals, for populations that are or are at risk of being marginalized (Duignan 2025). In short, social justice posits that everyone in a given community should have access to the same opportunities. One facet of contemporary scholarly publishing that has caused inequities is language. More specifically, the widespread adoption of English as a lingua franca for publishing in international journals means that non-Anglophone scholars must invest more time, effort, and money than their English-speaking peers to publish their research findings (Amano et al. 2023). Therefore, under the umbrella of social justice we find the movement toward multilingualism in scholarly publishing (e.g., Helsinki Initiative 2019; UNESCO 2021).

Climate justice is a social movement that acknowledges the disproportionate impacts of climate change on vulnerable populations. In what is sometimes described as a triple injustice, it has been pointed out that there is a disparity in responsibility for producing the problem, a disparity in experiencing the impacts of the climate crisis, and a disparity in the available resources for mitigation. Essentially, wealthy and industrialized nations have contributed the most to greenhouse gas emissions, while more vulnerable populations (e.g., low-income or developing countries) are often more seriously affected by climate change impacts such as resource scarcity. Finally, measures designed to address climate change (e.g., transitioning to a low-carbon economy) may be beyond the means of vulnerable groups. As observed by A. K. Menzies et al. (2022), knowledge systems are connected to climate justice. Currently, many of the people most affected by climate change are from non-Western communities, yet many of the policies and decisions aimed at mitigating climate change are based largely on Western science. By failing to consider non-Western knowledge, we overlook opportunities to incorporate relevant know-how to help address the problems of climate change (Menzies et al. 2022).

Intersectionality is an issue relevant to both social justice and climate justice. As put forward by Kimberlé Crenshaw (1991), intersectionality describes a situation in which an individual’s or group’s various social identities (e.g., gender, race, ethnicity, language, class, geographic location, religion, (dis)ability) overlap or interlock in such a way that different advantages or disadvantages can accumulate for these individuals or groups. In practice, it often means that people who are marginalized in one way may also be marginalized in other ways. In the context of scholarly publishing, English has become the key language in which to publish, and this central positioning of the English language facilitates the sharing of knowledge from Western societies but de-centers, or marginalizes, other voices, perspectives, and epistemologies.

Artificial intelligence (AI) tools such as neural machine translation and tools based on large language models (LLMs) have emerged in scholarly publishing as a potential means to ease the burden of non-Anglophone scholars as well as to foster a more multilingual scholarly publishing ecosystem. At the same time, these tools also raise environmental concerns that risk causing additional harm to some of these same disadvantaged communities. To my knowledge, there has been little to no attention paid to the specific issue of using AI translation tools for multilingual scholarly publishing in the context of the climate crisis. The goal of this article is to consider both the social justice and the climate justice implications of using AI translation tools for multilingual scholarly publishing and to present the position that pursuing the ideal of “no language left behind” (Costa-jussà et al. 2022) is not worth the costs in this specific context.

Social justice: Considering the case for multilingual scholarly publishing

Studies have demonstrated that English has become established as the central language for scholarly publishing in recent decades (e.g., Ammon 2010). The underlying reasons have been linked to the colonial imperialism, technological developments, and economic wealth of English-speaking Western countries (Gordin 2015). The resulting intersectionality of these characteristics has conferred an advantage on the English language, such that it has become conflated with the notion of prestige in the context of scholarly publishing (Szadkowski 2023). For instance, some institutional, regional, or national evaluation schemes accord more weight to international journals, whose likely language of publication is English (Nygaard 2019). To mount competitive applications for positions, tenure, promotion, or awards, scholars must therefore strive to publish in English.

Publishing in English comes with various types of costs for non-Anglophone scholars. Tatsuya Amano et al. (2023) have quantified some of these costs, finding that non-native English speakers spend a median 46.6% more time reading English-language literature than do native speakers. On top of this, non-native speakers of English then spend a median 50.6% more time writing a paper than their English-speaking peers. Non-native English speakers also need to spend more effort editing and proofreading their work, and in 75% of cases, they turn to others for support (Amano et al. 2023). While editing and proofreading may cost less than translation, it is still a non-negligible cost for many scholars in low-income countries (Ramírez-Castañeda 2020). In cases where scholars turn to a colleague for support rather than paying for a professional editing service, there may nonetheless be a cost, such as owing a favor to be paid in the future. Moreover, for many non-native English speakers, the costs do not end once a complete manuscript has been produced. Amano et al. (2023) point out that these scholars are about 2.5 times more likely to have their papers rejected, and 42.5% of them are asked to make language-related revisions to their work.

As reported by Valeria Ramírez-Castañeda (2020), faced with such barriers, some excellent researchers may opt out of pursuing a scientific career because they struggle with English, and they may not have reasonable access to courses or other support structures to help them master the language. At the same time, scholars who are highly proficient in English are likely to have more impressive CVs because they can work more quickly and easily in their dominant language. This may enable them to obtain key positions, such as on editorial boards for important journals. A lack of diversity with regard to geographic representation on editorial boards has been observed in a variety of disciplines, such as environmental biology (Espin et al. 2017) and psychology and neuroscience (Palser et al. 2022). In other words, English-speaking scholars from Western countries are overrepresented on the editorial boards of international journals, leading to concerns that these board members might influence—consciously or unconsciously—the subjects, methods, or other characteristics of the research published in the journals. This in turn can lead to other issues, such as a movement toward an epistemic monoculture in scholarly publishing (Bennett 2007). Meanwhile, the pressure to report research findings in English is contributing to domain loss in many other languages. Numerous scholars report being unable to discuss their research in their own language because the specialized terminology does not exist (Sibeko and Setaka-Bapela 2024). This is true not only for Indigenous or less widely used languages but also for some more widely used languages such as Russian (Shchemeleva 2021). 
Finally, another motivation for making research available in other languages is so that members of the public in different countries (e.g., policy makers, citizens), who might sometimes be funding the research through public monies, have access to the findings so that they can benefit from and apply them in their local contexts.

From a social justice perspective, this situation would seem to call for a movement toward a more multilingual scholarly publishing ecosystem. Indeed, groups such as the Helsinki Initiative on Multilingualism in Scholarly Communication (2019), along with UNESCO’s (2021) Recommendation on Open Science are advocating for linguistic diversity in scholarly publishing—a movement that is gaining traction as evidenced by increased attention on the topic (e.g., Arbuckle et al. 2024b; Soler and Kaufhold 2025). However, there is widespread agreement that implementing multilingualism is a complex endeavor. In the context of scholarly publishing, it is not only the writing of manuscripts that needs to be considered. Questions pertaining to policies and incentives for publishing in languages beyond English are also relevant, as are issues related to peer review and discoverability, not to mention technical issues such as being able to support the scripts and fonts of different languages.

When it comes time to publish their research, researchers need to target a publication venue. Currently, many researchers seek to publish in international journals (which typically publish in English) since these are often considered to be prestigious, have a wide reach, and are highly valued (by employers, peers, funding agencies). To encourage researchers to publish in other languages, it is necessary to decouple language and prestige and to value publications in other languages (CoARA 2022). But part of this process means ensuring that work published in other languages can have a wide reach and be accessed and cited by other researchers.

When journals agree to accept submissions in other languages, and when authors decide to write in other languages, the next challenge to be addressed is peer review. It is currently necessary for the editors to locate reviewers who can understand and provide feedback on the manuscript in the language in which it has been submitted. Using English as a lingua franca facilitates this task, although it limits the pool of potential reviewers, both in terms of numbers and in terms of perspectives (since many proficient speakers of English come from Western countries). To accommodate submissions in multiple languages, editors will need to have a peer review system that allows them to identify reviewers who are competent in both the domain and the language and who have no conflicts of interest. For very specialized content in a less widely used language, this may be quite challenging.

Once a manuscript has been accepted for publication, indexing and discoverability must be addressed. Currently, English is once again prioritized for indexing the majority of scholarly publications in major academic databases (e.g., Scopus, Web of Science), leading many researchers to limit their searches to English, even though studies are emerging that show the dangers of this practice, which can overlook valuable knowledge (Hannah et al. 2024). In a multilingual scholarly publishing ecosystem, it would be necessary to generate and manage multilingual metadata to enable cross-language discovery of publications. And once works in other languages have been discovered, researchers need a means of reading the content.

Implementing multilingual scholarly publishing is complex but, under the banner of social justice, there is a growing appetite for doing so. While there is unlikely to be a one-size-fits-all solution, actors in the scholarly publishing ecosystem are increasingly exploring the potential of AI tools for overcoming some of the challenges (e.g., Commissaire à la langue française 2023; Fiorini 2022). The following section explores some of the benefits and drawbacks of AI tools for scholarly publishing with a specific focus on how they might be used to support multilingualism.

The potential of AI tools for supporting multilingualism in scholarly publishing

By now, many researchers have likely gained some experience using AI tools such as neural machine translation (e.g., Google Translate) or tools based on large language models (LLMs) such as OpenAI’s ChatGPT or Microsoft’s Copilot. Both of these technologies employ a data-driven technique known as machine learning. In brief, this means that developers must first acquire a very, very large dataset of examples that can be used to train the AI tool. The type and number of examples that are needed depend on the task(s) that the tool is intended to perform. In the case of neural machine translation, the type of training data that is needed consists of previously translated texts in one language (e.g., French) that are aligned with their corresponding original texts in another language (e.g., English) to create what is known as a bilingual parallel corpus. Because neural machine translation tools are task-specific (i.e., they only translate), the training dataset typically contains millions of examples of previously translated texts for a given language pair (Forcada 2017). In contrast, ChatGPT is a multipurpose tool that can carry out tasks such as answering questions as well as summarizing, simplifying, generating, or translating texts. Because ChatGPT’s tasks are varied, the training data for the LLM that underpins it also needs to be more diverse and much larger. Estimates suggest that the LLM used to support ChatGPT contains hundreds of billions of examples of different types of texts (Hughes 2023).

The scale of the training datasets required to power AI is not something that people can relate to easily. After all, people are often able to learn a new task by looking at a much, much smaller number of examples. However, AI tools are not actually intelligent. They cannot understand the texts that they are processing. Rather, they are simply consulting the texts in the training data and looking for patterns, calculating probabilities, or making predictions about which word should come next based on the words that have already been produced (Miracchi Titus 2024). Therefore, the training datasets needed for AI are extremely large.
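The pattern-matching process described above can be illustrated with a minimal, deliberately simplified sketch (a toy bigram model; the corpus, function names, and figures are illustrative, not drawn from any cited system). It counts which word follows which in the training text and then "predicts" the most probable continuation, with no understanding involved:

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """For each word, count which words follow it in the training text."""
    following = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current_word, next_word in zip(words, words[1:]):
            following[current_word][next_word] += 1
    return following

def predict_next(model, word):
    """Return the most frequently observed continuation, or None if unseen."""
    candidates = model.get(word)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# A tiny illustrative corpus; real systems train on millions or billions of examples.
corpus = [
    "the model predicts the next word",
    "the model learns patterns from data",
    "patterns from data drive the model",
]
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # "model" follows "the" most often in this corpus
```

The sketch also makes the data-scale problem concrete: the model can only "predict" continuations it has already seen, which is why performance collapses for languages and domains with sparse training data.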

While it is comparatively easy to find lots of examples in widely used languages and on very common topics, it can be much more challenging to amass a large collection of examples in less widely used languages and on very specialized topics. As a result, the performance of AI tools is currently very uneven. At the moment, they tend to work reasonably well for languages such as English, French, or Spanish, but they perform poorly for Indigenous languages. Likewise, they may produce good results on common topics, but in the case of very specialized research areas, they may struggle with the terminology or discourse patterns that are specific to those areas. With regard to social justice, there remains much work to be done in terms of developing tools that will better address a much wider range of use cases, including less widely used languages and more specialized domains. Projects such as “No Language Left Behind” (Costa-jussà et al. 2022) are attempting to address the former, whereas Chenhui Chu and Rui Wang (2018) provide a review of techniques that could be used for the latter. Nevertheless, while the results generated by AI tools are not perfect, researchers and editors around the world are increasingly experimenting with these tools to see whether they can help with a variety of tasks related to multilingual scholarly publishing.

For instance, to facilitate indexing and discovery of works in other languages, Brenda Reyes Ayala et al. (2018) propose a technique that relies on AI-based machine translation tools to generate multilingual metadata. When content is discoverable, AI tools can also help with literature searches and reviews. As described by Helena Kudiabor (2024), LLM-based tools such as Elicit, Consensus, and You offer various ways to speed up a literature search, such as by returning a list of relevant papers and summarizing their key findings. By crafting appropriate prompts, researchers can filter results (e.g., by journal or study type) or ask additional questions about specific papers. Likewise, the developers of major academic databases such as Scopus and Web of Science have also released AI tools to augment searching in these resources. A main touted benefit is that the AI tools can save researchers time, but Kudiabor reports that there can be other benefits, too, such as enabling researchers to obtain summaries of papers in other languages. For instance, while the majority of papers indexed in Web of Science are in English, a researcher who speaks Chinese could ask for the summaries to be provided in this language. On the flip side, using AI tools for literature searches can present limitations too. For instance, AI tools that limit searches to one particular source, such as the Scopus AI tool that can be used to search only Scopus-indexed journals, raise concerns about a potential lack of epistemic or linguistic diversity in a literature search. As explained by Lai Ma (2024), the Scopus database contains predominantly English-language journals published in North America and Western Europe, which means that lists of references or summaries generated from this source will reproduce contents with this focus.

One of the most frequently mentioned uses of AI in a research and publication context is for writing support, with particular emphasis placed on how AI tools can speed up the process and also improve the text quality, particularly for authors who are writing in their less dominant language. In a survey of 1,600 researchers, Richard Van Noorden and Jeffrey Perkel (2023) asked respondents to identify what they saw as the biggest benefit of generative AI for research. For more than half of the respondents, “the clearest benefit … was that LLMs aided researchers whose first language is not English, by helping to improve the grammar and style of their research papers, or to summarize or translate other work” (674).

With a focus on productivity and quality, Shakked Noy and Whitney Zhang (2023) conducted an experiment that focused on writing tasks, including grant writing. Participants were divided into a control group, who did not use any AI tools, and a test group who used ChatGPT to support the writing task. One of the observations made by Noy and Zhang is that the control group displayed persistent productivity inequality with regard to time taken to complete a task. In contrast, the productivity inequalities reported for the test group were much smaller. Noy and Zhang reached the following overall conclusions: “The generative writing tool [ChatGPT] increased the output quality of low ability workers and reduced time spent on tasks for workers of all ability levels. At the aggregate level, ChatGPT reduced inequality” (190).

Despite the potential of AI tools to support researchers, editors have expressed some concerns. Some editors have gone so far as to ban the use of AI tools in the preparation of publications (Thorp 2023), whereas others have restricted their use to tasks such as grammar checking while rejecting their use for content generation (Seghier 2023). Some researchers have pushed back against editorial policies that ban the use of AI tools, noting that when used transparently and judiciously for writing support, these tools can actually help to level the playing field for non-Anglophones faced with writing in English (Berdejo-Espinola and Amano 2023).

For several years, journal editors across multiple disciplines have begun to flag a peer review crisis, noting that it has become increasingly challenging to find scholars who will accept peer review assignments, often owing to a lack of time (DeLisi 2022). As mentioned above, finding peer reviewers is likely to be even more challenging in a multilingual context, where it is necessary to find willing and available reviewers who are competent in both the content and the language and who have no conflicts of interest. Mohammad Hosseini and Serge Horbach (2023) propose that editors may be able to leverage AI tools for tasks such as identifying and inviting potential reviewers; they even suggest that the presence of AI translation tools means that the pool of candidate reviewers can be enlarged since reviewers and authors need not speak the same language and could instead integrate automatic translation into the process.

Also in the context of peer or editorial review, some editors and peer reviewers have shown an interest in trying to use AI tools to detect plagiarism or inappropriate use of AI to generate texts. There have been some successes reported, such as the use of AI to detect made-up or manipulated statistical data (Heaven 2018). However, there are also some concerns, particularly with regard to language. For instance, Weixin Liang et al. (2023) report that detection tools that are intended to flag the inappropriate use of tools such as ChatGPT for generating papers have been shown to misidentify text that has been written by researchers who have English as a non-dominant language.

As these examples show, while AI tools alone cannot overcome all the challenges associated with implementing a multilingual scholarly publishing ecosystem, they are able to offer some support, and a growing number of researchers are actively exploring their potential in this regard. At present, these use cases are still emerging and tend to be reported in an anecdotal way rather than being rigorously evaluated (Bowker et al. 2023). However, the use of AI tools to support multilingual scholarly publishing seems set to grow. For instance, the French-language commissioner in Quebec recently issued a report calling for more research on how machine translation can support the use of French as a language of science and research (Commissaire à la langue française 2023). Similarly, in 2025, the CHIST-ERA consortium of research funding organizations issued a call on the theme of “science in your own language” with the specific goal of funding research into the automatic translation of scientific knowledge (CHIST-ERA 2025). Nevertheless, while research into the potential of translation tools for supporting multilingualism forges ahead, the environmental impact of these tools and their relation to climate justice must also be considered.

Climate justice and the use of AI in support of multilingual scholarly publishing

When it comes to the use of AI for multilingual scholarly publishing, we can identify at least four groups with an interest in the subject. First are researchers who work in natural language processing (NLP) for whom the design and development of AI-based machine translation tools is a key area of research. Scholars working in areas such as AI and ethics or critical AI who are concerned with understanding the impacts of AI, including environmental impacts, make up the second group. Third are information science researchers and practitioners involved in scholarly publishing who want to understand how and where AI tools could be integrated into their processes and workflow (e.g., metadata creation, peer review) to support multilingual publishing. Finally, the fourth group includes researchers in every discipline who want to publish their research and who want to apply AI translation tools to help them engage with some part of the research process, which could include writing an article in a second language, searching for and reading publications in another language, or participating in peer review in another language. Each of these groups has contemplated the relationship between AI and the environment in different ways and to a different degree. Some key points from the literature in these areas are summarized below.

Natural language processing

Recent research in the NLP community has drawn attention to the environmental cost of machine learning, particularly the cost of training language models. For example, Emma Strubell et al. (2019) estimate the energy consumption of different state-of-the-art language models, and they use this information to estimate the carbon emissions and electricity costs of the models. In the case of the widely used deep learning language model known as BERT (Bidirectional Encoder Representations from Transformers), Strubell et al. estimate that training this model generates as many carbon emissions as flying across the American continent, and the fine-tuning process adds even more. Meanwhile, Roy Schwartz et al. (2020) are critical of the so-called Red AI trend in which massive language models are trained using vast resources. By analyzing trends in major conferences (e.g., Conference on Neural Information Processing Systems), Schwartz et al. determine that researchers in machine learning tend to focus on the performance or accuracy of the proposed models and pay far less attention (if any) to measures such as model size, speed, or efficiency.

Not all types of AI tools and tasks have the same environmental impact. For instance, Sasha Luccioni et al. (2024) compared the performance of multipurpose tools such as ChatGPT against task-specific tools such as Google Translate and found that the latter are less energy intensive than the former. For their part, Dimitar Shterionov and Eva Vanmassenhove (2023) focus more specifically on investigating the power consumption and carbon emissions related to the use of graphics processing units (GPUs) for training and translating with neural machine translation models. Based on experimentation, they calculate that a GPU workstation that is used to train simple models could produce up to 2,500 kg of CO2 emissions in one year, an amount that is approximately equivalent to the CO2 emissions from the electricity consumption of two small households in the United Kingdom (Shterionov and Vanmassenhove 2023).

Finally, as pointed out by Luccioni et al. (2024), most of the calculations that have been performed to determine the costs and emissions associated with AI tools have focused on the training of the models. However, there are also costs associated with querying these models, such as the inferences needed to calculate the output. According to Luccioni et al., the inference phase stands to impact the environment just as much as, or more than, the training phase. However, in-depth research that quantifies the environmental costs of model inference is currently limited.

As environmental costs of using AI tools start to become clearer, some NLP researchers are indeed calling for a more environmentally friendly or green approach to AI, such as by encouraging other NLP researchers to be more selective when choosing data, to report information such as time required to (re)train models, and to focus on building more efficient hardware and models (Luccioni et al. 2024; Schwartz et al. 2020; Shterionov and Vanmassenhove 2023; Strubell et al. 2019). Peter Henderson et al. (2020) take these ideas further by providing a tool that enables researchers and developers to calculate, report, and track the energy and carbon emissions of their machine learning systems. However, making more efficient models could increase the overall use of AI across society (Luccioni et al. 2025), another concern for climate justice; the response intended to improve the situation may end up exacerbating it, especially for those already negatively impacted by the initial situation. Joss Moorkens et al. (2024) put forward a proposal for a triple bottom line for translation automation and sustainability in which social and environmental considerations are just as important as performance considerations. However, they caution that putting this triple bottom line of people, planet, and performance into practice will be complex and that benchmarking, comparing, and communicating this holistic result will be difficult and introduce new challenges.

Critical AI and AI ethics

Critical AI is itself an interdisciplinary area. Though rooted in critical methods from the humanities, social sciences, and arts, it can also include input from technologists, scientists, economists, and health researchers, among others. Closely related to critical AI is AI ethics, which is also an interdisciplinary endeavor that investigates how we can optimize the benefits of AI while reducing the harms. Researchers working in these areas have highlighted that the whole supply chain of AI is environmentally extractive and at the same time not very transparent, making it challenging to arrive at precise measurements of cost. Aspects of concern include the mining of critical rare metals needed to construct hardware on a massive scale, the pollution caused by mining and manufacturing, the e-waste caused by regular upgrading of hardware, the energy and cooling costs of large-scale data centers, and the exploitative conditions of human workers.

Sebastián Lehuedé (2025) notes that critical rare metals such as lithium and cobalt are crucial for building the batteries that power wireless devices that are used both to generate training data for AI and to enable AI-powered applications. One area where such metals are mined is in Chile, where the extraction method used to mine lithium involves using vast amounts of water in one of the world’s driest regions, a situation that is negatively affecting local plant and animal life (Lehuedé 2025). Meanwhile, cobalt is mined in the Democratic Republic of Congo (DRC), which is also suffering multiple negative effects of mining, as described by Daniel Krummel and Patrick Siegfried (2021). In the DRC, blasting releases dust and grit into the air, which is toxic to breathe. Mining equipment consumes electricity and emits carbon dioxide and nitrogen dioxide. Added to this, the working conditions for the miners are often exploitative and include underpayment, child labor, and unsafe working practices (Krummel and Siegfried 2021). Another type of exploitative human labor within the AI industry is the work done by data annotators. As reported by James Muldoon et al. (2024), much of the annotation is carried out by workers in Kenya and Uganda, who work long shifts on precarious contracts for extremely low pay and whose physical and mental health suffer as a result.

To power AI, big technology companies such as Google, Meta, Amazon, and Microsoft have constructed massive data centers with hundreds of thousands of computers, which are often located in developing countries (Mazzucato 2024; Urquieta and Dib 2024). As already noted, the electricity costs and carbon emissions from the data centers are enormous, and once up and running, the computers generate a significant amount of heat and need to be cooled, which diverts energy and water resources away from local communities (Mazzucato 2024; Urquieta and Dib 2024). Worse, as explained by Lehuedé (2025), there is often a lack of transparency about how many resources are available to local communities, making it difficult for them to know how to plan or react. Meanwhile, the hardware is upgraded regularly, and the resulting e-waste, which contains toxic and harmful substances, is disposed of in countries such as Ghana and Nigeria (Liu et al. 2023).

To bring the conversation back to the use of AI for multilingualism, it is worth noting that the areas being affected by the mining and droughts include the DRC, where the four national languages are Kituba, Lingala, Swahili, and Tshiluba, as well as the Atacama region of Chile, where the traditional language of the Lickan Antay people is Kunza. Hazardous e-waste is being disposed of in Ghana and Nigeria, where the local languages include Akan, Hausa, Yoruba, and Igbo, among others. Meanwhile, some of the large-scale data centers are located in Ireland and the Netherlands, where local languages include Irish and Dutch, respectively. None of the Indigenous languages are well served by existing AI translation tools, and while Irish and Dutch fare slightly better, the performance of machine translation in these languages still lags behind that available for English. The connection to climate justice therefore becomes apparent when we consider that AI translation tools do not work well for the language communities that are providing the minerals, receiving the e-waste, or hosting the data centers and suffering the related ill effects of these activities.

Scholarly publishing

As discussed above in the section on social justice, adopting a lingua franca for scholarly publishing leads to issues such as excluding certain voices from research conversations, promoting epistemic injustice, and overlooking knowledge that may help to address some of the world’s major problems, including climate change. The movement for a more multilingual scholarly publishing ecosystem is growing, but as pointed out by Alyssa Arbuckle et al., “multilingual publishing is pragmatically challenging” (2024a, 35), with issues ranging from being able to reach different communities with a call for papers to the risk of papers in languages other than English not being cited. Translation is recognized as a potential and partial solution but not one that is problem free: “translation, though possible, can be costly and labor intensive and can lengthen the production process of individual articles and issues as a whole; there is only so much capacity for activities such as copyediting and proof setting in multiple languages” (36). Hence there is a growing interest in AI-based translation as a means of addressing some, though by no means all, of the challenges with multilingual scholarly publishing.

At the same time that the scholarly publishing community is exploring how to foster linguistic diversity, it is also engaging with concerns about sustainability, environmental impacts, and climate justice, as attested to by the present volume. One active area of inquiry has explored the links between open access and climate justice—which was selected as the theme for Open Access Week in 2022—noting that openness can create pathways to more equitable knowledge sharing, including with communities that may be struggling to deal with the effects of the climate crisis (International Open Access Week 2022). Meanwhile, Anne Baillot (2023) engages in a detailed reflection and calculation of the environmental costs of producing physical and digital texts, using the production of her own book as an example. However, to my knowledge, no one has yet explicitly reflected on the intersection of multilingual scholarly communication and climate justice.

Researchers

While researchers of NLP, of AI and ethics, and of scholarly publishing have clear reasons to contemplate how AI translation tools might intersect with climate justice, the fourth group—researchers of all other disciplines—may not yet have seen an immediate need to do so. For many in this group, AI translation tools are likely to be perceived as a means to an end rather than as an object of inquiry through an environmental lens. And yet this group, which is by far the largest of the four, is the one that we most need to reach if we want to achieve a cumulative effect of more conscious use of AI tools. Recall that according to Van Noorden and Perkel’s (2023) survey of over 1,600 researchers, more than half of the respondents identified language- and translation-oriented activities as the biggest benefit of AI tools. Meanwhile, Luccioni et al. (2024) point out that just one AI translation tool, Google Translate, is known to be queried billions of times each day. If we progress further toward a multilingual scholarly publishing ecosystem, these numbers are likely to increase, thus further fueling the climate crisis.

Discussion: The tension between social justice and climate justice in multilingual scholarly communication

When seen through the lens of social justice, there are good reasons to support multilingual scholarly publishing. For instance, the playing field would be more level if individual scholars could work in their own language, and research results would be more accessible to local communities. But implementing multilingualism is complex, and it requires finding ways to support multilingual peer review, multilingual metadata, and multilingual searching. AI translation tools can potentially help to overcome these challenges, although more work is needed to improve the tools for less widely used languages and more specialized domains. Work is already underway to address these issues, and assuming that they will be resolved, the question remains whether the benefits of multilingual scholarly publishing will outweigh the harmful environmental effects of AI.

Implementing large-scale multilingual scholarly publishing without support from AI translation tools seems unlikely. There are not enough professional translators to keep up with the volume of research that would need to be translated into the world’s 7,000 languages (or even a subset of them). On top of this, human translation (without support from AI) is slow and expensive. Indeed, the inability of human translators to cope with the growing volume and increasing specialization of scientific output was a contributing factor in the adoption of a lingua franca for scholarly publishing. Denying the use of AI in scholarly publishing—whether to allow non-Anglophone scholars to participate in English more easily or to support a more genuinely multilingual scholarly communication ecosystem—would continue to penalize those populations who are already at a linguistic disadvantage in the scholarly community. Moreover, since language frequently intersects with other factors such as geographic location, which in turn intersects with socioeconomic status, the penalties for some of these scholars may be multiplied.

From the perspective of climate justice, adopting AI on a wide scale to support multilingual scholarly publishing will contribute to environmental harm. Moreover, the brunt of the harm will be felt by those very populations for whom the tools do not work well, and improving the tools could produce rebound effects that end up doing additional environmental harm (i.e., more uptake of AI tools means more mining, e-waste, data centers).

The challenge, then, is to unpack the resulting tension between social justice and climate justice to determine whether one should be prioritized in the context of scholarly publishing. An initial argument in favor of using AI translation tools could be that these tools may facilitate the inclusion of voices that are currently excluded from, or underrepresented in, research conversations. Since different languages often go hand in hand with different cultures, worldviews, and knowledge systems, this diversification could bring fresh perspectives and valuable ideas for tackling the climate crisis. For instance, Justine Townsend et al. (2020) identify Indigenous knowledge systems as critical for policy and decision-making in regard to climate justice. Likewise, using a case study of lithium mining in the Atacama region of Chile, Lehuedé (2025) challenges the top-down approach prevailing in debates on AI ethics and sustainability, strongly emphasizing that no ethical and sustainable AI will be possible as long as the affected communities, and their ways of relating to the environment, are excluded from the design and development of so-called intelligent systems. In this case, the community is the Lickan Antay people, whose ancestral language is Kunza and who also use a distinct variety of Chilean Spanish. Townsend et al. echo that Indigenous Peoples around the world are among the groups most affected by climate change; however, many policies and decisions about how to deal with the climate crisis are based largely on Western science. As Menzies et al. (2022) point out, by failing to consider non-Western knowledge, we miss opportunities to incorporate the best available knowledge to address the climate crisis.

A related argument in favor of AI translation is that it could give local communities better access to knowledge. At present, with most research published primarily in English, many local communities cannot access it. The need for local communities to access research findings about climate change was a major focus of Open Access Week in 2022, which had the theme “Open for Climate Justice.” There is a case to be made for viewing language as a facet of openness, which includes making knowledge available in local languages.

However, both of these arguments break down when we consider that, at present, AI translation tools do not work well for less widely used languages, including many Indigenous languages and less widespread language varieties. Therefore, these tools can neither facilitate greater inclusion of Indigenous knowledge in the research conversation nor transfer existing knowledge to Indigenous communities. Since many Indigenous languages are among the most under-resourced languages, the effort to collect (or generate) a sufficient volume of data and to train corresponding models for these languages is a mammoth task—and one that would very likely lead to a rebound effect inflicting further harm on these vulnerable communities.

Another problem with the argument for developing AI translation tools for Indigenous or less widely used languages is that it assumes both that the data are available and that the communities want these products. A recent example from Canada recounts how a group of Indigenous scholars submitted a grant application, written in the Cree language, to a national funding council (Cyperling and Arzola Salazar 2024). After initially rejecting the application on linguistic grounds, the council later agreed to process it. The council discovered that potential peer reviewers from the Indigenous community could speak Cree fluently but were not familiar with the written form of the language since, for them, it was primarily an oral language. The relative lack of written data for this and some other Indigenous languages will hamper efforts to create high-quality AI translation tools using data-driven techniques. This example also shows that in some communities, specialized knowledge may not reside in scholarly publications, raising the question of whether there is a need or desire for scholarly publications to be translated into these languages, or whether other forms of knowledge capture and sharing might be more appropriate.

One could argue that having AI translation tools available might benefit these communities in ways beyond scholarly publishing. It is true that such communities may wish to communicate in other contexts; however, data are more plentiful for general language than for specialized language, so these tools already exist and work comparatively well for many languages when applied to everyday tasks. Making them work well for research and scholarly publishing tasks, by contrast, requires additional data collection and domain adaptation, meaning further training and fine-tuning of the models, which in turn adds to the environmental harms. And once again, it is likely to come with rebound effects.

Given these harms, are there other ways of supporting multilingualism in scholarly publishing that do not depend on AI translation tools? The global Coalition for Advancing Research Assessment (CoARA) has set out a shared direction for changes in research assessment practices intended to maximize the quality and impact of research (CoARA 2022), including research published in languages beyond English. In 2024, CoARA established a Working Group on Multilingualism and Language Biases in Research Assessment, which is currently developing proposals for valuing and incentivizing publication in languages beyond English. Traditional human translation remains a possibility, although, again, it can be slow and expensive. One solution here may be to make more funds available expressly for hiring professional translators, although it should be noted that a large number of translators have integrated AI tools into their workflows (ELIS Research 2025), so hiring a professional translator may not reduce the environmental impact of AI within scholarly publishing. Alternatively, Rassim Khelifa et al. (2022) have suggested that researchers could establish an exchange of services, offering one another various types of support, including editing, proofreading, or translation. More work is needed to identify other sustainable options for supporting multilingual scholarly communication.

Conclusion

I am somewhat comforted by Anne Baillot’s statement at the end of her book From Handwriting to Footprinting: Text and Heritage in the Age of the Climate Crisis: “I do not have many final conclusions to draw from this attempt to reflect on environmentally aware workflows for providing access to text” (2023, 150). After setting out to assess the environmental footprint of text, Baillot observes the myriad difficulties involved in picking apart the benefits and drawbacks of technology use when it is so firmly embedded in broader socioeconomic mechanisms. Weighing social justice against climate justice in the context of multilingual scholarly publishing proves similarly challenging.

I do not want to negate the value of participating in research in one’s own language—a privilege that I have long enjoyed as an Anglophone—nor to suggest that AI translation tools have no benefit or place in research and publishing. But I believe that it is important to evaluate whether using AI translation tools to make multilingual scholarly publishing available in all languages is a worthy goal to pursue, given that the communities who are harmed the most by the development of these technologies also currently benefit the least from their use. Just because we can (or may one day be able to) develop high-quality AI translation tools for (almost) all languages for scholarly publishing purposes, does it mean that we should? Are there other forms of knowledge and other approaches to knowledge sharing that might better serve the needs of some communities?

My own thinking on this topic is evolving as I continue to inform myself on the issues. At this moment, it appears to me that the harms currently outweigh the benefits, but I recognize that I am speaking from a position of privilege as an Anglophone researcher based in a Western country, as well as from the perspective of a translator and translation technologist. The issue is exceedingly complex and requires input from other quarters, such as experts in AI and ethics and in scholarly publishing, and above all from the wider community of non-Anglophone researchers and knowledge keepers from other regions.

Defining a course of action is not easy, but, like Baillot (2023), I believe that education and awareness raising are useful short-term measures. Baillot muses about the need for basic training in environmental awareness, initially suggesting that it could be offered by professional institutions such as state schools or publicly funded universities, but then reflecting that “maybe this kind of training should be developed on a more widely distributed and accessible level, such as community colleges or universités populaires, for people of all ages as long as they understand what is at stake” (134). Indeed, this is very similar to the work I have been carrying out as part of the Machine Translation Literacy Project (Bowker and Buitrago Ciro 2019). The project began in 2019 as a way of helping scholars in all disciplines understand more about AI translation tools, how they work, and how to work with them to get higher-quality output (Bowker 2021). Its first five years focused firmly on the social justice agenda, but moving forward, it will be important to incorporate climate justice and to temper the ideal of universally available AI translation for all languages in scholarly publishing with information about the effects of pursuing this ideal at all costs.

Author Biography

Lynne Bowker is Full Professor and Canada Research Chair in Translation, Technologies and Society at Université Laval in Canada. Her research interests include understanding more about the use of translation tools by people within and beyond the language professions, including for the purposes of scholarly publishing. She is the co-author of Machine Translation and Global Research (Emerald Publishing, 2019) and author of the open access book De-mystifying Translation: Introducing Translation to Non-translators (Routledge, 2023). She is the director of the Machine Translation Literacy Project funded by the Social Sciences and Humanities Research Council of Canada (Insight Grant: 435-2020-0089).

References

Amano, Tatsuya, Valeria Ramírez-Castañeda, Violeta Berdejo-Espinola, et al. 2023. “The Manifold Costs of Being a Non-native English Speaker in Science.” PLoS Biology 21 (7): e3002184. https://doi.org/10.1371/journal.pbio.3002184.

Ammon, Ulrich. 2010. “The Hegemony of English.” In World Social Science Report: Knowledge Divides, 154–55. UNESCO Publishing. https://unesdoc.unesco.org/ark:/48223/pf0000188333.

Arbuckle, Alyssa, Janneke Adema, and Élika Ortega. 2024a. “Editors’ Gloss: The Problem with Monolingualism in Academic Knowledge Production.” Journal of Electronic Publishing 27 (1): 33–40. https://doi.org/10.3998/jep.6258.

Arbuckle, Alyssa, Janneke Adema, and Élika Ortega, eds. 2024b. “Multilingual Publishing and Scholarship.” Special issue, Journal of Electronic Publishing 27 (1). https://journals.publishing.umich.edu/jep/issue/336/info/.

Baillot, Anne. 2023. From Handwriting to Footprinting: Text and Heritage in the Age of Climate Crisis. Open Book Publishers. https://www.openbookpublishers.com/books/10.11647/OBP.0355.

Bennett, Karen. 2007. “Epistemicide! The Tale of a Predatory Discourse.” The Translator 13 (2): 151–69. https://doi.org/10.1080/13556509.2007.10799236.

Berdejo-Espinola, Violeta, and Tatsuya Amano. 2023. “AI Tools Can Improve Equity in Science.” Science 379 (6636): 991. https://doi.org/10.1126/science.adg9714.

Bowker, Lynne. 2021. “Machine Translation Literacy Instruction for Non-translators: A Comparison of Five Delivery Formats.” In TRITON 2021: TRanslation and Interpreting Technology ONline: Proceedings of the Conference, edited by Ruslan Mitkov, Vilelmini Sosoni, Julie Christine Giguère, Elena Murgolo, and Elizabeth Deysel, 25–36. INCOMA. http://triton-conference.org/proceedings/.

Bowker, Lynne, Philips Ayeni, and Emanuel Kulczycki. 2023. Linguistic Privilege and Marginalization in Scholarly Communication: Understanding the Role of New Language Technologies for Shifting Language Dynamics. Final report submitted to the Social Sciences and Humanities Research Council of Canada, December 17, 2023. https://doi.org/10.20381/858s-q632.

Bowker, Lynne, and Jairo Buitrago Ciro. 2019. Machine Translation and Global Research: Towards Improved Machine Translation Literacy in the Scholarly Community. Emerald Publishing.

CHIST-ERA. 2025. “Science in Your Own Language (SOL).” Call announcement. https://www.chistera.eu/call-2025-announcement.

Chu, Chenhui, and Rui Wang. 2018. “A Survey of Domain Adaptation for Neural Machine Translation.” In Proceedings of the 27th International Conference on Computational Linguistics, 1304–19. Association for Computational Linguistics. https://aclanthology.org/C18-1111/.

Coalition for Advancing Research Assessment (CoARA). 2022. The Agreement on Reforming Research Assessment. Accessed June 18, 2024. https://coara.eu/agreement/the-agreement-full-text/.

Commissaire à la langue française. 2023. Le français, langue du savoir? Pour une approche structurée de l’usage de la traduction automatique dans le milieu scientifique [French, a language of knowledge? Toward a structured use of machine translation in research]. https://commissairelanguefrancaise.quebec/publications/avis/francais-traduction-milieu-scientifique.pdf.

Costa-jussà, Marta R., James Cross, Onur Çelebi, et al. 2022. No Language Left Behind: Scaling Human-Centered Machine Translation. Preprint, arXiv, updated August 25, 2022. https://arxiv.org/abs/2207.04672.

Crenshaw, Kimberlé. 1991. “Mapping the Margins: Intersectionality, Identity Politics, and Violence Against Women of Color.” Stanford Law Review 43 (6): 1241–99. https://doi.org/10.2307/1229039.

Cyperling, Marta, and Aylin Arzola Salazar. 2024. “Research Proposal Written Entirely in Cree Language Receives Federal Funding.” UCalgary News, January 29, 2024. https://ucalgary.ca/news/research-proposal-written-entirely-cree-language-receives-federal-funding.

DeLisi, Lynn E. 2022. “Where Have All the Reviewers Gone? Is the Peer Review Concept in Crisis?” Psychiatry Research 310:114454. https://doi.org/10.1016/j.psychres.2022.114454.https://doi.org/10.1016/j.psychres.2022.114454

Duignan, Brian. 2025. “Social Justice.” In Encyclopedia Britannica. https://www.britannica.com/topic/social-justice.https://www.britannica.com/topic/social-justice

ELIS Research. 2025. European Language Industry Survey (ELIS) 2025: Trends, Expectations and Concerns of the European Language Industry. https://elis-survey.org/wp-content/uploads/2025/03/ELIS-2025_Report.pdfhttps://elis-survey.org/wp-content/uploads/2025/03/ELIS-2025_Report.pdf

Espin, Johanna, Sebastian Palmas, Farah Carrasco-Rueda, et al. 2017. “A Persistent Lack of International Representation on Editorial Boards in Environmental Biology.” PLoS Biology 15 (12): e2002760. https://doi.org/10.1371/journal.pbio.2002760.https://doi.org/10.1371/journal.pbio.2002760

Fiorini, Susanna. 2022. “Traduction automatique et édition scientifique” [Machine translation and scientific publishing]. Traduire 246:36–45. https://doi.org/10.4000/traduire.2805.https://doi.org/10.4000/traduire.2805

Forcada, Mikel L. 2017. “Making Sense of Neural Machine Translation.” Translation Spaces 6 (2): 291–309. https://doi.org/10.1075/ts.6.2.06fo.https://doi.org/10.1075/ts.6.2.06fo

Gordin, Michael D. 2015. Scientific Babel: How Science Was Done Before and After English. University of Chicago Press.

Hannah, Kelsey, Neal R. Haddaway, Richard A. Fuller, and Tatsuya Amano. 2024. “Language Inclusion in Ecological Systematic Reviews and Maps: Barriers and Perspectives.” Research Synthesis Methods 15 (3): 466–82. https://doi.org/10.1002/jrsm.1699.https://doi.org/10.1002/jrsm.1699

Heaven, Douglas. 2018. “AI Peer Reviewers Unleashed to Ease Publishing Grind.” Nature 563 (7733): 609–10. https://doi.org/10.1038/d41586-018-07245-9.https://doi.org/10.1038/d41586-018-07245-9

Helsinki Initiative. 2019. The Helsinki Initiative on Multilingualism in Scholarly Communication. https://www.helsinki-initiative.org.https://www.helsinki-initiative.org

Henderson, Peter, Jieru Hu, Joshua Romoff, Emma Brunskill, Dan Jurafsky, and Joelle Pineau. 2020. “Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning.” Journal of Machine Learning Research 21 (248): 1–43. http://jmlr.org/papers/v21/20-312.html.http://jmlr.org/papers/v21/20-312.html

Hosseini, Mohammad, and Serge P. J. M. Horbach. 2023. “Fighting Reviewer Fatigue or Amplifying Bias? Considerations and Recommendations for Use of ChatGPT and Other Large Language Models in Scholarly Peer Review.” Research Integrity and Peer Review 8 (1): Article 4. https://doi.org/10.1186/s41073-023-00133-5.https://doi.org/10.1186/s41073-023-00133-5

Hughes, Alex. 2023. “ChatGPT: Everything You Need to Know About OpenAI’s GPT-4 Tool.” BBC Science Focus, September 25, 2023. https://www.sciencefocus.com/future-technology/gpt-3.https://www.sciencefocus.com/future-technology/gpt-3

International Open Access Week. 2022. “Theme for Open Access Week 2022: Open for Climate Justice.” https://www.openaccessweek.org/blog/theme-for-open-access-week-2022-open-for-climate-justice.https://www.openaccessweek.org/blog/theme-for-open-access-week-2022-open-for-climate-justice

Khelifa, Rassim, Tatsuya Amano, and Martin A. Nuñez. 2022. “A Solution for Breaking the Language Barrier.” Trends in Ecology and Evolution 37 (2): 109–12. https://doi.org/10.1016/j.tree.2021.11.003.https://doi.org/10.1016/j.tree.2021.11.003

Krummel, Daniel, and Patrick Siegfried. 2021. “The Dark Side of Samsung’s Value Chain: The Human Costs of Cobalt Mining ‘Blood, Sweat and Cobalt.’” Journal of Geoscience and Environment Protection 9 (2): 182–203. https://doi.org/10.4236/gep.2021.92011.https://doi.org/10.4236/gep.2021.92011

Kudiabor, Helena. 2024. “How AI-Powered Science Search Engines Can Speed Up Your Research.” Nature, October 10, 2024. https://www.nature.com/articles/d41586-024-02942-0https://www.nature.com/articles/d41586-024-02942-0

Lehuedé, Sebastián. 2025. “An Elemental Ethics for Artificial Intelligence: Water as Resistance Within AI’s Value Chain.” AI & Society: Journal of Knowledge, Culture and Communication 40 (3): 1761–74. https://doi.org/10.1007/s00146-024-01922-2.https://doi.org/10.1007/s00146-024-01922-2

Liang, Weixin, Mert Yuksekgonul, Yining Mao, Eric Wu, and James Zou. 2023. “GPT Detectors Are Biased Against Non-native English Writers.” Patterns 4 (7): 100779. https://doi.org/10.1016/j.patter.2023.100779.https://doi.org/10.1016/j.patter.2023.100779

Liu, Kang, Quanyin Tan, Jiadong Yu, and Mengmeng Wang. 2023. “A Global Perspective on E-waste Recycling.” Circular Economy 2 (1): 100028. https://doi.org/10.1016/j.cec.2023.100028.https://doi.org/10.1016/j.cec.2023.100028

Luccioni, Sasha, Yacine Jernite, and Emma Strubell. 2024. “Power Hungry Processing: Watts Driving the Cost of AI Deployment?” In FAccT’24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 85–99. https://doi.org/10.1145/3630106.3658542.https://doi.org/10.1145/3630106.3658542

Luccioni, Sasha, Emma Strubell, and Kate Crawford. 2025. “From Efficiency Gains to Rebound Effects: The Problem of Jevons’ Paradox in AI’s Polarized Environmental Debate.” FAccT’25: Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency, 76–88. https://doi.org/10.1145/3715275.3732007.https://doi.org/10.1145/3715275.3732007

Ma, Lai. 2024. “Generative AI for Academic Publishing? Some Thoughts About Epistemic Diversity and the Pursuit of Truth.” KULA: Knowledge Creation, Dissemination, and Preservation Studies 7 (1): 1–5. https://doi.org/10.18357/kula.287.https://doi.org/10.18357/kula.287

Mazzucato, Mariana. 2024. “The Ugly Truth Behind ChatGPT: AI Is Guzzling Resources at Planet-Eating Rates.” The Guardian, May 30, 2024. https://www.theguardian.com/commentisfree/article/2024/may/30/ugly-truth-ai-chatgpt-guzzling-resources-environment.https://www.theguardian.com/commentisfree/article/2024/may/30/ugly-truth-ai-chatgpt-guzzling-resources-environment

Menzies, A. K., E. Bowles, M. Gallant, et al. 2022. “‘I See My Culture Starting to Disappear’: Anishinaabe Perspectives on the Socioecological Impacts of Climate Change and Future Research Needs.” FACETS 7:509–27. https://doi.org/10.1139/facets-2021-0066.https://doi.org/10.1139/facets-2021-0066

Miracchi Titus, Lisa. 2024. “Does ChatGPT Have Semantic Understanding? A Problem with the Statistics-of-Occurrence Strategy.” Cognitive Systems Research 83:101174. https://doi.org/10.1016/j.cogsys.2023.101174.https://doi.org/10.1016/j.cogsys.2023.101174

Moorkens, Joss, Sheila Castilho, Federico Gaspari, Antonio Toral, and Maja Popović. 2024. “Proposal for a Triple Bottom Line for Translation Automation and Sustainability: An Editorial Position Paper.” JoSTrans: The Journal of Specialised Translation 41:2–25. https://doi.org/10.26034/cm.jostrans.2024.4706https://doi.org/10.26034/cm.jostrans.2024.4706

Muldoon, James, Mark Graham, and Callum Cant. 2024. Feeding the Machine: The Hidden Human Labor Powering A.I. Bloomsbury.

Noy, Shakked, and Whitney Zhang. 2023. “Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence.” Science 381 (6654): 187–92. https://doi.org/10.1126/science.adh2586.https://doi.org/10.1126/science.adh2586

Nygaard, Lynn P. 2019. “The Institutional Context of ‘Linguistic Injustice’: Norwegian Social Scientists and Situated Multilingualism.” Publications 7 (1): Article 1. https://doi.org/10.3390/publications7010010.https://doi.org/10.3390/publications7010010

Palser, Eleanor R., Maia Lazerwitz, and Aikaterini Fotopoulou. 2022. “Gender and Geographical Disparity in Editorial Boards of Journals in Psychology and Neuroscience.” Nature Neuroscience 25:272–79. https://doi.org/10.1038/s41593-022-01012-w.https://doi.org/10.1038/s41593-022-01012-w

Ramírez-Castañeda, Valeria. 2020. “Disadvantages in Preparing and Publishing Scientific Papers Caused by the Dominance of the English Language in Science: The Case of Colombian Researchers in Biological Sciences.” PLoS ONE 15 (9): e0238372. https://doi.org/10.1371/journal.pone.0238372.https://doi.org/10.1371/journal.pone.0238372

Reyes Ayala, Brenda, Ryan Knudson, Jianping Chen, Gaohui Cao, and Xinyue Wang. 2018. “Metadata Records Machine Translation Combining Multi-Engine Outputs with Limited Parallel Data.” Journal of the Association for Information Science and Technology (JASIST) 69 (1): 47–59. https://doi.org/10.1002/asi.23925.https://doi.org/10.1002/asi.23925

Schwartz, Roy, Jesse Dodge, Noah A. Smith, and Oren Etzioni. 2020. “Green AI.” Communications of the ACM 63 (12): 54–63. https://doi.org/10.1145/3381831.

Seghier, Mohamed L. 2023. “Using ChatGPT and Other AI-Assisted Tools to Improve Manuscripts Readability and Language.” International Journal of Imaging Systems and Technology 33 (3): 773–75. https://doi.org/10.1002/ima.22902.

Shchemeleva, Irina. 2021. “‘There’s No Discrimination, These Are Just the Rules of the Game’: Russian Scholars’ Perception of the Research Writing and Publication Process in English.” Publications 9 (1): Article 1. https://www.mdpi.com/2304-6775/9/1/8.

Shterionov, Dimitar, and Eva Vanmassenhove. 2023. “The Ecological Footprint of Neural Machine Translation Systems.” In Towards Responsible Machine Translation: Ethical and Legal Considerations in Machine Translation, edited by Helena Moniz and Carla Parra Escartín, 185–213. Springer.

Sibeko, Johannes, and Mmasibidi Setaka-Bapela. 2024. “Challenges in Intellectualizing Sesotho for Use in Academic Publications.” Journal of Electronic Publishing 27 (1). https://doi.org/10.3998/jep.5408.

Soler, Josep, and Kathrin Kaufhold, eds. 2025. Language and the Knowledge Economy: Multilingual Scholarly Publishing in Europe. Routledge.

Strubell, Emma, Ananya Ganesh, and Andrew McCallum. 2019. “Energy and Policy Considerations for Deep Learning in NLP.” In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 3645–50. Association for Computational Linguistics. https://doi.org/10.18653/v1/P19-1355.

Szadkowski, Krystian. 2023. Capital in Higher Education: A Critique of the Political Economy of the Sector. Palgrave Macmillan.

Thorp, H. Holden. 2023. “ChatGPT Is Fun, but Not an Author.” Science 379 (6630): 313. https://doi.org/10.1126/science.adg7879.

Townsend, Justine, Faisal Moola, and Mary-Kate Craig. 2020. “Indigenous Peoples Are Critical to the Success of Nature-Based Solutions to Climate Change.” FACETS 5 (1): 551–56. https://doi.org/10.1139/facets-2019-0058.

UNESCO. 2021. UNESCO Recommendation on Open Science. https://unesdoc.unesco.org/ark:/48223/pf0000379949_eng.

Urquieta, Claudia, and Daniela Dib. 2024. “U.S. Tech Giants Are Building Dozens of Data Centers in Chile. Locals Are Fighting Back.” Rest of World, May 31, 2024. https://restofworld.org/2024/data-centers-environmental-issues/.

Van Noorden, Richard, and Jeffrey M. Perkel. 2023. “AI and Science: What 1,600 Researchers Think.” Nature 621:672–75. https://doi.org/10.1038/d41586-023-02980-0.