Article

Organized Futures: Speculative Design for More Just and Joyful Scholarly Infrastructure

Author
  • Amanda Wyatt Visconti (University of Virginia)

Abstract

This article explores my process for designing just and joyful scholarly infrastructure by performing a representative speculative experiment: drafting a digital humanities scholarship review clinic shaped by a “what if . . . ?” framing. I describe and analyze possible practices and policies toward running an affirmative experience of review for written genres of digital humanities scholarship, including conference abstracts, grant proposals, digital project reviews, and blog posts: a “Water Your Work” clinic focused on strengthening and supporting both scholarship and scholar. I share how I work through these ideas at a mid-stage, rather than only their eventual polished outcome, to increase how publicly and transparently we practice the design processes that might help us grow more just, caring, and effective scholarly communities. I conclude by exploring next steps for improving the thought experiment and testing it in the real world, including through analysis of relevant existing fields of study and values systems.

Keywords: digital humanities, scholarly infrastructure, community design, speculative design, peer review, digital scholarship evaluation

How to Cite:

Visconti, A. W. (2025) “Organized Futures: Speculative Design for More Just and Joyful Scholarly Infrastructure”, The Journal of Electronic Publishing 28(1). doi: https://doi.org/10.3998/jep.6020


Published on
2025-01-27

Peer Reviewed

What if you looked forward to the peer review of your work?

What if it were safe and effective to ask for feedback on your scholarly writing unguardedly, from your earliest rough drafts onward?

What if scholarly feedback and review were truly designed as gifts: an exciting opportunity for the author, wholly generative and supportive of you, your work, and what your work might do for its readers?

Watering your work: a speculative experiment in affirming review

What if we define the writing review experience we want to see, then work backward to realize that vision?

When I characterize my design approach as “speculative,” I use that term loosely, acknowledging it has more specific uses in fields and practices including sci-fi writing and design fiction. My use emphasizes the impact of starting design work from a “what if …?” framing. Rather than starting from the present and looking forward only in small iterative bites, my process initially focuses on what could or should be before working backward to chart a path to that desired future. When analyzing the current state of things, I do so to fuel my conviction in where I want to go rather than as a weight dragging down what futures seem possible.

Speculative scholarly design influences within my fields of experimental humanities and cultural heritage (e.g., libraries, archives, museums) scholarship include:

  1. Jessica Marie Johnson: Johnson’s work includes LifexCode: DH Against Enclosure, “a grammar of refusal and a language of freedom for the [digital] humanities” (DH) that “incubates community accountable, decolonial, antiracist projects and praxis.” Johnson wields innovative forms such as “microlabs” and “bridges” to organize toward speculative futures while resisting restrictive boundaries.

  2. Élika Ortega and Stewart Varner: Ortega’s writing planted the framing of “what DH could be” in my head, early in my involvement with the eternal “define DH” conversation; her scholarship models how to take an active part in moving us toward a better and more just DH.2 Varner’s writing riffed on Ortega’s framing to speculate on how several specific “what if DH was …?” questions could direct our field toward various futures.

  3. Marisa Parham: Parham’s scholarship includes co-direction of the Immersive Realities Labs for the Humanities, a collective whose work includes an array of projects applying critical speculative methods to build digital humanities futures, such as .break .dance, a time-based web experience designed “in honor of the many ways Black diasporic peoples use memory, performance, and speculative prayer to recode time and space.”

  4. Cassius Adair: Adair’s AU: Alternate University CFP and its design provocations apply the speculative freedom of fan fiction to the work of imagining, demanding, and building a more just higher ed.3

  5. Ronda Grizzle: Grizzle’s scholarship includes years of teaching academic colleagues how to design effective, values-driven team charters to power collaborative DH projects (some examples here). My work is also influenced by her expertise in DH research project design and management, where she deftly balances the aspirational and the achievable, and in life coaching (e.g., using speculative design to envision and plan academic and personal work).

  6. Crystal Luo and Brandon Walsh: Luo and Walsh’s presentation as part of the “Digital Humanities Pedagogy and Labor” roundtable at the 2022 “DH Unbound” conference discussed the intertwined possibilities for the digital humanities and collective action in academia toward imagining better collective futures, framing DH as a particularly generative environment for advancing labor, care, and justice in higher ed and beyond.

  7. Laura Miller: Miller’s internal leadership, community design, and infrastructure work around digital scholarship and librarianship at the University of Virginia Library and the Scholars’ Lab has been a direct and frequent model for my scholarly infrastructure design thinking, including through her expertise strategizing exciting possible futures that honor and support the goals of all involved.

  8. Johanna Drucker: Drucker’s SpecLab: Digital Aesthetics and Projects in Speculative Computing shares a variety of digital humanities projects that take speculative leaps to advance intersections of technology and humanities work.

  9. Felecia Caton Garcia and Heather Pleasants: Garcia and Pleasants’ 2021 Digital Pedagogy Lab course “What If? Speculative Fiction and the Future of Education” applied speculative fiction approaches to pedagogy that helped me see and intervene in how unnecessarily disjointed my creative and administrative work had become.

These clear-eyed approaches incorporate attention to current and past injustice, using the excitement of the worlds we want to build to fuel the efforts needed to reach those worlds.

Inspired by these models, I’m approaching how we review scholarly writing by designing a format that I can eventually test and iterate, starting not from what is but from what I urgently believe could be. A speculative framing helps me not only spot low-hanging fruit we can pick to improve right now; it also helps me attend to situations so out of alignment with our stated goals that we need to stop and try something different immediately—even if we don’t yet have all the answers for making something more than slightly better. With academic review, we don’t need to pause until we have a solution that obliterates all toxicity. We only need to increase attention to the ways it is toxic and follow that awareness by experimenting with other, possibly better approaches.

A useful approach in my speculative toolkit is separating the current ways some concepts are realized from their underlying motivations and values (I refer to those current realizations as “instantiations,” to emphasize they are just some of multiple instances of how you might pursue those guiding values). Often, there are many ways for us to pursue, realize, and communicate our scholarly values, goals, and questions—but we sometimes calcify into doing and publishing our thinking in only a limited number of ways. For example, to achieve my award-winning, weirdly shaped literature dissertation I de-emphasized what currently seemed to “count” as a dissertation in the humanities (a proto-monograph) and re-emphasized the goals of the humanities dissertation process.4 Some of these dissertational goals were: demonstrating ability to build on past relevant scholarly conversations, building forward on that foundation, and contributing to current relevant scholarly discussions.

None of those goals necessitate the use of the proto-monograph format to communicate and achieve those contributions, so I was able to successfully argue for my dissertation to not include written chapters and instead consist of various digital humanities scholarly formats (including code, user experience research, and social media writing) as the work to be formally evaluated for my dissertation defense. I used this same approach to explore where my dissertation’s participatory digital edition of James Joyce’s Ulysses fit into the field of textual scholarship.5 I analyzed historical schools of editing thought for their underlying goals and identified that when and how those goals were realized today weren’t the only possibilities for fully realizing those textual scholarship goals. This process helped me situate my digital edition development as a legibly scholarly extension of textual scholarship; it also helped me identify fundamental goals of textual scholarship that I could adjust my research to engage with more meaningfully.

Part of the work of creating better futures is communication. If we maintain awareness that the traditional, usual, “best” ways of doing scholarship aren’t necessarily the only ways, or the best ones for our particular research project, we can better communicate with colleagues whose different methodologies might otherwise hide mutual scholarly goals and values.

Rejected foundations for improving scholarly writing feedback; or “Rigor? Eww!”

What if academic feedback mechanisms weren’t acceptable channels for our cruelest pseudo-intellectual impulses?

What if peer review systems weren’t encouraging siphons for white supremacist logics?6

I know from exhausting experience that any conversation about academic futures designed to be more just, caring, generative, inclusive, and affirming frequently summons that “R word” (academic Rigor), as though (tellingly) visions of a more just peer review process must mean we’re dropping our otherwise-constant scholarly attention to effectively furthering knowledge. I do not want what currently is in the state of scholarly review to weigh down my imagination of what could be. But to share my working process effectively, I convey my interpretation of what Rigor-prioritizing scholarly review currently looks like:

  • concepts of creativity, expertise, and intelligence commonly wielded as weapons to dismiss the ideas of already systemically harmed groups;

  • a “you know it when you see it” that just so happens to only apply to unjustly limited kinds of knowledges;

  • shorthand for refusing to engage with a colleague’s ideas; and

  • a failure to participate in mutual knowledge-making by refusing to strengthen other scholars’ writing and selves.

I do not want to sidetrack into defining, defending, or defenestrating a strawperson—or even a steelperson—definition of Rigor here.7 I hope to avoid a common argumentative distraction through my tool of identifying core goals and comparing them against current instantiations of those goals. Conversations purporting to debate the worth of a concept (e.g., scholarly rigor) go nowhere when they fail to surface conflicting definitions of the term at issue. For example: “rigor” generally means shitty gatekeeping to me, but to someone else it labels a laudable commitment to doing our best thinking. We lose time when we argue about a term for which we don’t hold a shared definition; shorthand is only efficacious when it points us at the same entity.

Our shared concerns are:

  1. How do we do our best thinking? And then,

  2. How do we best share that thinking so others can: learn from it, connect us to adjacent thinking we aren’t aware of, help us improve and iterate on our thinking, and build on our work with further work of their own?

Some might say the answer to achieving our “best” thinking and sharing is the concept of academic rigor; but when “rigor” means the failures I list above, our assessments of how to achieve shared concerns of knowledge-making diverge:

I don’t give a shit about rigor.

I do give a shit about some things “rigor” could mean (but too regularly—which is to say, ever—doesn’t). I’d respect “rigor” that only meant practices we challenge ourselves to perform toward the best communal knowledge outcomes. For example, such rigor could emphasize that when we share our scholarship, the more we communicate the contexts of that scholarship, the better others can assess, learn from, and build on it. Such context can include the length and nature of my personal background and experiences related to a topic; my intent, the context in which I accomplished and shared the work, and the audience I had in mind; how certain I am in my claims and why; the depth and breadth of my familiarity with adjacent past work; the scope of my claims; and whose work I am building on. Sharing and attending to knowledge’s context impacts how knowledge is built: Why do you think what you think, and what do I think of your thinking? Put another way: How do I contextualize and assess the substance of what you say?8

I want to be clear that when I say “expertise,” I mean everything about you and your experiences that shapes what you think and do, how you know what you think you know, and how that context might inform how others weigh what you say. Academia has shorthands for some forms of expertise, such as “faculty role in relevant department” or “graduate degree in relevant topic.” Academia lacks shorthand for and accurate assessment of other forms of expertise, such as the breadth of other lived experiences, practice, and cultural knowledges that inform thinking. Often, academics treat professional roles and social positions as more meaningful equivalents of expertise than those roles should necessarily imply, given the biases and limits impacting who can gain such signifiers. When I say “expertise,” I mean an accurately expansive understanding of the factors that contribute to how we think.

To help you follow the speculative experiment I share later in this piece, I am trying to give you the context from whence these vibes of mine flow: I’m not especially interested in “criticism” interpreted as “focus on finding flaws.” In rejection as a built-in, assumed-beneficial feature of intellectual labor. In a zero-sum game of gatekeeping scholarship, where insisting on a relative “this could be better than it is, and is therefore not worthy as-is” regularly prevents some of our “best” thinking from reaching wider conversations. I’m trying to be part of making the world I want to live in, and to encourage others to consider what such a project—a process of scholarly review that advances us all, no destructive edge included—would make possible. It is not worth continuing an approach that can’t improve scholarship without routinely making scholars question if they can or should participate in scholarship at all—without cutting enthusiasm and curiosity, without weighing down and pushing out people excited about the project of communal knowledge building. An inflexible conception of good intellectual practice as necessarily destructive has no place in our scholarly toolkit.

I don’t need a “chin up, we all have to deal with paper rejection”—I need you to reckon with the fact that “rejection” is an entirely optional concept whose sole purpose is as the exclusivity mechanism in a prestige hierarchy. I don’t want to play your broken game. I regularly review for and contribute to building cooperative systems of understanding where rejection doesn’t exist. your gladiatorial review process is legacy tech, it’s embarrassing to assume I want anything to do with it.

—Jonny (a.k.a. Jonny L. Saunders), July 9, 2023 (https://neuromatch.social/@jonny/110684493228473104)

I’m psyching myself up to, like Saunders, refuse the “broken game” of collectively improving our thinking, writing, making through systems that enforce outdated and violent conceptions of knowledge.

If I cared to spend more time dissecting how concepts of rigor impact scholarship, I might try to gather specific written definitions of the word and parse where those don’t necessarily need to overlap with the unfortunate ways such uses are deployed or the impacts they have—such as critiques equivalent to rooting out moral and intellectual failings, and reviewers positioned as judges above reviewees. I would explore parallels with academic citation, where the most abundant and generous practice also happens to be the most accurate—and therefore is also the approach most productive of future good learning and scholarship. I could give more due to aspects of current steelperson definitions of rigor and review that can lead to some positive outcomes. I would consider how to investigate my guess that many of us already have a Reviewer 2 right in our own heads, more vocal than is productive.

I’m not putting my time into those endeavors. There is no cosmic scale of equal tearing down and building up we must carefully balance. Why do we cling to a system where the design encourages us to feel smart at another person’s expense, to feel like we’re building something communally useful by tearing things down? The abundance of gleeful affect accompanying destructive critique, rather than regret, is a brilliant red flag: these reviewers are unable to accurately participate in our mutual projects of building knowledge and caring for our colleagues.

Affect is one of the places where opportunities for combined justice and joy are easy to see: If we frame review as a gift to both reviewee and reviewer, can we find more opportunities to take delight and pleasure in the process? This article’s conclusion discusses some areas of existing work (e.g., writing pedagogy, literary critique, scholarly communications scholarship of peer review, community mediation and facilitation) that explore overlapping questions of affect and knowledge-making impact. As you read the speculative design draft below, I encourage you to consider the emotional vibes the suggested process might raise, including affects related to joy, suspicion, confidence, paranoia, and hope.

Even if impeccable in theory, a system that in practice reduces who takes part in communal knowledge-making, whose voices get amplified, and how early and regularly we share our works-in-progress sucks. A system that furthermore notoriously leaves scholars stressed, angry, dejected, despairing of continuing work doesn’t need defense or some light updates; we need to put that time and effort toward trying different approaches. Experimenting toward a process that doesn’t reduce knowledge or participating knowledge-makers does not mean worse scholarship! I’m done entertaining that as something deserving further attention in this piece.

Engaging in debate and co-creation can be productive, but they aren’t mandatory origins of the design process. We can just imagine and make good things. We can just design an approach that achieves what we want to see and then analyze how our pilot works in practice. But I only partially practice this belief in this piece: I wrote this section to help frame the rest of the piece against assumptions about effective scholarly review, when I could have just launched into my speculative design draft and hoped readers would pick up on the deliberate absence of certain common scholarly assumptions and practices. Or I could have skipped directly to that speculative design draft but included throughout comments addressing possible erroneous assumptions.

I believe scholars and our in-progress work benefit more from seeing our strengths reflected back to us through others’ unique perspectives, hearing questions and ideas we can respond to when we iterate on our thinking, and gaining co-conspirators who can point out some of the many possible tools for advancing the goals of our work. By instead separating out my rejection of a status quo from an imagined better approach into two different sections of this article, I hope to communicate about both present and possible future while avoiding derailing a vision for something possibly better with interwoven arguments for why we should envision anything different in the first place.

Interlude: fermentation

amanda: Yall, I am trying to stretch a bread baking metaphor … any ideas?

ronda: There must be dozens of puns around sourdough terminology.

Using fermenting to talk about ideas growing is a thing, right?

Starter, mother, discard

Descriptions of the characteristics of well kneaded dough

—private social media, February 14, 2024

Sourdough was one major way bread got a rise until baker’s yeast became easily available in the last couple hundred years.11 Though some regional sourdough traditions produce a markedly sour-flavored bread, my understanding is that this is not always a result of sourdough starters—and that in some cases, bakers employ additives to make an otherwise mild-flavored loaf meet the sour expectations of the general public. I think of academic review norms: How much of the degenerative stress and hurt of the process is there just because we expect it, or because it’s what we underwent in the past, so we assume it must have some value intrinsically tied to beneficial outcomes?

Does growing the strength of our work need to taste sour?

Or can communal watering of our work be both successful and sweet?

I regularly see the answer—that we don’t need to assume or add sourness to advance good scholarship—in the work of current and past Scholars’ Lab team colleagues (Arin Bennett, Jeremy Boggs, Chris Gist, Ronda Grizzle, Zoe LeBlanc, Shane Lin, Drew MacQueen, Laura Miller, Will Rourk, Ammon Shepherd, and Brandon Walsh) with one another and our larger research community. The speculative design draft that follows is thinking empowered by their examples and built on their existing successful practices.

Draft plan for “Water Your Work” affirming review clinic

I chose my title of “organized futures” to reflect my focus on the often-hidden work of scholarly infrastructure that relies on organization as method: in the sense of union organizing—the hard conversations, phone banking, spreadsheet-making of unions and other collective solidarity structures—and in administrative meanings such as “herding cats” (getting people to move, forward and together), organizing information, generally making an emergent better entity (like a lab, research team, or department) out of separate people and activities. Organizing isn’t shiny, and its innovations and scholarship get ignored. Organizing is necessary: communities, even those that appear to “come together organically,” require at least choices about time, place, activity, and outreach to potential community participants—and often much more work.

This section shifts my tone to that nitty-gritty organizing, administrative, “okay, how do we make this happen” work. My context is my job as director of a digital humanities research center (the Scholars’ Lab) in a university library. Our mission statement describes our work:

The Scholars’ Lab cultivates an interdisciplinary research community for critical, creative, and just explorations of the intersections of technology and culture. We offer students, faculty, staff, and local community members consultation and collaboration, teaching and training, and supportive spaces for anyone curious about pushing scholarly boundaries through creative, computational research practices. Our “people over projects” motto means our mission is creating and empowering colleagues as capable digital scholarly practitioners in their own rights; and our most important outcome has been cultivating just, caring, and inspiring scholarly community in support of that mission. [As opposed to focusing on bringing in grants and publishing tidily packaged digital humanities projects.] We are foremost a space for learning together—about anything—by trying stuff.

The speculative design below is built on the significant work of current and past Scholars’ Lab staff and community members, who’ve made a place in line with the kinds of curious, creative, and caring approaches to scholarship I explore in this piece. Where I consider how to better “water” our community’s research, I’m working ground that has already been well tilled (we think about how to meet such values regularly, as part of our programs) and well watered (a culture of affirmation and the joy of getting to teach others).12

This section shifts us from abstract reflection to speculative practice. I present initial notes describing how we might implement an initial pilot of the “Water Your Work” review clinic, aiming toward the scholarly writing review outcomes described above: outcomes that are desirable and energizing not just for the scholarship being reviewed but equally for its author. I’ll describe approaches to the various pieces of running an actual review clinic:

  1. how we communicate to potential authors that we’re offering an unusual sort of review, and how we help authors navigate barriers to opting into a non-mandatory review experience;

  2. how we help the review group reflect on the nature of affirmative review and come to consensus on terminology, practices, and goals;

  3. a set of guiding prompts for reviewer reference and inspiration (vs. repeating habits from past, non-affirmative review);

  4. how the reviewers synthesize and improve on individual review feedback as a group and deliver a single, navigable review document to the author; and

  5. guidelines for authors being reviewed: how to help reviewers do useful and supportive review and what to expect from the process.

Much of this section is deep in the weeds of specific practices, such as lists of prompts we’d give the author and reviewers to aid in resetting assumptions and focusing on the author’s expressed review goals. By discussing both abstract goals and what these might look like in a particular review, I’m hoping to make it easier to identify which approaches harmonize goals with practice and which practices would benefit from changes more in alignment with our motivating values.

At a high level, these practical steps should all align with two guiding principles:

  1. The review experience should be an affirming one, communicating the value of both the scholarship and its author to our knowledge community.

  2. The review experience should be shaped toward the author’s goals for their scholarship.

The author will be encouraged to provide reviewers with a framing understanding of their needs and hopes for the review and the context in which they expect to eventually share their work. A frequent reviewer failure mode is reviewing toward the paper the reviewer would like to see, following potential paths most inspiring and interesting to the reviewer’s areas of study, rather than focusing on what the author wishes to achieve. We don’t want to review perpendicularly to the author’s goals. To achieve a more affirming review, we may ask questions such as:

  • What do you need from us at this stage of your work? Are there specific aspects where you actively want support, or do you want a more general assessment of how readers may experience your writing?

  • Should we respond to this as if it’s finished and ready to submit, or should we assume it’s in an earlier drafting stage? For example, if your writing is in an earlier drafting stage and you know what you want to do to polish the piece, you might not want us to identify or belabor points you already expect to improve yourself before publication.

  • Are there types of feedback you aren’t interested in? For example, feedback on grammar, wordsmithing, and formatting can be unhelpful and patronizing when it isn’t desired.

  • What are your plans for the completed writing? Is there a specific place you’re looking to submit it, or where it’s already been accepted? After this piece, is your broader research and learning agenda aimed at going deeper on this topic, wider over related areas, or something else?

You’ll see more of such lists of prompts in the following draft plan of the review. I hope that foregrounding the above questions, with their attention to the author as expert on their desires and context, gives you a sense of the spirit in which these practical plans are conceptualized.

Pre-clinic outreach

Succinctly conveying that your writing clinic is explicitly designed to be a positive experience is at odds with years of ingrained assumptions about feedback and review—as demonstrated by the multiple paragraphs it took me above to frame this speculative plan’s approach. Working within the confines of poster, newsletter, and social media size restrictions, you could directly say: “We’re trying something new that’s designed to produce better scholarship while also leaving you feeling like the strong scholar with exciting things to say that you are. If this sounds like something you’d like to see in the world, please come help us practice and improve.”

Writing support pitched as Prizing Rigor doesn’t have to fight “is rigor even the right metric,” and yet it still can have the same outreach struggles of making authors see such feedback as helpful and professionally safe to use. There’s a barrier of scholars not wanting to be seen as attending or needing a support service, based on the (incorrect) view of scholarship as something done entirely solo, with any “help” or partnership diminishing the estimation of personal excellence involved. There’s also a barrier inside of us making it difficult to jump at opportunities to help us grow our thinking and writing. Many reviewing approaches are actively harmful, especially to folks already most harmed by constant individual and systemic white supremacy culture. Separate from the realities of how bad review can be, some folks consciously feel they should be able to go it alone. I still notice myself just subconsciously thinking consultations and asking for feedback and advice aren’t things I do—even though I value collaboration, believe my knowledge work is networked into that of many others, and am literally writing an article trying to help people seek such support!

In response, we could focus our framing on what the reviewers gain from the experience: help us improve our knowledge of the variety of work being written on campus; help us practice improving our writing feedback. We can emphasize our goals: helping you tell us what you want. We could give several quick examples of what goals we’re ready to help with, to help scholars see themselves participating. For example:

  • “Are you curious about getting involved in the international DH community? Even if you’ve got years of experience applying to conferences, applying to a DH conference might feel different—we can help you ID the right conference for your interests, provide feedback toward pitching your work for a DH framing, and encourage you to not feel like you need to identify as a digital humanist to get more involved.”

  • “Have you run out of energy and excitement around something you’re writing? Let us try to reflect back to you your draft’s strengths, unique approaches, and sections we’re most excited to read further writing about.”

Some level of anonymity for authors using the clinic could help potential authors over the initial threshold of using it. An important scholarly practice we need to model is that we always credit scholarly work, and in this case that includes the work of the reviewers; does that necessitate that the name of the author and the shape of their reviewed work (and thus the scope of the reviewer’s labor) also be made public, or is a middle ground possible?

Initially, we could promise authors semi-anonymity (unless any authors agree to be attributed by name or more specific descriptions): the draft itself would only be seen by the staff reviewers at the time of the review, and the nature of a given review process might be discussed internally among SLab staff when useful to developing our work. At the end of each academic year, SLab could share abstracted summaries of any reviews that happened over the past year; e.g., “In the last academic year, we performed:

  • 4 reviews

  • for 3 authors (1 author went through a second review iteration for the same piece)

  • from the UVA Architectural History, French, and Library departments;

  • authors were UVA faculty, graduate students, and academic staff;

  • the genres of writing reviewed included a conference long paper abstract for a disciplinary conference, a poster for a digital humanities conference, and a journal article already accepted to a peer-reviewed disciplinary journal; and

  • authors gave us the following quotes to share about the process: [include any quotes from authors that make the service sound useful to others].”
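A year-end summary in this abstracted style could be generated mechanically from private review records. The sketch below is hypothetical: the record fields, sample data, and `annual_summary` function are invented for illustration, not drawn from any real Scholars’ Lab system; it simply shows one way identifying details can be reduced to shareable counts.

```python
# Hypothetical review records; field names and values are illustrative only.
reviews = [
    {"author": "A1", "department": "Architectural History", "role": "faculty",
     "genre": "conference long paper abstract"},
    {"author": "A1", "department": "Architectural History", "role": "faculty",
     "genre": "conference long paper abstract"},  # second iteration, same piece
    {"author": "A2", "department": "French", "role": "graduate student",
     "genre": "digital humanities conference poster"},
    {"author": "A3", "department": "Library", "role": "academic staff",
     "genre": "peer-reviewed journal article"},
]

def annual_summary(records):
    """Reduce identifying records to the abstracted counts shared publicly."""
    return {
        "reviews": len(records),
        "authors": len({r["author"] for r in records}),
        "departments": sorted({r["department"] for r in records}),
        "roles": sorted({r["role"] for r in records}),
        "genres": sorted({r["genre"] for r in records}),
    }

summary = annual_summary(reviews)
print(summary["reviews"], summary["authors"])  # prints "4 3"
```

Because only set sizes and deduplicated category lists leave the function, an individual author’s draft and identity stay private while the clinic can still report its reach.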

We could assess this after a pilot year and move toward more public description of who was involved with what work and goals if that seems possible. We would talk with authors about what seems fair and also safe to them. We could consider some kind of named role for participating authors, such as “2024 Scholars’ Lab Scholarly Communication Futures Fellows,” that would thank them for their role in increasing our knowledge and improving our review practices, frame their participation as the brave and smart scholarly work it is, and also more accurately credit Scholars’ Lab and its reviewers for their work. We could also consider building this into existing Scholars’ Lab programs where we do review and feedback, such as our fellows and for-credit teaching.

I hope those approaches could counter hesitation about being literally seen working with us and also widen the word of mouth that helps us publicize the review clinics. Most of the writing feedback Scholars’ Lab is asked to provide comes in response to our frequent offers to review students’ scholarly or professional materials, especially for the students already working with us regularly (e.g., through fellowships, internships, wage roles, independent studies, and for-credit classes). In addition to that outreach’s effect, I imagine that gaining familiarity with how we read and respond to others in writing and conversation builds trust in how we might treat other work they share with us. I’d also talk to community members to learn what made folks who did seek our feedback comfortable doing so, and what might make folks who haven’t interested in trying it.

Finally, to improve outreach I’ll continue to work on what this creation is called, especially around the “clinic” term (and adjacent terms like “check-up”), with its connotations of the medical system’s brokenness and harms, of inequitable or incorrect power/knowledge doctor-patient dynamics, or of something being wrong with you or your work. Ronda Grizzle suggested “pop-up,” citing the brief, often unannounced small offerings by chefs or designers, with their relevant connotations of ephemerality and of desirable goods/services not otherwise available.13 “Pop-up” also has delightful resonances with this special issue’s CFP’s metaphors around fungi and collective thriving. “Event,” “workshop,” and “service” all also have undesirable or confusing connotations, but like “clinic” they have some level of familiarity to academics, pointing at some kind of direct scholarly benefit in a way other terms might not. I suspect I’ll think further through using the “pop-up” term in a legible way or identify some other non-academic practice that’s metaphorically relevant (e.g., potlucks, brake-light fix clinics, co-op bike repair sheds, pep talks, seed libraries).

Reviewer preparation: reflection & consensus

For an initial pilot of the clinic, I’d work with Scholars’ Lab colleagues I’ve already seen practice supportive and generative feedback; eventually, we might consider inviting in other colleagues interested in affirming review. Regardless of familiarity, we could refresh our commitment to a particularly affirming review experience by reading and annotating these guidelines as a group, then discussing our annotations and possibly incorporating them back into the guidelines themselves.

For reviewers who have already recently read and discussed these clinic guidelines, we could instead use an external set of reviewing guidelines (e.g., Reviews in DH’s “Reviews”) to read and discuss what aspects we might borrow for our own reviewing practices.

Alternatively, we could go around the circle of reviewers and ask each to share two to three adjectives or phrases describing what they tend to look for when reviewing scholarship. For example, reviewers might share concepts like “smart,” “cohesive,” and “grounded in the literature.” As a group, we then unpack what such terms mean to us. Do we agree they are applicable to our affirmative project? Are there ways of satisfying these concepts beyond academic norms that we should stay aware of while reading? For example, “grounded in existing work” could mean a page of citations to journals and books written in the past three years, or it could mean a description of the author’s fieldwork they’re drawing on as lived experience, with context such as who they worked with and who they modeled their practices on, to help readers similarly evaluate the author’s claims. Framing our review with an awareness of different ways authors can meet more abstract values (like “grounded in the literature”) could help us counter the urge to evaluate work against one ingrained format. This conversation should not be treated as a search for the “right” words; it’s simply a way to notice and reflect on what assumptions we might be bringing to our affirmative work—which to water and which to let go.

Next, we could turn to the specific piece we’ll be reviewing. The author will have provided reviewers with framing information for the review describing what they need from the process, with the option of responding to the prompts shared earlier in this draft plan (“Draft plan for ‘Water Your Work’ affirming review clinic,” 3rd paragraph). The reviewer group could compare what they’re hearing the author wants with the “how I generally evaluate scholarship” phrases and adjectives we’ve just discussed. We would identify if any of those usual approaches need to be altered to meet the author’s goals. For example, a reviewer could have told us they look for confident language when reviewing writing. If an author tells us they don’t value communication styles that overly perform certainty, and that they want to explore early ideas they’re not yet confident in, we’ll want to make sure the reviewers are explicitly comfortable with honoring that scholarly goal. Conversations like these, where we raise assumptions about review and then compare them with a specific author’s expressed needs, would give us a chance to identify if anyone isn’t on board with the core purposes of affirming review. If we can’t agree, that’s fine and not a comment on their skills or opinions, but it does mean that they cannot be a reviewer for this clinic.

Reviewer teaching, discussion, and practice are ongoing necessities. I’m familiar with a number of scholars (including me) who care about the values and goals discussed here but still struggle to always act in accordance with those beliefs. For example, I often find myself thinking of ways I would make things better, for my sense of “better,” but such input is not always germane, wanted, or needed. Ideally, this clinic is an opportunity for reviewers to practice and share ideas about being their best and most supportive scholarly selves. For the purposes of piloting this review in the near term, however, I’d likely focus on assembling only colleagues I’ve observed habitually review with generative intent and outcome rather than working with people who haven’t had as much practice performing affirmative review.

Reviewer preparation: prompts as guidelines

Over time, we could add useful reviewer prompts to a Google Doc. I’ve previously blogged personal notes on how I approach scholarly review, and I’d reread these before each clinic.14 These notes include prompts such as:

  1. Consider framing my feedback through my personal experience of reading the abstract; for example:

    1. “This submission makes me wonder … ”

    2. “This submission taught me … ”

    3. “I look forward to seeing how the author will … ”

  2. Explicitly indicate where my expertise/skillset is absent, or lags behind the author’s, so they know that context for my comments.

  3. Note that “support the author doing their best work” is thinking that can as easily underlie a bad review.

    1. E.g., don’t get stuck on how this could be a different project of more personal interest to me—evaluate this project at hand.

    2. I should be careful about my assumptions of what is best or what I think the publication venue should accept versus what the author wants to achieve and how and what we know of the publication venue’s goals. My interests and biases can act as gatekeepers, and I am helping strengthen a particular piece of work, not judging its worth.

  4. Remember that a description of research can be a separate layer from the work itself.

    1. E.g., if I’m confused by the methodology described in a conference abstract, this could as much be a matter of the description needing to be expanded as it could be an issue with the methodology itself. I can only comment on the writing and its description of such work.

  5. Do your comments explicitly identify what scope of the writing they apply to?

    1. Often, you can achieve strengthened writing through a few strategic sentences that clarify a point or even adjust an argument’s framing across multiple pages. It can aid feedback reception if each comment is clear about where, and to how much of the paper, you’re suggesting changes.

    2. Comments that refer to huge swaths of the piece, or make sweeping claims without pointing to specific, multiple examples that cause you to see that pattern, can be flags that your feedback isn’t based on strong evidence and/or fails to envision and convey what a strengthened paper could look like.

    3. Establishing what’s sound and strong in the piece can help the author retain energy for using your feedback well.

  6. Attend to how I frame my idea and how it might be received.

    1. E.g., is my idea something the author truly “should” do to meet their stated goals, using words like “should” or “must”? Or can I be clearer about the context for my thought, instead using phrasing along the lines of: “have you considered this approach …,” “double-checking: is this [summarize your impression of a sentence] what you mean?,” “another option could be …,” “could you say more about this idea here?”

We could use headings and a linked table of contents to organize such prompts thematically, so future reviewers can pull out those best for specific use cases. For example, the following selection of potential reviewing prompts best matches a conference abstract draft intended for a large DH conference such as the Alliance of Digital Humanities Organizations’ (ADHO.org) or the Association for Computers and the Humanities’ (ACH.org):15

  1. What do you see as the abstract’s theme or argument, in one sentence?

    1. How easy was this for you to identify and word? If it wasn’t easy, does that seem due to your unfamiliarity with the areas of work or due to how the author organized the information? If the latter, is a different organization urgently needed, or is “easy summation” not a necessary characteristic here?

  2. List three strengths of the abstract.

    1. Strengths can include things that made you interested in attending the talk to learn more, that you thought were worded well, or that you found innovative.

  3. Did you learn something you didn’t know before from reading the abstract?

    1. Let the author know, if so.

  4. Evaluate how the author’s approach fits their stated goal for the abstract.

    1. One approach is to list a couple points each on aspects that support their goal vs. could use additional iteration. (Suggest different or additional approaches only if those seem like a potentially better fit to the author’s stated goal.)

  5. Highlight where the abstract fits with the conference’s stated themes.

  6. Does the abstract clearly imply who may be interested in attending the talk?

    1. A common approach is including a sentence such as “This talk will be of interest to attendees who …,” then listing a couple of higher-level fields or topics the abstract fits beneath. You might use this model to reflect back to the author: “I think your abstract would be interesting to attendees who do [xyz] kinds of research.” This could help the author ID relevant areas they could more visibly highlight in the abstract, and affirm that their scholarly contribution impacts knowledge more broadly than they may assume.

In the reviewing group, we could discuss the restrictions and opportunities of the format in question, to help adapt our reviewing. For example, conference abstract reviewing should acknowledge if the writer has very little space (only 250 words, in the case of some ACH conferences), so any suggestions for adding material should assist with IDing how to reduce the rest of the abstract without dropping aspects the reviewer thinks important to retain. This also avoids the reviewer suggesting additions to the abstract that the author has very likely thought of and plans to include in an accepted paper, separating “this feels missing from your abstract” from “in your fuller paper, I’ll be interested to read about … ”

Guidelines and past successful prompts are tools toward cultivating written scholarship, but they are not magic incantations appropriate to every text, nor does using such prompts guarantee your review meets the goals of the guidelines. Attempting to perfectly describe what you want to see and not see in a review does not necessarily convey those wishes, guarantee they’ll be understood (even through good-faith attempts), or address ingrained reviewer and reviewee behaviors. Though some guidelines and prompts may have wide reusability, others are only effective for limited genres or contexts of writing. My goal in including example prompts is not to create a massive list or whittle down to some essential checklist, but to provide examples of ways of approaching and discussing work that can help reviewers meet authors’ expressed goals for a review.

Completing the review

After the reviewer cohort has discussed and internally established how it will review, and the author has provided their context and goals for our review, the reviewers will each read and annotate a private Google Doc copy of the writing. The reviewer group then meets to discuss their impressions and feedback, in a process modeled on the Scholars’ Lab staff committee review of graduate fellowship applications led by Brandon Walsh.16 Walsh provides these annual committees with a rubric, reviewer guidelines including aspects to especially attend to, and an initial framing meeting helping the committee all approach the review in the same way.

Any reviewer comments we decide aren’t appropriate to our goals will be dropped, and then all reviewers’ remaining Google Doc (GDoc) comments are merged into a single reviewer GDoc representing the group’s feedback. Where multiple reviewers had overlapping ideas, we summarize these to convey their strength (e.g., “multiple reviewers felt this was an extremely compelling argument”).

We then email the link to the GDoc to the author, with our thanks for sharing their work, the names and emails of each reviewer, and a paragraph highlighting several of the reviewed work’s strengths directly in the email body. We offer to meet and suggest several possible times to do so, should the author like to discuss our comments or iterate further with our support. We mention that the author is welcome to reach out to any individual reviewer, if that feels more comfortable than replying to the whole group. Finally, we ask the author to update us if they eventually make the work public (e.g., a conference talk or journal article) so that we can read and share their work.

Reviewee (author) guidelines

At the start of the clinic, we briefly describe the above plan to the author/reviewee, so they know what to expect, whether it matches their goals, and how to best steer us toward what they need. We commit to respecting the privacy of the author’s writing, ideas, and our comments about them, and identify to the author the names of the people who will review their work.

We share the timeline we need to follow, and why it is needed to fit the work in with reviewers’ other duties and commitments. Any clinic requires some set length of time to identify the reviewers who can participate; schedule times they can read, annotate, and meet; and return the review to the author. We commit to putting sincere time and effort into the process at the level the author desires, but this means authors do need to put some forethought into any writing with deadlines (e.g., conference submissions, grant proposals, writing for CFPs) so that reviewers can fully attend to the review without impeding other commitments and work-life balance.

We ask the author to provide us with the framing information described in the “Reviewer preparation: reflection & consensus” section above. At a line editing level, norms such as style, grammar, form, and other writing aspects can differ depending on both author and audience contexts; for example, we don’t want to suggest ways to “formalize” an author’s voice when they do not explicitly desire this. At a broader level, understanding the author’s context and goals will help us contextualize and review toward that author’s plans, giving examples of possible approaches they might wish us to take, such as supportively preparing them to face a known panel of skeptical or negative reviewers, helping them polish wording, or identifying patterns of strength in their writing style to help them notice and wield those strengths more consciously.

We want to prepare the author as best as possible for receiving the feedback in a way that feels helpful to them. In the future, I’m hoping to follow up on Walsh’s suggestion that we explore how to better coach authors on receiving feedback.17 Feedback reception is a major factor in the success of this generative process, but many authors have evolved their approach to receiving feedback in response to largely negative critique, not affirmative review.

Even in these pre-review conversations, we model a supportive tone enthused to read their work and confident it has strengths. We ask the author how feedback usually feels to them, and if there’s anything we can do toward presenting our thoughts in a way that keeps the author feeling good about themselves and their work. We can also prepare them for how much feedback to expect. Is more always better to them, or would providing fewer comments more focused on overarching ideas better help them edit? Does the feedback approach they’ve asked for match their preferred thickness of annotation? For example, if they ask for intense inline proofreading and word polishing, there will be many edits per sentence; if they want a general “Am I on the right track?” check-in, they may just wish to receive one summary paragraph as the entire feedback. If an author sees from the start of the review process onward that our priority is them and their work, this familiarity might help the author read our eventual written feedback in a similarly positive tone.

Conclusion: iterating on a speculation

I explore next steps building on the above speculative draft: comparing it with existing digital humanities review guidelines, identifying adjacent scholarly areas I’ll turn to for further reading, and offering some final reflections on my goals for review and scholarly values.

Reading my draft plan against other reviewing guidelines

Drafting the “Water Your Work” clinic plan above helped me clarify my goals and also identify where I should turn next to improve my thinking and practice. I’ll begin by more closely reading several DH-adjacent writing review guidelines.

Editors Jennifer Guiliano and Roopika Risam’s Reviews in Digital Humanities publishes peer-reviewed project statements by the creators of digital humanities projects, paired with peer-reviewed reviews of those projects and the framing statement.18 Their guidelines for reviewing include:

  1. acknowledge the word limits on the review and structure the list of required review components to be addressable within those limits;

  2. clearly list the aspects the reviewer should comment on, using framing concepts (summary, assessment, analysis, evaluation, identification) to indicate the desired approaches; and

  3. just as clearly identify what reviews should not include.

The last is a good window into the kinds of bad review procedures my clinic speculation pushes back on: “ad hominem arguments” and “attacks of any kind, including not being the project the reviewer would have developed.”19 In the future, I’m interested in exploring other reviewing guidelines to identify similar don’ts—reflecting on what those say about academic reviewing norms.

Rereading the Association for Computers and the Humanities (ACH) 2023 Conference Reviewer Guidelines, I observe the review rubric uses framing concepts to clarify the reasoning and desired appearance of evaluated aspects, as does Reviews in DH.20 For example, rather than “includes bibliography,” “Engagement with Relevant Scholarship” is the goal, and the guidelines further describe what that does and doesn’t mean: “The proposal explicitly engages with relevant scholarship and offers context within the current state of the fields in which it engages. Formal citations (in the author’s preferred style) are only required when using direct quotation.”

I note the adjectives used to convey goals (e.g., “helpful,” “constructive”) and the clarity about what a reviewer does (strengthen abstract, make recommendations to committee) and does not do (make final judgment on inclusion of the submission). As I continue exploring the possibilities for generative scholarly review, a word frequency analysis of various DH review guidelines could help me identify commonalities in what my field values. This analysis could also help me identify terms a clinic reviewer group might want to discuss, to identify any discrepancies in how we define such terms.
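A word frequency comparison like the one imagined above could start very simply. The sketch below is a hypothetical illustration: the guideline excerpts, stopword list, and function names are invented stand-ins, not quotations from Reviews in DH or the ACH guidelines; real input would be the full guideline texts.

```python
import re
from collections import Counter

# Invented stand-in excerpts; real input would be the full guideline documents.
guidelines = {
    "Reviews in DH": "Reviews should be constructive and helpful, offering evaluation and analysis.",
    "ACH 2023": "Reviews should be helpful and constructive, strengthening the abstract with recommendations.",
}

# A minimal stopword list; a real analysis would use a fuller one.
STOPWORDS = {"and", "the", "be", "should", "with", "a", "an", "of"}

def word_counts(text):
    """Count content words in a guideline text, ignoring case and stopwords."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS)

# Terms appearing in every guideline hint at values the field shares.
counts = {name: word_counts(text) for name, text in guidelines.items()}
shared = set.intersection(*(set(c) for c in counts.values()))
print(sorted(shared))  # prints "['constructive', 'helpful', 'reviews']"
```

The per-guideline `Counter` objects also support ranking the most frequent terms in each document, which could surface the discussion-worthy words a clinic reviewer group might want to define together.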

The ACH 2024 reviewer guidelines were adapted from the DH 2020 Reviewer Guidelines created by Laura Estill and Jennifer Guiliano, offering an example of how generative and accurate review guidelines can spread good practices.21 The DH 2020 guidelines include some reviewer tips, including the following I’ll be including in my clinic review toolkit: “If you disagree with the entire premise of the submission, be considerate of the work completed by the submitter when outlining your rationale. It is not appropriate to disagree without providing evidence to support one’s position.” While the clinic’s focus on supporting author-requested goals means we won’t be disagreeing with their entire premises, the approach applies to all feedback: strengthening a colleague’s writing via review means suggesting ways to improve, rather than vaguely claiming something is not good enough.

I was also heartened to see recommendations overlapping with ones I arrived at myself, in the clinic draft plan above. For example, “If you find a proposal wholly compelling, please still give one or two sentences complimenting its strengths.” We are scholars; we all have strengths; frankly, it is a failure in the intellectual strength of reviewers if they are only able to analyze writing enough to identify weaknesses, not possibilities and successes. Estill and Guiliano provide a better articulation of positive idea framing than what I attempted above (“Reviewer preparation: prompts as guidelines,” item #6): “Use positive language and affirmative statements rather than negative statements (e.g., ‘This submission could be improved by considering the following: … ’ rather than ‘You are missing X’).” Where my description risks sounding like we’re inaccurately cushioning a message, Estill and Guiliano’s advice simply directs: this is how we deliver this information.

My thinking and reviewing have benefited from the work of all my Scholars’ Lab staff colleagues. While our internal lab guidelines and rubrics aren’t publicly linkable at this time, I recommend Walsh’s piece “Questions to Ask When Applying” to give you a sense of how he leads staff review of applications for our graduate fellowship.22 His piece makes transparent what we’re looking for in applications and how we assess them; it also emphasizes that we have more strong applicants than we can accept and that an application’s outcome should not be taken as any negative comment on a scholar’s work or intellect. Similarly, his internal evaluation materials start from a place recognizing abundance: we have a variety of excellent proposals from scholars all deserving additional resources. Our eventual decision doesn’t need to be one managing to rank apples, oranges, and bananas against one another on some definition of “best”; rather, we look at factors like whose project plans most match the realities of what the program offers (including whether staff with the relevant expertise have enough available time to mentor).

Learning from adjacent scholarly areas

This speculation helped me identify where to next read and engage with existing scholarship:

The writing and reviewing. I picture this clinic focusing on DH-adjacent work genres such as proposals for conferences, grants, and funding; assessing digital project plans and evaluating built digital projects; and research blog and social media writing. While these DH formats can differ from essays and articles, they still share fundamental components such as argument, structure, evidence, voice, allotment of any word limits, and audience awareness. I’m experimenting in an area where I do not have deep past scholarly engagement (providing composition feedback) in order to practice research where I do have strong expertise (e.g., DH evaluation and assessment, scholarly infrastructure building, and knowledge community design). To improve aspects of my plan where I’m trying to argue and teach about writing, I could refresh my past composition pedagogy training by engaging with humanities areas including rhetoric and the pedagogical differences between teaching and providing feedback on writing; modes of literary interpretation and the affect of reading and interpretation;23 and generous approaches to reading against, reading with, and otherwise expanding the ways we critique text.

The writers and reviewers. I am more interested in, and more conversant with, the infrastructural and community design aspects of this research—that is, the writers and reviewers rather than the writing and reviewing. Instead of iterating on my clinic’s particular responses to written work in future writing, I am more likely to push my thinking further around the human, collective, and labor aspects of my role (digital scholarship research centers, communities, infrastructure) as well as that work’s governing values (social justice, collective action, joy). Focusing on digital humanities-adjacent writing felt more wieldy for this speculative exercise. Now, I want to reread the literature on reviewing digital and public humanities work with a different lens: not just how these guidelines shape what’s recognized and counted in their specific contexts (e.g., journals, tenure cases, conferences) but also their direct impact on the reviewers reading and applying those guidelines, in terms of community outcomes.

I might apply another speculative exercise: For a given guideline, what’s the best interpretation and outcome it imagines? What’s the worst scholarly and community outcome it allows (by direct choice or omission)? I’ll also turn to work around community-aware efforts like the scholarship of scholarly communications and the improvement of academic metrics. For example, Humane Metrics Humanities and Social Sciences (HuMetricsHSS) is an initiative developing and advocating for values-driven academic measures, assessment, and credit, and the better scholarly futures these could empower. eLife’s initiative limits the pool of pieces it reviews, but for those pieces it follows a no-rejection model emphasizing qualitative assessment, reducing conformance with a binary of accept-vs.-reject.24 Beyond the guidelines for a particular journal or conference, larger systems like those HuMetricsHSS intervenes in also shape how we review colleagues’ scholarship.

I frequently turn to my interests in collective action, transformative justice, and abolition for ideas of how to shape my digital scholarship efforts (which sometimes even lets me contribute back to those larger projects at the same time!).25 Thinking about these justice-seeking approaches has already yielded several prompts for my future research in building just and joyful spaces of knowledge-making:

Collective action: What can we achieve if we build it together? What could solidarity of author and reviewer look like? How might bread-and-butter union tools such as collaborative spreadsheet creation, meeting scheduling, and organizational mapping help us iterate and extend the reach of our generative scholarly approaches? What does it mean to “be organizable” in respect to peer review systems, when traditional ones typically rely heavily on unpaid labor from a profession no longer mostly paid to provide such work?

Transformative justice: Traditional writing/project review contains features analogous to our deeply flawed legal justice system, from the reviewer-as-judge model, to the emphasis on seeking out flaws and failures (which will always be findable, if that’s what you’re looking for). Can we model how we review our colleagues’ work on transformative justice efforts that transfer dialogue, reactions, and accountability to local community contexts; borrow their tools of facilitation and mediation; and reach for their vision of communally beneficial outcomes?

Might we notice other assumptions subtly structuring how we interact with others’ scholarship? For example, we let “I have completed this work for the purposes of this form of making it public” elide into “this work is complete and done,” rather than conceiving even of written instances of research as part of an ongoing stream of knowledge-making and iteration. The less we calcify our work toward A Final Form, the less implicit pressure there is to judge whether a given version of your research is Cosmically and Morally Enough rather than evaluate how to help it best convey your current thinking for some specific time and audience.

Abolition: You don’t need to know how we’ll ever get to something better than our legal justice system to understand the urgency of working toward fair, just, and caring outcomes. Still, many justice-seekers have trouble understanding and joining abolition movements. As traditional scholarly review relies on justice system models such as the moral and hierarchical power of judges, we might look to abolitionist and transformative justice work to understand how to reach potential co-conspirators in a more just and joyful academia: those held back by parallel concerns about the distance between here and our goal state, or by fear that abolition means removing current systems without a plan for the consequences of that change.

If you think our current level of diversity, innovation, care, and justice in academia is better than it was a century ago, have you considered why “as things are right now” should magically be the stopping point for improving how we evaluate and iterate on colleagues’ scholarship? Do we agree that things aren’t ideal, even if we don’t agree on whether or how to make things better? If you magically knew that we collectively worked hard and succeeded at reaching a more humane and effective form of scholarly review in the next x years, how would you guess we did that? And would that encourage you to take steps now to get us to that future? Abolition turns “things shouldn’t be this way” into “we will no longer allow things to be this way,” a reframing that makes the work of imagining and acting toward a better system both more urgent and more reachable.

What might we lose with this approach? What might we lose without this approach?26

What if we firmly escort Reviewer 2 out of the editorial process—yet suffer no loss to scholarly potential?

I originally wrote this section as an exploration of “what we might lose with this approach” (to scholarly review). When designing digital scholarship, I try to notice what unintentional harm my work could do.27 This approach is particularly helpful when combined with an awareness of power structures and systemic inequities, frequently asking “For whom?” and noticing how that group does or doesn’t overlap with who you can and can’t speak for.

As an example, I’m currently in the early stages of a text analysis project that explores interviews with trans folks, all of whose interview transcripts are already public online. My work might add to the visibility without safety already directed at trans folks, increasing the likelihood that the interviewees are targeted for harassment; or my documenting how to perform the text analysis could serve as pseudo-scholarly cover for someone who wants to learn to “read” the same transcripts with a harmful agenda and interpretive bias. Possible harmful outcomes do not always mean you should not proceed with a project at all, but they are a reason to think carefully through what your work could make more possible and to consider what you can do to diminish or avoid negative outcomes.

Similarly, I wanted to analyze my speculative review plan for undesirable impacts: who the review approach wouldn’t help, or could hurt, and the pitfalls of adjacent practices (such as replacing the “mentor” model with an “advocate” one)28 that could help us avoid similar failures of intent vs. impact, consent, and listening.29 For example, women, gender minorities, BIPOC, and other under-resourced and systemically harmed colleagues frequently undergo more destructive review; so I considered how I’d want us to handle an author’s request that reviewers model harsh judgment, to help the author prepare for it.

I might identify skeptical or destructive comments as external imaginings (“here are some ways I think other people might attack your writing … ”). I could pair such feedback with recommendations based on the writing’s strengths (“… and you already answer this convincingly at this later point” or “… and I know you’ve got the expertise to teach them, for example by expanding this paragraph with more direct examples”). If I were going to support this kind of predictively defensive author goal, I would want to attend throughout to affirming that:

  • the reviewers and author are partners working together to prepare for possible critiques

  • the reviewers are certain of the author’s ability to adequately respond to such critique, and are here to help them imagine it, evaluate whether it’s an appropriate or necessary matter to dignify with the effort of a response, and develop their thinking further should the hypothetical critique suggest a generative response

  • the goal is to help future readers and reviewers learn and build on the author’s work better, not to counter or satisfy every thought or quibble with their work

I might do those things. But I won’t. I don’t regret taking the time to think carefully about how my plan could do harm. But I don’t think anything approaching “okay, I’ll pretend to be mean” has a place in the future I’m trying to build, even if interpreted as productively as the ideas in the previous paragraph (which I’ll note I only managed to write by taking up half its length to re-anchor the whole idea to generative goals—a little red flag). “What could we lose with this approach?” might not be the wording I need to use in the future: there is no zero-sum game; there doesn’t need to be something we lose at all. Article reviewer Brandon Walsh helped me reframe my thinking: “If you’re worried about what we could lose, what does that say about your assumptions about academia?”30

This doesn’t mean that we ignore the degree to which we force losses on our colleagues every time we let a toxic system continue; it does mean that without keeping an eye on the goal—just and joyful communal knowledge-making for everyone who wants to participate—we’re not going to get there. We make no better world by slapping a light veneer of “I’m supportive of your work, but I’ll be harsh to prepare you for how other people might be” over the same old, ineffective review practices. I see that my analysis backfired, offering destructive “rigorous” review a backdoor back into my scholarly feedback design.

Destructive critique (whether delivered firsthand or as hypotheticals) simply has no place in my clinic goals. Drawing that line does not hinder scholars from seeking such an experience elsewhere, but it does confirm I will be no part of continuing to nurture it inside my future-building work. This speculation doesn’t intend a future where we routinely “water your work … except if you know others will try to burn it down, so you want us to try preemptive arson to help you build a fire extinguisher.” That’s just the state of things as they already are, in some of the relatively better current review environments. It fails to communicate what review could and should be: that generative review improves scholarship and prepares it for wider sharing.

In trying to tread carefully and think through what authors might want us to provide, I ended up doing the work I want to reject and treating destructive quibbles as worth significant labor, giving aid to something I actually don’t want to nurture in any form:

“That’s just the kind of harsh review we should prepare for.” When supporting students readying for the job market, there’s a balance in informing them of unjust but likely realities (e.g., how dress can impact hiring) without condoning or replicating those realities through our advice.31 Part of striking that balance involves asking what students already know and believe. With the example of professional dress, if they’re already aware, repetition from me only strengthens the feeling that everyone must continue to pass on even practices we believe are unjust. If you thought dress rules were just and good, would your delivery of this opinion differ in any way beyond subtracting “of course, I don’t think this should matter”?

I see a similar tension in reviewing academic review: as much as we can try to build just and joyful places for communally improving our scholarship locally, much academic writing ultimately will face evaluators or at least audiences with different conceptions of effective and just intellectual engagement. Some scholars think participating in harsh review “is just part of the life of the mind” or necessary to help prepare colleagues for future harsh reviewers. Some scholars have only experienced kernels of useful feedback wrapped in destructive critique and don’t imagine there’s a way to keep the former and ditch the latter delivery mechanism. I do not believe we should replicate these bad approaches ourselves, for any of these reasons.

“[Your] response could be ‘Why do you feel as though enthusiastic support is mutually exclusive from respecting you as a thinker?’ ”

—Brandon Walsh, April 3, 2024 (article draft review comment)

“That’s just how you show you respect them as a peer scholar.” Once when I was invited to a doctoral candidate’s defense, I responded to the email noting I’d be there to cheer them on and wouldn’t do any harsh pop quizzing. Their response was that they were an expert in this area and wanted to be treated that way and given opportunities to show that expertise. I realized I hadn’t communicated that my goal for attending the defense was mutual learning and celebration of their work, or that I meant to keep my contributions generative, avoiding the too-common attacks made for the sake of showing off or establishing one’s own expertise. We do not need to do or be anything to deserve others’ respect. Because I was already familiar with the candidate’s work, they had already established their strong expertise in my eyes, and I was prepared to see them rock their defense (and they did!).

This conversation also helped me realize that something I assumed was implicit isn’t necessarily so: that I would interact with a colleague’s work with attention that honors the work as written and its values, including if I disagreed with it. A generative approach does not mean engaging with the work any less than I would with other colleagues’ work, challenging them to continue learning and sharing that learning in the confidence that they wanted to and could. Nor does a generative approach mean imputing inaccurate strengths to their work.32 Enthusiasm for noticing and amplifying the strengths and good in our work does not mean an inability to discern or compare differences; a commitment to appreciating, supporting, and empowering your colleagues simply does good.

I’m used to the privilege of working with people whose generative feedback pushes scholarship forward without violence, and I know that isn’t the norm—so I spend most of my breath on the possibilities of wholly generative approaches. That choice could sound like flawed practice to scholars who haven’t often gotten to see non-destructive approaches advancing knowledge. We don’t fix that, we don’t make that no longer the norm, if we don’t put our whole scholarly minds and hearts into “watering” the work of our colleagues: building strength without doing harm, finding ways for both scholar and scholarship to flourish. Asserting the rightness of that agenda, spreading its practice, means more scholars who stay with the field, who share their thinking early and often, whose knowledge multiplies and benefits everyone. Sharing possible places or paths to improve a scholar’s work, if they request that, can happen through building up rather than tearing apart.

Bethany Nowviskie’s description of critical care, in the context of the digital humanities, resonates:

I think we’d better foster digital cultural heritage communities of practice if we more self-consciously built them as extensible, self-perpetuating networks of care. That’s not merely the kind thing to do. Please do not mistake it for something idealistic and motherly and sweet. I offer care as a hard-nosed survival strategy, and as a strategy to increase the reach and grasp (which is at the root of the word “capacity”—the “capture”) of the humanities. We must take practical steps to prevent fatigue at the individual and community level in digital humanities and cultural heritage fields and to promote, in Noddings’ terms, the kind of happy yet critical engrossment—in each other and in the stuff—that I’ve seen develop in the best collaborative teams.33

Care isn’t just kind; it’s effective at inspiring, creating, communicating knowledge work, in part by making our scholarly communities inclusive, livable, sustainable.

Calling back to my planned next research steps above: care is an active, not passive, state. The hesitancy that allows destructive practice a way back in is a reformist, rather than an abolitionist, outcome. The ills of our current system—including the reality that colleagues will likely encounter destructive conceptions and enactments of academic Rigor elsewhere, even if you succeed in reducing them in your local community—are not ills that the future we build should accommodate. We don’t pretend they don’t exist or that we’re beyond needing to care and prepare for them, but they are bugs and not features of the status quo, and as such should be unpacked rather than allowed to stand as necessities.

Concluding values for speculative scholarly design

“It is our job to encourage each other.”

—Motto on a pencil Jeremy Boggs gave the author, quoting Fred Rogers

Our peer review can be part of our colleagues equipping themselves with confidence in their strengths and potential rather than a stress test of their intellectual armor.

We are here to confirm our colleagues in their strengths.

We are here to give them a chance to experience what academia could feel like if we shook loose what no longer (never) serves us about how we engage with one another’s work. There are ways we can help them prepare for unrepresentative levels of criticism they might face, without condoning or repeating such failed approaches.

Do not wait to tell people they are valued—and “people” includes the scholars, thinkers, writers in your life. Regularly, abundantly, joyfully reflect back to them their strengths, their worth, the many ways large and small they make the world a better one. Through their creativity and their words, in addition to their actions; through what you see them noticing and attending to; through how they frame or amplify knowledge.

We can do urgent, difficult, risky knowledge work while treating ourselves and others in ways that solely empower both the people and their work. If this piece resonated with you, I hope you experiment more with letting the future you dream directly shape the actions you take toward getting there.

Author Biography

Dr. Amanda Wyatt Visconti (they/them) directs the Scholars’ Lab, an experimental and digital humanities research center at the University of Virginia with a “people over projects” focus. An appointed officer of the U.S.-based DH scholarly organization, the Association for Computers and the Humanities, they also founded and run the Digital Humanities Slack. Previously a tenure-track DH professor at Purdue University, they hold a Literature Ph.D. with a unique, no-chapters dissertation and an Information M.S. specializing in DH human-computer interaction. Visconti regularly shares research in all stages on social media (bsky.app/profile/literaturegeek.bsky.social) and by blogging (LiteratureGeek.com). Their interdisciplinary interests include designing just and caring digital scholarship infrastructure, collective spaces of learning and solidarity, and zine materiality.

Notes

  1. The title “Water Your Work” is inspired by Kathleen King’s piece “To Teach, to Grow, to Garden,” a reflective teaching statement that emphasizes a just and generative approach to scholarship (“a gardener, not an enforcer”), and by the metaphors of ecological growth and attention from Katina Rogers’ CFP for this special issue.
  2. As @elikaortega: “I think I’ll start my Fall Intro_to_DH with a ‘From What is DH?’ to ‘What can DH be?’ session. Inspired by #whatifDH2016 and #whatifDH2017,” Twitter, July 6, 2015. Author accessed from personal screenshot of post (no public link available).
  3. While the original website is no longer live, two pages of the website giving a sense of the site are available in archival form at http://web.archive.org/web/20211208193538/https://aualternateuniversity.com/ and http://web.archive.org/web/20200516201125/https://aualternateuniversity.com/cfp. Unfortunately, the pages of most interest to speculative approaches to academic infrastructure design—“Provocations” and “Fanfiction?”—do not have an archival copy that I could locate.
  4. “ ‘How Can You Love a Work, If You Don’t Know It?’: Critical Code and Design towards Participatory Digital Editions.” This dissertation is the website http://dr.amandavisconti.com.
  5. See “Section 4: Reimagining Editions” in my dissertation white paper: https://dr.amandavisconti.com/AmandaVisconti_DigitalDissertationWhitepaper_OfficialSubmission.pdf.
  6. Influences in my understanding the white supremacist underpinnings of a wider variety of social injustice include Tricia Hersey’s Nap Ministry, for identifying and pushing back on racist capitalist logics related to productivity, labor, and rights we shouldn’t need to earn, and Tema Okun’s “White Supremacy Culture” article and the remix/expansion “(divorcing) WHITE SUPREMACY CULTURE: Coming Home to Who We Really Are” for discussions of how approaches anchored in perfectionism, urgency, scarcity, and other concepts do the work of white supremacy: https://www.whitesupremacyculture.info.
  7. A “strawperson” is when you define the opposing side’s viewpoint as something weak and/or easily vanquishable by you on all points; a “steelperson” is when you try to make the strongest and most charitable definition of an opposing side’s viewpoint (and then think through why you still oppose it, usually).
  8. An example of context mattering: Twitter conversations in which an expert shares an opinion, then randos stridently disagree. It isn’t unusual for those disagreers to frame their claims as coming from a place of relationally greater expertise, without having performed the basic contextual work of checking the original poster’s profile to understand their expertise and possible intended audience. Exchanges in which the original poster clearly has the more appropriate expertise in the matter, and the replier simply failed to consider that context, are popularly amplified (perhaps for the satisfaction of a non-gaslightable microaggression: proof someone did not consider that someone else might know what they’re talking about). For example, maybe the original poster is the author of the superhero comic being interpreted, or the scientist behind the debated astrophysics study, and the disagreer has no special knowledge or experience thinking about these areas.
  9. “Reviewer 2” colloquially means a very mean reviewer. I don’t have a particular instance in mind, but a common academic joke revolves around “Reviewer 1” for a written piece loving your work, “Reviewer 2” hating it and having no problem saying so at length, and the oddness of balancing these two very different assessments. For example, Ryan Cordell made a recent Bluesky post encouraging us to “Write like reviewer #2 is on vacation.”
  10. Ronda Grizzle, Discord message (private server) with author, February 14, 2024. In addition to this pun pre-eminence, see also Grizzle’s scholarship cited earlier in this article and in the references list.
  11. Rising is the process that can make some breads incorporate air and increase height and total volume rather than remain flat like a pan of brownies.
  12. The Scholars’ Lab charter cites the xkcd comic “Ten Thousand” (https://xkcd.com/1053/) as a motivating reference. In the comic, a person has not heard what happens when you combine Diet Coke and Mentos; the reaction of their companion is not disbelief or derision for their not knowing something yet but instead delight to get to be part of introducing someone to something new to them.
  13. Ronda Grizzle, reviewer comment on private rough draft, Google Doc, April 5, 2024.
  14. These prompts include adapted pieces of my writing previously published as part of my “Personal Guidelines for DH Journal and Conference Reviewing” research blog post at https://literaturegeek.com/2019/12/02/writing-DH-conference-journal-reviews on December 2, 2019.
  15. Again, from https://literaturegeek.com/2019/12/02/writing-DH-conference-journal-reviews.
  16. See https://scholarslab.lib.virginia.edu/blog/questions-to-ask-when-applying/.
  17. Brandon Walsh, reviewer comment on private rough draft, Google Doc, April 3, 2024.
  18. https://reviewsindh.pubpub.org
  19. Ad hominem arguments are those directed at the author and their choices rather than evaluating the author’s writing.
  20. See https://ach2023.ach.org/en/reviewer-guidelines/. These are described as adapted from the DH 2020 Reviewer Guidelines created by Laura Estill and Jennifer Guiliano.
  21. https://dh2020.adho.org/reviewer-guidelines
  22. https://scholarslab.lib.virginia.edu/blog/questions-to-ask-when-applying/
  23. Reviewer Brandon Walsh suggests Rita Felski’s work exploring a post-critique approach to literary reading and interpretation, including her Uses of Literature (Blackwell, 2008) and (with co-editor Elizabeth S. Anker) Critique and Postcritique (Duke University Press, 2017).
  24. Suggested by anonymous Reviewer 2 of this article: eLife, “eLife’s New Model: What Is a Reviewed Preprint?,” Inside eLife (blog), January 16, 2024, last accessed August 12, 2024, https://elifesciences.org/inside-elife/14e77604/elife-s-new-model-what-is-a-reviewed-preprint.
  25. See, e.g., the public interface to my zine collection, especially the zines tagged with “Collective & alternative power & group structures” or “Activism, resistance, abolition.”
  26. Or something similarly speculative; not claiming this clinic draft is The One Way.
  27. See, e.g., L. M. Sacasas’ “Do Artifacts Have Ethics?” at https://thefrailestthing.com/2014/11/29/do-artifacts-have-ethics/ for a list of questions to guide this kind of ethical design review.
  28. A trend toward identifying some of the flaws of professional/scholarly mentorship with the term “mentor” (e.g., the top-down “the mentor knows more and is always right” dynamic; people excited to feel good about themselves by taking on the mentor role without having the consent of the “mentee”; “mentors” attacking colleagues who decline or eventually cease to want a mentor), and toward using terms like “advocate” to point at models focused on if and what the colleague in question asks for (e.g., to be suggested for professional opportunities, to have help amplifying their work).
  29. I changed this clinic’s initial title (an “Affirmation Clinic”) to “Water Your Work” to better shift our framing away from “we give what we the reviewers conceive of as (author-desired?) affirmation” and more toward what any given author communicates as their goals and preferred terminology for feeling their work has been communally supported.
  30. Walsh, reviewer comment.
  31. See, e.g., Brandon Walsh and my piece “How We Talk and Write about DH Jobs,” Scholars’ Lab (blog), October 9, 2019, accessed April 21, 2024, https://scholarslab.lib.virginia.edu/blog/dh-cover-letters/.
  32. I was going to say “this doesn’t mean I’m pulling punches,” but scholars: WHY ARE YOU PUNCHING PEOPLE and why is that metaphorically presented as effective?!
  33. See Bethany Nowviskie’s “on capacity and care” blog post on their personal research blog, October 4, 2015, last accessed August 12, 2024, https://nowviskie.org/2015/on-capacity-and-care (emphasis added).

References

Directly cited in article

Adair, Cassius. AU: Alternate University. CFP website. Accessed April 21, 2024. https://aualternateuniversity.com.

Boggs, Jeremy. Gift to author (pencil engraved with quoted text). Summer 2022.

Cordell, Ryan (@ryancordell.bsky.social). “Write like reviewer #2 is on vacation.” Bluesky, May 13, 2024, 9:42 a.m. https://bsky.app/profile/ryancordell.bsky.social/post/3ksesbtayxh2z.

Digital Humanities 2020 Organizing Committee. “Reviewer Guidelines.” Accessed April 5, 2024. https://dh2020.adho.org/reviewer-guidelines/.

Drucker, Johanna. SpecLab: Digital Aesthetics and Projects in Speculative Computing. Chicago: University of Chicago Press, 2009.

eLife. “eLife’s New Model: What Is a Reviewed Preprint?” Inside eLife (blog), January 16, 2024. Last accessed August 12, 2024. https://elifesciences.org/inside-elife/14e77604/elife-s-new-model-what-is-a-reviewed-preprint.

Garcia, Felecia Caton, and Heather Pleasants. “What If? Speculative Fiction and the Future of Education.” Digital Pedagogy Lab course, Summer 2021. Website defunct on April 5, 2024.

Grizzle, Ronda. Discord message (private server) with author, February 14, 2024.

Grizzle, Ronda. Private Google Doc comments on rough draft of article. April 5, 2024.

Grizzle, Ronda, et al. “Praxis Program Charters.” Praxis Program scholarly project website. Accessed April 21, 2024. https://praxis.scholarslab.org/charter/. Citation of group-authored works focuses on Grizzle’s particular impact teaching graduate collaborators to envision and co-author these digital humanities team charters.

Guiliano, Jennifer, and Roopika Risam, eds. “Instructions for Authors: Reviews.” Reviews in Digital Humanities. Accessed April 5, 2024. https://reviewsindh.pubpub.org/for-authors; https://doi.org/10.21428/3e88f64f.

Hersey, Tricia. The Nap Ministry. Project website. Accessed April 5, 2024. https://thenapministry.com.

HuMetricsHSS: Humane Metrics Initiative. Project website. Accessed April 27, 2024. https://humetricshss.org.

Johnson, Jessica Marie. LifexCode: DH Against Enclosure. Scholarly lab website. Accessed April 21, 2024. https://www.lifexcode.org.

King, Kathleen. “To Teach, to Grow, to Garden.” Scholars’ Lab (blog), January 9, 2024. Accessed April 15, 2024. https://scholarslab.lib.virginia.edu/blog/to-teach-to-grow-to-garden/. Fall 2023 conversation with the author also influenced my reading of this piece.

Luo, Crystal, and Brandon Walsh. “Digital Humanities Pedagogy and Labor.” Roundtable presentation at 2022 DH Unbound conference (virtual), May 17–19, 2022. https://dh-abstracts.library.virginia.edu/works/12288. Live talk attended by the author.

Miller, Laura. Public-facing ephemeral and private internal scholarship while co-directing the University of Virginia Library Scholars’ Lab. Conversation and collaboration with, and teaching and research observed by, the author, 2017–2024.

Munroe, Randall. “Ten Thousand.” xkcd (webcomic). Last accessed August 12, 2024. https://xkcd.com/1053.

Nowviskie, Bethany. “on capacity and care.” nowviskie.org (blog), October 4, 2015. Last accessed August 12, 2024. https://nowviskie.org/2015/on-capacity-and-care.

Okun, Tema. “(divorcing) WHITE SUPREMACY CULTURE: Coming Home to Who We Really Are.” White Supremacy Culture. Project website. Accessed April 5, 2024. https://www.whitesupremacyculture.info.

Ortega, Élika. Personal website. Accessed April 21, 2024. https://elikaortega.net.

Ortega, Élika (@elikaortega). “I think I’ll start my Fall Intro_to_DH with a ‘From What is DH?’ to ‘What can DH be?’ session. Inspired by #whatifDH2016 and #whatifDH2017.” Twitter, July 6, 2015, 9:40 a.m. From the author’s personal archived screenshot of the post (public tweet link no longer available).

Parham, Marisa, Sheila Chukwulozie, Andrew W. Smith, and Lauren Tuiskula. .break. dance. Digital project, last updated July 4, 2019. Accessed April 21, 2024. https://irlhumanities.org/projects/breakdance.

Parham, Marisa, et al. Immersive Realities Labs for the Humanities. Scholarly lab website. Accessed April 21, 2024. https://irlhumanities.org/projects.

Rogers, Katina. “Call for Papers: Special Issue, ‘On Gathering: Exploring Collective and Embodied Modes of Scholarly Communication and Publishing.’ ” Journal of Electronic Publishing, October 15, 2023. Accessed April 5, 2024. https://journals.publishing.umich.edu/jep/news/106/.

Sacasas, L. M. “Do Artifacts Have Ethics?” Technology, Culture, and Ethics (blog), November 29, 2014. Accessed April 5, 2024. https://thefrailestthing.com/2014/11/29/do-artifacts-have-ethics/.

Saunders, Jonny L. (@jonny@neuromatch.social). Mastodon post, July 9, 2023. Accessed April 5, 2024. https://neuromatch.social/@jonny/110684493228473104.

Scholars’ Lab. “About Scholars’ Lab.” Scholars’ Lab website. Accessed August 12, 2024. https://scholarslab.lib.virginia.edu/about/.

Scholars’ Lab. “Charter.” Scholars’ Lab website, June 2016. Last accessed August 12, 2024. https://scholarslab.lib.virginia.edu/charter/.

Varner, Stewart. “What DH Could Be.” Stewart Varner: Scholarship, Libraries, Technology (blog), January 10, 2016. Accessed April 21, 2024. https://web.archive.org/web/20240515064754/https://stewartvarner.net/2016/01/what-dh-could-be/.

Visconti, Amanda Wyatt. “‘How Can You Love a Work, If You Don’t Know It?’: Critical Code and Design towards Participatory Digital Editions.” PhD diss., University of Maryland, 2015. http://dr.amandavisconti.com. See “Section 4: Reimagining Editions.”

Visconti, Amanda Wyatt. “Personal Guidelines for DH Journal and Conference Reviewing.” Literature Geek (blog), December 2, 2019. Accessed April 5, 2024. https://literaturegeek.com/2019/12/02/writing-DH-conference-journal-reviews.

Visconti, Amanda Wyatt, ed. “Zines: Zine Library.” Accessed April 8, 2024. https://airtable.com/appY7WyBFjSzLXQd6/shr3DDj5X1uNPUzyn. Online interface to zine collection catalogue.

Visconti, Amanda Wyatt, and Brandon Walsh. “How We Talk and Write about DH Jobs.” Scholars’ Lab (blog), October 9, 2019. Accessed April 21, 2024. https://scholarslab.lib.virginia.edu/blog/dh-cover-letters/.

Walsh, Brandon. Private Google Doc comments on rough draft of article. April 3, 2024.

Walsh, Brandon. “Questions to Ask When Applying.” Scholars’ Lab (blog), May 30, 2023. Accessed April 21, 2024. https://scholarslab.lib.virginia.edu/blog/questions-to-ask-when-applying.

Scholarly influences not directly cited in article

Alpert-Abrams, Hannah. “Open letter to my friend, who I love, who wants to get a PhD in literature.” Hannah Alpert-Abrams (blog), on Medium, July 19, 2020. Accessed April 5, 2024. https://halperta.medium.com/open-letter-to-my-friend-who-i-love-who-wants-to-get-a-phd-in-literature-87392dabec16.

Abrams’ smart and direct written scholarly advocacy inspires my work; in particular, this passage was in my mind while writing the section denouncing rigor: “What I can tell you is that you already have everything you require to write, to teach, to be a voice for your community. Fuck credentials. You are already extraordinary and powerful and valuable in this world.”

Boggs, Jeremy, Ronda Grizzle, Laura Miller, and Brandon Walsh.

These colleagues read my accepted abstract and shared kind comments that energized my first drafting.

Dombrowski, Quinn.

Dombrowski’s work models how DH-adjacent scholarly projects can incorporate effective justice advocacy and joy, play, and creativity, with the dual presence of justice and joy work strengthening the whole. Examples include the Textile Makerspace at Stanford (with co-makers), Saving Ukrainian Cultural Heritage Online (co-coordinated with Anna Kijas and Sebastian Majstorovic; with support from many additional volunteers), and The Data-Sitters Club (co-datasitting with Lee Skallerup Bessette, Katia Bowers, Maria Sachiko Cecire, Anouk Lang, Roopika Risam, and Mark Algee-Hewitt; and others).

Grizzle, Ronda. Feedback on rough draft of this article. April 2024.

Feedback on the first draft of the article, including proofing, argument improvements, ideas for expansion, and helping me notice strengths in the article.

Ladhani, Sheliza, Mairi McDermott, and Stephanie Tyler. Formal article review. April–June 2024.

Feedback comments, suggestions for expansions and additional directions, and line edits on this article that were wholly in keeping with what this article imagines: brilliant and generous gifts that improved my thinking and contributed to this article’s published form.

Reviewer 2. Quote from formal review comments on this article. May 2024.

The anonymous Reviewer 2 of this article suggested a resource I then cited in this article.

Rogers, Katina. “Call for Papers: Special Issue, ‘On Gathering: Exploring Collective and Embodied Modes of Scholarly Communication and Publishing.’ ” Journal of Electronic Publishing, October 15, 2023. Accessed April 5, 2024. https://journals.publishing.umich.edu/jep/news/106/.

In addition to how Rogers’ CFP inspired and shaped this article (as mentioned above), Rogers’ reading of my abstract also contributed to this article.

Scholars’ Lab current and former staff and faculty team colleagues (Arin Bennett, Jeremy Boggs, Chris Gist, Ronda Grizzle, Zoe LeBlanc, Shane Lin, Drew MacQueen, Laura Miller, Will Rourk, Ammon Shepherd, and Brandon Walsh).

Staff colleagues on the Scholars’ Lab team during my current role there (April 2017–present) have regularly influenced my thinking and doing through their models of kind, just, joyful, smart scholarship. Much of my thinking about generative review and better futures for academia was developed through conversation and practice with them, by observing their models, and through their feedback on my work at all stages.

Walsh, Brandon. Feedback on and before rough draft of this article. April 2024.

Slack messaging with me when I was initially developing this idea for a blog post. Feedback on the first draft of the article, including proofing, argument improvements, ideas for expansion, helping me notice strengths in the article, and identification of other scholarship I could read and build on.

Encouraging me to “zine that shit” and lean harder on the quotes, stories, and non-traditional ways of communicating and arguing that felt effective for my goals and were also allowed by the CFP.