“Issues of Labor, Credit, and Care in Peer-to-Peer Review Processes”

What follows is the text of a presentation I gave at the MLA 2019 Convention in Chicago, Illinois, on January 5, 2019, in Session 613: Getting Credit in Digital Publishing and Digital Humanities Projects.

Thank you to Anne Donlon for organizing this session and to Harriett Green for chairing it. I’m excited to be with you today to talk about issues of credit, care, and labor in peer-to-peer review processes. My original title focused only on credit and labor, but I realized as I was writing it up that a full accounting of labor and credit necessitated attention to the subject of care, as well.

In this talk, I’m going to:

  • begin by discussing a few models of peer review and peer-to-peer review;
  • then discuss issues of labor as they play out in such review processes;
  • then show how peer-to-peer reviews can be structured with care to ensure that participant labor is valued and respected;
  • and end by talking about issues of credit and the overall goals of peer-to-peer review.

Peer Review and Peer-to-Peer Review

I’m choosing to focus on peer review, and on network-enabled peer-to-peer review in particular, for a few reasons:

  • first, it is a space of scholarly communication in the academy where we see technology used to alter existing conventions of academic work;
  • second, peer-to-peer review is not often discussed in terms of credit and labor, so it seemed a useful topic to explore in a session that deals more broadly with the way we value the work that we and our colleagues do;
  • third, evolving forms of peer-to-peer review have been used in a variety of prominent digital humanities publishing projects in recent years, making it a timely subject;
  • and fourth, I’ve experimented with multiple forms of peer-to-peer review myself and have some thoughts to share about them.

Before progressing further, I want to take a moment to situate my discussion of peer review within contemporary DH work on scholarly communication. Here, Kathleen Fitzpatrick is my guiding light; her work in Planned Obsolescence: Publishing Technology and the Future of the Academy (2011) historicizes peer review and charts the way it is changing in the era of networked scholarship.

Fitzpatrick builds on the work of Mario Biagioli to exhume the history of peer review and its entrenchment in systems of censorship and disciplinary control. Fitzpatrick notes the many weaknesses of double-blind peer review, in which journal articles and book manuscripts are circulated to reviewers in such a way that neither the identity of the author nor the identity of the reviewer is disclosed. Although double-blind peer review has often been implemented as a way of eliminating bias in the reviewing process, Fitzpatrick argues that the anonymous space of the double-blind peer review is ripe for abuse, manipulation, and uncharitable communications by reviewers and editors.

Fitzpatrick poses what she calls “peer-to-peer” review as an alternative to double-blind pre-publication review. In peer-to-peer review, projects are reviewed openly by a community of respondents, whose replies to the text are visible to the author and to each other.

Examples of recent publications that have used this kind of process include:

  • Fitzpatrick’s own Planned Obsolescence
  • Jason Mittell’s Complex Television
  • Digital Pedagogy in the Humanities, edited by Rebecca Frost Davis, Kathy Harris, Jentery Sayers, and me
  • Catherine D’Ignazio and Lauren Klein’s Data Feminism
  • and Jeff Maskovsky’s edited collection Beyond Populism: Angry Politics and the Twilight of Neoliberalism

And in the sciences, there are a variety of other models of pre-publication peer review, including arXiv, F1000Research, and PLOS ONE.

[Here I spoke extemporaneously about an article published the night before in The New York Times, “The Sounds That Haunted U.S. Diplomats in Cuba? Lovelorn Crickets, Scientists Say,” which was based on a paper published on bioRxiv. The NYT article noted that the paper had not yet been submitted to a scientific journal, but it was already receiving attention in the mainstream press.]

You’ll notice that a few platforms recur across these examples:

  • CommentPress, a theme and plugin for WordPress
  • GitHub, a site for sharing code that has also been used for peer review
  • PubPub, a new platform from MIT Press
  • And Manifold, a new publishing platform from the University of Minnesota Press and my team at the CUNY Graduate Center

Beyond these peer-to-peer models and platforms lies a set of hybrid options, some of which I’ve explored myself in my collaborative publications. In the Debates in the Digital Humanities book series from the University of Minnesota Press, for instance, all volumes undergo a private community review in which authors review each other’s work, followed by an intensive editorial review process. Special volumes in the series then receive a more traditional blind review administered by the Press.

The community review of the DDH volumes is semi-public. The review site itself is private and viewable only by contributors to the volume.

Reviewers can see author names and authors can see the names of reviewers. All authors can review any piece in the book, though each is specifically assigned to one or two pieces.

In early volumes, we simply opened pieces up for general review; for more recent volumes, we have been asking reviewers to leave comments throughout but to reply to a specific set of evaluative questions at the end of the piece.

This process is followed by a revision request in which the editors take account of the feedback and ask authors to revise their pieces accordingly.

Labor

This is all a lot of work. How are we to value the labor of peer-to-peer review?

To begin, we have to acknowledge the situation within which we are working – the way that the internet, and technology more generally, can exacerbate the processes of deskilling and the devaluing of labor.

As Trebor Scholz says in Digital Labor: The Internet as Playground and Factory:

“Shifts of labor markets to the Internet are described [in this book] as an intensification of traditional economies of unpaid work”

and

“each rollout of online tools has offered ever more ingenious ways of extracting cheaper, discount work from users and participants”

Scholz and others in that book are obviously talking about the commercial internet, especially as it intersects with social media – the way, for instance, that newspaper sales fell when social media platforms such as Twitter and Facebook became a primary space for news consumption.

Of course, there is a clear difference between the business model of a pre-internet content business such as newspaper publishing and the process of academic peer review, which generally does not involve financial compensation (depending on how much one values a few hundred dollars’ worth of university press books – typical compensation in the academy for reviewing a full book manuscript).

But there are clear connections to the knowledge economy more generally and to issues of crowdsourced labor.

As we ask our colleagues to participate in open community reviews, we need to avoid a situation in which the work of public peer-to-peer review essentially becomes a site of alienated labor. Probably the most dangerous potential for that to happen occurs when work that has gone through open peer review winds up being published by for-profit entities such as Elsevier. In such cases, the labor of peer-to-peer review would certainly resemble the vampiric capital discussed by Marx.

In order to prevent such futures, we might turn to the practices and rhetorics of care as articulated in recent years by a range of scholars, including Bethany Nowviskie, Steven Jackson, and Lauren Klein.

As Nowviskie puts it in her forthcoming piece “Capacity Through Care,” care can become part of the “design desideratum” for the DH systems we build; we can use it to ensure that the demands of public or semi-public peer review protect the affective and intellectual labor of the participants in the review.

So how, then, do we structure peer-to-peer review processes with care?

Here are some initial thoughts, and I look forward to hearing yours during the Q&A.

Provide review structures

  • Contrast with completely open peer review
  • Offer specific evaluative questions

Create guidance documents or codes of conduct

  • Need to voice expectations of community norms for the review
  • For an example, you can look at the Code of Conduct for D’Ignazio and Klein’s Data Feminism, which links to other resources
  • We’ve used the following guidance in DDH

Make conscientious review assignments

  • When setting up assignments, consider power and rank differentials, areas of specialization, and other factors to help structure fair and responsible reviews

Offer reporting mechanisms

  • Things can and will go wrong. Provide space for backchannel conversations with editors. Develop flagging features for comments.

Credit

Part of structuring a review with care involves providing credit to those who lend their labor to it. A number of publication venues have experimented with this recently, such as the Journal of Cultural Analytics.

And digital humanities practitioners have been discussing issues of credit at both the professional level and in the classroom. Here we can turn to the Collaborators’ Bill of Rights, which resulted from a 2011 NEH workshop titled Off the Tracks led by Tanya Clement and Doug Reside, and the Student Collaborators’ Bill of Rights, which was developed at UCLA by Haley Di Pressi, Stephanie Gorman, Miriam Posner, Raphael Sasayama, and Tori Schmitt, with additional contributors. Each of these documents shows how credit can and should be structured to recognize everyone involved in a project adequately and fairly.

Community

As we think about models of peer-to-peer review, we need to think about how attending to issues of credit and labor can make such review sustainable.

But we also need to think about what it is that we are laboring on – and here I will build a bit on what Kathleen Fitzpatrick said earlier today in the “Transacting DH” panel – that as important as credit is for individual participants, we need to go beyond it.

Peer-to-peer review is grounded in community investment and participation.

People participate in community reviews when their friends and colleagues are invested in them or when they are intellectually compelled to take part.

We have to stop imagining that simply making projects open will make peer-to-peer review viable.

We have to go beyond the idea that simply giving people credit, or gamifying community peer review in some way, will make the work sustainable.

Ultimately, peer-to-peer review works when people have a real link to the people or content involved.

The labor of peer-to-peer review, then, isn’t devoted to an individual text but to a community. What we need to start taking stock of is community value.

This involves investment in open-source, open-access publishing spaces where people have autonomy over their work (Humanities Commons/MLA Commons/Commons in a Box/Manifold/DH Debates, etc.).

Ultimately, the labor involved in peer-to-peer review is labor that helps us work towards a better academy, one grounded in Generous Thinking, as Kathleen Fitzpatrick has been arguing, and in the development of what she calls “community-owned infrastructure.”

We should do this not just because it is the right thing to do, but also because it will produce stronger, more effective peer reviews.

But this is hard work, and the work of community development isn’t particularly glamorous.

It is and can be gratifying, though, and it is labor that matters. It is, quite literally, a credit to the profession; but we have to ensure that the work itself is valued accordingly.
