Altering expectations of our students and ourselves in a time of global pandemic

Across the multiple teams I work on and academic departments I chair, I am trying to reduce expectations of what we, our teams, our students, and our colleagues can do while in the middle of the COVID-19 global pandemic.

In case it might be useful to other department chairs, here is the message I sent out to program faculty yesterday, having been inspired by Chris Long:

Altering expectations of students in a time of pandemic

Dear MA in DH and MS in Data Analysis and Visualization Faculty,
Thank you all so much for the work you are doing to move your courses online. I know it is not easy, especially when there are so many options, when the move is so sudden, and when there is so much going on.


I wanted to take a moment and urge all of you teaching this semester — including myself! — to adjust expectations of our students for the rest of the semester. We simply cannot expect our students to do the same levels of reading, writing, coding, analysis, and thinking during a global pandemic as they were doing during a regular semester. Our students may have loved ones suffering from sickness; they may be working stressful jobs; they are living in new, more intense conditions; they are working through heightened levels of anxiety; they are almost certainly juggling multiple responsibilities, as are we all. They are also learning new ways to communicate with classmates and with you as the university undertakes a massive shift online.


At this moment, and certainly for the next month, I ask each of you to adjust your course requirements and reduce the amount of required reading, writing, analysis, and coding our students need to do. This may feel wrong, as if we are giving our students short shrift, but I firmly believe that we need to actively adjust to this new, tumultuous reality. Should you wish, please find ways to design your curriculum that lay out possibilities for students who want to do more, but don’t require it. Make some readings and assignments optional where possible.
Executive Vice Chancellor José Luis Cruz has shared the following advice with campus administrations about distance education:


More on the Scope of Distance Learning: “As we have indicated before, the objective of an academic continuity plan is to help safeguard the academic term from the standpoint of our students’ academic progress and their financial aid and support our faculty and students’ research efforts as we work to address the challenges posed by COVID-19. As far as distance learning is concerned, the U.S. Department of Education has indicated that at a minimum, our faculty must be able to “communicate to students through one of several types of technology – including email – … and [that] instructors must initiate substantive communication with students, either individually or collectively, on a regular basis.”


“CUNY has asked faculty and staff to be prepared to accommodate affected students to the greatest reasonable extent. This includes, among other things: facilitating enrollment after the deadline, fast-tracking academic advising, permitting make-up exams and extending due dates of final assignments and projects. In this same spirit, and understanding the minimum USDE requirements to meeting learning outcomes, we’d urge faculty members to privilege asynchronous over synchronous distance learning approaches. Because the reality is that our faculty and students’ lives and daily schedules will surely be disrupted in the days to come — especially now that the NYC Department of Education has itself indicated a move to distance learning.” (emphasis added)


It is up to all of you, as faculty, to decide what is realistic for your students, but I encourage you to undertake conversations with your students about what is possible. One possibility might be to have your class find ways to interact with and address this moment; my own class started the CUNY Distance Learning Archive; I suspect it will become a project that will occupy us for the rest of the semester, which is not what I expected at the beginning. I am planning to reduce our readings as a result.

Above all, I urge you to exhibit compassion and understanding towards our students, and to worry less about grades and the like than you might under normal conditions. There is a movement afoot to move most classes to pass/fail; I support that personally.


Please do not hesitate to get in touch with questions and concerns. These are not easy or uncomplicated issues. Thank you for everything you are doing at this moment to support our students, and please take time to take care of yourself, as well.
Best,

Matt

Thinking Through DH: Proposals for Digital Humanities Pedagogy

This presentation was given as a keynote address at the 2019 Digital Humanities Summer Institute at the University of Victoria on Friday, June 7, 2019. I am deeply grateful to Luke Waltzer for his insightful feedback on earlier drafts of this talk.

Slide deck

[SLIDE]

My talk today has four parts:

  • Introduction and acknowledgments
  • Centering pedagogy in DH
  • Five proposals
  • Looking forward

[SLIDE]

[SLIDE]

I want to begin with the University of Victoria territory acknowledgement — we acknowledge with respect the Lekwungen speaking peoples on whose traditional territory the university stands and the Songhees, Esquimalt, and WSÁNEĆ peoples whose historical relationships with the land continue to this day.

[SLIDE]

I would like to thank:

* The DHSI team: Ray Siemens, Alyssa Arbuckle, Jannaya Friggstad Jensen, and DHSI staff and student volunteers

* The ADHO Pedagogy SIG Conference and its conveners

* My own GCDI team at the Graduate Center, especially those here today — Patrick Smyth, Mary Catherine Kinniburgh, Filipa Callado, Rafael Davis Portela, Kristen Hackett, Krystyna Michael, and Stephen Zweibel, as well as our honorary member, Jonathan Reeve.

* My own teachers and mentors, including my 3rd grade teacher Mr. Burgraff, my high school English teacher Scott Mosenthal, my college mentors Robert D. Richardson Jr. and Annie Dillard, and my graduate school teachers Joan Richardson, Bill Kelly, Steve Brier, Louise Lennihan, and George Otte. 

* I’d also like to thank my current GC colleagues Luke Waltzer, Lisa Rhody, and Cathy Davidson.

[SLIDE]

I’m delighted and honored to talk to you today about digital pedagogy and DH pedagogy, situated as we are in one of the premier teaching spaces for DH in the academy. I want to take a moment and acknowledge what Ray and his team have built over the years: a fantastic, inclusive, welcoming, wide-ranging summer curriculum that has helped thousands of DHers learn everything from how to set up a DH department to how to scrape the web using Python to how to embed social justice in DH work. I know how hard and taxing it is to set up and run even a small event, workshop, or class, much less a set of events as expansive and multifaceted as DHSI — and to do so, year after year. In many ways, I feel that there is not much I can tell you about DH pedagogy that you are not already learning and witnessing in your sessions here this week and in the structure that Ray and his colleagues have set up.

I’ve been asked to speak about digital pedagogy on the eve of the ADHO Pedagogy SIG Conference. I’d like to use my time with you today to speak about some of the ways I’ve been thinking about DH, and DH pedagogy, and to ask you to think with me about its present state and its future possibilities. Some of the material I’ll be presenting here today is drawn from a few forthcoming publications — first, Thinking Through the Digital Humanities, which aims to offer an introduction to the field of digital humanities, and second, Digital Pedagogy in the Humanities, which I have co-edited with Rebecca Frost Davis, Katherine D. Harris, and Jentery Sayers, and which aims to present, in concrete form organized through a keyword approach, the actual building blocks and source materials of digital pedagogy — syllabi, assignments, resources, and rubrics.

Today, I hope I can bring into focus some questions that teachers of DH face. I hope to explore with you some provisional answers and some spurs to further thought to help us move forward.

In “The Scandal of Digital Humanities” — written as a response to the LA Review of Books piece on neoliberalism and the digital humanities — Brian Greenspan argued that one of the reasons DH tends to become a target for criticism is that it lays bare central aspects of the university’s functioning (he doesn’t say this, but I would add “normally kept hidden from tenure-track faculty”). Greenspan writes:

 [SLIDE]

If anything, DH is guilty of making all too visible the dirty gears that drive the scholarly machine, along with the mechanic’s maintenance bill. [. . . ] DH doesn’t so much pander to the system (at least not more than any other field) as it scandalously reveals the system’s components, while focusing critical attention on the mechanisms needed to maintain them. And that’s precisely its unique and urgent potential: by providing the possibility of apprehending these mechanisms fully, DH takes the first steps toward a genuinely materialist and radical critique of scholarship in the 21st century. 

In this talk, I want to extend Greenspan’s argument to the realm of pedagogy and posit that, in fact, in the same way that DH as a whole makes the infrastructure of the academy more visible, DH pedagogy in particular highlights and makes available for analysis the metacognitive and epistemological work of knowing in the digital sphere. The radical potential that Greenspan speaks of — the possibility of a materialist critique of scholarship in the 21st century — needs to be extended to pedagogy; without pedagogy, that critique is bereft. The classroom is a unique space in the academy — a space of possibility, tension, labor, experiment, argument, struggle, and insight — and we should spend more time thinking about it.

In most of this talk, I’ll be talking about digital humanities pedagogy — i.e., how we teach students what DH is, how we structure and teach DH classes, and how we teach students to use DH methods. Though my focus is on pedagogy within DH rather than DH pedagogy within the humanities, I think the connections between the two are strong, and that people who begin exploring digital pedagogy in one class inevitably find that the work bleeds over into much of their teaching. I also want to acknowledge my various subject positions — as a cis white male tenured faculty member, first, and second as someone who now predominantly teaches graduate students, though my formative early years as a faculty member focused on undergraduate teaching at a nonselective, poorly resourced four-year urban college without a major in literature, my field of interest and training. In fact, I attribute my own turn towards digital humanities to working in a department without conventional field-based tenure requirements.

In arguing that we need to center pedagogy in our discussions of DH, I follow a long line of colleagues who have made similar arguments, including Steve Brier, Luke Waltzer, Katherine D. Harris, Lisa Spiro, Rebecca Frost Davis, Diane Jakacki, Brian Croxall, Anne McGrail, Erin Glass, and many others. Across work done by all of these scholars, we see a continued and welcome focus on the classroom and a set of reminders — which we apparently need to hear, again and again! — that we cannot forget the classroom as a central space for DH work. Despite our best efforts, the values of the university work, again and again, to denigrate and devalue teaching and service in favor of research. To buck that bias, built into the heart of the modern research university, we need to articulate how and why teaching matters, how the work of the classroom is the work of the academy.

[SLIDE]

And so one thing I want to do today is to make and reinforce the argument that DH Pedagogy is DH Work. In my own self-descriptions on institutional degree program websites and the like, I list digital pedagogy as an academic area of specialization. If you spend time thinking seriously and critically about your teaching, I encourage you to do the same. We know that one of the challenges scholars of pedagogy face is that teaching is often treated as a distant second in importance to research throughout the academy. If DH is to be part of the process of reshaping and restructuring the academy, I believe we have to foreground the place of pedagogy in the field of DH particularly and in the academy more generally. What this means, in practice, is that we need to fight to have our pedagogical work taken seriously and counted by our institutions and our colleagues, that we need to publish on our teaching, and also that we need to #citepedagogy, a theme I’ll return to later in the talk. But what I want to say now, ESPECIALLY to all of the people here at DHSI experimenting with DH for the first time, is that your incorporation of DH work into your classroom matters and is, in and of itself, DH. Pedagogy certainly was the entrance point for me to DH.

Today, I will talk about how we can:

1) center DH pedagogy in our understanding of, and explanations of, DH;
2) use the insights it offers to better explain the significance of our own methods; and
3) extend the insights of DH pedagogy to the academy more broadly.

In so doing, I believe that we can extend the classroom as a space of critical reflection and liberatory possibility. 

Of course, teaching is not denigrated in all parts of the academy. Indeed, in some institutions — and I am thinking particularly here of community colleges and large urban public institutions like the City University of New York — teaching is valued in serious ways. I believe we need to look to such spaces; as I argued in a 2012 presentation at the MLA titled “Whose Revolution? Towards a More Equitable Digital Humanities”:

As digital humanists, the questions we need to think about are these: what can digital humanities mean for cash-poor colleges with underserved student populations that have neither the staffing nor the expertise to complete DH projects on their own? […] What is the digital humanities missing when its professional discourse does not include the voices of [members of such institutions]? How might the inclusion of students, faculty, and staff at such institutions alter the nature of discourse in DH, of the kinds of questions we ask and the kinds of answers we accept? What new kinds of collaborative structures might we build to begin to make DH more inclusive and more equitable?

A key component of my argument, in 2012 and now, is that this should be an opportunity of self-interest for DH — the field can only gain by calibrating itself towards a wider group of practitioners across all institutions and levels of the academy.

In the time since I gave that paper, and since I published a section on “Teaching Digital Humanities” in the 2012 edition of Debates in the Digital Humanities, we’ve seen many developments that answer that call. Anne McGrail received a NEH Office of Digital Humanities grant to run a summer institute on teaching DH at community colleges; the Debates in the Digital Humanities series includes a forthcoming volume on “Institutions, Infrastructures at the Interstices,” co-edited by Anne McGrail, Angel David Nieves, and Siobhan Senier, that looks at implementations of DH across a broad range of institutional contexts; Roopika Risam, in her new book, discusses the place of pedagogy in DH as she analyzes the field through the lens of postcolonial theory; the new Debates in the Digital Humanities includes several essays on pedagogy, including one by Jack Norton on the pressures of teaching DH at a community college; colleagues at my own institution, the CUNY Graduate Center, led by the Futures Initiative and the Teaching and Learning Center, have launched the Humanities Alliance, which focuses on preparing doctoral students to teach at community colleges; and many other efforts have similarly helped refocus the conversation in and around DH on the ways that the DH classroom offers exciting possibilities for the field.

To continue this conversation, and to extend it, I think we need to embrace the practices of critical pedagogy as articulated by Paulo Freire, bell hooks, Ira Shor, and others. In practicing critical pedagogy, we can resist the banking model of education outlined by Freire and focus on creating classroom spaces where students grapple actively with power dynamics and work towards what bell hooks, building on Freire, calls “education as the practice of freedom.” A central part of this work involves a focus on the knowledge that students bring with them into the classroom. It can also, as bell hooks points out, be exciting and fun.

Fundamental to my own approach to pedagogy is John Dewey’s contention that learning needs to be connected to experience, and that our classes provide a starting point for the real practices of learning that will sustain the educational process over the long term. As he writes in one of my favorite pedagogical passages:

[SLIDE]

“Perhaps the greatest of all pedagogical fallacies is the notion that a person learns only the particular thing he is studying at the time. Collateral learning in the way of formation of enduring attitudes, of likes and dislikes, may be and often is much more important than the spelling lesson or lesson in geography or history that is learned. For these attitudes are fundamentally what count in the future. The most important attitude that can be formed is that of desire to go on learning.”

(emphasis added)

Fostering the “desire to go on learning” is at the heart of every praiseworthy pedagogical practice I can think of.

[SLIDE]

In the next section of the talk, I want to discuss five ways that we can productively shape DH pedagogy so that it is best positioned to benefit our students and ourselves:

[SLIDE]

1. DH Pedagogy is best when it is self-reflexive and self-critical about its identity, its assumptions, and its methods

So many of us old DH hands will sigh, raise eyebrows, or scoff when we are asked to define the digital humanities. In 2012, Matt Kirschenbaum was already expressing exhaustion with the question. Though I’m not suggesting we all need a sentence or two defining what DH is, and though I am entirely comfortable with very ambiguous definitions of the field, I do think that the act of explaining what the field encompasses, and how it relates to adjacent fields/sub-fields like new media studies, science and technology studies, critical race studies, and critical infrastructure studies, is important and worthwhile. It’s important not because we need to settle in our students’ minds, once and for all, what DH is and what it is not, but rather because we should reflect on the ways we define our field and consider how our assumptions about it might be productively shaken by consciously questioning and challenging them. This can be an active and engaged discussion about the future of DH; I think, for instance, of the way that Kim Gallon has suggested that DH, as a field, might be recalibrated through the lens of black studies, when she argues that “any connection between humanity and the digital […] requires an investigation into how computational processes might reinforce the notion of a humanity developed out of racializing systems, even as they foster efforts to assemble or otherwise build alternative human modalities.” Or how Roopika Risam, bringing DH into contact with postcolonial theory, asks us to reconsider universalist claims, particularly around DH tools, when she writes:

[SLIDE]

Despite the wide variety of practices that make up digital humanities on a global scale, the methods and tools that receive the most attention were created by scholars in the Global North. Accordingly, they are based on the values and hierarchies of knowledge of these scholarly communities. The effect of this phenomenon is a specious universalism embedded in digital humanities tools and methods that is in dire need of redress and reinvention.

If DH, as a field, is not pushing itself to critically question its own assumptions and practices, it will not thrive. And I suggest that the classroom is the single best place to do that work, especially if we can create classrooms where our students actively negotiate the shape of the field of DH with us. 

[SLIDE]

2. Focus on students, center student research questions, and sustain student well-being

Too many DH initiatives, I believe, focus on faculty as opposed to students. The communities we need to seek to build, I am convinced, are student-based communities rather than faculty-based communities. I love faculty members, and I am one of them, but most faculty members are too pressed for time, and too single-mindedly focused on their careers and personal projects, to give back to communities what they need to survive. It’s not that students aren’t also pressed for time or facing their own issues, which range from the weight of student debt to issues of hunger, but at least in my experience, I have found students to be more willing to share their skills and knowledge with each other on a regular basis, which is a key aspect of building a sustainable community of practice. As a concrete example of this, I would point to GC Digital Initiatives, at the CUNY Graduate Center, where all of our efforts are focused on creating student communities of practice. 

Our classes, too, need to focus on students — which sounds obvious but is often overlooked. In the classes I teach, I ask students to bring to their projects the research questions that interest them, and then to use DH methods to explore them. In almost all cases, this leads to an incredible cornucopia of student projects, ones I never could have come up with on my own. It leads students to work together on issues of shared interest. And, should students not have active research questions, I encourage them to look to their own lives for inspiration — the issues that are of urgent concern to their neighborhoods, their families, and their friends — thus following Freirean pedagogical principles, in which student knowledge and experience is valued and centered in the course. Through discussion and exploration, students can articulate a well-defined and well-scoped DH project that will have direct significance to their interests. And, because they are working on issues that emerge from their own interests rather than on ones handed down from the faculty member, they tend to be more engaged in their work.

One of the ways I’ve implemented some of these ideas is in the two-course introductory sequence of the new MA in Digital Humanities Program at the CUNY Graduate Center. Informally titled “the Digital Praxis Course,” this two-semester introduction to DH is the conceptual centerpiece of the degree program. In the first semester of the program, students explore a landscape view of DH, learning about a variety of methods and approaches. The semester ends not with a term paper, though that is an option, but rather with a project proposal. When students come back during the Spring semester, they choose some of these projects to develop and form small groups to create them. I want to share two such projects, from the most recent cohort of students — our first in the program. Steve Brier and I co-taught the Fall course, and Andie Silva, who I think is here today, taught the Spring semester course.

[SLIDE]

* Immigrant Newspapers — impressive UX design and historical research in the archives, focusing on recovering immigrant community newspapers in New York City

[SLIDE]

[SLIDE]

* Data Trike — came out of shortcomings students identified in my own class

[SLIDE]

[SLIDE — (slide omitted in online version)]

As we focus on students, I think there is much we can learn from progressive elementary education practices.

In recent years, as my children have gone through the public educational system in New York City, and as I have gotten more involved in the progressive elementary school that they attend, I have become attentive to the lessons that the progressive elementary context has to offer us. I am particularly struck, for instance, by the “whole child” approach that I see in my children’s school, one that focuses on emotional and social well-being; one that embraces anti-racist approaches; one that takes a strong stand against standardized testing; one that focuses on the individual child. In critical pedagogy, we see a similar focus on social and emotional well-being; bell hooks speaks about this when she writes that in “progressive, holistic education, [or] ‘engaged pedagogy’ [. . .] teachers must be actively committed to a process of self-actualization that promotes their own well-being” if they are to teach in a manner that empowers students (15). This work has alignments with the feminist ethos of care that has been a strong part of the DH conversation in recent years. Though it can be challenging, I think that part of the project of DH pedagogy involves structuring care for the whole student into our classes and the degree program. This is especially urgent for DH, where our work actively places students in front of computers, devices, and social networks that have as much potential for exposing them to toxic environments as for opening up new possibilities.

[SLIDE]

3. Teach the epistemological approaches of our digital tools.

Students should learn about and consider the knowledge models involved in various kinds of DH work, along with their shortcomings and limitations, as they begin to experiment with them. I use the term “ways of knowing” in my Thinking Through the Digital Humanities book to describe such knowledge. It’s an approach I am still fleshing out, and I would welcome your input. But, in brief, here are the major ways of knowing that I explore in the book:

[SLIDE]

Each of the items in the list, I suggest, offers a specific way in which the digital humanities can help humanities scholars and students think through their work and ask new questions of our subjects. To take one example, Representation/Enrichment/Dimensionality refers to a set of DH methods that involve presenting texts and histories in digital forms that contextualize them in new ways or allow us to see them in wider frames. A digital edition of a book with an interactive interface, for instance, might present not just the published text but also its drafts, allowing us to see how the author moved from initial conception to published book; paired with contextual information such as the sources used in the book, and with features that let online readers explore these elements of the text and its paratexts simultaneously, such an edition can enrich our understanding of the text. Such materials have been drawn together for decades in variorum print editions, but online environments offer enriched experiences of such materials in ways that can specifically help students understand textual histories. Similarly, a 3D reconstruction of a historical place or event, rendered through an immersive gaming engine, can help students better understand the past as place and come to understand the lived geographies of historical events.

The benefit of this “ways of knowing” approach is that it ultimately helps students and faculty understand how DH can help them ask new questions and bring new interpretative angles to their work. As we use digital tools, we must reflexively ask how they represent knowledge and what they leave out. A topic modeling tool may allow us to find “topics” that are present across a large number of texts; but as we begin to use that tool, we must think about how the tool conceives of “text,” how it understands and determines what “topics” are, how it presents findings to the user, what it leaves out, and how the results found through the tool may or may not be related to non-digital work in the field. For instance, when scholars attempt to use topic modeling to explore gender-related issues in, say, 18th-century historical texts, we must ask not only how the tool conceives of “topics,” but also how the tool (and its users) model the concept of “gender” and what the limitations of those models might be (and on this topic I recommend Laura Mandell’s article on the subject in Debates in the Digital Humanities 2019). We should pay attention not just to the kinds of questions digital tools allow us to ask, but also to the kinds of inquiries such tools occlude or elide.
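To make that reflexive questioning concrete, here is a minimal topic-modeling sketch (in Python, using the widely available scikit-learn library; the three-document corpus is a hypothetical stand-in for a real collection). Every parameter is an interpretive choice rather than a neutral default, which is precisely what we want students to see:

```python
# A minimal topic-modeling sketch: every parameter below is an
# interpretive choice, not a neutral default.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical corpus standing in for, say, 18th-century texts.
documents = [
    "letters concerning the education of daughters",
    "a vindication of the rights of woman",
    "essays upon trade, commerce, and manufactures",
]

# The vectorizer decides what a "text" is: an unordered bag of words.
# Stop-word removal silently discards terms someone judged unimportant.
vectorizer = CountVectorizer(stop_words="english")
doc_term_matrix = vectorizer.fit_transform(documents)

# The model decides what a "topic" is: a probability distribution over
# words. n_components fixes the number of topics before the corpus is
# ever consulted; the texts are made to fit the analyst's guess.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term_matrix)

# Even the report is a choice: printing the top five words per topic
# hides the long tail of lower-probability words in each distribution.
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```

Even in this toy example, the tool’s “way of knowing” is visible: texts become unordered bags of words, topics become probability distributions over those words, and the number of topics is fixed by the analyst in advance.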

[SLIDE]

4. Foreground social justice critiques of data

From Jessica Marie Johnson’s discussion of the archive of slavery to the Colored Conventions Project’s building out of important new databases of knowledge, to Lauren Klein’s arguments about the silences in the archive, to Marisa Parham’s work on blackness and social media, we have seen in recent years a trenchant critique of the supposed objectivity of data and a focus on the biases and assumptions built into so many of the tools we use. Such work fulfills the ultimate pedagogical promise of the digital humanities, fostering as it does a criticality among students as they engage with data. It fosters an understanding of data not as neutral information but as what Johanna Drucker calls capta, information everywhere embedded with and affected by the legacies of centuries of colonial practices. Working with our students, early and often, to critique data is one of the ways that we can build self-criticality into our projects and our work.

As a key aspect of our social justice work, I believe we must embrace OER (open educational resources) as a central act of DH pedagogy.

My own institution has been overtaken by OER fever in recent years, as the State has thrown funding at schools in an effort to reduce textbook costs for students. Despite the irony of the state taking funding away with one hand and giving it to us with the other, I have mostly good things to say about OER. In a system like CUNY, 70% of the undergraduate population comes from families whose annual incomes are below the federal poverty threshold of $30,000 — and that is in New York City! Were we to adjust the poverty threshold for NYC, it would be closer to $50,000. We should not be asking our students to pay for expensive books.

 [SLIDE] 

It is for this reason that at my institution we are building out our installation of Manifold as an OER repository.

DH, of course, has long embraced the principles of open access and open pedagogy. We share our syllabi on social networks and deposit them in open-access, open-source repositories like Humanities Commons. We need to keep doing this, but to do it more, and on a bigger, more organized scale. As one example, I would point to my colleague Lisa Rhody’s project, the Digital Humanities Research Institute, which has made the entire curriculum of its DH educational work available on GitHub to be adapted and forked.

[SLIDE]

5. Foster community and build open-source community infrastructure as the foundational acts of DH pedagogy. 

DH work is best done in the company of others who are experimenting and learning, in a place where experience can be shared and learning can be communal. One begins with a set of materials and asks questions about it; one sees how those questions can be answered and how those answers can be validated; one works through the kinds of choices specific types of methods, software, and tools involve, even as one considers the constraints and limitations of those methods, software, and tools. Along the way, one shares this set of explorative questions and answers with a community of fellow practitioners, in local or virtual contexts, so that assumptions may be questioned and objections raised. What results from this kind of approach is a focus on collaborative knowledge building with more explicit attention to method than we are used to providing in many humanities contexts.

As we build our communities, we need to build them on equitable grounds — labor in service to a community needs to be recognized, rewarded, and paid. This is part of the work of respecting and seeing our students and our colleagues; where financial payment is not possible, other kinds of rewards — credit, publicity, acknowledgements, exchanges — should be devised.

We also need to use open-source infrastructure and contribute to it as the work of the class.

In the spirit of DH, we should use, improve, and contribute to open-source projects as part of the work of resisting the incursion of capital into the classroom. Blackboard, of course, is easy to resist, but how many of our DH classes make use of GitHub, Twitter, and Slack? 

The librarian at my children’s elementary school recently asked students to “draw the internet”; here is what they drew. [NOTE: slides omitted in online version. Children’s drawings basically showed that they understood the internet to be composed of a number of corporate logos and services.] If we want our students to have the understanding of the internet that inspired so many of us, rather than to see it as a corporate marketplace, we need to do the hard work of using open-source tools, even when they are less polished than their proprietary alternatives.

[SLIDE]

In my own institution, this has taken the form of the CUNY Academic Commons, an academic social network founded in 2009 that serves the entire 25-campus CUNY community. It is a multi-site WordPress network that, over the years, has contributed thousands of lines of code to the larger projects of WordPress and BuddyPress — contributions from a public educational institution to the larger public good of an open-source project. All of our recent work on the platform has been informed by the experiences (and the complaints and suggestions) of faculty members and students across the system using the platform for their courses — and in the past year, when we first started encouraging faculty across the system to teach on it, our membership jumped from 9,000 to over 16,000 in the space of eight months. That growth helps us build a platform in conversation with its community members, offering us the chance to build what Christopher Kelty calls “recursive publics,” community spaces where the people using a platform have a say in how it is developed.

[SLIDE]

One practical strategy I want to leave you with is the need to #citepedagogy.

[SLIDE]

If we want pedagogy to count, and if we want to center pedagogy within the field of DH, we need to cite it. We need to make syllabi, assignments, and course descriptions part of the scholarly record by citing the colleagues whose work we build on, including citations in our syllabi, and preserving our classroom-based work in repositories such as Humanities Commons. This is an argument that my colleagues Kathy Harris, Rebecca Frost Davis, and I make in our introduction to Digital Pedagogy in the Humanities, and it is one that Kathy, in particular, has been arguing for over the course of many years across a number of contexts, including her own tenure process. I want to propose that we commit as a field to continuing to formalize our citations of pedagogy and that we collectively use the hashtag #citepedagogy to organize that work and to build out resources.

Conclusion

[SLIDE]

In Teaching to Transgress, bell hooks writes that “the classroom remains the most radical space of possibility in the academy.” I firmly believe that is true, and firmly believe that DH, as a field, has yet to fully grapple with this fact. In order to think through DH, we need to think through our teaching. And it is in that space of possibility, when computational methods encounter human beings, that DH can reach its fullest, most radiant, most radical, and most surprising potential.

Thank you.

[SLIDE]

“Issues of Labor, Credit, and Care in Peer-to-Peer Review Processes”

What follows is the text of a presentation I gave at the MLA 2019 Convention in Chicago, Illinois, on January 5, 2019, in Session 613: Getting Credit in Digital Publishing and Digital Humanities Projects.

Thank you to Anne Donlon for organizing this session and to Harriett Green for chairing it. I’m excited to be with you today to talk about issues of credit, care, and labor in peer-to-peer review processes. My original title focused only on credit and labor, but I realized as I was writing it up that a full accounting of labor and credit necessitated attention to the subject of care, as well.

In this talk, I’m going to:

  • Begin by discussing a few models of peer review and peer-to-peer review
  • I will then discuss issues of labor as they play out in such review processes
  • I will then show how peer-to-peer reviews can be structured with care to ensure that participant labor is valued and respected
  • And I will end by talking about issues of credit and the overall goals of p2p review

Peer Review and Peer-to-Peer Review

I’m choosing to focus on peer review, and on network-enabled peer-to-peer review, for a few reasons:

  • first, it is a space of scholarly communication in the academy where we see technology used to alter existing conventions of academic work;
  • second, peer-to-peer review is not often discussed in terms of credit and labor, so it seemed a useful topic to explore in a session that deals more broadly with the way we value the work that we and our colleagues do;
  • third, evolving forms of peer-to-peer review have been used in a variety of prominent digital humanities publishing projects in recent years, making it a subject of timely interest;
  • and fourth, I’ve experimented with multiple forms of peer-to-peer review myself and have some thoughts to share about them

Before progressing further, I want to take a moment to situate my discussion of peer review within the context of contemporary DH work on scholarly communication. Here, Kathleen Fitzpatrick is my guiding light; her work in Planned Obsolescence: Publishing Technology and the Future of the Academy (2011) historicizes peer review and charts the way it is changing in the era of networked scholarship.

Fitzpatrick builds on the work of Mario Biagioli to exhume the history of peer review and its entrenchment in systems of censorship and disciplinary control. Fitzpatrick notes the many weaknesses of double-blind peer review, in which journal articles and book manuscripts are circulated to reviewers in such a way that neither the identity of the author nor the identity of the reviewer is disclosed. Although double-blind peer review has often been implemented as a way of eliminating bias in the reviewing process, Fitzpatrick argues that the anonymous space of the double-blind peer review is ripe for abuse, manipulation, and uncharitable communications by reviewers and editors.

Fitzpatrick poses what she calls “peer-to-peer” review as an alternative to double-blind pre-publication review. In peer-to-peer review, projects are reviewed openly by a community of respondents, whose replies to the text are visible to the author and to each other.

Examples of recent publications that have used this kind of process include:

Fitzpatrick’s own Planned Obsolescence

Jason Mittell’s Complex Television

Digital Pedagogy in the Humanities, edited by Rebecca Frost Davis, Kathy Harris, Jentery Sayers, and me

Catherine D’Ignazio and Lauren Klein’s Data Feminism

And Jeff Maskovsky’s edited collection Beyond Populism: Angry Politics and the Twilight of Neoliberalism

And in the sciences, there are a variety of other models of pre-publication peer review, including arXiv, F1000Research, and PLOS ONE

[Here I spoke extemporaneously about an article published the night before in The New York Times, “The Sounds That Haunted U.S. Diplomats in Cuba? Lovelorn Crickets, Scientists Say,” which was based on a paper published in bioRxiv. The NYT article noted that the paper had not yet been submitted to a scientific journal, but it was already receiving attention in the mainstream press.]

You’ll notice that a number of platforms are commonly used across these examples:

  • CommentPress, a theme and plugin for WordPress
  • GitHub, a site for sharing code that has also been used for peer review
  • PubPub, a new platform from MIT Press
  • And Manifold, a new publishing platform from the University of Minnesota Press and my team at the CUNY Graduate Center

Beyond these peer-to-peer models and platforms lies a set of hybrid options, some of which I’ve explored myself in my collaborative publications. In the Debates in the Digital Humanities book series from the University of Minnesota Press, for instance, all volumes undergo a private community review in which authors review each other’s work, followed by an intensive editorial review process. Special volumes in the series then receive a more traditional blind review administered by the Press.

The community review of the DDH volumes is semi-public. The review site itself is private and viewable only by contributors to the volume.

Reviewers can see author names and authors can see the names of reviewers. All authors can review any piece in the book, though they are specifically assigned to one or two pieces themselves.

In early volumes, we simply opened pieces up for general review; for more recent volumes, we have been asking reviewers to leave comments throughout but to reply to a specific set of evaluative questions at the end of the piece.

This process is followed by a revision request, in which the editors take account of the feedback and ask authors to revise.

Labor

This is all a lot of work. How are we to value the labor of peer-to-peer review?

To begin, we have to acknowledge the situation within which we are working – the way that the internet, and technology more generally, can exacerbate the processes of deskilling and the devaluing of labor.

As Trebor Scholz says in Digital Labor: The Internet as Playground and Factory:

“Shifts of labor markets to the Internet are described [in this book] as an intensification of traditional economies of unpaid work”

and

“each rollout of online tools has offered ever more ingenious ways of extracting cheaper, discount work from users and participants”

Scholz and others in that book are obviously talking about the commercial internet, especially as it intersects with social media – the way, for instance, that newspaper sales fell when social media platforms such as Twitter and Facebook became a primary space for news consumption.

Of course, there is a clear difference between the business model of a pre-internet content business such as newspaper publishing and the process of academic peer review, which generally does not involve financial compensation (depending on how much one values a few hundred dollars worth of university press books – typical compensation in the academy for reviewing a full book manuscript).

But there are clear connections to the knowledge economy more generally and to issues of crowdsourced labor.

As we ask our colleagues to participate in open community reviews, we need to avoid a situation in which the work of public peer-to-peer review essentially becomes a site of alienated labor. Probably the most dangerous potential for that to happen occurs when work that has gone through open peer review winds up being published by for-profit entities such as Elsevier. In such cases, the labor of peer-to-peer review would certainly resemble the vampiric capital discussed by Marx.

In order to prevent such futures, we might turn to the practices and rhetorics of care as articulated in recent years by scholars such as Bethany Nowviskie, Steven Jackson, and Lauren Klein, among others.

As Nowviskie puts it in her forthcoming piece “Capacity Through Care,” care can become part of the “design desideratum” for the DH systems we build; we can use it to ensure that the demands of public or semi-public peer review protect the affective and intellectual labor of the participants in the review.

So, how, then, do we structure p2p review processes with care?

Here are some initial thoughts, and I look forward to hearing yours during the Q&A.

Provide review structures

  • Contrast with completely open peer review
  • Offer specific evaluative questions

Create guidance documents or codes of conduct

  • Need to voice expectations of community norms for the review
  • For examples, you can look at the Code of Conduct for D’Ignazio and Klein’s Data Feminism book, which links to other resources
  • We’ve used the following guidance in DDH

Make conscientious review assignments

  • When setting up assignments, consider power and rank differentials, areas of specialization, and other factors to help structure fair and responsible reviews

Offer reporting mechanisms

  • Things can and will go wrong. Provide space for backchannel conversations with editors. Develop flagging features for comments

Credit

Part of structuring a review with care involves providing credit to those who lend their labor to it. A number of publication venues have experimented recently with this, such as the Journal of Cultural Analytics.

And digital humanities practitioners have been discussing issues of credit at both the professional level and in the classroom. Here we can turn to the Collaborators’ Bill of Rights, which resulted from a 2011 NEH workshop titled Off the Tracks led by Tanya Clement and Doug Reside, and the Student Collaborators’ Bill of Rights, which was developed at UCLA by Haley Di Pressi, Stephanie Gorman, Miriam Posner, Raphael Sasayama, and Tori Schmitt, with additional contributors. Each of these documents shows how credit can and should be structured to be adequate and fair to everyone involved in a project.

Community

As we think about models of peer-to-peer review, we need to think about how issues of credit and labor can make it sustainable.

But we also need to think about what it is that we are laboring on – and here I will build a bit on what Kathleen Fitzpatrick said earlier today in the “Transacting DH” panel – that as important as credit is for individual participants, we need to go beyond it.

Peer-to-peer review is grounded in community investment and participation

People participate in community reviews when their friends/colleagues are invested in them or when they are intellectually compelled to take part

We have to stop imagining that simply making projects open will make peer-to-peer review viable

We have to go beyond the idea that simply giving people credit, or gamifying community peer review in some way, will make the work sustainable.

Ultimately, what makes peer-to-peer review work is when people have a real link to the people or content involved.

The labor of peer-to-peer review, then, isn’t directed towards an individual text but towards a community. What we need to start taking stock of is community value.

This involves investment in open-source, open-access publishing spaces where people have autonomy over their work (Humanities Commons/MLA Commons/Commons in a Box/Manifold/DH Debates, etc.)

Ultimately, the labor involved in peer-to-peer review is labor that helps us work towards a better academy, one grounded in Generous Thinking, as Kathleen Fitzpatrick has been arguing – in the development of what she calls “community-owned infrastructure.”

We should do this not just because it is the right thing to do, but also because it will produce stronger, more effective peer reviews.

But this is hard work, and the work of community development isn’t particularly glamorous.

It is and can be gratifying, though, and it is labor that matters. It is, quite literally, a credit to the profession; but we have to ensure that the work itself is valued accordingly.

Response to Critical Infrastructure Studies Panel

The following is a response delivered at the end of the Critical Infrastructure Studies Panel, which took place at the January 2018 Modern Language Association Conference in New York City. Panelists included Tung-Hui Hu, Shannon Mattern, Tara McPherson, and James Smithies. Alan Liu and I co-organized the session.

Susan Leigh Star has made the foundational point that while we often think of infrastructure as a set of mute base layers underlying the systems we use — a group of water pipes, a rack of computer servers, a set of asphalt roads — one person’s invisible infrastructure is another person’s active focus of time, interest, and investment; as she puts it, “for a railroad engineer, the rails are not infrastructure but topic” (380). Our approach to infrastructure must therefore foreground perspective and acknowledge that infrastructure is fundamentally relational and embedded in a net of human activities and concerns.

The papers we’ve just heard make visible a set of embedded human relations around infrastructure in varying ways. Shannon Mattern’s claim that the spaces in which we store and access information mediate our understanding of that information calls attention not just to the shelves, cubbies, cases, and libraries in which we store our work, but also to the human beings designing and accessing those structures. Tung-Hui Hu, by focusing on the affective dimensions of big data, approaches the topic of infrastructure through a perspective that considers the effects it lodges in our bodies, feelings, and minds. Tara McPherson explores how social platforms can act as infrastructures that facilitate or impede the machinations of hate groups. And James Smithies points out that one key role critical infrastructure studies can play is to call attention to the scholarly infrastructures that surround us and how our own research practices intersect with them. 

As we think about what to make of these perspectives and what the implications of critical infrastructure studies might be, we might turn back to Alan Liu’s signal call for this work. In his blog post “Drafts for Against the Cultural Singularity,” taken from an in-progress book, Liu writes that infrastructure offers digital humanities practitioners a key critical possibility, a space within which DHers can “treat infrastructure . . . as a tactical medium that opens the possibility of critical infrastructure studies as a mode of cultural studies.”

For Liu, critical infrastructure studies offers a way for DH practitioners to embrace a critical form of building, one that focuses locally on the creation of scholarly infrastructures in higher education but that can, over time, share the values and practices of the academy with other areas of culture such as “business, law, medicine, government, the media, the creative industries, and NGOs.”

In my brief response today, I want to point out that by pairing critical infrastructure studies with ongoing work in DH and, importantly, with emerging work in the area of critical university studies, as Matt Applegate did the other morning here at the convention and as Erin Glass has been working on in her dissertation on the subject, we have a chance to right our own ship and to enact a form of resistance to capital within higher education that is part of the shift we’d like to see in the larger culture.

To locate this proposition within a concrete set of scholarly infrastructure initiatives, I want to talk about two related projects: the CUNY Academic Commons and the Humanities Commons.

The CUNY Academic Commons is an open-source, faculty-led academic social network established in 2009 for the 24-campus CUNY system. Built on WordPress and BuddyPress, the Commons is used for courses, faculty profiles, publications, CVs, research interest groups, and experiments. It began with no funding, but slowly gained internal funding and now is securely supported on an annual basis by the CUNY Office of Academic Affairs. In 2012, with the help of a grant from the Sloan Foundation, we released the Commons In A Box, a free software project which can be used by any institution to get a Commons site up and running, and next year, with the help of the NEH, we will be releasing the Commons In A Box OpenLab, which will help institutions set up a Commons-based teaching platform.

Soon after we released CBOX, we met with Kathleen Fitzpatrick and her team at the MLA, which used CBOX to create first the MLA Commons to link members of the organization, then the Humanities Commons to link members of multiple scholarly organizations, and finally Humanities CORE, an institutional repository tied to the Humanities Commons that helps academics share their scholarship, research data, and syllabi.

Examined through a perspective that combines work in DH, critical infrastructure studies, and critical university studies, we can see that these platforms have helped establish what Christopher Kelty calls “recursive publics,” having been taken up by the communities that they were built for, and that built them. And we can see that the flourishing of these platforms represents an intervention in the enterprise-level IT purchasing practices that determine much of the technology we use in the academy. Efforts like the CUNY Academic Commons and Humanities Commons may seem in some ways small and homegrown, especially when compared to the large sums of money our universities spend on Elsevier subscriptions, but they can have large knock-on effects. The Humanities Commons, for instance, is slowly but surely helping scholars move away from what I would call the academic vulture economy — for instance, proprietary, for-profit, corporate platforms such as Academia.edu that monetize the academic content deposited on them. And, in the wake of New York State giving 4 million dollars to the CUNY system to develop zero-cost courses and open educational resources, the CUNY Academic Commons is beginning to displace corporate OER platforms to become a pedagogical infrastructure that CUNY faculty can use to create, share, and teach with OER materials.

Coming back to Susan Leigh Star, then, and foregrounding the embeddedness of human relations around infrastructure, I want to suggest that the call for critical infrastructure studies can ultimately help us mobilize a critically informed resistance to capital and a set of building practices that move the scholarly communications infrastructure of the academy away from corporations and towards the faculty, staff, and students who can build, care for, maintain, and use it.

Works Cited

Kelty, Christopher M. Two Bits: The Cultural Significance of Free Software. Durham: Duke University Press, 2008.

Star, Susan Leigh. “The Ethnography of Infrastructure.” American Behavioral Scientist 43 (1999): 377-391.

Out of Sync: Digital Humanities and the Cloud


This is the text of a keynote lecture I gave at DH Congress at the University of Sheffield on September 10, 2016. I’m grateful to Michael Pidd, the University of Sheffield Humanities Research Institute, and the conference organizing committee for inviting me to speak.

* * *

In February 1884, John Ruskin delivered an address to the London Institution titled “The Storm-Cloud of the Nineteenth Century.” Ruskin began with a reference to his somewhat ominous title:

Let me first assure my audience that I have no arrière pensée in the title chosen for this lecture. I might, indeed, have meant, and it would have been only too like me to mean, any number of things by such a title;—but, tonight, I mean simply what I have said, and propose to bring to your notice a series of cloud phenomena, which, so far as I can weigh existing evidence, are peculiar to our own times; yet which have not hitherto received any special notice or description from meteorologists.

Ruskin went on, in his lecture, to do just that — to convey his thoughts on clouds based on his sketches and observations of the sky, though at least some audience members and later critics have seen embedded in his remarks a critique of encroaching industrialization. And so, in a talk focused on clouds — which Ruskin elsewhere described with much beauty and care — we see the seeds of a larger political critique.

Were I to “propose to bring to your notice a series of cloud phenomena” that are, in Ruskin’s words, “peculiar to our own times,” we would begin, most likely, not by looking up to the sky, but rather down at our cell phones. We wouldn’t discuss “storm clouds,” or even “clouds,” but rather “THE cloud,” by which we would refer to the distributed set of services, platforms, and networked infrastructures that string trellised connections between our phones, laptops, desktops, and tablets. We would point to the media systems that have caused us to consign our CDs to closets and to sign up for subscription-based music services such as Spotify and Tidal. We would point to Google drives and docs, Twitter hashtags and Facebook feeds, wifi signals, Bluetooth connections, GitHub repositories, files synced across Dropbox, Box, and SpiderOak. Indeed — the sync — the action of connecting to and syncing with the network, of comparing our local files to those on a remote server and updating them to match — might be the signal action of the cloud-based life. We become dependent on, and interdependent with, the network — always incomplete, awaiting sync, ready to be updated. The cloud produces both security and instability, offering back-up services but keeping us always in need of updates. We look to the cloud not to see an alien sky but rather to recover parts of ourselves and to connect or reconnect to our own work.
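How unglamorous that signal action really is becomes clear when it is written down. Here is a purely illustrative sketch of a sync in Python (the remote index and the upload transport are hypothetical stand-ins for whatever a given service actually provides): the client fingerprints each local file and pushes only what differs from the last-synced state.

```python
# Purely illustrative: the "sync" at the heart of cloud storage,
# reduced to its core action -- compare local state to remote state,
# then update the remote to match. All names here are hypothetical.
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """Fingerprint a file's contents: identical bytes, identical hash."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def sync(local_dir: Path, remote_index: dict[str, str], upload) -> None:
    """Push every local file whose fingerprint differs from the remote
    index. remote_index maps filename -> last-synced hash; upload is
    whatever transport a given service provides."""
    for path in sorted(local_dir.iterdir()):
        if not path.is_file():
            continue
        digest = file_digest(path)
        if remote_index.get(path.name) != digest:
            upload(path)                      # transfer the changed bytes
            remote_index[path.name] = digest  # record the new synced state

# Example: sync a folder against an in-memory "remote," printing uploads.
if __name__ == "__main__":
    index: dict[str, str] = {}
    sync(Path("."), index, lambda p: print(f"uploading {p.name}"))
```

The sketch matters for its incompleteness: the loop must run again and again, because local state is always drifting away from the remote. That perpetual incompleteness is exactly the condition of dependence described above.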

My aim in this talk is to spend some time thinking with you about the cloud and about what it portends for the digital humanities. But it’s difficult to talk about the cloud without also talking about infrastructure, in part because of the clear ways in which cloud-based services and models are dependent upon physical conduits, things in the world, that belie the cloud’s supposedly abstract, virtual, and ineffable nature. I’d like to draw the DH community’s attention to a set of conversations that are occurring both in DH and outside of it — in the realm of media studies, and in particular, the growing area of critical infrastructure studies. In connecting these conversations, I want to encourage us to think about how DH work relates to, or should relate to, issues of infrastructure — particularly as these issues involve larger concerns that have been raised in the humanities about DH work around instrumentalism and neoliberalism. My premise is that thinking about DH work within an infrastructural context may allow us both to focus on the work that DHers do so well — reflexive and theoretically informed building, making, and critique — and to build or rebuild our human, social, and scholarly communications infrastructure upon sturdier grounds of social justice.

It’s easy to see that an “infrastructural turn” has been growing over the past year in DH and allied fields. We can see it in evidence in the July 2016 King’s College symposium “Interrogating Infrastructure”; in the recent DH2016 panel on “Creating Feminist Infrastructure in the Digital Humanities”; in DH2016 presentations such as James Smithies’s “Full Stack DH,” which described his project to build a Virtual Research Environment on a Raspberry Pi; and in experiments such as my own team’s DH Box project. In media studies, we see this infrastructural turn in recent publications such as Tung-Hui Hu’s A Prehistory of the Cloud and Nicole Starosielski’s The Undersea Network; in a renewed focus, generally, on the material nature of computers; in the growth of the field of media archaeology; in calls across the academy to pay more attention to the social and political contexts of digital work; and in efforts to recover the diverse histories of early computing. This work encourages us to take account of computational work as enmeshed in the different levels of the “stack,” Benjamin H. Bratton’s term for the set of infrastructures — media infrastructures, data infrastructures, political infrastructures, physical infrastructures, and legal infrastructures — that have accreted over time into an accidental whole through what Bratton calls “planetary scale computing.” As more and more DH work moves to the cloud and becomes dependent on networked infrastructure, thinking about the protocols, dependencies, and inter-dependencies of the Stack can help us fruitfully shape our work as it relates both to allied scholarly disciplines and to larger publics.

What role should the digital humanities play in conversations about infrastructure? What particular insights does work in the field have to contribute to them? And to what extent are DHers already doing the work of infrastructure in the academy broadly, and the humanities more specifically? I will argue in this talk that DHers should engage the Cloud and its associated infrastructures critically, thinking about how the emergence of the Cloud, even as it makes possible new forms of networked connection, also foregrounds multiple risks. It’s my belief, as I’ll detail later in the talk, that DH should step back and reconsider its use of proprietary social networks, and that it should focus on building alternate forms of scholarly publishing and communication infrastructure that help move us away from proprietary networks where every interaction is always already commodified and where the network effect all too often puts marginalized populations at risk.

In his opening remarks at the “Interrogating Infrastructure” event — an event I did not attend, but which I have at least some sense of thanks to his online notes — Alan Liu positioned the topic of infrastructure as a key future direction for the digital humanities. He argued not just that the topic was well-suited to the field, but that it was one which DH was well-positioned to address. Infrastructure, as the set of social and technological systems undergirding many aspects of networked modern life, has, for Liu, the “potential to give us the same general purchase on social complexity that Stuart Hall, Raymond Williams, and others sought when they reached for their all-purpose interpretive and critical word, ‘culture.’”

I think Liu is right that DHers can and should pay increased attention to issues of infrastructure and the effects of that infrastructure on the larger communicative and meaning-making networks of contemporary society. And clearly, much work on infrastructure is already blending scholarship in new media studies, science and technology studies, and the digital humanities. I think here of Matthew Kirschenbaum’s work on forensic materiality and software platforms, Lori Emerson’s work on interfaces, Jentery Sayers’s work on prototyping the past, Jussi Parikka’s work on media archaeology, Simone Browne’s work on surveillance networks and race, and Kari Kraus’s work on speculative design. All of these scholars are already exploring the intersections of infrastructure, platform/material studies, design, and new media.

The past year has been notable within the emerging field of infrastructure studies, as scholars in the fields of new media studies and science and technology studies have published a range of books that put the infrastructure of the Cloud into critical and theoretical context. Across four of those books — Nicole Starosielski’s The Undersea Network; Tung-Hui Hu’s A Prehistory of the Cloud; John Durham Peters’s The Marvelous Clouds; and Benjamin H. Bratton’s The Stack — we see a range of approaches:

  • An examination of the physical infrastructure underlying virtual networks. Starosielski examines the undersea cables that continue to carry much internet traffic, while Hu looks at how fiber-optic network infrastructure has been “grafted” onto America’s aging railroad track system. In both cases, we see attention paid to the physical infrastructures of the internet that are often overlooked, if not purposefully hidden.
  • An exploration of how power plays across networked interfaces, infrastructures, and protocols — and particularly how the traditional laws of the nation-state become confused and overwritten across the liminal space of the web. Starosielski looks at cable stations across the Pacific, exploring past and present effects of colonized states; Hu examines what he calls the “sovereignty of data,” exploring how we “invest the cloud’s technology with cultural fantasies about security and participation.” And Bratton delineates, as I’ve noted earlier, the various layers of what he calls “the Stack.” In Bratton’s view, computation itself has become a “global infrastructure [that] contributes to an ungluing and delamination of land, governance, and territory, one from the other” (14).
  • A connection, drawn by Peters through what he calls “infrastructuralism,” of computing technologies and the environment. This involves partly a consideration of the effect of computing technology on the environment — what Bratton calls “the ponderous heaviness of Cloud computing” — and partly, through Peters’s book, a consideration of media as environment, as space and place through which we move.

Across all of these works, we see concerns over issues of power, capital and surveillance; the physical and commercial structures through which the phenomenon we refer to as “the network” is built; and the growing sense in which media and networked infrastructures have become constitutive of much of our experience in the world.

The cloud is blurring lines and connecting us in ways that have reshaped conventional boundaries. For instance, as Bratton considers issues of sovereignty, citizenship, the polis, and the network, he ponders the dividing lines between “citizen” and “user,” between subject and state, wondering whether the network itself provides for new understandings of citizenship. He asks:

What if effective citizenship in a polity were granted not according to categorical juridical identity but as a shifting status derived from any user’s generic relationship to the machine systems that bind that polity to itself? In other words, if the interfaces of the city address everyone as a “user,” then perhaps one’s status as a user is what really counts. The right to address and be addressed by the polity would be understood as some shared and portable relationship to common infrastructure. Properly scaled and codified, this by itself would be a significant (if also accidental) accomplishment of ubiquitous computing. From this perhaps we see less the articulation of citizenship for any one city, enclosed behind its walls, but of a “citizen” (Is that even still the right word?) of the global aggregate urban condition, a “citizen-user” of the vast, discontiguous city that striates Earth, built not only of buildings and roads but also perplexing grids and dense, fast data archipelagos. Could this aggregate “city” wrapping the planet serve as the condition, the grounded legitimate referent, from which another, more plasmic, universal suffrage can be derived and designed? Could this composite city-machine, based on the terms of mobility and immobility, a public ethics of energy and electrons, and unforeseeable manifestations of data sovereignty . . . provide for some kind of ambient homeland? If so, for whom and for what? (10, emphasis added)

The questions seething through Bratton’s book — especially those around citizenship, subjectivity, and participation in the techno-sphere — embody the kinds of questions DHers might ask of infrastructure. As enormous forces of capital and computation engender new networked publics around us, to what extent are those publics built on the grounds of equity and social justice? As DHers participate in these new cloud polities, to what extent are we asking Bratton’s question, “for whom and for what,” as we do our work?

DH has always been wildly various and multivalent, and its practices and methods range widely (some see this as a feature; others, as a bug. Count me on the side of those who appreciate DH’s capacious frame). The increasing prevalence of the cloud in our lives and works offers us a chance to intervene in the systems of media and communication developing around us. We can and should ask where and how DH insight might best contribute to scholarly conversations around infrastructure.

One possibility is the work on large-scale text, sound, and image corpora that many DHers — Franco Moretti, Ted Underwood, Tim Hitchcock, Andrew Piper, Richard Jean So, Mark Algee-Hewitt, Matthew Jockers, Tanya Clement, Lev Manovich, and many others — have been pursuing, often through larger infrastructural platforms such as the HathiTrust. Surely, this work involves issues of the cloud, infrastructure, and culture, and surely it builds on methods central to, and perhaps in some ways unique to, DH work. DHers excel at contributing to and taking advantage of this kind of networked infrastructure for scholarly work — look at how a set of national and international scholarly infrastructure projects — such as DARIAH-EU, Compute Canada, HathiTrust, and Europeana Research — are helping DH researchers do their work at scale and also to participate in larger public conversations.

These platforms, and this type of infrastructural work, are important. And they may be the answer for DH as it thinks about cloud-based infrastructure. But — aside from the fact that, as I have argued elsewhere, large-scale text mining too often stands in the public mind as a synecdoche of what DH is and should be — this kind of infrastructural work is sometimes hampered by the complex rights issues that attend cultural heritage materials, and these platforms often have a somewhat problematic relationship to access, offering member institutions one set or quality of resources, and offering the public another. Such platforms often embrace a stance of political neutrality that may be inadequate to the increasing complexity of the cloud. And so — perhaps for those reasons, and perhaps because of the direction of my own work — I’d like to consider other possibilities for DH in the cloud, as well.

Earlier, I described the action of the “sync” — the point where the user connects to a cloud-based service such as Dropbox, Gmail, iTunes, or Google Docs to upload and download files — as the quintessential act of the cloud. As DHers, we know and recognize that these systems do much more than update files — they check us in with that vast global network of users, update terms of agreement, and provide companies with a chance to flag illegal downloads. The sync is as much an act of corporate surveillance as it is an act of routine file maintenance.

As DHers, we know this and to some degree accept this in the same way that we know and accept our participation in proprietary networks like Twitter. It seems at times a cost of living in a cloud-based world.

But when we think about what DH is and what it can be, and of how it might relate to the cloud, we might consider that DHers are, among academics, perhaps best suited to reshape the nature of academic research itself. This work — often described as scholarly communication — has focused on the creation of new publishing interfaces and platforms; on the extension of humanities work to include alternatives to text-based argument; on the use of social media and blogging platforms to share in-process scholarship in public ways; on the consideration of collaborative work in the humanities; and on a reconsideration of what scholarship itself is and should be.

Perhaps the great work of DH is to envision alternate infrastructures for technical and scholarly work that help divorce us from systems of entrenched capital, that help move us away from our shared dependence on the set of proprietary service platforms — Twitter, Facebook, GitHub, Slack, Academia.edu — that have dominated scholarly communication in the humanities (and digital humanities) to date, and to recognize this shared dependence as such. Perhaps a central mission of DH is to build alternate infrastructures that are premised upon social and political understandings of the cloud, as articulated at least in part through scholarship in new media and science and technology studies.

This kind of work could help us address one of the most ironic gestures we see in current critiques of DH: harsh, outraged attacks on the supposed “neoliberalism” of DH, delivered by scholars through commercial proprietary platforms like Facebook and Twitter, or through online publication venues that use clickbait-y headlines to capture page views in the attention economy. It’s hard to see how the platform of delivery of those critiques does not detract at least a bit from their bite.

And yet if we think we are immune from this problem ourselves, we are wrong — this is an issue that affects not just new-media scholars or conservative humanists; it is undeniably present in the digital humanities community, as well. Yes, we have Humanist-L, DH Q&A, personal academic blogs, and multiple scholarly journals that we use to share work in the field. Yes, we are building new venues for open-access publishing such as the Open Library of Humanities. Yes, we are building out institutional and inter-institutional methods of conversation and connection such as MLA Commons and MLA CORE, not to mention institution-specific repositories.

But DHers also participate actively and enthusiastically in Facebook, Twitter, Instagram, and Slack, among others. Twitter has, for many, become the de facto meeting ground of the field. And there is an undeniable good here: a strong DH presence on these platforms has enabled DHers to share their work with larger publics. But these platforms also represent a missed opportunity for scholarly communication and a regrettable participation in the larger systems of capital accumulation that DH could potentially resist.

In “The Scandal of Digital Humanities,” a response to “Neoliberal Tools (and Archives): A Political History of Digital Humanities,” the essay published by Daniel Allington, Sarah Brouillette, and David Golumbia in the L.A. Review of Books, Brian Greenspan argues that digital humanities work is fundamentally aligned against the “strictly economic logic” of neoliberalism; he notes that much DH work resists the “pressure to commercialize” and in fact involves “either detourning commercial tools and products for scholarly purposes, or building Open Access archives, databases and platforms.” Greenspan remarks sardonically that this is why so many DH projects are “so often broken, unworking or unfinished, and far from anything ‘immediately usable by industry.’”

DH work, as Alan Liu and others have argued, presents to the academy a mode of engagement between the humanities and computational methods and tools that is self-reflexive and empowering. Building on that notion, Greenspan argues that:

DH involves close scrutiny of the affordances and constraints that govern most scholarly work today, whether they’re technical (relating to media, networks, platforms, interfaces, codes and databases), social (involving collaboration, authorial capital, copyright and IP, censorship and firewalls, viral memes, the idea of “the book,” audiences, literacies and competencies), or labour-related (emphasizing the often hidden work of students, librarians and archivists, programmers, techies, RAs, TAs and alt-ac workers).

“If anything,” Greenspan notes, “DH is guilty of making all too visible the dirty gears that drive the scholarly machine, along with the mechanic’s maintenance bill. . . . And that’s precisely its unique and urgent potential: by providing the possibility of apprehending these mechanisms fully, DH takes the first steps toward a genuinely materialist and radical critique of scholarship in the 21st century.”

To the extent that this critique can be baked into the building that digital humanists do — and I do think that this is one of the key aims of DH as a field and practice, particularly in the age of the cloud — Greenspan helps us see DH’s potential contribution to questions of infrastructure. Digital humanities work can indeed help us reposition the infrastructure of scholarship away from the formations we now have in place and towards a more purposeful and more resistant digital humanism that is grounded not just in non-commercial practices, but in anti-commercial practices. In this way, the strength of DH — its ability to peek into the black box of technological platforms — can be reinforced and can help the academy as it faces the onslaught of techno-capital from all sides. The need for this kind of work is urgent, as the drumbeat of constant reductions in state funding, certainly felt here in England but also in the U.S., forces institutions to adopt austerity measures of various kinds.

To some degree, DH is already doing this kind of work—and I don’t want to erase the important contributions of these projects by failing to mention them. For example, we can look at a range of representative projects — Zotero, which has provided an effective alternative to costly bibliographic software; Omeka, which has created an easy way to present cultural heritage objects; Mukurtu, which is designed specifically to take account of diverse cultural attitudes towards the sharing of heritage materials; Scalar, which encourages multi-modal and non-linear argument; Domain of One’s Own, which helps students familiarize themselves with hosting infrastructure and take some measure of control over their online persona; and a few of the projects I have been involved in — Manifold Scholarship, which is creating a new open-source platform for interactive scholarly monographs; Commons In A Box, which provides a free platform for academic social networks; DH Box, which opens DH computing software to communities without technical DH infrastructure; and Social Paper, which is planting seeds that may one day help us move away from Google Docs. Across all of these projects are the beginnings of an infrastructure for shared scholarly work that offers alternatives to commercial environments and platforms.

And yet, as Miriam Posner has noted, “the digital humanities [still] actually borrows a lot of its infrastructure . . . from models developed for business applications.” For many, the mere fact that DH involves the kind of technical training that may be very much in line with marketplace demands is evidence of its complicity with the forces of neoliberalism in the academy. How can we ensure that the infrastructure DH builds is self-reflexive infrastructure for scholarly practice and communication; that its builders ask themselves Bratton’s question — for whom and for what — at every turn; that it foregrounds humanistic research questions and resists the persistent encroachment of capital into higher education?

I don’t have answers, but I can suggest starting points:

  • We need a re-articulation of DH technical practice as an essentially reflexive endeavor. DHers tend to approach technological systems by seeking to understand them, to historicize them, to unpack the computational and ethical logics that structure them. This gives DHers a good starting place for building out more ethical tools for scholarly communication. But we need to make this case more powerfully to the public.
  • We need open and robust conversations about inclusive practices. As recent years have shown, DHers need to pay careful attention to the make-up of their own projects and conferences, seeking to counter the forces of structural racism and gender bias. We might move this conversation forward by consciously seeking to expand our project teams and by ensuring that our projects engage issues of diversity and difference.
  • We should expand our notions of what we mean by infrastructure, following Jacqueline Wernimont’s argument at DH2016 in her talk “Feminist Infrastructure as Metamorphic Infrastructure.” There, Jacque described a concept of feminist infrastructure that commits to people, that is built upon relational accountability, that embeds ideals of collaboration, collectivity, and care, and that, as Jacque notes about FemTechNet’s charter, foregrounds pedagogies that are anti-racist, queer, decolonizing, trans-feminist, and focused on civil rights.
  • We should continue to build infrastructures and infrastructural conversations that encourage the growth of global DH; Alex Gil’s minimal computing is a wonderful example of this, in that it is both an infrastructural philosophy and a set of technological platforms — such as Ed, his Jekyll theme designed to produce minimal textual editions. The Gacaca Community Justice archive that we heard about from Marilyn Deegan on Thursday is another wonderful example.
  • We should speak more about, and continue to think through, the kind of education and training that many of us provide for our colleagues and students at our universities, and situate that work within the context of critical pedagogy, ensuring that when we teach our students, we do so by emphasizing humanities values. Our students need to use DH methods to explore and explicate ambiguity rather than to flatten it. I think we do this already, but our academic colleagues sometimes miss this point.
  • We should take seriously the proposition put forward by Geoffrey Rockwell and Stephen Ramsay that digital projects must make arguments if they are to be taken seriously. And to the extent that they can make arguments in their conception and function, they will help explain what DH is and can be.

And that, I think, is the challenge for DH infrastructure: it needs to make an argument, and it needs to make an argument through its projects, as Tara McPherson argued in 2010. Many of the projects I mentioned above do just that — think of Alex Gil’s minimal computing, of the way it embeds an argument about access and infrastructure into its codebase. Think about James Smithies’s attempt to build a virtual research environment on an affordable Raspberry Pi. Consider the DH Box team’s efforts to make DH tools available to institutions that don’t have reliable networked infrastructure. Consider how Commons In A Box offers academics an alternative to Facebook, and how it has been used by scholarly associations such as the Modern Language Association to build out alternatives to corporate sites like academia.edu.

There are limits, of course. DH Box, though it is available as free software, currently runs on Amazon Web Services. Domain of One’s Own, similarly, is a project that is ultimately based on commercial web hosting space. As Tung-Hui Hu reminds us, network infrastructure is often literally laid on top of older commercial infrastructure. It’s hard to live completely off the commercial grid, to live on the bare wires of the network — especially if we want to be involved in larger public conversations. The cloud calls to us to sync with it, and that call is hard to resist.

And there are other challenges. Free software communities, at least those in the US with which I am most familiar, are dominated by white males and are not always welcoming to women and minorities (something that I think and hope is changing through organizations such as PyLadies and Black Girls Code).

And the work is painful. We are using Twitter, and not Diaspora or App.net, for a reason. The slick, seductive surfaces, the smooth user interfaces of commercial social media platforms are not just hard to resist — they are where other conversations are happening. Removing ourselves from those platforms would cost DHers exposure and — were more academics to follow — would risk moving academic discourse even farther from the public sphere than it already is.

But as Eben Moglen pointed out in his talk “Freedom in the Cloud” — the talk that inspired a group of NYU undergraduates to create the Twitter alternative Diaspora — when we use Gmail, it comes with the “free” service of semantic email analysis by Google, for Google; when we get free email and document storage, we get a “free” service that is “spying all the time”; and location-based tweets may be used to squash protest. We know this — everyone knows this — but we could do more to combat the force of the commercial cloud.

DH can and will be useful to the humanities and to the academy. But it has the opportunity to consider what the next generation of scholarly communication platforms is and can be. It has the opportunity, and perhaps the responsibility, to approach questions of infrastructure with political and social contexts in mind — to consider, for instance, how its infrastructure can be modeled, to use language from Elizabeth Losh, Jacqueline Wernimont, Laura Wexler, and Hong-An Wu, upon feminist values, embracing “genuinely messy, heterogeneous, and contentious pluralism” in its design. Or, to return to the cloud, to offer us new visions of what it means to sync with the cloud and with the world. DH can and perhaps should be a primary force for resisting the entrance of capital into the ecosystem of educational institutions, by insisting upon critical engagements with commercial technologies. We can and must interrupt the sync.

Resisting the smooth services of the corporate web — building tools, platforms, and communities that embrace core humanities values of discourse, dialogue, inclusivity, and intellectual exchange — perhaps represents another side of what Miriam Posner has called the “radical, unrealized potential of the digital humanities.”

Were we to engage in that work — and I think we are already doing it, just not as purposefully and mindfully as we might — we would in fact have made a significant contribution to the world and would perhaps help dissipate the storm clouds of our times.

* * *

I’m grateful to Lauren F. Klein, Kari Kraus, and Brian Croxall for their comments on an earlier draft of this paper.

Facts, Patterns, Methods, Meaning: Public Knowledge Building in the Digital Humanities

Note: This talk was delivered as a keynote address at the University of Wisconsin–Madison as part of the Digital Humanities Plus Art: Going Public symposium on April 17, 2015. I am grateful to the organizers of the symposium for generously hosting me and occasioning these thoughts.


Things have a way of coming full circle, of beginning where they have ended, and so I want to start today with Ralph Waldo Emerson, a man who thought about beginnings and endings, circles and forms. “The eye is the first circle,” he wrote. “The horizon which it forms is the second; and throughout nature this primary figure is repeated without end” (“Circles”).

Circles select and enfold, but also exclude, demarcating a perimeter, an in and an out. “The eye is the first circle”; it is the lens through which those of us lucky enough to have eyesight are able to perceive the world. And yet the eye both makes sight possible and is bounded by a second circle formed by the horizon of our vision, a circle that discloses both constraints and possibilities. We know that the landscape extends beyond the visible horizon, but we are limited by our own perceptions, even as they make possible all that we know. And this figure, this double act of knowing and unknowing, seeing and unseeing, taking in possibilities and limits in the same glance, is the mark of our experience in the world. There is always more to learn, always more outside the reach of our knowledge, always more beyond the edge of our sight.

Emerson teaches us to be humble in the face of such knowledge. “Every action,” he writes, “admits of being outdone. Our life is an apprenticeship to the truth, that around every circle another can be drawn; that there is no end in nature, but every end is a beginning; that there is always another dawn risen on mid-noon, and under every deep a lower deep opens.”

Perhaps it is telling that Emerson’s figure here involves a depth to be plumbed rather than a height to be climbed. For at this moment in the development of the digital humanities, we are pursuing new paths to knowledge, extending the horizons of our abilities with new tools. This is, obviously, not a teleological or progressive journey. We have exciting new tools and provocative new methods, but they are not necessarily leading us to higher truths. We are not marching along straight and ever-improving lines of progress. But we are producing tools that conform to new directions in our thought, and those tools can usefully reset our perspectives, helping us look with new sight on things we thought we understood. They can give us new vantage points and new angles from which we can explore the depths around us. And, of course, even as they make new sights possible, we remember Emerson and note that they foreclose, or at least obscure, others.

Emerson’s aphorisms provide useful reminders both for digital humanists surveying the field and for scholars observing it from its horizons. In today’s talk, I want to think through states of knowing in the digital humanities, situating our practices within larger histories of knowledge production. My talk has three parts:

  1. A discussion of a few approaches to text analysis and their relation to larger perceptions about what DH is and does, and how DH knowledge is produced;
  2. A discussion of some practitioners who are blending these approaches in provocative ways;
  3. A wider view of experimental knowledge in DH, with the suggestion of a new grounding, based in the arts, for future DH public work.

I want to start by discussing current directions of DH research, and in particular to spend some time poking a bit at one of the most influential and vibrant areas of DH — literary text analysis, the type of DH research that most often stands in as a synecdoche for the larger whole of the digital humanities. I do this despite the fact that focusing on literary text analysis risks occluding the many other rich areas of digital humanities work, including geospatial mapping and analysis, data visualization, text encoding and scholarly editing, digital archives and preservation, digital forensics, networked rhetoric, digital pedagogy, advanced processing of image, video, and audio files, and 3D modeling and fabrication, among others. And I should note that I do this despite the fact that my own DH work does not center on the area of literary text analysis.

One reason to focus on text analysis today is that when we talk about DH methods and DH work in the public sphere, literary text analysis of large corpora in particular is over-identified with DH, especially in the popular press, but also in the academy. There, we often see large-scale text analysis work clothed in the rhetoric of discovery, with DHers described as daring adventurers scaling the cliffs of computational heights. A 2011 New York Times book review of a Stanford Literary Lab pamphlet described, tongue-in-cheek, Franco Moretti’s supposed attempt to appear “now as literature’s Linnaeus (taxonomizing a vast new trove of data), now as Vesalius (exposing its essential skeleton), now as Galileo (revealing and reordering the universe of books), now as Darwin (seeking ‘a law of literary ­evolution’)” (Schulz). All that’s missing, it would seem, is mention of an Indiana-Jones-esque beaten fedora.

If literary text mining operates as a kind of DH imaginary in popular discourse around the field, one point I want to make today is that it is an impoverished version of text analysis, or at the very least a one-sided and incomplete one. As a way of complicating that picture, I want to sketch out two prominent areas of research in DH literary text analysis, one premised (not always, but often) upon scientific principles of experimentation, using analysis of large-scale textual corpora to uncover previously unknown, invisible, or under-remarked-upon patterns in texts across broad swaths of time. Known colloquially and collectively through Franco Moretti’s term “distant reading,” Matthew Jockers’s term “macroanalysis,” or Jean-Baptiste Michel and Erez Lieberman Aiden’s term “culturomics,” this approach is predicated on an encounter with texts at scale. As Franco Moretti noted in his essay “The Slaughterhouse of Literature,” describing the move towards distant reading:

Knowing two hundred novels is already difficult. Twenty thousand? How can we do it, what does “knowledge” mean, in this new scenario? One thing for sure: it cannot mean the very close reading of very few texts—secularized theology, really (“canon”!)—that has radiated from the cheerful town of New Haven over the whole field of literary studies. A larger literary history requires other skills: sampling; statistics; work with series, titles, concordances, incipits. (208-209)

This is knowledge work at a new scale, work that requires, as Moretti notes, quantitative tools of analysis.

Opposed to this — though less often discussed — is a different form of DH work, one based not on an empirical search for facts and patterns, but rather on the deliberate mangling of those very facts and patterns, a conscious interference with the computational artifact, a mode of investigation based not on hypothesis and experiment in search of proof but rather on deformance, alteration, randomness, and play. This form of DH research aims to align computational research with humanistic principles, with a goal not of unearthing facts, but rather of undermining assumptions, laying bare the social, political, historical, computational, and literary constructs that underlie digital texts. And sometimes, it simply aims to highlight the profound oddities of digital textuality. This work, which has been carried on for decades by scholar-practitioners such as Jerome McGann, Johanna Drucker, Bethany Nowviskie, Stephen Ramsay, and Mark Sample, has been called by many names — McGann terms it deformative criticism, Johanna Drucker and Bethany Nowviskie call it speculative computing, and Stephen Ramsay calls it “algorithmic criticism” — and though there are minor differences among these conceptions, they represent as a whole a form of DH that, while well-known and respected within DH circles, is not acknowledged frequently enough outside of them, especially in the depictions of DH that we see in the popular press or in the caricatures of DH we see in Twitter flame wars. It is especially unseen, I would suggest, in the academy itself, where scholars hostile to DH work tend to miss the implications of deformative textual analysis, focusing their ire on that side of quantitative literary analysis that seeks most strongly to align itself with scientific practices.

I’ve set up a rough binary here, and it’s one I will complicate in multiple ways. But before I do, I want to walk through some parts of the lineage of each of these areas as a way of grounding today’s conversation.

Digital humanities work in large-scale text analysis of course has roots in longstanding areas of humanities computing and computational linguistics. But it was given profound inspiration in 2005 with the publication of Franco Moretti’s Graphs, Maps, Trees, a text that argued for a new approach to textual analysis called “distant reading” where “distance is […] not an obstacle, but a specific form of knowledge” (1). Moretti’s work, at this time, has a wonderful, suggestive style, a style imbued with possibility and play, a style full of posed but unanswered questions. The graphs, maps, and trees of his title proposed various models for the life cycles of literary texts; the book contains strong statements about the need for the kind of work it does, but it also resists conclusions and does not overly stockpile evidence in support of its claims. As Moretti himself put it, addressing the “conceptual eclecticism” of his work, “opening new conceptual possibilities seemed more important than justifying them in every detail.” This was a work of scholarship meant to inspire and provoke, not to present proofs.

Eight years later, in 2013, Matthew Jockers, one of Moretti’s colleagues at Stanford who had by then moved on to a professorship at the University of Nebraska, published Macroanalysis: Digital Methods & Literary History, a text that employed a different register to present its claims, beginning with chapter 1, which is titled “Revolution.” In Jockers’s text, we see a hardening of Moretti’s register, a tightening up and sharpening of the meandering suggestiveness that characterized Moretti’s writing. Where Moretti’s slim Graphs, Maps, Trees was elliptical and suggestive, Jockers’s Macroanalysis was more pointed, seeking to marshal strong evidence in support of its claims. In the book, Jockers suggests that literary studies should follow scientific models of evidence, testing, and proof; he writes, “The conclusions we reach as literary scholars are rarely ‘testable’ in the way that scientific conclusions are testable. And the conclusions we reach as literary scholars are rarely ‘repeatable’ in the way that scientific experiments are repeatable” (6). Clearly, this is a problem for Jockers; he argues that literary scholars must engage the “massive digital corpora [that] offer us unprecedented access to the literary record and invite, even demand, a new type of evidence gathering and meaning making” (8). And as he continues, he deploys a remarkable metaphor:

Today’s student of literature must be adept at reading and gathering evidence from individual texts and equally adept at accessing and mining digital-text repositories. And mining here really is the key word in context. Literary scholars must learn to go beyond search. In search, we go after a single nugget, carefully panning in the river of prose. At the risk of giving offense to the environmentalists, what is needed now is the literary equivalent of open-pit mining or hydraulicking. . . . the sheer amount of data makes search ineffectual as a means of evidence gathering. Close reading, digital searching, will continue to reveal nuggets, while the deeper veins lie buried beneath the mass of gravel layered above. What are required are methods for aggregating and making sense out of both the nuggets and the tailings. . . . More interesting, more exciting, than panning for nuggets in digital archives is to go beyond the pan and exploit the trommel of computation to process, condense, deform, and analyze the deeper strata from which these nuggets were born, to unearth, for the first time, what these corpora *really* contain. (9-10; emphasis mine)

Even forgiving Jockers some amount of poetic license, this is a really remarkable extended metaphor, one that figures the process of computational literary work as a strip-mining operation that rips out layers of rock and soil to reach the rich mineral strata of meaning below, which are then presumably extracted in systematic fashion until the mine is emptied of value, its natural resources depleted. One doesn’t need to be an environmentalist to be a bit uneasy about such a scenario.

What’s really notable to me here, though, is the immense pressure this passage reveals. And I refer not to the pressure Jockers’s computational drills are exerting on the pastoral literary landscape, but rather to what his rhetoric reveals about the increasing pressure on DH researchers to find, present, and demonstrate results. Clearly, between Moretti’s 2005 preliminary thought experiments and Jockers’s 2013 strip-mining expedition, the ground had shifted.

In his 2010 blog post “Where’s the Beef? Does Digital Humanities Have to Answer Questions?” digital historian Tom Scheinfeldt compares the current moment in the digital humanities to eighteenth-century work in natural philosophy, when experiments with microscopes, air pumps, and electrical machines were, at first, perceived as nothing more than parlor tricks before they were revealed as useful in what we would now call scientific experimentation. Scheinfeldt writes:

Sometimes new tools are built to answer pre-existing questions. Sometimes, as in the case of Hauksbee’s electrical machine, new questions and answers are the byproduct of the creation of new tools. Sometimes it takes a while; in the meantime, tools themselves and the whiz-bang effects they produce must be the focus of scholarly attention.

Eventually digital humanities must make arguments. It has to answer questions. But yet? Like 18th century natural philosophers confronted with a deluge of strange new tools like microscopes, air pumps, and electrical machines, maybe we need time to articulate our digital apparatus, to produce new phenomena that we can neither anticipate nor explain immediately.

One can see what Scheinfeldt describes clearly in Moretti’s work: a sense of wonder, showmanship, and play in the new perspectives that computational methods have uncovered. In Jockers, we see a more focused, precise, scientifically oriented apparatus aimed at testable, repeatable results. Jockers and Moretti are hardly the only DHers exploring large datasets — Ted Underwood, Andrew Goldstone, Andrew Piper, Tanya Clement, Lisa Rhody, and Ben Schmidt, among many others, come to mind, each engaging such work in fascinating ways — but Moretti and Jockers (and their labs) may stand in for a larger group of scholars using similar methods to explore patterns in massive groups of texts.

I’ve said that I would describe two discrete areas of DH literary text analysis work. Having outlined what I would characterize as the area of the field proceeding on proto-scientific assumptions, I would now like to turn to a group of DH thinkers who, while occasionally using similar tools, are focused on forms of computational literary analysis that in many ways take a diametrically opposed path to the digital text by seeking to disrupt and play with the structures of the text.

In their 1999 piece published in New Literary History, Jerome McGann and Lisa Samuels outline their concept of “deformative criticism,” a hermeneutic approach to digital textuality that, rather than seeking to discover the underlying structure of texts through exposition, seeks to “expose the poem’s possibilities of meaning” through techniques such as reading backward and otherwise altering and rearranging the sequencing of words in a text. “Deformative” moves such as these, McGann and Samuels argue, “reinvestigate the terms in which critical commentary will be undertaken” (116). Many critics working in this vein argue that all interpretative readings are deformative, reformulating texts in the process of interpreting them.
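
To give a concrete sense of how simple such deformative moves are to perform computationally, here is a toy sketch of my own (not McGann and Samuels’s procedure); the sample stanza is merely illustrative:

```python
# Toy deformances in the spirit of McGann and Samuels: rearrange a text
# in order to estrange it and "expose the poem's possibilities of meaning."

def reverse_lines(poem: str) -> str:
    """Read the poem backward, line by line."""
    return "\n".join(reversed(poem.splitlines()))

def reverse_words(poem: str) -> str:
    """Reverse the order of words within each line."""
    return "\n".join(
        " ".join(reversed(line.split())) for line in poem.splitlines()
    )

stanza = "Because I could not stop for Death\nHe kindly stopped for me"
print(reverse_lines(stanza))
print()
print(reverse_words(stanza))
```

The point of such a script is not analysis in the usual sense; the rearranged text is an occasion for rereading, which is precisely the hermeneutic wager deformative critics describe.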

In her work, Johanna Drucker has collaborated with Bethany Nowviskie and others to explore what she terms “speculative computing,” which is “driven by a commitment to interpretation-as-deformance in a tradition that has its roots in parody, play, and critical methods such as those of the Situationist International, Oulipo, and the longer tradition of ‘pataphysics with its emphasis on ‘the particular’ over ‘the general'” (>>>>PG#). Drucker goes on to differentiate speculative computing from quantitative processes based on “standard, repeatable, mathematical and logical procedures” by exploring “patacritical” methods, which privilege exceptions to rules and deviations from norms. Speculative computing, according to Drucker, “lets go of the positivist underpinnings of the Anglo-analytic mode of epistemological inquiry,” creating imaginary solutions that suggest generative possibilities rather than answers. Drucker writes:

Humanistic research takes the approach that a thesis is an instrument for exposing what one doesn’t know. The ‘patacritical concept of imaginary solutions isn’t an act of make-believe but an epistemological move, much closer to the making-strange of the early-twentieth century avant-garde. It forces a reconceptualization of premises and parameters, not a reassessment of means and outcomes. (SpecLab 27)

Drucker frames her approach in opposition to the rationalized, positivistic assumptions of the scientific method, embracing instead randomness and play. This is also the approach that Stephen Ramsay takes in his book Reading Machines. Arguing for what he terms “algorithmic criticism,” Ramsay writes that “[text analysis] must endeavor to assist the critic in the unfolding of interpretative possibilities” (10). Whereas Drucker seeks everywhere to undermine the positivist underpinnings of digital tools, creating not “digital tools in humanities contexts,” but rather “humanities tools in digital contexts” (SpecLab 25), Ramsay argues that “the narrowing constraints of computational logic–the irreducible tendency of the computer toward enumeration, measurement, and verification–is fully compatible” with a criticism that seeks to “employ conjecture . . . in order that the matter might become richer, deeper, and ever more complicated” (16). Because the algorithmic critic navigates the productive constraints of code to create the “deformative machine” from which she draws insights, the “hermeneutics of ‘what is’ becomes mingled with the hermeneutics of ‘how to’” (63).

And Mark Sample, in his “Notes Toward a Deformed Humanities,” proposes the act of deformance — of breaking things — as a creative-critical intervention, one premised on breaking as a way of knowing. Sample’s projects — which include Hacking the Accident, an Oulipo-inspired version of the edited collection Hacking the Academy, and Disembargo, a project that reveals Sample’s dissertation as it “emerg[es] from a self-imposed six-year embargo, one letter at a time,” as well as a host of Twitter bots that mash together a variety of literary and information sources — all demonstrate an inspired focus on interpretation as performed through creative computational expression.

I’ve discussed two major approaches to literary text analysis today — likely not without some reductive description — but I would like to turn now to the conference theme of “Going Public,” as each of these approaches takes up that theme in different ways, using platforms, methods, and models to foster more open and public DH communication.

Deformative work is often performed in public — witness Mark Sample’s Twitter bots and generative texts, which operate in real time and interact with the public, at times even forming themselves in response to public speech.

Text mining scholars, with their focus on exploration, discovery, proof, and tool development, are admirably public in sharing evidence and code; just a few months ago, we witnessed one of the most fascinating controversies of recent years in DH, as DH scholar Annie Swafford raised questions about Matthew Jockers’s tool Syuzhet. Jockers had set out to build on Kurt Vonnegut’s lecture “The Shapes of Stories.” There, Vonnegut sketched what he described as the basic shapes of a number of essential story plots; following the arc of the main character’s fortunes, he suggested, we could discern a number of basic plot structures used repeatedly in various works of fiction, such as “Man in Hole,” “Boy Meets Girl,” and “From Bad to Worse.”

Jockers’s blog post described his use of Syuzhet, a package he wrote for the statistical software R and released publicly on GitHub. Because the code was available and public, Swafford was able to download it and experiment with it; she charged that the tool had major faults, and the ensuing discussion led to some sharp disagreements about the tool itself and Jockers’s findings.
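
Because the procedure itself is simple to describe, a rough sketch may help; this is my own toy illustration in Python, not Jockers’s R code, and the lexicon, window size, and sample “story” are all invented for the example: score each sentence for sentiment, then smooth the noisy sequence of scores to expose a putative plot shape.

```python
# A toy sketch of a Syuzhet-like sentiment arc (illustrative Python, not
# Jockers's R package). Lexicon, window, and story are assumptions.
from statistics import mean

POSITIVE = {"hope", "joy", "wins", "happy"}    # toy sentiment lexicon
NEGATIVE = {"hole", "fear", "death", "falls"}  # toy sentiment lexicon

def sentence_score(sentence: str) -> int:
    """Crude lexicon score: +1 per positive word, -1 per negative word."""
    words = [w.strip(".,;!?").lower() for w in sentence.split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def plot_arc(sentences: list[str], window: int = 1) -> list[float]:
    """Moving-average smoothing of the score sequence. (Syuzhet also offered
    a Fourier-based low-pass filter, a step Swafford's critique questioned.)"""
    scores = [sentence_score(s) for s in sentences]
    return [
        mean(scores[max(0, i - window): i + window + 1])
        for i in range(len(scores))
    ]

story = [
    "A man full of hope and joy.",
    "He falls into a hole.",
    "Fear and death loom below.",
    "He climbs out, wins the day, and is happy at last.",
]
print(plot_arc(story))  # dips, then recovers: a crude "Man in Hole" shape
```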

Though Jockers wound up backing down from his earlier claims, the episode was fascinating as a moment in which in-progress work was presented, tested, and defended. This is of course nothing new in the sciences, but it put the reproducibility of claims in DH to a public test.

Having described these two areas of DH literary text analysis, one employing scientific models and seeking reproducible results and the other seeking to undermine the assumptions of the very platforms through which digital texts are constructed, I would like to finally complicate that binary and discuss some DH practitioners who are blending these approaches in fascinating ways.

First, I will turn to the work of Lisa Rhody, whose work on topic modeling of figurative language aims to investigate the very assumptions of the algorithms used in topic modeling. Topic modeling is a technique employed by Jockers and many others to reveal latent patterns in texts; it uses probabilistic algorithms to display a kind of topic-based guide to language in the text, tracking the play of similar concepts across it. Rhody’s project, as she writes, “illustrates how figurative language resists thematic topic assignments and by doing so, effectively increases the attractiveness of topic modeling as a methodological tool for literary analysis of poetic texts.” Using a tool that was designed to work with texts that contain little or no figurative language, Rhody’s study produces failure, but useful failure; as she writes, “topic modeling as a methodology, particularly in the case of highly-figurative language texts like poetry, can help us to get to new questions and discoveries — not because topic modeling works perfectly, but because poetry causes it to fail in ways that are potentially productive for literary scholars.”
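
For readers who have not run a topic model, a minimal sketch of the basic workflow may be useful; this is my own illustration using scikit-learn’s LDA implementation, with a toy corpus and parameters, not the pipeline of any scholar mentioned here:

```python
# A minimal sketch of topic modeling with scikit-learn's LDA implementation.
# Corpus, topic count, and preprocessing are toy assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the sea waves crash upon the shore at night",
    "ships sail the sea beneath a pale moon",
    "the factory machines hum with steam and iron",
    "iron rails and steam engines cross the land",
]

# Turn documents into a word-count matrix.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)

# Fit a two-topic model; LDA treats each topic as a distribution over words.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Print the top words per topic, the usual "topic-based guide" to a corpus.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"topic {k}: {', '.join(top)}")
```

Each “topic” is simply a probability distribution over the vocabulary, inferred from word co-occurrence; Rhody’s point is that figurative language scrambles precisely the co-occurrence patterns such distributions rely upon.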

Second, I will highlight the work of Micki Kaufman, a doctoral student in History at the CUNY Graduate Center with whom I’ve had the pleasure of working as she investigates memcons and telcons (memoranda of conversations and telephone transcripts) from the Digital National Security Archive’s Kissinger Collection. In her project “Quantifying Kissinger,” Micki has begun to explore some fascinating ways of looking at, visualizing, and even hearing topic models, a mode of inquiry that, I would suggest, foregrounds the subjective, experiential approach championed by Drucker without sacrificing the utility of topic modeling and data visualization as investigative tools. Micki will be presenting on this work in January at the 2016 Modern Language Association Convention in Austin, Texas, in a panel called “Weird DH.” I think it’s very promising.

Finally, I want to mention Jeff Binder, another student of mine — a doctoral student in English at the CUNY Graduate Center — who has been working with Collin Jennings, a graduate student at NYU, on the Networked Corpus project, which aims to map topic models onto the texts they model and to compare topic models of Adam Smith’s Wealth of Nations to the index published with the book. What this project produces, in the end, is a critical reflection on topic modeling itself, using it not necessarily to examine the main body of the text but rather to explore the alternate system of textual analysis presented by the book’s index.

I single out these three practitioners among the many wonderful scholars doing work in this area primarily because their practices, to my mind, unite the two approaches to text analysis that I have described thus far. They use the computational tools of the proto-scientific group but in self-reflexive ways that embody the approach of deformative criticism, aiming to highlight interpretative complexity and ambiguity.

Johanna Drucker has argued that many digital tools are premised upon systems that make them poor fits for humanities inquiry:

Tools for humanities work have evolved considerably in the last decade, but during that same period a host of protocols for information visualization, data mining, geospatial representation, and other research instruments have been absorbed from disciplines whose epistemological foundations and fundamental values are at odds with, or even hostile to, the humanities. Positivistic, strictly quantitative, mechanistic, reductive and literal, these visualization and processing techniques preclude humanistic methods from their operations because of the very assumptions on which they are designed: that objects of knowledge can be understood as self-identical, self-evident, ahistorical, and autonomous. (“Humanistic Theory”)

Drucker calls for a new phase of digital humanities work, one that embodies a humanities-based approach to technology and interpretation. She writes:

I am trying to call for a next phase of digital humanities that would synthesize method and theory into ways of doing as thinking. . . . The challenge is to shift humanistic study from attention to the effects of technology (from readings of social media, games, narrative, personae, digital texts, images, environments), to a humanistically informed theory of the making of technology (a humanistic computing at the level of design, modeling of information architecture, data types, interface, and protocols). (“Humanistic Theory”)

By turning, in Drucker’s terms, from data to capta — from the presentation of data as transparent, indexical fact to an open and explicit acknowledgement of the socially constructed nature of information — and by using new DH tools and methods at times in ways that test the visible and occluded assumptions that structure them, these three junior scholars are moving us toward a new and exciting phase of digital humanities work.

If humanities text-mining work often proceeds according to the scientific method, striving to test hypotheses and create reproducible results, its genealogies lie in the work of natural philosophy and the various microscopes, air pumps, and electrical machines mentioned by Tom Scheinfeldt and described in depth in books like Steven Shapin and Simon Schaffer’s Leviathan and the Air-Pump. DH work, in fact, is often framed in terms of this genealogy, with the current moment being compared to the rise of science and experimentation with new tools.

As but one example, Ted Underwood, in response to the Syuzhet controversy and ensuing discussions about experimental methods, tweeted:

[tweet embedded in the original post; not reproduced here]

In the remaining section of this talk, I want to suggest an alternate genealogy for this moment, one that, although it has ties to that same early work of natural philosophy, might help ground digital humanities practice in a new frame. I will return to Emerson for a moment, to his statement that “The eye is the first circle; the horizon which it forms is the second; and throughout nature this primary figure is repeated without end.”

And so I want to explore pre-photographic experimentation with image-making as a way of suggesting new grounding for DH.

In 1839, Louis Daguerre announced the invention of the daguerreotype to the world, a moment of technical triumph that occluded a larger history of experiment. As the art historian Geoffrey Batchen has shown, the daguerreotype was, at the moment of its announcement, only one of a number of competing photographic technologies. The camera obscura had allowed artists to create replications of the world through a lens for centuries, but no one had been able to *fix* the image, to make it last, to make it permanent. The project to do so was technical, artistic, and hermeneutic: while experimenters tried different methods and materials to fix camera images on paper and metal, they did so with the confidence that the camera was an instrument of truth, a tool that could help human beings see the world from an unbiased, divine perspective. Daguerre himself was a showman, a painter of theatrical dioramas who had become interested in image-making through that realm.

And in fact, the modern negative-positive photograph descends not from the daguerreotype but from what was called the calotype, a picture-making technology developed in Britain by William Henry Fox Talbot. While daguerreotypes were one-of-a-kind positive images that could not be reproduced, made on expensive copper plates coated with silver halide and developed over mercury fumes, calotypes were reproducible, negative-positive paper prints. Daguerreotypes, however, produced much finer gradations of tone and detail than calotypes. As a photographic practice, daguerreotypy grew more quickly in popularity, in part because it produced more detailed images and in part because Talbot restricted the spread of his own technology by enforcing his patent. Daguerre, meanwhile, sold his rights to the French government in exchange for a lifetime pension, though he retained his patent rights in Britain; his announcement in 1839 marked the release of his image-making technology to the world.

In his 2002 examination of Henry Talbot's work, "A Philosophical Window," Batchen notes that the specimens of early photography — the failed experiments, the pictures that were never fixed, the images that faded and grew obscure — have been viewed by art historians only as indices of technical progression towards the invention of the photographic camera, rather than as art objects in and of themselves. Looking at Talbot's pictures critically, and taking Talbot seriously as an artist working with a camera, Batchen finds a conscious image-maker whose work should have relevance to us today.

Batchen focuses on one of Talbot's early photographs, "Latticed Window (with the Camera Obscura), August 1835." The photograph contains a note: "when first made, the squares of glafs [sic] about 200 in number could be counted, with the help of a lens."

[Image: William Henry Fox Talbot, "Latticed Window (with the Camera Obscura)," August 1835]
Batchen performs a fantastic close reading of this note, highlighting Talbot's instructions to the viewer: look at the photograph first from afar, and then up close with the aid of a lens. This set of instructions, claims Batchen,

anticipates, and even insists on, the mobilization of the viewer's eye, moving it back and forth, up and down, above the image. We are asked to see his picture first with the naked eye and then by means of an optical prosthesis . . . The attempt to improve one's power of observation by looking through a lens is also a concession that the naked eye alone can no longer be guaranteed to provide the viewer with sufficient knowledge of the thing being looked at. It speaks to the insufficiency of sight, even while making us, through the accompanying shifts of scale and distortions of image that come with magnification, more self-conscious about the physical act of looking. (101-102)

Batchen’s comments here, focusing on scale and perspective, showcasing disorientations produced by new angles of vision, might remind us of Moretti’s discussions of scale, of the need for a new type of seeing to take account of thousands of texts. And indeed, amazingly, Talbot, the progenitor of the modern photograph, was also tied to early computing. He was a friend of Charles Babbage’s, whose Difference Machine is often described as the world’s first mechanical computer. Talbot made photos of machine-made lace and sent them to Babbage. Babbage, Batchen reports, exhibited some of Talbot’s prints in his drawing room, in the vicinity of his Difference Engine, making it likely that early computing and early photography were experienced together in his drawing room (107).

In his discussion of Talbot's lattice-window photograph, Batchen notes that Talbot indeed narrativizes our gaze. He writes:

So, for Talbot, the subject of this picture is, first, the activity of our seeing it, and second, the window and latticed panes of glass, not the landscape we can dimly spy through it. He makes us peer closely at the surface of his print, until stopped by the paper fibres themselves and at the framing of his window, but no further. And what do we see there? Is ‘photography’ the white lines or the lilac ground, or is it to be located in the gestalt between them? (102)

And where, we can ask, is DH to be located? Do we even know the foreground and background between which we can locate its gestalt?


This, I think, is exactly the question we need to ask as we consider where DH is moving and where it should go.

One thing we can do is think about DH as the gestalt between gazes — not distant reading, not close reading, but the dizzy shape of self-reflexive movement between them. Though technology plays a part, it is critical reflection on that technology that can account for new, provocative digital humanities approaches.

And so, finally, I return to Emerson. The eye is the first circle, the horizon which it forms is the second. Let us plumb the space between.

 

Works Cited

Batchen, Geoffrey. “A Philosophical Window.” History of Photography 26.2 (2002): 100-112. Print.

Drucker, Johanna. “Humanities Approaches to Graphical Display.” Digital Humanities Quarterly 5.1 (2011). Web. 19 Apr. 2015.

—–. “Humanistic Theory and Digital Scholarship.” Debates in the Digital Humanities. Ed. Matthew K. Gold. Minneapolis: University of Minnesota Press, 2013. Web. 19 Apr. 2015.

—–. SpecLab: Digital Aesthetics and Projects in Speculative Computing. Chicago: University of Chicago Press, 2009. Print.

Emerson, Ralph Waldo. Essays and English Traits. Vol. 5. New York: P.F. Collier & Son, 1909. Print.

Jockers, Matthew L. Macroanalysis: Digital Methods and Literary History. Urbana: University of Illinois Press, 2013. Print.

—–. “Requiem for a Low Pass Filter.” matthewjockers.net. WordPress, 6 Apr. 2015. Web. 19 Apr. 2015.

—–. “Revealing Sentiment and Plot Arcs with the Syuzhet Package.” matthewjockers.net. WordPress, 2 Feb. 2015. Web.

Moretti, Franco. “The Slaughterhouse of Literature.” Modern Language Quarterly 61.1 (2000): 207-228. Print.

Moretti, Franco, and Alberto Piazza. Graphs, Maps, Trees: Abstract Models for Literary History. London: Verso, 2007. Print.

Ramsay, Stephen. Reading Machines: Toward an Algorithmic Criticism. Urbana: University of Illinois Press, 2011. Print.

Rhody, Lisa M. “Topic Modeling and Figurative Language.” Journal of Digital Humanities 2.1 (2012). Web.

Sample, Mark. “Notes toward a Deformed Humanities.” samplereality.com. 2 May 2012. Web. 19 Apr. 2015.

Samuels, Lisa, and Jerome J. McGann. “Deformance and Interpretation.” New Literary History 30.1 (1999): 25–56. Print.

Schulz, Kathryn. “The Mechanic Muse: What Is Distant Reading?” The New York Times 24 June 2011. NYTimes.com. Web. 19 Apr. 2015.

Swafford, Annie. “Problems with the Syuzhet Package.” annieswafford.wordpress.com. 2 Mar. 2015. Web.

Underwood, Ted. “These are such classic history of science problems. I swear we are literally re-enacting the whole 17th century.” 29 Mar. 2015, 10:41 p.m. Tweet. 19 Apr. 2015.

 

Acknowledgements: Thanks to Lindsey Albracht for her help in preparing this web edition of the talk.

Beyond the PDF: Experiments in Open-Access Scholarly Publishing (#MLA13 CFP)

As open-access scholarly publishing matures and movements such as the Elsevier boycott continue to grow, OA texts have begun to move beyond the simple (but crucial!) principle of openness towards an ideal of interactivity. This special session will explore innovative examples of open-access scholarly publishing that showcase new types of social, interactive, mixed-media texts. Particularly welcome is discussion of OA texts that incorporate new strategies of open peer review, community-based publication, socially networked reading/writing strategies, altmetrical analytics, and open-source publishing platforms, particularly as they inform or relate to print-bound editions of the same texts. Also welcome are critiques of the accessibility of interactive OA texts from the standpoint of universal design.

This roundtable aims for relatively short presentations of 5-7 minutes that will showcase a range of projects.

Interested participants should send 250-word abstracts and a CV to Matthew K. Gold at mgold@gc.cuny.edu by March 20, 2012.

Whose Revolution? Towards a More Equitable Digital Humanities

What follows is the text of a talk I gave at the 2012 MLA as part of the Debates in the Digital Humanities panel, which grew out of the just-published book of the same name (more about that in a forthcoming post). Many thanks to my fellow panelists Liz Losh, Jeff Rice, and Jentery Sayers. Thanks, too, to everyone who contributed to the active twitter backchannel for the panel and to Lee Skallerup for archiving it. Finally, I’m grateful to Jason Rhody for his helpful responses to a draft version of this presentation.


“Whose Revolution? Towards a More Equitable Digital Humanities”

The digital humanities – be it a field, a set of methodologies, a movement, a community, a singular or plural descriptor, a state of mind, or just a convenient label for a set of digital tools and practices that have helped us shift the way we perform research, teaching, and service – have arrived on the academic scene amidst immense amounts of hype. I’m sure you’re sick of hearing that hype, so I won’t rehearse it now except to say that the coverage of DH in the popular academic press sometimes seems to imply that the field has both the power and the responsibility to save the academy. Indeed, to many observers, the most notable thing about DH is the hype that has attended its arrival — and I believe that one of my fellow panelists, Jeff Rice, will be proposing a more pointed synonym for “hype” during his presentation.

It’s worthwhile to point out that it’s harder than you’d think to find inflated claims of self-importance in the actual scholarly discourse of the field. The digital humanists I know tend to carefully couch their claims within prudently defined frames of analysis. Inflated claims, in fact, can be found most easily in responses to the field by non-specialists, who routinely and actively read the overblown rhetoric of revolution into more carefully grounded arguments. Such attempts to construct a straw-man version of DH get in the way of honest discussions about the ways in which DH might accurately be said to alter existing academic paradigms.

Some of those possibilities were articulated recently in a cluster of articles in Profession on evaluating digital scholarship, edited by Susan Schreibman, Laura Mandell, and Stephen Olsen. The articles describe many of the challenges that DH projects present to traditional practices of academic review, including the difficulty of evaluating collaborative work, the possibility that digital tools might constitute research in and of themselves, the unconventional nature of multimodal criticism, the evolution of open forms of peer review, and the emergence of the kind of “middle-state” publishing that presents academic discourse in a form that lies somewhere between blog posts and journal articles. Then, too, the much-discussed role of “alt-ac” scholars, or “alternative academics,” is helping to reshape our notions of the institutional roles from which scholarly work emerges. Each of these new forms of activity presents a unique challenge to existing models of professional norms in the academy, many of them in ways that may qualify as revolutionary.

And yet, amid this talk of revolution, it seems worthwhile to consider not just what academic values and practices are being reshaped by DH, but also what values and practices are being preserved by it. To what extent, we might ask, is the digital humanities in fact not upending the norms of the academy, but rather simply translating existing academic values into the digital age without transmogrifying them? In what senses does the digital humanities preserve the social and economic status quo of the academy even as it claims to reshape it?

A group of scholars – from both within and outside of the field – have assembled answers to some of those questions in a volume that I have recently edited for the University of Minnesota Press titled Debates in the Digital Humanities. In that book, contributors critique the digital humanities for a series of faults: not only paying inadequate attention to race, class, gender, and sexuality, but in some cases explicitly seeking to elide cultural issues from the frame of analysis; reinforcing the traditional academic valuation of research over teaching; and allowing the seductions of information visualization to paper over differences in material contexts.

These are all valid concerns, ones with which we would do well to grapple as the field evolves. But there is another matter of concern that we have only just begun to address, one that has to do with the material practices of the digital humanities – just who is doing DH work and where, and the extent to which the field is truly open to the entire range of institutions that make up the academic ecosystem. I want to suggest what perhaps is obvious: that at least in its early phases, the digital humanities has tended to be concentrated at research-intensive universities, at institutions that are well-endowed with both the financial and the human resources necessary to conduct digital humanities projects. Such institutions typically are sizeable enough to support digital humanities centers, which crucially house the developers, designers, project managers, and support staffs needed to complete DH projects. And the ability of large, well-endowed schools to win major grant competitions helps them continue to win major grant competitions, thus perpetuating unequal and inequitable academic structures.

At stake in this inequitable distribution of digital humanities funding is the real possibility that the current wave of enthusiastic DH work will touch only the highest and most prominent towers of the academy, leaving the kinds of less prestigious academic institutions that in fact make up the greatest part of the academic landscape relatively untouched.

As digital humanists, the questions we need to think about are these: what can digital humanities mean for cash-poor colleges with underserved student populations that have neither the staffing nor the expertise to complete DH projects on their own? What responsibilities do funders have to attempt to achieve a more equitable distribution of funding? Most importantly, what is the digital humanities missing when its professional discourse does not include the voices of the institutionally subaltern? How might the inclusion of students, faculty, and staff at such institutions alter the nature of discourse in DH, of the kinds of questions we ask and the kinds of answers we accept? What new kinds of collaborative structures might we build to begin to make DH more inclusive and more equitable?

As I’ll discuss later, DH Centers and funding agencies are well aware of these issues and working actively on these problems – there are developments underway that may help ameliorate the issues I’m going to describe today. But in order to help us think through those problems, and in an effort to provoke and give momentum to that conversation, I’d like to look at a few pieces of evidence to see whether there is, in fact, an uneven distribution of the digital humanities work that is weighted towards resource-rich institutions.

Case Study #1: Digital Humanities Centers

Here is a short list of some of the most active digital humanities centers in the U.S.:

The benefits that digital humanities centers bring to institutions seeking funding from granting agencies should be obvious. DH Centers provide not just the infrastructural technology, but also the staffing and expertise needed to complete resource-intensive DH projects.

There are a few other important benefits that we should mention and that may not be apparent to DHers working inside DH Centers. The first is that DH Centers provide physical spaces that may not be available at cash-poor institutions, especially urban ones. Key basic elements that many people take for granted at research 1 institutions, such as stable wifi or sufficient electrical wiring to power computer servers, may be missing at smaller institutions. Then, too, such physical spaces provide the crucial sorts of personal networking that are just as important as infrastructural connection. Finally, we must recognize that grants create immense amounts of paperwork, and that potential DHers working at underserved institutions might not only have to complete the technical and intellectual work involved in a DH project, and publish analyses of those projects to have them count for tenure and promotion, but might also have to handle an increased administrative role in the bargain.

[At this point in the talk, I noted that most existing DH Centers did not spring fully-formed from their universities, but instead were cobbled together over a number of years through the hard and sustained work of their progenitors.]

Case Study #2: Distribution of Grants

Recently, the NEH Office of Digital Humanities conducted a study of its Start-Up grants program, an exciting venture that differs from traditional NEH grant programs in that instead of providing large sums of money to a small number of recipients, it aims to provide smaller starter grants of $25,000 to $50,000 to a wider range of projects. The program allows the ODH to operate in a venture-capitalist fashion, accepting the possibility of failure as it explicitly seeks high-risk, high-reward projects.

The study (PDF), which tracked NEH Digital Humanities Start-Up Grants from 2007 to 2010, shows us how often members of different types of institutions applied for grants. Here is the graphic for universities:

What we see in this graph is a very real concentration of applications from universities that are Master’s level and above. The numbers, roughly, are:

Master’s/Doctoral: 575

BA or Assoc.: 80

Now, those numbers aren’t horrible, and I suspect that they have improved in recent years. Additionally, we should note that many non-university organizations applied for NEH funding. Here is a breakdown of those numbers from the NEH:

What we see here, in fact, is a pretty impressive array of institutional applications for funding – certainly, this is something to build on.

And here are updated numbers of NEH SUG awards actually made – and I thank Jason Rhody, Brett Bobley, and Jennifer Serventi of the NEH ODH for their help in providing these numbers:

Now, there are a few caveats to be made here — only the home institution of the grant is shown, so collaborative efforts are not necessarily represented. Also, university libraries are mostly lumped under their respective university/college type.

Still, we can see pretty clearly here that an overwhelming number of grants have gone to institutions at the Master’s level and above. And we should be especially concerned that community colleges, which constitute such a large share of the institutions of higher education in our country, appear to have had limited involvement in the digital humanities “revolution.”

New Models/New Solutions

Having identified a problem in DH, I’d like to turn now toward some possible solutions and close by discussing some important and hopeful signs for a more equitable future for digital humanities work.

One of the fun things about proposing a conference paper in April and then giving the paper in January is that a lot can happen in eight months, especially in the digital humanities. And here, I’m happy to report on several new and/or newish initiatives that have begun to address some of the issues I’ve raised today. I’m going to run through them fairly quickly, since many of you may already be familiar with them (though I’d certainly be happy to expand on any of them during the Q&A):

  • DH Commons

This new initiative seeks to create a large-scale DH community resource that matches newcomers who have ideas for DH projects with experts in the field who can either help with the work itself or serve in an advisory capacity. The project, which is now affiliated with CenterNet, an international organization of digital-humanities centers, promises to do much to spread the wealth of DH expertise. The site has just been launched at this convention and should prove to be an important community-building resource for the field.

  • DH Questions and Answers

Like DH Commons, DH Questions and Answers, which was created by the Association for Computers and the Humanities, offers a way for newcomers to DH to ask many types of questions and have them answered by longstanding members of the field – thus building, in the process, a lasting knowledge resource for DH.

  • THATCamps

These small, self-organized digital-humanities unconferences have been spreading across the country and thereby bringing DH methodologies and questions into a wide variety of settings. Two upcoming THATCamps that promise to expand the purview of the field are THATCAMP HBCU and THATCAMP Caribbean. Both of these events were organized explicitly with the intent of addressing some of the issues I’ve been raising today.

  • The Growth of DH Scholarly Associations

Organizations such as the ACH, NITLE, and CenterNet are all actively drawing newcomers into the field. ACH created the above-mentioned DH Questions and Answers. NITLE has done excellent public work that is enabling the members of small liberal-arts colleges to be competitive for DH grants. CenterNet is well-positioned to act as an organizational mentor for other institutions.

These kinds of virtual, regional, and multi-institutional support networks are key, as they allow scholars with limited resources on their own campuses to create cross-institutional networks of infrastructure and support.

  • Continued Commitment to Open-Access Publications, Open-Source Tools, and Open APIs

The DH community has embraced open-access publication, a commitment that has run, in recent years, from Schreibman, Siemens, and Unsworth’s Companion to the Digital Humanities through Dan Cohen and Tom Scheinfeldt’s Hacking the Academy to Kathleen Fitzpatrick’s Planned Obsolescence to Bethany Nowviskie’s alt-academy to my own Debates in the Digital Humanities, which will be available in an open-access edition later this Spring. Having these texts out on the web removes an important barrier that might have prevented scholars, staff, and students from cash-poor institutions from fully exploring DH work.

Relatedly, the fact that many major DH tools – and here the list is too long to mention specific tools – are released on an open-source basis means that scholars working at institutions without DH Centers don’t have to start from scratch. It’s especially crucial that the NEH Office of Digital Humanities states in its proposal guidelines that “NEH views the use of open-source software as a key component in the broad distribution of exemplary digital scholarship in the humanities.”

  • DH Training Institutes

These institutes provide key opportunities for DH outreach to academics with a range of DH skills.

I’d like to close by offering five key ideas to build on as we seek to expand the digital humanities beyond elite research-intensive institutions:

  • Actively perform DH-related outreach at underserved institutions
  • Ask funding agencies to make partnerships and outreach with underserved peer institutions recommended or required practice
  • Continue to build out virtual/consortial infrastructure
  • Build on projects that already highlight cross-institutional partnerships [here I mentioned my own “Looking for Whitman” project]
  • Study collaborative practices [here I mentioned the importance of connecting to colleagues in writing studies]

While none of these ideas will solve these problems alone, together they may help us arrive at a more widely distributed version of DH, one that will enable a more diverse set of stakeholders to take active roles in the field. And as any software engineer can tell you, the more eyes you have on a problem, the more likely you are to find and fix bugs in the system. So, let’s ensure that the social, political, and economic structures of our field are as open as our code.


Photo credit: “Abstract #1” by boooooooomblastandruin

DH and Comp/Rhet: What We Share and What We Miss When We Share

What follows is the text of a short talk I gave at the 2012 MLA as part of the session Composing New Partnerships in the Digital Humanities. Many thanks to session organizer Catherine Prendergast, my fellow panelists, and everyone who took part in the discussion in person or through twitter.


Like my fellow panelists, I joined this session because I’d like to see an increased level of communication and collaboration between digital humanists and writing-studies scholars. There is much to be gained from the kinds of partnerships that such collaborations might foster, and much for members of both fields to learn from one another. I suspect that most people in this room today agree upon that much.

So, why haven’t such partnerships flourished? What issues, misconceptions, lapses, and tensions are preventing us from working together more closely?

A shared history of marginalization

Both comp/rhet and digital humanities scholars have existed at the margins of traditional disciplinary formations in ways that have shaped their perspectives. Writing studies has a history of being perceived as the service wing of English departments. Beyond heavy course loads, the field is sometimes seen as being more applied than theoretical – this despite the fact that writing studies has expanded into areas as diverse as complexity theory, ecocriticism, and object-oriented rhetoric.

The digital humanities, meanwhile, arose out of comparably humble origins. After years of inhabiting the corners of literature departments, doing the kinds of work, such as scholarly editing, that existed on the margins of English departments, humanities computing scholars emerged, blinking and a bit disoriented, into the spotlight as digital humanists. Now the subject of breathless articles in the popular academic press and the recipients of high-profile research grants, DHers have found their status suddenly elevated. One need only look at the soul-searching blog posts that followed Bill Pannapacker’s suggestion at the last MLA that DH had created a cliquish star system to see a community still coming to terms with its new position.

I bring up these points not to reopen old wounds, but rather to point out that they have a common source: a shared focus on the sometimes unglamorous, hands-on activities such as writing, coding, teaching, and building. This commonality is important, and it’s something, well, to build on, not least of all because we face a common problem as we attempt to help our colleagues understand the work we do.

Given what we share, it’s surprising to me that so many writing-studies scholars seem to misunderstand what DH is about. Recent discussions of the digital humanities on the tech-rhet listserv, one of the primary nodes of communication among tech-minded writing-studies scholars, show that many members of the comp/rhet community see DH as a field largely focused on digitization projects, scholarly editions, and literary archives. Not only is this a limited and somewhat distorted view of DH, it’s also one that is especially likely to alienate writing-studies scholars, emphasizing as it does the DH work done within the very traditional literary boundaries that were used to marginalize comp/rhet in previous decades.

This understanding of DH misses some key elements of this emerging field:

  1. Its collaborative nature, which is also central to comp/rhet teaching and research;
  2. The significant number of digital humanists who, like me, focus their work not on scholarly editions and textual mark-up, but rather on networked platforms for scholarly communication and networked open-source pedagogy;
  3. The fact that the digital humanities are open in a fundamental way, both through open-access scholarship and through open-source tool building;
  4. The fact that DH, too, has what Bethany Nowviskie has called an “eternal September” – a constantly refreshed group of newbies who seem to emerge and ask the same sorts of basic questions that have been asked and answered before. We need to respond to such questions not by becoming frustrated that newcomers have missed citations to older work – work that may indeed be outside of their home disciplines – but rather by demonstrating how and why that past work remains relevant in the present moment.
  5. The fact that there is enormous interest in networked pedagogy right now within the digital humanities. This is a key area of shared interest in which we should be collaborating.
  6. The fact that DH is interdisciplinary and multi-faceted. To understand it primarily as the province of digital literary scholars is to miss the full range of the digital humanities, which involves stakeholders from disciplines such as history, archaeology, classical studies, and, yes, English, as well as librarians, archivists, museum professionals, developers, designers, and project managers.

    In this sense, I’d like to recall a recent blog post by University of Illinois scholar Ted Underwood, who argued that DH is “a rubric under which a bunch of different projects have gathered — from new media studies to text mining to the open-access movement — linked mainly by the fact that they are responding to related kinds of fluidity: rapid changes in representation, communication, and analysis that open up detours around some familiar institutions.”

To respond to DH work by reasserting the disciplinary boundaries of those “familiar institutions,” as I believe some writing-studies scholars are doing, is to miss an opportunity for the kinds of shared endeavors that are demanded by our moment.

So, let’s begin by looking towards scholars who have begun to bridge these two fields and think about the ways in which they are moving us forward. I’m thinking here of hybrid comp-rhet/DH scholars like Alex Reid, Jentery Sayers, Jamie “Skye” Bianco, Kathie Gossett, Liz Losh, William Hart-Davidson, and Jim Ridolfo, all of whom are finding ways to blend work in these fields.

I’d like to close with some words from Matt Kirschenbaum, who reminds us, in his seminal piece “What Is Digital Humanities and What’s It Doing in English Departments?,” that “digital humanities is also a social undertaking.” That is, I take Matt to be saying that DH is not just a series of quantitative methodologies for crunching texts or a bunch of TEI markup tags, but rather a community that is in a continual act of becoming. We all need to do a better job of ensuring that our communities are open and of communicating more clearly with one another. This session, I hope, is a start.

An Update

I’m excited to announce that I’ll be joining the CUNY Graduate Center this Fall as Advisor to the Provost for Master’s Programs and Digital Initiatives. My charge there will involve working with the Provost and Associate Provosts to promote and strengthen existing Master’s Programs and to develop new degree programs. I’ll also be collaborating on a variety of digital initiatives with many members of the GC community. It’s an exciting opportunity and I’m looking forward to the work that lies ahead.

While I will continue to teach at City Tech as I take on this new role, I regret to say that I will be unable to continue serving as PI on the U.S. Department of Education “Living Lab” grant. That project has gotten off to a fast and productive start, thanks to the extremely hard work of the entire grant team. In our first year, we’ve had an initial cohort of faculty members participate in a newly designed General Education seminar; we have built the first iteration of the City Tech OpenLab, a socially networked, community-based platform for teaching, learning, and sharing that is currently in a soft launch; we established the Brooklyn Waterfront Research Center, which has already become part of NYC’s long-term vision for its waterfront; and we have laid the groundwork for numerous other projects that are currently in the pipeline. I am grateful to be leaving the grant in the very capable hands of my friend and colleague Maura Smale, who will be assisted by our excellent Project Coordinator Charlie Edwards and a wonderful team of colleagues. I wish them the very best as they continue the work that we have begun together, and I look forward to remaining involved in the project as it moves forward.