Facts, Patterns, Methods, Meaning: Public Knowledge Building in the Digital Humanities

Note: This talk was delivered at the University of Wisconsin – Madison as part of the Digital Humanities Plus Art: Going Public symposium on April 17, 2015. I am grateful to the organizers of the symposium for generously hosting me and occasioning these thoughts.

 Facts, Patterns, Methods, Meaning: Public Knowledge Building in the Digital Humanities

Things have a way of coming full circle, of beginning where they have ended, and so I want to start today with Ralph Waldo Emerson, a man who thought about beginnings and endings, circles and forms. “The eye is the first circle,” he wrote. “The horizon which it forms is the second; and throughout nature this primary figure is repeated without end” (“Circles”).

Circles select and enfold, but also exclude, demarcating a perimeter, an in and an out. “The eye is the first circle”; it is the lens through which those of us lucky enough to have eyesight are able to perceive the world. And yet the eye both makes sight possible and is bounded by a second circle formed by the horizon of our vision, a circle that discloses both constraints and possibilities. We know that the landscape extends beyond the visible horizon, but we are limited by our own perceptions, even as they make possible all that we know. And this figure, this double act of knowing and unknowing, seeing and unseeing, taking in possibilities and limits in the same glance, is the mark of our experience in the world. There is always more to learn, always more outside the reach of our knowledge, always more beyond the edge of our sight.

Emerson teaches us to be humble in the face of such knowledge. “Every action,” he writes, “admits of being outdone. Our life is an apprenticeship to the truth, that around every circle another can be drawn; that there is no end in nature, but every end is a beginning; that there is always another dawn risen on mid-noon, and under every deep a lower deep opens.”

Perhaps it is telling that Emerson’s figure here involves a depth to be plumbed rather than a height to be climbed. For at this moment in the development of the digital humanities, we are pursuing new paths to knowledge, extending the horizons of our abilities with new tools. This is, obviously, not a teleological or progressive journey. We have exciting new tools and provocative new methods, but they are not necessarily leading us to higher truths. We are not marching along straight and ever-improving lines of progress. But we are producing tools that conform to new directions in our thought, and those tools can usefully reset our perspectives, helping us look with new sight on things we thought we understood. They can give us new vantage points and new angles from which we can explore the depths around us. And, of course, even as they make new sights possible, we remember Emerson and note that they foreclose, or at least obscure, others.

Emerson’s aphorisms provide useful reminders both for digital humanists surveying the field and for scholars observing it from its horizons. In today’s talk, I want to think through states of knowing in the digital humanities, situating our practices within larger histories of knowledge production. My talk has three parts:

  1. A discussion of a few approaches to text analysis and their relation to larger perceptions about what DH is and does, and how DH knowledge is produced;
  2. A discussion of some practitioners who are blending these approaches in provocative ways;
  3. A wider view of experimental knowledge in DH, with the suggestion of a new grounding, based in the arts, for future DH public work.

I want to start by discussing current directions of DH research, and in particular to spend some time poking a bit at one of the most influential and vibrant areas of DH — literary text analysis, the type of DH research that most often stands in as a synecdoche for the larger whole of the digital humanities. I do this despite the fact that focusing on literary text analysis risks occluding the many other rich areas of digital humanities work, including geospatial mapping and analysis, data visualization, text encoding and scholarly editing, digital archives and preservation, digital forensics, networked rhetoric, digital pedagogy, advanced processing of image, video, and audio files, and 3D modeling and fabrication, among others. And I should note that I do this despite the fact that my own DH work does not center on the area of literary text analysis.

One reason to focus on text analysis today is that when we talk about DH methods and DH work in the public sphere, literary text analysis of large corpora in particular is over-identified with DH, especially in the popular press, but also in the academy. There, we often see large-scale text analysis work clothed in the rhetoric of discovery, with DHers described as daring adventurers scaling the cliffs of computational heights. A 2011 New York Times book review of a Stanford Literary Lab pamphlet described, tongue-in-cheek, Franco Moretti’s supposed attempt to appear “now as literature’s Linnaeus (taxonomizing a vast new trove of data), now as Vesalius (exposing its essential skeleton), now as Galileo (revealing and reordering the universe of books), now as Darwin (seeking ‘a law of literary ­evolution’)” (Schulz). All that’s missing, it would seem, is mention of an Indiana-Jones-esque beaten fedora.

If literary text mining operates as a kind of DH imaginary in popular discourse around the field, one point I want to make today is that it is an impoverished version of text analysis, or at the very least a one-sided and incomplete one. As a way of complicating that picture, I want to sketch out two prominent areas of research in DH literary text analysis. The first is premised (not always, but often) upon scientific principles of experimentation, using analysis of large-scale textual corpora to uncover previously unknown, invisible, or under-remarked-upon patterns in texts across broad swaths of time. Known colloquially and collectively through Franco Moretti’s term “distant reading,” Matthew Jockers’s term “macroanalysis,” or Jean-Baptiste Michel and Erez Lieberman Aiden’s term “culturomics,” this approach is predicated on an encounter with texts at scale. As Franco Moretti noted in his essay “The Slaughterhouse of Literature,” describing the move towards distant reading:

Knowing two hundred novels is already difficult. Twenty thousand? How can we do it, what does “knowledge” mean, in this new scenario? One thing for sure: it cannot mean the very close reading of very few texts—secularized theology, really (“canon”!)—that has radiated from the cheerful town of New Haven over the whole field of literary studies. A larger literary history requires other skills: sampling; statistics; work with series, titles, concordances, incipits. (208-209)

This is knowledge work at a new scale, work that requires, as Moretti notes, quantitative tools of analysis.

Opposed to this, though less often discussed, is a different form of DH work, one based not on an empirical search for facts and patterns, but rather on the deliberate mangling of those very facts and patterns, a conscious interference with the computational artifact, a mode of investigation based not on hypothesis and experiment in search of proof but rather on deformance, alteration, randomness, and play. This form of DH research aims to align computational research with humanistic principles, with a goal not of unearthing facts but of undermining assumptions, laying bare the social, political, historical, computational, and literary constructs that underlie digital texts. And sometimes it simply aims to highlight the profound oddities of digital textuality. This work, which has been carried on for decades by scholar-practitioners such as Jerome McGann, Johanna Drucker, Bethany Nowviskie, Stephen Ramsay, and Mark Sample, has been called by many names: McGann terms it deformative criticism, Drucker and Nowviskie call it speculative computing, and Ramsay calls it “algorithmic criticism.” Though there are minor differences among these conceptions, they represent as a whole a form of DH that, while well known and respected within DH circles, is not acknowledged frequently enough outside of them, especially in the depictions of DH that we see in the popular press or in the caricatures of DH that circulate in twitter flame wars. It is especially unseen, I would suggest, in the academy itself, where scholars hostile to DH tend to miss the implications of deformative textual analysis, focusing their ire instead on the side of quantitative literary analysis that seeks most strongly to align itself with scientific practices.

I’ve set up a rough binary here, and it’s one I will complicate in multiple ways. But before I do, I want to walk through some parts of the lineage of each of these areas as a way of grounding today’s conversation.

Digital humanities work in large-scale text analysis of course has roots in longstanding areas of humanities computing and computational linguistics. But it was given profound inspiration in 2005 with the publication of Franco Moretti’s Graphs, Maps, Trees, a text that argued for a new approach to textual analysis called “distant reading,” where “distance is […] not an obstacle, but a specific form of knowledge” (1). Moretti’s work, at this time, has a wonderful, suggestive style, a style imbued with possibility and play, a style full of posed but unanswered questions. The graphs, maps, and trees of his title proposed various models for the life cycles of literary texts; the book contains strong statements about the need for the kind of work it does, but it also resists conclusions and does not overly stockpile evidence in support of its claims. As Moretti himself put it, addressing the “conceptual eclecticism” of his work, “opening new conceptual possibilities seemed more important than justifying them in every detail.” This was a work of scholarship meant to inspire and provoke, not to present proofs.

Eight years later, in 2013, Matthew Jockers, one of Moretti’s colleagues at Stanford who had by then moved on to a professorship at the University of Nebraska, published Macroanalysis: Digital Methods and Literary History, a text that employed a different register to present its claims, beginning with chapter 1, which is titled “Revolution.” In Jockers’s text, we see a hardening of Moretti’s register, a tightening up and sharpening of the meandering suggestiveness that characterized Moretti’s writing. Where Moretti’s slim Graphs, Maps, Trees was elliptical and suggestive, Jockers’s Macroanalysis was more pointed, seeking to marshal strong evidence in support of its claims. In the book, Jockers suggests that literary studies should follow scientific models of evidence, testing, and proof; he writes, “The conclusions we reach as literary scholars are rarely ‘testable’ in the way that scientific conclusions are testable. And the conclusions we reach as literary scholars are rarely ‘repeatable’ in the way that scientific experiments are repeatable” (6). Clearly, this is a problem for Jockers; he argues that literary scholars must engage the “massive digital corpora [that] offer us unprecedented access to the literary record and invite, even demand, a new type of evidence gathering and meaning making” (8). And as he continues, he deploys a remarkable metaphor:

Today’s student of literature must be adept at reading and gathering evidence from individual texts and equally adept at accessing and mining digital-text repositories. And mining here really is the key word in context. Literary scholars must learn to go beyond search. In search, we go after a single nugget, carefully panning in the river of prose. At the risk of giving offense to the environmentalists, what is needed now is the literary equivalent of open-pit mining or hydraulicking. . . . the sheer amount of data makes search ineffectual as a means of evidence gathering. Close reading, digital searching, will continue to reveal nuggets, while the deeper veins lie buried beneath the mass of gravel layered above. What are required are methods for aggregating and making sense out of both the nuggets and the tailings. . . . More interesting, more exciting, than panning for nuggets in digital archives is to go beyond the pan and exploit the trommel of computation to process, condense, deform, and analyze the deeper strata from which these nuggets were born, to unearth, for the first time, what these corpora *really* contain. (9-10; emphasis mine)

Even forgiving Jockers some amount of poetic license, this is a really remarkable extended metaphor, one that figures the process of computational literary work as a strip-mining operation that rips out layers of rock and soil to reach the rich mineral strata of meaning below, which are then presumably extracted in systematic fashion until the mine is emptied of value, its natural resources depleted. One doesn’t need to be an environmentalist to be a bit uneasy about such a scenario.

What’s really notable to me here, though, is the immense pressure this passage reveals. And I refer not to the pressure Jockers’s computational drills are exerting on the pastoral literary landscape, but rather to what his rhetoric reveals about the increasing pressure on DH researchers to find, present, and demonstrate results. Clearly, between Moretti’s 2005 preliminary thought experiments and Jockers’s 2013 strip-mining expedition, the ground had shifted.

In his 2010 blog post “Where’s the Beef? Does Digital Humanities Have to Answer Questions?” digital historian Tom Scheinfeldt compares the current moment in the digital humanities to eighteenth-century work in natural philosophy, when experiments with microscopes, air pumps, and electrical machines were, at first, perceived as nothing more than parlor tricks before they were revealed as useful in what we would now call scientific experimentation. Scheinfeldt writes:

Sometimes new tools are built to answer pre-existing questions. Sometimes, as in the case of Hauksbee’s electrical machine, new questions and answers are the byproduct of the creation of new tools. Sometimes it takes a while; in the meantime, tools themselves and the whiz-bang effects they produce must be the focus of scholarly attention.

Eventually digital humanities must make arguments. It has to answer questions. But yet? Like 18th century natural philosophers confronted with a deluge of strange new tools like microscopes, air pumps, and electrical machines, maybe we need time to articulate our digital apparatus, to produce new phenomena that we can neither anticipate nor explain immediately.

One can see what Scheinfeldt describes clearly in Moretti’s work: a sense of wonder, showmanship, and play in the new perspectives that computational methods have uncovered. In Jockers, we see a more focused, precise, scientifically oriented apparatus aimed at testable, repeatable results. Jockers and Moretti are hardly the only DHers exploring large datasets — practitioners such as Ted Underwood, Andrew Goldstone, Andrew Piper, Tanya Clement, Lisa Rhody, and Ben Schmidt, among many others, come to mind, each engaging such work in fascinating ways — but Moretti and Jockers (and their labs) may stand in for a larger group of scholars using similar methods to explore patterns in massive groups of texts.

I’ve said that I would describe two discrete areas of DH literary text analysis work. Having outlined what I would characterize as the area of the field proceeding on proto-scientific assumptions, I would now like to turn to a group of DH thinkers who, while occasionally using similar tools, are focused on forms of computational literary analysis that in many ways take a diametrically opposed path to the digital text by seeking to disrupt and play with the structures of the text.

In their 1999 piece published in New Literary History, Jerome McGann and Lisa Samuels outline their concept of “deformative criticism,” a hermeneutic approach to digital textuality that, rather than seeking to discover the underlying structure of texts through exposition, seeks to “expose the poem’s possibilities of meaning” through techniques such as reading backward and otherwise altering and rearranging the sequencing of words in a text. “Deformative” moves such as these, McGann and Samuels argue, “reinvestigate the terms in which critical commentary will be undertaken” (116). Many critics working in this vein argue that all interpretative readings are deformative, reformulating texts in the process of interpreting them.
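
To make the gesture concrete, here is a minimal sketch, in Python, of one such deformative move. It is not McGann and Samuels’s procedure, only an illustration of the kind of operation they describe: reversing the word sequence of a text (the sample line is Emerson’s sentence quoted above) so that the rearranged text exposes, rather than explains, its possibilities of meaning.

```python
# A toy deformance in the spirit of "reading backward": reverse the word
# sequence of a text so that its familiar syntax is broken open. This is an
# illustration of the general move, not a reconstruction of any particular
# critic's method.
def read_backward(text: str) -> str:
    """Return the text with its word order reversed."""
    return " ".join(reversed(text.split()))

emerson = "The eye is the first circle; the horizon which it forms is the second"
print(read_backward(emerson))
# -> "second the is forms it which horizon the circle; first the is eye The"
```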

In her work, Johanna Drucker has collaborated with Bethany Nowviskie and others to explore what she terms “speculative computing,” which is “driven by a commitment to interpretation-as-deformance in a tradition that has its roots in parody, play, and critical methods such as those of the Situationist International, Oulipo, and the longer tradition of ‘pataphysics with its emphasis on ‘the particular’ over ‘the general’” (SpecLab). Drucker goes on to differentiate speculative computing from quantitative processes based on “standard, repeatable, mathematical and logical procedures” by exploring “patacritical” methods, which privilege exceptions to rules and deviations from norms. Speculative computing, according to Drucker, “lets go of the positivist underpinnings of the Anglo-analytic mode of epistemological inquiry,” creating imaginary solutions that suggest generative possibilities rather than answers. Drucker writes:

Humanistic research takes the approach that a thesis is an instrument for exposing what one doesn’t know. The ‘patacritical concept of imaginary solutions isn’t an act of make-believe but an epistemological move, much closer to the making-strange of the early-twentieth century avant-garde. It forces a reconceptualization of premises and parameters, not a reassessment of means and outcomes. (SpecLab 27)

Drucker frames her approach in opposition to the rationalized, positivistic assumptions of the scientific method, embracing instead randomness and play. This is also the approach that Stephen Ramsay takes in his book Reading Machines, which argues for what he terms “algorithmic criticism.” Ramsay writes that “[text analysis] must endeavor to assist the critic in the unfolding of interpretative possibilities” (Reading Machines 10). Whereas Drucker seeks everywhere to undermine the positivist underpinnings of digital tools, creating not “digital tools in humanities contexts” but rather “humanities tools in digital contexts” (SpecLab 25), Ramsay argues that “the narrowing constraints of computational logic–the irreducible tendency of the computer toward enumeration, measurement, and verification–is fully compatible” with a criticism that seeks to “employ conjecture . . . in order that the matter might become richer, deeper, and ever more complicated” (16). Because the algorithmic critic navigates the productive constraints of code to create the “deformative machine” from which she draws insights, the “hermeneutics of ‘what is’ becomes mingled with the hermeneutics of ‘how to’” (63).

And Mark Sample, in his “Notes Toward a Deformed Humanities,” proposes the act of deformance, of breaking things, as a creative-critical intervention, a way of knowing through breaking. Sample’s projects — which include Hacking the Accident, an Oulipo-inspired version of the edited collection Hacking the Academy, and Disembargo, a project that reveals Sample’s dissertation as it “emerg[es] from a self-imposed six-year embargo, one letter at a time,” as well as a host of twitter bots that mash together a variety of literary and information sources — all demonstrate an inspired focus on interpretation as performed by creative computational expression.

I’ve discussed two major approaches to literary text analysis today — likely not without some reductive description — but I would like to turn now to the conference theme of “Going Public,” as each of these approaches takes up that theme in different ways, using platforms, methods, and models to foster more open and public DH communication.

Deformative work is often publicly performed – witness Mark Sample’s twitter bots and generative texts, which operate in real time and interact with the public, at times even forming themselves in response to public speech.

Text mining scholars, with their focus on exploration, discovery, proof, and tool development, are admirably public in sharing evidence and code; just a few months ago, we witnessed one of the most fascinating controversies of recent years in DH, as DH scholar Annie Swafford raised questions about Matthew Jockers’s tool Syuzhet. Jockers had set out to build on Kurt Vonnegut’s lecture “The Shapes of Stories.” There, Vonnegut sketched what he described as the basic shapes of a number of essential story plots; following the arc of the main character’s fortunes, he suggested, we could discern a number of basic plot structures used repeatedly in various works of fiction, such as “Man in Hole,” “Boy Meets Girl,” and “From Bad to Worse.”

Jockers’s blog post described his use of Syuzhet, a package he wrote for the statistical software R and released publicly on GitHub. Because the code was available and public, Swafford was able to download it and experiment with it; she charged that the tool had major faults, and the ensuing discussion led to some sharp disagreements about the tool itself and Jockers’s findings.
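
For readers who have not seen this kind of tool, the sketch below is a rough, hypothetical illustration in Python of the underlying idea rather than Jockers’s actual R code: score each sentence of a narrative for sentiment with a (here, deliberately tiny and invented) lexicon, then smooth the noisy sequence into a candidate plot arc. The choice of smoothing method is exactly the sort of decision Swafford’s critique targeted.

```python
# Illustrative sketch only -- not the Syuzhet package itself. It mimics the
# general workflow: sentence-level sentiment scores, then a smoothing pass
# that turns the jagged sequence into a candidate "shape of the story."
import re

# Toy lexicon, invented for illustration; real tools use large sentiment lexicons.
POSITIVE = {"hope", "joy", "love", "fortune", "happy"}
NEGATIVE = {"grief", "loss", "fear", "death", "despair"}

def sentence_scores(text):
    """Split into sentences and score each one: +1 per positive word, -1 per negative."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    scores = []
    for s in sentences:
        words = re.findall(r"[a-z']+", s.lower())
        scores.append(sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words))
    return scores

def moving_average(scores, window=2):
    """A simple smoother standing in for the filters a real package would offer."""
    smoothed = []
    for i in range(len(scores)):
        lo, hi = max(0, i - window), min(len(scores), i + window + 1)
        smoothed.append(sum(scores[lo:hi]) / (hi - lo))
    return smoothed

story = ("She was full of hope and joy. Then came grief, and then loss. "
         "Fear ruled the middle years. At last, fortune and love returned.")
print(moving_average(sentence_scores(story)))
```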

Though Jockers wound up backing down from his earlier claims, the episode was fascinating as a moment in which in-progress work was presented, tested, and defended. This is of course nothing new in the sciences, but it was a moment in which the reproducibility of claims in DH was tested.

Having described these two areas of DH literary text analysis, one employing scientific models and seeking reproducible results and the other seeking to undermine the assumptions of the very platforms through which digital texts are constructed, I would like to finally complicate that binary and discuss some DH practitioners who are blending these approaches in fascinating ways.

First, I will turn to the work of Lisa Rhody, whose work on the topic modeling of figurative language aims to investigate the very assumptions of the algorithms used in topic modeling. Topic modeling is a technique employed by Jockers and many others to reveal latent patterns in texts; it uses probabilistic algorithms to display a kind of topic-based guide to language in the text, tracking the play of similar concepts across it. Rhody’s project, as she writes, “illustrates how figurative language resists thematic topic assignments and by doing so, effectively increases the attractiveness of topic modeling as a methodological tool for literary analysis of poetic texts.” Using a tool that was designed to work with texts that contain little or no figurative language, Rhody’s study produces failure, but useful failure; as she writes, “topic modeling as a methodology, particularly in the case of highly-figurative language texts like poetry, can help us to get to new questions and discoveries — not because topic modeling works perfectly, but because poetry causes it to fail in ways that are potentially productive for literary scholars.”
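
As a point of reference for readers who have not worked with these algorithms, here is a minimal, hypothetical sketch in Python using scikit-learn’s LDA implementation (this is not Rhody’s pipeline or corpus): the model infers clusters of co-occurring words and treats them as “topics.” When the same words do figurative rather than literal work, as in poetry, those clusters begin to behave strangely, which is precisely the productive failure Rhody describes.

```python
# Minimal topic-modeling sketch using scikit-learn; the four-line "corpus" is
# invented for illustration. A real study would run on thousands of texts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the sea swallowed the ship and the sailors at dusk",
    "the ship crossed the grey sea before dawn",
    "her grief was a sea, swallowing every dawn",   # figurative reuse of the same words
    "the storm scattered the fleet across the sea",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

vocab = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_words = [vocab[j] for j in topic.argsort()[::-1][:4]]
    print(f"topic {i}: {', '.join(top_words)}")
```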

Second, I will highlight the work of Micki Kaufman, a doctoral student in History at the CUNY Graduate Center with whom I’ve had the pleasure of working as she investigates memcons and telcons from the Digital National Security Archive’s Kissinger Collection. In her project “Quantifying Kissinger,” Micki has begun to explore some fascinating ways of looking at, visualizing, and even hearing topic models, a mode of inquiry that, I would suggest, foregrounds the subjective, experiential approach championed by Drucker without sacrificing the utility of topic modeling and data visualization as investigative tools. Micki will be presenting on this work in January at the 2016 Modern Language Association Convention in Austin, Texas, in a panel called “Weird DH.” I think it’s very promising.

Finally, I want to mention Jeff Binder, another student of mine and a doctoral student in English at the CUNY Graduate Center, whose work with Collin Jennings, a graduate student at NYU, on the Networked Corpus project aims to map topic models onto the texts they model and to compare topic models of Adam Smith’s Wealth of Nations to the index published with the book. What this project produces, in the end, is a critical reflection on topic modeling itself, using it not necessarily to examine the main body of the text but rather to explore the alternate system of textual analysis presented by the book’s index.

I single out these three practitioners among the many wonderful scholars doing work in this area primarily because their practices, to my mind, unite the two approaches to text analysis that I have described thus far. They use the computational tools of the proto-scientific group, but in self-reflexive ways that embody the approach of deformative criticism, aiming to highlight interpretative complexity and ambiguity.

Johanna Drucker has argued that many digital tools are premised upon systems that make them poor fits for humanities inquiry:

Tools for humanities work have evolved considerably in the last decade, but during that same period a host of protocols for information visualization, data mining, geospatial representation, and other research instruments have been absorbed from disciplines whose epistemological foundations and fundamental values are at odds with, or even hostile to, the humanities. Positivistic, strictly quantitative, mechanistic, reductive and literal, these visualization and processing techniques preclude humanistic methods from their operations because of the very assumptions on which they are designed: that objects of knowledge can be understood as self-identical, self-evident, ahistorical, and autonomous. (“Humanistic Theory”)

Drucker calls for a new phase of digital humanities work, one that embodies a humanities-based approach to technology and interpretation. She writes:

I am trying to call for a next phase of digital humanities that would synthesize method and theory into ways of doing as thinking. . . .The challenge is to shift humanistic study from attention to the effects of technology (from readings of social media, games, narrative, personae, digital texts, images, environments), to a humanistically informed theory of the making of technology (a humanistic computing at the level of design, modeling of information architecture, data types, interface, and protocols). (“Humanistic Theory”)

By turning, in Drucker’s terms, from data to capta, from the presentation of data as transparent indexical fact to open and explicit acknowledgement of the socially constructed nature of information, and by using new DH tools and methods at times in ways that test the visible and occluded assumptions that structure them, these three junior scholars are moving us along on a new and exciting phase of digital humanities work.

If humanities text-mining work often proceeds according to the scientific method, striving to test hypotheses and create reproducible results, its genealogies lie in the work of natural philosophy and the various microscopes, air pumps, and electrical machines mentioned by Tom Scheinfeldt and described in depth in books like Steven Shapin and Simon Schaffer’s Leviathan and the Air-Pump. DH work, in fact, is often framed in terms of this genealogy, with the current moment being compared to the rise of science and experimentation with new tools.

As but one example, Ted Underwood, in response to the Syuzhet controversy and the ensuing discussions about experimental methods, tweeted: “These are such classic history of science problems. I swear we are literally re-enacting the whole 17th century.”

In the remaining section of this talk, I want to suggest an alternate genealogy for this moment, one that, although it has ties to that same early work of natural philosophy, might help ground digital humanities practice in a new frame. I will return to Emerson for a moment, to his statement that “The eye is the first circle; the horizon which it forms is the second; and throughout nature this primary figure is repeated without end.”

And so I want to explore pre-photographic experimentation with image-making as a way of suggesting new grounding for DH.

In 1839, Louis Daguerre announced the invention of the daguerreotype to the world, a moment of technical triumph that occluded a larger history of experiment. As the art historian Geoffrey Batchen has shown, when the invention of photography was announced in 1839, the daguerreotype was one of a number of competing photographic technologies. The camera obscura had allowed artists to create replications of the world through a lens for centuries, but no one had been able to *fix* the image on paper, to make it last, to make it permanent. The project to do so was technical, artistic, and hermeneutic: while experimenters attempted to use different methods and materials to fix camera images on paper and metal, they did so with the confidence that the camera was an instrument of truth, a tool that could help human beings see the world from an unbiased, divine perspective. Daguerre himself was a showman, a painter of theatrical dioramas who had become interested in image-making through that realm.

And in fact, the modern negative-positive photograph descended not from the daguerreotype, but from what was called the calotype, a picture-making technology developed in Britain by William Henry Fox Talbot. While daguerreotypes were one-of-a-kind, positive images that could not be reproduced and that were made using expensive copper plates coated with silver halide and developed over mercury fumes, calotypes were reproducible, negative-positive, paper prints. Daguerreotypes, however, produced much finer gradations of tone and detail than the calotype. As a photographic practice, daguerreotypy grew more quickly in popularity in part because it produced more detailed images, and in part because Talbot restricted the spread of his technology by holding onto his patent license. Daguerre, meanwhile, sold his patent to the French public in exchange for a lifetime pension from the government, but held on to his patent rights in Britain. His announcement in 1839 marked the release of his image-making technology to the world.

In his 2002 examination of Henry Talbot’s work, “A Philosophical Window,” art historian Geoffrey Batchen notes that the specimens of early photography — the failed experiments, the pictures that were not fixed, the images that were faded and obscured — have been viewed by art historians only as indices of technical progression towards the invention of the photographic camera, rather than as art objects in and of themselves. Looking at Talbot’s pictures critically, and taking seriously Talbot as an artist working with a camera, Batchen finds in Talbot a conscious image-maker whose work should have relevance to us today.




Batchen focuses on one of Talbot’s early photographs, “Latticed Window (with the Camera Obscura) August 1835.” The photograph contains a note: “when first made, the squares of glafs [sic] about 200 in number could be counted, with the help of a lens.”

Batchen performs a fantastic close reading of this note, highlighting Talbot’s instructions to the viewer, the suggestion that the viewer of the photograph look at it first from afar, and then up close with the aid of a lens. This set of instructions, claims Batchen,

anticipates, and even insists on, the  mobilization of the viewer’s eye, moving it back and forth, up and down, above the image. We are asked to see his picture first with the naked eye and then by means of an optical prosthesis . . . The attempt to improve one’s power of observation by looking through a lens is also a concession that the naked eye alone can no longer be guaranteed to provide the viewer with sufficient knowledge of the thing being looked at. It speaks to the insufficiency of sight, even while making us, through the accompanying shifts of scale and distortions of image that come with magnification, more self-conscious about the physical act of looking. (101-102)

Batchen’s comments here, focusing on scale and perspective, showcasing disorientations produced by new angles of vision, might remind us of Moretti’s discussions of scale, of the need for a new type of seeing to take account of thousands of texts. And indeed, amazingly, Talbot, the progenitor of the modern photograph, was also tied to early computing. He was a friend of Charles Babbage, whose Difference Engine is often described as the world’s first mechanical computer. Talbot made photos of machine-made lace and sent them to Babbage. Babbage, Batchen reports, exhibited some of Talbot’s prints in his drawing room, in the vicinity of his Difference Engine, making it likely that early computing and early photography were experienced together there (107).

In his discussion of Talbot’s lattice-window photograph, Batchen notes that Talbot indeed narratizes our gaze. He writes:

So, for Talbot, the subject of this picture is, first, the activity of our seeing it, and second, the window and latticed panes of glass, not the landscape we can dimly spy through it. He makes us peer closely at the surface of his print, until stopped by the paper fibres themselves and at the framing of his window, but no further. And what do we see there? Is ‘photography’ the white lines or the lilac ground, or is it to be located in the gestalt between them? (102)

And where, we can ask, is DH to be located? Do we even know the foreground and background between which we can locate its gestalt?



This, I think, is exactly the question we need to ask as we consider where DH is moving and where it should go.

One thing we can do is think about DH as the gestalt between gazes — not distant reading, not close reading, but the dizzy shape of self-reflexive movement between them. Though technology plays a part, it is critical reflection on that technology that can account for new, provocative digital humanities approaches.

And so, finally, I return to Emerson. The eye is the first circle, the horizon which it forms is the second. Let us plumb the space between.


Works Cited

Batchen, Geoffrey. “A Philosophical Window.” History of Photography 26.2 (2002): 100-112. Print.

Drucker, Johanna. “Humanities Approaches to Graphical Display.” Digital Humanities Quarterly 5.1 (2011). Web. 19 Apr. 2015.

—–. “Humanistic Theory and Digital Scholarship.” Debates in the Digital Humanities. Ed. Matthew K. Gold. Minneapolis: University of Minnesota Press, 2013. Web. 19 Apr. 2015.

—–. SpecLab: Digital Aesthetics and Projects in Speculative Computing. Chicago: University of Chicago Press, 2009. Print.

Emerson, Ralph Waldo. Essays and English Traits. Vol. 5. New York: P.F. Collier & Son, 1909. Print.

Jockers, Matthew L. Macroanalysis: Digital Methods and Literary History. Urbana: University of Illinois Press, 2013. Print.

—–. “Requiem for a Low Pass Filter.” matthewjockers.net. WordPress. 6 Apr. 2015. Web. 19 Apr. 2015.

—–. “Revealing Sentiment and Plot Arcs with the Syuzhet Package.” matthewjockers.net. WordPress. 2 Feb. 2015. Web. 19 Apr. 2015.

Moretti, Franco. “The Slaughterhouse of Literature.” Modern Language Quarterly 61.1 (2000): 207-228. Print.

Moretti, Franco, and Alberto Piazza. Graphs, Maps, Trees: Abstract Models for Literary History. London: Verso, 2007. Print.

Ramsay, Stephen. Reading Machines: Toward an Algorithmic Criticism. Urbana: University of Illinois Press, 2011. Print.

Rhody, Lisa M. “Topic Modeling and Figurative Language.” Journal of Digital Humanities 2.1 (Winter 2012). Web.

Sample, Mark. “Notes toward a Deformed Humanities.” samplereality.com. 2 May 2012. Web. 19 Apr. 2015.

Samuels, Lisa, and Jerome J. McGann. “Deformance and Interpretation.” New Literary History 30.1 (1999): 25–56. Print.

Schulz, Kathryn. “The Mechanic Muse: What Is Distant Reading?” The New York Times 24 June 2011. NYTimes.com. Web. 19 Apr. 2015.

Swafford, Annie. “Problems with the Syuzhet Package.” annieswafford.wordpress.com. 2 Mar. 2015. Web.

Underwood, Ted. “These are such classic history of science problems. I swear we are literally re-enacting the whole 17th century.” 29 March 2015, 10:41 p.m. Tweet. 19 Apr. 2015.


Acknowledgements: Thanks to Lindsey Albracht for her help in preparing this web edition of the talk.

Beyond the PDF: Experiments in Open-Access Scholarly Publishing (#MLA13 CFP)

As open-access scholarly publishing matures and movements such as the Elsevier boycott continue to grow, OA texts have begun to move beyond the simple (but crucial!) principle of openness towards an ideal of interactivity. This special session will explore innovative examples of open-access scholarly publishing that showcase new types of social, interactive, mixed-media texts. Particularly welcome is discussion of OA texts that incorporate new strategies of open peer review, community-based publication, socially networked reading/writing strategies, altmetrical analytics, and open-source publishing platforms, particularly as they inform or relate to print-bound editions of the same texts. Also welcome are critiques of the accessibility of interactive OA texts from the standpoint of universal design.

This roundtable aims for relatively short presentations of 5-7 minutes that will showcase a range of projects.

Interested participants should send 250-word abstracts and a CV to Matthew K. Gold at mgold@gc.cuny.edu by March 20, 2012.

Whose Revolution? Towards a More Equitable Digital Humanities

What follows is the text of a talk I gave at the 2012 MLA as part of the Debates in the Digital Humanities panel, which grew out of the just-published book of the same name (more about that in a forthcoming post). Many thanks to my fellow panelists Liz Losh, Jeff Rice, and Jentery Sayers. Thanks, too, to everyone who contributed to the active twitter backchannel for the panel and to Lee Skallerup for archiving it. Finally, I’m grateful to Jason Rhody for his helpful responses to a draft version of this presentation.

“Whose Revolution? Towards a More Equitable Digital Humanities”

The digital humanities – be it a field, a set of methodologies, a movement, a community, a singular or plural descriptor, a state of mind, or just a convenient label for a set of digital tools and practices that have helped us shift the way we perform research, teaching, and service – have arrived on the academic scene amidst immense amounts of hype. I’m sure you’re sick of hearing that hype, so I won’t rehearse it now except to say that the coverage of DH in the popular academic press sometimes seems to imply that the field has both the power and the responsibility to save the academy. Indeed, to many observers, the most notable thing about DH is the hype that has attended its arrival  — and I believe that one of my fellow panelists, Jeff Rice, will be proposing a more pointed synonym for “hype” during his presentation.

It’s worthwhile to point out that it’s harder than you’d think to find inflated claims of self-importance in the actual scholarly discourse of the field. The digital humanists I know tend to carefully couch their claims within prudently defined frames of analysis. Inflated claims, in fact, can be found most easily in responses to the field by non-specialists, who routinely and actively read the overblown rhetoric of revolution into more carefully grounded arguments. Such attempts to construct a straw-man version of DH get in the way of honest discussions about the ways in which DH might accurately be said to alter existing academic paradigms.

Some of those possibilities were articulated recently in a cluster of articles in Profession on evaluating digital scholarship, edited by Susan Schreibman, Laura Mandell, and Stephen Olsen. The articles describe many of the challenges that DH projects present to traditional practices of academic review, including the difficulty of evaluating collaborative work, the possibility that digital tools might constitute research in and of themselves, the unconventional nature of multimodal criticism, the evolution of open forms of peer review, and the emergence of the kind of “middle-state” publishing that presents academic discourse in a form that lies somewhere between blog posts and journal articles. Then, too, the much-discussed role of “alt-ac” scholars, or “alternative academics,” is helping to reshape our notions of the institutional roles from which scholarly work emerges. Each of these new forms of activity presents a unique challenge to existing models of professional norms in the academy, many of them in ways that may qualify as revolutionary.

And yet, amid this talk of revolution, it seems worthwhile to consider not just what academic values and practices are being reshaped by DH, but also what values and practices are being preserved by it. To what extent, we might ask, is the digital humanities in fact not upending the norms of the academy, but rather simply translating existing academic values into the digital age without transmogrifying them? In what senses does the digital humanities preserve the social and economic status quo of the academy even as it claims to reshape it?

A group of scholars – from both within and outside of the field – have assembled answers to some of those questions in a volume that I have recently edited for the University of Minnesota Press titled Debates in the Digital Humanities. In that book, contributors critique the digital humanities for a series of faults: not only paying inadequate attention to race, class, gender, and sexuality, but in some cases explicitly seeking to elide cultural issues from the frame of analysis; reinforcing the traditional academic valuation of research over teaching; and allowing the seductions of information visualization to paper over differences in material contexts.

These are all valid concerns, ones with which we would do well to grapple as the field evolves. But there is another matter of concern that we have only just begun to address, one that has to do with the material practices of the digital humanities – just who is doing DH work and where, and the extent to which the field is truly open to the entire range of institutions that make up the academic ecosystem. I want to suggest what perhaps is obvious: that at least in its early phases, the digital humanities has tended to be concentrated at research-intensive universities, at institutions that are well-endowed with both the financial and the human resources necessary to conduct digital humanities projects. Such institutions typically are sizeable enough to support digital humanities centers, which crucially house the developers, designers, project managers, and support staffs needed to complete DH projects. And the ability of large, well-endowed schools to win major grant competitions helps them continue to win major grant competitions, thus perpetuating unequal and inequitable academic structures.

At stake in this inequitable distribution of digital humanities funding is the real possibility that the current wave of enthusiastic DH work will touch only the highest and most prominent towers of the academy, leaving the kinds of less prestigious academic institutions that in fact make up the greatest part of the academic landscape relatively untouched.

As digital humanists, the questions we need to think about are these: what can digital humanities mean for cash-poor colleges with underserved student populations that have neither the staffing nor the expertise to complete DH projects on their own? What responsibilities do funders have to attempt to achieve a more equitable distribution of funding? Most importantly, what is the digital humanities missing when its professional discourse does not include the voices of the institutionally subaltern? How might the inclusion of students, faculty, and staff at such institutions alter the nature of discourse in DH, of the kinds of questions we ask and the kinds of answers we accept? What new kinds of collaborative structures might we build to begin to make DH more inclusive and more equitable?

As I’ll discuss later, DH Centers and funding agencies are well aware of these issues and working actively on these problems – there are developments underway that may help ameliorate the issues I’m going to describe today. But in order to help us think through those problems, and in an effort to provoke and give momentum to that conversation, I’d like to look at a few pieces of evidence to see whether there is, in fact, an uneven distribution of the digital humanities work that is weighted towards resource-rich institutions.

Case #1: Digital Humanities Centers

Here is a short list of some of the most active digital humanities centers in the U.S.:

The benefits that digital humanities centers bring to institutions seeking funding from granting agencies should be obvious. DH Centers provide not just the infrastructural technology, but also the staffing and expertise needed to complete resource-intensive DH projects.

There are two other important areas worth mentioning that may not be apparent to DHers working inside DH Centers. The first is that DH Centers provide physical spaces that may not be available at cash-poor institutions, especially urban ones. Basic elements that many people take for granted at Research 1 institutions, such as stable wifi or sufficient electrical wiring to power computer servers, may be missing at smaller institutions; such physical spaces also provide the crucial sorts of personal networking that are just as important as infrastructural connection. The second is that grants create immense amounts of paperwork, and that potential DHers working at underserved institutions might not only have to complete the technical and intellectual work involved in a DH project, and publish analyses of those projects to have them count for tenure and promotion, but might also have to handle an increased administrative role in the bargain.

[At this point in the talk, I noted that most existing DH Centers did not spring fully-formed from their universities, but instead were cobbled together over a number of years through the hard and sustained work of their progenitors.]

Case Study #2: Distribution of Grants

Recently, the NEH Office of Digital Humanities conducted a study of its Start-Up grants program, an exciting venture that differs from traditional NEH grant programs in that instead of providing large sums of money to a small number of recipients, it aims to provide smaller starter grants of $25,000 to $50,000 to a wider range of projects. The program allows the ODH to operate in a venture-capitalist fashion, accepting the possibility of failure as it explicitly seeks high-risk, high-reward projects.

The study (PDF), which tracked NEH Digital Humanities Start-Up Grants from 2007 to 2010, shows us how often members of different types of institutions applied for grants. Here is the graphic for universities:

What we see in this graph is a very real concentration of applications from universities that are Master’s level and above. The numbers, roughly, are:

Master’s/Doctoral: 575

BA or Assoc.: 80

Now, those numbers aren’t horrible, and I suspect that they have improved in recent years. And additionally, we should note that many non-university organizations applied for the NEH funding grants. Here is a breakdown of those numbers from the NEH:

What we see here, in fact, is a pretty impressive array of institutional applications for funding – certainly, this is something to build on.

And here are updated numbers of NEH SUG awards actually made – and I thank Jason Rhody, Brett Bobley, and Jennifer Serventi of the NEH ODH for their help in providing these numbers:

Now, there are a few caveats to be made here — only the home institution of the grant is shown, so collaborative efforts are not necessarily represented. Also, university libraries are mostly lumped under their respective university/college type.

Still, we can see pretty clearly here that an overwhelming number of grants have gone to Master’s level and above institutions. And we should be especially concerned that community colleges, which make up the vast number of institutions of higher education in our country, appear to have had a limited involvement in the digital humanities “revolution.”

New Models/New Solutions

Having identified a problem in DH, I’d like to turn now towards some possible solutions and close by discussing some important and hopeful signs for a more equitable future for the digital humanities work.

One of the fun things about proposing a conference paper in April and then giving the paper in January is that a lot can happen in eight months, especially in the digital humanities. And here, I’m happy to report on several new and/or newish initiatives that have begun to address some of the issues I’ve raised today. I’m going to run through them fairly quickly in the hope that many of you are already familiar with them (though I’d certainly be happy to expand on them during the Q&A):

  • DH Commons

This new initiative seeks to create a large-scale DH community resource that matches newcomers who have ideas for DH projects with experts in the field who can either help with the work itself or serve in an advisory capacity. The project, which is now affiliated with CenterNet, an international organization of digital-humanities centers, promises to do much to spread the wealth of DH expertise. The site has just been launched at this convention and should prove to be an important community-building resource for the field.

  • DH Questions and Answers

Like DH Commons, DH Questions and Answers, which was created by the Association for Computers and the Humanities, offers a way for newcomers to DH to ask many types of questions and have them answered by longstanding members of the field – thus building, in the process, a lasting knowledge resource for DH.

  • THATCamps

These small, self-organized digital-humanities unconferences have been spreading across the country and thereby bringing DH methodologies and questions into a wide variety of settings. Two upcoming THATCamps that promise to expand the purview of the field are THATCAMP HBCU and THATCAMP Caribbean. Both of these events were organized explicitly with the intent of addressing some of the issues I’ve been raising today.

  • The Growth of DH Scholarly Associations

All of these organizations are actively drawing newcomers into the field. ACH created the above-mentioned DH Questions and Answers. NITLE has done excellent public work that is enabling the members of small liberal-arts colleges to be competitive for DH grants. CenterNet is well-positioned to act as an organizational mentor for other institutions.

These kinds of virtual, regional, and multi-institutional support networks are key, as they allow scholars with limited resources on their own campuses to create cross-institutional networks of infrastructure and support.

  • Continued Commitment to Open Access Publications, Open-Source Tools, and Open APIs

The DH community has embraced open-access publication, a commitment that has run, in recent years, from Schreibman, Siemens, and Unsworth’s Companion to the Digital Humanities through Dan Cohen and Tom Scheinfeldt’s Hacking the Academy to Kathleen Fitzpatrick’s Planned Obsolescence to Bethany Nowviskie’s alt-academy to my own Debates in the Digital Humanities, which will be available in an open-access edition later this Spring. Having these texts out on the web removes an important barrier that might have prevented scholars, staff, and students from cash-poor institutions from fully exploring DH work.

Relatedly, the fact that many major DH tools – and here the list is too long to mention specific tools – are released on an open-source basis means that scholars working at institutions without DH Centers don’t have to start from scratch. It’s especially crucial that the NEH Office of Digital Humanities states in its proposal guidelines that “NEH views the use of open-source software as a key component in the broad distribution of exemplary digital scholarship in the humanities.”

These institutes provide key opportunities for DH outreach to academics with a range of DH skills.

I’d like to close by offering five key ideas to build on as we seek to expand the digital humanities beyond elite research-intensive institutions:

  • Actively perform DH-related outreach at underserved institutions
  • Ask funding agencies to make partnerships and outreach with underserved peer institutions a recommended/required practice
  • Continue to build out virtual/consortial infrastructure
  • Build on projects that already highlight cross-institutional partnerships [here I mentioned my own “Looking for Whitman” project]
  • Study collaborative practices [here I mentioned the importance of connecting to colleagues in writing studies]

While none of these ideas will solve these problems alone, together they may help us arrive at a more widely distributed version of DH that will enable a more diverse set of stakeholders to take active roles in the field. And as any software engineer can tell you, the more eyes you have on a problem, the more likely you are to find and fix bugs in the system. So, let’s ensure that the social, political, and economic structures of our field are as open as our code.

Photo credit: “Abstract #1” by boooooooomblastandruin

DH and Comp/Rhet: What We Share and What We Miss When We Share

What follows is the text of a short talk I gave at the 2012 MLA as part of the session Composing New Partnerships in the Digital Humanities. Many thanks to session organizer Catherine Prendergast, my fellow panelists, and everyone who took part in the discussion in person or through twitter.

Like my fellow panelists, I joined this session because I’d like to see an increased level of communication and collaboration between digital humanists and writing-studies scholars. There is much to be gained from the kinds of partnerships that such collaborations might foster, and much for members of both fields to learn from one another. I suspect that most people in this room today agree upon that much.

So, why haven’t such partnerships flourished? What issues, misconceptions, lapses, and tensions are preventing us from working together more closely?

A shared history of marginalization

Both comp/rhet scholars and digital humanities scholars have existed at the margins of traditional disciplinary formations in ways that have shaped their perspectives. Writing Studies has a history of being perceived as the service wing of English departments. Beyond heavy course loads, the field is sometimes seen as being more applied than theoretical – this despite the fact that writing studies has expanded into areas as diverse as complexity theory, ecocriticism, and object-oriented rhetoric.

The digital humanities, meanwhile, arose out of comparably humble origins. After years of inhabiting the corners of literature departments, doing the kinds of work, such as scholarly editing, that existed on the margins of English departments, humanities computing scholars emerged, blinking and a bit disoriented, into the spotlight as digital humanists. Now the subject of breathless articles in the popular academic press and the recipients of high-profile research grants, DHers have found their status suddenly elevated. One need only look at the soul-searching blog posts that followed Bill Pannapacker’s suggestion at the last MLA that DH had created a cliquish star system to see a community still coming to terms with its new position.

I bring up these points not to reopen old wounds, but rather to point out that they have a common source: a shared focus on the sometimes unglamorous, hands-on activities such as writing, coding, teaching, and building. This commonality is important, and it’s something, well, to build on, not least of all because we face a common problem as we attempt to help our colleagues understand the work we do.

Given what we share, it’s surprising to me that so many writing-studies scholars seem to misunderstand what DH is about. Recent discussions of the digital humanities on the tech-rhet listserv, one of the primary nodes of communication among tech-minded writing-studies scholars, show that many members of the comp/rhet community see DH as a field largely focused on digitization projects, scholarly editions, and literary archives. Not only is this a limited and somewhat distorted view of DH, it’s also one that is especially likely to alienate writing-studies scholars, emphasizing as it does the DH work done within the very traditional literary boundaries that were used to marginalize comp/rhet in previous decades.

This understanding of DH misses some key elements of this emerging field:

  1. Its collaborative nature, which is also central to comp/rhet teaching and research;
  2. The significant number of digital humanists who, like me, focus their work not on scholarly editions and textual mark-up, but rather on networked platforms for scholarly communication and networked open-source pedagogy;
  3. The fact that the digital humanities are open in a fundamental way, both through open-access scholarship and through open-source tool building;
  4. The fact that DH, too, has what Bethany Nowviskie has called an “eternal September” – a constantly refreshed group of newbies who seem to emerge and ask the same sorts of basic questions that have been asked and answered before. We need to respond to such questions not by becoming frustrated that newcomers have missed citations to older work – work that may indeed be outside of their home disciplines – but rather by demonstrating how and why that past work remains relevant in the present moment.
  5. The fact that there is enormous interest in networked pedagogy within the digital humanities right now. This is a key area of shared interest in which we should be collaborating.
  6. The fact that DH is interdisciplinary and multi-faceted. To understand it primarily as the province of digital literary scholars is to miss the full range of the digital humanities, which involves stakeholders from disciplines such as history, archaeology, classical studies, and, yes, English, as well as librarians, archivists, museum professionals, developers, designers, and project managers.

    In this sense, I’d like to recall a recent blog post by University of Illinois scholar Ted Underwood, who argued that DH is “a rubric under which a bunch of different projects have gathered — from new media studies to text mining to the open-access movement — linked mainly by the fact that they are responding to related kinds of fluidity: rapid changes in representation, communication, and analysis that open up detours around some familiar institutions.”

To respond to DH work by reasserting the disciplinary boundaries of those “familiar institutions,” as I believe some writing-studies scholars are doing, is to miss an opportunity for the kinds of shared endeavors that are demanded by our moment.

So, let’s begin by looking towards scholars who have begun to bridge these two fields and think about the ways in which they are moving us forward. I’m thinking here of hybrid comp/rhet-DH scholars like Alex Reid, Jentery Sayers, Jamie “Skye” Bianco, Kathie Gossett, Liz Losh, William Hart-Davidson, and Jim Ridolfo, all of whom are finding ways to blend work in these fields.

I’d like to close with some words from Matt Kirschenbaum, who reminds us, in his seminal piece, “What Is Digital Humanities and What’s It Doing in English Departments?,” that “digital humanities is also a social undertaking.” DH, I think Matt is saying, is not just a series of quantitative methodologies for crunching texts or a bunch of TEI markup tags, but rather a community that is in a continual act of becoming. We all need to do a better job of ensuring that our communities are open and of communicating more clearly with one another. This session, I hope, is a start.

An Update

I’m excited to announce that I’ll be joining the CUNY Graduate Center this Fall as Advisor to the Provost for Master’s Programs and Digital Initiatives. My charge there will involve working with the Provost and Associate Provosts to promote and strengthen existing Master’s Programs and to develop new degree programs. I’ll also be collaborating on a variety of digital initiatives with many members of the GC community. It’s an exciting opportunity and I’m looking forward to the work that lies ahead.

While I will continue to teach at City Tech as I take on this new role, I regret to say that I will be unable to continue serving as PI on the U.S. Department of Education “Living Lab” grant. That project has gotten off to a fast and productive start, thanks to the extremely hard work of the entire grant team. In our first year, we’ve had an initial cohort of faculty members participate in a newly designed General Education seminar; we have built the first iteration of the City Tech OpenLab, a socially networked, community-based platform for teaching, learning, and sharing that is currently in a soft launch; we have established the Brooklyn Waterfront Research Center, which has already become part of NYC’s long-term vision for its waterfront; and we have laid the groundwork for numerous other projects that are currently in the pipeline. I am grateful to be leaving the grant in the very capable hands of my friend and colleague Maura Smale, who will be assisted by our excellent Project Coordinator Charlie Edwards and a wonderful team of colleagues. I wish them the very best as they continue the work that we have begun together, and I look forward to remaining involved in the project as it moves forward.

Interview with Bob Stein Now Published in Kairos

I’m happy to report that my interview with Bob Stein (computer pioneer, as Wikipedia disambiguates him), titled “Becoming Book-Like: Bob Stein and the Future of the Book,” is now available in the new issue of Kairos: A Journal of Rhetoric, Technology, and Pedagogy.

The title of the interview comes from the following snippet of our conversation (Bob is speaking about a realization he had in 1981 about the future of the book):

The “aha” moment I had was that adding a microprocessor to the mix meant that producer-driven media, like movies and television, were going to be transformed into user-driven media. For me, the crucial thing — and this happened in the process of writing the paper for Britannica — was when I wrestled with the question of “what’s a book?” and “what happens when we make it electronic?” I realized that everything was going to become book-like in the sense of being user-driven and that the ways in which a user interacts with content becomes an important part of her experience.

I love the way that Bob upends conventional wisdom by defining the book as an active, user-driven medium and the way he foresees digital media becoming more, and not less, “book-like” in the future. “Becoming book-like” also points to the many ways in which new media remediate old media.

The interview is presented in CommentPress, a wonderful theme for WordPress developed by Bob’s Institute for the Future of the Book that allows readers to attach comments to specific paragraphs of text. I encourage you to visit the journal and leave your responses in the comments.

On Reading Like a Hawk

Robert D. Richardson, Jr.’s Emerson: The Mind on Fire (1995) is one of my favorite biographies, and not just because I had the good fortune as an undergraduate to study with the author while he was writing the book. In his careful, moving study of Emerson’s life, Richardson charts the intellectual growth of one of America’s finest thinkers with a novelist’s eye for detail and a scholar’s knowledge of historical context, and he does it all in short, elliptical chapters that echo Emerson’s own aphoristic sentences.

One of my favorite subtexts of the biography is Richardson’s interest in Emerson’s reading and writing practices. Both of the following passages from the biography speak to Emerson’s omnivorous consumption of books and his methods for working through them:

Passage 1 (from Chapter 11: Pray Without Ceasing):

Coleridge notes that there are four kinds of readers: the hourglass, the sponge, the jelly bag, and the Golconda. In the first everything that runs in runs right out again. The sponge gives out all it took in, only a little dirtier. The jelly bag keeps only the refuse. The Golconda runs everything through a sieve and keeps only the diamonds. Emerson was not a systematic reader, but he had a genius for skimming and a comprehensive system for taking notes. Most of the time he was the pure Golconda, what miners call a high-grader, working his way rapidly through vast mines of material and pocketing the richest bits. (67)

Emerson, it appears, was digging into data before his time.

Passage 2 (from Chapter 28: A Theory of Animated Nature):

Goethe’s greatest gifts to Emerson were two. First was the master idea that education, development, self-consciousness, and self-expression are the purposes of life; second was the open, outward-facing working method of sympathetic appropriation and creative recombination of the world’s materials.

There is an important corollary to the axiom of appropriate appropriation. Along with Emerson’s freedom to take whatever struck him went the equally important obligation to ignore what did not. Emerson read widely and advised others to do so, but he was insistent about the dangers of being overwhelmed and overinfluenced by one’s reading. “Do not attempt to be a great reader,” he told a young Williams College student named Charles Woodbury. “Read for facts and not by the bookful.” He thought one should “learn to divine books, to feel those that you want without wasting much time on them.” It is only worthwhile concentrating on what is excellent and for that “often a chapter is enough.” He encouraged browsing and skipping. “The glance reveals what the gaze obscures. Somewhere the author has hidden his message. Find it, and skip the paragraphs that do not talk to you.”

What Emerson was really recommending was a form of speed-reading and the heightened attention that goes with speed-reading. When pressed by the young Woodbury, Emerson gave details:

“Learn how to tell from the beginnings of the chapters and from the glimpses of sentences whether you need to read them entirely through. So turn page after page, keeping the writer’s thoughts before you, but not tarrying with him, until he has brought you the thing you are in search of. But recollect, you only read to start your own team.”

The last point is crucial. Reading was not an end in itself for Emerson. He read like a hawk sliding on the wind over a marsh, alert for what he could use. He read to nourish and to stimulate his own thought, and he carried this so far as to recommend that one stop reading if one finds oneself becoming engrossed. “Reading long at one time anything, no matter how it fascinates, destroys thought,” he told Woodbury. “Do not permit this. Stop if you find yourself becoming absorbed, at even the first paragraph.” (173-174)

These passages speak, in surprising ways, to current debates about digital media. As is often the case, practices popularly understood to be effects of digital media have histories that predate the digital (David Crystal makes this point in Txtng: The Gr8 Db8, as does Cathy Davidson in her blog post The Digital Nation Writes Back). Perhaps we might reclaim Emerson as the high priest of continuous partial attention, the ultimate historical rejoinder to the claims of Nicholas Carr and Sherry Turkle.

As Richardson points out, browsing and skimming were, for Emerson, not so much ways of avoiding the hard work of reading deeply as they were methodologies for jump-starting his own writing processes. It’s good practice to remember that there are many possible paths towards wisdom, and that some of them are more direct than others.

Update: Here is a related post by Chris Kelty: How to read a (good) book in one hour.

Clearing Space on the SD Card of a Nexus One Android Phone

CC-licensed photo from Wikimedia

So what if Google has discontinued the Nexus One, closed its N1 web store, and released newer Nexus phones to market? None of that fazes me. I love my Nexus One for the pleasant heft of its metal body and the smooth contours of its rounded corners, its glowing white button and its removable back cover. It’s not for nothing that Wired deemed it “sexy.”

Still, the N1 can frustrate even its adoring owners at times. I ran into just that situation the other day when I tried to use the camera on the phone. An alert notification informed me that I had only 3MB of space left on my 4GB SD card; I would have to lower the quality of the photos I was taking or stop taking them altogether.

This came as a surprise, since I had recently transferred all of my existing photos and videos from my phone to my computer. With that material off of the phone, what could possibly be taking up so much room?

A little bit of googling produced only marginally helpful advice, so I’d like to explain how I found my way back to a nearly empty SD card. In the end, it turned out that an extra step was needed to truly remove those old files from the phone. In the hope that it might be helpful for other N1/Android owners, here is how I cleared additional space on my SD Card:

— Check Settings > SD card & phone storage to see how much free space you have
— Connect N1 to a computer and transfer all photos and videos from the DCIM/camera folder
— Delete all photos and videos from the DCIM/camera folder
— Disconnect N1 from computer
— Download the ASTRO file manager or another file management app from the Android Market. This will allow you to browse the folders on your Android phone from the phone interface itself.
— Open Astro and go to .Trashes
— Delete all files in .Trashes
— Go to Settings > SD card & phone storage to confirm that your SD card now has empty space.

And that’s it — upon completing the above steps, I had 3.69 GB of free space on the card. No need to delete applications or clear caches, as others suggest. Just clear your .Trashes folder, and you should be good to go.
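For anyone who would rather do the cleanup from the desktop side while the phone is mounted as USB mass storage, here is a minimal Python sketch of the same idea: it reports how much space the hidden .Trashes folder is taking up on the mounted card and, if asked, empties it. This is just an illustration of the approach, not the method I used, and the mount point shown (/Volumes/NEXUS_ONE) is hypothetical — adjust it to wherever your SD card actually appears on your computer.

    import argparse
    import os
    import shutil

    def folder_size(path):
        """Return the total size, in bytes, of all files under path."""
        total = 0
        for root, _dirs, files in os.walk(path):
            for name in files:
                try:
                    total += os.path.getsize(os.path.join(root, name))
                except OSError:
                    pass  # skip files that vanish or cannot be read
        return total

    def main():
        parser = argparse.ArgumentParser(
            description="Inspect or empty the .Trashes folder on a mounted SD card")
        parser.add_argument("mount_point",
                            help="where the card is mounted, e.g. /Volumes/NEXUS_ONE (hypothetical)")
        parser.add_argument("--empty", action="store_true",
                            help="delete the contents of .Trashes")
        args = parser.parse_args()

        trashes = os.path.join(args.mount_point, ".Trashes")
        if not os.path.isdir(trashes):
            print("No .Trashes folder found at", trashes)
            return

        # Report how much space the trash is holding before doing anything destructive.
        print(".Trashes is holding about %.1f MB" % (folder_size(trashes) / (1024.0 * 1024.0)))

        if args.empty:
            for entry in os.listdir(trashes):
                target = os.path.join(trashes, entry)
                if os.path.isdir(target):
                    shutil.rmtree(target, ignore_errors=True)
                else:
                    os.remove(target)
            print("Emptied .Trashes")

    if __name__ == "__main__":
        main()

Either way, the point is the same: the space is reclaimed only once the contents of .Trashes are actually deleted, not merely moved there.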

Interviewing Bob Stein

On Monday, I will be meeting with Bob Stein, founder and co-director of the Institute for the Future of the Book, to conduct an interview that will later be published in Kairos. If you think you don’t know Stein’s work, you’re probably wrong: over a long career, he has worked on a number of tools and projects that are used both within and outside of academia. He co-founded the Voyager Company, which produced innovative books on CD-ROM, such as Who Built America?, as well as editions of films on laserdisc that later became the Criterion Collection; and with the Institute for the Future of the Book, Stein has been involved in projects such as CommentPress, Sophie, and MediaCommons.

I plan to ask Bob about all of these projects and about his career as an innovator in the field; I’ll also ask him to discuss the impact of mobile devices on writing and reading practices, the rise of new digital platforms for composition, and the rapid expansion of the eBook marketplace.

But I still have room for some additional questions, and I’d love to have your input: on what subjects would you like to hear Stein speak? Please let me know in the comments and I’ll try to work them into the conversation.

Hacking Together Egalitarian Educational Communities; Some Notes on the Looking for Whitman Project

When I discuss the “Looking for Whitman” project, a multi-campus experiment in digital pedagogy sponsored by the NEH Office of the Digital Humanities, I often emphasize the place-based structure of the project. As part of it, four courses were offered in institutions located in cities in which Walt Whitman lived; students spent the Fall 2009 semester reading texts that Whitman had written in their location and sharing their thoughts, reactions, and research with one another in a dynamic, social, web-based learning environment.

What I discuss a little less often, even though it was extremely important to the project, is the way in which the project worked within existing institutional structures in order to encourage, or at least model, a shift in their functioning. Rather than forming a meta-course that would run classes outside of traditional, credit-bearing disciplinary and institutional frameworks, we chose to work within existing academic boundaries. This wound up necessitating a great deal of administrative work: faculty participants had to ensure that their courses would get on the books in forms that would allow them to be aligned with the project, which involved extensive consultations with departments, deans, registrars, colleagues, and curriculum committees.

But by working within those institutional structures, we subverted some elements of them. Perhaps the most radical element of the project was the way in which it brought participants from very different types of schools into linked virtual learning spaces. The colleges chosen for participation in Looking for Whitman — New York City College of Technology (CUNY), New York University, University of Mary Washington, and Rutgers University-Camden — represented a wide swath of institutional profiles: an open-admissions public college of technology, a private research-intensive university, a public liberal arts college, and a public research university, each with very different types of students. Beyond that, the courses explicitly engaged different types of classes and learners with very different backgrounds and knowledge bases. The class at the University of Mary Washington consisted of senior English majors who were taking the course as a capstone experience. There were two classes at Rutgers; one contained a mix of undergraduate English majors and master’s-level students; the other consisted entirely of graduate students who were taking a methods course that served as an introduction to graduate English studies. At City Tech, meanwhile, undergraduate students with little training in literary studies were taking a course on Whitman as part of their general education requirements.

The roster of schools became even more diverse when our NYU faculty member, Karen Karbiener, received a Fulbright Fellowship to Serbia and decided to include her class at the University of Novi Sad in the project. It was this interesting mix of institutions that Jim Groom wrote about in his post on Looking for Whitman:

From the University of Mary Washington to Rutgers-Camden to CUNY’s City Tech to Serbia’s University of Novi Sad, the project represents a rather compelling spectrum of courses from a variety of universities that provide a unique network of students from a wide array of experiences. This is not a “country club for the wealthy,” but a re-imagining of a distributed, public education that is premised on an approach/architecture that is affordable and scales with the individual. It’s a grand, aggregated experiment that will hopefully demonstrate the possibilities of the new web for re-imagining the boundaries of our institutions, while at the same time empowering students and faculty through a focused and personalized learning network of peers, both local and afar.

Mixing all of these students together in a single online space — especially one that placed a great deal of emphasis on social interaction — might seem to some observers to be at best a bad idea, and at worst a dangerous one.  What could graduate students studying literature learn from undergraduate students taking gen-ed courses at an urban school of technology?  Would undergrads flame one another on the course site?  Would undergrads be intimidated by the work of more advanced students who were working within their fields of specialization?

A look around the project website will show that productive interactions did take place, though not always without complications.  We’re just beginning to sort through the data associated with the project, and we’re especially looking forward to examining student responses to the extensive survey we circulated at the close of the semester.

Still, it’s not too early to say that the radical potential of projects like “Looking for Whitman” — and, I would argue, the radical potential of Digital Humanities pedagogical projects more generally — lies in their ability to connect learners in ways that hack around the artificial boundaries of selectivity and elitism that educational institutions have erected around themselves.  And if one result of that hacking is the creation of more open, more diverse, more egalitarian learning environments that engage a broader spectrum of students and institutions, the Digital Humanities might find that it has a social mission that complements its technological one.

(Submitted to Hacking the Academy)