Today, we chat about making digital history with Lauren Klein, Assistant Professor in the School of Literature, Media, and Communication at the Georgia Institute of Technology. Her writing has appeared in Early American Literature and American Quarterly, with a piece forthcoming in 2014 in American Literature. She is currently at work on two book projects: the first on the relationship between eating and aesthetics in the early republic, and a second that provides a cultural history of data visualization from the eighteenth century to the present.
JUNTO: Congratulations on receiving an NEH Humanities Start-up grant. Can you tell us about TOME, and how digital humanities theory and practice fits with your approach to research?
KLEIN: Thanks, Sara. TOME, which stands for Interactive TOpic Model and MEtadata Visualization, is a project that I conceived with a colleague in computational linguistics, Jacob Eisenstein. We connected over a shared interest in natural language processing (NLP), an area of research concerned with techniques that allow computers to understand (or “process”) human (or “natural”) language. I’d used some NLP tools as part of my project, Visualizing the Papers of Thomas Jefferson, while Jacob, as it turned out, investigates the models that underlie those tools as part of his research.
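(To give a concrete, if greatly simplified, sense of the text processing that topic-model tools build on: the sketch below uses only the Python standard library to turn a toy corpus into per-document term counts and then surface each document's most distinctive terms via tf-idf. The newspaper names and texts are hypothetical, not drawn from the actual abolitionist archive, and a real topic model would use dedicated software such as MALLET or gensim rather than this hand-rolled weighting.)

```python
from collections import Counter
import math

# Toy stand-ins for newspaper issues: titles and texts are
# hypothetical, not drawn from the actual abolitionist archive.
docs = {
    "standard_1841": "slavery abolition petition congress freedom press",
    "liberator_1841": "slavery abolition moral suasion freedom immediate",
    "emancipator_1841": "slavery politics party vote election congress",
}

# Tokenize and count terms per document -- the document-term counts
# that topic models such as LDA take as their input.
counts = {name: Counter(text.split()) for name, text in docs.items()}

def tf_idf(term, doc_name):
    """Weight a term by how distinctive it is of a single document."""
    tf = counts[doc_name][term]
    df = sum(1 for c in counts.values() if term in c)
    return tf * math.log(len(docs) / df)

# The terms that best distinguish each paper from the others.
distinctive = {}
for name in docs:
    scored = {t: tf_idf(t, name) for t in counts[name]}
    distinctive[name] = sorted(scored, key=scored.get, reverse=True)[:2]
    print(name, distinctive[name])
```

A term like "slavery," which appears in every document, scores zero here; terms unique to one paper score highest — the same intuition, at miniature scale, behind asking what themes set one abolitionist paper apart from another.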
From the start, we wanted our collaboration to engage active research questions in both of our fields. After a series of discussions, we discovered that the evolution and circulation of ideas is not only of central concern to literary critics and historians; it's also a current interest in computational linguistics. Our conversations took place right after the Green Revolution in Iran, which, because it relied so much on Twitter and other social networking sites (or so the narrative goes), prompted a slew of analyses and visualizations of the movement's momentum. We wondered if we might be able to develop a tool that would allow any scholar—and not just computer scientists—to trace the transmission of ideas across social networks and over time.
So that’s the story behind the computational aspect of the project. But we also felt very strongly—again, from the start—that we wanted to be able to use our tools and techniques to engage in current cultural-critical debates. It’s one of my most strongly held scholarly beliefs—and this gets at your question about how I view digital humanities theory and practice in relation to my own research—that digital humanities methods should not only be applied to produce digital humanities research; they should also be employed to advance field-specific arguments. So in selecting the archive that would serve as our initial dataset, we wanted to identify a set of texts of exceptional richness.
We decided to focus on the abolitionist newspapers of the nineteenth-century United States because of the unusual diversity of their authorship—men and women, African Americans and whites, Northerners and Southerners, U.S. citizens and those from abroad—and because of the incredibly contentious debates that took place in their pages. While they were united in their desire to end slavery, they often disagreed about how best to achieve their common goal. My favorite example of this is when Lydia Maria Child resigned her position as editor of the National Anti-Slavery Standard after refusing to serve up the “hyena soup with brimstone seasoning” that her more radical friends desired. With TOME, we hope to identify additional events and ideas, and other aspects of ideological influence, that might have traveled through these papers undetected, and that might complicate our current understanding of that momentous time.
So to return to your question about the role of DH in relation to my research, for me it’s always about advancing humanistic inquiry. I strongly believe that any method—digital or more traditional—should be selected because of how it exposes an issue or engages in a debate in a new (or otherwise provocative) way. For a number of reasons—political as much as epistemological—we tend to distinguish between “digital humanities” and everything else. But there will come a time when that distinction will no longer be meaningful, and sooner than we think!
JUNTO: For those starting out in the field of digital humanities, what kind of training do you recommend? When building your grant proposal, what models did you look to?
KLEIN: That’s a really good question. There are many pathways into the field, ranging from online tutorials and lists of tools, to actually attending an institute. But even before you dedicate yourself to learning a particular tool or approach, I think the best way to start out in DH—like any other field—is to immerse yourself in existing projects, with an eye towards understanding how they were constructed—in terms of both argument and method. Miriam Posner has a great blog post, “How Did They Make That?”, in which she unpacks a range of digital projects, explaining the tools and techniques they employed and providing links to sites for getting started with those tools. Surveying the terrain, and trying to put yourself in the mindset of a DHer, will (ideally) allow you to home in on what stories you could tell, or arguments you could advance, that would lend themselves to a digital methodology.
The other thing I think it’s important to underscore is that there is a wide range of really easy-to-use tools, with almost no learning curve, that can be employed to get a digital project off the ground. There are amazing projects that have been made with Google Maps, for instance, or with basic text analysis tools. It’s the argument or concept that makes for a compelling digital humanities project; the exact tools deployed can be fine-tuned later.
As for building my grant proposal, I took a lot of advice from colleagues in the sciences, who have been trained to be very specific about what, exactly, they are proposing to do. The successful grant narratives posted on the NEH website were also very helpful; we modeled ours after the narratives that had been posted on the site, and it’s gratifying that ours has now been made available. Finally, many granting agencies will review draft applications before the final submission date; we did this and received valuable feedback from the program officer.
JUNTO: What’s next for TOME? Is there a digital tool that you’d like to use, but it just doesn’t exist yet?
KLEIN: Jacob and I, along with Iris Sun, a grad student in Georgia Tech’s Digital Media program, spent the past semester exploring various methods of visualizing the newspapers’ content, ultimately deciding upon the set of visualizations that we think are best suited to the archive’s particular needs. We plan to prototype the interface in the spring, using it to develop new arguments about the abolitionist press in the coming years.
JUNTO: Finally, can you reflect on the current state of peer review for digital projects, and suggest any changes for the process?
KLEIN: It’s another good question, and there’s a real (and expanding) need. For the past few years, I’ve been involved in NINES, an organization based out of the University of Virginia that provides peer review for digital projects (among other initiatives). But that review process takes place after the fact—that is, individuals and groups create their projects on their own, or under the direction of a local DH center, and only then submit them to us for comment. They’re often asked to revise certain content, or improve certain features, but the discussion takes place in the end stage of the project’s development.
What we’re starting to see with tools like USC’s Scalar is partnerships with university presses in the earlier phases of project development, and that’s incredibly exciting to me. Projects always benefit from an outside set of eyes, and in the wide terrain of the internet—and even the field of DH at this point—it can be hard to know where to turn. Adapting the university press structure to fit this need is one promising direction. I’ll be very eager to see these initiatives’ first publications.