This summer, I counted. My dissertation, as my Contributor page at The Junto helpfully notes, includes both qualitative and quantitative analysis. And so, to enrich the latter portion of my project, I spent July at the archives, counting. Perhaps more so than most other forms of archival work, counting is an exercise in delayed gratification, the overall picture springing into focus only once the research and subsequent analysis are complete. This meant I had plenty of time to reflect on my methodology as I scanned through microfilm, paged through record books, examined case files, and counted, and counted, and counted.
During the heyday of the social history movement in the 1970s and 1980s, tables and numbers abounded in recently published historical monographs and articles. Historians such as Michael Zuckerman, Philip Greven, and John Demos mined town and probate records to examine the fabric of life in colonial New England towns. Lois Carr, Lorena Walsh, and others employed parallel methods in the study of the Chesapeake. For these scholars, systematic quantitative analysis of a vast source base yielded their core arguments. Such methodologies remained influential into the 1990s, when historians like Cornelia Dayton published monographs that presented vast amounts of data even as they broadened their arguments beyond those of previous social histories.
In subsequent decades, the importance of counting and computation seems to have waned. Of course, some areas of early American history—for instance, economic history and the study of the slave trade—remain heavily quantitative, but these sub-fields are exceptions. More frequently, historians who engage in quantitative analysis now do so to offer context or to provide figures that complement other methodologies and support broader arguments.
In other words, whereas older social histories used extensive quantitative data to drive and support their arguments, more recent works, while nonetheless deeply researched, present such data more sparingly. This shift is encapsulated in the reflections of many recent PhDs, whom I’ve often heard make statements along the lines of, “My dissertation had lots of numbers, but I’m moving them to the footnotes or cutting them out for the book.”
What accounts for this shift? One could perhaps point to publishers’ increasing preference for concise monographs, or to the shift from local frameworks to broader continental or Atlantic paradigms. Even more fundamentally, however, I think this shift reflects changes in the kinds of questions early Americanists are choosing to ask and answer, and in the models we select to account for change over time.
To most of The Junto’s readers, it probably isn’t news that cultural history has gained strength in recent years. As we have become increasingly interested in shifting discourses and practices and in strategic modes of performance and appropriation, it seems that counting has become less essential to the presentation of our research.
I applaud cultural history’s emphasis on eclectic source bases and creative ways of reading them. Indeed, many of the works that I most admire fall into this category, and my own methods certainly align more closely with those of other recent monographs than with those of the older school of social history. But, at the same time, I wonder where the current state of counting leaves historians like me.
Quantitative analysis demands rigor, as does any other mode of inquiry. Yet, more so than other research strategies, counting requires a clear plan at the outset, as it can be difficult to shift one’s approach midstream. This summer, I was struck by the number of seemingly small decisions that I was forced to make in the course of my research. I used a combination of FileMaker and Excel to record my findings, and some of my questions concerned how I should construct my databases. Others confronted me at the archives. Which terms and years should I include in my sample of court records? How many court cases do I need to create a sample of sufficient size? How should I deal with incomplete records? How should I record the cases that fall between the categories that I created at the outset of my research? How I chose to record any one case may have been trivial, but my cumulative choices could have significant consequences for my overall findings.
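I worked in FileMaker and Excel, but the same record-keeping decisions—a fixed list of categories settled on in advance, a catch-all for cases that fall between them, and a flag for incomplete records rather than silently dropping them—could be sketched in any tool. The following is a rough illustration only; the category names and cases here are hypothetical, not drawn from my actual sample:

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical category scheme, fixed before sampling begins.
CATEGORIES = {"debt", "slander", "assault", "other"}

@dataclass
class CourtCase:
    """One sampled case; the fields mirror decisions made at the archive."""
    year: int
    category: str          # cases that fit no category get "other"
    complete: bool = True  # incomplete records are flagged, not discarded

def tally(cases):
    """Count cases per category while keeping incomplete records visible."""
    counts = Counter(c.category for c in cases)
    incomplete = sum(1 for c in cases if not c.complete)
    return counts, incomplete

# Invented examples of the three situations described above.
cases = [
    CourtCase(1745, "debt"),
    CourtCase(1745, "slander", complete=False),  # record book damaged
    CourtCase(1746, "other"),                    # fits no predefined category
]
counts, incomplete = tally(cases)
print(counts["debt"], counts["other"], incomplete)  # prints "1 1 1"
```

The point of the sketch is not the code but the discipline it enforces: by recording an explicit `complete` flag and an explicit "other" category, the small judgment calls made at the microfilm reader remain visible and auditable when the final tallies are computed.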
I know that I am far from the only young scholar to engage in quantitative research, and I suspect that my experience this summer was far from unique. Generous, excellent advice from mentors and colleagues was essential as I navigated my way through court records. But is this kind of informal, ad hoc training enough for historians as we engage in our own counting and as we critique the quantitative research of others?
Even as we have moved away from counting as a principal research strategy, numbers have the power to cultivate trust and admiration in readers. If we are not sufficiently transparent about the decisions associated with counting, we reinforce tendencies to see its methodology as self-evident. Particularly as history’s disciplinary boundaries (or lack thereof) continue to come under scrutiny, I would argue that it is incumbent upon us to talk more openly about the perils and possibilities of quantitative research and to offer historians more formal training in how to count.