On Counting: A Reflection on Quantitative Research

This summer, I counted. My dissertation, as my Contributor page at The Junto helpfully notes, includes both qualitative and quantitative analysis. And so, to enrich the latter portion of my project, I spent July at the archives, counting. Perhaps more so than most other forms of archival work, counting is an exercise in delayed gratification, the overall picture springing into focus only once the research and subsequent analysis are complete. This meant I had plenty of time to reflect on my methodology as I scanned through microfilm, paged through record books, examined case files, and counted, and counted, and counted.

During the heyday of the social history movement in the 1970s and 1980s, tables and numbers abounded in recently published historical monographs and articles. Historians such as Michael Zuckerman, Philip Greven, and John Demos mined town and probate records to examine the fabric of life in colonial New England towns. Lois Carr, Lorena Walsh, and others employed parallel methods in the study of the Chesapeake. For these scholars, systematic quantitative analysis of a vast source base yielded their core arguments. Such methodologies remained influential into the 1990s, when historians like Cornelia Dayton published monographs that presented vast amounts of data even as they broadened their arguments beyond those of previous social histories.

In subsequent decades, the importance of counting and computation seems to have waned. Of course, some areas of early American history—for instance, economic history and the study of the slave trade—remain heavily quantitative, but these sub-fields are exceptions. More frequently, historians who engage in quantitative analysis now do so to offer context or to provide figures that complement other methodologies and support broader arguments.

In other words, whereas older social histories used extensive quantitative data to drive and support their arguments, more recent works, while nonetheless deeply researched, present such data more sparingly. This shift is encapsulated in the reflections of many recent PhDs, whom I’ve often heard make statements along the lines of, “My dissertation had lots of numbers, but I’m moving them to the footnotes or cutting them out for the book.”

What accounts for this shift? One could perhaps point to publishers’ increasing preference for concise monographs, or to the shift from local frameworks to broader continental or Atlantic paradigms. Even more fundamentally, however, I think this shift reflects changes in the kinds of questions early Americanists are choosing to ask and answer, and in the models we select to account for change over time.

To most of The Junto’s readers, it probably isn’t news that cultural history has gained strength in recent years. As we have become increasingly interested in shifting discourses and practices and in strategic modes of performance and appropriation, it seems that counting has become less essential to the presentation of our research.

I applaud cultural history’s emphasis on eclectic source bases and creative ways of reading them. Indeed, many of the works that I most admire fall into this category, and my own methods certainly align more closely with those of other recent monographs than with those of the older school of social history. But, at the same time, I wonder where the current state of counting leaves historians like me.

Quantitative analysis demands rigor, as does any other mode of inquiry. Yet, more so than other research strategies, counting requires a clear plan at the outset, as it can be difficult to shift one’s approach mid-stream. This summer, I was struck by the number of seemingly small decisions that I was forced to make in the course of my research. I used a combination of FileMaker and Excel to record my findings, and some of my questions concerned how I should construct my databases. Others confronted me at the archives. Which terms and years should I include in my sample of court records? How many court cases do I need to create a sample of sufficient size? How should I deal with incomplete records? How should I record the cases that fall between the categories that I created at the outset of my research? How I chose to record any one case may have been trivial, but my cumulative choices could have significant consequences for my overall findings.
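
To make those small decisions concrete, here is a minimal sketch in Python (the field names, categories, and cases below are invented for illustration, not drawn from my records) of one way to tally sampled cases while flagging incomplete or in-between records instead of silently dropping them:

    from collections import Counter

    # Invented case records; the fields and categories are illustrative only.
    cases = [
        {"year": 1765, "type": "debt", "outcome": "judgment for plaintiff"},
        {"year": 1765, "type": "debt", "outcome": None},                # incomplete record
        {"year": 1766, "type": "trespass/debt", "outcome": "settled"},  # falls between categories
    ]

    KNOWN_TYPES = {"debt", "trespass", "slander"}

    tallies = Counter()
    flagged = []  # cases requiring a judgment call, to be revisited before analysis

    for case in cases:
        if case["outcome"] is None or case["type"] not in KNOWN_TYPES:
            flagged.append(case)  # record the decision rather than losing it
            continue
        tallies[(case["year"], case["type"])] += 1

    print(tallies)
    print(f"{len(flagged)} of {len(cases)} cases flagged for review")

Keeping the flagged cases in their own list, rather than folding them into one category or another on the spot, makes the cumulative weight of those judgment calls visible once the time comes to analyze the totals.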

I know that I am far from the only young scholar to engage in quantitative research, and I suspect that my experience this summer was far from unique. Generous, excellent advice from mentors and colleagues was essential as I navigated my way through court records. But is this kind of informal, ad hoc training enough for historians as we engage in our own counting and as we critique the quantitative research of others?

Even as we have moved away from counting as a principal research strategy, numbers have the power to cultivate trust and admiration in readers. If we are not sufficiently transparent about the decisions associated with counting, we reinforce tendencies to see its methodology as self-evident. Particularly as history’s disciplinary boundaries (or lack thereof) continue to come under scrutiny, I would argue that it is incumbent upon us to talk more openly about the perils and possibilities of quantitative research and to offer historians more formal training in how to count.

7 responses

  1. Thanks for an interesting post. New technologies drove social science history in the 70s – my teachers used to talk about late nights with punch cards and giant computers. Yet now, when our capacity to process that data has multiplied exponentially, and we have a far vaster dataset at our disposal than anything they had, that kind of scholarship is out of fashion. Have Fogel and Engerman’s sins pushed an entire approach to the margins? I’ve noticed in historiography classes how often Time on the Cross appears as the reading for the session on cliometrics, but we wouldn’t choose a book with so many flaws for sessions on cultural history. Maybe the spatial turn, the surge of interest in historical political economy, and greater literacy in the likes of SPSS and ArcGIS will get us counting again – this time with more awareness of how ostensibly neutral categories are artifacts of a place and time – but I’m not sure how many grad programs provide training in it. Mine had a number of (mostly recovering) econometricians on the faculty, but for most of us, the stats class was something we took to avoid a second language exam.

  2. Thanks for these reflections, Andrew. I agree that Fogel and Engerman’s work casts a long shadow on quantitative research–in fact, another Juntoist had suggested I mention them in this post. From informal surveying of colleagues in the US, my impression is that few history grad programs offer training in statistics. When they do, it tends to be through other social science departments rather than through in-depth conversations about which statistical methods are appropriate for the oftentimes spotty data sets that are available to historians of the early modern period and nineteenth century.

    • Sara, it’s great that you’re bringing up these questions. I went through my coursework and qualifying exams without so much as having to think about learning statistics. When I became interested in it myself, I learned a few things. First, my department had once taught stats but, sometime in the eighties, grad students taking the cultural turn lost interest and the class was discontinued. Second, having already gone deeply into my dissertation work, it was a major challenge to pick up quantitative skills that allow for more than just counting, i.e., for mathematically rigorous analysis. After completing my dissertation I did take a grad-level stats class in a sociology department and found it incredibly interesting, but so far I have not found a way to make use of it in my current research. In any event, I wonder whether an article with a regression table would even be readable to most historians today.

      • Let me echo Andrew and Ariel – it’s great to see quant getting thrown back into the mix. Kudos for a great post.

        As someone with formal quantitative training I think it’s important to note that cliometric methods are a means to an end. History is fundamentally about people, not numbers. One of the reasons why Quant fell out of favor in the 1990s was an increasing focus on minutiae without readily apparent use for the big picture.

        One of the key problems of regressions, for example, is that they are used to demonstrate statistically significant evidence of correlation between an independent and a dependent variable. However, correlation is not causation. And, if historians are interested in any one thing, it’s “why.” Regressions may get us close to that answer, but they are not an answer in and of themselves (see the sketch at the end of this comment). Plus, when we deal with individuals, we’re all too often dealing with an “N” of 1.

        Thus, I think you’re spot on when you say that the use (and disuse) of quantitative methods comes from the questions that historians ask of the past. But, it’s also on this point that I’d like to press your analysis a bit:

        Counting is great – but why count in the first place? What historical questions will counting legal outcomes/probate records/musket contracts from the War of 1812 answer?

        For answering this question I think that Andrew’s point about quantitative analysis for historical political economy is particularly salient. It’s one thing to write about political debates over economic policies, and another thing entirely to show the very real effects of the implementation of those policies. Quant is thus one way to answer the “so what” question.

        For my own work I’ve self-consciously tried not to be the guy who just “counts muskets” (and gunpowder, cannon, uniforms, etc…), but rather someone who uses quantitative analysis to make a larger argument about the importance of politics for economic change in early America.
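
        To make the correlation-versus-causation point concrete, here is a minimal sketch in Python with made-up numbers (not data from anyone’s archive, purely an illustration): a hidden confounder drives two series, the regression reports a “significant” slope, and yet neither series causes the other.

            import numpy as np
            from scipy import stats

            rng = np.random.default_rng(0)

            # Made-up data: a hidden confounder drives both series, so x and y
            # move together even though neither causes the other.
            confounder = rng.normal(size=200)
            x = confounder + rng.normal(scale=0.5, size=200)
            y = confounder + rng.normal(scale=0.5, size=200)

            result = stats.linregress(x, y)
            print(f"slope = {result.slope:.2f}, p-value = {result.pvalue:.4f}")
            # The slope comes back "statistically significant," yet x does not
            # cause y; both simply track the confounder.

        The regression faithfully reports the association; it is the historian, not the model, who has to supply the “why.”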

        • “another thing entirely to show the very real effects of the implementation of those policies”

          Implementation denotes such a variety of shifts across time and space that quantitative analysis seems almost inevitable. Hopefully the “analysis” is a well-crafted tale.

        • Thanks, Ariel and Andrew.

          Andrew, I agree with you that we shouldn’t count simply for the sake of counting. In response to the question of “why count?,” I’d make two additional points. First, since, as you note, history is “about people,” quantitative analysis offers one way to describe the characteristics of the actors whom we’re studying.

          Second, I think that quantitative analysis is particularly useful when we are already making implicitly quantitative claims to justify the significance of our research–for instance, when we are arguing that the actors whom we are studying formed a sizable group, or that the phenomenon we are tracking appeared with some regularity. Quantitative research helps us to validate and support these kinds of significance claims. In my own research, I’ve found that my offhand guesstimates about frequency are not necessarily accurate once I’ve sifted through large amounts of data. Counting helps me to fact-check my impressions. (Of course, I’d quickly add that a research topic doesn’t need to concern large groups or a frequently occurring phenomenon in order to be significant.)

  3. Pingback: Guest Post: The Decline of Barbers? Or, the Risks and Rewards of Quantitative Analysis « The Junto
