Last week, an anonymous Ph.D. student published a Guardian op-ed under the headline “I’m a serious academic, not a professional Instagrammer.” Among other complaints, the author (a laboratory scientist) condemned the practice of livetweeting academic conferences. Livetweeters care less about disseminating new knowledge, Anonymous wrote, than about making self-promotional displays: Look at me taking part in this event.
I hate to admit it, but the author may have a point. When I shared the article, one of my friends, an anthropologist, observed that she finds livetweeting “baffling” because she would rather listen—and be listened to—than be distracted during a conference talk. Katrina Gulliver, an influential advocate of Twitter use by historians, told me (via, yes, Twitter) that she no longer approves of conference livetweeting either. “Staring at screens is uncollegial,” she argued; it interferes with face-to-face discussions, and the value of the information passed along is dubious too, because “tweets present (or misrepresent) work in [a] disconnected, out of context way.” Bradley Proctor told me he has had one of his talks misrepresented by a livetweeter—a particularly sensitive issue for someone who researches Reconstruction-era racial violence.
Surely these are important concerns. It seems to me that conference livetweeters—yours truly included—need to get better at articulating explicit objectives and boundaries if we’re going to take these risks. So what do people say about the way they use Twitter at conferences?
When I asked around, a lot of historians (and some scholars in other fields) told me they livetweet conferences for what I see as the obvious reason: to share information with, or open conversations to, people who can’t be in the room. Most were more specific, though. They argued that livetweeting opens doors for less privileged members of the academy.
Stephanie McKellop said she livetweets conferences to help other scholars overcome exclusion on the basis of funding, disability, or conflicting obligations—in other words, to make conferences feel like less of an “elitist scheme” that favors a few over others. Megan Brett said she livetweets as a way of “paying it forward” after other people livetweeted conferences she couldn’t afford to attend.
There are other forms of inclusion as well. Carla Cevasco observed that livetweeting can help shy or junior academics with networking. Rachel Hooper, a graduate student in nineteenth-century art history, added that it can be useful for promoting conversation across disciplinary lines: “I like to share what historians are saying at their conferences with art historians and vice versa.”
Other scholars answered my question in terms of excitement: they livetweet conferences as a way to express public enthusiasm for interesting new work. “It’s not unlike leaving a panel and saying to a colleague ‘listen to the great thing I just heard,'” said Jacob Remes. (He added that it “makes me sad” that nobody has tweeted one of his talks yet.) Perhaps this supports Anonymous’s point, but most of the people who talked to me saw this as a matter of promoting other people rather than themselves. Natalia Mehlman Petrzela called livetweeting a way to “celebrate” good presentations and compliment presenters. Julia Gaffield wrote, “I see it as [saying] ‘look out for this person’s work on this topic.'”
A surprising number of people, considering my friend’s concerns about distraction, also told me that livetweeting enhances their own comprehension of the conference talks they attend. Alexandra Montgomery said it is a form of “public note-taking” that also improves her writing, and John Garrison Marks told me it helps him digest a presentation by focusing on its constituent parts. Garrett Wright wrote that livetweeting keeps his mind from wandering. The Junto’s own Rachel Herrmann made a similar point. Alicia Pearson wrote, “Usually it’s me going ‘Must remember this person’s name’ or ‘I want to find that archive’… keeps me awake/paying attention.”
I can add that my experience is similar. The peculiar intensity of livetweeting, with its pressure to capture a presentation accurately in real time, often in conversation with other Twitter users, seems to be a useful mental challenge. On the other hand, the risk of distraction is real. The digital archivist and public historian Larry Cebula commented that “if the presenter reads out loud,” boring the audience, “I can pretend to be live-tweeting while I am really on Facebook.”
Almost all of the Twitter users who responded to my question were ultimately in favor of livetweeting. All of them, however, also seemed happy to impose boundaries on themselves. Some, like Liz Covart, mentioned that their tweets are distillations of more extensive notes, sometimes posted after the talk is over. Sophie Cooper advocated having just one or two “designated” tweeters per room to make sure panelists don’t face a crowd of people all staring down at smartphones. And Megan Brett wrote that she tries to capture themes, reflections, and directions for further inquiry rather than take comprehensive notes. Finally, everyone who mentioned it, starting with what I think was the first response (from L.D. Burnett), seemed to agree that presenters should have the option of declaring their presentations off the record as far as Twitter is concerned.
Fundamentally, I think, the scholars who responded to my query—and thank you all!—seemed to think of livetweeting as a matter of fluid conversation, an extension of the sorts of things historians and other academics do all the time in person. There was no one idea of what good livetweeting is except that it should promote dialogue and exploration. The Guardian’s anonymous editorialist thus describes a kind of solipsism and self-aggrandizement that I simply don’t see in my circles.
On the other hand, I do not think this solves the problems identified by Katrina Gulliver. Conference livetweeting is socially awkward at the best of times, and it involves real risks. Its value depends on the good judgment of those who engage in it; they need to know how to read a room as well as how to summarize someone else’s work accurately. It also depends to an uncomfortable degree on the good faith of the far-flung audience. We rely on them to understand that livetweeting is ad hoc, fallible, and fragmentary—a series of impressions that could be misleading in unpredictable ways. Thus, we should expect academic livetweeting to change as the overall culture of Twitter changes. And we probably need to be prepared to adopt defensive measures as Twitter becomes an increasingly charged public space.
This is a genuinely fascinating post, thanks for pulling it together. As someone who used to live-tweet rather more than I do now (in large part because of some negative feedback I received from someone whose presentation I put out there), I think you’re right to say that despite the good intentions of many who do it, there is a bigger question here about the spirit in which it is done. We’re closer than we used to be to a general rule of live-tweeting that takes seriously the opportunity for access it provides to those who can’t make certain conferences. I myself have benefitted from multiple tweeters at various American Revolution events over the last few years that I personally couldn’t be at. But I think it’s clear we need to state outright that live-tweeting doesn’t capture everything the presenter wants to say, and that there’s no substitute for being in the room and getting to hear them say it.
As an addendum, I will say that I think the presenter bears rather more responsibility for how they are interpreted than some like to admit. I think we’ve all been in a situation where someone in the audience took a point we were trying to make the wrong way, and while in the Q&A we have an immediate right of reply, it strikes me that the same principle applies to live-tweeting. You ought to be as clear as you can in your remarks, so there is little room to misinterpret you, but everyone ought to have the right-of-reply to tweets as much as questions. I think this, above all, is what worries the non-twitterstorians out there.
I have nothing against livetweeting in principle (and can’t say I’m shocked that people active on Twitter like it…). But most of the time I see it in action, it leaves a bad taste in my mouth—a lot of passive-aggressive cheap shots. It’s not particularly fair, for example, to call someone a racist on Twitter while sitting quietly during their presentation.
Excellent and thought-provoking post. Thank you, Jonathan, for raising the topic here (and on Twitter, of course!).
Beyond conferences, I’d note that tweeting research finds from the archive in real time—as well as special events and talks—has become a social media mainstay for many historical societies, museums, and libraries. For public history professionals, that means maintaining a fine balance between supporting scholarship-in-progress and highlighting diverse corners of the collections. I’d love to learn from Junto readers what “works” when public historians and institutions live-tweet, and what doesn’t—yet.
Smartphones have done a lot for us, but improving etiquette is not one of them. Tweeting during a conference is the academic equivalent of “that guy” who texts in the theater during a movie. And don’t play the “I’m helping others who couldn’t be here” card. If that’s the case, then don’t tell students in your classroom to put away their phones while you’re teaching. After all, they’re just helping others who couldn’t be in the class.
Uh, this may come as a shock to you, but not everyone fully endorses telling students to put away their phones. http://www.thetattooedprof.com/archives/339 http://www.thetattooedprof.com/archives/609
Are we to make an announcement before a conference presentation, “I ban tweeting of my presentation”? A serious question if one is not interested in having research-in-progress sent into cyberspace. Conferences should be a safe space to make mistakes and improve work, not a place for final reviews.
Surely Twitter, and actually everywhere (including blogs!), should be a “safe space to make mistakes and improve work”? Perhaps the real issue here is an academic culture in which there’s a perception that everything we do should be perfect.
At past conferences, I have been on panels where we have asked the audience not to live-tweet, and that hasn’t been a problem. And when I do tweet a talk or a panel to share with people who follow the #USIH hashtag, I always make sure I have the panel’s/conference’s permission first. Sometimes that’s a blanket permission given in the program. (And FWIW panels/panelists have always been free, at any conference I’ve attended, to tell the audience that they would prefer not to have the talk tweeted. People should respect such requests.)
But if it’s not stated explicitly in the program that “tweeting is encouraged,” or some such thing, I ask before live-tweeting someone’s talk. Maybe that’s quaint and old-fashioned of me, but I know that not everybody is comfortable having a talk intended for a small audience broadcast to a much larger one. Never hurts to ask. And anyone who asks not to have their talk shared in that way should have those wishes respected.
It definitely makes sense to ask if the conference does not provide “no tweeting”/“tweet this” signs. I appreciated the presence of those signs at this year’s SHEAR, and also the panel chairs who said in their introductions that it was okay to tweet Presenter A but not Presenter B. It also depends on the tone of the conference—a THATCamp or other unconference is live-tweet central, NCPH seems to vary by panel topic, and at other conferences Twitter is thin on the ground.
As one who cannot attend most conferences due to budget and obligations, I’m glad for those who share their experiences via Twitter. However, the fact that I cannot attend the conference also means that I probably won’t be able to watch my Twitter feed for a live event, either. Therefore, I’m perfectly fine with presenters making their sessions Twitter-free so that attendees listen carefully and think through their tweets before sending them out, rather than relying on off-the-cuff remarks.
I’m really pleased to see this issue being discussed in a much more even-handed and thought-provoking way than that bit of Grauniad clickbait. I personally hate having my work live-tweeted at conferences, and have decided to ask panel chairs to ask attendees to refrain from doing so during my presentations—the reason being that I’ve seen too many extremely esoteric or distorted accounts of my work (or the work of my peers) appear on Twitter, and I just don’t see who this helps.
These peculiar, sometimes rather Delphic renditions of conference papers were not, I am sure, the result of the nefarious ambitions of deliberately mean-spirited saboteurs. Rather, they happened because the tweeters were simply not very good at Twitter. Reading these weird edits of my work, I felt like my voice had been completely garbled, and my work rather mangled. Undue prominence was given to some minor points, and really key messages were ignored or badly explained. There is a knack to accurately relaying complex, live-delivered content (sorry!) in 140 characters or fewer, which is why this very skill has become a highly lucrative professional asset for journalists and in comms departments across almost every industry and sector. And it’s a skill which, let’s face it, many of us are never going to properly develop. We’ve all seen tweeted conference presentations where a paper’s killer blow has been missed because the tweeter was too busy struggling to delete “deleterious” or spell “Chesapeake.” Who benefits from this?
There’s also a lot to be said for conferences being used (as many of us use them) as workshop spaces where new ideas can be thrashed out or new evidence presented. It’s not fair to expect academics to have mangled versions of that work shared beyond the room, with a much larger online audience about whom they know nothing, if they aren’t ready.
Thanks for the great post, Jonathan! You’ve really captured both sides of a complex and important debate. I put some of my initial responses on (where else?) Twitter.
I am both a practitioner and consumer of conference live-tweeting. While I’m sympathetic to the concern that “my ideas are too complicated to be understood in real time,” I wonder whether this means that the content is not appropriate to a spoken presentation in the first place. I’ve seen, for instance, live-tweeters use the wrong name for a main character in a speaker’s presentation when the speaker has not bothered to say the character’s first name, or mischaracterize (in good faith) the argument of a presentation whose argument is murky at best. If multiple people in the audience misunderstand your argument, that may not be the audience’s fault. Seeing people misunderstand your work on Twitter can be a way to realize that you are not, in fact, fully communicating what it is that you’re trying to say–that’s certainly happened to me, and I’ve adjusted subsequent talks accordingly.
tl;dr version: If you see yourself as a transparent vessel of knowledge, you may be disappointed in how people tweet you. If you see scholarship as a conversation and conference papers as drafts, being live-tweeted is an excellent way to see how audiences interpret your work in (nearly) real time.
Two years ago, while I was on the Omohundro Institute Council, we decided to develop “Twitterquette” guidelines for live-tweeting at conferences to try to balance all of the reasons and concerns raised here—including giving presenters an opportunity to opt out of having their papers tweeted and encouraging tweeters to respect those wishes. I *think* the guidelines have worked pretty well at OI conferences so far.