Last week, an anonymous Ph.D. student published a Guardian op-ed under the headline “I’m a serious academic, not a professional Instagrammer.” Among other complaints, the author (a laboratory scientist) condemned the practice of livetweeting academic conferences. Livetweeters care less about disseminating new knowledge, Anonymous wrote, than about making self-promotional displays: Look at me taking part in this event.
I hate to admit it, but the author may have a point. When I shared the article, one of my friends, an anthropologist, observed that she finds livetweeting “baffling” because she would rather listen, and be listened to, than be distracted during a conference talk.

Katrina Gulliver, an influential advocate of Twitter use by historians, told me (via, yes, Twitter) that she no longer approves of conference livetweeting either. “Staring at screens is uncollegial,” she argued; it interferes with face-to-face discussions. The value of the information passed along is dubious too, because “tweets present (or misrepresent) work in [a] disconnected, out of context way.” Bradley Proctor told me he has had one of his talks misrepresented by a livetweeter, a particularly sensitive issue for someone who researches Reconstruction-era racial violence.
These are important concerns. It seems to me that conference livetweeters, yours truly included, need to get better at articulating explicit objectives and boundaries if we’re going to keep taking these risks. So what do people say about the way they use Twitter at conferences?