Open Access, Open Secrets: Peer Review and Alternative Scholarly Production

[Draft of a talk for Open Access Week to a mixed university and community audience. It’s longish and much of it may be familiar to those in the field, but comments are welcome. Slides are not included here.]

Let me preface this talk with two apologies. First, to the organizers for my admittedly fuzzy definition of “open access.” I take that term very loosely: not simply the condition of open distribution for familiar scholarly forms, like freely available journals, but “open access” as the open distribution of all kinds of scholarly and media forms, from tweets to books to software to entire networks. Second, apologies for my complete failure to talk about “open access” without straying into topics like institutional evaluation and peer review. Thinking about open access, especially in a research university context, is like pulling a string on a knitted sweater: pretty soon the whole thing is going to unravel. So now you’ve either been warned or prepared to follow the winding thread of what follows: “Open Access, Open Secrets: Peer Review and Alternative Scholarly Production.”

For scholarship in general, but especially in the humanities, what open access does is shine a bright light on why access was ever closed. Open access has helped us access our open secrets, particularly in the context of the production, dissemination, and review of scholarly work. It opens questions about journals and copyright and the business of publishing and consuming scholarship. And it opens questions about peer review—the remaining stalwart in any defense of a closed publishing model in the humanities, or so it seems to me. (See Smith for a quick overview of the economics of open access.) Sure, there are other defenses, but none seem to me as tough to address. Or so deceptively easy to solve, according to some in what’s called the digital humanities.

Today I want to reflect on open access and scholarly production in the humanities: its traditional forms as well as emerging alternatives. So what are we talking about when we talk about scholarly production? First, we’re already talking about genre, and in the humanities that means the monograph, the edited collection, the research essay, the review. These things were in large part shaped by their production context, rather than exclusively by the intellectual distinctions of those forms. That production context includes the presses and libraries and intellectual property laws as well as the scholar. Usually a lone scholar, whose solitude is a historical artifact preserved mainly, if not exclusively, by a regime of tenure and promotion (or “T&P,” as it’s commonly called). So we’re also talking about evaluation, starting with a regime of T&P which is situated in the research university but actually outsources its primary work (Ramsay). T&P standards look to the oracle of the publisher or university press for answers to that crucial question: is this person any good at what they individually do?

There are familiar problems when scholarship is the measure of job performance. Obviously, the whole idea of tenure tries to deal with this. Tenured, one has the freedom to pursue research whose merits will be measured solely by its intellectual contributions to the field. The field weighs, the university pays. But when such measurement happens outside the control of the university, a great deal of pressure naturally falls on its initial tenure decision. So early on, scholars are writing to impress, working to achieve promotional standards which have been in place almost as long as the research university itself.

Those standards are changing. Slowly and unevenly. For example, in the last few years the Modern Language Association—which recommends standards for English departments, among others—convened a “Task Force on Evaluating Scholarship for Tenure and Promotion” to explore alternative standards in light of two related facts: 1) the academic publishing crisis, and 2) the heavy demands on junior scholars to publish. And so they recommend things like crediting more publications in article form instead of the irritable reaching after monographs. Though not the focus of the MLA’s report, there is a third and perhaps more interesting dimension to the problem: the production and evaluation of alternative kinds of scholarship afforded by digitization and the world wide web. (Elsewhere, the MLA has produced “Guidelines for Evaluating Work with Digital Media.”) For example, making digital editions of the textual materials which are our cultural inheritance. Compiling databases to expose the larger patterns of datasets, or textual and linguistic corpora and the discourses they embody. Writing software to sort and analyze digitized texts for insights into language, literary tropes, genres, bibliographic differences, etc. Building visualization tools, or developing geographic information systems. Defining the codes and technical standards for research in any of the above.
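
(To make the software end of that list concrete, here is a minimal, purely illustrative sketch of what “sorting and analyzing digitized texts” can look like at its most basic: ranking word frequencies across a folder of plain-text editions. The folder name and files are assumptions for the sake of the example, not any particular project; the code uses only Python’s standard library.)

    # Illustrative sketch: rank word frequencies across a small corpus of
    # digitized plain-text editions, assumed to live in a folder "corpus/".
    import re
    from collections import Counter
    from pathlib import Path

    def tokenize(text):
        """Lowercase a text and split it into alphabetic word tokens."""
        return re.findall(r"[a-z]+", text.lower())

    def corpus_frequencies(folder="corpus"):
        """Tally word frequencies across every .txt file in the folder."""
        counts = Counter()
        for path in Path(folder).glob("*.txt"):
            counts.update(tokenize(path.read_text(encoding="utf-8")))
        return counts

    if __name__ == "__main__":
        # Print the twenty most common words and their counts.
        for word, count in corpus_frequencies().most_common(20):
            print(word, count)

Even something this simple raises the question of who is equipped to read and judge it.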

There are obvious and immediate challenges to evaluating such kinds of scholarship. The first relates to the material itself. For instance, does writing code for scholarship count as scholarly writing? How might you review someone’s XML for its intellectual or disciplinary integrity? The second problem relates to its production: such scholarship is rarely conducted in a vacuum, but frequently requires what we might call “extra-disciplinary” collaboration, which is to say collaboration that reaches beyond the traditional academic departments to project managers, software designers, and programmers whose intellectual contributions cannot be overlooked (Nowviskie). Of course, scholarship never was done in a vacuum, but drew on a network of librarians, publishers, reviewers, scholars, and readers who, for various reasons, all tacitly acceded to the accreditation regime of the single author. But digital scholarship pries that secret completely open and makes it a problem, if not a scandal. The old social contracts of scholarly publishing are rapidly expiring (Cohen, “Social Contract”).

Here at my university, we are trying to pay attention. But such a big ship can be slow to turn. Let me open a few more secrets. After coming on board as both a professor of British Victorian Literature and a digital humanist, I got some senior advice about my future projects: if any of them looked more bookish than others, start there. Now, that advice giver is not biased; they are committed to the vital role that alternative or digital projects will play in ongoing scholarship, and they know such work has an improving chance in English department evaluation committees. But they also know that’s just the first rung on a ladder escalating through various college and university committees, which themselves ultimately make a recommendation to the provost. And there’s a rumor going around, which though surely untrue is still kind of funny, that it doesn’t count as research unless you can kill a rat with it.

Am I fired yet? This is not an exposé. These are our open secrets. This is not to point fingers, either. The ancien régime of scholarly evaluation is a castle of depersonalized bureaucracy. Its artifacts are scattered everywhere in academia. For example, consider this official template for a CV to be submitted with one’s T&P file, which I have to update each year. The template gets changed every so often, but the genre of CV fundamentally hasn’t since its invention in the late nineteenth century as “a format designed to highlight analog achievements” (Scheinfeldt). In using it to record new forms of scholarly production, we might be, as Tom Scheinfeldt suggests, pouring new wine into old skins.

Let’s take the example of today’s talk, and figure out under which heading it might fit. I now have something to add in this section, for “Non-Refereed Presentations at Conferences” or “… at Symposia.” That happens to be the very last subsection in the entire category of “Presentations,” but hey, I’m having fun. But maybe I should cash this in for more institutional credit. Were I to cook this up into an article submitted to a peer-reviewed journal, it would go way up here in “Refereed Journal Articles” at the very top of the list of “Scholarly Activities.” If I published it with open access, say by putting it into the FSU D-Scholarship institutional repository—which, we might note, currently has very little humanities content—it would go … where would it go … maybe “Non-Refereed Journal Articles” or maybe further down “Non-Refereed Reports”? Would it fit anywhere? If I published it on my own blog, it would go … where, in “Original Creative Works” under magazines or novels? Even I’m not that high on my own writing. Maybe it would go back up in “Non-Refereed” something or other. Or maybe down in the “Service” section? Is an open-access blog “Service to the Community”? That is the penultimate section of the entire CV template, just before “consultation,” as if it marked the dark threshold to the private sector. What if I tweeted various parts of this presentation to my network of scholarly peers, for further reflection, comment, and distribution? Well … if I did, like messing with some photograph of the future, all this would never have happened.

In fact I have been [read: will be] tweeting this as I go, as a real-time performance of its paradoxical institutional invisibility. I’m using the hashtag [#oaw2010], distributing key points publicly and also to my network of “followers,” about 80 scholars, digital humanists, librarians, and students, who are themselves exponentially connected to a broader network of participants. Otherwise known as “tweeps.” Those of you yet to join the Twitterati might be thinking: “tweeps”? Isn’t that an unkind thing to say about someone? Like everyone else, I was first confused by and skeptical about Twitter. I got an account early (@pfyfe), but only started using it when, at the MLA’s annual convention, a colleague argued for the potential of Twitter as a tool for “lightweight scholarly collaboration” (Jason Jones, presentation at “Links and Kinks in the Chain: Collaboration in the Digital Humanities,” 2009). By now, I see its gravity is far stronger than that. My peers on Twitter share news and resources which map out a rapidly evolving professional and intellectual landscape. My own tweets, with perhaps a few excusable exceptions, are done in the same spirit. It is advancing the work of the discipline. It is publishing, in non-linear installments of 140 characters at a time. I would go so far as to call Twitter one of the most active and prolific open-access publishers in existence.

But it doesn’t count. Should it count? How on earth would you count it? Could you imagine this CV template if it recorded tweets? Indexed who retweeted my tweets? Captured the dissemination of tweets and retweets into automated feeds or plain old citations? Have you tried to “cite” a tweet yet? The Library of Congress is betting you might and is now indexing them all. If my tweets are archived in the Library of Congress, is that a publication? What if a Twitter account from a university press retweets me: is that a peer-reviewed publication? Or maybe what some citation-counting software refers to as a “usage event”? Does that even mean reading? Does it matter? Getting away from Twitter, what about blog entries? Online working groups and scholarly collaboratives? Exhibitions of digital objects assembled and presented within a collaborative framework like NINES? How does all this count? Several academics have proposed ways of “making digital scholarship count” (Kelly; Davidson 1, 2), or measuring the metrics of open access scholarship (Mangiafico), and suggest various guidelines and distinctions for the purpose. But those distinctions proliferate and blur as rapidly as digital work and access policies change. What might be called the “bibliometrics” of digital scholarship is a hell I would not wish on any administrator.
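
(Just to show how thin such counting would actually be, here is a hypothetical sketch of what tallying “usage events” amounts to. The event records below are invented for illustration; real citation-counting and altmetrics services are far messier, but the underlying arithmetic is about this simple.)

    # Hypothetical sketch: tally "usage events" for a few scholarly items.
    # The records below are made up; none of this measures actual reading.
    from collections import Counter

    EVENTS = [
        {"item": "blog post", "kind": "retweet"},
        {"item": "blog post", "kind": "retweet"},
        {"item": "blog post", "kind": "comment"},
        {"item": "journal article", "kind": "download"},
        {"item": "journal article", "kind": "citation"},
    ]

    def usage_counts(events):
        """Count events per (item, kind) pair: a crude bibliometric."""
        return Counter((event["item"], event["kind"]) for event in events)

    if __name__ == "__main__":
        for (item, kind), count in usage_counts(EVENTS).most_common():
            print(item, kind, count)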

Perhaps the point is not to count at all. Again, universities never really have. University presses have, and research universities count their sums instead. (Or don’t count at all, according to Ramsay.) Perhaps it’s time for the university to outsource its evaluation in an entirely different way. If scholarship is increasingly happening in non-traditional venues with non-traditional collaborators outside the academy’s purview, perhaps the evaluation of scholarship needs to shift toward that same space. Peer review has traditionally worked as a gatekeeper. But in the fenceless wilds of the world wide web, having someone defend a gate is practically absurd (Kelly). Scholarly publishing is being outflanked. So what to do? Build more fences, or give up the gateway? From an open access perspective, the answer is obvious. In terms of peer review, some have argued to “crowdsource” it.

Such has been the recent proposal of a number of notable figures within the digital humanities, including Cathy Davidson of the Humanities, Arts, Science, and Technology Advanced Collaboratory (or HASTAC) at Duke University. Davidson has even provocatively and successfully crowdsourced the syllabi and grading in her undergraduate classes, relinquishing peer review by a professorial gatekeeper to review by one’s own roving peers. Some of you may know of a recent experiment by the journal Shakespeare Quarterly, edited by Katherine Rowe, in which they handed over the “peer review” of four articles to the public—which is to say, whoever wanted to trouble themselves with four articles on Shakespeare and digital media. Over 80 different commenters participated, some invited and others uninvited but still welcomed. You can browse the project yourself online; they use publishing software called “CommentPress,” which does a nice job of relating comments by paragraph and page. Ultimately, the editors found that quality was preserved; many of the comments influenced the final shape of the articles.

Now, this probably meets with a great big “duh” from certain communities in the sciences who, with the wonderful example of arXiv.org, have pioneered how the model of “publish first, filter later” might work in academia. But “filter first, then publish” is so deeply ingrained in the institutional and everyday lives of humanities scholars that such an idea can seem quite radical. It seems to invert the economics of prestige and publishing scarcity by which the discipline of English, at least, has historically operated. Is it really going to work?

Kathleen Fitzpatrick of Pomona College recently wrote a book on crowdsourcing peer review, Planned Obsolescence, the manuscript of which was actually online and open for public comment while she was writing it. In contrast to the Shakespeare Quarterly project, Fitzpatrick takes the concept even further by suggesting that peer review occur not before publication, but either while in process or even afterwards. Elsewhere she calls this “peer-to-peer” review. We should note that Fitzpatrick’s “book” comes out in physical book form this fall from NYU Press. Publishing online is preaching to the choir, Fitzpatrick claims; she’s also trying to reach more traditional audiences in traditional settings to make the case for reform. Fitzpatrick wants scholars to imagine some kind of post-publication review system in any form, and makes strong appeals to our better natures:

“given that we in the […] humanities excel at both uncovering the networked relationships among texts and at interpreting and articulating what those relationships mean, couldn’t we bring those skills to bear on creating a more productive form of post-publication review that serves to richly and carefully describe the ongoing impact that a scholar’s work is having, regardless of the venue and type of its publication?” (“Open Access”)

Yes we can. The way to do it is to consider not just the credibility of the scholarship itself, but also the creditability of specific networks, and of publishing platforms far more open than the ones we’re used to. That is hard, because those networks are only just coming into focus and are constantly evolving. But they’re out there and they’re working right now.

I will point out the obvious again: open access not only shines a hard light on review, but also on peers. Who are our peers? Who is the crowd which is the source of what Clay Shirky calls the web’s cognitive surplus? On the one hand, our scholarly peers seem fewer and fewer, to judge from library acquisitions budgets and the hiring crisis in the academy. On the other, our peers are potentially greater and greater, well beyond the walls of the ivory tower which has been radically opened by the world wide web.

For a growing group of scholars, “review” itself misses the point. A closed system makes no sense in open terms. It’s like trying to unfold a box and then still keep stuff in it. Instead of review, peers are the point. Or, more precisely, peer networks. There are some really interesting things happening at the intersection of scholarly collaboration and open access publishing. Here we move even a little farther from experiments like Shakespeare Quarterly or Fitzpatrick’s Planned Obsolescence. We move to a model of scholarly production different enough to have recently made the front page of the New York Times’ national edition. That article featured a picture of Dan Cohen, a historian at George Mason University and a pioneer in digital humanities, particularly in the development of Zotero, the free and open-source citation software; Omeka, a free and open-source collection and exhibition platform; and his more recent and aptly named project (with Scheinfeldt), “Hacking the Academy.”

As you’re probably aware, the word “hacking” these days means to adapt, manipulate, and make productive use of a given technology or technological context, what we might call a “platform.” So Cohen’s “Hacking the Academy” is about manipulating the platform of academic production, or in other words about refitting how we write, peer review, and publish to the possibilities of digitization and the internet. But if the title also sounds like a barbarian assault on the academy’s ivory tower, that’s not exactly wrong. The crowd in crowdsource might be a rough bunch. It might be a mob.

Cohen invited that mob to help create “a book crowd-sourced in one week,” as the subtitle goes. The project solicited all kinds of written and media contributions: blog posts, videos, tweets, and more by the hundreds. And the project has since been picked up by Digital Culture Books, an open-access imprint of the University of Michigan Press and the UofM Library. Hacking the Academy, as even its title font suggests, carries the energies of a manifesto. Consider its opening paragraph:

“Can an algorithm edit a journal? Can a library exist without books? Can students build and manage their own learning management platforms? Can a conference be held without a program? Can Twitter replace a scholarly society?”

Cohen’s “flash book,” as it were, is an early and fascinating attempt to formalize an answer to these questions, developing alongside related answers in the form of Zotero, Omeka, the now-international “unconferences” on technology and the humanities called THATCamps, and the recently released tool “Anthologize,” which allows users to aggregate their own “books” from automated feeds online. “Anthologize,” like Hacking the Academy, was created on the fly, as part of a Mellon-funded project this summer called “One Week, One Tool.” It raises lots of interesting questions, not just about producing open access work, but about editing open access collections, about making arguments through the very networked relationships that Fitzpatrick calls on us to analyze.
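
(For a rough sense of what that kind of aggregation involves under the hood, here is a minimal sketch of pulling posts from a few feeds into one plain-text “anthology.” This is not Anthologize’s actual code: it assumes the third-party feedparser library, and the feed URLs are placeholders.)

    # Minimal sketch: gather recent posts from a few RSS/Atom feeds into a
    # single plain-text "anthology." The feed URLs below are placeholders.
    import feedparser  # third-party: pip install feedparser

    FEEDS = [
        "https://example.org/blog/feed",
        "https://example.net/scholarship/rss",
    ]

    def anthologize(feed_urls, per_feed=5):
        """Collect titles, links, and summaries from each feed."""
        sections = []
        for url in feed_urls:
            feed = feedparser.parse(url)
            for entry in feed.entries[:per_feed]:
                title = entry.get("title", "Untitled")
                link = entry.get("link", "")
                summary = entry.get("summary", "")
                sections.append(title + "\n" + link + "\n\n" + summary + "\n")
        return "\n---\n\n".join(sections)

    if __name__ == "__main__":
        print(anthologize(FEEDS))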

In all these projects there’s a revolutionary rhetoric at work, which is unmissable in David Parry’s contribution to Hacking the Academy, titled “Burn the Boats/Books.” Parry is an insightful and determined provocateur, and he knows we will flinch at the apostasy implied in his title. But his point is simple: there is no way back. Parry claims we must “embrace new modes of scholarships enabled by web-based communication, rather than attempting to port old models into the new register.” Naturally, Parry is a staunch advocate of open access publishing, explaining with his characteristic edge: “If you publish in a journal which charges for access, you are not published, you are private-ed.”

Private-ed as in commercial, and private ed as in education. These scholars agree that higher education, particularly in institutions at least nominally supported by public funds (come on, Florida legislature!), owes its publications not to the publishers but to the public itself. “There is an ethical imperative to share information,” says Cohen. But there are selfish motives too. They even think this very unguarded publicity gives you a better chance at being credentialed, inside the university as well as out. As Cohen suggests, “your work can be discovered much more easily by other scholars (and the general public), can be fully indexed by search engines, and can be easily linked to from other websites and social media” (“Open Access”). In other words, people might actually read it.

What then? What happens once we open access, or open networks of scholarly communication? The genie who escapes the bottle ain’t going back in. But perhaps the loss of his wishful thinking means the gain of a more diffused genius. Steven Berlin Johnson, who visited FSU as part of the “Origins” lecture series in the spring, makes a provocative suggestion about this. In a talk at Columbia University called “The Glass Box and the Commonplace Book,” Johnson outlined what he considered two paths for the future of text. Basically: text in “glass boxes,” or closed, non-reactive forms for looking at, and text in “commonplace books,” or open, dynamic forms for interacting with. Though we can relate these metaphors to scholarly publishing, Johnson is more interested in the systemic intelligence of the web itself. He proposes a notion of “textual productivity” for how the web can semantically process and network texts through its cascading algorithms, completely beyond the scope of any given writer or designer. Familiar examples of this include the ads which often dynamically populate a web page based on its contents, sometimes to comic effect. Or the autocomplete mind-games of search engines like Google which suggest questions as you type them. (Try, for example, “how do we fix peer review?”) Or the open-source plug-ins for WordPress which relate web pages and links and tweets to generate a networked conversation on the fly (for example, this page is referenced in the comments of a related blog post on open access). Google itself might be “in effect a gigantic peer review” (The Aporetic). As Johnson says, “By creating fluid networks of words, by creating those digital-age commonplaces, we increase the textual productivity of the system. […This] may be the single most important fact about the Web’s growth over the past fifteen years.” We can extrapolate Johnson’s point: if the internet is teaching us to think, then by publishing openly and online we are teaching the internet to think.
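
(To give a very rough sense of the mechanical “reading” behind such features, here is a toy sketch that relates texts by nothing more than their overlapping vocabulary. It is not how any actual ad server or WordPress plug-in works, and the sample sentences are invented for the example.)

    # Toy sketch of algorithmic relatedness: score texts by shared vocabulary
    # (Jaccard similarity), the kind of signal related-post widgets build on.
    import re

    def vocabulary(text):
        """Lowercased word tokens longer than three letters."""
        return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 3}

    def similarity(a, b):
        """Jaccard similarity between two texts' vocabularies (0 to 1)."""
        va, vb = vocabulary(a), vocabulary(b)
        return len(va & vb) / len(va | vb) if (va | vb) else 0.0

    if __name__ == "__main__":
        post = "Open access shines a light on peer review and scholarly publishing."
        candidates = [
            "Crowdsourcing peer review for humanities journals.",
            "A recipe for sourdough bread and a long fermentation.",
        ]
        # Rank candidate pages by overlap with the post, most related first.
        for text in sorted(candidates, key=lambda c: similarity(post, c), reverse=True):
            print(round(similarity(post, text), 2), text)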

If that seems a little heady, consider this more down-to-earth example. Johnson gave that talk at Columbia University on the night of Thursday, April 22. The next morning, Friday, April 23, he posted the transcript to his blog using an iPad on his flight to Tallahassee, and then sent an update message to Twitter. By noon, Twitter notified me of something interesting to read, “The Glass Box and the Commonplace Book,” which had already been collected by my RSS feed aggregator. At 3:00 pm in a seminar room in the Williams building, I had a face-to-face discussion with Johnson about his post, first aired 18 hours before in a lecture hall on the Upper West Side. Who says Tallahassee is not cosmopolitan?

Here’s another example. Last week I posted a draft of this talk to my blog. In fact, at around 1:15 pm on Monday. After posting it, I looked it over on the live site, scanning a final time for errors or typos or missing links. Ten minutes later, I went to Twitter to make an announcement about it. But someone had already beaten me to it. Bethany Nowviskie, a former colleague and now director of digital scholarship at the University of Virginia, had already read and enthusiastically broadcast the post to her 1,250 followers. Within ten minutes. A few of Bethany’s followers read the draft and retweeted her announcement in turn. Including the Executive Director of the Modern Language Association, Rosemary Feal. In the week since, hundreds of people have read the draft. In some ways, my actual talk here is the shadow of its virtual reality.

This is scholarship at warp speed, especially compared with its conventional forms, or with publishing in a “glass box.” Of course, the compression of time and space isn’t necessarily the point. Rather, it is the connections facilitated by the open network, and the cascading productivity of the text and media and people which constellate it.

Responses to “Open Access, Open Secrets: Peer Review and Alternative Scholarly Production”

  1. FIRST FIX ACCESS TO PEER-REVIEWED RESEARCH…

    See: bit.ly/peer-review-reform

  2. mike O'Malley

    Great to see like-minded people out there. But we have to actually post the work, not just talk about posting the work. I often think it’s up to people like me, who have tenure and a relatively secure place in the system, to open up alternatives. But maybe not–maybe that depends too much on preserving the existing academic structure.
