(Part 1 of a two-part series – read part 2 here.)
Anyone who has taken part in peer review is well aware of the process, as can be seen in this example from Oxford University Press’ Nucleic Acids Research: “Upon receipt, manuscripts are assessed for their suitability for publication by the Senior Executive Editors and the editorial staff. Only the manuscripts meeting the journal’s general criteria for consideration are sent out for review, saving time both for the Authors and the Referees. These manuscripts are assigned to Executive Editors who take overall responsibility for the peer-review process. Typically, a minimum of two reviews are required for each manuscript. Referees are chosen first and foremost for their expertise in the field. Referees can also be recommended by the Authors, the Editors and other Referees. Once they have agreed to review a manuscript, Referees have two weeks to submit their comments via our online manuscript tracking system. We aim to reach a decision on all manuscripts within 4 weeks of submission; hence a prompt delivery of the Referee’s reports is essential.”
In 2010, it was estimated that 50 million scholarly articles had been published in the 350 years since the first academic journal was organized. That publication, Philosophical Transactions, from the Royal Society, began in 1665, “providing a mechanism for the registration, dissemination and archiving of research, it provided the framework upon which peer review would eventually develop.” [Pictures of the first edition are available here.] Stress on the system of evaluation and publication of scientific research can be seen in the very rise of Retraction Watch, which has provided “a valuable source of information that has helped focus public attention on scientific misconduct and the process of self-correction” since 2010.

Though never perfect, the scholarly publishing system has served the advancement and application of ideas and discoveries well over the years. Even today, given the pressures of publish-or-perish and the diminishing number of tenure-track academic positions being made available in an era of austerity, scholarly publishing has thrived. Now, with the explosion of technological advances, we are seeing an amazing series of alternatives, opportunities, advances, and challenges. Three facts seem assured:
1. Scholarly Publishing Continues to Grow Rapidly. The number of active, academic/scholarly, peer-reviewed journals in 2011 has been estimated at 26,746, according to data from Ulrich’s Periodical Directory. With the advent of Open Access publishing and the World Wide Web, the numbers of journals and published peer-reviewed articles have grown dramatically. The International Association of Scientific, Technical and Medical Publishers’ report on STM Publishing in 2012 estimated that more than 1.8 million scientific and scholarly journal articles are now being published each year. The Public Library of Science (PLoS) noted in their 2013 annual report that they “published over 34,000 research articles in 2013, an increase of 33% over 2012, bringing the total number of Open Access articles published by PLoS to more than 100,000.” They estimate that they receive as many as 4,000 manuscripts each month.
Scholarly publishing expert Stevan Harnad, psychology professor at Université du Québec à Montréal, believes the major problems with scholarly publishing today are quality and reliability, not fraud. The major issues to be addressed, he believes, are “too many papers, too many journals, too few reviewers, too little time, and very variable performance by reviewers, especially in the lower-quality journals (which are the majority). The online medium has made it a bit faster and more efficient, but it has not solved the quality problem.”
This growth is creating problems and new roles for authors. Indiana University professor Cassidy Sugimoto, who recently co-authored the important book Scholarly Metrics Under the Microscope with Blaise Cronin, believes that “the need for scholars to self-market in today’s scholarly publishing system is as true as it is onerous. The work of scholars has now extended the lifecycle of a publication. It is no longer enough to conceive of an idea, request funding (if necessary), collect and analyze data, and write up the report. Now you must also curate a public online persona and use this to disseminate, market, and discuss your work. This has fabulous implications for science communication efforts; however, it also has the potential to create disparities in the scientific workforce based on non-scientific criteria.”
2. Peer Review is Essential. In February 2014, Canadian Science Publishing surveyed Canadian authors publishing in a wide spectrum of scientific and technical fields. The results showed that researchers considered peer review, along with journal reputation and fast publication, the top considerations in evaluating potential journals for their work. Much further down the list were open access, article-level metrics, and mobile access. “97% of researchers reported that peer review (e.g., ensuring methodological soundness, proper interpretation of results, etc.) is important” (p. 5). Other related findings showed the importance of a journal’s ability to reach the intended audience (93%) and discoverability (92%). Other factors still received majority support: copy editing (77%), layout and formatting (71%), and advocating on behalf of authors when their work was being misused or their rights were violated (71%).
3. Peer Review Takes Significant Time & Energy. In a 2010 survey, 48% of scientists surveyed said that their administrators encouraged them to partake in science grant review, but only 14% said their superiors knew how much time they spent reviewing. Of these respondents, 32% were expected to do this on their own time—outside of office hours—and only 7% were given protected time to conduct grant reviews. Worse yet, 74% reported that they didn’t receive any academic recognition for this activity. For article reviews, a 2008 survey by Mark Ware Consulting (commissioned by the Publishing Research Consortium (PRC)) found that “reviewers say that they took about 24 days to complete their last review, with 85% reporting that they took 30 days or less. They spent a median 5 hours (mean 9 hours) per review.” Even so, the majority of reviewers (64%) were satisfied with the current system of peer review, and “the large majority (85%) agreed with the proposition that scientific communication is greatly helped by peer review. There was a similarly high level of support (83%) for the idea that peer review provides control in scientific communication.”
What Future for Peer Review?
The need to improve or modify peer review has been discussed and theorized within the academic community for many years now. In 2013, the European Molecular Biology Organisation, BioMed Central, the Public Library of Science and the Wellcome Trust, all announced that they would give authors of rejected papers the option of making their referees’ reports available to the other journals or publishers. PLoS’s policies noted that “authors can request that papers (with referee reports, if relevant) rejected from one PLoS journal be transferred to another PLoS journal for further consideration there. We trust that reviewers for any PLoS journal are willing to have their reviews considered by the editors of another PLoS journal.”
Recent articles describe a wide variety of options: research and commentary have focused on bias, the cost of the process, the lack of proof that it is effective, and charges that it is a time-wasting, bureaucratic process or an impediment to innovation. Although critics abound, no replacement for some form of peer review has yet been implemented—and for good reason. Many of the suggested options hold problems all their own.
Gregory Northcraft and Ann Tenbrunsel (2011) have made suggestions to better guarantee competency and to make peer review a more transparent public good. First, they suggested that employing institutions clearly define reviewing as a key expectation or duty that is both recognized and rewarded in those institutions’ performance evaluations. Second, they suggested creating a public database of reviewers to increase reviewer accountability and, by making the list public, to help guard against “free riding” by ghost reviewers.
Chemist Christopher Lee (2012) has suggested a system of “selected-papers” (SP) networks where “instead of reviewing a manuscript in secret for the Editor of a journal, each reviewer simply publishes his review (typically of a paper he wishes to recommend) to his SP network subscribers. Once the SP network reviewers complete their review decisions, the authors can invite any journal editor they want to consider these reviews and initial audience size, and make a publication decision.” This, he believes, can provide significant benefits with “a new way of measuring impact, catalyze the emergence of new subfields, and accelerate discovery in existing fields, by providing each reader a fine-grained filter for high-impact.”
Răzvan Florian has suggested “a review and rating portfolio for each scientist, which would be publicly available, similar to the publication or citation portfolios of scientists, which are currently used to reward them. The system would need a mechanism for uniquely identifying scientists, which hopefully will be provided soon by the Open Researcher and Contributor ID (ORCID). Each journal or grant giving agency, once authenticated, will be able to register to the system the identity of the reviewers that helped them and, possibly, to rate the reviewer’s contribution. This information provided by the journals or the agencies, i.e., a quantity representing the extent of the reviewer’s contribution and another quantity representing the quality of the reviewer’s contribution, will be made public after a random timing.”
Journals Increasingly See a Need for Change
Although standards and principles for scholarly publishing exist in many forms, such as those of the Committee on Publication Ethics, problems and issues remain. Publishing consultant and academic Irene Hames has pointed out some of the criticisms and issues with peer review today:
- “The process is unreliable and unfair
- No clear standards exist leaving the system idiosyncratic
- The process is open to abuse and bias
- Writing for ‘approval’ can stifle innovation
- Despite technological change, the process remains slow, causing publication delays
- The process is expensive and labor intensive
- Reviewers are often overloaded and working without recognition or compensation
- The current system is basically useless at detecting fraud or misconduct”
She concludes that “peer review is also facing new challenges as large amounts of data are being generated and needing to be reviewed or viewed with research reports. New standards and workflows are needed.” (“Peer review at the beginning of the 21st century.” In: Smart P., Maisonneuve H., Polderman A., editors. EASE science editors’ handbook. 2nd ed. Cornwall: European Association of Science Editors; 2013. p. 133-6)
In 2013, GigaScience took a step in another direction when a previously posted manuscript was submitted for consideration. The reviewer not only provided commentary for his review but posted it on his blog. Soon others were commenting and directing readers to the discussion from Twitter and other social media. The result was that the journal changed its peer review guidelines, modifying its Guide to Reviewers confidentiality statement from “Any manuscript sent for peer review is a confidential document and should remain so until it is formally published” to now include: “Exceptions: As we are promoting and encouraging more transparent review and the use of pre-print servers, if the authors and reviewers consent then we do allow open discussion of the work prior to publication. Note: when there is a ‘crowd-sourced’ review, our decisions will be based on the comments from our chosen reviewers, not on the ongoing comments. But we feel that having reviewers and the community having the option to weigh on a paper pre-publication in an open and productive manner is definitely something to be encouraged.” In addition, GigaScience and others are now expanding peer review to include full data and tool review as well.
Some form of peer review is common throughout research—whether for tenure or promotion, for evaluation of grant applications, or for review of new drugs or therapies. The U.S. government itself uses peer review, as codified in law in the Daubert Standard, for trial expert witnesses and government regulations. In the U.K., peer review has not only been an essential tenet of research, but the subject of legislative investigation in recent years: the House of Commons Science & Technology Select Committee conducted an official enquiry into the efficacy of peer review in 2011, in the wake of the Andrew Wakefield vaccine/autism scandal.
Providing open access to pre-prints of research is now common (one example is arXiv.org); however, these have not gone through any process of examination or review, and for the uninformed, these articles may lead to unwarranted assumptions of ‘truth.’ Researchers no longer live in a ‘bubble,’ in a community of fellow researchers in which new information can easily be shared, proposed, accepted, or rejected without significant damage. Today research institutions send out boastful press releases upon publication of research by their members, and journals do the same. In the 2014 Haruko Obokata stem cell case, LexisNexis Academic lists 20 wire service reports extolling the important new findings in the first 24 hours after journal publication. Once the genie is out of the bottle, the results—as in the case of Andrew Wakefield and a purported link between measles vaccines and autism—can continue to damage science and individuals for many years to come.

Making research data open access is not an answer in itself either. Pre-prints posted prior to peer review are perhaps even more troubling and potentially damaging: anyone can find these on the internet today with no posted cautions or implied understanding of what the document is—a free version of a scholarly article (versus a fake)—or at what phase of the publication process the preprint was posted. Is it a final version? Pre-peer review? These are key indicators for reliance on the research contained in the document.
“The speed and pressure of the contemporary scholarly publishing system certainly creates incentives for malpractice and spaces for error,” Sugimoto believes. “Peer review still has a gatekeeping role to play and, if reimagined, can certainly reduce the amount of false published data. However, we may also want to rethink the metrics that drive this system. There has been a large degree of goal displacement in scholarly publishing—the objective is not to create science, but to improve reputation. Creative solutions must be made to ensure that scholars have adequate support, security, and resources to conduct their work in a competitive environment, without encouraging fraud, salami slicing, and metric gaming.”
In addition to the many suggestions being made from within academe, we are starting to see potential options being developed by the private sector.
Nancy K. Herther is librarian for American Studies, Anthropology, Asian American Studies and Sociology at the University of Minnesota Libraries, Twin Cities campus.