Academics in the sciences and medicine list many accomplishments on CVs and biosketches, but the peer-reviewed journal article rules the realm. Even grant dollars accrue only to those with publications. Pretty preliminary data may grace a proposal, but until at least a portion of that critical novel knowledge has passed peer review, it remains suspect in the eyes of grant reviewers.
A 2002 paper by Ray Spier relates a history of peer review. The first example occurred in medicine late in the ninth century:
Perhaps the first documented description of a peer-review process is in a book called Ethics of the Physician by Ishap bin Ali Al Rahwi (854–931) of Al Raha, Syria. This work, and its later variants or manuals, states that it is the duty of a visiting physician to make duplicate notes of the condition of the patient on each visit. When the patient had been cured or had died, the notes of the physician were examined by a local council of physicians, who would adjudicate as to whether the physician had performed according to the standards that then prevailed. On the basis of their rulings, the practising physician could be sued for damages by a maltreated patient.
The invention of the printing press allowed much wider distribution of work and brought with it a greater responsibility to regulate what was written. As science evolved, societies arose to discuss and debate findings. With societies came their journals. Initially, editors selected studies for inclusion. In the 1700s, editors might ask committees of society members to review a paper, a process originating with the Royal Society of Edinburgh. In general, through the early 20th century, journal space exceeded submissions, so editorial boards primarily existed to solicit material, not to reject it. Only fatally flawed studies would be excluded from the scientific record.
Increasing diversity and specialization of science eventually demanded expertise beyond a small editorial board; in the 1940s some journals began adopting peer review. The practice really boomed after the photocopier became commercially available in 1959. Carbon paper allowed only 3–5 simultaneous copies, while the Xerox copier permitted almost limitless facsimiles. The post-war era also saw an increase in the number of scientists and the range of interests; the material generated quickly outpaced the available journal space. At this time, peer review took on its modern functions, especially ranking studies among the different "tiers" of journals or excluding them from publication altogether.
No one suggested a critical study of peer review until the 1980s. Given the importance and expense of this process, the paucity of literature on it is truly amazing!
The most comprehensive summary of the literature on peer review comes from The Cochrane Collaboration:
The Cochrane Collaboration is an international, independent, not-for-profit organisation of over 28,000 contributors from more than 100 countries, dedicated to making up-to-date, accurate information about the effects of health care readily available worldwide.
We are world leaders in evidence-based health care
Our contributors work together to produce systematic reviews of healthcare interventions, known as Cochrane Reviews, which are published online in The Cochrane Library. Cochrane Reviews are intended to help providers, practitioners and patients make informed decisions about health care, and are the most comprehensive, reliable and relevant source of evidence on which to base these decisions.
Cochrane last published a systematic review of peer review in 2008. Most studies examined issues like masking reviewers' identities during review and the impact of peer review on the quality of the text from initial submission to final published version. Virtually nothing has been published about manuscripts rejected from journals, in part because most studies have focused on a single journal. Cochrane reviews often rely on meta-analysis; however, these studies were so dissimilar and nongeneralizable that their results could not be pooled.
As most of us in this biomedical scientific endeavor are dependent on the peer-review process for career advancement, the conclusions of this group are chilling:
We conclude that at present there is little systematic, empirical evidence to support the use of editorial peer review as a mechanism to ensure quality of reports of biomedical research in biomedical journals. Practitioners of editorial peer review should recognise the lack of convincing empirical evidence of its effects and bear this in mind when making editorial decisions. At the same time, editors and reviewers should be aware of the conceptual and methodological difficulties involved in studying an activity as complex as the scientific process, of which editorial peer review is an integral part.
So why do we put so much value on the peer-review process? What do we expect from it in its current form?
First, peer review assures a minimum level of quality:
- Many journals supply checklists to help meet this requirement. Are the studies methodologically sound? Were ethical standards met? Are the results clear? Are the conclusions reasonable? Has it been done before? In the 19th century, this level of review could be provided by an editor; today, broader expertise may be required. However, this level of review certainly seems reasonable for any endeavor.
- We hope that peer review prevents fraud. Reviewers have identified duplicate publication, plagiarism, and other unsavory practices, preventing future embarrassment for the journal(s) in question. Since these manuscripts generally get rejected, the rate of this type of problem is difficult to estimate. Obviously, reviewers cannot exclude all fraud, or we would not see so many retractions. Fake and "over-interpreted" data can still slip under the radar.
Second, peer-reviewers and editors help decide whether a study meets the standards of a particular journal. Journals have thus become "tiered" by their reputation, primarily as assessed by the impact factor. Many institutions weight publications on the CV by the impact factor of journals; one's career may depend on a glamor-journal citation, not the quality or meaning of the actual study. I have previously addressed the insanity of the impact factor and glamor-rags. I will not discuss this again here.
My question at this point: does this second expectation remain valid? Our current system evolved in a "buyer's market," one where submissions overwhelmed available printed space. In an online world, there is no reason to limit the studies in an issue, other than gaming your selectivity and impact factor. Are these valid reasons to maintain the system? Or should we return to the days of old, when only fatally flawed science was excluded from the "published" record? A novel system is just around the corner, if we let it happen.