Peer Review: The Worst Way to Judge Research, Except for All the Others

A look at the system’s weaknesses, and possible ways to combat them.

Even before the recent news that a group of researchers managed to get several ridiculous fake studies published in reputable academic journals, people have been aware of problems with peer review.

Throwing out the system — which judges whether research is robust and worth publishing — would do more harm than good. But it makes sense to be aware of peer review's potential weaknesses.

Reviewers may be overworked and underprepared. Although they’re experts in the subject they are reading about, they get no specific training to do peer review, and are rarely paid for it. With 2.5 million peer-reviewed papers published annually worldwide — and more that are reviewed but never published — it can be hard to find enough people to review all the work.

There is evidence that reviewers are not always consistent. A 1982 paper describes a study in which two researchers selected 12 articles already accepted by highly regarded journals, swapped the real names and academic affiliations for false ones, and resubmitted the identical material to the same journals that had accepted them 18 to 32 months earlier. Only 8 percent of editors and reviewers noticed the duplication, and three papers were detected and withdrawn. Of the nine that continued through the review process, eight were rejected, with 89 percent of reviewers recommending rejection.

Peer review may be inhibiting innovation. It takes significant reviewer agreement to have a paper accepted. One potential downside is that important research bucking a trend or overturning accepted wisdom may struggle to survive peer review. In 2015, a study published in P.N.A.S. tracked more than 1,000 manuscripts submitted to three prestigious medical journals. Of the 808 that were eventually published somewhere, the 2 percent that were most frequently cited had been rejected by those journals.

An even bigger issue is that peer review may be biased. Reviewers can usually see the names of the authors and their institutions, and multiple studies have shown that reviewers preferentially accept or reject articles based on a number of demographic factors. In a study published in eLife last year, researchers created a database of more than 9,000 editors, 43,000 reviewers and 126,000 authors whose work led to about 41,000 articles in 142 journals across a number of domains. They found that women made up only 26 percent of editors, 28 percent of reviewers and 37 percent of authors. Analyses showed that this was not because fewer women were available for each role.

Aaron E. Carroll – The New York Times – November 5, 2018.
