Peer review, or scientific refereeing, is the basis of the academic process. It’s a rigorous evaluation that aims to ensure only work which advances knowledge is published in a scientific journal. Scientists must be able to trust this system: if they see that something is peer reviewed, it should be a hallmark of quality.
When the editor of a scientific journal receives a manuscript, they ask another scientist – a specialist in the field – to review it. The referee is required to advise the editor on whether the manuscript should be published and to give feedback to the authors.
The system is not flawless. There have been instances of fraud and manipulation of the refereeing process, but these are – we hope – isolated cases.
But there are much bigger systemic problems associated with peer review, and they are negatively affecting scientific credibility. One is that, globally, it is hard to find referees: reviewing a manuscript demands a lot of time for minimal reward. Very few journals pay referees, and most academics who act as referees do so for free in their spare time.
On top of this, those who do act as referees often struggle to deliver on time. Worse still, their reports are not always helpful to editors or authors.
Some journals work actively to tackle these issues, but more can be done to ensure that the scientific refereeing system retains its integrity.
The challenges
Journal editors are frustrated about the dearth of referees. In an open letter to the scientific community, a group of editors wrote that, despite:
… so much weight [being] given to peer-reviewed publication, the essential “backroom” tasks of editing journals and reviewing articles are rarely acknowledged as aspects of academic performance.
No wonder they’re worried: more than 1 million research articles are published globally each year. That requires a lot of referees. But finding appropriate referees is just one part of the bigger task facing editors.
Editors have to get referees to stick to the agreed deadlines. That’s not easy: people tend not to prioritise their review tasks since time spent on their own research is more rewarding.
An experiment conducted with the Journal of Public Economics, based in Cambridge in the US, found that its referees are late with their reports half of the time. There are also instances, across journals, of referees simply never delivering even though they have promised to do so.
In some disciplines, these problems have given rise to a serious publication lag – the time between a manuscript’s arrival and its actual publication. Over the past 30 years this lag has nearly tripled in economics, from 11 months to just under 30 months.
Not only does it take longer to disseminate ideas; the publication lag also worsens the prospects of young scientists who need publications to be hired.
Another problem with the existing system is that referee reports do not always adequately inform the editor or suggest ways of fundamentally improving the article.
It’s not just authors who complain about this: journal editors do too. One explanation is that referees may follow their own interests, which are not necessarily those of the editor or the author.
All too often referees try to impress editors by making blemishes look like flaws. Economists call this problem “signal jamming”. At worst, it may lead to innovative research being turned down.
Possible changes
The good news is that journals are aware of these problems, and are committed to tackling them.
Journals should develop and nurture a large base of potential referees, constantly adding new ones and retaining old ones. And these referees need proper recognition. This could involve simply thanking referees publicly, or perhaps awarding prizes for good refereeing.
Journals should also consider paying referees. The estimated value of unpaid referee time is as much as £1.9 billion a year – it is clearly a service that requires some financial reward.
Small changes help, too. Shorter deadlines reduce turnaround time, since referees often submit just before the deadline. A public list of referees’ turnaround times also encourages them to stay on time.
Editors should also reject articles that are too sloppy, rather than relying on referees to improve them.
They should engage in “active editing”, instructing the author to ignore referee requests that merely ask them to fix blemishes.
Finally, editors should pare down the demands on referees, perhaps by asking them to separate necessities from suggestions. The guiding principle should be that the work is the author’s – not the referee’s.
New approaches are being tested
Journals are already testing new approaches. For instance, some require their editors to judge the quality of a referee to weed out those who are simply unhelpful.
Elsevier, a major publisher, has launched a platform which publicly lists referees and how often they have written referee reports. A similar, independent platform is Publons.
“Open peer review” is also growing in popularity. Traditionally, reviewers remain anonymous to guarantee an unbiased opinion. Open peer review goes the opposite way: the referee’s name and report are published together with the article. Everyone can see who the referee was, which is meant to encourage transparency. Not everyone is convinced about this approach.
Another option is post-publication peer review, in which articles remain open for comments from anyone at any time. Sadly, internet trolls have tainted this process for many scientists.
It is encouraging that the problems of peer review are being debated and that new approaches are being tested. The peer-review process is very important and its challenges must be taken seriously if academics are to keep publishing quality articles that disseminate new ideas.