A recent Twitter exchange got me thinking about academic publishing again. It seems to me that much of the current debate about peer review, publication bias and open access boils down to a conflict between quantity and quality, and I have a favourite: quantity.
Quality (the problem)
Quality is why we have peer review: to ensure that only the good stuff gets published. Clearly this doesn’t work, but I do still feel there is a place for peer review; not in the selection of the ‘best’ papers, but in the filtering out of erroneous work. In the UK, the REF, and by extension universities, encourage quality over quantity. I have often heard school heads and research group leads trumpet the need for fewer, higher-quality papers. No wonder, if that’s what brings in the money.
Quality is important, for sure (even if our ways of defining quality are weak). However, in my opinion, these incentives are totally unnecessary for ensuring quality. The reason an economist might give half their right thumb to publish in the American Economic Review over any other journal is not simply the quantifiable career benefits and employability. No doubt the prestige gained (or the envy induced) is a sufficient incentive in itself.
Quantity (the solution)
Isn’t quantity what current campaigns are striving for? We want to reduce publication bias through the publication of uninteresting or negative results. We want datasets and detailed methodologies made available. Yet academics are encouraged not to waste their time on these things and instead to strive for that publication in AER/Science/Nature/NEJM. We want academics to stop prioritising prestigious journals with unscalable paywalls, yet this is exactly what they are currently incentivised to do.
Incentives for quantity should be appended to my previously suggested solution to the problems of academic publishing. The REF should reward quantity instead of quality, for example. Some research suggests that academics face a quality/quantity trade-off, while other work suggests that the two may go hand in hand; no doubt this depends on the field of research. Nevertheless, a realignment of incentives towards quantity and away from (self-sustaining, immeasurable) quality would surely be better for academia as a whole.
Interesting discussion on Twitter, in which everyone disagrees with me: https://twitter.com/ChrisSampson87/status/423698185672876033
I’m really pleased that Richard Norman and Terry Flynn have both written follow-up blog posts on this topic (here and here). I thought it worth responding to those here…
Richard sides with quality (as do I, given the choice), but identifies that authors’ desire to push up their metrics results in them targeting prestigious journals and only writing up citable work. It seems to me that this is a recognition of the self-sustaining nature of quality that I identify above. But Richard suggests that the issue is not so much that the benefits of publishing less interesting papers are too low, but that the cost is too high. I disagree. BMC, and other similar enterprises, bring the cost (in time and effort) down to the bare minimum. No matter what, writing a paper requires effort. BMC has been around for a decade or so, and it hasn’t solved our problems. Academics still need the incentive to write those papers that only the likes of BMC will publish.
Terry cites his own experience as evidence that the pursuit of quality is more valuable than the pursuit of quantity. I agree. No doubt Terry’s important contributions wouldn’t have had so much of an impact had he not taken the time to perfect them. But what’s best for Terry isn’t necessarily best for society. Does formal reporting of early work or failed attempts really jeopardise the quality of the final results? It may allow others to beat you to it, but that’s only bad for one person. Fewer publications can certainly favour an h-index, but does a higher h-index really indicate a greater contribution to the field? I’m not convinced it does.