In his regular column, the BBC's Pallab Ghosh looks into how measures of the quality of scientific research have changed.
In the old days it didn't matter so much which journal research was published in. Now it counts for everything.
Are citations a valuable measure of good research?
Funding bodies now award grants almost exclusively to researchers who have published in a handful of top scientific journals.
According to Peter Lawrence, an emeritus professor at the University of Cambridge, it's this new accounting mentality that is "corrupting" the scientific process.
Professor Lawrence, who used to edit a scientific journal and is a respected researcher himself, says: "It's a bit like judging a hospital by how quickly the telephone is answered.
"[Awarding grants] was never a very accurate process in the past. But it was done by people reading the [research] papers and determining whether it contained sparks of originality and quality of rigour and argument. Now that aim has been more or less abandoned."
What counts now is how often the research is cited, or mentioned, by other researchers in their publications, he says. This is supposed to be a reflection of how influential a piece of research has been. But many outside the grant awarding system regard it as a crude measure.
Citations, they say, measure how fashionable and well funded a field of research is rather than its true quality.
Nowadays, scientists are ranked by working out how many papers they publish in top journals and the degree to which their work is cited. Some funding bodies even have complex algorithms to calculate a researcher's prowess.
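The column does not name any particular algorithm, but one widely used citation-based metric of the kind it describes is the h-index: a researcher has index h if h of their papers have each been cited at least h times. A minimal sketch, with invented citation counts for illustration:

```python
def h_index(citations):
    """Return the h-index: the largest h such that the researcher
    has h papers each cited at least h times."""
    # Rank citation counts from highest to lowest.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # this paper still clears the bar at its rank
        else:
            break
    return h

# Hypothetical researcher with six papers and these citation counts:
print(h_index([25, 8, 5, 3, 3, 0]))  # → 3
```

Note how the metric rewards a body of consistently cited work over a single highly cited paper, which is part of why such numbers can diverge from "the real value of the work".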
According to Professor Lawrence: "Once you start doing that, those numbers start gaining an importance to a point where in fact the real value of the work is extinguished."
The priority now for many career scientists is to market themselves in a way that maximises their ability to have their research published in the top journals. They spend time travelling to scientific meetings to network with colleagues who may be reviewing their work. The research itself can at times seem a secondary concern.
Because everything now depends on being published in the top journals, those journals are on the receiving end of an enormous number of submissions.
Some journals send out for review only 10% of the research papers they receive. Journal editors therefore have to make difficult decisions in a short space of time on which research to consider for publication and which to throw out.
All sorts of factors come to bear on those decisions. Among these is consideration of the journal's own "impact factor" - which is determined by how often the research they publish is cited.
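For context, the standard Journal Citation Reports definition (not spelled out in the column) is: a journal's impact factor for a given year is the number of citations that year to items it published in the previous two years, divided by the number of citable items it published in those two years. A toy calculation with made-up figures:

```python
def impact_factor(citations_this_year, items_prev_two_years):
    """Citations this year to the journal's articles from the previous
    two years, divided by the citable items published in those years."""
    return citations_this_year / items_prev_two_years

# Hypothetical journal: 1,200 citations in 2024 to the 300 articles
# it published across 2022 and 2023.
print(impact_factor(1200, 300))  # → 4.0
```

Because the denominator is fixed by past output, editors can raise the number only by publishing papers they expect to be cited quickly, which is the pressure the next paragraph describes.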
Again, this can favour mediocre research in a fashionable field over high quality work in a smaller, less well funded area. It can also favour established scientists who are well known in their field.
To younger scientists, publication in a major journal may seem like a lottery. Of course, the work has to be of high quality, but success also depends on a great deal of good luck, on being part of an influential group and on the timing of submission.
If their work is published, they increase their chances of promotion and receiving further funding. If not, their career stands still or, once their funding runs out, takes a nosedive.
People have to publish, so they submit more to the top journals, exacerbating the problem for already overstretched journal editors dealing with an avalanche of research papers.
The only way out of this cycle of "corruption", according to Peter Lawrence, is for grant agencies to move away from counting citations and to actually read research proposals and to judge their quality.