AN HONEST APPRAISAL OF PEER REVIEW

W. R. Gibbons gibbons at northpole.med.uvm.edu
Fri Jun 21 12:45:54 EST 1996


On Sat, 15 Jun 1996, Bert Gold wrote:
OK, I guess he forwarded:
> 
> 
> PAPER in FASEB Journal (1993) volume 7  pages 619-621
> 
> On giraffes and peer review
> 
> D. R. FORSDYKE

Which read, in part:

>                  Cutbacks Reveal Flaws
>   And so the process began. The grant applications were written
> and duly marked. Funds were awarded to those who scored highly.
> For many years, as long as adequate funds chased the pool of
> talent there were few complaints from the research community.

<deletions>

   Then in the early 1970s came the crunch. For the first time (at
> least in North America), there were insufficient funds to sustain
> all the talented researchers [5-7]. The administrators, muttering
> among themselves about the invigorating effects of heightened
> competition, responded by elevating the cut-off point below which
> funds would not be given. Suddenly, a new selective gate had been
> imposed. Being able at research was no longer a guarantee of
> getting through. A new breed of scientist began to emerge,...the
> grantsmen,... people whose skills lay not so much in doing good
> science, but in tuning into the perceptions of the peer group. (I

<more deletia, then the stuff I disagreed with:>

> designers of the peer-review system failed to do. Two principles
> of decision-making in uncertain environments are, (i) place most
> weight on parameters which can be assessed objectively, and (ii)
> hedge your bets. A design based on these principles, named
> bicameral review, has been presented elsewhere [18,19]. Grant
> applications are divided into a major retrospective part and a
> minor prospective part, which are routed separately. The
> retrospective part (track record), is subjected to peer review.
> The prospective part (proposed work) is subjected to in-house
> review by the agency, solely with respect to budget
> justification. 

There's a serious problem in this.  One thing the grantsmen have figured 
out very, very well is that you must not only get the grant, but publish 
papers.  Lots of papers.  The more papers the better, because if you 
publish enough you will overwhelm the reviewers who have no time to read 
all that you have written, to see if those papers are in fact any good.

I've argued on a study section that an applicant whose work I knew all too
well had contributed very little to his field.  Others pointed to an
impressive stack of papers and manuscripts--100 papers over 5 years--and
said, "Look! How can you say that?"  I explained I had read, reviewed, and
managed the review of many of those papers, and they were not significant
works.  The author wrote reams of uncritical rubbish, and just kept
submitting each paper to lesser and lesser journals until one accepted it.
I had seen some papers once as an editor, and two or three times as a
potential reviewer.  The only response I could get was, "He is *very*
productive; he deserves to be funded." 

So to some extent the retrospective analysis operates now, but badly. 
Output is weighed, or counted (and counted in a silly way in which each
author of an 8-author paper is credited with a full publication).  Output
is rarely critically assessed.  If the proposed system were inaugurated, it
would in my opinion exacerbate problems if it did not address the question
of what it means to be productive. 

   Ray Gibbons  Dept. of Molecular Physiology & Biophysics
                Univ. of Vermont College of Medicine, Burlington, VT
                gibbons at northpole.med.uvm.edu  (802) 656-8910
