Alternatives to Peer Review
berezin at MCMAIL.CIS.MCMASTER.CA
Fri Jun 2 21:00:45 EST 1995
Dear Dr. Stormo:
I can send you some papers by mail if you give me your
full postal address.
You can also ask Dr. Don Forsdyke, Queens University,
[ FORSDYKE at qucdn.queensu.ca ]
to send you a pack of papers on "bicameral peer review".
In a nutshell, the idea is to split proposals into
retrospective (basically, track record) and prospective
(the proposal as such) parts, which are evaluated separately
(along two different routes, hence the term "bicameral").
The next point is to replace sharp cut-offs (the major
flaw of the present system) with a sliding funding scale.
De-emphasize the "proposal" (futurology) and stress the track
record (actual achievements). In short, move toward
how it should have been in the first place:
"FUND RESEARCHERS, NOT PROPOSALS".
This, as some may say, is also "not excellent", but it
is far, far better than the present NIH/NSF/NSERC/MRC
system of "selectivity". And don't worry about REAL
crackpots - they won't be able to slip through this
system either, at least no more easily than now.
(please see many more comments inside your text)
On 2 Jun 1995, Gary Stormo wrote:
> Would you be so kind as to give a brief summary of proposals to replace peer
> review? Unfortunately I do not have the time, at least right now, to track
> down the references you provided. I am very familiar with the peer review
> process, both as reviewer and reviewee, and I know that it is not perfect.
> However, I am unaware of alternatives, possibly through ignorance, that
> would appear to be better.
> With regard to funding, peer review is really designed to assign priorities
> to competing applications. Although it has changed somewhat, the process
> at NIH used to be first deciding if an application was to be approved or not.
> Disapproval meant, in simplistic terms, that the application was not worth
> funding even if sufficient funds existed (i.e. all approved applications
> were funded). This happened to only a very small percentage of applications
> and usually indicated some fatal flaw in the application. All of the
Almost no one will object up to this point. The REAL problem
starts right after, when the system tries to SEPARATE the "good"
applications into (1) "good but unfunded" and (2) "very good and
funded". This is where the NIH (etc.) system enters a
FUNDAMENTALLY FALLACIOUS course by implicitly assuming that
"those applications which gained the highest peer review
scores are really THE best (and hence the most promising)".
The problem is that the inherent uncertainty of the process
(plus all the subjective biases of peer reviewers, who are for
the most part direct COMPETITORS of the applicants)
makes the whole game of "fine tuning" a rather pointless
exercise; I would say it does no better
than a random draw would.
The ALTERNATIVES can (roughly)
be summarized as three:
(A) Keep business as usual (the system as it is today):
fund the top proposals (those with the highest peer review
scores) and discard the rest of the (also "good") proposals.
(B) Assuming there is "not enough money to fund all 'good' proposals",
fund (whatever %) of the "good" proposals on a RANDOM basis
(e.g. by throwing dice). You will, of course, get some
lucky and some unlucky ones, but at least the scheme will be free
of the coercive pressure of trends, which is another
detrimental aspect of the present system. Historically
speaking, this is (more or less) how science ACTUALLY
developed until about 1950.
(C) Share whatever pool of money is available
among all "good" proposals/researchers on a
SLIDING SCALE (according to cumulative ranking, to
reduce the error factor).
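To make the contrast between the three schemes concrete, here is a minimal numeric sketch. All of it is illustrative, not any agency's actual formula: the scores, the budget, and the 15% cut-off fraction are invented, and the sliding scale is modeled as a simple share proportional to each proposal's score.

```python
# Toy comparison of three ways to divide one funding pool among
# the same set of "good" (approved) proposals.  All numbers invented.
import random

def cutoff_funding(scores, budget, fraction=0.15):
    """(A) Business as usual: fully fund only the top `fraction`; rest get zero."""
    n_funded = max(1, int(len(scores) * fraction))
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    grant = budget / n_funded
    awards = [0.0] * len(scores)
    for i in ranked[:n_funded]:
        awards[i] = grant
    return awards

def lottery_funding(scores, budget, fraction=0.15, rng=random):
    """(B) Fund the same number of proposals, but pick them at random."""
    n_funded = max(1, int(len(scores) * fraction))
    winners = rng.sample(range(len(scores)), n_funded)
    grant = budget / n_funded
    awards = [0.0] * len(scores)
    for i in winners:
        awards[i] = grant
    return awards

def sliding_scale_funding(scores, budget):
    """(C) Every "good" proposal gets a share graded by its ranking score."""
    total = sum(scores)
    return [budget * s / total for s in scores]

scores = [9.0, 8.5, 8.0, 7.5, 7.0, 6.5, 6.0]  # hypothetical review scores
budget = 700.0                                 # hypothetical pool, in $k

print(cutoff_funding(scores, budget))         # one winner takes all: [700.0, 0, ...]
print(sliding_scale_funding(scores, budget))  # everyone funded, graded by score
```

With seven proposals and a 15% cut-off, scheme (A) funds exactly one and zeroes out six "good" ones; scheme (C) funds all seven, with the top-scored proposal getting proportionally more than the lowest. The sharp cliff between ranks 1 and 2 in (A) is the "fine tuning" the text argues the process cannot justify.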
I personally believe that (B) is better than (A) (plus,
dice are much cheaper than the NIH bureaucracy - say this
to Gingrich), but the real turning point would be to
adopt (gradually or abruptly) scheme (C).
For many reasons (C) is the best, as it at least keeps
all capable (and already hired) people working
on some meaningful projects (the present system fails
to provide this). (There are some logistical problems with
soft-money people, but they can be addressed.)
However, the major PROBLEM is that (C) means a radical
departure from one of the most cherished sacred cows of
this continent: the idea that everything which is worthy
(including science) is a "competition". Only a miserable
failure (unfortunately, inevitable) of the entire idea of
"competition as the true foundation of this society" will
likely be able to correct this misperception. My feeling
is that it is bound to happen; my estimate is that it
will take about 10 more years before the idea of
competition (on its presently Homeric scale) is abandoned.
However, we scientists are supposed to be ahead of (not
behind) the others, so we could start the (inevitable,
anyway) adjustment earlier. If I did not keep even
a mild hope that our species (scientists) is able
to rise to this task, I would not bother to write all
these messages. I am not sure - perhaps we indeed
CANNOT - but at any rate the choice, here and now,
is OURS, not Gingrich's and company's.
Should the latter (sliding scale) replace the present
(selectivity) system, the ONLY people who will APPARENTLY
lose are those who presently run (often OVERfunded)
empires - people broadly known as the "grantsmanship
establishment". Why do I say "APPARENTLY lose"? Because
they (the grantsmanship breed), as a rule, run many
projects (often ca. 5 to 10) - not because all these
projects are really that important, but largely because
having more operating money and larger groups brings
even more power and institutional weight, etc. (a self-
propelled grantsmanship loop).
Trimming the budgets of these "fat cats" will force them
to keep ONLY THEIR BEST PRIORITIES (when and if they have
any - some do, some don't) - and as a result, the QUALITY
of what they are doing is likely to _increase_, not decrease.
(of course, less $$$ means less power, but this is another
issue). As a result, almost all our (research) community
will benefit from the sliding scale.
Because NIH (and the other major granting bodies in the USA and
Canada) are dominated by a powerful grantsmanship elite, it
is unlikely that they will be eager to initiate the change.
That is why (WITH GREAT REGRET - mind you) we (those who
believe that change is highly desirable) are virtually
"forced" to go to the press and politicians to look for
their help. Yes, in some way it is almost like going
to a prostitute for love - we would prefer NOT to do
so. (And we, in Canada, have already had our views
distorted by ill-informed political "helpers".)
It would be immensely better if we (the scientists)
could clean our own house ourselves. Unfortunately, so far
we have not shown even a marginal capacity for doing it on
our own terms - the spirit of "competition" and
"dog-eat-dog" is still too strong. Too bad.
On the other hand, should we be able to break this
barrier and unscrew the main holding bolt, all the rest
is rather minor technical detailing of
procedures - there is no shortage of concrete ideas on how
this can be done.
Alex Berezin, McMaster University,
> other grants were approved, meaning that if sufficient funds existed they
> should receive them. Of course there have never been enough funds to
> fund all of the approved applications; several years ago the percentage
> that were funded was often in the 25-30% range, and lately it's more often
> been near, or below, the 15% range. So given that there are a lot more
> good ideas, in the form of approvable applications, than there are funds
> to support them, how does one, or the government in this case, go about
> deciding which are to receive funds and which are not? I think most
> people would agree that the advice of experts should be obtained, and
> that's the basic idea behind peer review. As I said, it's not perfect and
> some ideas that later prove to have been worthwhile were passed over,
> and other ideas were funded that turned out to be complete wastes of money.
> But those kinds of errors are to be expected given that the reviewers are
> merely experts and not omniscient. Other kinds of problems with peer
> review, such as conflicts of interest, are avoided with
> great care, at least at NIH. Probably some cases slip through. Perhaps
> a more pervasive problem is the tendency of reviewers to look more favorably
> at applications that look like a sure bet than at ones that seem more likely
> to fail, but have large benefits if they succeed. But this is not really
> a fault of the peer review system per se, that is one would still like
> the advice of experts, but more a problem in how the government should
> invest its limited resources. In fact, NIH has instituted a special
> category of grants, called R21s, that are pilot projects or feasibility
> studies for applications that are high risk/high payoff. That is they
> may well not succeed, but if they did they could prove to be very
> important. The idea is to identify applications in that category and
> to give them enough funding to be able to show whether the idea has
> real merit. If so then it will be able to compete with the regular
> grants, and if not it will at least have been tried without allocating
> a large amount of resources.
> Of course, most of what I just went through is just details, but the essence
> of peer review is to decide how to spend public money based on the advice
> of experts, i.e. the peers of the applicants, who are supposedly experts in
> their field or they wouldn't even be applying for funds. If you can
> come up with a better alternative to allocating limited resources I
> would love to hear about it.
> Gary Stormo |
> MCD Biology | Keep in mind that to the advertising industry,
> Univ. of Colorado | every day is April Fool's Day.
> Boulder, CO 80309 |