Self-Selected Vetting vs. Peer Review: Supplement or Substitute?
harnad at ecs.soton.ac.uk
Sun Nov 10 17:33:15 EST 2002
On Sun, 10 Nov 2002, Andrew Odlyzko wrote:
> An interesting source for data and references about peer
> review (including costs) that I have just learned about is the
> recent paper of Fytton Rowland, "The peer-review process,"
> Learned Publishing, vol. 15, no. 4, Oct. 2002, pp. 247-258,
> available online at
Yes, and Fytton seems to confirm the conclusion that peer-review
alone costs about $500 per paper.
"The True Cost of the Essentials (Implementing Peer Review)"
>sh> It is important to make it quite explicit here just how close our
>sh> position is, so as to pinpoint exactly what it is that we disagree on
>sh> (i.e., the trends you predict):
>sh> (1) My motivation is freeing access to all research, both pre- and
>sh> post-peer-review (i.e., not just unrefereed preprints), through self-archiving.
>sh> (2) If we could (for some arbitrary reason!) free only one of the two, I
>sh> would choose the refereed final draft rather than the unrevised preprint,
>sh> but there is no reason not to free both (with a few special exceptions:
>sh> see below).
> Here we differ, in that I would opt for freeing the unrevised preprint,
> if for no other reason than that it is much easier. (In general, my papers
> are more descriptive, writing about what is happening, and what is likely
> to happen, and less prescriptive, writing about what I would like to see
> happen. Yours are the other way.)
True. But we both agree that we would like to see self-archiving
happening, and that it is happening too slowly. Hence something over
and above description seems to be needed...
>sh> (8) Your belief that self-vetting will eventually replace classical
>sh> peer review is one that would *reinforce* rather than relieve
>sh> researchers' worry on that score.
> I do not necessarily accept the first sentence.
It is unfortunately a fact that worries about peer review are among the
two most frequently voiced reasons for not self-archiving (worries about
copyright being the other).
>sh> (9) I believe strongly that your belief (that self-vetting will replace
>sh> classical peer review) is wrong, and that there is no trend...
>sh> in that direction, nor will there be...
>sh> for the very concrete reasons I have repeatedly adduced
>sh> in this series of exchanges.) But if someone like me -- who believes fully
>sh> in self-archiving and the transition to open-access, and disbelieves in
>sh> any causal connection between that and any risk to classical peer
>sh> review -- is not persuaded by your contrary belief, nor the arguments
>sh> you adduce in its support, then how likely is it that someone who does
>sh> not yet believe in self-archiving *and* worries that it would be a risk
>sh> to peer review will be emboldened (to self-archive) by your hypothesis?
> The logic here is deficient. You are assuming that those skeptical of
> self-archiving are enthusiastic about classical peer review. Yet if we
> look at the literature on classical peer review, we see a lot of concern
> about its deficiency, dating back many decades, before self-archiving was
> even a possibility.
Logic dictates the following question: Do you have any evidence at
all that the many who are not yet self-archiving today because of
worries about its possible negative effect on peer review are the same
population who are worried about peer review's deficiencies? (Are the
relative sizes of the populations and the actual contents of the
respective worries not the relevant factors?)
>sh> (10) The optimality of open access to the research literature is a
>sh> certainty, not a hypothesis. We both agree about that, and about the ample
>sh> evidence that it maximizes research visibility, accessibility, uptake,
>sh> usage, citation, and impact, as well as scope, speed, and interactivity,
>sh> in short, that it greatly benefits research and researcher productivity.
> We both agree on this optimality, but that is not a universal opinion.
> I think there is still considerable concern, especially in the biomed
> community, about letting the laity have access even to the peer-reviewed literature.
Quite right. And that biomed "concern" comes close to being the most
irrational one of them all! It would be best served by keeping the
peer-reviewed literature under lock and key, even when it is on paper!
My guess is that such irrational fires are fed by self-serving concerns
about protecting journal access-revenue streams rather than protecting
the laity from the peer-reviewed literature! (That's what I've dubbed:
" Conflating Gate-Keeping with Toll-Gating"
But surely this conflation is irrelevant to what we are discussing here,
which is about the necessity (or non-necessity) of classical peer review
to protect the quality of the research literature, not its insufficiency
to protect the lay readership!
> Again, if you want a test, just look at what happens in arXiv.
I have devoted considerable space to trying to point out exactly why
I think arXiv is in no way a test of the hypothesis that self-selected
vetting can or will serve as a substitute for, rather than merely a
supplement to, classical peer review (while still yielding a literature
of at least equal quality): arXiv preprints and self-selected vetting co-exist and
have always co-existed in parallel with classical peer review, and hence
with answerability (and the expectation of answerability) to classical
peer review, exerting their quality-controlling and sign-posting effects,
as they always did. The only way to test whether self-selected vetting
can -- unlike in arXiv -- actually serve as a substitute for classical
peer review rather than merely a supplement to it (while still yielding
a literature of at least equal quality) is by testing a representative
sample of research WITHOUT any classical peer review at all to back it
up, only self-selected vetting (and a large enough sample, long enough,
for reasonable confidence that any effect would endure, and would scale
up to the literature as a whole).
This seems to me the minimal methodological requirement for testing
your hypothesis. Nothing of the sort has been tried yet.
>sh> How does anarchic self-selected vetting ensure an equivalent outcome,
>sh> and how is it to be sign-posted?
> I skipped through some earlier points, but let me just say a few words
> here that will address them as well as this one.
> To take a simple example, the quality signal associated with a journal-name
> is something rather nebulous. After all, it is not all that hard to start
> up a new journal,
True. And it takes time for a new start-up to establish its quality-level
and track-record. But that is as it should be, if the journal-name is
to serve as a reliable guide.
> and the perceived quality range of different journals is really vast.
Surely not just their perceived quality range, but their actual quality
range: There is, I think you agree, a quality hierarchy among journals,
corresponding to the degree of selectivity and the rigor of the peer
review level that a given journal practises.
> Further, even a single journal contains papers of varying quality levels.
True. But now we are talking about variance, of which there will always
be some, whereas the question is about means: mean differences in
quality between journals, and the reliability, hence the signal value,
of those differences. And the even more fundamental difference
(or non-difference, if your hypothesis is right) in quality between
the classically peer reviewed literature we have now and an untested
hypothetical literature with its quality controlled only by self-selected
vetting. (Nor have you yet hinted how that putative quality is to
be sign-posted: How do we know what has been sufficiently vetted and is
hence ready for reading and using?)
> So how does a scholar use the signal that publication in a journal provides?
By reading and using with confidence only the peer-reviewed papers,
and waiting for, or treating with caution, the not-yet-peer-reviewed
papers. (And there is more to it than that, for, as noted repeatedly,
classical peer review is a dynamic process of revision and answerability,
not just a static red-light green-light signal for raw preprints.)
> Well, it is a very vague quality signal. In the
> Gutenberg era, that was about all that was possible to obtain.
Even for those users who are not satisfied with the level of a literature
whose quality is controlled by classical peer review as ours is today
(and I might be one of those dissatisfied users too!) there is still the
anterior question: But would the level of a literature whose quality
was controlled by self-selected vetting only, IN PLACE OF classical
peer review, be even as high as that of our current, classically
peer-reviewed one? Never mind the "very vague quality" of this literature
and its sign-posts: How do we know that the alternative would not be
still lower in quality? (The absence of the sign-posting by journal-names
and associated track-records already seems to augur that it would be.)
> Today, though, there are a variety of other signals that people can collect
> and easily sift through (the subject of "The rapid evolution of scholarly
> communication," <http://www.catchword.com/alpsp/09531513/v15n1/contp1-1.htm>).
Indeed. And they should. As a supplement to what they already have. But
you have yet to give evidence suggesting that they should or would give
up what they already have (classical peer review) in exchange! Not only
no evidence, but not even any good reason...
>sh> Yes, human judgment, even expert peer judgment, is fallible. And
>sh> supplementing its systematic, answerable, labelled application with
>sh> open feedback will be very useful: But why construe this supplement
>sh> as a substitute? Why would it not simply co-exist with the classical
>sh> line of defence?
> They coexist now, and will continue to do so for a long time. (As I wrote
> in "The slow evolution of electronic publishing," sociological changes are
> very slow.) The question is, which is going to be more important? That is
> where we differ.
I'll speculate on the distant future once the sure benefits of universal
self-archiving are upon us. Open access is optimal and inevitable;
substitutes for peer review are far more uncertain. (And, for the
moment, speculating about them may well be at odds with hastening
the optimal and inevitable.)
>>ao> There are lots more examples.) The point is that classical peer
>>ao> review does not provide much of a signal, especially for journals
>>ao> in the lower quality tiers.
>sh> And you think self-selected feedback would provide at least as much of a
>sh> "signal," especially for journals in the lower quality tiers?
> Absolutely. Instead of a vague signal that some referee(s) decided the
> submission was worth publishing (without explaining to the readers the
> reasons for this judgement, or the quality evaluation), we could have
> a much richer set of signals.
But it would seem that we would be facing at least one more level of
vagueness for every single self-posted paper in the self-selected vetting
era (about 2,000,000 papers per annum, currently): And that is whether
even ONE referee has found it worth reading or using! (This is without
even raising questions about the competence or track record of the
self-selected vetters who will instead be patrolling the skies for us.)
We used to have publication lags, before the refereed paper appeared,
duly tagged as having been refereed and published. Self-archiving the
pre-refereeing preprints might have compensated for this somewhat,
until the other shoe dropped; but in the self-selected vetting era,
classical refereeing is gone, and the lag now risks becoming indefinite:
How do I know whether a paper has been vetted? and if not, whether and
when it ever will be? and if so, how competent the vetter(s) were? and
how conscientious the author was in following their advice?
This used to all come as a matter of course with the old, vague,
classical system. Where is even that level of vagueness now, with the
new anarchic system?
>sh> (Sometimes I think what you are saying is that the elite work does not
>sh> really need peer review and for the rest it doesn't really matter...!)
> I believe that all work benefits from peer review, but the potentially
> important work needs more of it, and will get more of it.
No doubt. But (by definition of the Gaussian distribution) there is far
more of the other kind of work in the 2,000,000 -- and not always clear
which is which...
>sh> Classical peer review is not a signal; it is a dynamic, interactive
>sh> quality-control and tagging system -- and the only one that is systematic
>sh> and answerable. I cannot see any way that anarchic self-selected feedback
>sh> can replace this (other than by re-inventing classical peer review under
>sh> another name) -- though I can see how it can (and does) complement it.
> We'll just have to continue to disagree on this.
But let it be after the optimal/inevitable open-access era is safely
upon us -- not before, or instead!