Impact Factor, Open Access & Other Statistics-Based Quality
harnad at ecs.soton.ac.uk
Sat Jun 5 07:43:33 EST 2004
Citation counts do not measure quality directly, but they are
correlated with it. So are download counts, and no doubt other
digitometric measures that are under development, and that will be
derived from a growing OA corpus. See http://citebase.eprints.org/
Some studies on the correlation:
Lee KP, Schotland M, Bacchetti P, Bero LA (2002) Association of
journal quality indicators with methodological quality of clinical
research articles. JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION
287 (21): 2805-2808
"High citation rates... and low manuscript acceptance rates...
appear to be predictive of higher methodological quality scores
for journal articles"
Ray J, Berkwits M, Davidoff F (2000) The fate of manuscripts rejected
by a general medical journal. AMERICAN JOURNAL OF MEDICINE 109
"The majority of the manuscripts that were rejected... were
eventually published... in specialty journals with lower impact..."
Donohue JM, Fox JB (2000) A multi-method evaluation of journals in the
decision and management sciences by US academics. OMEGA-INTERNATIONAL
JOURNAL OF MANAGEMENT SCIENCE 28 (1): 17-36
"perceived quality ratings of the journals are positively
correlated with citation impact factors... and negatively
correlated with acceptance rate."
Yamazaki S (1995) Refereeing system of 29 life-science journals
preferred by Japanese scientists. SCIENTOMETRICS 33 (1): 123-129
"There was a high correlation between the rejection rate and
the impact factor"
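The kind of journal-level correlation these studies report can be
checked on one's own data with a plain Pearson coefficient. A minimal
sketch (the rejection rates and impact factors below are invented
purely for illustration, not taken from any of the studies cited):

```python
# Pearson correlation between two journal-level indicators,
# e.g. rejection rate vs. impact factor (cf. Yamazaki 1995).
# All journal figures are hypothetical, for illustration only.
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation coefficient r between two sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Five hypothetical journals: rejection rate and impact factor.
rejection = [0.90, 0.75, 0.60, 0.40, 0.20]
impact    = [30.0, 12.0,  5.0,  2.5,  1.0]

print(pearson(rejection, impact))  # high positive r for this toy data
```

The same function applied to citation counts and download counts (or
any other digitometric pair) gives the correlations discussed above.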
On Sat, 5 Jun 2004, Jan Velterop wrote:
> There is of course a distortion. If one is looking to measure quality, an
> impact factor is unlikely to be the right tool. Two equivalent papers, one
> OA and the other in a subscription journal, should have the same or a very
> similar IF. If not, they're not equivalent (or, more to the point in the
> current situation, their impact isn't measured properly, e.g. by arbitrary
> exclusion from the count by the 'impact factory').
> But impact factors do not measure quality; they measure impact. Not nearly
> the same thing. The OA paper of two equivalent ones is likely to have
> the better impact (when measured, of course).
> Everybody is playing the impact factor game. Authors and publishers
> (including BioMed Central with some pretty nice impact factors) do,
> because most funders and tenure committees do (though often deny it), so
> careers and business prospects depend on it. But it shouldn't be confused
> with quality.
> On quality flaws in high impact journals, this may be illustrative
> reading, too:
> Jan Velterop
> On 1 Jun 2004, at 06:34, Sally Morris (ALPSP) wrote:
> > I'm concerned that there's possibly a built-in distortion here. Impact
> > factors (or any other 'qualitative' measures) need to be equally
> > applicable across the entire literature, both open and closed-access.
> > However, both 'big deals' and OA may have an inbuilt distortion factor
> > which has everything to do with availability and nothing (necessarily)
> > to do with quality.
> > Can anyone suggest how we can solve this dilemma? I'm assuming our
> > aim is to 'measure' quality, not to skew perceptions in favour of any
> > particular business model ;-)
> > Sally Morris, Chief Executive
> > E-mail: chief-exec at alpsp.org