Query about journal (not author) self-citation rates

Stevan Harnad harnad at ecs.soton.ac.uk
Tue Mar 25 10:01:27 EST 2003


Author self-citation rates are easily calculated and corrected for.
One can always subtract self-citations from an author's citation
count. But what about journal self-citations (by which I mean
articles in a journal citing other articles in that same journal)?

In both cases -- author self-citation and journal self-citation -- the
self-citations may be legitimate and necessary, or they may be
excessive and inflated. In the case of journals, it is no doubt
possible that the majority of the important and relevant work happens
to be done in the pages of that journal.

But because journals are often evaluated on the basis of their impact
factors (by libraries choosing which journals to purchase; by authors
choosing which journals to submit to; and by grant-funders and research
assessors choosing which research and researchers to hire, fund, and
promote), there is every temptation to push those journal impact factors
as high as possible. The legitimate way is to attract the best research
by maintaining the best peer-review standards, but a short-cut is to
encourage authors to cite the journal more often in their articles
(as a condition of, or inducement for, acceptance in that journal).

Which leads me to my question: Has anyone done a systematic analysis
to test for this? One could calculate average rates of (S) journal
self-citation (articles in a journal citing other articles in the same
journal, not self-citations by its authors), (T) citations *to* other
journals, and (B) citations received *from* other journals (this could
be done across as well as within fields, or even subfields). This could
perhaps also be fine-tuned by the
citation-rates of the authors in the journals (their personal t and b
rates, across all their papers). This would give a preliminary picture
of which journals have inflated S-rates, relative to others, perhaps
weighted by the other factors, including google-like "authorities",
namely, high-impact, uninflated journals that can be used as bench-marks.
Even the possibility that a journal's higher S-rate is because it is the
only one in its subfield (or the only one at its level in the subfield)
could be tested using triangulation with the above variables.
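To make the proposed S, T, and B rates concrete, here is a minimal
sketch of how they might be computed from journal-to-journal citation
counts. The journal names and counts below are invented for
illustration, and the definitions (S = share of a journal's outgoing
references that cite itself; T = share citing other journals; B =
share of the citations a journal receives that come from itself) are
one plausible reading of the rates described above, not an established
metric.

```python
from collections import defaultdict

# Hypothetical citation counts, purely illustrative:
# (citing_journal, cited_journal) -> number of references.
citations = {
    ("J-A", "J-A"): 40, ("J-A", "J-B"): 60,
    ("J-B", "J-B"): 10, ("J-B", "J-A"): 90,
    ("J-C", "J-C"): 70, ("J-C", "J-A"): 30,
}

def journal_rates(citations):
    """Return per-journal S, T, and B rates.

    S = fraction of a journal's outgoing references citing itself,
    T = fraction citing other journals (T = 1 - S),
    B = fraction of citations *received* that come from itself.
    """
    out_total = defaultdict(int)   # references made by each journal
    in_total = defaultdict(int)    # citations received by each journal
    self_cites = defaultdict(int)  # a journal citing itself
    for (src, dst), n in citations.items():
        out_total[src] += n
        in_total[dst] += n
        if src == dst:
            self_cites[src] += n
    rates = {}
    for j in set(out_total) | set(in_total):
        s = self_cites[j] / out_total[j] if out_total[j] else 0.0
        b = self_cites[j] / in_total[j] if in_total[j] else 0.0
        rates[j] = {"S": s, "T": 1.0 - s, "B": b}
    return rates

for j, r in sorted(journal_rates(citations).items()):
    print(j, {k: round(v, 2) for k, v in r.items()})
```

In this toy data, J-C sends 70% of its references to itself (a high
S-rate), while J-B sends only 10%; comparing such rates against
field-wide averages, and against the personal citation rates of each
journal's authors, is the kind of triangulation suggested above.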

Does anyone know of such studies? (Or of evidence of encouraging
self-citation in any way?)

It goes without saying that once the journal literature is open-access,
potential journal-based biases like this will be far less consequential,
because there will be many direct measures of a paper's or author's
research impact, among which the citation impact factor of the journal
in which the paper appeared will be a relatively minor one.
http://www.ecs.soton.ac.uk/~harnad/Temp/self-archiving.htm

Stevan Harnad




More information about the Jrnlnote mailing list