Can any readers suggest on-line resources for teaching scientists with
limited mathematical training how to avoid these sorts of errors before
the referees get burdened with them?
In article <19970305061900.BAA16622 at ladder01.news.aol.com>,
elcoyotero at aol.com (ElCoyotero) wrote:
> I must agree with the previous posters that there are far too many
> examples of inappropriate, misused, and misleading statistical analyses in
> published research. As an ecologist I am more aware of examples in this
> field than in medicine, but the impact that this unfortunate situation has
> on the direction of future research and the expenditure of funds for
> corrective measures in either field can be frightening.
> I believe that one way to address this problem is in the peer review
> process. If every journal editor were to include, for each article, one
> reviewer whose primary purpose is to determine whether the statistical
> analyses were appropriate for the experimental design and data, and were
> done correctly, the editor could then give the authors an opportunity to
> rectify inappropriate or incorrect analyses.
> Further corrective measures need to be taken at a more basic level.
> While it probably isn't necessary for every researcher to become an
> accomplished statistician, a certain level of statistical sophistication
> is desirable. For today's students who are tomorrow's research
> scientists, a certain amount of course work in statistics should be a
> requirement, especially at the Ph.D. level. In addition to basic
> classical statistics, this should include some form of instruction in
> experimental design, as well as training on when and how to use
> non-parametric statistics. Current researchers owe it not only to
> themselves, but also to the readers and decision makers who will be
> acting on their published research, to attain this same level of
> statistical competence through some form of personal development.
> Jesse M. Purvis, Ph.D.
> elcoyotero at aol.com
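To make the "when to use non-parametric statistics" point concrete, here is a minimal sketch of my own (not from the post above), using SciPy: on strongly skewed data, a parametric two-sample t-test and its non-parametric counterpart, the Mann-Whitney U test, can be run side by side, and the rank-based test is the more defensible choice when the normality assumption behind the t-test clearly fails. The sample sizes, distributions, and seed are all illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two skewed (log-normal) samples whose typical values differ --
# exactly the situation where a naive t-test can mislead.
group_a = rng.lognormal(mean=0.0, sigma=1.0, size=30)
group_b = rng.lognormal(mean=0.8, sigma=1.0, size=30)

# Parametric test: assumes roughly normal data within each group.
t_stat, t_p = stats.ttest_ind(group_a, group_b, equal_var=False)

# Non-parametric alternative: only assumes observations can be ranked.
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"Welch t-test:    p = {t_p:.4f}")
print(f"Mann-Whitney U:  p = {u_p:.4f}")
```

This is the sort of side-by-side check a statistically trained reviewer might ask authors to report when the data are visibly non-normal.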