laubach at biogfx.neuro.wfu.edu (Mark Laubach) writes:
> To those who worry about someone re-analyzing their data, coming
> to conclusions other than those made by the collectors of the
> data, I can only say that I thought the whole goal of science was
> to check all possible interpretations of a data set. Alternative
> interpretations, IMHO, are the root of debate and progress in
> science at large and could not be anything but beneficial for
> anyone willing to let their data speak for themselves.
I think this was probably written in response to my comments, but it
isn't quite what I had in mind. The problem is that most types of
real laboratory data are messy in various ways, with noise and
artefacts of one sort or another. The people who record the data
are, one hopes, aware of the problems, and know that they need to
compensate for them or else simply avoid doing certain types of
analysis. People
who just pull a data set off a web site won't necessarily know
about these things. Even if the organizer tries to write a detailed
description of every possible problem with the data -- often a very
tedious task -- the downloader won't necessarily read it or understand
it. The likely result is at least a few papers drawing
earthshaking theoretical conclusions from sophisticated analysis of
experimental artefacts. This is what needs to be avoided. I'm not
saying it's impossible to avoid, but it isn't all that easy.
Note that these caveats don't apply to making modeling or analysis
software publicly available, which I wholeheartedly favor.