My reply to Mat w/o attachment
sturla at molden_dot_net.invalid
Fri Nov 22 15:49:47 EST 2002
On Wed, 13 Nov 2002 20:35:50 +0100, Glen M. Sizemore wrote:
> 1.) a p-value is a conditional probability of the form p(A/B) where A is
> the observation and B is the truth of the null hypothesis.
> 2.) you don't know if B is true or false.
> Conclusion: whatever a p-value is, it cannot be a quantitative
> assessment of the truth of B because the meaning of the p-value is
> dependent on B and you don't know what B is. Now attack the premises or
> the conclusion. I dare you.
1. The p-value is not p(A|B) where A is the observation and B is
the truth of the null hypothesis. The p-value is
p(A or more extreme data | B).
2. p(A|B) equals ZERO for data sampled from a continuous
distribution. Probability is the integral under the
probability density function, so the point probability
p(data == A | B) is zero (you must integrate from A to A).
However, the likelihood of B is the value of the pdf evaluated
at data == A, which is what Bayesian statistics uses to derive
the posterior distribution.
3. Your argument would be correct for data from a discrete
distribution, if the p-value actually were p(A|B). But since
it's not, your argument is worthless.
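To make points 1 and 2 concrete, here is a minimal sketch in Python, assuming for illustration that H0 says the data come from a standard normal distribution and that the observation is A = 1.96 (both numbers are my own hypothetical choices, not from the post):

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Cumulative distribution function of the normal distribution."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Probability density function of the normal distribution."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

# Hypothetical setup: H0 claims data ~ N(0, 1); we observe A = 1.96.
A = 1.96

# Point 2: p(data == A | B) is the integral from A to A, which is zero
# for any continuous distribution.
point_probability = normal_cdf(A) - normal_cdf(A)   # exactly 0.0

# Point 1: the (one-sided) p-value is p(A or more extreme data | B),
# i.e. the tail area beyond the observation under H0.
p_value = 1.0 - normal_cdf(A)                       # about 0.025

# The likelihood of H0, by contrast, is the density evaluated at the
# observed data, which is the quantity Bayesian updating uses.
likelihood = normal_pdf(A)                          # about 0.058
```

The sketch shows why the distinction matters: the point probability is identically zero, while the p-value and the likelihood are both nonzero but measure quite different things.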
Then the case for and against the p-value:
Ronald Fisher claimed the p-value could be a heuristic
that would roughly express our case against H0. (He was very
enigmatic on this issue.) The 5% level was used for computational
simplicity. (The original reference is the textbook "Statistical
methods for research workers".) Later on, he derived the likelihood
principle, which led to the conclusion that the p-value could not
measure the evidence against H0 after all. To circumvent this
problem, Fisher derived something he called "fiducial inference",
which has never gained popularity. It is somewhat similar to
Bayesian inference, and results in a posterior distribution.
Egon Pearson and Jerzy Neyman invented hypothesis tests with
fixed significance levels, which have nothing to do with p-values
despite the common confusion produced by introductory statistics texts.
Fisher was not particularly polite towards his opponents. His
arch-opponent was the physicist and founder of modern
Bayesian inference, Harold Jeffreys. According to an old story,
Fisher and Jeffreys were listening to a talk on significance
tests by Jerzy Neyman in London. They were both so utterly
shocked that they shook hands and promised never to insult each
other in public again. Fisher spent the rest of his life
denouncing Egon Pearson and Jerzy Neyman's approach.