My reply to Mat w/o attachment

Radford Neal radford at cs.toronto.edu
Sat Nov 23 10:26:08 EST 2002


In article <pan.2002.11.22.22.41.50.193755.3757 at molden_dot_net.invalid>,
Sturla Molden  <sturla at molden_dot_net.invalid> wrote:

>The original argument for the p-value was "modus tollens":
>
>(H0 predicts outcome) & (outcome is not observed) => (H0 must be false)
>
>This is of course a valid statement. What it amounts to saying is that 
>we can reject H0 on the grounds that we did not get the predicted result. 
>And we can compute what results H0 predicts.

>So the case for rejecting H0 is specifically based on rejecting the 
>hypothesis by modus tollens (cf. Karl Popper). There is no implication of 
>"inverted conditionality" in the argument, as it would be meaningless by 
>definition. The argument is modus tollens, which is valid.


Actually, such rules of deductive logic are not applicable to statistical
inference.  Deductive logic has the property that if from true premises
A and B you can deduce P, then you can still deduce P after learning
additional true premises C and D.  Statistical inference does not have
this property.  From data X1 and X2 you may reject H0, but after observing
X3 and X4, you may change your mind and decide that you now have no reason
to reject H0.
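
For concreteness, here is a minimal sketch in Python, with made-up data
values, assuming H0 says the observations are N(0,1) and using a two-sided
z-test with known variance:

from math import sqrt, erf

def p_value(data):
    # Two-sided z-test p-value for H0: mean = 0, with known sigma = 1.
    n = len(data)
    z = (sum(data) / n) * sqrt(n)     # standardized sample mean
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))

x1, x2 = 1.8, 1.6      # hypothetical first two observations, far from 0
x3, x4 = -1.5, -1.7    # hypothetical next two, pulling the mean back to 0

print(p_value([x1, x2]))            # about 0.016 -> reject H0 at the 5% level
print(p_value([x1, x2, x3, x4]))    # about 0.92  -> no reason to reject H0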

To handle inference in contexts of uncertainty, you need probability.
This is why non-Bayesian statistical methods keep getting into trouble.
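
As a sketch of how a probability-based treatment handles the same made-up
data (assuming H0: X ~ N(0,1), H1: X ~ N(mu,1) with a prior mu ~ N(0,1),
and equal prior odds), the posterior probability of H0 simply rises and
falls coherently as the data arrive:

from math import sqrt, exp, pi

def posterior_prob_H0(data, tau=1.0):
    # P(H0 | data) with equal prior odds, where
    #   H0: X_i ~ N(0, 1)
    #   H1: X_i ~ N(mu, 1), with prior mu ~ N(0, tau^2)
    n = len(data)
    s = sum(data)
    ss = sum(x * x for x in data)
    # Likelihood under H0, and marginal likelihood under H1 (mu integrated out).
    m0 = (2 * pi) ** (-n / 2) * exp(-0.5 * ss)
    m1 = (2 * pi) ** (-n / 2) / sqrt(n * tau ** 2 + 1) \
         * exp(-0.5 * ss + 0.5 * tau ** 2 * s ** 2 / (n * tau ** 2 + 1))
    return m0 / (m0 + m1)

print(posterior_prob_H0([1.8, 1.6]))              # about 0.20
print(posterior_prob_H0([1.8, 1.6, -1.5, -1.7]))  # about 0.69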

   Radford Neal

----------------------------------------------------------------------------
Radford M. Neal                                       radford at cs.utoronto.ca
Dept. of Statistics and Dept. of Computer Science radford at utstat.utoronto.ca
University of Toronto                     http://www.cs.utoronto.ca/~radford
----------------------------------------------------------------------------


