Some myths concerning statistical hypothesis testing
Robert Dodier
robert_dodier at yahoo.com
Sun Nov 10 11:08:24 EST 2002
Sturla Molden <sturla at molden_dot_net.invalid> wrote:
> On Thu, 07 Nov 2002 18:51:38 +0100, Glen M. Sizemore wrote:
> > GS: Tell me about Bayesian statistics, please!
>
> Basically, if you want to judge whether a hypothesis is correct,
> then what you should compute is the probability that the
> hypothesis is correct. And this is what Bayesian statistics does.
>
> It is based on Bayes' theorem, which can be reformulated as:
>
> Prob(H is correct given data) is proportional to
> Prob(data given H is correct) times Prob(H is correct a priori).
In this context it is worth pointing out that the example
given above generalizes quite effortlessly into more
complex scenarios.
In the Bayesian world view, probability can be assessed for
any sort of a proposition, not just propositions involving
random variables. So one direction of generalization is to
consider problems involving not only hypotheses and observable
data, but also parameters and even theories (as organized
collections of hypotheses).
The Bayesian approach handles such structural problems naturally,
because there is a single mode of reasoning that applies at any
level. It is claimed, in conventional statistics, that since
probability cannot be assessed for a hypothesis, a different mode
of reasoning is needed. This is similar to the medieval theory
that one cannot add a number to the square of a number, since one
is a length and the other is an area. The Bayesian approach does
not suffer from this self-imposed restriction.
There is another kind of generality, which is that one can
easily handle problems in which the relation between the
hypothesis and the observables is more complex than
(hypothesis -> observables). The summary p(H|data) \propto
p(data|H) p(H) is a useful way to decompose the problem if,
essentially, H separates anything upstream from everything
downstream. However, it is easy to devise problems for which
computing p(data|H) or p(H) is not any easier than computing
p(H|data) itself; these are the problems which don't have a
nice upstream-downstream separation, that is, which have more
than one path to get from upstream to downstream. In these cases,
the Bayesian approach still shows the way to a solution.
Now, computing the solution may be extremely difficult --
generally involving multidimensional integrations. However,
at least you can see where you're headed -- there is a
``right answer'' to work towards.
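As a small illustration of working toward that ``right answer'' numerically, here is a Monte Carlo sketch (my own toy model, not from the post): a coin with unknown bias theta, a uniform prior on [0, 1], and 7 heads observed in 10 flips. The posterior mean is approximated by weighting prior draws by their likelihoods, which is one simple stand-in for the multidimensional integrations mentioned above:

```python
import random

# Toy model: binomial data, uniform prior on the bias theta.
heads, flips = 7, 10

def likelihood(theta):
    # Binomial likelihood up to a constant factor
    return theta**heads * (1 - theta)**(flips - heads)

random.seed(0)
# Draws from the uniform prior on [0, 1]
samples = [random.random() for _ in range(100_000)]

# Weight each draw by its likelihood (importance weighting)
weights = [likelihood(t) for t in samples]
norm = sum(weights)

# Posterior mean of theta: weighted average of the draws
post_mean = sum(t * w for t, w in zip(samples, weights)) / norm
print(round(post_mean, 3))  # close to the exact Beta(8, 4) mean, 8/12
```

In this one-dimensional case the integral has a closed form (the posterior is Beta(8, 4)), but the same weighting scheme carries over, at increasing computational cost, to the multidimensional problems where no closed form exists.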
For what it's worth,
Robert Dodier
--
``He wins most who toys with the dies.'' -- David O'Bedlam