(Please forgive my morning incoherence -- not enough coffee yet)
At 8:31 am -0700 18/10/99, James McInerney wrote:
...
>The question of bootstrapping with or without constant sites is one that
>has not yet been adequately answered. There is the philosophical point
>that all sites should be included in bootstrapping, since the raw datum
>is a random (?) sample from the universe of sample points (sequences).
>Therefore, in order to adequately characterise this universe from such a
>small sample, we must use all the information available to us.
(Playing devil's advocate here, perhaps :) But the underlying
assumption of bootstrapping is that the observed sample is
representative of the whole population, isn't it? In which case I
wonder how that assumption could possibly be broken by constant
sites that are genuinely invariable (as in a covarion model), as
opposed to sites that just happen to be changing very slowly (an
iid rates-across-sites model). I suppose I'm thinking about James'
parenthesised "?" above in a similarly concerned vein. Views?
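(To make the two resampling schemes concrete, here's a minimal sketch in Python. The toy alignment, the helper names, and the with/without-constant-sites split are all made up for illustration; neither correspondent proposed this code, and real analyses would of course use a proper phylogenetics package.)

```python
import random

# Toy alignment: each string is one sequence; columns are sites.
# Purely illustrative data -- not from any real dataset.
alignment = [
    "ACGTACGTAA",
    "ACGTACGAAA",
    "ACGAACGTAA",
]

def columns(aln):
    """Return the alignment as a list of site columns."""
    return ["".join(seq[i] for seq in aln) for i in range(len(aln[0]))]

def is_constant(col):
    """A site is constant if every sequence shows the same state."""
    return len(set(col)) == 1

def bootstrap_sites(cols, rng):
    """Resample sites with replacement -- the standard bootstrap step."""
    return [rng.choice(cols) for _ in cols]

rng = random.Random(42)
all_cols = columns(alignment)
variable_cols = [c for c in all_cols if not is_constant(c)]

# Scheme 1: bootstrap over ALL sites, constant ones included.
rep_all = bootstrap_sites(all_cols, rng)
# Scheme 2: bootstrap over variable sites only.
rep_var = bootstrap_sites(variable_cols, rng)
```

Each replicate would then be handed to the tree-making method of choice; the philosophical question in the thread is just which of `all_cols` or `variable_cols` constitutes the "universe of sample points" being resampled.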
>There is then the counterpoint that constant sites convey little or no
>information and indeed contribute to a violation of an assumption that
>is made by many tree-making methods (the assumption that all sites are
>free to change).
(Not all models -- but that's another issue entirely...)
Thanks for the reference posting.
Cheers,
Mike
---