use of a Likelihood Ratio Test
Jack Sullivan
jacks at uidaho.edu
Thu Oct 11 15:16:48 EST 2001
Hi folks - Lots of issues with LRT's and model selection. First, John's
right regarding the application of LRT's in which one model is a special
case of the other. In the absence of a nested relationship between
alternative models, however, the AIC is still an appropriate means of
choosing among them.
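A quick sketch of the distinction (not from the original post; the log-likelihood values below are made-up numbers purely for illustration): the LRT applies only when one model is nested in the other, while the AIC can compare non-nested candidates too.

```python
# Hedged sketch: LRT for nested models, AIC for non-nested ones.
# All likelihood values are hypothetical.

def lrt_statistic(lnL_simple, lnL_complex):
    """2 * (lnL_complex - lnL_simple); compare against a chi-square
    with df = difference in free-parameter counts. Valid only when
    the simple model is a special (nested) case of the complex one."""
    return 2.0 * (lnL_complex - lnL_simple)

def aic(lnL, k):
    """Akaike Information Criterion, 2k - 2 lnL; lower is better,
    and no nesting relationship is required."""
    return 2.0 * k - 2.0 * lnL

# Hypothetical comparison: HKY (5 free parameters) nested in GTR (9),
# so df = 4; the 5% chi-square critical value for df = 4 is 9.488.
delta = lrt_statistic(-5230.4, -5221.7)
print("LRT statistic:", delta, "reject simple model:", delta > 9.488)
print("AIC simple:", aic(-5230.4, 5), "AIC complex:", aic(-5221.7, 9))
```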
I have very mixed feelings about whether model choice should be hard coded.
On the up side, with ModelTest, more folks will end up using a reasonable
model. Very cool - a huge improvement over just picking a model arbitrarily -
excellent contribution. On the down side, the more automation there is in
data analysis, the less folks will understand the relationships among the
models. Not so cool. This is one reason I prefer a top-down approach to
model selection (i.e., start with the most complex and parameter rich and
simplify from there); looking at estimates of all parameters makes one
think about ways in which to simplify.
In answer to Brice's last questions:
HKY+gamma is not a special case of GTR[equal rates] (which is GTR+I+G w/a=inf
& pinv=0).
HKY-SSR is not a special case of HKY+Gamma with alpha=infinity.
The latter is actually HKY[equal rates], which one may think of as a special
case of HKY-SSR (i.e., with the three rate matrices forced to be
identical): just the opposite of what Brice mentioned.
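A quick numerical illustration (mine, not from the post) of why "+Gamma with alpha = infinity" collapses to the equal-rates model: a gamma rate distribution with mean 1 has variance 1/alpha, so as alpha grows the site rates converge on a single uniform rate.

```python
import random
import statistics

def gamma_rate_variance(alpha, n=20000, seed=1):
    """Sample variance of n simulated site rates drawn from a
    gamma(alpha, 1/alpha) distribution, which has mean 1 and
    variance 1/alpha."""
    rng = random.Random(seed)
    return statistics.variance(rng.gammavariate(alpha, 1.0 / alpha)
                               for _ in range(n))

# As alpha increases, rate variation among sites vanishes.
for alpha in (0.5, 5.0, 500.0):
    print(alpha, round(gamma_rate_variance(alpha), 3))
```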
Although SSR models usually have higher likelihoods, they often don't
perform as well as gamma, I+gamma, or even I alone (i.e., assuming some
proportion of sites are invariable and rates at variable sites are
uniform). The paper by Thomas Buckley et al. (2000, Syst. Biol. 50:67) is an
excellent demonstration of this. Certainly SSR+G may deal with the problem
SSR has of assuming all sites in a rate class have a uniform rate, but
we'll eventually run into the problem of trying to estimate too many
parameters from too few data.
This illustrates that better fit models (as judged by higher likelihood
scores) may not necessarily be the best ones to use for phylogenetic
estimation. Certainly the closer one gets to the true model, the greater
the probability that phylogenetic estimation using that model will be
consistent. However, all our models are wrong and we have yet to determine
how wrong a model can be, and more importantly the manner(s) in which a
model can be violated, yet still be adequate.
My take on the SSR vs gamma issue is that SSR's improvement in likelihood
is associated with differences in base frequencies among codon positions.
Pooling across codon positions (as in, say, GTR+G) hides position-specific
differences in base frequencies that can be huge: hence the often
dramatically higher likelihoods for SSR models relative to gamma, etc.
However, in terms of phylogenetic performance, it's probably more important
to model rate variation among sites better than it is to better model base
frequencies. That's a guess at this point.
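A toy demonstration of the pooling effect (hypothetical sequence, made up for illustration): per-codon-position base frequencies can differ sharply even when the pooled frequencies look unremarkable.

```python
from collections import Counter

def base_freqs(sites):
    """Relative frequency of each base in a string of sites."""
    counts = Counter(sites)
    total = sum(counts.values())
    return {b: round(counts[b] / total, 2) for b in "ACGT"}

# Made-up in-frame coding sequence, repeated to mimic alignment length.
seq = "ATGGCACGTTTAGCC" * 10
print("pooled:", base_freqs(seq))
for pos in range(3):
    # seq[pos::3] picks out every site at one codon position
    print(f"codon position {pos + 1}:", base_freqs(seq[pos::3]))
```

In this toy case, A never appears at second positions even though the pooled A frequency is 0.2, exactly the kind of position-specific signal a pooled model averages away.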
So, in the meantime, I stick with an I, a Gamma, or an I+Gamma, rather than
SSR's, to deal with rate heterogeneity across sites. All are wrong,
certainly, but they seem to perform pretty well (based on a paper Dave
Swofford and I have in press), at least under simple conditions that have
been simulated to date.
I hope that helps rather than hinders.....
Jack