I think that I can answer Brice's second question:
I (and others) have also had concerns about the order dependence in hLRT.
The Akaike Information Criterion (AIC), which Modeltest produces as secondary
output, compares all of the candidate models at once. Often it arrives at the
same conclusion as the hLRT; but in cases where the hierarchy can lead one
down the wrong path, it yields a better-supported choice of model.
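To see why a simultaneous comparison cannot be order dependent, here is a
minimal sketch in Python; the log-likelihood values and parameter counts are
invented for illustration, not taken from any real data set or Modeltest run:

```python
# Hypothetical scores, chosen to mirror Brice's situation: TrN93 beats
# HKY85 without gamma, but HKY85+G beats TrN93+G.
models = {
    # name: (log-likelihood, number of free model parameters)
    "JC69":    (-5230.1, 0),
    "HKY85":   (-5190.4, 4),
    "TrN93":   (-5189.8, 5),
    "HKY85+G": (-5150.2, 5),
    "TrN93+G": (-5149.9, 6),
}

def aic(lnL, k):
    """AIC = -2 lnL + 2k; smaller is better."""
    return -2.0 * lnL + 2.0 * k

# Rank every model in one pass -- there is no hierarchy of pairwise
# tests, so the winner cannot depend on the order of comparisons.
ranked = sorted(models.items(), key=lambda kv: aic(*kv[1]))
for name, (lnL, k) in ranked:
    print(f"{name:8s} AIC = {aic(lnL, k):.1f}")
```

With these made-up numbers HKY85+G wins overall even though TrN93 beats
HKY85 in the no-gamma comparison, which is exactly the trap a fixed
hierarchy can fall into.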
Perhaps Keith or David can comment on why they favor the hLRT?
<learn at u.washington.edu>
in article 9q27pa$nde$1 at mercury.hgmp.mrc.ac.uk, Brice Quenoville at
quenovib at naos.si.edu wrote on 10/10/2001 12:32 pm:
> I am using Modeltest to find the evolutionary model that best fits my data
> given an assumed tree, and have a question on how one should apply the
> Likelihood Ratio Test.
>
> My question is the following: are any two models with a different number of
> free parameters nested, and thus comparable through a LRT, or are there
> other restrictions than just having d.f. > 0?
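Nesting is stricter than just d.f. > 0: the simpler model must be obtainable
from the richer one by constraining parameters. HKY85 and TrN93 qualify,
since HKY85 is TrN93 with its two transition rates forced equal. A minimal
sketch of the resulting test, using invented log-likelihood values rather
than anything from a real analysis:

```python
# HKY85 is nested in TrN93 (one extra free transition-rate parameter),
# so twice the log-likelihood difference is approximately chi-square
# distributed under the simpler model. Values below are made up.
lnL_hky = -5190.4   # simpler (null) model
lnL_trn = -5189.8   # richer model, one extra free parameter
df = 1              # difference in number of free parameters

delta = 2.0 * (lnL_trn - lnL_hky)   # LRT statistic
crit_5pct = 3.841                   # chi-square 5% critical value, df = 1

print(f"delta = {delta:.2f}")
if delta > crit_5pct:
    print("reject the simpler model (TrN93 fits significantly better)")
else:
    print("keep the simpler model (HKY85)")
```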
> Modeltest uses a "step by step with no return" procedure based on the LRT,
> and it seems to me that the model given at the end is not always the best
> one. For instance I sometimes have TrN93 better than HKY, but HKY + gamma
> better than TrN93 + gamma. The first two are compared before the second two
> in Modeltest, so I will end up with a model equal to or more complex than
> TrN93. I understand this if different parameters have different, additive
> and non-independent effects on the likelihood score, but then why not make
> a program that would compare all possible nested models without ordering
> the comparisons? I would then guess that there must be other restrictions,
> and I am curious to know which ones, or whether it has to do with
> computation time.
>
> Thanks for any input,