mtDNA eve (response to Hedges)

David Swofford swofford at
Sun Mar 15 14:29:31 EST 1992

On 29 Feb 1992 Blair Hedges posted the following:

>    In discussing the two recent "mitochondrial Eve" reanalyses published
>in SCIENCE (255:737-739) Dave Swofford gave a good description of the
>problems encountered by parsimony programs in handling large data sets.
>However, as an author of one of those reanalyses, I must respond to Dave's
>comment that our reanalysis was "not nearly as thorough" as his (Maddison,
>Ruvolo, & Swofford, in press).  Although his paper has not yet been
>published, I have seen a copy of the manuscript.

We suspect that other readers are growing as tired of this controversy as we
are, so we will keep our comments as brief as possible.  First, let us re-
emphasize that the main point of our paper does not concern the ancestral
origin of "mitochondrial Eve"; rather, it addresses important and
fundamental methodological issues.  We are quite happy, in principle, with
the notion of African origin, but do believe that this hypothesis should not
be bolstered by data that support a non-African origin essentially as
strongly as an African one.  So, Blair's comment:

  >the conclusions of all three reanalyses are the same: parsimony analysis
  >of this data set cannot resolve the geographic origin of modern humans.

is partially correct.  But in fact, the conclusions of all three papers are
NOT the same.  We conclude that there are 3 classes of parsimonious trees
(with basal clades consisting of some Pygmies, some !Kung, or some Papua New
Guineans); none of the other papers (including the original Vigilant et al.
paper) finds and recognizes these three classes.

>Second, the most-parsimonious tree length that we report (522) is several
>steps shorter than those reported by Maddison et al.  I recently learned
>from David Maddison that the difference in tree length is apparently due to
>the inclusion or exclusion of one or a few sites in the different
>analyses. When he re-did his reanalysis with the same sites that we used,
>he obtained mpt lengths of 521 (this will be mentioned as a
>"note-added-in-proof" in their paper)

Just to set the record straight, in our first analysis we included 3 sites
that Hedges et al. did not include, making our shortest trees of length 525.
When we also excluded those 3 sites, the shortest trees from our first
analysis are of length 521, shorter than Hedges et al.'s.  We also redid our
analysis using the same sites as Hedges et al., and again found that the
shortest trees are of length 521.
Readers might be interested to know that the tree length of 528
reported by Vigilant et al. could not be duplicated; it therefore seems that
the data used by Vigilant et al. differ from those used by the recent
reanalyses, so the tree lengths are not comparable (we have no idea whether
our trees are 6 or 7 steps shorter than Vigilant et al.'s, or only 2
steps shorter).  Also, Templeton (pers. comm.) included the entire control
region in his analysis, so his tree lengths are not directly comparable
to the others.  Thus, the statement in Hedges et al. that Templeton's trees
are longer than theirs is unfounded (through no fault of their own).

>   The approach taken by Maddison et al. was to do many searches and to
>save only a few trees during each search.  Our approach was to do 5
>searches, saving a very large number of trees (10,000) with each search
>(the advantage of this approach is that shorter-length trees are
>encountered as the number of trees stacks up; each time restarting the
>count of 10,000). These two different approaches with the same data set
>resulted in mpt's of identical or nearly identical length and the same
>conclusion: no resolution. Thus I see no evidence that one was more
>thorough than the other.

This statement is perplexing.  In the early stages of our analysis, we tried
a similar approach (saving many trees rather than doing more replicate
searches).  We found that little was gained by saving many trees in each
search -- in fact, doing so lengthened each search so much that it
proved more effective to cut down the number of trees saved per search
so that many more searches could be performed.  We did over 4,000 searches.
47 of our searches found trees shorter than every one of the 50,000 trees
found by Hedges et al.  Our search strategy was effectively guaranteed to
find some trees of length 521; Hedges et al.'s strategy clearly was not.
Furthermore, they did not find ANY of the trees in which the basal clade was
non-African.  How can it *possibly* be claimed that Hedges et al.'s searches
were as thorough as ours?
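The trade-off just described -- a few searches each saving many trees, versus
many quick searches each saving few -- can be sketched with a toy
random-restart hill-climbing search.  Nothing below is the actual PAUP search
code: the objective function and neighbor set are hypothetical stand-ins, and
real parsimony searches rearrange branches on trees rather than walking
integers.

```python
import random

def search_once(score, neighbors, start, max_steps=1000):
    """One greedy search: step to the best neighbor until no neighbor
    improves the score (a local optimum)."""
    current = start
    for _ in range(max_steps):
        best = min(neighbors(current), key=score, default=current)
        if score(best) >= score(current):
            break  # trapped in a local optimum
        current = best
    return current

def multi_restart(score, neighbors, random_start, n_restarts, seed=0):
    """Many independent quick searches from random starting points;
    keep only the best result overall -- the strategy favored in the text."""
    rng = random.Random(seed)
    results = [search_once(score, neighbors, random_start(rng))
               for _ in range(n_restarts)]
    return min(results, key=score)

# Hypothetical rugged objective on 0..99 with local optima at multiples
# of 17; a single greedy search gets trapped, restarts usually escape.
def f(x):
    return (x % 17) + (x // 17)

def nbrs(x):
    return [y for y in (x - 1, x + 1) if 0 <= y <= 99]

print(search_once(f, nbrs, 50))                        # → 51 (trapped)
best = multi_restart(f, nbrs, lambda r: r.randrange(100), n_restarts=50)
print(f(best))  # score of the best local optimum found across 50 restarts
```

With many restarts the odds that at least one search lands in the basin of a
better optimum grow quickly, which is the intuition behind preferring 4,000
quick searches over 5 deep ones.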

>     I should point out that the second analysis we presented (p. 738, Fig
>1B) using the neighbor-joining method resulted in a single tree very
>quickly (minutes) allowing the option of bootstrapping (2000 replications).
>[Bootstrapping was not possible with parsimony because a single cycle of
>the program could not be completed - i.e., all of the mpt's could not be
>found - and our PAUP analysis took 2 weeks on a Silicon Graphics
>computer!]. Although the bootstrap p-values on the nj-tree were low, that
>analysis did support an African origin and did extract information from the
>data set that the parsimony analysis could not - e.g., that all members of
>the !Kung tribe form a single group.

What's the point of bootstrapping if one is just going to ignore the
confidence statements it makes?  We would love to see a majority-rule
consensus of the trees found among the neighbor-joining (NJ) replicates.  It
was very misleading to show a strict consensus of the equal-length parsimony
trees and a fully resolved NJ tree.  A great many of the NJ trees computed
for different bootstrap resamplings failed to support African origin.  For
the full data set, NJ provides one tree that happened to favor African
origin.  Many of the parsimony trees also favored African origin.  Why, then,
did the NJ analysis "support an African origin and ... extract information
from the data set that the parsimony analysis could not"?  The NJ bootstrap
results showed that the conclusion was equivocal; so did the equally
parsimonious trees.
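A majority-rule consensus of the kind requested above amounts to counting how
often each grouping of taxa recurs across the replicate trees and keeping
those that appear in more than half.  The sketch below is a minimal
illustration, not any published program's code; trees are represented
abstractly as sets of clades, and the taxa and replicates are hypothetical.

```python
from collections import Counter

def majority_rule(trees):
    """Majority-rule consensus: keep only those clades that appear in
    more than half of the input trees.  Each tree is represented as a
    set of clades, each clade a frozenset of taxon names; returns a
    dict mapping each retained clade to its frequency."""
    counts = Counter(clade for tree in trees for clade in tree)
    cutoff = len(trees) / 2
    return {clade: n / len(trees)
            for clade, n in counts.items() if n > cutoff}

# Hypothetical bootstrap replicates over four taxa: the (A,B) grouping
# appears in 2 of 3 trees, the other groupings in only 1 each.
t1 = {frozenset("AB"), frozenset("CD")}
t2 = {frozenset("AB")}
t3 = {frozenset("AC")}

consensus = majority_rule([t1, t2, t3])
print(sorted("".join(sorted(c)) for c in consensus))  # → ['AB']
```

Applied to the NJ bootstrap replicates, a consensus of this kind would show
which groupings (if any) are supported by a majority of the resampled data
sets, rather than presenting the single fully resolved NJ tree.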

Furthermore, is it not reasonable to ask how much better (or worse) the NJ
tree is than the best tree that failed to support an African origin?  The
inability to answer this question is one of the greatest limitations of NJ.
Suppose I have a tree that I got via NJ and you have a different tree.  You
ask me "how much better is your tree?"  All I can say is "Well, mine's a
neighbor-joining tree and yours isn't."  Virtually every other phylogenetic
method has an explicit optimality criterion that can be used to compare
trees--parsimony, maximum-likelihood, additive-tree distance methods, etc.
Perhaps you could impose an optimality criterion like "minimum sum of
branch-lengths" but then it would no longer be enough to just look at the
one tree found by the NJ algorithm.  You would have to rearrange the tree to
see if other trees having lower sums of branch-lengths could be found.  And
then, NJ would no longer be able to find "a single tree very quickly
(minutes)".  It too might take "2 weeks on a Silicon Graphics computer" if
it had any hope of identifying the optimal tree(s).
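An explicit optimality criterion of the kind described above lets any two
trees be compared directly.  As a minimal illustration (not the PAUP
implementation), the Fitch algorithm scores a tree's parsimony length one
character at a time; the 4-taxon trees and data below are hypothetical.

```python
def fitch(tree, states):
    """Fitch small-parsimony pass on a rooted binary tree given as nested
    tuples of taxon names; `states` maps each taxon to its observed
    character state.  Returns (possible ancestral states, steps)."""
    if isinstance(tree, str):                      # leaf node
        return {states[tree]}, 0
    (lset, lsteps), (rset, rsteps) = (fitch(sub, states) for sub in tree)
    common = lset & rset
    if common:
        return common, lsteps + rsteps             # intersection: no step
    return lset | rset, lsteps + rsteps + 1        # union: one extra step

def tree_length(tree, characters):
    """Total parsimony length of a tree over a list of characters."""
    return sum(fitch(tree, char)[1] for char in characters)

# Hypothetical one-character data: A and B share 'g'; C and D share 'a'.
char = {"A": "g", "B": "g", "C": "a", "D": "a"}
tree1 = (("A", "B"), ("C", "D"))   # groups the matching taxa
tree2 = (("A", "C"), ("B", "D"))   # splits them
print(tree_length(tree1, [char]), tree_length(tree2, [char]))  # → 1 2
```

Under this criterion the answer to "how much better is your tree?" is simply
the difference in lengths (here, one step) -- exactly the kind of comparison
that the bare NJ algorithm, lacking an optimality criterion, cannot make.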

As a final comment on Hedges et al.'s NJ analysis, we note that they said of
the parsimony results: "the two to ten most basal nodes in the five
majority-rule trees [of the 10,000 trees for each replicate] lead
exclusively to Africans."  Referring to their NJ results, they wrote "the
two deepest branches of our neighbor-joining tree lead exclusively to
Africans."  Thus, it is unclear how results from the two methods can be
interpreted as being radically different.

>     I do not argue that parsimony is necessarily an inferior method of
>analysis - it is quite powerful and useful in many cases (this is not one
>of them) - only that systematists should be more open-minded about methods
>of analysis.  There are several very good methods available for analyzing
>DNA sequences, and...

This statement is the one that provoked us into this response.  We DID NOT
and DO NOT argue that parsimony is necessarily a *superior* method of
analysis (see Swofford and Olsen, 1990 in "Molecular Systematics" published
by Sinauer for proof that at least one of us doesn't feel this way).  We
just used the same method Vigilant et al. used, but without falling into the
trap of looking at only a few trees that supported one hypothesis and
ignoring thousands that didn't.  The issue of whether parsimony was the
most appropriate method to analyze these data was not within the scope
of our paper (remember, our primary goal was not to determine the home of 
mitochondrial Eve, but instead to examine questions in implementation
of parsimony methods).

>...unyielding adherence to one method (maximum parsimony), especially when
>it fails, is not healthy for systematics.

How does one know it "failed?"  It seems that maximum parsimony is judged to
have failed because it could not choose a single hypothesis. Why blame the
messenger?  Perhaps the multiple trees found with parsimony indicate noise
in the data, noise which is ignored by presentation of a single NJ tree.  Or
was the criterion for success whether it was able to "correctly" find
unequivocal support for African origin? The point of the Vigilant et al.
paper should then have been to compare methods in their ability to recover a
known result.  Readers, decide for yourselves if you agree with the
following logic:

    Results from method A are taken as support for hypothesis I.  Properly
    used, method A finds about equal support for mutually exclusive
    hypotheses I and II.  Method B finds equivocal support for hypothesis I.
    Conclusion: method A "failed" and method B succeeded.

If this makes sense to you, let's agree to disagree and move on to other
topics.

Dave Swofford   swofford at 
David Maddison  maddison at
David L. Swofford                 Phone:    (217)244-6959
Illinois Natural History Survey   FAX:      (217)333-4949
607 E. Peabody Drive              BITNET:   DAVESWOF at UIUCVMD
Champaign, Illinois 61820 USA     Internet: swofford at
