Error rate in Taq pol. 1:270?

Bryan L. Ford fordb at
Wed Feb 12 01:09:07 EST 1997

Dr. Duncan Clark wrote:
> In article <t.chappell-1002971432260001 at>, Tom
> Chappell <t.chappell at> writes
> >
> >You're using faulty logic to calculate the error rate. If you've done a 30
> >cycle PCR, the polymerase has synthesized more than 24,000 bases to
> >generate the single DNA fragment that you've subcloned and sequenced. Your
> >error rate would be about 1 in 8,000, which isn't too bad for Taq.
> Unfortunately so are you.
> 30 cycles of exponential amplification will give 2 to the power of 30
> fold synthesis ie 1,074,000,000 x 810bp.

And unfortunately there is a considerable weakness in the above
statement. It is safe to say that one *never* sees anything near the
theoretical "2 to the nth" number of products. There may be several
reasons for this, but one reason is that not all templates productively
participate in each round of PCR. In analyses of this, one often sees
terminology such as "effective cycle number", that is "2 to the 30th"
might be adjusted to "2 to the 18th" based on empirical determination of
the number of product strands. This "effective cycle number" fiction has
no theoretical basis; it is simply an empirical correction of the "2 to
the nth" fiction. Let me suggest that it would be better (but still not
perfect) to have the equation reflect an adjustment to the number of
templates at each cycle by the following fiction: the number of strands
at n cycles is approximated by raising (2-u) to the nth power, where "u"
represents the probability of any given strand not participating as a
template for any round of the PCR. Or more concretely, suppose that on
average 30% of the strands failed for one reason or another to be
extended in any cycle (a not unrealistic number, especially at the later
cycles), then the expression of the expected product numbers would be
1.7 raised to the nth power. Of course this fiction is itself flawed,
since the number of nonparticipating strands is subject to substantial
increases as the cycle number climbs (for example, from declining primer
and dNTP concentrations, thermal exhaustion of the polymerase, etc.).
But at the least, the (2-u)^n expression has the advantage that it can
much more closely model the real situation, especially if "u" itself is
redefined either empirically and/or statistically to reflect its own
dependence on cycle number.
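As a minimal sketch of the yield model described above, the following
compares the ideal 2^n figure against (2-u)^n, using the 30% per-cycle
failure probability mentioned in the text. All numbers here are
illustrative, not measured:

```python
def expected_strands(n_cycles: int, u: float = 0.0, n0: int = 1) -> float:
    """Approximate strand count after n_cycles, where u is the
    probability that a given strand fails to serve as a template
    in any one cycle (u = 0 recovers the ideal 2^n model)."""
    return n0 * (2.0 - u) ** n_cycles

ideal = expected_strands(30)          # 2^30  ~ 1.07e9
adjusted = expected_strands(30, 0.3)  # 1.7^30 ~ 8.2e6
print(f"ideal 2^30: {ideal:.3e}")
print(f"(2-0.3)^30: {adjusted:.3e}")
```

Note the two-orders-of-magnitude gap between the two estimates even
with a constant "u", which is why the ideal figure is called a fiction.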

> Anyone have a formula for actually calculating the real no. of errors
> one would see after so many cycles for a fixed target size. I use a
> published one for determining fidelity (similar to Wayne's) on a lacI
> assay but my algebra is not good enough to know how to reverse the
> formula and for a given error rate get an actual no. of errors for so
> many cycles for a fixed target size. It would be nice to have a table in
> the FAQ showing no. of errors to be expected after say 10, 15, 20, 25
> and 30 cycles for targets of 100, 500, 1000, 2500bp etc. Any offers?
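A rough generator for the requested table could look like the
following. It assumes the simplest possible model, in which errors
accumulate linearly: expected errors per final molecule ~ p * b * n,
where p is an assumed per-bp, per-duplication error rate (1/8,000
here, purely for illustration), b the target size, and n the cycle
count. This is a sketch, not the published lacI-assay formula, and it
ignores the effective-cycle correction discussed above:

```python
P = 1.0 / 8000.0                   # assumed error rate (per bp per duplication)
CYCLES = [10, 15, 20, 25, 30]
TARGETS = [100, 500, 1000, 2500]   # target sizes in bp

def expected_errors(p: float, target_bp: int, n_cycles: int) -> float:
    """Expected errors per final molecule under linear accumulation."""
    return p * target_bp * n_cycles

# Print a cycles-by-target-size table.
print("bp\\cycles" + "".join(f"{n:>8}" for n in CYCLES))
for b in TARGETS:
    print(f"{b:>9}" + "".join(f"{expected_errors(P, b, n):8.3f}" for n in CYCLES))
```

Substituting an empirically determined effective cycle number for n,
or a better-supported error rate for P, is straightforward.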

Such a table would be valuable, even if it did not reflect the
refinements mentioned above, since one can always use the table values
corresponding to the "effective cycle number" -- which anyone who can
quantitate specific DNA (e.g. on a gel or blot) and has a fairly good
handheld calculator with a "y to the x power" button can estimate by
successive iterations, to any desired degree of precision -- limited of
course by the accuracy (and specificity) of your DNA quantitation.
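The successive-iteration procedure can be sketched as follows: given a
measured amplification factor (final copies divided by input copies),
find the effective cycle number n such that 2^n matches it. The
bisection below mimics trial-and-error on a calculator's y^x key; the
closed form is simply the base-2 logarithm. The example amplification
factor is hypothetical:

```python
import math

def effective_cycles(amplification: float, tol: float = 1e-6) -> float:
    """Bisect for n such that 2**n equals the measured amplification."""
    lo, hi = 0.0, 64.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if 2.0 ** mid < amplification:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# e.g. 30 nominal cycles that yielded only ~2.6e5-fold amplification
# behave like ~18 effective cycles (the "2 to the 18th" figure above).
amp = 2.0 ** 18
print(f"by iteration:  {effective_cycles(amp):.2f}")
print(f"closed form:   {math.log2(amp):.2f}")
```

Either number can then be looked up directly in the proposed table.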

Who's gonna do that table?
