Kevin Cowtan kdc3 at
Wed Oct 29 07:51:33 EST 1997

Kevin Cowtan wrote:
> Why are these different? Well, in practice we know that there are errors
> in the measured magnitudes, and usually we have some estimate of these
> errors. Least squares will introduce bogus features into the model in
> an attempt to reproduce those errors in the magnitudes. Maximum
> likelihood will only try to fit the data as well as the error estimates
> require, and, of the ensemble of possible models that meet that
> criterion, it will produce the most probable one.

Eleanor Dodson adds that maximum likelihood refinement also takes into
account model incompleteness and coordinate error: least squares
introduces errors when trying to fit an incomplete model to complete
structure factors. As a result, ML difference maps are considerably more
powerful than conventional difference maps at revealing missing density.
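A toy numerical sketch of the first point (this is illustrative Python, not
crystallographic refinement code, and the magnitudes and sigmas are made up):
for Gaussian errors with known standard deviations, maximum likelihood reduces
to inverse-variance weighting, so a very uncertain measurement barely perturbs
the fit, while unweighted least squares chases it.

```python
# Five "measured magnitudes" of a quantity whose true value is 10.0.
# Four are precise (sigma 0.5); the last is very noisy (sigma 5.0)
# and happens to be far off.
obs = [10.1, 9.9, 10.2, 9.8, 15.0]
sigmas = [0.5, 0.5, 0.5, 0.5, 5.0]

# Unweighted least squares: the plain mean, pulled toward the bad point.
ls_fit = sum(obs) / len(obs)

# Gaussian maximum likelihood with known sigmas: the inverse-variance
# weighted mean, which downweights the noisy measurement.
weights = [1.0 / s**2 for s in sigmas]
ml_fit = sum(w * o for w, o in zip(weights, obs)) / sum(weights)

print(ls_fit)  # 11.0 -- dragged a full unit off by one noisy point
print(ml_fit)  # about 10.01 -- fits only as well as the errors require
```

The same idea carries over to refinement: least squares treats every
magnitude as equally trustworthy, while maximum likelihood lets the error
estimates decide how hard each observation should pull on the model.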


More information about the Xtal-log mailing list