# Least squares

### From Motofit

The least squares method is often used as a way of fitting data. In this method you have a dataset, *y _{obs}*, which consists of *L* datapoints. Associated with each of the datapoints is an experimental uncertainty, *y _{error}*. Supposing you are able to describe the data with a theoretical model, you can then calculate the expected values, *y _{calc}*, for each of the *L* datapoints.

The χ^{2} value is simply the sum, over all points, of the squared difference between *y _{obs}* and *y _{calc}*, each divided by the squared error for that point:

$$\chi^{2} = \sum_{i=1}^{L} \left( \frac{y_{obs,i} - y_{calc,i}}{y_{error,i}} \right)^{2}$$

where there are a total of *L* measured datapoints, *y _{obs}* (each with a statistical error of *y _{error}*), and each has a corresponding theoretical value, *y _{calc}*. There are a total of *p* variables allowed to change during the fit procedure.

If one chooses to fit without error weighting, then the denominator is set to 1 for each of the points.

The aim of the optimization, or fitting, routine is to minimise the χ^{2} value.
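For simple models the minimisation can even be done analytically. As a sketch (again plain Python, not Motofit's own routine), for the one-parameter model *y _{calc}* = *a·x*, setting dχ²/d*a* = 0 gives a closed-form best-fit slope:

```python
def fit_slope(x, y_obs, y_error):
    """Weighted least-squares slope for the model y_calc = a * x.

    Setting d(chi^2)/da = 0 yields the closed-form minimiser:
        a = sum(x * y / err^2) / sum(x^2 / err^2)
    """
    w = [1.0 / err ** 2 for err in y_error]       # weights 1/sigma^2
    num = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y_obs))
    den = sum(wi * xi * xi for wi, xi in zip(w, x))
    return num / den
```

With noise-free data such as `x = [1, 2, 3]`, `y_obs = [2, 4, 6]` and unit errors, the routine recovers the slope `a = 2.0` exactly. Real fitting problems with more parameters are minimised numerically rather than in closed form.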