# Least squares

The least squares method is often used as a way of fitting data. In this method you have a dataset, yobs, consisting of L datapoints. Associated with each datapoint is an experimental uncertainty, yerror.

Supposing you are able to describe the data with a theoretical model, you can calculate the expected value, ycalc, for each of the L datapoints.

The χ2 value is the sum of the squared differences between yobs and ycalc, with each difference divided by its corresponding error, yerror, before squaring. Normalising this sum by the number of degrees of freedom, L − p, gives the reduced χ2:

$\chi^{2} = \frac{1}{L-p}\sum_{i=1}^{L}\left(\frac{y_{i,\mathrm{obs}}-y_{i,\mathrm{calc}}}{y_{i,\mathrm{error}}}\right)^{2}$

Here there are a total of L measured datapoints, yobs (each with a statistical error, yerror), and each has a corresponding theoretical value, ycalc. A total of p variables are allowed to change during the fit procedure.
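The formula above can be sketched in a few lines of numpy. This is a minimal illustration, not from the original text; the function name and the example data are assumptions.

```python
import numpy as np

def reduced_chi_squared(y_obs, y_calc, y_error, p):
    """Reduced chi-squared: error-weighted squared residuals summed
    over all L datapoints, divided by L - p degrees of freedom."""
    y_obs = np.asarray(y_obs, dtype=float)
    y_calc = np.asarray(y_calc, dtype=float)
    y_error = np.asarray(y_error, dtype=float)
    residuals = (y_obs - y_calc) / y_error   # weight each difference by its error
    L = y_obs.size
    return np.sum(residuals**2) / (L - p)

# Illustrative data: each point deviates from the model by exactly
# one standard error, so every weighted residual is +/- 1.
y_obs = [1.0, 2.0, 3.0, 4.0]
y_calc = [1.1, 1.9, 3.1, 3.9]
y_err = [0.1, 0.1, 0.1, 0.1]
print(reduced_chi_squared(y_obs, y_calc, y_err, p=2))  # → 2.0
```

With four points, unit weighted residuals, and p = 2 fitted variables, the sum of squares is 4 and the degrees of freedom are 2, giving a reduced χ2 of 2.0.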

If one chooses to fit without error weighting, the denominator yerror is set to 1 for each of the points.

The aim of the optimization, or fitting, routine is to minimise the χ2 value.
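As a sketch of what a fitting routine does, the example below minimises χ2 for a straight-line model y = a·x + b. For a model that is linear in its parameters, the χ2 minimum can be found in closed form via the weighted normal equations; nonlinear models instead need an iterative optimiser. The model choice and the data are illustrative assumptions, not from the text.

```python
import numpy as np

# Illustrative data: x values, observations, and their errors.
x = np.array([0.0, 1.0, 2.0, 3.0])
y_obs = np.array([0.1, 1.9, 4.1, 5.9])
y_err = np.array([0.1, 0.1, 0.1, 0.1])

# Design matrix for the straight-line model y = a*x + b,
# and error weights 1/yerror^2.
A = np.column_stack([x, np.ones_like(x)])
w = 1.0 / y_err**2

# Solve the weighted normal equations (A^T W A) params = A^T W y,
# which locates the minimum of chi-squared for this linear model.
AtWA = A.T @ (w[:, None] * A)
AtWy = A.T @ (w * y_obs)
a_fit, b_fit = np.linalg.solve(AtWA, AtWy)

# chi-squared evaluated at the fitted minimum
chi2 = np.sum(((y_obs - (a_fit * x + b_fit)) / y_err) ** 2)
print(a_fit, b_fit, chi2)
```

The same weighted residuals that define χ2 appear in the weights w, so the solution of the normal equations is exactly the parameter set that minimises the χ2 sum.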