Linear least squares (mathematics)

In statistics and mathematics, linear least squares is an approach to fitting a mathematical or statistical model to data in cases where the idealized value provided by the model for any data point is expressed linearly in terms of the unknown parameters of the model. The resulting fitted model can be used to summarize the data, to predict unobserved values from the same system, and to understand the mechanisms that may underlie the system.

Mathematically, linear least squares is the problem of approximately solving an overdetermined system of linear equations, where the best approximation is defined as that which minimizes the sum of squared differences between the data values and their corresponding modeled values. The approach is called linear least squares since the assumed function is linear in the parameters to be estimated. Linear least squares problems are convex and have a closed-form solution that is unique, provided that the number of data points used for fitting equals or exceeds the number of unknown parameters, except in special degenerate situations. In contrast, non-linear least squares problems generally must be solved by an iterative procedure, and the problems can be non-convex with multiple optima for the objective function. If prior distributions are available, then even an underdetermined system can be solved using the Bayesian MMSE estimator.

In statistics, linear least squares problems correspond to a particularly important type of statistical model called linear regression which arises as a particular form of regression analysis. One basic form of such a model is an ordinary least squares model. The present article concentrates on the mathematical aspects of linear least squares problems, with discussion of the formulation and interpretation of statistical regression models and statistical inferences related to these being dealt with in the articles just mentioned. See outline of regression analysis for an outline of the topic.


Example

As a result of an experiment, four $(x, y)$ data points were obtained: $(1, 6)$, $(2, 5)$, $(3, 7)$, and $(4, 10)$. We hope to find a line $y = \beta_1 + \beta_2 x$ that best fits these four points. In other words, we would like to find the numbers $\beta_1$ and $\beta_2$ that approximately solve the overdetermined linear system

$$\begin{aligned}\beta_1 + 1\beta_2 &= 6\\ \beta_1 + 2\beta_2 &= 5\\ \beta_1 + 3\beta_2 &= 7\\ \beta_1 + 4\beta_2 &= 10\end{aligned}$$

of four equations in two unknowns in some "best" sense.

The "error", at each point, between the curve fit and the data is the difference between the right- and left-hand sides of the equations above. The least squares approach to solving this problem is to try to make the sum of the squares of these errors as small as possible; that is, to find the minimum of the function

$$\begin{aligned}S(\beta_1, \beta_2) &= \left[6 - (\beta_1 + 1\beta_2)\right]^2 + \left[5 - (\beta_1 + 2\beta_2)\right]^2 + \left[7 - (\beta_1 + 3\beta_2)\right]^2 + \left[10 - (\beta_1 + 4\beta_2)\right]^2\\ &= 4\beta_1^2 + 30\beta_2^2 + 20\beta_1\beta_2 - 56\beta_1 - 154\beta_2 + 210.\end{aligned}$$

The minimum is determined by calculating the partial derivatives of $S(\beta_1, \beta_2)$ with respect to $\beta_1$ and $\beta_2$ and setting them to zero:

$$\frac{\partial S}{\partial \beta_1} = 0 = 8\beta_1 + 20\beta_2 - 56,$$
$$\frac{\partial S}{\partial \beta_2} = 0 = 20\beta_1 + 60\beta_2 - 154.$$

This results in a system of two equations in two unknowns, called the normal equations, which when solved give

$$\beta_1 = 3.5,\qquad \beta_2 = 1.4,$$

and the equation $y = 3.5 + 1.4x$ of the line of best fit. The residuals, that is, the discrepancies between the $y$ values from the experiment and the $y$ values calculated using the line of best fit, are then found to be $1.1$, $-1.3$, $-0.7$, and $0.9$. The minimum value of the sum of squares of the residuals is $S(3.5, 1.4) = 1.1^2 + (-1.3)^2 + (-0.7)^2 + 0.9^2 = 4.2$.
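
For completeness, the normal equations above can be solved by elimination (multiply the first by 2.5 and subtract it from the second):

$$\begin{aligned} 20\beta_1 + 50\beta_2 &= 140,\\ (20\beta_1 + 60\beta_2) - (20\beta_1 + 50\beta_2) &= 154 - 140 \;\Rightarrow\; 10\beta_2 = 14,\ \beta_2 = 1.4,\\ 8\beta_1 &= 56 - 20(1.4) = 28 \;\Rightarrow\; \beta_1 = 3.5. \end{aligned}$$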

More generally, one can have $n$ regressors $x_j$, and a linear model

$$y = \beta_1 + \sum_{j=2}^{n+1} \beta_j x_{j-1}.$$

Using a quadratic model

Importantly, in "linear least squares", we are not restricted to using a line as the model as in the above example. For instance, we could have chosen the restricted quadratic model y = ? 1 x 2 {\displaystyle y=\beta _{1}x^{2}} . This model is still linear in the ? 1 {\displaystyle \beta _{1}} parameter, so we can still perform the same analysis, constructing a system of equations from the data points:

6 = ? 1 ( 1 ) 2 5 = ? 1 ( 2 ) 2 7 = ? 1 ( 3 ) 2 10 = ? 1 ( 4 ) 2 {\displaystyle {\begin{alignedat}{2}6&&\;=\beta _{1}(1)^{2}\\5&&\;=\beta _{1}(2)^{2}\\7&&\;=\beta _{1}(3)^{2}\\10&&\;=\beta _{1}(4)^{2}\\\end{alignedat}}}

The partial derivatives with respect to the parameters (this time there is only one) are again computed and set to 0:

$$\frac{\partial S}{\partial \beta_1} = 0 = 708\beta_1 - 498$$

and solved

$$\beta_1 = \frac{498}{708} \approx 0.703,$$

leading to the resulting best fit model $y = 0.703 x^2$.
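
Spelling out where the coefficients 708 and 498 come from: for this one-parameter model the sum of squared errors and its derivative are

$$S(\beta_1) = \sum_{i=1}^{4}\bigl(y_i - \beta_1 x_i^2\bigr)^2,\qquad \frac{\partial S}{\partial \beta_1} = 2\Bigl(\beta_1\sum_{i=1}^{4} x_i^4 - \sum_{i=1}^{4} x_i^2 y_i\Bigr) = 2\,(354\,\beta_1 - 249) = 708\,\beta_1 - 498.$$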


The general problem

Consider an overdetermined system

$$\sum_{j=1}^{n} X_{ij}\beta_j = y_i,\qquad (i = 1, 2, \dots, m),$$

of $m$ linear equations in $n$ unknown coefficients $\beta_1, \beta_2, \dots, \beta_n$, with $m > n$. (Note: for a linear model as above, not all of $\mathbf{X}$ contains information on the data points. The first column is populated with ones, $X_{i1} = 1$; only the other columns contain actual data; and $n$ = number of regressors + 1.) This can be written in matrix form as

$$\mathbf{X}\boldsymbol{\beta} = \mathbf{y},$$

where

$$\mathbf{X} = \begin{bmatrix} X_{11} & X_{12} & \cdots & X_{1n}\\ X_{21} & X_{22} & \cdots & X_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ X_{m1} & X_{m2} & \cdots & X_{mn} \end{bmatrix},\qquad \boldsymbol{\beta} = \begin{bmatrix} \beta_1\\ \beta_2\\ \vdots\\ \beta_n \end{bmatrix},\qquad \mathbf{y} = \begin{bmatrix} y_1\\ y_2\\ \vdots\\ y_m \end{bmatrix}.$$

Such a system usually has no solution, so the goal is instead to find the coefficients $\boldsymbol{\beta}$ which fit the equations "best", in the sense of solving the quadratic minimization problem

$$\hat{\boldsymbol{\beta}} = \underset{\boldsymbol{\beta}}{\operatorname{arg\,min}}\, S(\boldsymbol{\beta}),$$

where the objective function $S$ is given by

$$S(\boldsymbol{\beta}) = \sum_{i=1}^{m}\Bigl| y_i - \sum_{j=1}^{n} X_{ij}\beta_j \Bigr|^2 = \bigl\| \mathbf{y} - \mathbf{X}\boldsymbol{\beta} \bigr\|^2.$$

A justification for choosing this criterion is given in Properties below. This minimization problem has a unique solution, provided that the $n$ columns of the matrix $\mathbf{X}$ are linearly independent, given by solving the normal equations

$$(\mathbf{X}^{\rm T}\mathbf{X})\,\hat{\boldsymbol{\beta}} = \mathbf{X}^{\rm T}\mathbf{y}.$$

The matrix $\mathbf{X}^{\rm T}\mathbf{X}$ is known as the Gramian matrix of $\mathbf{X}$, which possesses several nice properties such as being a positive semi-definite matrix, and the matrix $\mathbf{X}^{\rm T}\mathbf{y}$ is known as the moment matrix of regressand by regressors. Finally, $\hat{\boldsymbol{\beta}}$ is the coefficient vector of the least-squares hyperplane, expressed as

$$\hat{\boldsymbol{\beta}} = (\mathbf{X}^{\rm T}\mathbf{X})^{-1}\mathbf{X}^{\rm T}\mathbf{y}.$$

Example implementation

The approach is straightforward to implement in any numerical environment, such as MATLAB, Python (NumPy), Julia, or R: form the design matrix for the data and either solve the normal equations or, preferably, call a built-in least-squares solver.
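
As an illustration (a minimal sketch, not an original listing from the article), the four-point example above can be solved in Python with NumPy; `numpy.linalg.lstsq` is used here instead of explicitly forming $(\mathbf{X}^{\rm T}\mathbf{X})^{-1}$:

```python
import numpy as np

# Data from the first example: four (x, y) points.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([6.0, 5.0, 7.0, 10.0])

# Design matrix with a column of ones (intercept) and a column of x values.
X = np.column_stack([np.ones_like(x), x])

# Solve the least squares problem min ||y - X beta||^2.
beta, residual_ss, rank, singular_values = np.linalg.lstsq(X, y, rcond=None)

print(beta)         # approximately [3.5, 1.4]
print(residual_ss)  # approximately [4.2], the minimum sum of squared residuals
```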



Derivation of the normal equations

Define the $i$th residual to be

$$r_i = y_i - \sum_{j=1}^{n} X_{ij}\beta_j.$$

Then $S$ can be rewritten

$$S = \sum_{i=1}^{m} r_i^2.$$

Given that $S$ is convex, it is minimized when its gradient vector is zero (this follows by definition: if the gradient vector were not zero, there would be a direction in which we could move to decrease it further; see maxima and minima). The elements of the gradient vector are the partial derivatives of $S$ with respect to the parameters:

$$\frac{\partial S}{\partial \beta_j} = 2\sum_{i=1}^{m} r_i\,\frac{\partial r_i}{\partial \beta_j}\qquad (j = 1, 2, \dots, n).$$

The derivatives are

$$\frac{\partial r_i}{\partial \beta_j} = -X_{ij}.$$

Substitution of the expressions for the residuals and the derivatives into the gradient equations gives

$$\frac{\partial S}{\partial \beta_j} = 2\sum_{i=1}^{m}\Bigl(y_i - \sum_{k=1}^{n} X_{ik}\beta_k\Bigr)(-X_{ij})\qquad (j = 1, 2, \dots, n).$$

Thus if $\hat{\boldsymbol{\beta}}$ minimizes $S$, we have

$$2\sum_{i=1}^{m}\Bigl(y_i - \sum_{k=1}^{n} X_{ik}\hat{\beta}_k\Bigr)(-X_{ij}) = 0\qquad (j = 1, 2, \dots, n).$$

Upon rearrangement, we obtain the normal equations:

$$\sum_{i=1}^{m}\sum_{k=1}^{n} X_{ij} X_{ik}\hat{\beta}_k = \sum_{i=1}^{m} X_{ij} y_i\qquad (j = 1, 2, \dots, n).$$

The normal equations are written in matrix notation as

$$(\mathbf{X}^{\rm T}\mathbf{X})\,\hat{\boldsymbol{\beta}} = \mathbf{X}^{\rm T}\mathbf{y}$$

(where $\mathbf{X}^{\rm T}$ is the matrix transpose of $\mathbf{X}$).

The solution of the normal equations yields the vector $\hat{\boldsymbol{\beta}}$ of the optimal parameter values.

Derivation directly in terms of matrices

The normal equations can be derived directly from a matrix representation of the problem as follows. The objective is to minimize

$$S(\boldsymbol{\beta}) = \bigl\|\mathbf{y} - \mathbf{X}\boldsymbol{\beta}\bigr\|^2 = (\mathbf{y} - \mathbf{X}\boldsymbol{\beta})^{\rm T}(\mathbf{y} - \mathbf{X}\boldsymbol{\beta}) = \mathbf{y}^{\rm T}\mathbf{y} - \boldsymbol{\beta}^{\rm T}\mathbf{X}^{\rm T}\mathbf{y} - \mathbf{y}^{\rm T}\mathbf{X}\boldsymbol{\beta} + \boldsymbol{\beta}^{\rm T}\mathbf{X}^{\rm T}\mathbf{X}\boldsymbol{\beta}.$$

Note that $(\boldsymbol{\beta}^{\rm T}\mathbf{X}^{\rm T}\mathbf{y})^{\rm T} = \mathbf{y}^{\rm T}\mathbf{X}\boldsymbol{\beta}$ has dimension 1×1 (the number of columns of $\mathbf{y}$), so it is a scalar and equal to its own transpose; hence $\boldsymbol{\beta}^{\rm T}\mathbf{X}^{\rm T}\mathbf{y} = \mathbf{y}^{\rm T}\mathbf{X}\boldsymbol{\beta}$ and the quantity to minimize becomes

$$S(\boldsymbol{\beta}) = \mathbf{y}^{\rm T}\mathbf{y} - 2\boldsymbol{\beta}^{\rm T}\mathbf{X}^{\rm T}\mathbf{y} + \boldsymbol{\beta}^{\rm T}\mathbf{X}^{\rm T}\mathbf{X}\boldsymbol{\beta}.$$

Differentiating this with respect to $\boldsymbol{\beta}$ and equating to zero to satisfy the first-order conditions gives

$$-\mathbf{X}^{\rm T}\mathbf{y} + (\mathbf{X}^{\rm T}\mathbf{X})\boldsymbol{\beta} = 0,$$

which is equivalent to the above-given normal equations. A sufficient condition for satisfaction of the second-order conditions for a minimum is that $\mathbf{X}$ have full column rank, in which case $\mathbf{X}^{\rm T}\mathbf{X}$ is positive definite.

Derivation without calculus

When $\mathbf{X}^{\rm T}\mathbf{X}$ is positive definite, the formula for the minimizing value of $\boldsymbol{\beta}$ can be derived without the use of derivatives. The quantity

$$S(\boldsymbol{\beta}) = \mathbf{y}^{\rm T}\mathbf{y} - 2\boldsymbol{\beta}^{\rm T}\mathbf{X}^{\rm T}\mathbf{y} + \boldsymbol{\beta}^{\rm T}\mathbf{X}^{\rm T}\mathbf{X}\boldsymbol{\beta}$$

can be written as

$$\langle\boldsymbol{\beta}, \boldsymbol{\beta}\rangle - 2\bigl\langle\boldsymbol{\beta}, (\mathbf{X}^{\rm T}\mathbf{X})^{-1}\mathbf{X}^{\rm T}\mathbf{y}\bigr\rangle + \bigl\langle(\mathbf{X}^{\rm T}\mathbf{X})^{-1}\mathbf{X}^{\rm T}\mathbf{y}, (\mathbf{X}^{\rm T}\mathbf{X})^{-1}\mathbf{X}^{\rm T}\mathbf{y}\bigr\rangle + C,$$

where $C$ depends only on $\mathbf{y}$ and $\mathbf{X}$, and $\langle\cdot,\cdot\rangle$ is the inner product defined by

$$\langle x, y\rangle = x^{\rm T}(\mathbf{X}^{\rm T}\mathbf{X})\,y.$$

It follows that $S(\boldsymbol{\beta})$ is equal to

$$\bigl\langle\boldsymbol{\beta} - (\mathbf{X}^{\rm T}\mathbf{X})^{-1}\mathbf{X}^{\rm T}\mathbf{y},\ \boldsymbol{\beta} - (\mathbf{X}^{\rm T}\mathbf{X})^{-1}\mathbf{X}^{\rm T}\mathbf{y}\bigr\rangle + C$$

and therefore minimized exactly when

$$\boldsymbol{\beta} - (\mathbf{X}^{\rm T}\mathbf{X})^{-1}\mathbf{X}^{\rm T}\mathbf{y} = 0.$$

Generalization for complex equations

In general, the coefficients of the matrices $\mathbf{X}$, $\boldsymbol{\beta}$ and $\mathbf{y}$ can be complex. By using a Hermitian transpose instead of a simple transpose, it is possible to find a vector $\hat{\boldsymbol{\beta}}$ which minimizes $S(\boldsymbol{\beta})$, just as for the real matrix case. In order to get the normal equations we follow a similar path as in previous derivations:

$$S(\boldsymbol{\beta}) = \langle\mathbf{y} - \mathbf{X}\boldsymbol{\beta},\ \mathbf{y} - \mathbf{X}\boldsymbol{\beta}\rangle = \langle\mathbf{y}, \mathbf{y}\rangle - \overline{\langle\mathbf{X}\boldsymbol{\beta}, \mathbf{y}\rangle} - \overline{\langle\mathbf{y}, \mathbf{X}\boldsymbol{\beta}\rangle} + \langle\mathbf{X}\boldsymbol{\beta}, \mathbf{X}\boldsymbol{\beta}\rangle = \mathbf{y}^{\rm T}\overline{\mathbf{y}} - \boldsymbol{\beta}^{\dagger}\mathbf{X}^{\dagger}\mathbf{y} - \mathbf{y}^{\dagger}\mathbf{X}\boldsymbol{\beta} + \boldsymbol{\beta}^{\rm T}\mathbf{X}^{\rm T}\overline{\mathbf{X}}\,\overline{\boldsymbol{\beta}},$$

where $\dagger$ stands for the Hermitian transpose.

We should now take derivatives of $S(\boldsymbol{\beta})$ with respect to each of the coefficients $\beta_j$, but first we separate real and imaginary parts to deal with the conjugate factors in the expression above. For the $\beta_j$ we have

$$\beta_j = \beta_j^R + i\beta_j^I$$

and the derivatives change into

$$\frac{\partial S}{\partial \beta_j} = \frac{\partial S}{\partial \beta_j^R}\frac{\partial \beta_j^R}{\partial \beta_j} + \frac{\partial S}{\partial \beta_j^I}\frac{\partial \beta_j^I}{\partial \beta_j} = \frac{\partial S}{\partial \beta_j^R} - i\,\frac{\partial S}{\partial \beta_j^I}\qquad (j = 1, 2, 3, \dots, n).$$

After rewriting $S(\boldsymbol{\beta})$ in the summation form and writing $\beta_j$ explicitly, we can calculate both partial derivatives, with the result:

$$\frac{\partial S}{\partial \beta_j^R} = -\sum_{i=1}^{m}\Bigl(\overline{X}_{ij} y_i + \overline{y}_i X_{ij}\Bigr) + 2\sum_{i=1}^{m} X_{ij}\overline{X}_{ij}\beta_j^R + \sum_{i=1}^{m}\sum_{k\neq j}^{n}\Bigl(X_{ij}\overline{X}_{ik}\overline{\beta}_k + \beta_k X_{ik}\overline{X}_{ij}\Bigr),$$
$$-i\,\frac{\partial S}{\partial \beta_j^I} = \sum_{i=1}^{m}\Bigl(\overline{X}_{ij} y_i - \overline{y}_i X_{ij}\Bigr) - 2i\sum_{i=1}^{m} X_{ij}\overline{X}_{ij}\beta_j^I + \sum_{i=1}^{m}\sum_{k\neq j}^{n}\Bigl(X_{ij}\overline{X}_{ik}\overline{\beta}_k - \beta_k X_{ik}\overline{X}_{ij}\Bigr),$$

which, after adding them together and comparing to zero (the minimization condition for $\hat{\boldsymbol{\beta}}$), yields

$$\sum_{i=1}^{m} X_{ij}\overline{y}_i = \sum_{i=1}^{m}\sum_{k=1}^{n} X_{ij}\overline{X}_{ik}\overline{\hat{\beta}}_k\qquad (j = 1, 2, 3, \dots, n).$$

In matrix form:

$$\mathbf{X}^{\rm T}\overline{\mathbf{y}} = \mathbf{X}^{\rm T}\overline{\bigl(\mathbf{X}\hat{\boldsymbol{\beta}}\bigr)}\qquad\text{or}\qquad \bigl(\mathbf{X}^{\dagger}\mathbf{X}\bigr)\hat{\boldsymbol{\beta}} = \mathbf{X}^{\dagger}\mathbf{y}.$$
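
As a quick numerical check (a hedged sketch; the random data below are made up for illustration), NumPy's least-squares solver handles complex matrices with exactly this Hermitian-transpose convention:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random overdetermined complex system X beta ~ y (m = 6 equations, n = 2 unknowns).
m, n = 6, 2
X = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
y = rng.normal(size=m) + 1j * rng.normal(size=m)

# Solution via the complex normal equations (X^dagger X) beta = X^dagger y ...
beta_normal = np.linalg.solve(X.conj().T @ X, X.conj().T @ y)

# ... agrees with the library least-squares solver.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta_normal, beta_lstsq))  # True
```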


Computation

A general approach to the least squares problem $\min_{\boldsymbol{\beta}}\,\bigl\|\mathbf{y} - X\boldsymbol{\beta}\bigr\|^2$ can be described as follows. Suppose that we can find an $n$ by $m$ matrix $S$ such that $XS$ is an orthogonal projection onto the image of $X$. Then a solution to our minimization problem is given by

$$\boldsymbol{\beta} = S\mathbf{y},$$

simply because

$$X\boldsymbol{\beta} = X(S\mathbf{y}) = (XS)\mathbf{y}$$

is exactly the sought-for orthogonal projection of $\mathbf{y}$ onto the image of $X$ (note that, as explained in the next section, the image of $X$ is just the subspace generated by the column vectors of $X$). A few popular ways to find such a matrix $S$ are described below.

Inverting the matrix of the normal equations

The algebraic solution of the normal equations can be written as

$$\hat{\boldsymbol{\beta}} = (\mathbf{X}^{\rm T}\mathbf{X})^{-1}\mathbf{X}^{\rm T}\mathbf{y} = \mathbf{X}^{+}\mathbf{y},$$

where $\mathbf{X}^{+}$ is the Moore-Penrose pseudoinverse of $\mathbf{X}$. Although this equation is correct and can work in many applications, it is not computationally efficient to invert the normal-equations matrix (the Gramian matrix). An exception occurs in numerical smoothing and differentiation where an analytical expression is required.

If the matrix $\mathbf{X}^{\rm T}\mathbf{X}$ is well-conditioned and positive definite, implying that it has full rank, the normal equations can be solved directly by using the Cholesky decomposition $R^{\rm T}R$, where $R$ is an upper triangular matrix, giving:

$$R^{\rm T}R\,\hat{\boldsymbol{\beta}} = X^{\rm T}\mathbf{y}.$$

The solution is obtained in two stages: a forward substitution step, solving for $\mathbf{z}$:

$$R^{\rm T}\mathbf{z} = X^{\rm T}\mathbf{y},$$

followed by a backward substitution, solving for $\hat{\boldsymbol{\beta}}$:

$$R\,\hat{\boldsymbol{\beta}} = \mathbf{z}.$$

Both substitutions are facilitated by the triangular nature of R.

See example of linear regression for a worked-out numerical example with three parameters.
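
A minimal Python sketch of this two-stage Cholesky approach, assuming $\mathbf{X}^{\rm T}\mathbf{X}$ is indeed well-conditioned and positive definite; `scipy.linalg.cholesky` and `scipy.linalg.solve_triangular` perform the factorization and the two substitutions:

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def solve_normal_equations_cholesky(X, y):
    """Solve min ||y - X beta||^2 via Cholesky factorization of X^T X."""
    A = X.T @ X            # Gramian matrix (assumed well-conditioned, positive definite)
    b = X.T @ y            # moment vector X^T y
    R = cholesky(A)        # upper triangular R with A = R^T R
    z = solve_triangular(R, b, trans='T', lower=False)   # forward substitution: R^T z = b
    beta = solve_triangular(R, z, lower=False)            # backward substitution: R beta = z
    return beta

# The four-point example again: expected result is approximately [3.5, 1.4].
X = np.column_stack([np.ones(4), np.array([1.0, 2.0, 3.0, 4.0])])
y = np.array([6.0, 5.0, 7.0, 10.0])
print(solve_normal_equations_cholesky(X, y))
```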

Orthogonal decomposition methods

Orthogonal decomposition methods of solving the least squares problem are slower than the normal equations method but are more numerically stable because they avoid forming the product $X^{\rm T}X$.

The residuals are written in matrix notation as

$$\mathbf{r} = \mathbf{y} - X\hat{\boldsymbol{\beta}}.$$

The matrix $X$ is subjected to an orthogonal decomposition, e.g., the QR decomposition as follows:

$$X = Q\begin{pmatrix} R\\ 0 \end{pmatrix},$$

where $Q$ is an $m\times m$ orthogonal matrix ($Q^{\rm T}Q = I$) and $R$ is an $n\times n$ upper triangular matrix with $r_{ii} > 0$.

The residual vector is left-multiplied by $Q^{\rm T}$:

$$Q^{\rm T}\mathbf{r} = Q^{\rm T}\mathbf{y} - \bigl(Q^{\rm T}Q\bigr)\begin{pmatrix} R\\ 0 \end{pmatrix}\hat{\boldsymbol{\beta}} = \begin{bmatrix}\bigl(Q^{\rm T}\mathbf{y}\bigr)_n - R\hat{\boldsymbol{\beta}}\\ \bigl(Q^{\rm T}\mathbf{y}\bigr)_{m-n}\end{bmatrix} = \begin{bmatrix}\mathbf{u}\\ \mathbf{v}\end{bmatrix}.$$

Because $Q$ is orthogonal, the sum of squares of the residuals, $s$, may be written as:

$$s = \|\mathbf{r}\|^2 = \mathbf{r}^{\rm T}\mathbf{r} = \mathbf{r}^{\rm T}QQ^{\rm T}\mathbf{r} = \mathbf{u}^{\rm T}\mathbf{u} + \mathbf{v}^{\rm T}\mathbf{v}.$$

Since $\mathbf{v}$ does not depend on $\boldsymbol{\beta}$, the minimum value of $s$ is attained when the upper block, $\mathbf{u}$, is zero. Therefore, the parameters are found by solving:

$$R\,\hat{\boldsymbol{\beta}} = \bigl(Q^{\rm T}\mathbf{y}\bigr)_n.$$

These equations are easily solved as R is upper triangular.
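
A hedged NumPy sketch of the QR route; `numpy.linalg.qr` returns the reduced factorization, so the block $(Q^{\rm T}\mathbf{y})_n$ is simply $Q^{\rm T}\mathbf{y}$ here:

```python
import numpy as np
from scipy.linalg import solve_triangular

def lstsq_qr(X, y):
    """Solve min ||y - X beta||^2 via the (reduced) QR decomposition of X."""
    Q, R = np.linalg.qr(X)               # X = Q R with Q (m x n), R (n x n) upper triangular
    return solve_triangular(R, Q.T @ y)  # back substitution: R beta = Q^T y

X = np.column_stack([np.ones(4), np.array([1.0, 2.0, 3.0, 4.0])])
y = np.array([6.0, 5.0, 7.0, 10.0])
print(lstsq_qr(X, y))   # approximately [3.5, 1.4]
```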

An alternative decomposition of $X$ is the singular value decomposition (SVD)

$$X = U\Sigma V^{\rm T},$$

where $U$ is an $m\times m$ orthogonal matrix, $V$ is an $n\times n$ orthogonal matrix, and $\Sigma$ is an $m\times n$ matrix with all its elements outside of the main diagonal equal to 0. The pseudoinverse of $\Sigma$ is easily obtained by inverting its non-zero diagonal elements and transposing. Hence,

$$\mathbf{X}\mathbf{X}^{+} = U\Sigma V^{\rm T} V\Sigma^{+} U^{\rm T} = UPU^{\rm T},$$

where $P$ is obtained from $\Sigma$ by replacing its non-zero diagonal elements with ones. Since $(\mathbf{X}\mathbf{X}^{+})^{*} = \mathbf{X}\mathbf{X}^{+}$ (a property of the pseudoinverse), the matrix $UPU^{\rm T}$ is an orthogonal projection onto the image (column space) of $\mathbf{X}$. In accordance with the general approach described in the introduction above (find $XS$ which is an orthogonal projection),

$$S = \mathbf{X}^{+},$$

and thus,

$$\boldsymbol{\beta} = V\Sigma^{+} U^{\rm T}\mathbf{y}$$

is a solution of the least squares problem. This method is the most computationally intensive, but it is particularly useful if the normal equations matrix, $X^{\rm T}X$, is very ill-conditioned (i.e. if its condition number multiplied by the machine's relative round-off error is appreciably large). In that case, including the smallest singular values in the inversion merely adds numerical noise to the solution. This can be cured with the truncated SVD approach, giving a more stable and exact answer by explicitly setting to zero all singular values below a certain threshold and so ignoring them, a process closely related to factor analysis.
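
A hedged sketch of the SVD and truncated-SVD solutions in NumPy; the cutoff used below is an illustrative choice, not a prescription from the text:

```python
import numpy as np

def lstsq_svd(X, y, rcond=1e-12):
    """Least squares via the SVD, discarding singular values below rcond * max(sigma)."""
    U, sigma, Vt = np.linalg.svd(X, full_matrices=False)
    # Invert only the singular values deemed significant (truncated SVD).
    keep = sigma > rcond * sigma.max()
    sigma_pinv = np.where(keep, 1.0 / np.where(keep, sigma, 1.0), 0.0)
    return Vt.T @ (sigma_pinv * (U.T @ y))

X = np.column_stack([np.ones(4), np.array([1.0, 2.0, 3.0, 4.0])])
y = np.array([6.0, 5.0, 7.0, 10.0])
print(lstsq_svd(X, y))          # approximately [3.5, 1.4]
print(np.linalg.pinv(X) @ y)    # same result via the Moore-Penrose pseudoinverse
```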


Properties of the least-squares estimators

The gradient equations at the minimum can be written as

$$(\mathbf{y} - X\hat{\boldsymbol{\beta}})^{\rm T} X = 0.$$

A geometrical interpretation of these equations is that the vector of residuals, $\mathbf{y} - X\hat{\boldsymbol{\beta}}$, is orthogonal to the column space of $X$, since the dot product $(\mathbf{y} - X\hat{\boldsymbol{\beta}})\cdot X\mathbf{v}$ is equal to zero for any conformal vector $\mathbf{v}$. This means that $\mathbf{y} - X\hat{\boldsymbol{\beta}}$ is the shortest of all possible vectors $\mathbf{y} - X\boldsymbol{\beta}$, that is, the variance of the residuals is the minimum possible.

Introducing $\hat{\boldsymbol{\gamma}}$ and a matrix $K$ with the assumption that the matrix $[X\ K]$ is non-singular and $K^{\rm T}X = 0$ (cf. orthogonal projections), the residual vector should satisfy the following equation:

$$\hat{\mathbf{r}} \triangleq \mathbf{y} - X\hat{\boldsymbol{\beta}} = K\hat{\boldsymbol{\gamma}}.$$

The equation and solution of linear least squares are thus described as follows:

$$\mathbf{y} = \begin{bmatrix} X & K \end{bmatrix}\begin{pmatrix}\hat{\boldsymbol{\beta}}\\ \hat{\boldsymbol{\gamma}}\end{pmatrix},$$
$$\begin{pmatrix}\hat{\boldsymbol{\beta}}\\ \hat{\boldsymbol{\gamma}}\end{pmatrix} = \begin{bmatrix} X & K \end{bmatrix}^{-1}\mathbf{y} = \begin{bmatrix}(X^{\rm T}X)^{-1}X^{\rm T}\\ (K^{\rm T}K)^{-1}K^{\rm T}\end{bmatrix}\mathbf{y}.$$

If the experimental errors, $\epsilon$, are uncorrelated, have a mean of zero and a constant variance, $\sigma$, the Gauss-Markov theorem states that the least-squares estimator, $\hat{\boldsymbol{\beta}}$, has the minimum variance of all estimators that are linear combinations of the observations. In this sense it is the best, or optimal, estimator of the parameters. Note particularly that this property is independent of the statistical distribution function of the errors. In other words, the distribution function of the errors need not be a normal distribution. However, for some probability distributions, there is no guarantee that the least-squares solution is even possible given the observations; still, in such cases it is the best estimator that is both linear and unbiased.

For example, it is easy to show that the arithmetic mean of a set of measurements of a quantity is the least-squares estimator of the value of that quantity. If the conditions of the Gauss-Markov theorem apply, the arithmetic mean is optimal, whatever the distribution of errors of the measurements might be.

However, in the case that the experimental errors do belong to a normal distribution, the least-squares estimator is also a maximum likelihood estimator.

These properties underpin the use of the method of least squares for all types of data fitting, even when the assumptions are not strictly valid.

Limitations

An assumption underlying the treatment given above is that the independent variable, x, is free of error. In practice, the errors on the measurements of the independent variable are usually much smaller than the errors on the dependent variable and can therefore be ignored. When this is not the case, total least squares or more generally errors-in-variables models, or rigorous least squares, should be used. This can be done by adjusting the weighting scheme to take into account errors on both the dependent and independent variables and then following the standard procedure.

In some cases the (weighted) normal equations matrix $X^{\rm T}X$ is ill-conditioned. When fitting polynomials the normal equations matrix is a Vandermonde matrix. Vandermonde matrices become increasingly ill-conditioned as the order of the matrix increases. In these cases, the least squares estimate amplifies the measurement noise and may be grossly inaccurate. Various regularization techniques can be applied in such cases, the most common of which is called ridge regression. If further information about the parameters is known, for example, a range of possible values of $\hat{\boldsymbol{\beta}}$, then various techniques can be used to increase the stability of the solution. For example, see constrained least squares.
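
As an illustration of the regularization mentioned above (a hedged sketch; the penalty $\lambda$ is a tuning parameter chosen for illustration), ridge regression replaces the normal equations with $(X^{\rm T}X + \lambda I)\hat{\boldsymbol{\beta}} = X^{\rm T}\mathbf{y}$:

```python
import numpy as np

def ridge_fit(X, y, lam=1e-3):
    """Ridge (Tikhonov-regularized) least squares: minimize ||y - X beta||^2 + lam ||beta||^2."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

# Fitting a high-degree polynomial gives an ill-conditioned Vandermonde design matrix;
# a small ridge penalty stabilizes the solution.
x = np.linspace(0.0, 1.0, 12)
y = np.sin(2 * np.pi * x)
X = np.vander(x, N=9, increasing=True)     # columns 1, x, x^2, ..., x^8
print(np.linalg.cond(X.T @ X))             # very large condition number
print(ridge_fit(X, y, lam=1e-6))
```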

Another drawback of the least squares estimator is the fact that the norm of the residuals, $\|\mathbf{y} - X\hat{\boldsymbol{\beta}}\|$, is minimized, whereas in some cases one is truly interested in obtaining a small error in the parameter $\hat{\boldsymbol{\beta}}$, e.g., a small value of $\|\boldsymbol{\beta} - \hat{\boldsymbol{\beta}}\|$. However, since the true parameter $\boldsymbol{\beta}$ is necessarily unknown, this quantity cannot be directly minimized. If a prior probability on $\hat{\boldsymbol{\beta}}$ is known, then a Bayes estimator can be used to minimize the mean squared error, $E\bigl\{\|\boldsymbol{\beta} - \hat{\boldsymbol{\beta}}\|^2\bigr\}$. The least squares method is often applied when no prior is known. Surprisingly, when several parameters are being estimated jointly, better estimators can be constructed, an effect known as Stein's phenomenon. For example, if the measurement error is Gaussian, several estimators are known which dominate, or outperform, the least squares technique; the best known of these is the James-Stein estimator. This is an example of more general shrinkage estimators that have been applied to regression problems.



Weighted linear least squares

In some cases the observations may be weighted, for example when they are not equally reliable. In this case, one can minimize the weighted sum of squares:

$$\underset{\boldsymbol{\beta}}{\operatorname{arg\,min}}\,\sum_{i=1}^{m} w_i\Bigl| y_i - \sum_{j=1}^{n} X_{ij}\beta_j\Bigr|^2 = \underset{\boldsymbol{\beta}}{\operatorname{arg\,min}}\,\bigl\| W^{1/2}(\mathbf{y} - X\boldsymbol{\beta})\bigr\|^2,$$

where $w_i > 0$ is the weight of the $i$th observation, and $W$ is the diagonal matrix of such weights.

The weights should, ideally, be equal to the reciprocal of the variance of the measurement. The normal equations are then:

$$\bigl(X^{\rm T}WX\bigr)\hat{\boldsymbol{\beta}} = X^{\rm T}W\mathbf{y}.$$

This method is used in iteratively reweighted least squares.
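
A minimal NumPy sketch of weighted least squares, with weights taken, as suggested above, as reciprocals of the measurement variances; the variances here are hypothetical:

```python
import numpy as np

def weighted_lstsq(X, y, w):
    """Solve the weighted normal equations (X^T W X) beta = X^T W y for diagonal W = diag(w)."""
    XtW = X.T * w                       # equivalent to X.T @ diag(w), without forming diag(w)
    return np.linalg.solve(XtW @ X, XtW @ y)

X = np.column_stack([np.ones(4), np.array([1.0, 2.0, 3.0, 4.0])])
y = np.array([6.0, 5.0, 7.0, 10.0])
variances = np.array([0.5, 0.5, 1.0, 2.0])   # hypothetical measurement variances
w = 1.0 / variances                          # weights = reciprocal variances
print(weighted_lstsq(X, y, w))
```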

Parameter errors and correlation

The estimated parameter values are linear combinations of the observed values

$$\hat{\boldsymbol{\beta}} = (X^{\rm T}WX)^{-1}X^{\rm T}W\mathbf{y}.$$

Therefore, an expression for the estimated errors in the parameters can be obtained by error propagation from the errors in the observations. Let the variance-covariance matrix for the observations be denoted by $M$ and that of the estimated parameters by $M^{\beta}$. Then

$$M^{\beta} = (X^{\rm T}WX)^{-1}X^{\rm T}W\,M\,W^{\rm T}X\,(X^{\rm T}W^{\rm T}X)^{-1}.$$

When $W = M^{-1}$, this simplifies to

$$M^{\beta} = (X^{\rm T}WX)^{-1}.$$

When unit weights are used ($W = I$, the identity matrix), it is implied that the experimental errors are uncorrelated and all equal: $M = \sigma^2 I$, where $\sigma^2$ is the a priori variance of an observation. In any case, $\sigma^2$ is approximated by the reduced chi-squared $\chi_{\nu}^2$:

$$M^{\beta} = \chi_{\nu}^2 (X^{\rm T}X)^{-1},$$
$$\chi_{\nu}^2 = S/\nu,$$

where S is the minimum value of the (weighted) objective function:

$$S = \mathbf{r}^{\rm T}W\mathbf{r}.$$

The denominator, $\nu = m - n$, is the number of degrees of freedom; see effective degrees of freedom for generalizations for the case of correlated observations.

In all cases, the variance of the parameter $\beta_i$ is given by $M_{ii}^{\beta}$ and the covariance between parameters $\beta_i$ and $\beta_j$ is given by $M_{ij}^{\beta}$. The standard deviation is the square root of the variance, $\sigma_i = \sqrt{M_{ii}^{\beta}}$, and the correlation coefficient is given by $\rho_{ij} = M_{ij}^{\beta}/(\sigma_i\sigma_j)$. These error estimates reflect only random errors in the measurements. The true uncertainty in the parameters is larger due to the presence of systematic errors, which, by definition, cannot be quantified. Note that even though the observations may be uncorrelated, the parameters are typically correlated.
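
A hedged NumPy sketch of these error estimates for the unweighted four-point example (so $W = I$ and $\sigma^2$ is estimated by the reduced chi-squared $S/(m - n)$):

```python
import numpy as np

X = np.column_stack([np.ones(4), np.array([1.0, 2.0, 3.0, 4.0])])
y = np.array([6.0, 5.0, 7.0, 10.0])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
r = y - X @ beta                             # residuals
m, n = X.shape
chi2_red = (r @ r) / (m - n)                 # reduced chi-squared, S / nu
M_beta = chi2_red * np.linalg.inv(X.T @ X)   # parameter variance-covariance matrix

sigma = np.sqrt(np.diag(M_beta))             # standard deviations of beta_1, beta_2
rho = M_beta / np.outer(sigma, sigma)        # correlation matrix of the parameters
print(beta, sigma)
print(rho)                                   # off-diagonal entries show the parameters are correlated
```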

Parameter confidence limits

It is often assumed, for want of any concrete evidence but often appealing to the central limit theorem (see Normal distribution#Occurrence), that the error on each observation belongs to a normal distribution with a mean of zero and standard deviation $\sigma$. Under that assumption the following probabilities can be derived for a single scalar parameter estimate in terms of its estimated standard error $se_{\beta}$ (given here):

  • 68% that the interval $\hat{\beta} \pm se_{\beta}$ encompasses the true coefficient value
  • 95% that the interval $\hat{\beta} \pm 2se_{\beta}$ encompasses the true coefficient value
  • 99% that the interval $\hat{\beta} \pm 2.5se_{\beta}$ encompasses the true coefficient value

The assumption is not unreasonable when $m \gg n$. If the experimental errors are normally distributed the parameters will belong to a Student's t-distribution with $m - n$ degrees of freedom. When $m \gg n$, Student's t-distribution approximates a normal distribution. Note, however, that these confidence limits cannot take systematic error into account. Also, parameter errors should be quoted to one significant figure only, as they are subject to sampling error.

When the number of observations is relatively small, Chebyshev's inequality can be used for an upper bound on probabilities, regardless of any assumptions about the distribution of experimental errors: the maximum probabilities that a parameter will be more than 1, 2 or 3 standard deviations away from its expectation value are 100%, 25% and 11% respectively.

Residual values and correlation

The residuals are related to the observations by

$$\hat{\mathbf{r}} = \mathbf{y} - X\hat{\boldsymbol{\beta}} = \mathbf{y} - H\mathbf{y} = (I - H)\mathbf{y},$$

where $H$ is the idempotent matrix known as the hat matrix:

$$H = X\bigl(X^{\rm T}WX\bigr)^{-1}X^{\rm T}W,$$

and $I$ is the identity matrix. The variance-covariance matrix of the residuals, $M^{\mathbf{r}}$, is given by

$$M^{\mathbf{r}} = (I - H)\,M\,(I - H)^{\rm T}.$$

Thus the residuals are correlated, even if the observations are not.

When $W = M^{-1}$,

$$M^{\mathbf{r}} = (I - H)\,M.$$

The sum of residual values is equal to zero whenever the model function contains a constant term. Left-multiply the expression for the residuals by $X^{\rm T}$:

$$X^{\rm T}\hat{\mathbf{r}} = X^{\rm T}\mathbf{y} - X^{\rm T}X\hat{\boldsymbol{\beta}} = X^{\rm T}\mathbf{y} - (X^{\rm T}X)(X^{\rm T}X)^{-1}X^{\rm T}\mathbf{y} = \mathbf{0}.$$

Say, for example, that the first term of the model is a constant, so that $X_{i1} = 1$ for all $i$. In that case it follows that

$$\sum_{i=1}^{m} X_{i1}\hat{r}_i = \sum_{i=1}^{m}\hat{r}_i = 0.$$

Thus, in the motivational example above, the fact that the sum of residual values is equal to zero is not accidental, but is a consequence of the presence of the constant term $\beta_1$ in the model.
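
A hedged NumPy check of these identities on the four-point example, with unit weights so that $W = I$:

```python
import numpy as np

X = np.column_stack([np.ones(4), np.array([1.0, 2.0, 3.0, 4.0])])
y = np.array([6.0, 5.0, 7.0, 10.0])

H = X @ np.linalg.inv(X.T @ X) @ X.T    # hat matrix (W = I)
r_hat = (np.eye(4) - H) @ y             # residuals from the hat matrix

print(r_hat)                  # approximately [ 1.1, -1.3, -0.7,  0.9]
print(r_hat.sum())            # approximately 0, because the model contains a constant term
print(np.allclose(H @ H, H))  # True: H is idempotent
```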

If experimental error follows a normal distribution, then, because of the linear relationship between residuals and observations, so should residuals, but since the observations are only a sample of the population of all possible observations, the residuals should belong to a Student's t-distribution. Studentized residuals are useful in making a statistical test for an outlier when a particular residual appears to be excessively large.



Objective function

The optimal value of the objective function, found by substituting in the optimal expression for the coefficient vector, can be written as (assuming unweighted observations)

$$S = \mathbf{y}^{\rm T}(I - H)^{\rm T}(I - H)\mathbf{y} = \mathbf{y}^{\rm T}(I - H)\mathbf{y},$$

the latter equality holding since $(I - H)$ is symmetric and idempotent. It can be shown from this that under an appropriate assignment of weights the expected value of $S$ is $m - n$. If instead unit weights are assumed, the expected value of $S$ is $(m - n)\sigma^2$, where $\sigma^2$ is the variance of each observation.

If it is assumed that the residuals belong to a normal distribution, the objective function, being a sum of weighted squared residuals, will belong to a chi-squared ($\chi^2$) distribution with $m - n$ degrees of freedom. Some illustrative percentile values of $\chi^2$ are given in the following table:

$$\begin{array}{r|ccc} m - n & \chi_{0.50}^2 & \chi_{0.95}^2 & \chi_{0.99}^2\\ \hline 10 & 9.34 & 18.3 & 23.2\\ 25 & 24.3 & 37.7 & 44.3\\ 100 & 99.3 & 124 & 136 \end{array}$$

These values can be used for a statistical criterion as to the goodness of fit. When unit weights are used, the numbers should be divided by the variance of an observation.
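
The tabulated percentiles can be reproduced with SciPy's chi-squared inverse CDF (a sketch, assuming SciPy is available):

```python
from scipy.stats import chi2

# Percentile values of the chi-squared distribution, as in the table above.
for dof in (10, 25, 100):
    print(dof, [round(chi2.ppf(p, dof), 1) for p in (0.50, 0.95, 0.99)])
# 10  [9.3, 18.3, 23.2]
# 25  [24.3, 37.7, 44.3]
# 100 [99.3, 124.3, 135.8]
```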



Constrained linear least squares

Often it is of interest to solve a linear least squares problem with an additional constraint on the solution. With constrained linear least squares, the original equation

$$\mathbf{X}\boldsymbol{\beta} = \mathbf{y}$$

must be fit as closely as possible (in the least squares sense) while ensuring that some other property of $\boldsymbol{\beta}$ is maintained. There are often special-purpose algorithms for solving such problems efficiently. Some examples of constraints are given below:

  • Equality constrained least squares: the elements of $\boldsymbol{\beta}$ must exactly satisfy $\mathbf{L}\boldsymbol{\beta} = \mathbf{d}$ (see Ordinary least squares#Constrained estimation).
  • Regularized least squares: the elements of $\boldsymbol{\beta}$ must satisfy $\|\mathbf{L}\boldsymbol{\beta} - \mathbf{y}\| \leq \alpha$ (choosing $\alpha$ in proportion to the noise standard deviation of $\mathbf{y}$ prevents over-fitting).
  • Non-negative least squares (NNLS): the vector $\boldsymbol{\beta}$ must satisfy the vector inequality $\boldsymbol{\beta} \geq \mathbf{0}$, defined componentwise; that is, each component must be either positive or zero (see the sketch after this list).
  • Box-constrained least squares: the vector $\boldsymbol{\beta}$ must satisfy the vector inequalities $\mathbf{lb} \leq \boldsymbol{\beta} \leq \mathbf{ub}$, each of which is defined componentwise.
  • Integer-constrained least squares: all elements of $\boldsymbol{\beta}$ must be integers (instead of real numbers).
  • Phase-constrained least squares: all elements of $\boldsymbol{\beta}$ must have the same phase (or must be real rather than complex numbers, i.e. phase = 0).
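
As one concrete example of the constraints above, a non-negative least squares problem can be solved with `scipy.optimize.nnls`; the small data set below is made up for illustration:

```python
import numpy as np
from scipy.optimize import nnls

# A small made-up problem whose unconstrained solution has a negative component.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
y = np.array([2.0, 1.0, 0.5])

beta_unconstrained, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_nnls, residual_norm = nnls(X, y)

print(beta_unconstrained)   # second component is negative
print(beta_nnls)            # all components are >= 0, at the cost of a larger residual
```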

When the constraint only applies to some of the variables, the mixed problem may be solved using separable least squares by letting $\mathbf{X} = [\mathbf{X}_1\ \mathbf{X}_2]$ and $\boldsymbol{\beta}^{\rm T} = [\boldsymbol{\beta}_1^{\rm T}\ \boldsymbol{\beta}_2^{\rm T}]$ represent the unconstrained (1) and constrained (2) components. Then substituting the least-squares solution for $\boldsymbol{\beta}_1$, i.e.

$$\hat{\boldsymbol{\beta}}_1 = \mathbf{X}_1^{+}(\mathbf{y} - \mathbf{X}_2\boldsymbol{\beta}_2),$$

back into the original expression gives (following some rearrangement) an equation that can be solved as a purely constrained problem in $\boldsymbol{\beta}_2$:

$$\mathbf{P}\mathbf{X}_2\boldsymbol{\beta}_2 = \mathbf{P}\mathbf{y},$$

where $\mathbf{P} := \mathbf{I} - \mathbf{X}_1\mathbf{X}_1^{+}$ is a projection matrix. Following the constrained estimation of $\hat{\boldsymbol{\beta}}_2$, the vector $\hat{\boldsymbol{\beta}}_1$ is obtained from the expression above.



Typical uses and applications

  • Polynomial fitting: models are polynomials in an independent variable, $x$:
    • Straight line: $f(x, {\boldsymbol{\beta}}) = \beta_1 + \beta_2 x$.
    • Quadratic: $f(x, {\boldsymbol{\beta}}) = \beta_1 + \beta_2 x + \beta_3 x^2$.
    • Cubic, quartic and higher polynomials. For regression with high-order polynomials, the use of orthogonal polynomials is recommended.
  • Numerical smoothing and differentiation: this is an application of polynomial fitting.
  • Multinomials in more than one independent variable, including surface fitting
  • Curve fitting with B-splines
  • Chemometrics, calibration curve, standard addition, Gran plot, analysis of mixtures

Uses in data fitting

The primary application of linear least squares is in data fitting. Given a set of $m$ data points $y_1, y_2, \dots, y_m$, consisting of experimentally measured values taken at $m$ values $x_1, x_2, \dots, x_m$ of an independent variable ($x_i$ may be scalar or vector quantities), and given a model function $y = f(x, {\boldsymbol{\beta}})$, with ${\boldsymbol{\beta}} = (\beta_1, \beta_2, \dots, \beta_n)$, it is desired to find the parameters $\beta_j$ such that the model function "best" fits the data. In linear least squares, linearity is meant to be with respect to the parameters $\beta_j$, so

$$f(x, {\boldsymbol{\beta}}) = \sum_{j=1}^{n}\beta_j\phi_j(x).$$

Here, the functions $\phi_j$ may be nonlinear with respect to the variable $x$.

Ideally, the model function fits the data exactly, so

$$y_i = f(x_i, {\boldsymbol{\beta}})$$

for all $i = 1, 2, \dots, m$. This is usually not possible in practice, as there are more data points than there are parameters to be determined. The approach chosen then is to find the minimal possible value of the sum of squares of the residuals

$$r_i({\boldsymbol{\beta}}) = y_i - f(x_i, {\boldsymbol{\beta}}),\qquad (i = 1, 2, \dots, m),$$

so as to minimize the function

$$S({\boldsymbol{\beta}}) = \sum_{i=1}^{m} r_i^2({\boldsymbol{\beta}}).$$

After substituting for $r_i$ and then for $f$, this minimization problem becomes the quadratic minimization problem above with

$$X_{ij} = \phi_j(x_i),$$

and the best fit can be found by solving the normal equations.
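
A hedged Python sketch of this recipe, using the basis $\phi_1(x) = 1$, $\phi_2(x) = x$, $\phi_3(x) = x^2$ and made-up data:

```python
import numpy as np

def design_matrix(x, basis_functions):
    """Build X with X[i, j] = phi_j(x_i) for the given basis functions."""
    return np.column_stack([phi(x) for phi in basis_functions])

basis = [lambda x: np.ones_like(x), lambda x: x, lambda x: x ** 2]

# Made-up data roughly following 1 + 2x - 0.5x^2 with a little noise.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 5.0, 20)
y = 1.0 + 2.0 * x - 0.5 * x ** 2 + 0.1 * rng.normal(size=x.size)

X = design_matrix(x, basis)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)   # close to [1.0, 2.0, -0.5]
```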



Further discussion

The numerical methods for linear least squares are important because linear regression models are among the most important types of model, both as formal statistical models and for exploration of data-sets. The majority of statistical computer packages contain facilities for regression analysis that make use of linear least squares computations. Hence it is appropriate that considerable effort has been devoted to the task of ensuring that these computations are undertaken efficiently and with due regard to round-off error.

Individual statistical analyses are seldom undertaken in isolation, but rather are part of a sequence of investigatory steps. Some of the topics involved in considering numerical methods for linear least squares relate to this point. Thus important topics include:

  • Computations where a number of similar, and often nested, models are considered for the same data-set. That is, where models with the same dependent variable but different sets of independent variables are to be considered, for essentially the same set of data-points.
  • Computations for analyses that occur in a sequence, as the number of data-points increases.
  • Special considerations for very extensive data-sets.

Fitting of linear models by least squares often, but not always, arises in the context of statistical analysis. It can therefore be important that considerations of computational efficiency for such problems extend to all of the auxiliary quantities required for such analyses, and are not restricted to the formal solution of the linear least squares problem.

Rounding errors

Matrix calculations, like any other, are affected by rounding errors. An early summary of these effects, regarding the choice of computation methods for matrix inversion, was provided by Wilkinson.



See also

  • Line-line intersection#Nearest point to non-intersecting lines, an application




Further reading

  • Bevington, Philip R.; Robinson, Keith D. (2003). Data Reduction and Error Analysis for the Physical Sciences. McGraw-Hill. ISBN 0-07-247227-8. 
  • Barlow, Jesse L. (1993), "Chapter 9: Numerical aspects of Solving Linear Least Squares Problems", in Rao, C. R., Computational Statistics, Handbook of Statistics, 9, North-Holland, ISBN 0-444-88096-8 
  • Björck, Åke (1996). Numerical methods for least squares problems. Philadelphia: SIAM. ISBN 0-89871-360-9. 
  • Goodall, Colin R. (1993), "Chapter 13: Computation using the QR decomposition", in Rao, C. R., Computational Statistics, Handbook of Statistics, 9, North-Holland, ISBN 0-444-88096-8 
  • National Physical Laboratory (1961), "Chapter 1: Linear Equations and Matrices: Direct Methods", Modern Computing Methods, Notes on Applied Science, 16 (2nd ed.), Her Majesty's Stationery Office 
  • National Physical Laboratory (1961), "Chapter 2: Linear Equations and Matrices: Direct Methods on Automatic Computers", Modern Computing Methods, Notes on Applied Science, 16 (2nd ed.), Her Majesty's Stationery Office 


External links

  • Least Squares Fitting - From MathWorld
  • Least Squares Fitting-Polynomial - From MathWorld

Source of the article: Wikipedia
