Linear complementarity problem

In mathematical optimization theory, the linear complementarity problem (LCP) arises frequently in computational mechanics and encompasses the well-known quadratic programming as a special case. It was proposed by Cottle and Dantzig in 1968.


Formulation

Given a real matrix M and vector q, the linear complementarity problem LCP(M, q) seeks vectors z and w which satisfy the following constraints:

  • $w, z \geq 0$ (that is, each component of these two vectors is non-negative)
  • $z^{\mathrm{T}} w = 0$, or equivalently $\sum_{i} w_i z_i = 0$. This is the complementarity condition, since it implies that, for each $i$, at most one of $w_i$ and $z_i$ can be positive.
  • $w = Mz + q$

A sufficient condition for existence and uniqueness of a solution to this problem is that M be symmetric positive-definite. If M is such that LCP(M, q) has a solution for every q, then M is a Q-matrix. If M is such that LCP(M, q) has a unique solution for every q, then M is a P-matrix. Both of these characterizations are in fact necessary and sufficient.
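
As a concrete illustration of the P-matrix condition, the sketch below (Python with NumPy, assumed available; the function name is_P_matrix and the sample matrix are illustrative, not from the source) tests the definition directly by checking that every principal minor is positive. The exhaustive test is exponential in the dimension, so it is meant only to illustrate the definition, not to serve as an efficient test.

    import itertools
    import numpy as np

    def is_P_matrix(M, tol=1e-12):
        """Return True if every principal minor of M is positive (the P-matrix definition)."""
        M = np.asarray(M, dtype=float)
        n = M.shape[0]
        for k in range(1, n + 1):
            for idx in itertools.combinations(range(n), k):
                if np.linalg.det(M[np.ix_(idx, idx)]) <= tol:
                    return False
        return True

    # A symmetric positive-definite matrix is in particular a P-matrix,
    # so LCP(M, q) has a unique solution for every q.
    M = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    print(is_P_matrix(M))  # True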

The vector w is a slack variable, and so is generally discarded after z is found. As such, the problem can also be formulated as:

  • $Mz + q \geq 0$
  • $z \geq 0$
  • $z^{\mathrm{T}}(Mz + q) = 0$ (the complementarity condition)
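
To make the formulation concrete, here is a minimal brute-force LCP solver sketch in Python/NumPy (the function name solve_lcp_enumeration and the test data are illustrative). It enumerates the 2^n complementary supports: z is forced to zero outside a candidate index set, w = Mz + q is forced to zero inside it, the resulting square linear system is solved, and the first candidate with z >= 0 and w >= 0 is accepted. This is exponential in n and is only meant to exhibit the complementarity structure on very small problems.

    import itertools
    import numpy as np

    def solve_lcp_enumeration(M, q, tol=1e-9):
        """Solve LCP(M, q) by enumerating complementary index sets (small n only)."""
        M, q = np.asarray(M, dtype=float), np.asarray(q, dtype=float)
        n = q.size
        for k in range(n + 1):
            for alpha in itertools.combinations(range(n), k):
                z = np.zeros(n)
                if alpha:
                    try:
                        # Force w_i = 0 on alpha:  M[alpha, alpha] z[alpha] = -q[alpha]
                        z[list(alpha)] = np.linalg.solve(M[np.ix_(alpha, alpha)], -q[list(alpha)])
                    except np.linalg.LinAlgError:
                        continue  # singular principal submatrix, try the next support
                w = M @ z + q
                if (z >= -tol).all() and (w >= -tol).all():
                    return z, w
        raise ValueError("no solution found among the complementary supports")

    M = np.array([[2.0, 1.0], [1.0, 2.0]])
    q = np.array([-5.0, -6.0])
    z, w = solve_lcp_enumeration(M, q)
    print(z, w, z @ w)  # z @ w should be ~0 (complementarity)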

Convex quadratic-minimization: Minimum conditions

Finding a solution to the linear complementarity problem is associated with minimizing the quadratic function

$f(z) = z^{\mathrm{T}}(Mz + q)$

subject to the constraints

$Mz + q \geq 0$
$z \geq 0$

These constraints ensure that f is non-negative on the feasible set. The minimum of f over this set is 0 at z if and only if z solves the linear complementarity problem.
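
As a minimal sketch of this reformulation, the LCP data can be handed to a general-purpose constrained optimizer, here scipy.optimize.minimize with the SLSQP method (SciPy assumed available; the data M and q are the same illustrative values as above). Dedicated pivoting or interior-point LCP codes would normally be preferred; the point is only that an optimal value of (approximately) zero certifies an LCP solution.

    import numpy as np
    from scipy.optimize import minimize

    # minimize f(z) = z^T (M z + q)  subject to  M z + q >= 0,  z >= 0
    M = np.array([[2.0, 1.0], [1.0, 2.0]])
    q = np.array([-5.0, -6.0])
    n = q.size

    res = minimize(
        fun=lambda z: z @ (M @ z + q),
        x0=np.ones(n),
        method="SLSQP",
        bounds=[(0.0, None)] * n,                                    # z >= 0
        constraints=[{"type": "ineq", "fun": lambda z: M @ z + q}],  # M z + q >= 0
    )
    z = res.x
    print(z, z @ (M @ z + q))  # objective ~0  =>  z solves LCP(M, q)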

If M is positive definite, any algorithm for solving (strictly) convex QPs can solve the LCP. Specially designed basis-exchange pivoting algorithms, such as Lemke's algorithm and a variant of the simplex algorithm of Dantzig, have been used for decades. Besides having polynomial time complexity, interior-point methods are also effective in practice.

Also, a quadratic-programming problem stated as

minimize $f(x) = c^{\mathrm{T}}x + \tfrac{1}{2}x^{\mathrm{T}}Qx$ subject to $Ax \geq b$ and $x \geq 0$, with Q symmetric,

is the same as solving the LCP with

$q = \begin{bmatrix} c \\ -b \end{bmatrix}, \qquad M = \begin{bmatrix} Q & -A^{\mathrm{T}} \\ A & 0 \end{bmatrix}$

This is because the Karush-Kuhn-Tucker conditions of the QP problem can be written as:

$\begin{cases} v = Qx - A^{\mathrm{T}}\lambda + c \\ s = Ax - b \\ x, \lambda, v, s \geq 0 \\ x^{\mathrm{T}}v + \lambda^{\mathrm{T}}s = 0 \end{cases}$

with v the Lagrange multipliers on the non-negativity constraints, $\lambda$ the multipliers on the inequality constraints, and s the slack variables for the inequality constraints. The fourth condition derives from the complementarity of each group of variables (x, s) with its set of KKT vectors (optimal Lagrange multipliers) being (v, $\lambda$). In that case,

$z = \begin{bmatrix} x \\ \lambda \end{bmatrix}, \qquad w = \begin{bmatrix} v \\ s \end{bmatrix}$
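
Putting the pieces together, the following sketch (Python/NumPy/SciPy; the QP data Q, c, A, b are made-up illustrative values, not from the source) assembles q and M from a small QP as above, solves the resulting LCP through the quadratic-minimization reformulation, and reads x and the multipliers λ off the solution vector z.

    import numpy as np
    from scipy.optimize import minimize

    # QP data:  minimize c^T x + (1/2) x^T Q x   subject to   A x >= b,  x >= 0
    Q = np.array([[2.0, 0.0], [0.0, 2.0]])
    c = np.array([-2.0, -5.0])
    A = np.array([[1.0, 1.0]])
    b = np.array([5.0])

    # LCP data as in the text:  q = [c; -b],  M = [[Q, -A^T], [A, 0]]
    q = np.concatenate([c, -b])
    M = np.block([[Q, -A.T],
                  [A, np.zeros((A.shape[0], A.shape[0]))]])
    n = q.size

    # Solve the LCP via the quadratic-minimization reformulation above.
    res = minimize(lambda z: z @ (M @ z + q), np.ones(n), method="SLSQP",
                   bounds=[(0.0, None)] * n,
                   constraints=[{"type": "ineq", "fun": lambda z: M @ z + q}])
    z = res.x
    x, lam = z[:Q.shape[0]], z[Q.shape[0]:]   # z = [x; lambda]
    w = M @ z + q                             # w = [v; s]
    print("x =", x, "lambda =", lam)          # expect roughly x = [1.75, 3.25], lambda = [1.5]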

If the non-negativity constraint on x is relaxed, the dimensionality of the LCP problem can be reduced to the number of inequalities, as long as Q is non-singular (which is guaranteed if it is positive definite). The multipliers v are no longer present, and the first KKT condition can be rewritten as:

$Qx = A^{\mathrm{T}}\lambda - c$

or:

$x = Q^{-1}(A^{\mathrm{T}}\lambda - c)$

Pre-multiplying both sides by A and subtracting b, we obtain:

$Ax - b = AQ^{-1}(A^{\mathrm{T}}\lambda - c) - b$

The left side, due to the second KKT condition, is s. Substituting and reordering:

$s = (AQ^{-1}A^{\mathrm{T}})\lambda + (-AQ^{-1}c - b)$

Defining now

$\begin{aligned} M &:= AQ^{-1}A^{\mathrm{T}} \\ Q &:= -AQ^{-1}c - b \end{aligned}$

we have an LCP, due to the relation of complementarity between the slack variables s and their Lagrange multipliers $\lambda$. Once we solve it, we may obtain the value of x from $\lambda$ through the first KKT condition.
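
A sketch of this reduced formulation, under the assumptions that Q is positive definite and A has full row rank so that AQ^{-1}A^T is symmetric positive definite: the small LCP in λ is solved here with a projected Gauss-Seidel iteration, a standard splitting method for LCPs with such matrices (not something prescribed by the text), and x is then recovered from the first KKT condition. The helper name pgs_lcp and the QP data are illustrative.

    import numpy as np

    def pgs_lcp(M, q, iters=500):
        """Projected Gauss-Seidel for LCP(M, q); converges for symmetric positive-definite M."""
        lam = np.zeros(q.size)
        for _ in range(iters):
            for i in range(q.size):
                r = q[i] + M[i] @ lam - M[i, i] * lam[i]   # residual without the i-th term
                lam[i] = max(0.0, -r / M[i, i])            # project onto lam_i >= 0
        return lam

    # QP data with x unconstrained:  minimize c^T x + (1/2) x^T Q x   subject to   A x >= b
    Q = np.array([[2.0, 0.0], [0.0, 2.0]])
    c = np.array([-2.0, -5.0])
    A = np.array([[1.0, 1.0]])
    b = np.array([5.0])

    Qinv = np.linalg.inv(Q)
    M_lcp = A @ Qinv @ A.T         # the text's M
    q_lcp = -A @ Qinv @ c - b      # the text's constant term (written Q in the text)

    lam = pgs_lcp(M_lcp, q_lcp)
    x = Qinv @ (A.T @ lam - c)     # recover x from the first KKT condition
    print("lambda =", lam, "x =", x)  # expect roughly lambda = [1.5], x = [1.75, 3.25]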

Finally, it is also possible to handle additional equality constraints:

$A_{eq}x = b_{eq}$

This introduces a vector of Lagrange multipliers $\mu$, with the same dimension as $b_{eq}$.

It is easy to verify that the M and Q for the LCP system $s = M\lambda + Q$ are now expressed as:

$\begin{aligned} M &:= \begin{bmatrix} A & 0 \end{bmatrix} \begin{bmatrix} Q & A_{eq}^{\mathrm{T}} \\ -A_{eq} & 0 \end{bmatrix}^{-1} \begin{bmatrix} A^{\mathrm{T}} \\ 0 \end{bmatrix} \\ Q &:= -\begin{bmatrix} A & 0 \end{bmatrix} \begin{bmatrix} Q & A_{eq}^{\mathrm{T}} \\ -A_{eq} & 0 \end{bmatrix}^{-1} \begin{bmatrix} c \\ b_{eq} \end{bmatrix} - b \end{aligned}$

From $\lambda$ we can now recover the values of both x and the Lagrange multipliers $\mu$ of the equality constraints:

$\begin{bmatrix} x \\ \mu \end{bmatrix} = \begin{bmatrix} Q & A_{eq}^{\mathrm{T}} \\ -A_{eq} & 0 \end{bmatrix}^{-1} \begin{bmatrix} A^{\mathrm{T}}\lambda - c \\ -b_{eq} \end{bmatrix}$
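
The equality-constrained construction can be checked numerically with the sketch below (Python/NumPy; Aeq, beq and the other QP data are made-up values for illustration). It forms the block matrix that appears in the formulas above, builds the LCP data from it, solves the resulting one-dimensional LCP in closed form, and then recovers x and the equality multipliers μ.

    import numpy as np

    # QP with an extra equality constraint (x free):
    #   minimize c^T x + (1/2) x^T Q x   subject to   A x >= b,   Aeq x = beq
    Q   = np.array([[2.0, 0.0], [0.0, 2.0]])
    c   = np.array([-2.0, -5.0])
    A   = np.array([[1.0, 1.0]]);   b   = np.array([5.0])
    Aeq = np.array([[1.0, -1.0]]);  beq = np.array([0.0])

    m, meq = A.shape[0], Aeq.shape[0]
    K = np.block([[Q, Aeq.T],
                  [-Aeq, np.zeros((meq, meq))]])   # the block matrix inverted in the formulas
    E = np.hstack([A, np.zeros((m, meq))])         # the [A 0] block

    M_lcp = E @ np.linalg.solve(K, np.vstack([A.T, np.zeros((meq, m))]))
    q_lcp = -E @ np.linalg.solve(K, np.concatenate([c, beq])) - b

    # The LCP  s = M_lcp lam + q_lcp,  lam >= 0, s >= 0, lam^T s = 0  is one-dimensional
    # here, so lam = max(0, -q/M) solves it directly.
    lam = np.maximum(0.0, -q_lcp / np.diag(M_lcp))

    # Recover x and the equality multipliers mu from lam.
    x_mu = np.linalg.solve(K, np.concatenate([A.T @ lam - c, -beq]))
    x, mu = x_mu[:Q.shape[0]], x_mu[Q.shape[0]:]
    print("lambda =", lam, "x =", x, "mu =", mu)   # expect about lambda=[1.5], x=[2.5, 2.5], mu=[-1.5]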

In fact, most QP solvers work on the LCP formulation, including the interior-point method, principal/complementarity pivoting, and active-set methods. LCP problems can also be solved by the criss-cross algorithm; conversely, for linear complementarity problems, the criss-cross algorithm terminates finitely only if the matrix is a sufficient matrix. A sufficient matrix is a generalization both of a positive-definite matrix and of a P-matrix, whose principal minors are each positive. Such LCPs can be solved when they are formulated abstractly using oriented-matroid theory.



See also

  • Complementarity theory
  • Physics engine: impulse/constraint-based physics engines for games use this approach.
  • Contact dynamics: contact dynamics with the nonsmooth approach.
  • Bimatrix games can be reduced to an LCP.





References

  • Cottle, Richard W.; Pang, Jong-Shi; Stone, Richard E. (1992). The linear complementarity problem. Computer Science and Scientific Computing. Boston, MA: Academic Press, Inc. pp. xxiv+762 pp. ISBN 0-12-192350-9. MR 1150683. 
  • Cottle, R. W.; Pang, J.-S.; Venkateswaran, V. (March-April 1989). "Sufficient matrices and the linear complementarity problem". Linear Algebra and its Applications. 114-115: 231-249. doi:10.1016/0024-3795(89)90463-1. MR 0986877. 
  • Csizmadia, Zsolt; Illés, Tibor (2006). "New criss-cross type algorithms for linear complementarity problems with sufficient matrices" (PDF). Optimization Methods and Software. 21 (2): 247-266. doi:10.1080/10556780500095009. 
  • Fukuda, Komei; Namiki, Makoto (March 1994). "On extremal behaviors of Murty's least index method". Mathematical Programming. 64 (1): 365-370. doi:10.1007/BF01582581. MR 1286455. 
  • den Hertog, D.; Roos, C.; Terlaky, T. (1 July 1993). "The linear complementarity problem, sufficient matrices, and the criss-cross method" (PDF). Linear Algebra and its Applications. 187: 1-14. doi:10.1016/0024-3795(93)90124-7. 
  • Murty, K. G. (1988). Linear complementarity, linear and nonlinear programming. Sigma Series in Applied Mathematics. 3. Berlin: Heldermann Verlag. pp. xlviii+629 pp. ISBN 3-88538-403-5. MR 0949214. Updated and free PDF version at Katta G. Murty's website. Archived from the original on 2010-04-01. 
  • Fukuda, Komei; Terlaky, Tamás (1997). Thomas M. Liebling and Dominique de Werra, eds. "Criss-cross methods: A fresh view on pivot algorithms". Mathematical Programming: Series B. Papers from the 16th International Symposium on Mathematical Programming held in Lausanne, 1997. Amsterdam: North-Holland Publishing Co. 79 (1-3): 369-395. doi:10.1007/BF02614325. MR 1464775. Postscript preprint.
  • Todd, Michael J. (1985). "Linear and quadratic programming in oriented matroids". Journal of Combinatorial Theory. Series B. 39 (2): 105-133. doi:10.1016/0095-8956(85)90042-5. MR 0811116. 
  • R. Chandrasekaran. "Bimatrix games" (PDF). pp. 5-7. Retrieved 18 December 2015. 



Further reading

  • R. W. Cottle and G. B. Dantzig. Complementary pivot theory of mathematical programming. Linear Algebra and its Applications, 1:103-125, 1968.
  • Terlaky, Tamás; Zhang, Shu Zhong (1993). "Pivot rules for linear programming: A Survey on recent theoretical developments". Annals of Operations Research. Degeneracy in optimization problems. Springer Netherlands. 46-47 (1): 203-233. CiteSeerX 10.1.1.36.7658. doi:10.1007/BF02096264. ISSN 0254-5330. MR 1260019. 



External links

  • LCPSolve: a simple procedure in GAUSS to solve a linear complementarity problem
  • Siconos/Numerics: open-source GPL implementation in C of Lemke's algorithm and other methods to solve LCPs and MLCPs
