Fixed-point iteration

In numerical analysis, fixed-point iteration is a method of computing fixed points of iterated functions.

More specifically, given a function $f$ defined on the real numbers with real values and given a point $x_0$ in the domain of $f$, the fixed-point iteration is

$$x_{n+1} = f(x_n), \qquad n = 0, 1, 2, \dots$$

which gives rise to the sequence $x_0, x_1, x_2, \dots$, which is hoped to converge to a point $x$. If $f$ is continuous, then one can prove that the obtained $x$ is a fixed point of $f$, i.e.,

$$f(x) = x.$$

More generally, the function $f$ can be defined on any metric space with values in that same space.
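
As a concrete illustration, here is a minimal Python sketch of the iteration itself; the function name fixed_point_iterate, the tolerance, and the iteration cap are illustrative choices, not part of any standard library.

    import math

    def fixed_point_iterate(f, x0, tol=1e-12, max_iter=1000):
        """Iterate x_{n+1} = f(x_n) until successive iterates agree to tol."""
        x = x0
        for _ in range(max_iter):
            x_next = f(x)
            if abs(x_next - x) < tol:
                return x_next
            x = x_next
        raise RuntimeError("fixed-point iteration did not converge")

    # Example: f(x) = e^{-x} has a fixed point near 0.567143 (x = e^{-x}).
    print(fixed_point_iterate(lambda x: math.exp(-x), 0.5))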


Examples

  • A first simple and useful example is the Babylonian method for computing the square root of $a > 0$, which consists in taking $f(x) = \frac{1}{2}\left(\frac{a}{x} + x\right)$, i.e. the mean value of $x$ and $a/x$, to approach the limit $x = \sqrt{a}$ (from whatever starting point $x_0 \gg 0$); a sketch of this iteration appears after this list. This is a special case of Newton's method, described below.
  • The fixed-point iteration $x_{n+1} = \cos x_n$ converges to the unique fixed point of the function $f(x) = \cos x$ for any starting point $x_0$. This example satisfies the assumptions of the Banach fixed-point theorem. Hence, the error after $n$ steps satisfies $|x_n - x| \le \frac{q^n}{1 - q}|x_1 - x_0| = Cq^n$ (where we can take $q = 0.85$ if we start from $x_0 = 1$). When the error is less than a multiple of $q^n$ for some constant $q < 1$, we say that we have linear convergence. The Banach fixed-point theorem allows one to obtain fixed-point iterations with linear convergence; a numerical check of this bound appears after this list.
  • The fixed-point iteration $x_{n+1} = 2x_n$ will diverge unless $x_0 = 0$. We say that the fixed point of $f(x) = 2x$ is repelling.
  • The requirement that $f$ is continuous is important, as the following example shows. The iteration
$$x_{n+1} = \begin{cases} \dfrac{x_n}{2}, & x_n \neq 0 \\ 1, & x_n = 0 \end{cases}$$

converges to 0 for all values of $x_0$. However, 0 is not a fixed point of the function

$$f(x) = \begin{cases} \dfrac{x}{2}, & x \neq 0 \\ 1, & x = 0 \end{cases}$$

as this function is not continuous at $x = 0$, and in fact has no fixed points.
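
The Babylonian method above is easy to try directly. A minimal sketch (the function name and step count are illustrative):

    def babylonian_sqrt(a, x0, n_steps=8):
        """Approximate sqrt(a) by iterating f(x) = (a/x + x) / 2."""
        x = x0
        for _ in range(n_steps):
            x = 0.5 * (a / x + x)
        return x

    # Even a starting point well above the root converges quickly.
    print(babylonian_sqrt(2.0, 10.0))  # approaches 1.41421356...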
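
The linear-convergence bound for $x_{n+1} = \cos x_n$ can also be checked numerically. In this sketch the reference value of the fixed point is obtained simply by iterating much longer, an illustrative shortcut rather than a standard routine:

    import math

    # High-accuracy reference for the fixed point of cos.
    x_star = 1.0
    for _ in range(10000):
        x_star = math.cos(x_star)

    q = 0.85                               # contraction constant from the text
    x0, x = 1.0, 1.0                       # start from x_0 = 1
    C = abs(math.cos(x0) - x0) / (1 - q)   # C = |x_1 - x_0| / (1 - q)
    for n in range(1, 11):
        x = math.cos(x)
        print(n, abs(x - x_star), C * q ** n)  # actual error vs. bound C q^n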


Applications

  • Newton's method for finding roots of a given differentiable function $f(x)$ is
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.$$
If we write $g(x) = x - \frac{f(x)}{f'(x)}$, we may rewrite the Newton iteration as the fixed-point iteration
$$x_{n+1} = g(x_n)$$
(a sketch appears after this list). If this iteration converges to a fixed point $x$ of $g$, then
$$x = g(x) = x - \frac{f(x)}{f'(x)}, \quad \text{so} \quad f(x)/f'(x) = 0.$$
Since the quotient $f(x)/f'(x)$ vanishes only when its numerator vanishes, $f(x) = 0$: $x$ is a root of $f$. Under the assumptions of the Banach fixed-point theorem, the Newton iteration, framed as a fixed-point method, converges at least linearly. However, a more detailed analysis shows quadratic convergence, i.e.,
$$|x_n - x| < Cq^{2^n},$$
under certain circumstances.
  • Halley's method is similar to Newton's method but, when it works correctly, its error is $|x_n - x| < Cq^{3^n}$ (cubic convergence). In general, it is possible to design methods that converge with speed $Cq^{k^n}$ for any $k \in \mathbb{N}$. As a general rule, the higher $k$ is, the less stable the method is and the more computationally expensive each step becomes. For these reasons, higher-order methods are typically not used.
  • Runge-Kutta methods and numerical ordinary differential equation solvers in general can be viewed as fixed-point iterations. Indeed, the core idea when analyzing the A-stability of ODE solvers is to start with the special case $y' = ay$, where $a$ is a complex number, and to check whether the ODE solver converges to the fixed point $y = 0$ whenever the real part of $a$ is negative.
  • The Picard-Lindelöf theorem, which shows that ordinary differential equations have solutions, is essentially an application of the Banach fixed-point theorem to a special sequence of functions which forms a fixed-point iteration, constructing the solution to the equation. Solving an ODE in this way is called Picard iteration, Picard's method, or the Picard iterative process; a rough numerical sketch appears after this list.
  • The iteration capability in Excel can be used to find solutions to the Colebrook equation to an accuracy of 15 significant figures.
  • Some of the "successive approximation" schemes used in dynamic programming to solve Bellman's functional equation are based on fixed point iterations in the space of the return function.
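
A minimal sketch of Newton's method written explicitly as the fixed-point iteration $x_{n+1} = g(x_n)$ with $g(x) = x - f(x)/f'(x)$; the example function $f(x) = x^2 - 2$ and all names are illustrative:

    def newton_fixed_point(f, f_prime, x0, n_steps=6):
        """Newton's method as the fixed-point iteration x_{n+1} = g(x_n)."""
        def g(x):
            return x - f(x) / f_prime(x)
        x = x0
        for _ in range(n_steps):
            x = g(x)
        return x

    # Root of f(x) = x^2 - 2: quadratic convergence to sqrt(2).
    print(newton_fixed_point(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0))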
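
A rough sketch of Picard iteration for $y' = f(t, y)$, $y(0) = y_0$: each iterate is $y_{k+1}(t) = y_0 + \int_0^t f(s, y_k(s))\,ds$, approximated here on a fixed grid with the trapezoidal rule. The grid, iteration count, and names are illustrative assumptions:

    def picard_iterate(f, y0, t_grid, n_iters=10):
        """Approximate Picard iterates of y' = f(t, y), y(t_grid[0]) = y0."""
        y = [y0] * len(t_grid)          # y_0(t) = y0, the constant function
        for _ in range(n_iters):
            new_y = [y0]
            integral = 0.0
            for i in range(1, len(t_grid)):
                h = t_grid[i] - t_grid[i - 1]
                # trapezoidal rule for the increment of the integral
                integral += 0.5 * h * (f(t_grid[i - 1], y[i - 1]) + f(t_grid[i], y[i]))
                new_y.append(y0 + integral)
            y = new_y
        return y

    # y' = y, y(0) = 1: the iterates approach e^t, so y(1) is close to 2.71828.
    ts = [i / 100.0 for i in range(101)]
    print(picard_iterate(lambda t, y: y, 1.0, ts)[-1])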



Properties

  • If a function $f$ defined on the real line with real values is Lipschitz continuous with Lipschitz constant $L < 1$, then this function has precisely one fixed point, and the fixed-point iteration converges towards that fixed point for any initial guess $x_0$. This theorem can be generalized to any complete metric space.
Proof of this theorem:
Since $f$ is Lipschitz continuous with Lipschitz constant $L < 1$, we have for the sequence $\{x_n,\ n = 0, 1, 2, \ldots\}$:
$$\begin{aligned} |x_2 - x_1| = |f(x_1) - f(x_0)| &\le L|x_1 - x_0|,\\ |x_3 - x_2| = |f(x_2) - f(x_1)| &\le L|x_2 - x_1|,\\ &\ \ \vdots \end{aligned}$$
and
$$|x_n - x_{n-1}| = |f(x_{n-1}) - f(x_{n-2})| \le L|x_{n-1} - x_{n-2}|.$$
Combining the above inequalities yields:
$$|x_n - x_{n-1}| \le L^{n-1}|x_1 - x_0|.$$
Since $L < 1$, $L^{n-1} \to 0$ as $n \to \infty$.
Therefore, for $m > n$ the triangle inequality gives $|x_m - x_n| \le \sum_{k=n}^{m-1} |x_{k+1} - x_k| \le \left(\sum_{k=n}^{m-1} L^k\right)|x_1 - x_0| \le \frac{L^n}{1-L}|x_1 - x_0|$, which tends to 0 as $n \to \infty$. Hence $\{x_n\}$ is a Cauchy sequence, and thus it converges to a point $x^*$.
For the iteration $x_n = f(x_{n-1})$, letting $n$ go to infinity on both sides of the equation and using the continuity of $f$ (which follows from Lipschitz continuity), we obtain $x^* = f(x^*)$. This shows that $x^*$ is a fixed point of $f$, so the iteration converges to a fixed point. The fixed point is unique: if $x^*$ and $y^*$ are both fixed points, then $|x^* - y^*| = |f(x^*) - f(y^*)| \le L|x^* - y^*|$, which forces $x^* = y^*$ since $L < 1$.
This property is very useful because not every fixed-point iteration converges. When constructing a fixed-point iteration, it is very important to make sure it converges. There are several fixed-point theorems guaranteeing the existence of a fixed point, but since the iteration function is continuous, we can usually use the above theorem to test whether an iteration converges. The proof of the generalized theorem for complete metric spaces is similar.
  • The speed of convergence of the iteration sequence can be increased by using a convergence acceleration method such as Aitken's delta-squared process. The application of Aitken's method to fixed-point iteration is known as Steffensen's method, and it can be shown that Steffensen's method yields a rate of convergence that is at least quadratic; a sketch follows.
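
A minimal sketch of Steffensen's method: apply Aitken's delta-squared formula to each pair of consecutive fixed-point iterates (the names and the cos example are illustrative):

    import math

    def steffensen(f, x0, tol=1e-12, max_iter=100):
        """Accelerate the iteration x = f(x) with Aitken's delta-squared step."""
        x = x0
        for _ in range(max_iter):
            x1 = f(x)
            x2 = f(x1)
            denom = x2 - 2.0 * x1 + x
            if denom == 0.0:                     # differences vanished: converged
                return x2
            x_new = x - (x1 - x) ** 2 / denom    # Aitken's delta-squared update
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    # Reaches the fixed point of cos in a handful of steps.
    print(steffensen(math.cos, 1.0))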



See also

  • Root-finding algorithm
  • Fixed-point theorem
  • Fixed-point combinator
  • Banach fixed-point theorem
  • Cobweb plot
  • Markov chain
  • Infinite compositions of analytic functions
  • Iterated function
  • Convergence and fixed point






External links

  • Fixed-point algorithms online
  • Fixed-point iteration online calculator (Mathematical Assistant on Web)
