
In mathematics and computational science, the Euler method (also called forward Euler method) is a first-order numerical procedure for solving ordinary differential equations (ODEs) with a given initial value. It is the most basic explicit method for numerical integration of ordinary differential equations and is the simplest Runge-Kutta method. The Euler method is named after Leonhard Euler, who treated it in his book Institutionum calculi integralis (published 1768-70).

The Euler method is a first-order method, which means that the local error (error per step) is proportional to the square of the step size, and the global error (error at a given time) is proportional to the step size. The Euler method often serves as the basis to construct more complex methods, e.g., the predictor-corrector method.


Informal geometrical description

Consider the problem of calculating the shape of an unknown curve which starts at a given point and satisfies a given differential equation. Here, a differential equation can be thought of as a formula by which the slope of the tangent line to the curve can be computed at any point on the curve, once the position of that point has been calculated.

The idea is that while the curve is initially unknown, its starting point, which we denote by A_0, is known. Then, from the differential equation, the slope of the curve at A_0 can be computed, and so can the tangent line.

Take a small step along that tangent line up to a point A_1. Along this small step, the slope does not change too much, so A_1 will be close to the curve. If we pretend that A_1 is still on the curve, the same reasoning as for the point A_0 above can be used. After several steps, a polygonal curve A_0 A_1 A_2 A_3 ... is computed. In general, this curve does not diverge too far from the original unknown curve, and the error between the two curves can be made small if the step size is small enough and the interval of computation is finite. Suppose, then, that we want to approximate the solution of the initial value problem

y'(t) = f(t, y(t)), \qquad y(t_0) = y_0.

Choose a value h for the size of every step and set t_n = t_0 + nh. Now, one step of the Euler method from t_n to t_{n+1} = t_n + h is:

y_{n+1} = y_n + h f(t_n, y_n).

The value of y_n is an approximation of the solution to the ODE at time t_n: y_n ≈ y(t_n). The Euler method is explicit, i.e. the solution y_{n+1} is an explicit function of y_i for i ≤ n.
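
As a concrete illustration, the update rule can be written as a few lines of Python; the function name and interface below are chosen for this sketch and are not taken from any particular library.

    def euler(f, t0, y0, h, n_steps):
        """Apply n_steps forward Euler steps to y' = f(t, y), y(t0) = y0."""
        t, y = t0, y0
        for _ in range(n_steps):
            y = y + h * f(t, y)       # y_{n+1} = y_n + h f(t_n, y_n)
            t = t + h
        return t, y

    # Example: y' = y, y(0) = 1, approximated at t = 4 with h = 1 (the worked example below)
    print(euler(lambda t, y: y, 0.0, 1.0, 1.0, 4))   # prints (4.0, 16.0)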

While the Euler method integrates a first-order ODE, any ODE of order N can be represented as a system of first-order ODEs: to treat the equation

y^{(N)}(t) = f\bigl( t, y(t), y'(t), \ldots, y^{(N-1)}(t) \bigr),

we introduce auxiliary variables z_1(t) = y(t), z_2(t) = y'(t), ..., z_N(t) = y^{(N-1)}(t) and obtain the equivalent equation

\mathbf{z}'(t) = \begin{pmatrix} z_1'(t) \\ \vdots \\ z_{N-1}'(t) \\ z_N'(t) \end{pmatrix}
             = \begin{pmatrix} y'(t) \\ \vdots \\ y^{(N-1)}(t) \\ y^{(N)}(t) \end{pmatrix}
             = \begin{pmatrix} z_2(t) \\ \vdots \\ z_N(t) \\ f(t, z_1(t), \ldots, z_N(t)) \end{pmatrix}.

This is a first-order system in the variable \mathbf{z}(t) and can be handled by Euler's method or, in fact, by any other scheme for first-order systems.
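
As an illustration (the particular equation is chosen here, not taken from the text), the second-order problem y'' = -y with y(0) = 1, y'(0) = 0 becomes the system z_1' = z_2, z_2' = -z_1, and the same Euler update is applied componentwise. A minimal NumPy sketch:

    import numpy as np

    def f(t, z):
        # z = (z1, z2) = (y, y'); this encodes the second-order equation y'' = -y
        return np.array([z[1], -z[0]])

    h, n_steps = 0.001, 1000
    t, z = 0.0, np.array([1.0, 0.0])      # y(0) = 1, y'(0) = 0, so the exact solution is y(t) = cos(t)
    for _ in range(n_steps):
        z = z + h * f(t, z)               # one vector-valued Euler step
        t += h
    print(z[0], np.cos(t))                # Euler approximation of y(1) versus cos(1) ≈ 0.5403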


Example

Given the initial value problem

y' = y, \qquad y(0) = 1,

we would like to use the Euler method to approximate y(4).

Using step size equal to 1 (h = 1)

The Euler method is

y_{n+1} = y_n + h f(t_n, y_n),

so first we must compute f(t_0, y_0). In this simple differential equation, the function f is defined by f(t, y) = y. We have

f(t_0, y_0) = f(0, 1) = 1.

By doing the above step, we have found the slope of the line that is tangent to the solution curve at the point (0, 1). Recall that the slope is defined as the change in y divided by the change in t, or Δy/Δt.

The next step is to multiply the above value by the step size h, which we take equal to one here:

h \cdot f(y_0) = 1 \cdot 1 = 1.

Since the step size is the change in t, when we multiply the step size by the slope of the tangent, we get a change in the y value. This value is then added to the initial y value to obtain the next value to be used for computations.

y_0 + h f(y_0) = y_1 = 1 + 1 \cdot 1 = 2.

The above steps should be repeated to find y_2, y_3 and y_4.

y_2 = y_1 + h f(y_1) = 2 + 1 \cdot 2 = 4,
y_3 = y_2 + h f(y_2) = 4 + 1 \cdot 4 = 8,
y_4 = y_3 + h f(y_3) = 8 + 1 \cdot 8 = 16.

Due to the repetitive nature of this algorithm, it can be helpful to organize the computations in a chart, as below, to avoid making errors.

  n    t_n    y_n    f(t_n, y_n)    h·f(t_n, y_n)    y_{n+1} = y_n + h·f(t_n, y_n)
  0    0      1      1              1                2
  1    1      2      2              2                4
  2    2      4      4              4                8
  3    3      8      8              8                16

The conclusion of this computation is that y_4 = 16. The exact solution of the differential equation is y(t) = e^t, so y(4) = e^4 ≈ 54.598. Thus, the approximation of the Euler method is not very good in this case. However, its behaviour is qualitatively correct: the approximate solution grows, as the exact solution does.
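
The whole computation amounts to a short loop; the sketch below simply repeats the recursion four times and compares the result with the exact value.

    import math

    y = 1.0
    for _ in range(4):        # four Euler steps of size h = 1
        y = y + 1.0 * y       # f(t, y) = y, so each step is y <- y + h*y
    print(y, math.exp(4))     # prints 16.0 and 54.598...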

Using other step sizes

As suggested in the introduction, the Euler method is more accurate if the step size h is smaller. The table below shows the result with different step sizes; the top row corresponds to the example in the previous section.

  step size h    Euler approximation of y(4)    error
  1              16.00                          38.60
  0.25           35.53                          19.07
  0.1            45.26                           9.34
  0.05           49.56                           5.04
  0.025          51.98                           2.62
  0.0125         53.26                           1.34

The error recorded in the last column of the table is the difference between the exact solution at t = 4 and the Euler approximation. In the bottom rows of the table, the step size is half the step size in the previous row, and the error is also approximately half the error in the previous row. This suggests that the error is roughly proportional to the step size, at least for fairly small values of the step size. This is true in general, also for other equations; see the section Global truncation error for more details.
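
This proportionality can be checked numerically. The sketch below (step sizes chosen only for illustration) repeatedly halves h for the example y' = y on [0, 4] and prints the error at t = 4; the printed errors roughly halve from one line to the next.

    import math

    def euler_at_4(h):
        y = 1.0
        for _ in range(round(4 / h)):
            y += h * y
        return y

    for h in [0.2, 0.1, 0.05, 0.025, 0.0125]:
        err = math.exp(4) - euler_at_4(h)
        print(f"h = {h:7.4f}   error = {err:8.4f}")
    # The ratio of successive errors tends to 1/2, i.e. first-order convergence.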

Other methods, such as the midpoint method, behave more favourably: the error of the midpoint method is roughly proportional to the square of the step size. For this reason, the Euler method is said to be a first-order method, while the midpoint method is second order.

We can extrapolate from the above table that the step size needed to get an answer that is correct to three decimal places is approximately 0.00001, meaning that we need 400,000 steps. This large number of steps entails a high computational cost. For this reason, people usually employ alternative, higher-order methods such as Runge-Kutta methods or linear multistep methods, especially if a high accuracy is desired.


Derivation

The Euler method can be derived in a number of ways. Firstly, there is the geometrical description mentioned above.

Another possibility is to consider the Taylor expansion of the function y around t_0:

y(t_0 + h) = y(t_0) + h y'(t_0) + \tfrac{1}{2} h^2 y''(t_0) + O(h^3).

The differential equation states that y' = f(t, y). If this is substituted into the Taylor expansion and the quadratic and higher-order terms are ignored, the Euler method arises. The Taylor expansion is used below to analyze the error committed by the Euler method, and it can be extended to produce Runge-Kutta methods.

A closely related derivation is to substitute the forward finite difference formula for the derivative,

y'(t_0) \approx \frac{y(t_0 + h) - y(t_0)}{h}

in the differential equation y' = f(t, y). Again, this yields the Euler method. A similar computation leads to the midpoint rule and the backward Euler method.
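
Written out, this substitution is only a rearrangement (spelled out here for completeness):

\frac{y(t_0 + h) - y(t_0)}{h} \approx f(t_0, y(t_0)) \quad \Longrightarrow \quad y(t_0 + h) \approx y(t_0) + h f(t_0, y(t_0)),

and replacing the exact values y(t_n) by the approximations y_n turns this into the update y_{n+1} = y_n + h f(t_n, y_n).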

Finally, one can integrate the differential equation from t_0 to t_0 + h and apply the fundamental theorem of calculus to get:

y(t_0 + h) - y(t_0) = \int_{t_0}^{t_0 + h} f(t, y(t)) \, dt.

Now approximate the integral by the left-hand rectangle method (with only one rectangle):

\int_{t_0}^{t_0 + h} f(t, y(t)) \, dt \approx h f(t_0, y(t_0)).

Combining both equations, one finds again the Euler method. This line of thought can be continued to arrive at various linear multistep methods.


Local truncation error

The local truncation error of the Euler method is the error made in a single step. It is the difference between the numerical solution after one step, y_1, and the exact solution at time t_1 = t_0 + h. The numerical solution is given by

y_1 = y_0 + h f(t_0, y_0).

For the exact solution, we use the Taylor expansion mentioned in the section Derivation above:

y(t_0 + h) = y(t_0) + h y'(t_0) + \tfrac{1}{2} h^2 y''(t_0) + O(h^3).

The local truncation error (LTE) introduced by the Euler method is given by the difference between these equations:

\mathrm{LTE} = y(t_0 + h) - y_1 = \tfrac{1}{2} h^2 y''(t_0) + O(h^3).

This result is valid if y has a bounded third derivative.

This shows that for small h, the local truncation error is approximately proportional to h^2. This makes the Euler method less accurate (for small h) than other higher-order techniques such as Runge-Kutta methods and linear multistep methods, for which the local truncation error is proportional to a higher power of the step size.
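
This quadratic behaviour can be observed directly. The sketch below takes a single Euler step for y' = y, y(0) = 1, whose exact solution e^h is known, and prints the one-step error divided by h^2 (the step sizes are chosen only for illustration):

    import math

    for h in [0.1, 0.05, 0.025, 0.0125]:
        y1 = 1.0 + h * 1.0        # one Euler step from y(0) = 1
        lte = math.exp(h) - y1    # exact value e^h minus the numerical value y_1
        print(f"h = {h:6.4f}   LTE = {lte:.2e}   LTE / h^2 = {lte / h**2:.4f}")
    # LTE / h^2 tends to 1/2, matching (1/2) h^2 y''(t_0), since y''(0) = 1 here.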

A slightly different formulation for the local truncation error can be obtained by using the Lagrange form for the remainder term in Taylor's theorem. If y has a continuous second derivative, then there exists a ξ ∈ [t_0, t_0 + h] such that

\mathrm{LTE} = y(t_0 + h) - y_1 = \tfrac{1}{2} h^2 y''(\xi).

In the above expressions for the error, the second derivative of the unknown exact solution y can be replaced by an expression involving the right-hand side of the differential equation. Indeed, it follows from the equation y' = f(t, y) that

y''(t_0) = \frac{\partial f}{\partial t}(t_0, y(t_0)) + \frac{\partial f}{\partial y}(t_0, y(t_0)) \, f(t_0, y(t_0)).

Global truncation error

The global truncation error is the error at a fixed time t, after however many steps the method needs to take to reach that time from the initial time. The global truncation error is the cumulative effect of the local truncation errors committed in each step. The number of steps is easily determined to be (t - t_0)/h, which is proportional to 1/h, and the error committed in each step is proportional to h^2 (see the previous section). Thus, it is to be expected that the global truncation error will be proportional to h.

This intuitive reasoning can be made precise. If the solution y has a bounded second derivative and f is Lipschitz continuous in its second argument, then the global truncation error (GTE) is bounded by

|\mathrm{GTE}| \leq \frac{hM}{2L} \bigl( e^{L(t - t_0)} - 1 \bigr),

where M is an upper bound on the second derivative of y on the given interval and L is the Lipschitz constant of f.

The precise form of this bound is of little practical importance, as in most cases the bound vastly overestimates the actual error committed by the Euler method. What is important is that it shows that the global truncation error is (approximately) proportional to h. For this reason, the Euler method is said to be first order.
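
For the test problem y' = y on [0, 4] (so L = 1 and M = e^4, the maximum of |y''| = e^t on the interval), the bound and the actual error can be compared directly; a small sketch:

    import math

    t_end, L, M = 4.0, 1.0, math.exp(4.0)   # Lipschitz constant of f(t, y) = y and max of |y''| on [0, 4]

    for h in [0.1, 0.05, 0.025]:
        y = 1.0
        for _ in range(round(t_end / h)):
            y += h * y
        actual = math.exp(t_end) - y
        bound = h * M / (2 * L) * (math.exp(L * t_end) - 1)
        print(f"h = {h:6.3f}   actual error = {actual:6.2f}   bound = {bound:7.2f}")
    # The bound holds, but it overestimates the true error by more than an order of magnitude here.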


Numerical stability

The Euler method can also be numerically unstable, especially for stiff equations, meaning that the numerical solution grows very large for equations where the exact solution does not. This can be illustrated using the linear equation

y' = -2.3 y, \qquad y(0) = 1.

The exact solution is y(t) = e^{-2.3t}, which decays to zero as t → ∞. However, if the Euler method is applied to this equation with step size h = 1, then the numerical solution is qualitatively wrong: it oscillates and grows. This is what it means to be unstable. If a smaller step size is used, for instance h = 0.7, then the numerical solution does decay to zero.
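
Both behaviours are easy to reproduce; a small sketch (the helper name is illustrative):

    def euler_decay(h, n_steps, k=-2.3):
        y, values = 1.0, [1.0]
        for _ in range(n_steps):
            y = y + h * k * y     # each step multiplies y by (1 + h*k)
            values.append(y)
        return values

    print(euler_decay(1.0, 6))    # 1, -1.3, 1.69, -2.197, ...: oscillates and grows, since |1 + hk| = 1.3 > 1
    print(euler_decay(0.7, 6))    # 1, -0.61, 0.3721, ...: decays to zero, since |1 + hk| = 0.61 < 1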

If the Euler method is applied to the linear equation y' = ky, then the numerical solution is unstable if the product hk is outside the region

\{ z \in \mathbf{C} \mid |z + 1| \leq 1 \}.

This region is called the (linear) stability region. In the example, k equals -2.3, so if h = 1 then hk = -2.3, which is outside the stability region, and thus the numerical solution is unstable.

This limitation, along with its slow convergence of error with h, means that the Euler method is not often used, except as a simple example of numerical integration.


Rounding errors

The discussion up to now has ignored the consequences of rounding error. In step n of the Euler method, the rounding error is roughly of the magnitude εy_n, where ε is the machine epsilon. Assuming that the rounding errors are all of approximately the same size, the combined rounding error in N steps is roughly Nεy_0 if all errors point in the same direction. Since the number of steps is inversely proportional to the step size h, the total rounding error is proportional to ε/h. In reality, however, it is extremely unlikely that all rounding errors point in the same direction. If instead it is assumed that the rounding errors are independent random variables, then the expected total rounding error is proportional to ε/√h.

Thus, for extremely small values of the step size, the truncation error will be small but the effect of rounding error may be big. Most of the effect of rounding error can be easily avoided if compensated summation is used in the formula for the Euler method.
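
A sketch of such a compensated (Kahan-style) Euler loop, in which a small correction term carries the part of each increment lost to rounding (the function name is illustrative):

    def euler_compensated(f, t0, y0, h, n_steps):
        """Forward Euler with Kahan (compensated) summation of the increments h*f(t, y)."""
        t, y, c = t0, y0, 0.0                 # c holds the rounding error lost so far
        for _ in range(n_steps):
            increment = h * f(t, y) - c
            new_y = y + increment
            c = (new_y - y) - increment       # the part of the increment that did not make it into new_y
            y = new_y
            t += h
        return y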


Modifications and extensions

A simple modification of the Euler method which eliminates the stability problems noted in the previous section is the backward Euler method:

y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}).

This differs from the (standard, or forward) Euler method in that the function f is evaluated at the end point of the step, instead of the starting point. The backward Euler method is an implicit method, meaning that the formula for the backward Euler method has y_{n+1} on both sides, so when applying the backward Euler method we have to solve an equation. This makes the implementation more costly.
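
For a general f the implicit equation is usually solved with a few Newton or fixed-point iterations; for the linear test equation y' = ky it can be solved in closed form, as in this sketch:

    def backward_euler_linear(k, h, y0, n_steps):
        # For y' = k*y the implicit relation y_{n+1} = y_n + h*k*y_{n+1}
        # can be solved exactly: y_{n+1} = y_n / (1 - h*k).
        y = y0
        for _ in range(n_steps):
            y = y / (1 - h * k)
        return y

    print(backward_euler_linear(-2.3, 1.0, 1.0, 6))   # decays monotonically towards zero even with h = 1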

Other modifications of the Euler method that help with stability yield the exponential Euler method or the semi-implicit Euler method.

More complicated methods can achieve a higher order (and more accuracy). One possibility is to use more function evaluations. This is illustrated by the midpoint method which is already mentioned in this article:

y_{n+1} = y_n + h f\Bigl( t_n + \tfrac{1}{2} h, \; y_n + \tfrac{1}{2} h f(t_n, y_n) \Bigr).

This leads to the family of Runge-Kutta methods.
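
A single midpoint step, written out in the same style as the earlier sketches (the helper name is illustrative):

    def midpoint_step(f, t, y, h):
        k1 = f(t, y)                                   # slope at the start of the step
        return y + h * f(t + h / 2, y + h / 2 * k1)    # slope re-evaluated at the midpoint

    # One step of size h = 1 for y' = y from y(0) = 1 gives 2.5; the exact value is e ≈ 2.718,
    # and a single forward Euler step gives 2.
    print(midpoint_step(lambda t, y: y, 0.0, 1.0, 1.0))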

The other possibility is to use more past values, as illustrated by the two-step Adams-Bashforth method:

y_{n+1} = y_n + \tfrac{3}{2} h f(t_n, y_n) - \tfrac{1}{2} h f(t_{n-1}, y_{n-1}).

This leads to the family of linear multistep methods.
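
A sketch of the two-step Adams-Bashforth method; since it needs two starting values, the first step below is taken with the forward Euler method, which is one common way to start the recursion:

    def adams_bashforth2(f, t0, y0, h, n_steps):
        t, y = t0, y0
        f_prev = f(t, y)
        y = y + h * f_prev                # one Euler step supplies the second starting value
        t += h
        for _ in range(n_steps - 1):
            f_curr = f(t, y)
            y = y + h * (1.5 * f_curr - 0.5 * f_prev)
            f_prev = f_curr
            t += h
        return y

    # y' = y with h = 0.5 over [0, 4]: prints about 38.8, closer to e^4 ≈ 54.6 than
    # forward Euler with the same step size (about 25.6).
    print(adams_bashforth2(lambda t, y: y, 0.0, 1.0, 0.5, 8))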


In popular culture

In the film Hidden Figures, Katherine Johnson resorts to Euler's method in figuring out how to get astronaut John Glenn back down from orbit.


See also

  • Crank-Nicolson method
  • Dynamic errors of numerical methods of ODE discretization
  • Gradient descent similarly uses finite steps, here to find minima of functions
  • List of Runge-Kutta methods
  • Linear multistep method
  • Numerical integration (for calculating definite integrals)
  • Numerical methods for ordinary differential equations


External links

  • Media related to Euler method at Wikimedia Commons
  • Euler method implementations in different languages by Rosetta Code
