%% Cell type:markdown id: tags:
# The First Derivative
%% Cell type:markdown id: tags:
```{note}
**Important things to retain from this block:**
* Understand what the derivative represents
* Recognize that the derivative can be approximated in different ways
```
%% Cell type:markdown id: tags:
:::{card} **Definition**:
The derivative of a function $f(x)$ evaluated at the point $x_0$ is
$$
\frac{df}{dx}\bigg\rvert_{x_0}=f'(x_0)=\lim_{x \to x_0}\frac{f(x)-f(x_0)}{x-x_0}.
$$
:::
:::{card} **Numerically**:
$x - x_0$ **cannot tend to 0!** Thus:
$$
f'(x_0) \approx \frac{f(x)-f(x_0)}{\Delta x}, \hspace{3mm} \text{where } \Delta x=x-x_0.
$$
:::
```{figure} figs/derivative_new2.png
:name: derivative_new2
derivatives of a function $f(x)$ at specific points $x_0$ and $x_1$
```
**What does the derivative represent?** A rate of change! In this case, how fast $f(x)$ changes with respect to $x$ (or in the direction of $x$). The derivative can also be with respect to time or another independent variable (e.g., $y$, $z$, $t$, etc.). Look at the figure above. The first derivative evaluated at $x_0$ is illustrated as a simple slope, as is the first derivative evaluated at $x_1$. You can also see that the rate of change at $x_0$ is larger than at $x_1$.
%% Cell type:markdown id: tags:
## Numerical Freedom to compute derivatives
In the figure above, the derivative approximation was illustrated arbitrarily using two points: the one at which the derivative was evaluated and another point in front of it. However, there are more possibilities. Instead of using absolute points at $x_{-1,0,1}$, a more general notation is used: $x_{i-1,i,i+1}$. The simplest ways to approximate the derivative evaluated at the point $x_i$ use two points:
$$
\text{forward: }\hspace{3mm} \frac{df}{dx}\bigg\rvert_{x_i}\approx\frac{f(x_{i+1})-f(x_{i})}{x_{i+1}-x_i} \hspace{5mm} \text{backward: } \hspace{3mm} \frac{df}{dx}\bigg\rvert_{x_i}\approx\frac{f(x_{i})-f(x_{i-1})}{x_{i}-x_{i-1}} \hspace{5mm} \text{central: } \hspace{3mm} \frac{df}{dx}\bigg\rvert_{x_i}\approx\frac{f(x_{i+1})-f(x_{i-1})}{x_{i+1}-x_{i-1}}
$$
```{figure} figs/derivative_ways.png
:name: derivative_ways
graphs illustrating three different ways to approximate derivatives.
```
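%% Cell type:markdown id: tags:
As a quick illustration (a minimal sketch, not part of the original material; the function $f(x)=\sin(x)$, the point $x_i=1$ and the step $\Delta x=0.1$ are arbitrary choices), the three formulas can be evaluated and compared against the exact derivative $\cos(1)$:
%% Cell type:code id: tags:
``` python
import numpy as np

f = np.sin          # test function (arbitrary choice)
x_i, dx = 1.0, 0.1  # evaluation point and step size (arbitrary choices)

forward  = (f(x_i + dx) - f(x_i)) / dx
backward = (f(x_i) - f(x_i - dx)) / dx
central  = (f(x_i + dx) - f(x_i - dx)) / (2 * dx)

print(f"forward  = {forward:.6f}")
print(f"backward = {backward:.6f}")
print(f"central  = {central:.6f}")
print(f"exact    = {np.cos(x_i):.6f}")  # d/dx sin(x) = cos(x)
```
Note how the central estimate lands closest to the exact value; the Taylor series block makes this precise.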
%% Cell type:markdown id: tags:
......
%% Cell type:markdown id: tags:
# Taylor Series Expansion
%% Cell type:markdown id: tags:
```{note}
**Important things to retain from this block:**
* Understand how to compute the Taylor series expansion (TSE) of a given function around a given point, and its limitations
* Understand how numerical derivatives are derived from TSE
* Understand that the error magnitude depends on the expression used
**Things you do not need to know:**
* Any kind of Taylor expansion of a function by heart
```
%% Cell type:markdown id: tags:
## Definition
The Taylor Series Expansion (TSE) of an arbitrary function $f(x)$ around $x=x_i$ is a polynomial of varying order given by
$$f(x) = f(x_i) + (x-x_i)f'(x_i)+\frac{(x-x_i)^2}{2!}f''(x_i)+ \frac{(x-x_i)^3}{3!} f'''(x_i)+ ...$$
This series is **exact** as long as we include infinitely many terms. We, however, are limited to a **truncated** expression: an **approximation** of the real function.
Keeping the terms up to third order and defining $\Delta x=x-x_i$, the TSE can be rewritten as
$$f(x_i+\Delta x) = f(x_i) + \Delta x f'(x_i)+\frac{\Delta x^2}{2!}f''(x_i)+ \frac{\Delta x^3}{3!} f'''(x_i)+ \mathcal{O}(\Delta x)^4.$$
Here $\Delta x$ is the distance between the point we "know" and the desired point. $\mathcal{O}(\Delta x)^4$ means that we do not take into account the terms associated with $\Delta x^4$ and higher, and therefore **that is the truncation error order**. From here we can also conclude that **the larger the step $\Delta x$, the larger the error**!
Now let's see an example.
---
:::{card} Example
Compute $e^{0.2}$ using the TSE around $x_i=0$, keeping terms up to third order
```{admonition} Solution
:class: tip, dropdown
We want to evaluate $e^x=e^{x_i+\Delta x}=e^{0.2}$. Therefore, $\Delta x=0.2$. The value of $f(x_i=0)=e^0=1$. For the case of this exponential, the derivatives have the same value $f'(x_i=0)=f''(x_i=0)=f'''(x_i=0)=e^0=1$. The TSE looks like
$$e^{x_i+\Delta x} \approx e^0 + \Delta x e^0 + \frac{\Delta x^2}{2!}e^0 + \frac{\Delta x^3}{3!}e^0 \approx 1 + \Delta x + \frac{\Delta x^2}{2} + \frac{\Delta x^3}{6}\approx 1 + 0.2 + 0.02 + 0.00133 = 1.22133$$
```
Compute the TSE polynomial truncated until $\mathcal{O}(\Delta x)^6$ for $f(x)=\sin(x)$ around $x_i=0$
```{admonition} Solution
:class: tip, dropdown
Applying the definition of TSE:
$$
\sin(x) \approx \sin(x_i+\Delta x) \approx \sin(0) + x\cos(0) - \frac{x^2}{2}\sin(0) - \frac{x^3}{6}\cos(0) + \frac{x^4}{24}\sin(0) + \frac{x^5}{120}\cos(0) +\mathcal{O}(\Delta x)^6
$$
$$
\sin(x) \approx x-\frac{x^3}{6}+\frac{x^5}{120} \text{ with an error }\mathcal{O}(\Delta x)^6.
$$
Note that $x$ and $\Delta x$ can be used interchangeably when $x_i=0$, since $\Delta x=x-x_i$. If this were not the case, for example if $x_i=\pi$, the result would be completely different.
```
Compute the TSE polynomial truncated until $\mathcal{O}(\Delta x)^6$ for $f(x)=\sin(x)$ around $x_i=\pi$
```{admonition} Solution
:class: tip, dropdown
Applying the definition of TSE:
$$\sin(x) \approx \sin(x_i+\Delta x) \approx \sin(\pi) + (x-\pi)\cos(\pi) - \frac{(x-\pi)^2}{2}\sin(\pi) - \frac{(x-\pi)^3}{6}\cos(\pi) + \frac{(x-\pi)^4}{24}\sin(\pi) + \frac{(x-\pi)^5}{120}\cos(\pi) +\mathcal{O}(\Delta x)^6$$
$$\sin(x) \approx \sin(x_i+\Delta x) \approx -(x-\pi) + \frac{(x-\pi)^3}{6} - \frac{(x-\pi)^5}{120} \text{ with an error } \mathcal{O}(\Delta x)^6$$
```
:::
---
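%% Cell type:markdown id: tags:
As a quick check of the first example, the sketch below (ours, not part of the original notebook) compares the truncated TSE value with Python's built-in `math.exp`:
%% Cell type:code id: tags:
``` python
import math

dx = 0.2
# TSE of e^x around x_i = 0, keeping terms up to third order
tse = 1 + dx + dx**2 / 2 + dx**3 / 6
print(f"TSE approximation: {tse:.5f}")           # 1.22133
print(f"math.exp(0.2):     {math.exp(dx):.5f}")  # 1.22140
print(f"truncation error:  {abs(math.exp(dx) - tse):.1e}")
```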
%% Cell type:markdown id: tags:
:::{card}
How good is this polynomial? How does the result vary with the number of terms used?
Press `rocket` -->`Live Code` to interact with the figure. You should then click 'run all' in the cell below.
:::
%% Cell type:code id: tags:thebe-remove-input-init,auto-execute-page
``` python
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import widgets, interact

def taylor_plt(order_aprox):
    x = np.linspace(-2 * np.pi, 2 * np.pi, 100)
    plt.plot(x, np.sin(x), label="sin(x)")
    if order_aprox == '1st order':
        plt.plot(x, x, label="1st order approximation")
    elif order_aprox == '3rd order':
        plt.plot(x, x - (1/6 * x**3), label="3rd order approximation")
    elif order_aprox == '5th order':
        plt.plot(x, x - (1/6 * x**3) + (1/120 * x**5), label="5th order approximation")
    elif order_aprox == '7th order':
        plt.plot(x, x - (1/6 * x**3) + (1/120 * x**5) - (1/5040 * x**7), label="7th order approximation")
    plt.ylim(-5, 5)
    plt.axis('off')
    plt.legend()
    plt.show()
```
%% Cell type:code id: tags:
``` python
#run to interact with the figure
interact(taylor_plt, order_aprox = widgets.ToggleButtons(options=['1st order', '3rd order', '5th order','7th order']));
```
%% Output
%% Cell type:markdown id: tags:
Relevant conclusions:
- The 1st order approximation, which depends only on the first derivative evaluation, is a straight line.
- The more terms used (the larger the order), the smaller the error.
- The farther from the starting point (e.g., in the plots $x_i=0$), the larger the error.
%% Cell type:markdown id: tags:
## Use of TSE to define the first derivative
The TSE definition when $\Delta x = x_{i+1} - x_i$ can be rewritten to solve for the first derivative:
$$
f'(x_i)=\frac{f(x_i+\Delta x)-f(x_i)}{\Delta x} - \frac{\Delta x}{2!}f''(x_i)- \frac{\Delta x^2}{3!} f'''(x_i) - ...
$$
By truncating the derivative to avoid the calculation of the second derivative, we find the **forward difference**
$$
f'(x_i)=\frac{f(x_i+\Delta x)-f(x_i)}{\Delta x} + \mathcal{O}(\Delta x).
$$
This is the same definition as the numerical derivative! Now we have the added knowledge that it comes with an error of the order of the step size.
<br><br>
The **backward difference** can be found by replacing $\Delta x$ with $-\Delta x$, where now $\Delta x=x_i-x_{i-1}$:
$$
f'(x_i)=\frac{f(x_i)-f(x_i-\Delta x)}{\Delta x} + \frac{\Delta x}{2!}f''(x_i)- \frac{\Delta x^2}{3!} f'''(x_i) - ...
$$
$$
f'(x_i)=\frac{f(x_i)-f(x_i-\Delta x)}{\Delta x}+ \mathcal{O}(\Delta x).$$
<br><br>
The **central difference** can be found by summing the forward and backward difference expressions of the derivative and dividing the result by 2:
$$
f'(x_i)=\frac{f(x_i+\Delta x)-f(x_i-\Delta x)}{2\Delta x} - \frac{\Delta x^2}{3!} f'''(x_i) - ...
$$
$$
f'(x_i)=\frac{f(x_i+\Delta x)-f(x_i-\Delta x)}{2\Delta x}+ \mathcal{O}(\Delta x^2).
$$
The second derivative terms cancel each other out, therefore **the error order is the step size squared**! From here, it is obvious that the central difference is more accurate. You can notice it as well intuitively in the figure of the previous chapter.
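%% Cell type:markdown id: tags:
These error orders can be verified empirically. The sketch below (ours; $f(x)=e^x$ at $x_i=0$, where $f'(0)=1$, is an arbitrary test case) halves the step repeatedly: the forward-difference error roughly halves each time ($\mathcal{O}(\Delta x)$), while the central-difference error drops by a factor of about four ($\mathcal{O}(\Delta x^2)$):
%% Cell type:code id: tags:
``` python
import numpy as np

f, exact = np.exp, 1.0  # f'(0) = e^0 = 1
for dx in [0.1, 0.05, 0.025, 0.0125]:
    err_fwd = abs((f(dx) - f(0.0)) / dx - exact)        # O(dx)
    err_cen = abs((f(dx) - f(-dx)) / (2 * dx) - exact)  # O(dx^2)
    print(f"dx = {dx:<7} forward error = {err_fwd:.2e}  central error = {err_cen:.2e}")
```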
%% Cell type:markdown id: tags:
:::{card} Exercises
Derive the forward difference for the first derivative $f'(x_i)$ with a first order error $\mathcal{O}(\Delta x)$ using Taylor series
```{admonition} Solution
:class: tip, dropdown
Start from the TSE
(tip: first derivative and first order error, 1+1=2, so we need a TSE truncated at the 2nd order):
$$
f(x_i+\Delta x) = f(x_i) + \Delta x f'(x_i)+ \mathcal{O}(\Delta x)^2
$$
Rearrange to get:
$$
- \Delta x f'(x_i)= -f(x_i+\Delta x)+ f(x_i)+\mathcal{O}(\Delta x)^2
$$
Divide by $ - \Delta x $:
$$
f'(x_i)= \frac{f(x_i+\Delta x)- f(x_i)}{\Delta x } + \mathcal{O}(\Delta x)
$$
```
Use Taylor series to derive the backward difference for the first derivative $f'(x_i)$ with a first order error $\mathcal{O}(\Delta x)$
```{admonition} Solution
:class: tip, dropdown
Start from the TSE
(tip: first derivative and first order error, 1+1=2, so we need a TSE truncated at the 2nd order):
$$f(x_i-\Delta x) = f(x_i) - \Delta x f'(x_i)+ \mathcal{O}(\Delta x)^2$$
Rearrange to get:
$$ \Delta x f'(x_i)= -f(x_i-\Delta x)+ f(x_i)+\mathcal{O}(\Delta x)^2 $$
Divide by $ \Delta x $:
$$ f'(x_i)= \frac{ f(x_i)-f(x_i-\Delta x)}{\Delta x } + \mathcal{O}(\Delta x) $$
```
Use Taylor series to derive the central difference for the first derivative $f'(x_i)$ with a 2nd order error $\mathcal{O}(\Delta x)^2$
```{admonition} Solution
:class: tip, dropdown
Start from the TSE of $f(x_i+\Delta x)$ and $f(x_i-\Delta x)$
(tip: first derivative and second order error (1+2=3), we need to do a TSE truncated until the 3rd order):
$$
f(x_i+\Delta x) = f(x_i) + \Delta x f'(x_i)+ \frac{\Delta x^2}{2}f''(x_i) + \mathcal{O}(\Delta x)^3 \hspace{5mm} (1)
$$
$$
f(x_i-\Delta x) = f(x_i) - \Delta x f'(x_i)+ \frac{\Delta x^2}{2}f''(x_i) + \mathcal{O}(\Delta x)^3 \hspace{5mm} (2)
$$
Rearrange both equations:
$$
\Delta x f'(x_i)=f(x_i+\Delta x) - f(x_i) - \frac{\Delta x^2}{2}f''(x_i) + \mathcal{O}(\Delta x)^3 \hspace{5mm} (3)
$$
$$
\Delta x f'(x_i) = - f(x_i-\Delta x)+ f(x_i) + \frac{\Delta x^2}{2}f''(x_i) + \mathcal{O}(\Delta x)^3 \hspace{5mm} (4)
$$
Add equations (3) and (4); the second-derivative terms cancel:
$$
2\Delta x f'(x_i) = f(x_i+\Delta x) - f(x_i-\Delta x) + \mathcal{O}(\Delta x)^3 \hspace{5mm} (5)
$$
Divide by $2\Delta x$:
$$
f'(x_i) = \frac{f(x_i+\Delta x) - f(x_i-\Delta x)}{2\Delta x } + \mathcal{O}(\Delta x)^2 \hspace{5mm} (6)
$$
```
:::
%% Cell type:markdown id: tags:
## TSE to define second derivatives
There are equations that require second derivatives. One example is the diffusion equation, which is used in every field of knowledge. The 1-D diffusion equation reads:
$$
\frac{\partial f}{\partial t}=v\frac{\partial^2 f}{\partial x^2} \text{ where } v \text{ is the diffusion coefficient.}
$$
For the moment we will use TSE to find **only** a numerical expression of the second derivative $\frac{\partial^2 f}{\partial x^2}$.
The procedure is simple but cumbersome. The general idea is to isolate the second derivative in the TSE without there being a dependency on other derivatives. Below you can find more details about the algebraic manipulation (if you are curious), but you do not need to know it. Here is the result:
$$
f''(x_i)=\frac{f(x_i+2\Delta x)-2f(x_i+\Delta x)+f(x_i)}{\Delta x^2}+ \mathcal{O}(\Delta x).
$$
This is the **forward** difference approximation of the second derivative. Two aspects come to mind: one more point is needed to compute the second derivative and the error (of the simplest second derivative) is also of the order of the step. There are also **backward** and **central** approximations of the second derivative (not shown here).
%% Cell type:markdown id: tags:
**Derivation of the forward difference approximation of the second derivative**
First define a TSE for a point two steps farther from $x_i$:
$$
f(x_i+2\Delta x) = f(x_i) + 2\Delta x f'(x_i)+\frac{(2\Delta x)^2}{2!}f''(x_i)+ \mathcal{O}(\Delta x)^3.
$$
Now multiply by two the TSE for a point one step farther from $x_i$:
$$
2f(x_i+\Delta x) = 2f(x_i) + 2\Delta x f'(x_i)+\frac{2\Delta x^2}{2!}f''(x_i) + \mathcal{O}(\Delta x)^3.
$$
By subtracting the second expression from the first, the first derivative disappears:
$$
f(x_i+2\Delta x) - 2f(x_i+\Delta x) = -f(x_i) + \frac{2\Delta x^2}{2!}f''(x_i)+ \mathcal{O}(\Delta x)^3.
$$
By solving for $f''$ we obtain the **forward** expression:
$$
f''(x_i)=\frac{f(x_i+2\Delta x)-2f(x_i+\Delta x)+f(x_i)}{\Delta x^2}+ \mathcal{O}(\Delta x).
$$
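%% Cell type:markdown id: tags:
As a quick sanity check of this forward formula (our own sketch; the test function $f(x)=\sin(x)$, for which $f''(x)=-\sin(x)$, is an arbitrary choice):
%% Cell type:code id: tags:
``` python
import numpy as np

f = np.sin
x_i, dx = 1.0, 0.01

# forward difference approximation of the second derivative
d2f = (f(x_i + 2 * dx) - 2 * f(x_i + dx) + f(x_i)) / dx**2
print(f"approximation = {d2f:.5f}")
print(f"exact -sin(1) = {-np.sin(x_i):.5f}")
```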
%% Cell type:markdown id: tags:
## Higher-accuracy Finite-Difference Approximations
So far we have found expressions with a relatively large error magnitude $\mathcal{O}(\Delta x)$, with the exception of the central difference approximation. Sometimes a higher accuracy is desired, for which better expressions can be found. The procedure is similar to the algebraic manipulation used to find the **forward** approximation of the second derivative: a series of TSEs are defined at varying distances from $x_i$, and after algebraic manipulation a more accurate expression is found. For example, the **forward** approximation of the first derivative:
$$
f'(x_i)=\frac{-f(x_i+2\Delta x)+4f(x_i+\Delta x)-3f(x_i)}{2\Delta x}+ \mathcal{O}(\Delta x^2).
$$
The error magnitude has improved to $\mathcal{O}(\Delta x^2)$ at the expense of using one more point. The accuracy can be even better by using more points. It is important to note that central differences are more accurate than forward and backward differences when using the same number of points.
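%% Cell type:markdown id: tags:
To see the gain in accuracy, the following sketch (ours; again using the arbitrary test case $f(x)=e^x$ at $x_i=0$) compares the two-point forward formula with this three-point version:
%% Cell type:code id: tags:
``` python
import numpy as np

f, exact = np.exp, 1.0  # f'(0) = 1
for dx in [0.1, 0.05, 0.025]:
    fwd2 = (f(dx) - f(0.0)) / dx                              # O(dx)
    fwd3 = (-f(2 * dx) + 4 * f(dx) - 3 * f(0.0)) / (2 * dx)  # O(dx^2)
    print(f"dx = {dx:<6} two-point error = {abs(fwd2 - exact):.2e}  "
          f"three-point error = {abs(fwd3 - exact):.2e}")
```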
%% Cell type:markdown id: tags:
:::{card} Exercise
Derive a first derivative $f'(x_i)$ with a 2nd order error $\mathcal{O}(\Delta x^2)$ finite-difference equation, using the nodes $x_i$, $x_i+\Delta x$ and $x_i+2\Delta x$ (or, equivalently, $x_i$, $x_{i+1}$ and $x_{i+2}$).
The finite-difference equation will have the following form:
$$
f'(x_i)= \frac{\alpha f(x_i)+ \beta f(x_i+\Delta x) + \gamma f(x_i+2\Delta x)}{\Delta x} + \mathcal{O}(\Delta x^2)
$$
Use the Taylor series to find $\alpha$, $\beta$ and $\gamma$
```{admonition} Solution
:class: tip, dropdown
Taylor series for $f(x_i+\Delta x)$ and $f(x_i+2\Delta x)$ (tip: first order derivative and second error order, 1+2=3, meaning a truncation at the 3rd order):
$$
f(x_i+\Delta x) = f(x_i) + \Delta x f'(x_i)+\frac{(\Delta x)^2}{2!}f''(x_i)+ \mathcal{O}(\Delta x)^3.
$$
$$
f(x_i+2\Delta x) = f(x_i) + 2\Delta x f'(x_i)+\frac{(2\Delta x)^2}{2!}f''(x_i)+ \mathcal{O}(\Delta x)^3.
$$
Multiply the TSE for $f(x_i+\Delta x)$ by 4 and expand the term $\frac{(2\Delta x)^2}{2!}f''(x_i)$ in the TSE for $f(x_i+2\Delta x)$:
$$
4f(x_i+\Delta x)= 4f(x_i) + 4\Delta x f'(x_i)+2(\Delta x)^2f''(x_i)+ 4\mathcal{O}(\Delta x)^3.
$$
$$
f(x_i+2\Delta x) = f(x_i) + 2\Delta x f'(x_i)+2(\Delta x)^2f''(x_i)+ \mathcal{O}(\Delta x)^3.
$$
Now take $4f(x_i+\Delta x)-f(x_i+2\Delta x)$:
$$
4f(x_i+\Delta x)-f(x_i+2\Delta x)= 4f(x_i) + 4\Delta x f'(x_i)+2(\Delta x)^2f''(x_i)+ 4\mathcal{O}(\Delta x)^3 - f(x_i) - 2\Delta x f'(x_i)-2(\Delta x)^2f''(x_i)- \mathcal{O}(\Delta x)^3.
$$
$$
= 3f(x_i)+ 2\Delta x f'(x_i) + 3\mathcal{O}(\Delta x)^3
$$
Rearrange for $f'(x_i)$:
$$
- 2\Delta x f'(x_i) = 3f(x_i)-4f(x_i+\Delta x)+f(x_i+2\Delta x)+ 3\mathcal{O}(\Delta x)^3
$$
Divide by $-2 \Delta x$:
$$
f'(x_i)= \frac{-\frac{3}{2}f(x_i)+2f(x_i+\Delta x)-\frac{1}{2} f(x_i+2\Delta x)}{\Delta x} +\mathcal{O}(\Delta x)^2
$$
***The solution is: $\alpha=-\frac{3}{2}, \beta= 2, \gamma= -\frac{1}{2}$***
```
:::
......
%% Cell type:markdown id: tags:
# Numerical Integration
%% Cell type:markdown id: tags:
```{note}
**Important things to retain from this block:**
* Know how to apply different discrete integration techniques
* Have an idea of how the truncation errors change from technique to technique
```
%% Cell type:markdown id: tags:
## Motivation
Numerical integration is often used in diverse fields of engineering, for example to develop advanced numerical modelling techniques or to analyze field or laboratory data where a desired physical quantity is an integral of other measured quantities. The analytic integral is not always known and then has to be computed (approximated) numerically. For example, the variance density spectrum (in ocean waves) is a discrete signal obtained from the water level variations, and its **integral** is a quantity that characterizes the wave field.
%% Cell type:markdown id: tags:
## Discrete Integration
%% Cell type:markdown id: tags:
In a one-dimensional situation, the integral
$$
I=\int_a^b f(x)dx
$$
is defined between points and represents the area under the curve $f$ limited by $[a,b]$. A visual way to approximate that area would be to use squared paper and count the number of squares that fall under the curve. In the example figure, you can count 11 squares.
**Numerical integration approximates that area** in a strict way. There are several paths to follow using polynomial representations. These widely used formulas are known as the Newton-Cotes formulas.
* First, we split the interval of integration $(a,b)$ into several partial intervals.
* Then, we replace $f(x)$ by the interpolation polynomial in each interval and calculate the integral over it.
* Finally, we sum all the areas to find (approximate) the integral.
```{figure} figs/integral_illustration.png
:name: integral_illustration
Two illustrations of the integral as the area under the curve over the interval $[a,b]$
```
%% Cell type:markdown id: tags:
### Left Riemann Integration
```{figure} figs/left_riemann_new.png
:name: left_riemann_new.png
Illustrating the left Riemann Sum over interval [a,b]
```
Using the lowest polynomial order leads to the Riemann integral, also known as the rectangle rule. Basically, the function is divided into segments with a known $\Delta x$, and the area of each segment is represented by a rectangle. Its height, related to the discretized points of the function, can be defined by the left or by the right endpoint.
Therefore, we get the left Riemann integration formula given by
$$
\int_{a}^{b} f(x) \, dx \approx \sum_{i=0}^{n-1} f(x_i)\Delta x, \hspace{5mm} \Delta x = \frac{b - a}{n}
$$
where $n$ is the number of segments.
### Right Riemann Integration
```{figure} figs/right_riemann_new.png
:name: right_riemann_new.png
Illustrating the right Riemann Sum over interval [a,b]
```
In an analogous way, the formula for the right Riemann integration is:
$$
\int_{a}^{b} f(x) \, dx \approx \sum_{i=1}^{n} f(x_i)\Delta x \approx \sum_{i=0}^{n-1} f(x_{i+1})\Delta x
$$
Note the difference with respect to the left Riemann integration, especially the starting and end points.
### Midpoint Rule
```{figure} figs/midpoint_new.png
:name: midpoint_new.png
Illustrating the Midpoint Rule over interval [a,b]
```
The midpoint rule defines the height of the rectangle at the midpoint between the two end points of each interval:
$$
\int_{a}^{b} f(x) \, dx \approx \sum_{i=0}^{n-1} f(\frac{x_{i}+x_{i+1}}{2}) \Delta x
$$
### Trapezoidal Rule
```{figure} figs/trapezoidal_new.png
:name: trapezoidal_new.png
Illustrating the Trapezoidal Rule over interval [a,b]
```
The trapezoidal rule uses a polynomial of the first order. As a result, the area to be computed is a trapezoid (the top of the rectangle now has a slope) whose parallel sides are the values taken by the function at the end points of each partial interval, and whose height is the distance $\Delta x$ between two consecutive points. Mathematically, it reduces to the average of the left and right rectangles:
$$
\int_{a}^{b} f(x) \, dx \approx \sum_{i=0}^{n-1} \frac{f(x_{i}) + f(x_{i+1})}{2} \Delta x
$$
### Simpson’s Rule
```{figure} figs/Simpson_s.png
:name: Simpson_s.png
Illustrating the Simpson Rule over interval [a,b]
```
Simpson’s rule uses a second order polynomial. It computes the integral of the function $f(x)$ by fitting parabolas, each through three consecutive function points, and adding the areas under them (see $p_1(x)$ and $p_2(x)$ in the figure). In formula form:
$$
\int_{a}^{b} f(x) \, dx \approx \sum_{i=1}^{n/2} \frac{f(x_{2i-2}) + 4f(x_{2i-1}) + f(x_{2i})}{6} 2\Delta x
$$
Note that the interval length is multiplied by two, since each parabola spans two intervals (three points); this requires $n$ to be even.
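%% Cell type:markdown id: tags:
The five rules translate directly into code. Below is a minimal sketch (our own helper functions, not part of the original notebook) applied to $\int_0^\pi \sin(x)\,dx = 2$:
%% Cell type:code id: tags:
``` python
import numpy as np

def left_riemann(f, a, b, n):
    dx = (b - a) / n
    x = a + dx * np.arange(n)          # left endpoints x_0 .. x_{n-1}
    return np.sum(f(x)) * dx

def right_riemann(f, a, b, n):
    dx = (b - a) / n
    x = a + dx * np.arange(1, n + 1)   # right endpoints x_1 .. x_n
    return np.sum(f(x)) * dx

def midpoint(f, a, b, n):
    dx = (b - a) / n
    x = a + dx * (np.arange(n) + 0.5)  # interval midpoints
    return np.sum(f(x)) * dx

def trapezoid(f, a, b, n):
    dx = (b - a) / n
    x = a + dx * np.arange(n + 1)
    return (np.sum(f(x)) - 0.5 * (f(a) + f(b))) * dx  # endpoints weighted 1/2

def simpson(f, a, b, n):               # n must be even
    dx = (b - a) / n
    y = f(a + dx * np.arange(n + 1))
    return dx / 3 * (y[0] + y[-1] + 4 * np.sum(y[1:-1:2]) + 2 * np.sum(y[2:-1:2]))

for rule in (left_riemann, right_riemann, midpoint, trapezoid, simpson):
    print(f"{rule.__name__:>13}: {rule(np.sin, 0, np.pi, 10):.6f}")  # exact: 2
```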
%% Cell type:markdown id: tags:
:::{card} ***Exercise***
Estimate the integral $\int_{0.5}^1 x^4 \, dx$ with a total number of steps $n=2$ using the:
- Left Riemann sum
- Midpoint rule
- Trapezoidal rule

and then compute the real error $\epsilon= | f(x)- \hat{f}(x)|$, where $f(x)$ is the exact integral and $\hat{f}(x)$ the approximated integral.
`````{admonition} Tip
:class: tip
Tip: feel free to use the code cell below as a calculator.
`````
```{admonition} **Solution**
:class: tip, dropdown
- First, find $\Delta x$:
$$
\Delta x = \frac{b-a}{n}=\frac{1-0.5}{2}=0.25
$$
**Left Riemann Sum**:
$$
\sum_{i=0}^{n-1} f(x_i)\Delta x = f(x_0) \Delta x + f(x_1) \Delta x
$$
$$
0.5^4 \cdot 0.25 + 0.75^4 \cdot 0.25 = 0.0947265625
$$
**Midpoint Rule**:
$$
\sum_{i=0}^{n-1} f(\frac{x_{i}+x_{i+1}}{2}) \Delta x = f(\frac{x_0+x_1}{2}) \Delta x + f(\frac{x_1+x_2}{2}) \Delta x
$$
$$
0.625^4 \cdot 0.25 + 0.875^4 \cdot 0.25= 0.1846923828125
$$
**Trapezoidal Rule**:
$$
\sum_{i=0}^{n-1} \frac{f(x_{i}) + f(x_{i+1})}{2} \Delta x = \frac{f(x_{0}) + f(x_{1})}{2} \Delta x + \frac{f(x_{1}) + f(x_{2})}{2} \Delta x
$$
$$
\frac{f(0.5) + f(0.75)}{2} \Delta x + \frac{f(0.75) + f(1)}{2} \Delta x = \frac{(0.5^4 +0.75^4)}{2} \cdot 0.25 + \frac{(0.75^4 +1^4)}{2} \cdot 0.25= 0.2119140625
$$
**Exact**
$$
\int_{0.5}^1 x^4 \, dx = \frac{1}{5}x^5\bigg|_{0.5}^{1} = \frac{1}{5}1^5- \frac{1}{5}0.5^5= 0.19375
$$
**Error**
$$
\epsilon= | f(x)- \hat{f}(x)|
$$
$$\text{error Left Riemann} = | 0.19375 - 0.0947265625| = 0.0990234375$$
$$\text{error Midpoint Rule} = | 0.19375 - 0.1846923828125| = 0.0090576171875$$
$$\text{error Trapezoidal Rule} = | 0.19375 - 0.2119140625| = 0.0181640625$$
```
:::
%% Cell type:code id: tags:
``` python
#feel free to use this as your calculator for the exercise above
def f(x):
    return x**4
f_left_rieman=
f_Midpoint=
f_trapezoidal=
f_exact=
print(f"f_left_rieman = {f_left_rieman}")
print(f"f_Midpoint = {f_Midpoint}")
print(f"f_trapezoidal = {f_trapezoidal}")
print(f"f_exact = {f_exact}")
print(f"Error left Riemann = {abs(f_exact-f_left_rieman)}")
print(f"Error Midpoint = {abs(f_exact-f_Midpoint)}")
print(f"Error trapezoidal = {abs(f_exact-f_trapezoidal)}")
```
%% Cell type:markdown id: tags:
### Truncation Errors
The truncation error for each method is the difference between the exact integral and the approximated integral. In the case of the left Riemann method:
$$
\text{Left Riemann Error }= \int_a^b f(x) dx - \sum_{i=0}^{n-1} f(x_i)\Delta x
$$
Now, we can use the Taylor Series Expansion of $f(x)$ instead of $f(x)$ itself and solve that integral. If you are curious about the mathematical steps, further below you can find the detailed derivation. For the moment, let's see the absolute errors for the different techniques.
$$
\text{Left Riemann Error }= \left| \bar f'(b-a)\frac{\Delta x}{2} \right| \text { therefore } \mathcal O(\Delta x)
$$
$$
\text{Right Riemann Error }= \left| \bar f'(b-a)\frac{\Delta x}{2} \right| \text { therefore } \mathcal O(\Delta x)
$$
$$
\text{Midpoint Error }= \left| \bar f''(b-a)\frac{\Delta x^2}{24} \right| \text { therefore } \mathcal O(\Delta x^2)
$$
$$
\text{Trapezoidal Error }= \left| \bar f''(b-a)\frac{\Delta x^2}{12} \right| \text { therefore } \mathcal O(\Delta x^2)
$$
$$
\text{Simpson's Error }= \left| \bar f^{(4)}(b-a)\frac{\Delta x^4}{180} \right| \text { therefore } \mathcal O(\Delta x^4)
$$
#### Analysis
As we can see, for the left and right Riemann integration techniques we find errors $\sim \dfrac{1}{n}$, while for the midpoint and trapezoidal rules, we find them $\sim\dfrac{1}{n^2}$, and with Simpson's rule $\sim\dfrac{1}{n^4}$.
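%% Cell type:markdown id: tags:
These scalings can be checked numerically. The sketch below (ours) reuses the helper functions from the integration sketch above and takes $\int_0^1 x^4\,dx = 1/5$ as the test integral; doubling $n$ roughly halves the left Riemann error, quarters the midpoint error, and divides the Simpson error by 16:
%% Cell type:code id: tags:
``` python
# reusing left_riemann, midpoint and simpson from the sketch above
f = lambda x: x**4
exact = 1 / 5  # integral of x^4 over [0, 1]

for n in (10, 20, 40, 80):
    err_left = abs(left_riemann(f, 0, 1, n) - exact)  # ~ 1/n
    err_mid  = abs(midpoint(f, 0, 1, n) - exact)      # ~ 1/n^2
    err_simp = abs(simpson(f, 0, 1, n) - exact)       # ~ 1/n^4
    print(f"n = {n:<3} left = {err_left:.2e}  midpoint = {err_mid:.2e}  simpson = {err_simp:.2e}")
```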
%% Cell type:markdown id: tags:
#### Truncation Error Derivation for Left Riemann integration
```{note}
You don't need to know the derivation for the exam.
```
%% Cell type:markdown id: tags:
:::{card} Derivation
$$
\text{Left Riemann Error }= | \int_a^b f(x) dx - \sum_{i=0}^{n-1} f(x_i)\Delta x | \hspace{5mm} (1)
$$
```{admonition} **Derivation**
:class: tip, dropdown
Start by using a Taylor series expansion:
$$
f(x)= f(x_i)+ (x-x_i)f'(x_i) + \mathcal{O}(\Delta x)^2 \hspace{5mm}(2)
$$
Substitute this Taylor expansion (2) in the term $\int_a^b f(x)dx$ in equation (1):
$$
\int_a^b \bigg[ f(x_i)+ (x-x_i)f'(x_i)+ \mathcal{O}(\Delta x)^2 \bigg] dx \hspace{5mm} (3)
$$
Rewrite this as the sum of the integrals over the sub-intervals $[x_i,x_{i+1}]$:
$$
= \sum_{i=0}^{n-1} \int_{x_i}^{x_{i+1}} \bigg[ f(x_i)+ (x-x_i)f'(x_i)+ \mathcal{O}(\Delta x)^2 \bigg]dx \hspace{5mm} (4)
$$
Using integration by substitution with $u= x-x_i$, so that $du=dx$ and the limits become $0$ and $\Delta x$:
$$
=\sum_{i=0}^{n-1} \int_{0}^{\Delta x} \bigg[ f(x_i)+ u f'(x_i) + \mathcal{O}(\Delta x)^2 \bigg]du \hspace{5mm} (5)
$$
$$
=\sum_{i=0}^{n-1} \bigg[ u f(x_i)+ \frac{u^2}{2}f'(x_i)+ u\,\mathcal{O}(\Delta x)^2 \bigg] \bigg|^{\Delta x}_{0} \hspace{5mm} (6)
$$
$$
=\sum_{i=0}^{n-1} \bigg[ \Delta x f(x_i)+ \frac{\Delta x^2}{2}f'(x_i)+ \mathcal{O}(\Delta x)^3 \bigg] \hspace{5mm} (7)
$$
Plug equation (7) into equation (1):
$$
\bigg|\sum_{i=0}^{n-1} \bigg[ \Delta x f(x_i)+ \frac{\Delta x^2}{2}f'(x_i)+ \mathcal{O}(\Delta x)^3\bigg] - \sum_{i=0}^{n-1} f(x_i)\Delta x \bigg| \hspace{5mm} (8)
$$
The term $\Delta x f(x_i)$ cancels:
$$
=\bigg|\sum_{i=0}^{n-1} \bigg[ \frac{\Delta x^2}{2}f'(x_i)+ \mathcal{O}(\Delta x)^3 \bigg]\bigg| \hspace{5mm} (9)
$$
The sum has $n$ terms; writing $\bar f'$ for the average of the $f'(x_i)$:
$$
= n \frac{\Delta x^2}{2}\bar f' + n\,\mathcal{O}(\Delta x)^3 \hspace{5mm} (10)
$$
Substituting $n = \frac{b-a}{\Delta x}$ then gives the result below.
```
$$
\text{Left Riemann Error }= | \int_a^b f(x) dx - \sum_{i=0}^{n-1} f(x_i)\Delta x | = | \bar{f}'(b-a) \frac{\Delta x}{2} |
$$
The resulting left Riemann error is proportional to the average of the derivatives $\bar f'$, the length of the domain $(b-a)$, and the step size ($\Delta x$). This result tells us that for steeper functions the error will be higher than for milder ones. For example, a horizontal straight line, with an average derivative equal to 0, would have an error equal to 0! Also, the error will be larger if the domain is larger, which is rather obvious. As the first two factors (average derivative and domain length) are inherent to the problem, we say that the error is of the order $\Delta x$.
:::
%% Cell type:markdown id: tags:
:::{card} Exercise
Estimate the integral $\int_{1}^{3} \frac{5}{x^4} \, dx$ with a step size $\Delta x= 0.5$ using the:
- Right Riemann sum
- Simpson's rule
`````{admonition} Tip
:class: tip
Tip: feel free to use the code cell below as a calculator.
`````
```{admonition} **Solution**
:class: tip, dropdown
Find $n$
$$
n = \frac{b-a}{\Delta x}= \frac{3-1}{0.5}= 4
$$
***Right Riemann Sum***
$$
\int_{a}^{b} f(x) \, dx \approx \sum_{i=1}^{n} f(x_i)\Delta x \approx \sum_{i=0}^{n-1} f(x_{i+1})\Delta x
$$
$$
= \frac{5}{1.5^4} \cdot 0.5 +\frac{5}{2^4} \cdot 0.5 + \frac{5}{2.5^4} \cdot 0.5 + \frac{5}{3^4} \cdot 0.5
$$
$$
= 0.7449413580246914
$$
***Simpson's Rule***
$$
\int_{a}^{b} f(x) \, dx \approx \sum_{i=1}^{n/2} \frac{f(x_{2i-2}) + 4f(x_{2i-1}) + f(x_{2i})}{6} 2\Delta x
$$
$$
\frac{ \frac{5}{1^4}+ 4\frac{5}{1.5^4} + \frac{5}{2^4}}{6} 2\cdot 0.5 +\frac{ \frac{5}{2^4}+ 4\frac{5}{2.5^4} + \frac{5}{3^4}}{6} 2\cdot 0.5 = 1.69155761316872
$$
***Exact***
$$
\int_{1}^{3} \frac{5}{x^4} \, dx = -\frac{5}{3x^3}\bigg|_1^3 = 1.60493827160
$$
***Error***
$$
\text{Right Riemann error} = | 1.6049382716 - 0.7449413580| = 0.8599969136
$$
$$
\text{Simpson's rule error} = | 1.6049382716 - 1.6915576132| = 0.0866193416
$$
```
:::
%% Cell type:code id: tags:
``` python
#feel free to use this as your calculator for the exercise above
def f(x):
    return 5/(x**4)
f_right_riemann =
f_simps =
f_exact =
print(f"f_right_riemann = {f_right_riemann}")
print(f"f_simps = {f_simps}")
print(f"f_exact = {f_exact}")
print(f"Error right Riemann = {abs(f_exact-f_right_riemann)}")
print(f"Error Simpsons = {abs(f_exact-f_simps)}")
```
%% Cell type:markdown id: tags:
......
%% Cell type:markdown id: tags:
# Exercises on Taylor expansion
This page shows some exercises on calculating Taylor expansions. If you reload this page, you'll get new values.
Click `rocket` -->`Live Code` to start practising.
%% Cell type:code id: tags:thebe-remove-input-init
``` python
import sympy as sp
import numpy as np
from sympy import pi, latex
from sympy.printing.mathml import mathml
import operator
import ipywidgets as widgets
from IPython.display import display, Latex, display_jpeg, Math, Markdown

sp.init_printing(use_latex=True)

check_equation = lambda eq1, eq2: sp.simplify(eq1 - eq2) == 0

def check_answer(variable_name, expected, comparison=operator.eq):
    output = widgets.Output()
    correct_output = widgets.Output()
    button = widgets.Button(description="Check answer")
    show_correct_button = widgets.Button(description="Show correct answer")
    def _inner_check(button):
        with output:
            if comparison(globals()[variable_name], expected):
                output.outputs = [{'name': 'stdout', 'text': 'Correct!',
                                   'output_type': 'stream'}]
                correct_output.clear_output()  # clear the correct answer display if they got it right
            else:
                output.outputs = [{'name': 'stdout', 'text': 'Incorrect!',
                                   'output_type': 'stream'}]
    def _show_correct_answer(button):
        with correct_output:
            correct_output.clear_output()  # clear previous outputs
            display(f"The correct answer is: {expected}")
    button.on_click(_inner_check)
    show_correct_button.on_click(_show_correct_answer)
    display(button, output, show_correct_button, correct_output)
```
%% Cell type:markdown id: tags:
## Exercise 1
Calculate the Taylor series expansion of:
%% Cell type:code id: tags:thebe-remove-input-init
``` python
x, y = sp.symbols('x, y')
a_1 = sp.Integer(np.random.randint(2,6))
b_1 = sp.Integer(np.random.randint(-10,10))
c_1 = sp.Integer(np.random.randint(-5,5))
eq1_original = a_1 * x**2 + b_1*x
eq1_correct = sp.series(eq1_original,x,c_1)
eq1_answer = 0
display(eq1_original)
#display(eq1_correct)
```
%% Cell type:markdown id: tags:
around:
%% Cell type:code id: tags:thebe-remove-input-init
``` python
display(sp.Eq(x,c_1))
```
%% Cell type:markdown id: tags:
Discard any $O(x^3)$ terms.
Fill in your answer and run the cell before clicking 'Check answer'.
%% Cell type:code id: tags:auto-execute-page,disable-download-page
``` python
eq1_answer =
```
%% Cell type:code id: tags:thebe-remove-input-init
``` python
check_answer("eq1_answer",eq1_correct, check_equation)
```
%% Cell type:markdown id: tags:
## Exercise 2
Calculate the Taylor series expansion of:
%% Cell type:code id: tags:thebe-remove-input-init
``` python
a_2 = sp.Integer(np.random.randint(1,7))
c_2 = sp.Integer(np.random.randint(-5,5))
eq2_original = a_2*sp.tan(x)
display(eq2_original)
eq2_correct = sp.series(eq2_original,x,c_2*sp.pi,3).removeO()
#display(eq2_correct)
eq2_answer = 0
```
%% Cell type:markdown id: tags:
around:
%% Cell type:code id: tags:thebe-remove-input-init
``` python
display(sp.Eq(x,c_2*sp.pi))
```
%% Cell type:markdown id: tags:
Discard any $O(x^3)$ terms.
Fill in your answer and run the cell before clicking 'Check answer'. Furthermore, use `pi` for $\pi$:
%% Cell type:code id: tags:
``` python
eq2_answer =
```
%% Cell type:code id: tags:thebe-remove-input-init
``` python
check_answer("eq2_answer",eq2_correct, check_equation)
```
%% Cell type:markdown id: tags:
## Exercise 3
Calculate the Taylor series expansion of:
%% Cell type:code id: tags:thebe-remove-input-init
``` python
a_3 = sp.Integer(np.random.randint(1,10))
c_3 = sp.Integer(np.random.randint(-1,1))
eq3_original = a_3 / (1 - x)
display(eq3_original)
eq3_correct = sp.series(eq3_original,x,c_3,3).removeO()
#display(eq3_correct)
eq3_answer = 0
```
%% Cell type:markdown id: tags:
around:
%% Cell type:code id: tags:thebe-remove-input-init
``` python
display(sp.Eq(x,c_3))
```
%% Cell type:markdown id: tags:
Discard any $O(x^3)$ terms.
Fill in your answer and run the cell before clicking 'Check answer':
%% Cell type:code id: tags:
``` python
eq3_answer =
```
%% Cell type:code id: tags:thebe-remove-input-init
``` python
check_answer("eq3_answer",eq3_correct, check_equation)
```
......