%% Cell type:markdown id: tags:
# Revision of Concepts
%% Cell type:markdown id: tags:
## Classification of Differential Equations
%% Cell type:markdown id: tags:
```{note}
**Important things to retain from this block:**
* Identify the characteristics of differential equations
* Understand how numerical and analytical solutions differ as problems become more complex and require simplification
* Remember that analytical solutions are not always possible
```
%% Cell type:markdown id: tags:
Differential equations can be classified into Ordinary Differential Equations (ODEs) and Partial Differential Equations (PDEs). **ODEs** have derivatives with respect to a **single** independent variable (either time **or** space), for example
$$
\frac{d x(t)}{d t} = \cos t
$$
describes the rate of change of the variable $x$ in **time $t$**.
**PDEs** have derivatives with respect to **multiple** independent variables (often time **and** space), for example:
$$
\frac{\partial c(x,t)}{\partial t} + u\frac{\partial c(x,t)}{\partial x} = 0
$$
describes the propagation of the concentration $c$ in **time $t$** along **dimension $x$**.
The classification can be made more precise by considering order and linearity (linear or non-linear). The order refers to the highest derivative present, while non-linear equations are those in which the dependent variable or its derivative(s) appear non-linearly (e.g., inside an exponential or a sine, or raised to a power different than 1). See the following examples:
$$
\frac{d^3y}{dx^3} - x\frac{d^2y}{dx^2} + y = 0 \qquad \text{third-order linear ODE}
$$
$$
\frac{dy}{dx} = y^2 + x \qquad \text{first-order non-linear ODE}
$$
$$
\frac{d^2y}{dx^2} + y\left(\frac{dy}{dx}\right)^2 = \sin(y) \qquad \text{second-order non-linear ODE}
$$
%% Cell type:markdown id: tags:
## Analytical vs Numerical Solutions
%% Cell type:markdown id: tags:
Equations can be solved in two ways: analytically and numerically. The analytical solution is **exact**, while a numerical solution requires computational methods to **approximate** the solution. So why would we use anything other than analytical solutions? Well, analytical solutions are difficult or impossible to find for complex equations, especially when the problem involves a complex geometry. This complexity will be treated later in the book. For now, let's illustrate analytical and numerical solutions considering a simple problem.
:::{card}
> Find the value of $x$ for which $f(x)=3x-2=0$ in the interval $[0,1]$.
**Analytical Solution**
$$ f(x) = 0 \Leftrightarrow 3x-2=0 \Leftrightarrow x = \frac{2}{3}=0.666666666666666..$$
Note that there is no need for any extra computation. You can say that there is only **one computation needed**: the assignment of the $x$ value.
:::
%% Cell type:markdown id: tags:
:::{card} **Numerical Solution**
The code below shows an iterative method to find the $x$ value.
:::
%% Cell type:code id: tags:
``` python
def f(x):
    return 3*x - 2

dx = 0.001
for i in range(1000):
    x = dx*i
    if f(x) * f(x+dx) < 0:
        print("Number of computations needed to find a solution", i*4)
        break
print("Answer x = ", x+dx)
```
%% Output
Number of computations needed to find a solution 2664
Answer x = 0.667
%% Cell type:markdown id: tags:
Note that the 2664 computations needed are highly dependent on the method (this one is not very efficient). Here the search starts at $x=0$ and increases in steps of 0.001, which also limits the accuracy of the solution.
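A more efficient alternative is the bisection method, which halves the search interval at every step. The sketch below (not part of the original example; the tolerance 0.001 matches the step size used above) reaches the same accuracy in about 10 iterations instead of the several hundred steps above.
%% Cell type:code id: tags:
``` python
# minimal bisection sketch: halve [a, b] until it is narrower than 0.001
def f(x):
    return 3*x - 2

a, b = 0.0, 1.0
iterations = 0
while b - a > 0.001:
    m = (a + b) / 2
    if f(a) * f(m) < 0:   # sign change in the left half: root is there
        b = m
    else:                 # otherwise the root is in the right half
        a = m
    iterations += 1
print("Iterations needed:", iterations)
print("Answer x = ", (a + b) / 2)
```
%% Cell type:markdown id: tags: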
:::{card} Exercise
Let us now look to another simple example:
> Find the value of $x$ for which $f(x) = 3\sin(x)-2 = 0$ in the interval $[0,1]$.
**Analytical Solution**
$$ f(x) = 0 \Leftrightarrow 3\sin(x)-2=0 \Leftrightarrow x = \arcsin\left(\frac{2}{3}\right) = 0.7297276562269663..$$
:::
%% Cell type:markdown id: tags:
:::{card}
**Numerical Solution**
:::
%% Cell type:code id: tags:
``` python
import math
def f(x):
    return 3*math.sin(x) - 2

dx = 0.001
for i in range(1000):
    x = dx*i
    if f(x) * f(x+dx) < 0:
        print("Number of computations needed to find a solution", i*4)
        break
print("Answer x = ", x+dx)
```
%% Output
Number of computations needed to find a solution 2916
Answer x = 0.73
%% Cell type:markdown id: tags:
Note that there is no attempt to solve $f(x)=0$ directly. Only the function is defined, and exactly the same steps as in the previous problem are followed.
......
%% Cell type:markdown id: tags:
# The First Derivative
%% Cell type:markdown id: tags:
```{note}
**Important things to retain from this block:**
* Understand what the derivative represents
* Recognize that the derivative can be approximated in different ways
```
%% Cell type:markdown id: tags:
:::{card} **Definition**:
The derivative of a function $f(x)$ evaluated at the point $x_0$ is
$$
\frac{df}{dx}\bigg\rvert_{x_0}=f'(x_0)=\lim_{x \to x_0}\frac{f(x)-f(x_0)}{x-x_0}.
$$
:::
:::{card} **Numerically**:
$x - x_0$ **cannot tend to 0!** Thus:
$$
f'(x_0) \approx \frac{f(x)-f(x_0)}{\Delta x}, \hspace{3mm} \text{where } \Delta x=x-x_0.
$$
:::
```{figure} figs/derivative_new2.png
:name: derivative_new2
Derivatives of a function $f(x)$ at specific points $x_0$ and $x_1$
```
**What does the derivative represent?** A rate of change! In this case, how fast $f(x)$ changes with respect to $x$ (or in the direction of $x$). The derivative can also be taken with respect to time or another independent variable (e.g., $y$, $z$, $t$, etc.). Look at the figure above: the first derivative evaluated at $x_0$ is illustrated as a simple slope. You can also see that the rate of change at $x_0$ is larger than at $x_1$.
%% Cell type:markdown id: tags:
## Numerical Freedom to compute derivatives
In the figure above, the derivative approximation was illustrated arbitrarily using two points: the one at which the derivative is evaluated and another point in front of it. However, there are more possibilities. Instead of using points at $x_{-1,0,1}$, a more general notation is used: $x_{i-1,i,i+1}$. The simplest ways to approximate the derivative evaluated at the point $x_i$ use two points:
$$
\text{forward: }\hspace{3mm} \frac{df}{dx}\bigg\rvert_{x_i}\approx\frac{f(x_{i+1})-f(x_{i})}{x_{i+1}-x_i} \hspace{5mm} \text{backward: } \hspace{3mm} \frac{df}{dx}\bigg\rvert_{x_i}\approx\frac{f(x_{i})-f(x_{i-1})}{x_{i}-x_{i-1}} \hspace{5mm} \text{central: } \hspace{3mm} \frac{df}{dx}\bigg\rvert_{x_i}\approx\frac{f(x_{i+1})-f(x_{i-1})}{x_{i+1}-x_{i-1}}
$$
```{figure} figs/derivative_ways.png
:name: derivative_ways
Graphs illustrating three different ways to approximate derivatives.
```
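To make the three options concrete, here is a minimal sketch (the function $f(x)=\sin(x)$, the point $x_i=1.0$ and the step $\Delta x=0.1$ are arbitrary choices) evaluating the forward, backward and central approximations against the exact derivative $\cos(x_i)$.
%% Cell type:code id: tags:
``` python
import numpy as np

f = np.sin
x_i, dx = 1.0, 0.1

forward  = (f(x_i + dx) - f(x_i)) / dx
backward = (f(x_i) - f(x_i - dx)) / dx
central  = (f(x_i + dx) - f(x_i - dx)) / (2*dx)

print("exact   :", np.cos(x_i))
print("forward :", forward)
print("backward:", backward)
print("central :", central)
```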
%% Cell type:markdown id: tags:
......
%% Cell type:markdown id: tags:
# Taylor Series Expansion
%% Cell type:markdown id: tags:
```{note}
**Important things to retain from this block:**
* Understand how to compute Taylor series expansion of a given function around a given point and its limitations
* Understand how numerical derivatives are derived from TSE
* Understand that the magnitude of the error depends on the expression used
**Things you do not need to know:**
* Any kind of Taylor expansion of a function by heart
```
%% Cell type:markdown id: tags:
## Definition
The Taylor Series Expansion (TSE) of an arbitrary function $f(x)$ around $x=x_i$ is a polynomial of varying order given by
$$f(x) = f(x_i) + (x-x_i)f'(x_i)+\frac{(x-x_i)^2}{2!}f''(x_i)+ \frac{(x-x_i)^3}{3!} f'''(x_i)+ ...$$
This series is **exact** as long as we include infinite terms. We, however, are limited to a **truncated** expression: an **approximation** of the real function.
Truncating the series and defining $\Delta x=x-x_i$, the TSE can be rewritten as
$$f(x_i+\Delta x) = f(x_i) + \Delta x f'(x_i)+\frac{\Delta x^2}{2!}f''(x_i)+ \frac{\Delta x^3}{3!} f'''(x_i)+ \mathcal{O}(\Delta x)^4.$$
Here $\Delta x$ is the distance between the point we "know" and the desired point. $\mathcal{O}(\Delta x)^4$ means that we do not take into account the terms associated with $\Delta x^4$ and higher, and therefore **that is the truncation error order**. From here we can also conclude that **the larger the step $\Delta x$, the larger the error**!
Now let's see an example.
---
:::{card} Example
Compute $e^{0.2}$ using the first four terms of the TSE around $x_i=0$
```{admonition} Solution
:class: tip, dropdown
We want to evaluate $e^x=e^{x_i+\Delta x}=e^{0.2}$. Therefore, $\Delta x=0.2$. The value of $f(x_i=0)=e^0=1$. For the case of this exponential, the derivatives have the same value $f'(x_i=0)=f''(x_i=0)=f'''(x_i=0)=e^0=1$. The TSE looks like
$$e^{x_i+\Delta x} \approx e^0 + \Delta x e^0 + \frac{\Delta x^2}{2!}e^0 + \frac{\Delta x^3}{3!}e^0 \approx 1 + \Delta x + \frac{\Delta x^2}{2} + \frac{\Delta x^3}{6}\approx 1 + 0.2 + 0.02 + 0.00133 = 1.22133$$
```
Compute the TSE polynomial truncated until $\mathcal{O}(\Delta x)^6$ for $f(x)=\sin(x)$ around $x_i=0$
```{admonition} Solution
:class: tip, dropdown
Applying the definition of TSE:
$$
\sin(x) \approx \sin(x_i+\Delta x) \approx \sin(0) + x\cos(0) - \frac{x^2}{2}\sin(0) - \frac{x^3}{6}\cos(0) + \frac{x^4}{24}\sin(0) + \frac{x^5}{120}\cos(0) +\mathcal{O}(\Delta x)^6
$$
$$
\sin(x) \approx x-\frac{x^3}{6}+\frac{x^5}{120} \text{ with an error }\mathcal{O}(\Delta x)^6.
$$
Note that $x$ and $\Delta x$ can be used interchangeably when $x_i=0$, since $\Delta x=x-x_i$. If that were not the case, for example if $x_i=\pi$, the result would be completely different.
```
Compute the TSE polynomial truncated until $\mathcal{O}(\Delta x)^6$ for $f(x)=\sin(x)$ around $x_i=\pi$
```{admonition} Solution
:class: tip, dropdown
Applying the definition of TSE:
$$\sin(x) \approx \sin(x_i+\Delta x) \approx \sin(\pi) + (x-\pi)\cos(\pi) - \frac{(x-\pi)^2}{2}\sin(\pi) - \frac{(x-\pi)^3}{6}\cos(\pi) + \frac{(x-\pi)^4}{24}\sin(\pi) + \frac{(x-\pi)^5}{120}\cos(\pi) +\mathcal{O}(\Delta x)^6$$
$$\sin(x) \approx \sin(x_i+\Delta x) \approx -(x-\pi) + \frac{(x-\pi)^3}{6} - \frac{(x-\pi)^5}{120} \text{ with an error } \mathcal{O}(\Delta x)^6$$
since $\sin(\pi)=0$ and $\cos(\pi)=-1$.
```
:::
---
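As a quick numerical check of the first example (a sketch, comparing the truncated polynomial with Python's `math.exp`):
%% Cell type:code id: tags:
``` python
import math

dx = 0.2
tse = 1 + dx + dx**2/2 + dx**3/6      # terms up to the third derivative
print("TSE approximation:", tse)              # 1.221333...
print("exact e^0.2      :", math.exp(0.2))    # 1.221402...
print("truncation error :", abs(math.exp(0.2) - tse))
```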
%% Cell type:markdown id: tags:
:::{card}
How good is this polynomial? How does the result vary with the number of terms used?
Press `rocket` -->`Live Code` to interact with the figure
:::
%% Cell type:code id: tags:thebe-remove-input-init,auto-execute-page
``` python
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import widgets, interact
def taylor_plt(order_aprox):
    x = np.linspace(-2*np.pi, 2*np.pi, 100)
    plt.plot(x, np.sin(x), label="sin(x)")
    if order_aprox == '1st order':
        plt.plot(x, x, label="1st order approximation")
    elif order_aprox == '3rd order':
        plt.plot(x, x-(1/6*x**3), label="3rd order approximation")
    elif order_aprox == '5th order':
        plt.plot(x, x-(1/6*x**3)+(1/120*x**5), label="5th order approximation")
    elif order_aprox == '7th order':
        plt.plot(x, x-(1/6*x**3)+(1/120*x**5)-(1/5040*x**7), label="7th order approximation")
    plt.ylim(-5, 5)
    plt.axis('off')
    plt.legend()
    plt.show()
```
%% Cell type:code id: tags:
``` python
#run to interact with the figure
interact(taylor_plt, order_aprox = widgets.ToggleButtons(options=['1st order', '3rd order', '5th order','7th order']));
```
%% Output
%% Cell type:markdown id: tags:
Relevant conclusions:
- The 1st-order approximation, which depends only on the first derivative evaluation, is a straight line.
- The more terms used (larger order), the smaller the error.
- The farther from the expansion point (in the plots, $x_i=0$), the larger the error.
%% Cell type:markdown id: tags:
## Use of TSE to define the first derivative
The TSE definition when $\Delta x = x_{i+1} - x_i$ can be rewritten to solve for the first derivative:
$$
f'(x_i)=\frac{f(x_i+\Delta x)-f(x_i)}{\Delta x} - \frac{\Delta x}{2!}f''(x_i)- \frac{\Delta x^2}{3!} f'''(x_i) - ...
$$
By truncating the derivative to avoid the calculation of the second derivative, we find the **forward difference**
$$
f'(x_i)=\frac{f(x_i+\Delta x)-f(x_i)}{\Delta x} + \mathcal{O}(\Delta x).
$$
This is the same definition of the numerical derivative! Now we have the added knowledge that this comes with an error of the order of the step.
<br><br>
The **backward difference** can be found by writing the TSE for $f(x_i-\Delta x)$, where $\Delta x=x_i-x_{i-1}$:
$$
f'(x_i)=\frac{f(x_i)-f(x_i-\Delta x)}{\Delta x} + \frac{\Delta x}{2!}f''(x_i)- \frac{\Delta x^2}{3!} f'''(x_i) - ...
$$
$$
f'(x_i)=\frac{f(x_i)-f(x_i-\Delta x)}{\Delta x}+ \mathcal{O}(\Delta x).$$
<br><br>
The **central difference** can be found by summing the forward and backward difference expressions of the derivative and dividing the result by 2:
$$
f'(x_i)=\frac{f(x_i+\Delta x)-f(x_i-\Delta x)}{2\Delta x} - \frac{\Delta x^2}{3!} f'''(x_i) - ...
$$
$$
f'(x_i)=\frac{f(x_i+\Delta x)-f(x_i-\Delta x)}{2\Delta x}+ \mathcal{O}(\Delta x^2).
$$
The second derivative terms cancel each other, therefore **the error order is the step size squared!** From here, it is obvious that the central difference is more accurate. You can also see this intuitively in the figure of the previous section.
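The error orders can be verified numerically. In the sketch below (assuming $f(x)=\sin(x)$ at $x_i=1.0$), halving $\Delta x$ roughly halves the forward-difference error, order $\Delta x$, while the central-difference error drops by a factor of about 4, order $\Delta x^2$.
%% Cell type:code id: tags:
``` python
import numpy as np

f, exact = np.sin, np.cos(1.0)   # exact derivative of sin at x_i = 1.0
x_i = 1.0
for dx in [0.1, 0.05, 0.025]:
    err_fwd = abs((f(x_i + dx) - f(x_i)) / dx - exact)
    err_cen = abs((f(x_i + dx) - f(x_i - dx)) / (2*dx) - exact)
    print(f"dx = {dx:<6} forward error = {err_fwd:.2e}  central error = {err_cen:.2e}")
```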
%% Cell type:markdown id: tags:
:::{card} Exercises
Derive the forward difference for the first derivative $f'(x_i)$ with a first order error $\mathcal{O}(\Delta x)$ using Taylor series
```{admonition} Solution
:class: tip, dropdown
Start from the TSE
(tip: first derivative and first order error (1+1=2), we need to do a TSE truncated until the 2nd order):
$$
f(x_i+\Delta x) = f(x_i) + \Delta x f'(x_i)+ \mathcal{O}(\Delta x)^2
$$
Rearrange to get:
$$
- \Delta x f'(x_i)= -f(x_i+\Delta x)+ f(x_i)+\mathcal{O}(\Delta x)^2
$$
Divide by $ - \Delta x $:
$$
f'(x_i)= \frac{f(x_i+\Delta x)- f(x_i)}{\Delta x } + \mathcal{O}(\Delta x)
$$
```
Use Taylor series to derive the backward difference for the first derivative $f'(x_i)$ with a first order error $\mathcal{O}(\Delta x)$
```{admonition} Solution
:class: tip, dropdown
Start from the TSE
(tip: first derivative and first order error, 1+1=2, we need to do a TSE truncated until the 2nd order):
$$f(x_i-\Delta x) = f(x_i) - \Delta x f'(x_i)+ \mathcal{O}(\Delta x)^2$$
Rearrange to get:
$$ \Delta x f'(x_i)= -f(x_i-\Delta x)+ f(x_i)+\mathcal{O}(\Delta x)^2 $$
Divide by $ \Delta x $:
$$ f'(x_i)= \frac{ f(x_i)-f(x_i-\Delta x)}{\Delta x } + \mathcal{O}(\Delta x) $$
```
Use Taylor series to derive the central difference for the first derivative $f'(x_i)$ with a 2nd order error $\mathcal{O}(\Delta x)^2$
```{admonition} Solution
:class: tip, dropdown
Start from the TSE of $f(x_i+\Delta x)$ and $f(x_i-\Delta x)$
(tip: first derivative and second order error (1+2=3), we need to do a TSE truncated until the 3rd order):
$$
f(x_i+\Delta x) = f(x_i) + \Delta x f'(x_i)+ \frac{\Delta x^2}{2}f''(x_i) + \mathcal{O}(\Delta x)^3 \hspace{5mm} (1)
$$
$$
f(x_i-\Delta x) = f(x_i) - \Delta x f'(x_i)+ \frac{\Delta x^2}{2}f''(x_i) + \mathcal{O}(\Delta x)^3 \hspace{5mm} (2)
$$
Rearrange both equations:
$$
\Delta x f'(x_i)=f(x_i+\Delta x) - f(x_i) - \frac{\Delta x^2}{2}f''(x_i) + \mathcal{O}(\Delta x)^3 \hspace{5mm} (3)
$$
$$
\Delta x f'(x_i) = - f(x_i-\Delta x)+ f(x_i) + \frac{\Delta x^2}{2}f''(x_i) + \mathcal{O}(\Delta x)^3 \hspace{5mm} (4)
$$
Add equations (3) and (4); the second-derivative terms cancel:
$$
2\Delta x f'(x_i) = f(x_i+\Delta x) - f(x_i-\Delta x) + \mathcal{O}(\Delta x)^3 \hspace{5mm} (5)
$$
Divide by $2\Delta x$:
$$
f'(x_i) = \frac{f(x_i+\Delta x) - f(x_i-\Delta x)}{2\Delta x } + \mathcal{O}(\Delta x)^2 \hspace{5mm} (6)
$$
```
:::
%% Cell type:markdown id: tags:
## TSE to define second derivatives
There are equations that require second derivatives. The diffusion equation is one such equation, used in virtually every field of knowledge. The 1-D diffusion equation reads
$$
\frac{\partial f}{\partial t}=v\frac{\partial^2 f}{\partial x^2} \text{ where } v \text{ is the diffusion coefficient.}
$$
For the moment we will use TSE to find **only** a numerical expression of the second derivative $\frac{\partial^2 f}{\partial x^2}$.
The procedure is simple but cumbersome. The general idea is to isolate the second derivative in the TSE while eliminating the dependency on other derivatives. Below you can find more details about the algebraic manipulation (if you are curious), but you do not need to know it. Here is the result:
$$
f''(x_i)=\frac{f(x_i+2\Delta x)-2f(x_i+\Delta x)+f(x_i)}{\Delta x^2}+ \mathcal{O}(\Delta x).
$$
This is the **forward** difference approximation of the second derivative. Two aspects come to mind: one more point is needed to compute the second derivative and the error (of the simplest second derivative) is also of the order of the step. There are also **backward** and **central** approximations of the second derivative (not shown here).
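A short sketch (assuming $f(x)=\sin(x)$, whose exact second derivative is $-\sin(x)$) shows the forward approximation at work: halving $\Delta x$ roughly halves the error, consistent with $\mathcal{O}(\Delta x)$.
%% Cell type:code id: tags:
``` python
import numpy as np

f = np.sin
x_i = 1.0
for dx in [0.1, 0.05]:
    d2f = (f(x_i + 2*dx) - 2*f(x_i + dx) + f(x_i)) / dx**2
    error = abs(d2f - (-np.sin(x_i)))   # exact value: -sin(x_i)
    print(f"dx = {dx:<5} f'' approx = {d2f:.6f}  error = {error:.2e}")
```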
%% Cell type:markdown id: tags:
**Derivation of the forward difference approximation of the second derivative**
First define a TSE for a point two steps farther from $x_i$:
$$
f(x_i+2\Delta x) = f(x_i) + 2\Delta x f'(x_i)+\frac{(2\Delta x)^2}{2!}f''(x_i)+ \mathcal{O}(\Delta x)^3.
$$
Now multiply by two the TSE for a point one step farther from $x_i$:
$$
2f(x_i+\Delta x) = 2f(x_i) + 2\Delta x f'(x_i)+\frac{2\Delta x^2}{2!}f''(x_i) + \mathcal{O}(\Delta x)^3.
$$
By subtracting the second expression from the first one, the first derivative disappears:
$$
f(x_i+2\Delta x) - 2f(x_i+\Delta x) = -f(x_i) + \frac{2\Delta x^2}{2!}f''(x_i)+ \mathcal{O}(\Delta x)^3.
$$
By solving for $f''$ we obtain the **forward** expression:
$$
f''(x_i)=\frac{f(x_i+2\Delta x)-2f(x_i+\Delta x)+f(x_i)}{\Delta x^2}+ \mathcal{O}(\Delta x).
$$
%% Cell type:markdown id: tags:
## Higher-accuracy Finite-Difference Approximations
So far we have found expressions with a relatively large error magnitude, $\mathcal{O}(\Delta x)$, with the exception of the central difference approximation. Sometimes a higher accuracy is desired, for which better expressions can be found. The procedure is similar to the algebraic manipulation used to find the **forward** approximation of the second derivative: a series of TSEs is defined at varying distances from $x_i$ and, after algebraic manipulation, a more accurate expression is found. For example, the **forward** approximation of the first derivative:
$$
f'(x_i)=\frac{-f(x_i+2\Delta x)+4f(x_i+\Delta x)-3f(x_i)}{2\Delta x}+ \mathcal{O}(\Delta x^2).
$$
The error magnitude has improved to $\mathcal{O}(\Delta x^2)$ at the expense of using one more point. The accuracy can be even better by using more points. It is important to note that central differences are more accurate than forward and backward differences when using the same number of points.
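The improvement can be checked numerically. In this sketch (assuming $f(x)=e^x$ at $x_i=0$, where $f'(0)=1$), halving $\Delta x$ halves the two-point error but reduces the three-point error by a factor of about 4.
%% Cell type:code id: tags:
``` python
import numpy as np

f = np.exp
x_i, exact = 0.0, 1.0   # (e^x)' = e^x, so f'(0) = 1
for dx in [0.1, 0.05]:
    two_point   = (f(x_i + dx) - f(x_i)) / dx
    three_point = (-f(x_i + 2*dx) + 4*f(x_i + dx) - 3*f(x_i)) / (2*dx)
    print(f"dx = {dx:<5} two-point error = {abs(two_point - exact):.2e}  "
          f"three-point error = {abs(three_point - exact):.2e}")
```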
%% Cell type:markdown id: tags:
:::{card} Exercise
Derive a first derivative $f'(x_i)$ finite-difference equation with a 2nd order error $\mathcal{O}(\Delta x^2)$, using the following nodes: $x_i$, $x_i+\Delta x$ and $x_i+2\Delta x$ (or $x_i$, $x_{i+1}$ and $x_{i+2}$).
The finite-difference equations will have the following form:
$$
f'(x_i)= \frac{\alpha f(x_i)+ \beta f(x_i+\Delta x) + \gamma f(x_i+2\Delta x)}{\Delta x} + \mathcal{O}(\Delta x^2)
$$
Use the Taylor series to find $\alpha$, $\beta$ and $\gamma$
```{admonition} Solution
:class: tip, dropdown
Taylor series for $f(x_i+\Delta x)$ and $f(x_i+2\Delta x)$ (tip: first order derivative and second error order, 1+2=3, meaning a truncation until the 3rd order):
$$
f(x_i+\Delta x) = f(x_i) + \Delta x f'(x_i)+\frac{(\Delta x)^2}{2!}f''(x_i)+ \mathcal{O}(\Delta x)^3.
$$
$$
f(x_i+2\Delta x) = f(x_i) + 2\Delta x f'(x_i)+\frac{(2\Delta x)^2}{2!}f''(x_i)+ \mathcal{O}(\Delta x)^3.
$$
Multiply the TSE of $f(x_i+\Delta x)$ by 4 and expand the term $\frac{(2\Delta x)^2}{2!}f''(x_i)$ in the TSE of $f(x_i+2\Delta x)$:
$$
4f(x_i+\Delta x)= 4f(x_i) + 4\Delta x f'(x_i)+2(\Delta x)^2f''(x_i)+ 4\mathcal{O}(\Delta x)^3.
$$
$$
f(x_i+2\Delta x) = f(x_i) + 2\Delta x f'(x_i)+2(\Delta x)^2f''(x_i)+ \mathcal{O}(\Delta x)^3.
$$
Now take $4f(x_i+\Delta x)-f(x_i+2\Delta x)$:
$$
4f(x_i+\Delta x)-f(x_i+2\Delta x)= 4f(x_i) + 4\Delta x f'(x_i)+2(\Delta x)^2f''(x_i)+ 4\mathcal{O}(\Delta x)^3 - \left[ f(x_i) + 2\Delta x f'(x_i)+2(\Delta x)^2f''(x_i)+ \mathcal{O}(\Delta x)^3 \right]
$$
$$
= 3f(x_i)+ 2\Delta x f'(x_i) + 3\mathcal{O}(\Delta x)^3
$$
Rearrange for $f'(x_i)$:
$$
- 2\Delta x f'(x_i) = 3f(x_i)-4f(x_i+\Delta x)+f(x_i+2\Delta x)+ 3\mathcal{O}(\Delta x)^3
$$
Divide by $-2 \Delta x$:
$$
f'(x_i)= \frac{-\frac{3}{2}f(x_i)+2f(x_i+\Delta x)-\frac{1}{2} f(x_i+2\Delta x)}{\Delta x} +\mathcal{O}(\Delta x)^2
$$
***The solution is: $\alpha=-\frac{3}{2}, \beta= 2, \gamma= -\frac{1}{2}$***
```
:::
......
%% Cell type:markdown id: tags:
# Numerical Integration
%% Cell type:markdown id: tags:
```{note}
**Important things to retain from this block:**
* Knowing how to apply different discrete integration techniques
* Having an idea on how the truncation errors change from technique to technique
```
%% Cell type:markdown id: tags:
## Motivation
Numerical integration is often used in diverse fields of engineering, for example, to develop advanced numerical model techniques or to analyze field or laboratory data where a desired physical quantity is an integral of other measured quantities. The analytic integral is not always known and then has to be computed (approximated) numerically. For example, the variance density spectrum (in ocean waves) is a discrete signal obtained from the water level variations, and its **integral** is a quantity that characterizes the wave field.
%% Cell type:markdown id: tags:
## Discrete Integration
%% Cell type:markdown id: tags:
In a one-dimensional situation, the integral
$$
I=\int_a^b f(x)dx
$$
is defined between two points and represents the area under the curve $f$ limited by $[a,b]$. A visual way to approximate that area would be to use squared paper and count the number of squares that fall under the curve. In the example figure, you can count 11 squares.
**Numerical integration is an approximation** of that area done in a systematic way. There are several paths to follow using polynomial representations; these widely used formulas are known as Newton-Cotes formulas.
* First, we split the interval of integration $(a,b)$ into several partial intervals.
* Then, we replace $f(x)$ by the interpolation polynomial in each interval and calculate the integral over it.
* Finally, we sum all the areas to find (approximate) the integral.
```{figure} figs/integral_illustration.png
:name: integral_illustration
Two integral illustrations showing the area under the curve for the interval [a,b]
```
%% Cell type:markdown id: tags:
### Left Riemann Integration
```{figure} figs/left_riemann_new.png
:name: left_riemann_new.png
Illustrating the left Riemann Sum over interval [a,b]
```
Using the lowest polynomial order leads to the Riemann integral, also known as the rectangle rule. Basically, the function is divided into segments of known width $\Delta x$ and the area of each segment is approximated by a rectangle. Its height, given by the discretized points of the function, can be defined by the left or by the right point.
Therefore, we get the left Riemann integration formula given by
$$
\int_{a}^{b} f(x) \, dx \approx \sum_{i=0}^{n-1} f(x_i)\Delta x, \hspace{5mm} \Delta x = \frac{b - a}{n}
$$
where $n$ is the number of segments.
### Right Riemann Integration
```{figure} figs/right_riemann_new.png
:name: right_riemann_new.png
Illustrating the right Riemann Sum over interval [a,b]
```
In an analogous way, the formula for the right Riemann integration is:
$$
\int_{a}^{b} f(x) \, dx \approx \sum_{i=1}^{n} f(x_i)\Delta x \approx \sum_{i=0}^{n-1} f(x_{i+1})\Delta x
$$
Note the difference with respect to the left Riemann integration, especially the start and end points.
### Midpoint Rule
```{figure} figs/midpoint_new.png
:name: midpoint_new.png
Illustrating the Midpoint Rule over interval [a,b]
```
The midpoint rule defines the height of the rectangle at the midpoint between the two end points of each interval:
$$
\int_{a}^{b} f(x) \, dx \approx \sum_{i=0}^{n-1} f(\frac{x_{i}+x_{i+1}}{2}) \Delta x
$$
### Trapezoidal Rule
```{figure} figs/trapezoidal_new.png
:name: trapezoidal_new.png
Illustrating the Trapezoidal Rule over interval [a,b]
```
The trapezoidal rule uses a polynomial of the first order. As a result, the area to be computed is a trapezoid (the top of the rectangle now has a slope), whose parallel sides are the values assumed by the function at the end points of each partial interval and whose width is the distance between two consecutive points. Mathematically, it reduces to the average of the left and right rectangles:
$$
\int_{a}^{b} f(x) \, dx \approx \sum_{i=0}^{n-1} \frac{f(x_{i}) + f(x_{i+1})}{2} \Delta x
$$
### Simpson’s Rule
```{figure} figs/Simpson_s.png
:name: Simpson_s.png
Illustrating the Simpson Rule over interval [a,b]
```
Simpson’s rule uses a second order polynomial. It computes the integral of the function $f(x)$ by fitting parabolas through three consecutive function points and adding the areas under them (see $p_1(x)$ and $p_2(x)$ in the figure). In formula form:
$$
\int_{a}^{b} f(x) \, dx \approx \sum_{i=1}^{n/2} \frac{f(x_{2i-2}) + 4f(x_{2i-1}) + f(x_{2i})}{6} 2\Delta x
$$
Note that the interval length is multiplied by two, since each parabola spans two intervals (three points).
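The five rules fit in a few lines of code. The sketch below (the test integral $\int_0^1 x^2 \, dx = 1/3$ and $n=10$ are arbitrary choices; $n$ must be even for Simpson's rule) applies all of them to the same function.
%% Cell type:code id: tags:
``` python
import numpy as np

def f(x):
    return x**2

a, b, n = 0.0, 1.0, 10
dx = (b - a) / n
x = np.linspace(a, b, n + 1)       # n+1 equally spaced points

left      = np.sum(f(x[:-1])) * dx
right     = np.sum(f(x[1:])) * dx
midpoint  = np.sum(f((x[:-1] + x[1:]) / 2)) * dx
trapezoid = np.sum((f(x[:-1]) + f(x[1:])) / 2) * dx
simpson   = np.sum((f(x[0:-1:2]) + 4*f(x[1::2]) + f(x[2::2])) / 6) * 2*dx

for name, value in [("left Riemann", left), ("right Riemann", right),
                    ("midpoint", midpoint), ("trapezoidal", trapezoid),
                    ("Simpson", simpson)]:
    print(f"{name:<14} {value:.6f}  error = {abs(value - 1/3):.2e}")
```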
%% Cell type:markdown id: tags:
:::{card} ***Exercise***
Estimate the integral $\int_{0.5}^1 x^4 \, dx$ with a total number of steps $n=2$ using:
- Left Riemann Sum
- Midpoint rule
- Trapezoidal rule

and then compute the real error $\epsilon= | f(x)- \hat{f}(x)|$, where $f(x)$ is the exact integral and $\hat{f}(x)$ the approximated integral.
`````{admonition} Tip
:class: tip
Tip: feel free to use the code cell below as a calculator.
`````
```{admonition} **Solution**
:class: tip, dropdown
First, find $\Delta x$:
$$
\frac{b-a}{n}=\frac{1-0.5}{2}=0.25
$$
**Left Riemann Sum**:
$$
\sum_{i=0}^{n-1} f(x_i)\Delta x = f(x_0) \Delta x + f(x_1) \Delta x
$$
$$
0.5^4 \cdot 0.25 + 0.75^4 \cdot 0.25 = 0.0947265625
$$
**Midpoint Rule**:
$$
\sum_{i=0}^{n-1} f(\frac{x_{i}+x_{i+1}}{2}) \Delta x = f(\frac{x_0+x_1}{2}) \Delta x + f(\frac{x_1+x_2}{2}) \Delta x
$$
$$
0.625^4 \cdot 0.25 + 0.875^4 \cdot 0.25 = 0.1846923828125
$$
**Trapezoidal Rule**:
$$
\sum_{i=0}^{n-1} \frac{f(x_{i}) + f(x_{i+1})}{2} \Delta x = \frac{f(x_{0}) + f(x_{1})}{2} \Delta x + \frac{f(x_{1}) + f(x_{2})}{2} \Delta x
$$
$$
\frac{f(0.5) + f(0.75)}{2} \Delta x + \frac{f(0.75) + f(1)}{2} \Delta x = \frac{(0.5^4 +0.75^4)}{2} \cdot 0.25 + \frac{(0.75^4 +1^4)}{2} \cdot 0.25= 0.2119140625
$$
**Exact**
$$
\int_{0.5}^1 x^4 \, dx = \frac{1}{5}x^5\bigg|_{0.5}^{1} = \frac{1}{5}1^5- \frac{1}{5}0.5^5= 0.19375
$$
**Error**
$$
\epsilon= | f(x)- \hat{f}(x)|
$$
error Left Riemann = $| 0.19375 - 0.0947265625| = 0.0990234375$
error Midpoint Rule = $| 0.19375 - 0.1846923828125| = 0.0090576171875$
error Trapezoidal Rule = $| 0.19375 - 0.2119140625| = 0.0181640625$
```
:::
%% Cell type:code id: tags:
``` python
#feel free to use this as your calculator for the exercise above
def f(x):
    return x**4

dx = 0.25
# try filling in the sums yourself first; the lines below follow the worked solution
f_left_riemann = (f(0.5) + f(0.75)) * dx
f_Midpoint = (f(0.625) + f(0.875)) * dx
f_trapezoidal = ((f(0.5) + f(0.75))/2 + (f(0.75) + f(1))/2) * dx
f_exact = (1**5 - 0.5**5) / 5
print(f"f_left_riemann = {f_left_riemann}")
print(f"f_Midpoint = {f_Midpoint}")
print(f"f_trapezoidal = {f_trapezoidal}")
print(f"f_exact = {f_exact}")
print(f"Error left Riemann = {abs(f_exact-f_left_riemann)}")
print(f"Error Midpoint = {abs(f_exact-f_Midpoint)}")
print(f"Error trapezoidal = {abs(f_exact-f_trapezoidal)}")
```
%% Cell type:markdown id: tags:
### Truncation Errors
The truncation error for each method is the difference between the exact integral and the approximated integral. In the case of the left Riemann method:
$$
\text{Left Riemann Error }= \int_a^b f(x) dx - \sum_{i=0}^{n-1} f(x_i)\Delta x
$$
Now, we can use the Taylor Series Expansion of $f(x)$ instead of $f(x)$ itself and solve that integral. If you are curious about the mathematical steps, further below you can find the detailed derivation. For the moment, let's see the absolute errors for the different techniques.
$$
\text{Left Riemann Error }= \left| \bar f'(b-a)\Delta x /2 \right| \text { therefore } \mathcal O(\Delta x)
$$
$$
\text{Right Riemann Error }= \left| \bar f'(b-a)\Delta x /2 \right| \text { therefore } \mathcal O(\Delta x)
$$
$$
\text{Midpoint Error }= \left| \bar f''(b-a)\Delta x^2 /24 \right| \text { therefore } \mathcal O(\Delta x^2)
$$
$$
\text{Trapezoidal Error }= \left| \bar f''(b-a)\Delta x^2 /12 \right| \text { therefore } \mathcal O(\Delta x^2)
$$
$$
\text{Simpsons Error }= \left| \bar f''''(b-a)\Delta x^4 /180 \right| \text { therefore } \mathcal O(\Delta x^4)
$$
#### Analysis
As we can see, for the left and right Riemann integration techniques we find errors $\sim \dfrac{1}{n}$, while for the midpoint and trapezoidal rules, we find them $\sim\dfrac{1}{n^2}$, and with Simpson's rule $\sim\dfrac{1}{n^4}$.
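These rates can be observed empirically. In the sketch below (assuming the test integral $\int_0^1 e^x \, dx = e-1$), doubling $n$ (halving $\Delta x$) shrinks the errors by factors of about 2, 4 and 16, respectively.
%% Cell type:code id: tags:
``` python
import numpy as np

f = np.exp
a, b, exact = 0.0, 1.0, np.e - 1   # exact integral of e^x over [0, 1]
for n in [8, 16, 32]:
    dx = (b - a) / n
    x = np.linspace(a, b, n + 1)
    left     = np.sum(f(x[:-1])) * dx
    midpoint = np.sum(f((x[:-1] + x[1:]) / 2)) * dx
    simpson  = np.sum((f(x[0:-1:2]) + 4*f(x[1::2]) + f(x[2::2])) / 6) * 2*dx
    print(f"n = {n:<3} left: {abs(left-exact):.2e}  "
          f"midpoint: {abs(midpoint-exact):.2e}  Simpson: {abs(simpson-exact):.2e}")
```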
%% Cell type:markdown id: tags:
#### Truncation Error Derivation for Left Riemann integration
```{note}
You don't need to know the derivation for the exam.
```
%% Cell type:markdown id: tags:
:::{card} Derivation
$$
\text{Left Riemann Error }= | \int_a^b f(x) dx - \sum_{i=0}^{n-1} f(x_i)\Delta x | \hspace{5mm} (1)
$$
```{admonition} **Derivation**
:class: tip, dropdown
Start by using a Taylor series expansion:
$$
f(x)= f(x_i)+ (x-x_i)f'(x_i) + \mathcal{O}(\Delta x)^2 \hspace{5mm}(2)
$$
Substitute this Taylor expansion (2) in the term $\int_a^b f(x)dx$ in equation (1):
$$
\int_a^b \bigg[ f(x_i)+ (x-x_i)f'(x_i)+ \mathcal{O}(\Delta x)^2 \bigg] dx \hspace{5mm} (3)
$$
Rewrite this as the sum of the integrals over the sub-intervals $[x_i,x_{i+1}]$:
$$
= \sum_{i=0}^{n-1} \int_{x_i}^{x_{i+1}} \bigg[ f(x_i)+ (x-x_i)f'(x_i)+ \mathcal{O}(\Delta x)^2 \bigg]dx \hspace{5mm} (4)
$$
Using integration by substitution, $u= x-x_i$, therefore $\frac{du}{dx}=1$, meaning $du=dx$, and the integration bounds become $0$ and $\Delta x$:
$$
=\sum_{i=0}^{n-1} \int_{0}^{\Delta x} \bigg[ f(x_i)+ u f'(x_i) + \mathcal{O}(\Delta x)^2 \bigg]du \hspace{5mm} (5)
$$
$$
=\sum_{i=0}^{n-1} \bigg[ u f(x_i)+ \frac{u^2}{2}f'(x_i)+ \mathcal{O}(\Delta x)^3 \bigg] \bigg|^{\Delta x}_{0} \hspace{5mm} (6)
$$
$$
=\sum_{i=0}^{n-1} \bigg[ \Delta x f(x_i)+ \frac{\Delta x^2}{2}f'(x_i)+ \mathcal{O}(\Delta x)^3 \bigg] \hspace{5mm} (7)
$$
Plug equation (7) in equation (1):
$$
\bigg|\sum_{i=0}^{n-1} \bigg[ \Delta x f(x_i)+ \frac{\Delta x^2}{2}f'(x_i)+ \mathcal{O}(\Delta x)^3\bigg] - \sum_{i=0}^{n-1} f(x_i)\Delta x \bigg| \hspace{5mm} (8)
$$
The term $\Delta x f(x_i)$ falls away:
$$
=\bigg|\sum_{i=0}^{n-1} \bigg[ \frac{\Delta x^2}{2}f'(x_i)+ \mathcal{O}(\Delta x)^3 \bigg]\bigg| \hspace{5mm} (9)
$$
Summing the $n$ sub-intervals and replacing $f'(x_i)$ by the average derivative $\bar{f}'$ over $[a,b]$:
$$
= n\frac{\Delta x^2}{2}\left|\bar{f}'\right| \hspace{5mm} (10)
$$
Substituting $\Delta x = \frac{b-a}{n}$ gives:
```
$$
\text{Left Riemann Error }= | \int_a^b f(x) dx - \sum_{i=0}^{n-1} f(x_i)\Delta x | = | \bar{f}'(b-a) \frac{\Delta x}{2} |
$$
The resulting left Riemann error is proportional to the average of the derivatives $\bar{f}'$, the length of the domain $(b-a)$ and the step size $\Delta x$. This result tells us that for steeper functions the error will be higher than for milder ones. For example, a horizontal straight line, with average derivative equal to 0, would have an error equal to 0! Also, the error will be larger if the domain is larger, which is rather obvious. As those first two factors (average derivative and domain length) are inherent to the problem requirements, we say that the error is of the order $\Delta x$.
:::
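The error estimate can be checked numerically. In this sketch (assuming $f(x)=x^2$ on $[0,1]$, where the average of the derivative $2x$ is $\bar{f}'=1$, so the predicted error is $\Delta x/2$), the actual left Riemann error approaches the prediction as $n$ grows.
%% Cell type:code id: tags:
``` python
import numpy as np

def f(x):
    return x**2

a, b, exact = 0.0, 1.0, 1/3
for n in [10, 100, 1000]:
    dx = (b - a) / n
    x = np.linspace(a, b, n + 1)
    left = np.sum(f(x[:-1])) * dx
    print(f"n = {n:<5} actual error = {abs(left - exact):.6f}  predicted dx/2 = {dx/2:.6f}")
```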
%% Cell type:markdown id: tags:
:::{card} Exercise
Estimate the integral $\int_{1}^{3} \frac{5}{x^4} \, dx$ with a step size $\Delta x= 0.5$ using:
- Right Riemann Sum
- Simpson Rule
`````{admonition} Tip
:class: tip
Tip: feel free to use the code cell below as a calculator.
`````
```{admonition} **Solution**
:class: tip, dropdown
Find $n$
$$
n = \frac{b-a}{\Delta x}= \frac{3-1}{0.5}= 4
$$
***Right Riemann Sum***
$$
\int_{a}^{b} f(x) \, dx \approx \sum_{i=1}^{n} f(x_i)\Delta x \approx \sum_{i=0}^{n-1} f(x_{i+1})\Delta x
$$
$$
= \frac{5}{1.5^4} \cdot 0.5 +\frac{5}{2^4} \cdot 0.5 + \frac{5}{2.5^4} \cdot 0.5 + \frac{5}{3^4} \cdot 0.5
$$
$$
= 0.7449413580246914
$$
***Simpsons Rule***
$$
\int_{a}^{b} f(x) \, dx \approx \sum_{i=1}^{n/2} \frac{f(x_{2i-2}) + 4f(x_{2i-1}) + f(x_{2i})}{6} 2\Delta x
$$
$$
\frac{ \frac{5}{1^4}+ 4\frac{5}{1.5^4} + \frac{5}{2^4}}{6} 2\cdot 0.5 +\frac{ \frac{5}{2^4}+ 4\frac{5}{2.5^4} + \frac{5}{3^4}}{6} 2\cdot 0.5 = 1.69155761316872
$$
***Exact***
$$
\int_{1}^{3} \frac{5}{x^4} \, dx = -\frac{5}{3x^3}\bigg|_1^3 = 1.60493827160
$$
***Errors***
$$
\text{Right Riemann error} = | 1.6049382716 - 0.744941358| = 0.8599969
$$
$$
\text{Simpsons rule error} = | 1.6049382716 - 1.6915576132| = 0.0866193
$$
```
:::
%% Cell type:code id: tags:
``` python
#feel free to use this as your calculator for the exercise above
def f(x):
    return 5/(x**4)

dx = 0.5
# try filling in the sums yourself first; the lines below follow the worked solution
f_right_riemann = (f(1.5) + f(2) + f(2.5) + f(3)) * dx
f_simps = (f(1) + 4*f(1.5) + f(2))/6 * 2*dx + (f(2) + 4*f(2.5) + f(3))/6 * 2*dx
f_exact = -5/(3*3**3) + 5/(3*1**3)
print(f"f_right_riemann = {f_right_riemann}")
print(f"f_simps = {f_simps}")
print(f"f_exact = {f_exact}")
print(f"Error right Riemann = {abs(f_exact-f_right_riemann)}")
print(f"Error Simpsons = {abs(f_exact-f_simps)}")
```
%% Cell type:markdown id: tags:
......
# Bridging the Gap in Civil Engineering & Geoscience
Civil engineering and the geosciences have modernised to augment experimental observations with precise numerical techniques for the analysis and design of complex structures. In this regard, the Finite Difference Method (FDM) is a fundamental and indispensable numerical approach, which plays a pivotal role in solving the differential equations that govern physical phenomena.
These phenomena include, but are not limited to, heat flow through media, the distribution of stresses in solid structures, fluid flow through porous media, and pollutant dispersion in air and water. Solving the differential equations that govern these phenomena is often infeasible for the complex geometries and boundary conditions encountered in engineering.
This is where numerical methods such as the FDM become indispensable: they systematically discretize the governing expressions, transforming them into algebraic equations that can be solved numerically on a computer. This approach provides engineers with the flexibility to model and analyze a wide range of scenarios, optimize designs, ensure safety, reduce costs, and predict failures.