" ax1.plot(X, label=f\"AR(1) Process with φ={round(phi, 2)}\")\n",
" ax1.plot(X, label=f\"AR(1) Process with φ={round(phi, 2)}\")\n",
" ax1.set_ylabel(\"Value\")\n",
" ax1.set_ylabel(\"Value\")\n",
" ax1.set_title(\"Simulated AR(1) Process\")\n",
" ax1.set_title(\"Simulated AR(1) Process\")\n",
" ax1.set_xlabel(\"Time\")\n",
" ax1.set_ylim(-4, 4)\n",
" ax1.set_ylim(-4, 4)\n",
" ax1.legend()\n",
" ax1.legend()\n",
" ax1.grid(True)\n",
" ax1.grid(True)\n",
...
@@ -205,12 +191,14 @@
...
@@ -205,12 +191,14 @@
" # Plot the white noise in the second subplot\n",
" # Plot the white noise in the second subplot\n",
" lags = 20\n",
" lags = 20\n",
" plot_acf(X, ax=ax2, lags=lags, title=\"ACF of White Noise\")\n",
" plot_acf(X, ax=ax2, lags=lags, title=\"ACF of White Noise\")\n",
" ax2.set_xlabel(\"Time\")\n",
" ax2.set_xlabel(\"Lag\")\n",
" ax2.set_ylabel(\"ACF Value\")\n",
" ax2.set_ylabel(\"ACF Value\")\n",
" ax2.set_title(\"ACF of AR(1) Process\")\n",
" ax2.set_title(\"ACF of AR(1) Process\")\n",
" ax2.set_xlim(-0.5, lags)\n",
" ax2.set_xlim(-0.5, lags)\n",
" ax2.grid(True)\n",
" ax2.grid(True)\n",
"\n",
"\n",
" \n",
"\n",
" # Display the plot\n",
" # Display the plot\n",
" plt.tight_layout()\n",
" plt.tight_layout()\n",
" plt.show()\n",
" plt.show()\n",
...
...
%% Cell type:markdown id: tags:

(AR)=
# AR process

The code on this page can be used interactively: click {fa}`rocket` --> {guilabel}`Live Code` in the top right corner, then wait until the message {guilabel}`Python interaction ready!` appears.
The main goal is to introduce the AutoRegressive (AR) model to describe a **stationary stochastic process**. The AR model can therefore be applied to time series in which, for example, trend and seasonality are absent or have been removed so that only noise remains, or after applying other methods [to obtain a stationary time series](stationarize).
## Process definition
In an AR model, we forecast the variable of interest using a linear combination of its past values. A zero-mean AR process of order $p$ can be written as follows:

$$ S_t = \phi_1 S_{t-1} + \phi_2 S_{t-2} + \ldots + \phi_p S_{t-p} + e_t $$
Each observation is made up of a **random error** $e_t$ at that epoch and a linear combination of **past observations**. The errors $e_t$ form an uncorrelated, purely random noise process, also known as white noise. We note the process should still be stationary, satisfying $\mathbb{E}(S_t)=0$ and $\mathbb{D}(S_t)=\sigma^2$ for all $t$.
This indicates that part of the total variability of the process comes from the signal and noise of past epochs, and only a (small) portion comes from the noise at that epoch (denoted as $e_t$).
### First-order AR(1) process
We will focus on explaining the case $p=1$, i.e. the AR(1) process. A **zero-mean first-order autoregressive** process can be written as follows:

$$ S_t = \phi S_{t-1} + e_t $$
where $e_t$ is an i.i.d. noise process, e.g. distributed as $e_t\sim N(0,\sigma_{e}^2)$. The definition of $\sigma_{e}^2$ is given later.
:::{card} Exercise
In a zero-mean first-order autoregressive process, abbreviated as AR(1), we have $m=3$ observations, $\phi=0.8$, and the generated white noise errors are $e = [e_1,\, e_2,\, e_3]^T=[1,\, 2,\, -1]^T$. What is the generated AR(1) process $S = [S_1,\, S_2,\, S_3]^T$?
a. $S = \begin{bmatrix}1 & 2.8 & 1.24\end{bmatrix}^T$

b. $S = \begin{bmatrix} 0 & 2 & 0.6 \end{bmatrix}^T$

c. $S = \begin{bmatrix} 1 & 2 & -1 \end{bmatrix}^T$
```{admonition} Solution
:class: tip, dropdown
The correct answer is **a**. The AR(1) process can be initialized as $S_1=e_1=1$. The next values can be obtained through:

$$ S_t = \phi S_{t-1} + e_t $$

Giving $S_2=0.8 S_1 + e_2 = 0.8\cdot 1 + 2 = 2.8$ and $S_3=0.8 S_2 + e_3 = 0.8\cdot 2.8 - 1= 1.24$, so we have:

$$ S = \begin{bmatrix}1 & 2.8 & 1.24\end{bmatrix}^T $$
```
:::
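As a quick numerical check of the exercise, the recursion can be evaluated directly; a minimal sketch using NumPy:

```python
import numpy as np

phi = 0.8
e = np.array([1.0, 2.0, -1.0])  # given white-noise errors

# Apply S_t = phi * S_{t-1} + e_t, initialized with S_1 = e_1
S = np.zeros_like(e)
S[0] = e[0]
for t in range(1, len(e)):
    S[t] = phi * S[t - 1] + e[t]

print(S)  # [1.   2.8  1.24]
```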
We initialize $S_1=e_1$, with $\mathbb{E}(S_1)=\mathbb{E}(e_1)=0$ and $\mathbb{D}(S_1)=\mathbb{D}(e_1)=\sigma^2$. Following this, multiple applications of the above "autoregressive" formula ($S_t = \phi S_{t-1} + e_t$) give:

$$ S_t = \phi^{t-1} e_1 + \phi^{t-2} e_2 + \ldots + \phi\, e_{t-1} + e_t = \sum_{i=1}^{t} \phi^{t-i} e_i $$
All the error components $e_t$ are uncorrelated, such that $Cov(e_t,e_{t+\tau})=0$ if $\tau \neq 0$, and have variance $\sigma_{e}^2$, which still needs to be determined.
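Requiring the variance to remain constant, $\mathbb{D}(S_t)=\sigma^2$ for all $t$, determines $\sigma_{e}^2$. A short derivation, using that $S_{t-1}$ and $e_t$ are uncorrelated:

$$ \sigma^2 = \mathbb{D}(\phi S_{t-1} + e_t) = \phi^2\sigma^2 + \sigma_{e}^2 \quad\implies\quad \sigma_{e}^2 = (1-\phi^2)\,\sigma^2 $$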
The AR(1) process has the following properties:

* Autocovariance function $\implies$ $c_{\tau}=\sigma^2\phi^\tau$
* Normalized autocovariance function (ACF) $\implies$ $\rho_\tau=c_{\tau}/c_0=\phi^\tau$ (verified numerically in the sketch below)
* A larger value of $\phi$ indicates a longer-memory random process
* If $\phi=0$, the process is called a *purely random process* (white noise)
* The autocovariance function is even, $c_{\tau}=c_{-\tau}=c_{|\tau|}$, and so is the ACF, $\rho_{\tau}=\rho_{-\tau}=\rho_{|\tau|}$
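To see the ACF property numerically, the following sketch (with an assumed simulation set-up) compares the empirical ACF of a long simulated AR(1) series with the theoretical $\rho_\tau=\phi^\tau$:

```python
import numpy as np
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(1)
phi, m = 0.8, 20000

# Simulate a long zero-mean AR(1) series
e = rng.normal(0, 1, m)
S = np.zeros(m)
for t in range(1, m):
    S[t] = phi * S[t - 1] + e[t]

lags = 5
empirical = acf(S, nlags=lags)
theoretical = phi ** np.arange(lags + 1)
print(np.round(empirical, 3))    # approaches the theoretical values for large m
print(np.round(theoretical, 3))  # [1.    0.8   0.64  0.512 0.41  0.328]
```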
Later in this section we will see how the coefficient $\phi$ can be estimated.
## Simulated example
If you have run the Python code on this page, an interactive plot will be displayed below. You can change the value of $\phi$ and the number of observations $m$ to see how the AR(1) process changes. At the start, the process is initialized with $\phi = 0.8$. Try moving the slider and see the response of the ACF; pay special attention when $\phi=0$ and when $\phi$ becomes negative.
Lastly, focus on the cases where $\phi=1$ and $\phi=-1$. What do you observe? You will notice that the process "explodes". This makes intuitive sense, since the effect of the previous epoch is not dampened, but rather amplified. This also means that the process is no longer stationary. So, the AR(1) process is stationary if $|\phi|<1$.
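A quick numerical illustration of this boundary (a sketch with an assumed simulation set-up): for $|\phi|<1$ the sample variance settles near $\sigma^2$, while for $\phi=1$ it keeps growing with the series length:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(phi, m):
    # Zero-mean AR(1): S_t = phi * S_{t-1} + e_t
    e = rng.normal(0, 1, m)
    S = np.zeros(m)
    for t in range(1, m):
        S[t] = phi * S[t - 1] + e[t]
    return S

for phi in (0.8, 1.0):
    for m in (1_000, 10_000):
        S = simulate_ar1(phi, m)
        print(f"phi={phi}, m={m:6d}: sample variance = {S.var():10.1f}")

# phi=0.8: the variance stays near sigma_e^2 / (1 - phi^2) ≈ 2.8 for both lengths;
# phi=1.0: the variance grows roughly in proportion to m (a random walk).
```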
If the order $p$ and the observations of the AR($p$) process are known, the question is: **how can we estimate the coefficients $\phi_1,\ldots,\phi_p$?**
Here, we only elaborate on AR(1) using best linear unbiased estimation (BLUE) to estimate $\phi_1$. The method can be generalized to estimate the parameters of an AR($p$) process.
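For AR(1) with uncorrelated, equal-variance errors, BLUE reduces to ordinary least squares on the lagged series. A minimal numerical sketch (simulation set-up assumed); the worked example below sets up the same estimator as a formal linear model:

```python
import numpy as np

rng = np.random.default_rng(2)
phi_true, m = 0.8, 5000

# Simulate a zero-mean AR(1) series
e = rng.normal(0, 1, m)
S = np.zeros(m)
for t in range(1, m):
    S[t] = phi_true * S[t - 1] + e[t]

# Observation equations S_t = phi_1 * S_{t-1} + e_t for t = 2..m:
# design vector A = S_{1..m-1}, observations y = S_{2..m}
A = S[:-1]
y = S[1:]
phi_hat = (A @ y) / (A @ A)  # least-squares estimate of phi_1
print(f"phi_hat = {phi_hat:.3f}")  # close to the true value 0.8
```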
**Example: Parameter estimation of AR(1)**

The AR(1) process is of the form

$$S_t=\phi_1 S_{t-1}+e_t$$
In order to estimate $\phi_1$, we can set up the following linear model of observation equations (starting from $t=2$):