13.1 ARCH and GARCH Models

13.1.1 ARCH(1): Definition and Properties

The ARCH model of order 1, ARCH(1), is defined as follows:

Definition 13.1 (ARCH(1))  
The process $ \varepsilon_t$, $ t \in \mathbb{Z}$, is ARCH(1), if $ \mathop{\text{\rm\sf E}}[\varepsilon_t \mid {\cal F}_{t-1}]=0$,

$\displaystyle \sigma_t^2=\omega+\alpha \varepsilon_{t-1}^2$

(13.2)


with $ \omega>0, \: \alpha \geq 0$ and

  1. $ \mathop{\text{\rm Var}}(\varepsilon_t \mid {\cal F}_{t-1}) = \sigma_t^2$ and $ Z_t = \varepsilon_t/\sigma_t$ is i.i.d. (strong ARCH),
  2. $ \mathop{\text{\rm Var}}(\varepsilon_t \mid {\cal F}_{t-1}) = \sigma_t^2$ (semi-strong ARCH), or
  3. $ \mathop{\text{\rm P}}(\varepsilon_t^2 \mid 1, \varepsilon_{t-1}, \varepsilon_{t-2}, \ldots, \varepsilon_{t-1}^2, \varepsilon_{t-2}^2, \ldots) = \sigma_t^2$ (weak ARCH),

where $ \mathop{\text{\rm P}}$ denotes the best linear projection on the indicated variables.

13.1.2 Estimation of ARCH(1) Models

Theorem 13.5 states that an ARCH(1) process can be represented as an AR(1) process in $ \varepsilon_t^2$. A simple Yule-Walker estimator exploits this property:

$\displaystyle \hat{\alpha}^{(0)} = \frac{\sum_{t=2}^n (\varepsilon_t^2 - \hat{\omega}^{(0)})(\varepsilon_{t-1}^2 - \hat{\omega}^{(0)})}{\sum_{t=2}^n (\varepsilon_t^2 - \hat{\omega}^{(0)})^2}
$

with $ \hat{\omega}^{(0)} = n^{-1}\sum_{t=1}^n \varepsilon_t^2$. Since the distribution of $ \varepsilon_t^2$ is naturally not normal, the Yule-Walker estimator is inefficient. It can, however, be used as an initial value for iterative estimation methods.
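As a minimal sketch (assuming a NumPy array eps holding the observed returns $ \varepsilon_1,\ldots,\varepsilon_n$; the function name is illustrative), these initial estimates can be computed as follows:

\begin{verbatim}
import numpy as np

def yule_walker_arch1(eps):
    """Initial Yule-Walker estimates (omega, alpha) for an ARCH(1) model,
    based on the AR(1) representation of eps_t^2."""
    eps2 = np.asarray(eps, dtype=float) ** 2
    omega0 = eps2.mean()                                   # hat{omega}^(0)
    num = np.sum((eps2[1:] - omega0) * (eps2[:-1] - omega0))
    den = np.sum((eps2[1:] - omega0) ** 2)
    return omega0, num / den                               # (hat{omega}^(0), hat{alpha}^(0))
\end{verbatim}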

The estimation of ARCH models is normally done using the maximum likelihood (ML) method. Assuming that the returns $ \varepsilon_t$ have a conditionally normal distribution, we have:

$\displaystyle p(\varepsilon_t \mid {\cal F}_{t-1})=\frac{1}{\sqrt{2\pi}\sigma_t} \exp\left\{-\frac{1}{2}\frac{\varepsilon_t^2}{\sigma_t^2}\right\},$

(13.7)


The log-likelihood function $ l(\omega,\alpha)$ can be written as a function of the parameters $ \omega$ and $ \alpha$:

$\displaystyle \begin{aligned}
l(\omega,\alpha) &= \sum_{t=2}^n l_t(\omega,\alpha) + \log p_{\varepsilon}(\varepsilon_1) \\
&= \sum_{t=2}^n \log p(\varepsilon_t \mid {\cal F}_{t-1}) + \log p_{\varepsilon}(\varepsilon_1) \\
&= -\frac{n-1}{2} \log(2\pi) - \frac{1}{2} \sum_{t=2}^n \log (\omega+\alpha \varepsilon_{t-1}^2) - \frac{1}{2} \sum_{t=2}^n \frac{\varepsilon_t^2}{\omega+\alpha \varepsilon_{t-1}^2} + \log p_{\varepsilon}(\varepsilon_1),
\end{aligned}$

(13.8)

where $ p_{\varepsilon}$ is the stationary marginal density of $ \varepsilon_t$. A problem is that the analytical expression for $ p_{\varepsilon}$ is unknown in ARCH models, so (13.8) cannot be computed. In the conditional likelihood function $ l^b=\log p(\varepsilon_n,\ldots,\varepsilon_2 \mid \varepsilon_1)$ the expression $ \log p_{\varepsilon}(\varepsilon_1)$ disappears:

$\displaystyle \begin{aligned}
l^b(\omega,\alpha) &= \sum_{t=2}^n l_t(\omega,\alpha) \\
&= \sum_{t=2}^n \log p(\varepsilon_t \mid {\cal F}_{t-1}) \\
&= -\frac{n-1}{2} \log(2\pi) - \frac{1}{2} \sum_{t=2}^n \log (\omega+\alpha \varepsilon_{t-1}^2) - \frac{1}{2} \sum_{t=2}^n \frac{\varepsilon_t^2}{\omega+\alpha \varepsilon_{t-1}^2}.
\end{aligned}$

(13.9)

For large $ n$ the difference $ l - l^b$ is negligible.
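A minimal sketch evaluating the conditional log-likelihood (13.9), assuming a NumPy array eps of length $ n$ (names are illustrative):

\begin{verbatim}
import numpy as np

def arch1_conditional_loglik(omega, alpha, eps):
    """Conditional log-likelihood l^b(omega, alpha) of an ARCH(1) model, eq. (13.9)."""
    eps = np.asarray(eps, dtype=float)
    sigma2 = omega + alpha * eps[:-1] ** 2        # sigma_t^2 for t = 2, ..., n
    n = len(eps)
    return (-(n - 1) / 2 * np.log(2 * np.pi)
            - 0.5 * np.sum(np.log(sigma2))
            - 0.5 * np.sum(eps[1:] ** 2 / sigma2))
\end{verbatim}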

Figure 13.5 shows the conditional likelihood of a generated ARCH(1) process with $ n=100$. The parameter $ \omega$ is chosen so that the unconditional variance $ \sigma^2$ is held constant, i.e., $ \omega=(1-\alpha)\sigma^2$. For an ARCH(1) model the optimum of the likelihood can be located by inspecting the graph. In most cases, however, we would also like to know the precision of the estimator; by the asymptotic properties of the ML estimator (see Section 13.1.6), it is essentially determined by the second derivative of the likelihood at the optimum. Furthermore, for models of larger order one has to use numerical methods such as the score algorithm introduced in Section 12.8 to estimate the parameters. In this case the first and second partial derivatives of the likelihood must be computed.

\includegraphics[width=1\defpicwidth]{likarch1.ps}

Fig. 13.5: Conditional likelihood function of a generated ARCH(1) process with $ n=100$. The true parameter is $ \alpha=0.5$. SFElikarch1.xpl

With the ARCH(1) model these are

$\displaystyle \frac{\partial l_t^b}{\partial \omega} = \frac{1}{2\sigma_t^2} \left(\frac{\varepsilon_t^2}{\sigma_t^2}-1\right)$

(13.10)

$\displaystyle \frac{\partial l_t^b}{\partial \alpha} = \frac{1}{2\sigma_t^2} \varepsilon_{t-1}^2 \left(\frac{\varepsilon_t^2}{\sigma_t^2}-1\right)$

(13.11)

$\displaystyle \frac{\partial^2 l_t^b}{\partial \omega^2} = -\frac{1}{2\sigma_t^4} \left(2\frac{\varepsilon_t^2}{\sigma_t^2}-1\right)$

(13.12)

$\displaystyle \frac{\partial^2 l_t^b}{\partial \alpha^2} = -\frac{1}{2\sigma_t^4} \varepsilon_{t-1}^4 \left(2\frac{\varepsilon_t^2}{\sigma_t^2}-1\right)$

(13.13)

$\displaystyle \frac{\partial^2 l_t^b}{\partial \omega \partial \alpha} = -\frac{1}{2\sigma_t^4} \varepsilon_{t-1}^2 \left(2\frac{\varepsilon_t^2}{\sigma_t^2}-1\right).$

(13.14)


The first order conditions are $ \sum_{t=2}^n \partial l_t^b / \partial \omega =0$ and $ \sum_{t=2}^n \partial l_t^b / \partial \alpha =0$. For the score algorithm the expected value of the second derivative has to be calculated. It is assumed that $ \mathop{\text{\rm\sf E}}[Z_t^2]=\mathop{\text{\rm\sf E}}[(\varepsilon_t/\sigma_t)^2]=1$, so that the expression in parentheses $ (2\varepsilon_t^2/\sigma_t^2-1)$ has an expected value of one. From this it follows that

$\displaystyle \mathop{\text{\rm\sf E}}\left[ \frac{\partial^2 l_t^b}{\partial \omega^2}\right] = -\frac{1}{2}\mathop{\text{\rm\sf E}}\left[\frac{1}{\sigma_t^4}\right].
$

The expectation of $ \sigma_t^{-4}$ is consistently estimated by $ (n-1)^{-1}\sum_{t=2}^n (\omega+\alpha \varepsilon_{t-1}^2)^{-2}$, so that the expected value of the second derivative is estimated by:

$\displaystyle \hat{\mathop{\text{\rm\sf E}}} \frac{\partial^2 l_t^b}{\partial \omega^2} = -\frac{1}{2(n-1)}\sum_{t=2}^{n} \frac{1}{\sigma_t^4}.
$

Similarly, the expected value of the second derivative with respect to $ \alpha$ follows as

$\displaystyle \mathop{\text{\rm\sf E}}\left[ \frac{\partial^2 l_t^b}{\partial \alpha^2}\right] = -\frac{1}{2}\mathop{\text{\rm\sf E}}\left[\frac{\varepsilon_{t-1}^4}{\sigma_t^4}\right]
$

and the estimator is

$\displaystyle \hat{\mathop{\text{\rm\sf E}}} \frac{\partial^2 l_t^b}{\partial \alpha^2} = -\frac{1}{2(n-1)}\sum_{t=2}^{n} \frac{\varepsilon_{t-1}^4}{\sigma_t^4}.
$
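The score and these estimated expected second derivatives are all that a scoring step needs. Below is a sketch of one Fisher-scoring update for the ARCH(1) model, built from (13.10)-(13.11) and the estimates above (function and variable names are illustrative):

\begin{verbatim}
import numpy as np

def arch1_scoring_step(omega, alpha, eps):
    """One Fisher-scoring update for (omega, alpha) in an ARCH(1) model."""
    eps = np.asarray(eps, dtype=float)
    e2_lag = eps[:-1] ** 2                       # eps_{t-1}^2 for t = 2, ..., n
    sigma2 = omega + alpha * e2_lag              # sigma_t^2
    u = eps[1:] ** 2 / sigma2 - 1.0              # eps_t^2 / sigma_t^2 - 1

    # score: sums over t of (13.10) and (13.11)
    score = np.array([np.sum(u / (2 * sigma2)),
                      np.sum(e2_lag * u / (2 * sigma2))])

    # summed estimated expected second derivatives (a negative definite matrix)
    H = -0.5 * np.array([[np.sum(1.0 / sigma2 ** 2), np.sum(e2_lag / sigma2 ** 2)],
                         [np.sum(e2_lag / sigma2 ** 2), np.sum(e2_lag ** 2 / sigma2 ** 2)]])

    # scoring update: theta_new = theta - H^{-1} * score
    return np.array([omega, alpha]) - np.linalg.solve(H, score)
\end{verbatim}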

Theorem 13.6  
Given $ Z_t \sim {\text{\rm N}}(0,1)$, it holds that

$\displaystyle \mathop{\text{\rm\sf E}}\left[\left(\frac{\partial l_t^b}{\partial \omega}\right)^2\right] = -\mathop{\text{\rm\sf E}}\left[ \frac{\partial^2 l_t^b}{\partial \omega^2}\right].
$

Proof:
This follows immediately from

$\displaystyle \mathop{\text{\rm\sf E}}\left[\left(\frac{\partial l_t^b}{\partial \omega}\right)^2\right] = \mathop{\text{\rm\sf E}}\left[\frac{1}{4\sigma_t^4} (Z_t^4 - 2Z_t^2+1)\right] = \mathop{\text{\rm\sf E}}\left[\frac{1}{4\sigma_t^4}\right] (3 - 2 + 1) = \frac{1}{2}\mathop{\text{\rm\sf E}}\left[\frac{1}{\sigma_t^4}\right]. \quad {\Box}$

Obviously Theorem 13.6 also holds for the parameter $ \alpha$ in place of $ \omega$. In addition it holds essentially for more general models, for example for the estimation of GARCH models in Section 13.1.6. In more complicated models one can replace the second derivative with the square of the first derivative, which is easier to compute. This requires, however, that the likelihood function be correctly specified, i.e., that the true distribution of the error terms is normal.

Under the two conditions

  1. $ \mathop{\text{\rm\sf E}}[Z_t \mid {\cal F}_{t-1}]= 0$ and $ \mathop{\text{\rm\sf E}}[Z_t^2 \mid {\cal F}_{t-1}]= 1$
  2. $ \mathop{\text{\rm\sf E}}[\log(\alpha Z_t^2) \mid {\cal F}_{t-1}] < 0$ (strict stationarity)

and under certain technical conditions, the ML estimators are consistent. If $ \mathop{\text{\rm\sf E}}[Z_t^4 \mid {\cal F}_{t-1}] < \infty$ and $ \omega>0$, $ \alpha>0$ hold in addition, then $ \hat{\theta}=(\hat{\omega}, \hat{\alpha})^\top $ is asymptotically normally distributed:

$\displaystyle \sqrt{n}(\hat{\theta}-\theta) \stackrel{{\cal L}}{\longrightarrow} {\text{\rm N}} (0, J^{-1} I J^{-1})$

(13.15)


with

$\displaystyle I = \mathop{\text{\rm\sf E}}\left(\frac{\partial l_t(\theta)}{\partial \theta}
\frac{\partial l_t(\theta)}{\partial \theta^\top } \right)
$

and

$\displaystyle J=- \mathop{\text{\rm\sf E}}\left(\frac{\partial^2 l_t(\theta)}{\partial \theta
\partial \theta^\top } \right).
$

If the true distribution of $ Z_t$ is normal, then $ I=J$ and the asymptotic covariance matrix simplifies to $ J^{-1}$, i.e., the inverse of the Fisher information matrix. If the true distribution is instead leptokurtic, then the maximizer of (13.9) is still consistent, but no longer efficient. In this case the ML method is interpreted as the `Quasi Maximum Likelihood' (QML) method.

In a Monte Carlo simulation study in Shephard (1996), 1000 ARCH(1) processes with $ \omega=0.2$ and $ \alpha =0.9$ were generated and the parameters were estimated using QML. The results are given in Table 13.2. Obviously with the moderate sample sizes (e.g., $ n=500$) the bias is negligible. The variance, however, is still so large that a relatively large proportion (10%) of the estimates are larger than one, which would imply covariance nonstationarity. This, in turn, has a considerable influence on the volatility prediction.
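A small-scale sketch of such an experiment (simulating strong ARCH(1) paths with Gaussian innovations and re-estimating the parameters by conditional QML; the replication count, optimizer settings and function names are illustrative):

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def simulate_arch1(omega, alpha, n, rng):
    """Simulate a strong ARCH(1) path with Z_t ~ N(0,1)."""
    eps, sigma2 = np.zeros(n), omega / (1 - alpha)     # start from the unconditional variance
    for t in range(n):
        eps[t] = np.sqrt(sigma2) * rng.standard_normal()
        sigma2 = omega + alpha * eps[t] ** 2
    return eps

def qml_arch1(eps):
    """Conditional QML estimates of (omega, alpha); the constant term is omitted."""
    def neg_loglik(theta):
        omega, alpha = theta
        sigma2 = omega + alpha * eps[:-1] ** 2
        return 0.5 * np.sum(np.log(sigma2) + eps[1:] ** 2 / sigma2)
    res = minimize(neg_loglik, x0=[np.var(eps) * 0.5, 0.5],
                   bounds=[(1e-8, None), (0.0, None)], method="L-BFGS-B")
    return res.x

rng = np.random.default_rng(0)
alphas = [qml_arch1(simulate_arch1(0.2, 0.9, 500, rng))[1] for _ in range(100)]
print(np.mean(alphas), np.mean(np.array(alphas) >= 1.0))   # mean estimate, share >= 1
\end{verbatim}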

 

Table 13.2: Monte Carlo simulation results for QML estimates of the parameter $ \alpha =0.9$ from an ARCH(1) model with $ k=1000$ replications. The last column gives the proportion of estimates larger than one (according to Shephard (1996)).

$ n$      $ k^{-1}\sum_{j=1}^k \hat{\alpha}_j$      $ \sqrt{k^{-1}\sum_{j=1}^k(\hat{\alpha}_j-\alpha)^2}$      #( $ \hat{\alpha}_j\ge 1$)
 100      0.852      0.257      27%
 250      0.884      0.164      24%
 500      0.893      0.107      15%
1000      0.898      0.081      10%

 

13.1.3 ARCH($ q$): Definition and Properties

The definition of the ARCH(1) model is now extended to the case of $ q>1$ lags on which the conditional variance depends.

Definition 13.2 (ARCH($ q$))  
The process $ (\varepsilon_t)$, $ t \in \mathbb{Z}$, is ARCH($ q$), when $ \mathop{\text{\rm\sf E}}[\varepsilon_t \mid {\cal F}_{t-1}]=0$,

$\displaystyle \sigma_t^2=\omega+\alpha_1 \varepsilon_{t-1}^2 + \ldots + \alpha_q \varepsilon_{t-q}^2$

(13.16)


with $ \omega>0, \: \alpha_1\geq 0, \ldots, \alpha_q\geq 0$ and the strong, semi-strong, or weak specification of Definition 13.1, with $ \sigma_t^2$ given by (13.16).

The conditional variance $ \sigma_t^2$ in an ARCH($ q$) model is also a linear function of the $ q$ squared lags.

Theorem 13.7  
Let $ \varepsilon_t$ be an ARCH($ q$) process with $ \mathop{\text{\rm Var}}(\varepsilon_t)=\sigma^2<\infty$. Then

$\displaystyle \sigma^2 = \frac{\omega}{1-\alpha_1-\ldots-\alpha_q}
$

with $ \alpha_1+\cdots+\alpha_q < 1$.

Proof:
as in Theorem 13.2. $ {\Box}$

If instead $ \alpha_1+\cdots+\alpha_q \ge 1$, then the unconditional variance does not exist and the process is not covariance-stationary.

Theorem 13.8 (Representation of an ARCH($ q$) Process)  
Let $ \varepsilon_t$ be an ARCH($ q$) process with $ \mathop{\text{\rm\sf E}}[\varepsilon_t^4]=c<\infty$. Then

  1. $ \eta_t = \sigma_t^2(Z_t^2-1)$ is white noise.
  2. $ \varepsilon_t^2$ is an AR($ q$) process with $ \varepsilon_t^2 = \omega + \sum_{i=1}^q \alpha_i \varepsilon_{t-i}^2 + \eta_t$.

Proof:
as in Theorem 13.5. $ {\Box}$

A problem with the ARCH($ q$) model is that some applications call for a large order $ q$, since distant lags lose their influence on the volatility only slowly. As an empirical rule of thumb, a minimum order of $ q=14$ has been suggested. The disadvantage of a large order is that many parameters have to be estimated under restrictions, which can be categorized as conditions for stationarity and for strictly positive parameters. If efficient estimation methods are to be used, for example maximum likelihood, the estimation over such a high dimensional parameter space can become numerically quite involved.

One possibility of reducing the number of parameters while including a long history is to assume linearly decreasing weights on the lags, i.e.,

$\displaystyle \sigma_t^2=\omega+\alpha \sum_{i=1}^q w_i \varepsilon_{t-i}^2,
$

with

$\displaystyle w_i = \frac{2(q+1-i)}{q(q+1)},
$

so that only two parameters have to be estimated. In Section 13.1.5 we describe a generalized ARCH model which, on the one hand, has a parsimonious parameterization and, on the other hand, a flexible lag structure.
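A minimal sketch of this weighting scheme (names are illustrative; the weights sum to one and decrease linearly in the lag):

\begin{verbatim}
import numpy as np

def linear_weights(q):
    """Weights w_i = 2(q + 1 - i) / (q(q + 1)), i = 1, ..., q."""
    i = np.arange(1, q + 1)
    return 2 * (q + 1 - i) / (q * (q + 1))

def weighted_arch_sigma2(omega, alpha, eps, q):
    """Conditional variances sigma_t^2 = omega + alpha * sum_i w_i eps_{t-i}^2 for t > q."""
    w, eps = linear_weights(q), np.asarray(eps, dtype=float)
    sigma2 = np.empty(len(eps) - q)
    for t in range(q, len(eps)):
        lags2 = eps[t - 1::-1][:q] ** 2          # (eps_{t-1}^2, ..., eps_{t-q}^2)
        sigma2[t - q] = omega + alpha * np.sum(w * lags2)
    return sigma2
\end{verbatim}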

13.1.4 Estimation of an ARCH($ q$) Model

For the general ARCH($ q$) model from (13.16) the conditional likelihood is

$\displaystyle \begin{aligned}
l^b(\theta) &= \sum_{t=q+1}^n l_t(\theta) \\
&= -\frac{n-q}{2} \log(2\pi) - \frac{1}{2} \sum_{t=q+1}^n \log \sigma_t^2 - \frac{1}{2} \sum_{t=q+1}^n \frac{\varepsilon_t^2}{\sigma_t^2}
\end{aligned}$

(13.17)

with the parameter vector $ \theta=(\omega,\alpha_1,\ldots,\alpha_q)^\top $. Although one can find the optimum of an ARCH(1) likelihood by inspecting a graph such as Figure 13.5, this is complicated and impractical for a high dimensional parameter space. The maximization of (13.17) with respect to $ \theta$ is a non-linear optimization problem which can be solved numerically. The score algorithm is employed empirically not only for ARMA models (see Section 12.8) but also for ARCH models. In order to implement this approach the first and second derivatives of the (conditional) likelihood with respect to the parameters need to be computed. For the ARCH($ q$) model the first derivative is

$\displaystyle \frac{\partial l_t^b}{\partial \theta} = \frac{1}{2\sigma_t^2} \frac{\partial \sigma_t^2}{\partial \theta} \left(\frac{\varepsilon_t^2}{\sigma_t^2}-1\right)$

(13.18)

with

$\displaystyle \frac{\partial \sigma_t^2}{\partial \theta} = (1, \varepsilon_{t-1}^2, \ldots,\varepsilon_{t-q}^2)^\top .
$

The first order condition is $ \sum_{t=q+1}^n \partial l_t / \partial \theta =0$. For the second derivative and the asymptotic properties of the QML estimator see Section 13.1.6.
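As a sketch, the conditional variances and the summed score (13.18) of an ARCH($ q$) model can be computed as follows (function names are illustrative):

\begin{verbatim}
import numpy as np

def archq_score(theta, eps):
    """Summed score sum_{t=q+1}^n dl_t/dtheta of (13.17), using (13.18);
    theta = (omega, alpha_1, ..., alpha_q)."""
    eps = np.asarray(eps, dtype=float)
    omega, alphas = theta[0], np.asarray(theta[1:], dtype=float)
    q = len(alphas)
    score = np.zeros(1 + q)
    for t in range(q, len(eps)):
        lags2 = eps[t - 1::-1][:q] ** 2                  # (eps_{t-1}^2, ..., eps_{t-q}^2)
        sigma2 = omega + alphas @ lags2
        dsigma2 = np.concatenate(([1.0], lags2))         # d sigma_t^2 / d theta
        score += dsigma2 / (2 * sigma2) * (eps[t] ** 2 / sigma2 - 1.0)
    return score
\end{verbatim}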

13.1.5 Generalized ARCH (GARCH)

The ARCH($ q$) model can be generalized by extending it with autoregressive terms of the volatility.

Definition 13.3 (GARCH($ p,q$))   The process $ (\varepsilon_t)$, $ t \in \mathbb{Z}$, is GARCH($ p,q$), if $ \mathop{\text{\rm\sf E}}[\varepsilon_t \mid {\cal F}_{t-1}]=0$,

$\displaystyle \sigma_t^2 = \omega + \sum_{i=1}^q\alpha_i \varepsilon_{t-i}^2 + \sum_{j=1}^p \beta_j \sigma_{t-j}^2,$

(13.19)


and the strong, semi-strong, or weak specification of Definition 13.1 holds with $ \sigma_t^2$ given by (13.19).

The sufficient but not necessary conditions for

$\displaystyle \sigma_t^2 > 0 \quad a.s.,$   ( $\displaystyle {\P}[\sigma_t^2 > 0] = 1$   )

(13.20)


are $ \omega>0, \: \alpha_i\geq 0, \: i=1,\ldots,q$ and $ \beta_j \geq 0, \: j=1,\ldots,p$. In the case of the GARCH(1,2) model

$\displaystyle \begin{aligned}
\sigma_t^2 &= \omega + \alpha_1 \varepsilon_{t-1}^2 + \alpha_2 \varepsilon_{t-2}^2 + \beta_1 \sigma_{t-1}^2 \\
&= \frac{\omega}{1-\beta_1} + \alpha_1 \sum_{j=0}^\infty \beta_1^j \varepsilon_{t-j-1}^2 + \alpha_2 \sum_{j=0}^\infty \beta_1^j \varepsilon_{t-j-2}^2 \\
&= \frac{\omega}{1-\beta_1} + \alpha_1 \varepsilon_{t-1}^2 + (\alpha_1\beta_1+\alpha_2) \sum_{j=0}^\infty \beta_1^j \varepsilon_{t-j-2}^2
\end{aligned}$

with $ 0 \le \beta_1 <1$. The conditions $ \omega>0$, $ \alpha_1 \ge 0$ and $ \alpha_1 \beta_1 + \alpha_2 \ge 0$ are necessary and sufficient for (13.20), provided that the sum $ \sum_{j=0}^\infty \beta_1^j \varepsilon_{t-j-2}^2$ converges.

Theorem 13.9 (Representation of a GARCH($ p,q$) process)  
Let $ \varepsilon_t$ be a GARCH($ p,q$) process with $ \mathop{\text{\rm\sf E}}[\varepsilon_t^4]=c<\infty$. Then

  1. $ \eta_t = \sigma_t^2(Z_t^2-1)$ is white noise.
  2. $ \varepsilon_t^2$ is an ARMA($ m,p$) process with

$\displaystyle \varepsilon_t^2 = \omega + \sum_{i=1}^m \gamma_i \varepsilon_{t-i}^2 - \sum_{j=1}^p \beta_j \eta_{t-j} + \eta_t,$

(13.21)

where $ m=\max(p,q)$ and $ \gamma_i = \alpha_i + \beta_i$, with $ \alpha_i=0$ for $ i>q$ and $ \beta_i=0$ for $ i>p$.

Proof:
as in Theorem 13.5. $ {\Box}$

If $ \varepsilon_t$ follows a GARCH process, then from Theorem 13.9 we can see that $ \varepsilon_t^2$ follows an ARMA model with conditionally heteroscedastic error terms $ \eta_t$. As we know, if all the roots of the polynomial $ (1-\beta_1 z -\ldots- \beta_p z^p)$ lie outside the unit circle, then the ARMA process (13.21) is invertible and can be written as an AR($ \infty$) process. Moreover it follows from Theorem 13.8 that the GARCH($ p,q$) model can be represented as an ARCH($ \infty$) model. Thus one can draw conclusions analogous to those for ARMA models when determining the order $ (p,q)$ of the model. There are, however, essential differences in the definition of the persistence of shocks.

Theorem 13.10 (Unconditional variance of a GARCH($ p,q$) process)  
Let $ \varepsilon_t$ be a GARCH($ p,q$) process with $ \mathop{\text{\rm Var}}(\varepsilon_t)=\sigma^2<\infty$. Then

$\displaystyle \sigma^2 = \frac{\omega}{1-\sum_{i=1}^q \alpha_i - \sum_{j=1}^p \beta_j},
$

with $ \sum_{i=1}^q \alpha_i + \sum_{j=1}^p \beta_j < 1$.

Proof:
as in Theorem 13.2. $ {\Box}$

General conditions for the existence of higher moments of GARCH($ p,q$) models are given in He and Teräsvirta (1999). For models of small order and under specific distributional assumptions we can derive:

Theorem 13.11 (Fourth moment of a GARCH(1,1) process)  
Let $ \varepsilon_t$ be a (semi-)strong GARCH(1,1) process with $ \mathop{\text{\rm Var}}(\varepsilon_t)=\sigma^2<\infty$ and $ Z_t \sim {\text{\rm N}}(0,1)$. Then $ \mathop{\text{\rm\sf E}}[\varepsilon_t^4]<\infty$ holds if and only if $ 3\alpha_1^2+2\alpha_1\beta_1+\beta_1^2<1$. The kurtosis $ \mathop{\text{\rm Kurt}}(\varepsilon_t)$ is given as

$\displaystyle \mathop{\text{\rm Kurt}}[\varepsilon_t]= \frac{\mathop{\text{\rm\sf E}}[\varepsilon_t^4]}{\left(\mathop{\text{\rm\sf E}}[\varepsilon_t^2]\right)^2} = 3 + \frac{6\alpha_1^2}{1-\beta_1^2-2\alpha_1\beta_1-3\alpha_1^2}.$

(13.22)


Proof:
The claim follows from $ \mathop{\text{\rm\sf E}}[\varepsilon_t^4]=3\mathop{\text{\rm\sf E}}[(\omega+\alpha_1 \varepsilon_{t-1}^2+\beta_1 \sigma_{t-1}^2)^2]$ and the stationarity of $ \varepsilon_t$. $ {\Box}$

The function (13.22) is illustrated in Figure 13.6. For all $ \alpha_1>0$ we have $ \mathop{\text{\rm Kurt}}[\varepsilon_t]>3$, i.e., the distribution of $ \varepsilon_t$ is leptokurtic. The kurtosis equals 3 only in the boundary case $ \alpha_1=0$, where the conditional heteroscedasticity disappears and Gaussian white noise results. In addition it can be seen in the figure that for a given $ \alpha_1$ the kurtosis increases only slowly in $ \beta_1$, whereas for a given $ \beta_1$ it increases much faster in $ \alpha_1$.

\includegraphics[width=1.2\defpicwidth]{kurgarch.ps}

Fig. 13.6: Kurtosis of a GARCH(1,1) process according to (13.22). The left axis shows the parameter $ \beta_1$, the right $ \alpha_1$. SFEkurgarch.xpl
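A small sketch evaluating (13.22) together with its existence condition (purely illustrative parameter values):

\begin{verbatim}
def garch11_kurtosis(alpha1, beta1):
    """Kurtosis of a GARCH(1,1) process with Gaussian innovations, eq. (13.22);
    returns None if the fourth moment does not exist."""
    if 3 * alpha1 ** 2 + 2 * alpha1 * beta1 + beta1 ** 2 >= 1:
        return None
    return 3 + 6 * alpha1 ** 2 / (1 - beta1 ** 2 - 2 * alpha1 * beta1 - 3 * alpha1 ** 2)

print(garch11_kurtosis(0.0, 0.8))   # 3.0: no conditional heteroscedasticity
print(garch11_kurtosis(0.1, 0.8))   # about 3.35: mildly leptokurtic
print(garch11_kurtosis(0.3, 0.6))   # strongly leptokurtic
\end{verbatim}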

Remark 13.3  
Nelson (1990$ a$) shows that the strong GARCH(1,1) process $ \varepsilon_t$ is strictly stationary when $ \mathop{\text{\rm\sf E}}[\log(\alpha_1 Z_t^2+\beta_1)]<0$. If $ Z_t \sim {\text{\rm N}}(0,1)$, then this condition for strict stationarity is weaker than the condition for covariance stationarity, $ \alpha_1 +\beta_1 <1$.
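Nelson's criterion can be checked numerically by approximating the expectation with a Monte Carlo average, as in the following sketch (sample size and names are illustrative):

\begin{verbatim}
import numpy as np

def strictly_stationary_garch11(alpha1, beta1, n=1_000_000, seed=0):
    """Monte Carlo check of E[log(alpha1 * Z^2 + beta1)] < 0 for Z ~ N(0,1)."""
    z = np.random.default_rng(seed).standard_normal(n)
    return np.mean(np.log(alpha1 * z ** 2 + beta1)) < 0

# alpha1 + beta1 = 1.1 > 1 (no finite unconditional variance), yet the
# Monte Carlo average of log(alpha1*Z^2 + beta1) comes out negative:
print(strictly_stationary_garch11(0.6, 0.5))   # True: strictly stationary
\end{verbatim}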

In practical applications it frequently turns out that models of small order describe the data sufficiently well. In most cases GARCH(1,1) is adequate.

A substantial disadvantage of the standard ARCH and GARCH models is that they cannot model asymmetries of the volatility with respect to the sign of past shocks. This results from the squared form of the lagged shocks in (13.16) and (13.19): shocks affect the volatility through their size but not through their sign. In other words, bad news (identified by a negative sign) has the same influence on the volatility as good news (positive sign) of the same absolute value. Empirically it is observed that bad news has a larger effect on the volatility than good news. In Sections 13.2 and 14.1 we take a closer look at extensions of the standard models which can capture these observations.

13.1.6 Estimation of GARCH($ p,q$) Models

Based on the ARMA representation of GARCH processes (see Theorem 13.9), Yule-Walker estimators $ \tilde{\theta}$ are again considered. These estimators are, as can be shown, consistent and asymptotically normally distributed, $ \sqrt{n}(\tilde{\theta}-\theta) \stackrel{{\cal L}}{\longrightarrow} {\text{\rm N}}(0,\tilde{\Sigma})$. However, in the case of GARCH models they are not efficient in the sense that the matrix $ \tilde{\Sigma} - J^{-1} I J^{-1}$ is positive definite, where $ J^{-1} I J^{-1}$ is the asymptotic covariance matrix of the QML estimator, see (13.25). In the literature there are several studies of the efficiency of the Yule-Walker and QML estimators in finite samples, see Section 13.4. In most cases maximum likelihood methods are chosen in order to attain efficiency.

The likelihood function of the general GARCH($ p,q$) model (13.19) is identical to (13.17) with the extended parameter vector $ \theta=(\omega,\alpha_1,\ldots,\alpha_q, \beta_1,\ldots,\beta_p)^\top $. Figure 13.7 displays the likelihood function of a generated GARCH(1,1) process with $ \omega=0.1$, $ \alpha=0.1$, $ \beta=0.8$ and $ n=500$. The parameter $ \omega$ was chosen so that the unconditional variance $ \sigma^2$ is held constant, i.e., $ \omega=(1-\alpha-\beta)\sigma^2$. As one can see, the function is flat on the right, close to the optimum, thus the estimation will be relatively imprecise, i.e., it will have a larger variance. In addition, Figure 13.8 displays the contour plot of the likelihood function.

\includegraphics[width=1.0\defpicwidth]{likgar3d.ps}

Fig. 13.7: Likelihood function of a generated GARCH(1,1) process with $ n=500$. The left axis shows the parameter $ \beta$, the right $ \alpha$. The true parameters are $ \omega=0.1$, $ \alpha=0.1$ and $ \beta=0.8$. SFElikgarch.xpl

 

\includegraphics[width=1\defpicwidth]{likgarco.ps}

Fig. 13.8: Contour plot of the likelihood function of a generated GARCH(1,1) process with $ n=500$. The perpendicular axis displays the parameter $ \beta$, the horizontal $ \alpha$. The true parameters are $ \omega=0.1$, $ \alpha=0.1$ and $ \beta=0.8$. SFElikgarch.xpl
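As a sketch, the conditional QML fit of a GARCH(1,1) model can be obtained by maximizing the conditional likelihood numerically, for example with scipy's optimizer (initial values, bounds and function names are illustrative assumptions):

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def garch11_sigma2(theta, eps):
    """Recursion sigma_t^2 = omega + alpha * eps_{t-1}^2 + beta * sigma_{t-1}^2."""
    omega, alpha, beta = theta
    sigma2 = np.empty(len(eps))
    sigma2[0] = np.var(eps)                      # initialize with the sample variance
    for t in range(1, len(eps)):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

def garch11_fit(eps):
    """Conditional QML estimates (omega, alpha, beta); constant term omitted."""
    eps = np.asarray(eps, dtype=float)
    def neg_loglik(theta):
        sigma2 = garch11_sigma2(theta, eps)
        return 0.5 * np.sum(np.log(sigma2) + eps ** 2 / sigma2)
    x0 = [0.1 * np.var(eps), 0.1, 0.8]
    bounds = [(1e-8, None), (0.0, 1.0), (0.0, 1.0)]
    return minimize(neg_loglik, x0, bounds=bounds, method="L-BFGS-B").x
\end{verbatim}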

The first partial derivatives of (13.17) are

$\displaystyle \frac{\partial l_t}{\partial \theta} = \frac{1}{2\sigma_t^2} \frac{\partial \sigma_t^2}{\partial \theta} \left(\frac{\varepsilon_t^2}{\sigma_t^2}-1\right)$

(13.23)


with

$\displaystyle \frac{\partial \sigma_t^2}{\partial \theta} = \vartheta_t + \sum_{j=1}^p \beta_j \frac{\partial \sigma_{t-j}^2}{\partial \theta}
$

and $ \vartheta_t=(1, \varepsilon_{t-1}^2, \ldots,\varepsilon_{t-q}^2,\sigma_{t-1}^2,\ldots,\sigma_{t-p}^2)^\top $. The first order conditions are $ \sum_{t=q+1}^n \partial l_t / \partial \theta =0$. The matrix of the second derivatives takes the following form:

$\displaystyle \frac{\partial^2 l_t(\theta)}{\partial \theta \partial \theta^\top } = \frac{1}{2\sigma_t^4} \frac{\partial \sigma_t^2}{\partial \theta} \frac{\partial \sigma_t^2}{\partial \theta^\top } - \frac{1}{2\sigma_t^2} \frac{\partial^2 \sigma_t^2(\theta)}{\partial \theta \partial \theta^\top } - \frac{\varepsilon_t^2}{\sigma_t^6}\frac{\partial \sigma_t^2}{\partial \theta} \frac{\partial \sigma_t^2}{\partial \theta^\top } + \frac{\varepsilon_t^2}{2\sigma_t^4} \frac{\partial^2 \sigma_t^2(\theta)}{\partial \theta \partial \theta^\top }.$

(13.24)

Under the conditions

  1. $ \mathop{\text{\rm\sf E}}[Z_t \mid {\cal F}_{t-1}]= 0$ and $ \mathop{\text{\rm\sf E}}[Z_t^2 \mid {\cal F}_{t-1}]= 1$,
  2. strict stationarity of $ \varepsilon_t$

and under some technical conditions the ML estimator is consistent. If in addition it holds that $ \mathop{\text{\rm\sf E}}[Z_t^4 \mid {\cal F}_{t-1}] < \infty$, then $ \hat{\theta}$ is asymptotically normally distributed:

$\displaystyle \sqrt{n}(\hat{\theta}-\theta) \stackrel{{\cal L}}{\longrightarrow} {\text{\rm N}}_{p+q+1} (0, J^{-1} I J^{-1})$

(13.25)


with

$\displaystyle I = \mathop{\text{\rm\sf E}}\left(\frac{\partial l_t(\theta)}{\partial \theta}
\frac{\partial l_t(\theta)}{\partial \theta^\top } \right)
$

and

$\displaystyle J=- \mathop{\text{\rm\sf E}}\left(\frac{\partial^2 l_t(\theta)}{\partial \theta
\partial \theta^{T}} \right).
$

Theorem 13.12 (Equivalence of $ I$ and $ J$)  
If $ Z_t \sim {\text{\rm N}}(0,1)$, then it holds that $ I=J$.

Proof:
Taking expectations of (13.24) one obtains

$\displaystyle \mathop{\text{\rm\sf E}}\left[\frac{\partial^2 l_t(\theta)}{\partial \theta \partial \theta^\top } \,\Big|\, {\cal F}_{t-1}\right] = -\frac{1}{2\sigma_t^4} \frac{\partial \sigma_t^2}{\partial \theta} \frac{\partial \sigma_t^2}{\partial \theta^\top }.
$

For $ I$ we have

$\displaystyle \begin{aligned}
\mathop{\text{\rm\sf E}}\left[\frac{\partial l_t(\theta)}{\partial \theta}\frac{\partial l_t(\theta)}{\partial \theta^\top }\right]
&= \mathop{\text{\rm\sf E}}\left[ \frac{1}{4\sigma_t^4} \frac{\partial \sigma_t^2}{\partial \theta} \frac{\partial \sigma_t^2}{\partial \theta^\top } (Z_t^4-2Z_t^2+1)\right] \\
&= \mathop{\text{\rm\sf E}}\left[ \frac{1}{4\sigma_t^4} \frac{\partial \sigma_t^2}{\partial \theta} \frac{\partial \sigma_t^2}{\partial \theta^\top }\right] \{\mathop{\text{\rm Kurt}}(Z_t \mid {\cal F}_{t-1})-1\}
\end{aligned}$

(13.26)

From the assumption $ Z_t \sim {\text{\rm N}}(0,1)$ it follows that $ \mathop{\text{\rm Kurt}}(Z_t \mid {\cal F}_{t-1})=3$ and thus the claim. $ {\Box}$

If the distribution of $ Z_t$ is specified correctly, then $ I=J$ and the asymptotic variance simplifies to $ J^{-1}$, i.e., the inverse of the Fisher information matrix. If this is not the case and the true distribution is instead leptokurtic, for example, then the maximizer of (13.17) is still consistent but no longer efficient. In this case the ML method is interpreted as the `Quasi Maximum Likelihood' (QML) method.

Consistent estimators for the matrices $ I$ and $ J$ can be obtained by replacing the expectations with the corresponding sample averages.
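A sketch of this, estimating $ I$ and $ J$ by sample averages of numerically differentiated likelihood contributions $ l_t$ for a GARCH(1,1) model and forming the sandwich covariance $ J^{-1} I J^{-1}/n$ (the finite-difference step size and all names are illustrative assumptions):

\begin{verbatim}
import numpy as np

def loglik_terms(theta, eps):
    """Per-observation contributions l_t(theta) for a GARCH(1,1) model."""
    omega, alpha, beta = theta
    sigma2 = np.empty(len(eps))
    sigma2[0] = np.var(eps)
    for t in range(1, len(eps)):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    return -0.5 * (np.log(2 * np.pi) + np.log(sigma2) + eps ** 2 / sigma2)

def qml_sandwich_cov(theta_hat, eps, h=1e-4):
    """Estimate I and J by sample averages and return J^{-1} I J^{-1} / n."""
    theta_hat, eps = np.asarray(theta_hat, float), np.asarray(eps, float)
    k, n = len(theta_hat), len(eps)

    # I: average outer product of numerically differentiated scores
    scores = np.zeros((n, k))
    for i in range(k):
        e = np.zeros(k); e[i] = h
        scores[:, i] = (loglik_terms(theta_hat + e, eps)
                        - loglik_terms(theta_hat - e, eps)) / (2 * h)
    I_hat = scores.T @ scores / n

    # J: negative average Hessian via central second differences
    def mean_loglik(th):
        return np.mean(loglik_terms(th, eps))
    J_hat = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            ei = np.zeros(k); ei[i] = h
            ej = np.zeros(k); ej[j] = h
            J_hat[i, j] = -(mean_loglik(theta_hat + ei + ej) - mean_loglik(theta_hat + ei - ej)
                            - mean_loglik(theta_hat - ei + ej) + mean_loglik(theta_hat - ei - ej)) / (4 * h ** 2)

    J_inv = np.linalg.inv(J_hat)
    return J_inv @ I_hat @ J_inv / n      # approximate covariance of theta_hat
\end{verbatim}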