6.1 Maximum Likelihood Estimation

Assumptions 6.1.1: Regularity conditions

  • (R0): The cdfs are distinct; i.e., $\theta \neq \theta' \implies F(x_i; \theta) \neq F(x_i; \theta')$
  • (R1): The pdfs have common support for all $\theta$
  • (R2): The point $\theta_0$ is an interior point in $\Omega$

TODO: What is an interior point? #TODO

Theorem 6.1.1

Assume

  • $\theta_0$: True parameter
  • $E_{\theta_0}\left[\frac{f(X_i; \theta)}{f(X_i; \theta_0)}\right]$ exists

Then, under assumptions (R0) and (R1),

$$\lim_{n \to \infty} P_{\theta_0}\left[L(\theta_0, \boldsymbol{X}) > L(\theta, \boldsymbol{X})\right] = 1, \quad \text{for all } \theta \neq \theta_0$$
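
A minimal numerical sketch of this result (assuming, purely for illustration, a $N(\theta_0, 1)$ sample and a fixed competing value $\theta \neq \theta_0$; the model, the values, and the helper `log_lik` are not part of the theorem): the fraction of simulated samples for which $L(\theta_0; \boldsymbol{x}) > L(\theta; \boldsymbol{x})$ should approach 1 as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)
theta0, theta, sigma = 0.0, 0.5, 1.0   # true value, competing value, known sd (all illustrative)

def log_lik(t, x):
    """Log-likelihood of a N(t, sigma^2) sample x, with sigma known."""
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - (x - t) ** 2 / (2 * sigma**2))

reps = 2000
for n in [5, 20, 100, 500]:
    wins = 0
    for _ in range(reps):
        x = rng.normal(theta0, sigma, size=n)
        if log_lik(theta0, x) > log_lik(theta, x):
            wins += 1
    print(f"n={n:4d}  estimated P[L(theta0) > L(theta)] = {wins / reps:.3f}")
```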

Definition 6.1.1: Maximum Likelihood Estimator

Let $\hat{\theta} = \hat{\theta}(\boldsymbol{X})$ be a statistic based on the random sample $X_1, \ldots, X_n$ with likelihood $L(\theta; \boldsymbol{x})$, $\theta \in \Omega$.

If $\hat{\theta} = \underset{\theta \in \Omega}{\operatorname{Argmax}}\, L(\theta; \boldsymbol{X})$,

Then $\hat{\theta}$ is a maximum likelihood estimator (mle) of $\theta$.

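A minimal sketch of this definition (assuming, just for illustration, a $N(\theta, 1)$ sample and a brute-force grid search; the definition itself says nothing about how the Argmax is computed): evaluate $\log L(\theta; \boldsymbol{x})$ over a grid of candidate $\theta$ values and take the maximizer.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(2.0, 1.0, size=50)      # sample from N(2, 1); 2.0 is the (illustrative) true theta

grid = np.linspace(0.0, 4.0, 2001)     # candidate theta values
# log L(theta; x) up to an additive constant (the constant does not affect the Argmax)
log_lik = np.array([-0.5 * np.sum((x - t) ** 2) for t in grid])

theta_hat = grid[np.argmax(log_lik)]   # Argmax over the grid
print(theta_hat, x.mean())             # close to the sample mean, as the exercise below derives
```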

Theorem 6.1.2

Let

  • $X_1, \ldots, X_n$: Random sample, with
    • pdf $f(x; \theta)$, $\theta \in \Omega$
  • Let $\eta = g(\theta)$: Parameter of interest, for a specified function $g$

If $\hat{\theta}$ is the mle of $\theta$,

Then $g(\hat{\theta})$ is the mle of $\eta = g(\theta)$.
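
For example (one possible illustration, using the normal model from the exercises below): if $\hat{\theta} = \bar{X}$ is the mle of the mean $\theta$ of a $N(\theta, \sigma^2)$ sample, then by Theorem 6.1.2 the mle of $\eta = g(\theta) = \theta^2$ is $g(\hat{\theta}) = \bar{X}^2$, even though $\bar{X}^2$ is not unbiased for $\theta^2$ (its expectation is $\theta^2 + \sigma^2/n$).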

Theorem 6.1.3

Assume

  • $X_1, \ldots, X_n$ satisfy all regularity conditions (R0)–(R2)
  • $\theta_0$: True parameter
  • $f(x; \theta)$: differentiable with respect to $\theta$ in $\Omega$

Then the likelihood equation $\frac{\partial}{\partial \theta} L(\theta) = 0$, or equivalently $\frac{\partial}{\partial \theta} \ln L(\theta) = 0$, has a solution $\hat{\theta}_n$ such that $\hat{\theta}_n \xrightarrow{P} \theta_0$.

Corollary 6.1.1

Assume

  • $X_1, \ldots, X_n$ satisfy all regularity conditions (R0)–(R2)
  • $\theta_0$: True parameter
  • $f(x; \theta)$: differentiable with respect to $\theta$ in $\Omega$

If the likelihood equation has the unique solution $\hat{\theta}_n$,

Then $\hat{\theta}_n$ is a consistent estimator of $\theta_0$.
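
A minimal numerical sketch of this consistency statement (assuming, purely for illustration, a $N(\theta_0, 1)$ model, for which the unique root of the likelihood equation is $\bar{x}$; the root-finder and the helper `score` are just stand-ins for models without a closed-form solution): the solution of $\frac{\partial}{\partial \theta}\ln L(\theta) = 0$ should approach $\theta_0$ as $n$ grows.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(2)
theta0 = 1.5                             # true parameter (illustrative)

def score(theta, x):
    """d/dtheta of ln L(theta; x) for a N(theta, 1) sample."""
    return np.sum(x - theta)

for n in [10, 100, 1000, 10000]:
    x = rng.normal(theta0, 1.0, size=n)
    # the score is strictly decreasing in theta, so bracket its root widely and solve
    theta_hat = brentq(score, -10.0, 10.0, args=(x,))
    print(f"n={n:6d}  theta_hat = {theta_hat:.4f}   (theta0 = {theta0})")
```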

Exercise

MLE on normal distribution

Let

  • $X_1, X_2, \ldots, X_n$: Random sample, with
    • Normal distribution $N(\theta, \sigma^2)$
    • Unknown mean $\theta$
    • Known variance $\sigma^2$.

Find the maximum likelihood estimator (MLE) of $\theta$.

Answer

The pdf of $X_i$ is:

$$f(x_i; \theta) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x_i - \theta)^2}{2\sigma^2}}$$

The likelihood function is:

$$L(\theta) = \prod_{i=1}^n \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x_i - \theta)^2}{2\sigma^2}} = \frac{1}{(2\pi\sigma^2)^{n/2}} e^{-\frac{1}{2\sigma^2}\sum_{i=1}^n(x_i - \theta)^2}$$

Taking the logarithm:

$$\ln L(\theta) = -\frac{n}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^n(x_i - \theta)^2$$

Solving $\frac{d}{d\theta}\ln L(\theta) = 0$:

$$\begin{align} \frac{d}{d\theta}\ln L(\theta) & = \frac{1}{\sigma^2}\sum_{i=1}^n(x_i - \theta) \\ 0 & = \sum_{i=1}^n x_i - n\theta \\ \hat{\theta} & = \frac{1}{n}\sum_{i=1}^n x_i = \bar{x} \end{align}$$

The maximum likelihood estimator of $\theta$ is:

$$\hat{\theta} = \frac{1}{n}\sum_{i=1}^n X_i = \bar{X}$$

This shows that the sample mean is the MLE of the population mean for a normal distribution.
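
A quick numerical check of this result (a sketch, assuming an illustrative sample from $N(3, 4)$, i.e. $\theta = 3$ and known $\sigma = 2$; the helper `neg_log_lik` is mine): minimizing the negative log-likelihood over $\theta$ should reproduce $\bar{x}$.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
sigma = 2.0                              # known standard deviation (illustrative)
x = rng.normal(3.0, sigma, size=200)     # illustrative sample; true theta = 3

def neg_log_lik(theta):
    """-ln L(theta) for a N(theta, sigma^2) sample with sigma known."""
    return 0.5 * len(x) * np.log(2 * np.pi * sigma**2) + np.sum((x - theta) ** 2) / (2 * sigma**2)

res = minimize_scalar(neg_log_lik)
print(res.x, x.mean())                   # the numerical maximizer agrees with the sample mean
```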

MLE on normal distribution with multiple parameters

Let

  • $X_1, X_2, \ldots, X_n$: Random sample, with
    • Normal distribution $N(\theta_1, \theta_2)$
    • Unknown mean $\theta_1$
    • Unknown variance $\theta_2$

Find the maximum likelihood estimators (MLE) of $\theta_1$ and $\theta_2$.

Answer

The pdf of $X_i$ is:

$$f(x_i; \theta_1, \theta_2) = \frac{1}{\sqrt{2\pi\theta_2}} e^{-\frac{(x_i - \theta_1)^2}{2\theta_2}}$$

The likelihood function is:

$$\begin{align} L(\theta_1, \theta_2) & = \prod_{i=1}^n \frac{1}{\sqrt{2\pi\theta_2}} e^{-\frac{1}{2\theta_2}(x_i - \theta_1)^2} \\ & = \frac{1}{(2\pi\theta_2)^{n/2}} e^{-\frac{1}{2\theta_2}\sum_{i=1}^n(x_i - \theta_1)^2} \end{align}$$

Taking the logarithm:

$$\ln L(\theta_1, \theta_2) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln(\theta_2) - \frac{1}{2\theta_2}\sum_{i=1}^n(x_i - \theta_1)^2$$

Finding MLE for $\theta_1$ (mean):

Solving $\frac{\partial}{\partial \theta_1}\ln L(\theta_1, \theta_2)=0$:

$$\begin{align} \frac{\partial}{\partial \theta_1} \ln L(\theta_1, \theta_2) & = \frac{\partial}{\partial \theta_1}\left[-\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln(\theta_2) - \frac{1}{2\theta_2}\sum_{i=1}^n(x_i - \theta_1)^2\right] \\ 0 & = 0 - 0 - \frac{1}{2\theta_2}\sum_{i=1}^n \frac{\partial}{\partial \theta_1}(x_i - \theta_1)^2 \\ 0 & = -\frac{1}{2\theta_2}\sum_{i=1}^n 2(x_i - \theta_1)(-1) \\ 0 & = \frac{1}{\theta_2}\sum_{i=1}^n (x_i - \theta_1) \\ 0 & = \sum_{i=1}^n x_i - n\theta_1 \\ n\theta_1 & = \sum_{i=1}^n x_i \\ \hat{\theta_1} & = \frac{1}{n}\sum_{i=1}^n x_i = \bar{x} \end{align}$$

Finding MLE for $\theta_2$ (variance):

Solving $\frac{\partial}{\partial \theta_2}\ln L(\theta_1, \theta_2)=0$:

$$\begin{align} \frac{\partial}{\partial \theta_2} \ln L(\theta_1, \theta_2) & = \frac{\partial}{\partial \theta_2}\left[-\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln(\theta_2) - \frac{1}{2\theta_2}\sum_{i=1}^n(x_i - \theta_1)^2\right] \\ 0 & = 0 - \frac{n}{2} \cdot \frac{1}{\theta_2} - \sum_{i=1}^n(x_i - \theta_1)^2 \cdot \frac{\partial}{\partial \theta_2}\left(\frac{1}{2\theta_2}\right) \\ 0 & = -\frac{n}{2\theta_2} - \sum_{i=1}^n(x_i - \theta_1)^2 \cdot \left(-\frac{1}{2\theta_2^2}\right) \\ 0 & = -\frac{n}{2\theta_2} + \frac{1}{2\theta_2^2}\sum_{i=1}^n(x_i - \theta_1)^2 \\ \frac{n}{2\theta_2} & = \frac{1}{2\theta_2^2}\sum_{i=1}^n(x_i - \theta_1)^2 \\ n\theta_2 & = \sum_{i=1}^n(x_i - \theta_1)^2 \\ \hat{\theta_2} & = \frac{1}{n}\sum_{i=1}^n(x_i - \theta_1)^2 \end{align}$$

Substituting $\hat{\theta_1} = \bar{x}$:

$$\hat{\theta_2} = \frac{1}{n}\sum_{i=1}^n(x_i - \bar{x})^2$$

$\therefore$ The maximum likelihood estimators are:

$$\begin{align} \hat{\theta_1} & = \frac{1}{n}\sum_{i=1}^n X_i && = \bar{X} \\ \hat{\theta_2} & = \frac{1}{n}\sum_{i=1}^n(X_i - \bar{X})^2 && = s^2 \end{align}$$

Note that $\hat{\theta_2}$ is the biased sample variance (dividing by $n$ instead of $n-1$).
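
A numerical check of both estimators (a sketch, assuming an illustrative sample from $N(5, 9)$, i.e. $\theta_1 = 5$ and $\theta_2 = 9$; the helper `neg_log_lik` is mine): maximizing the log-likelihood jointly over $(\theta_1, \theta_2)$ should reproduce the sample mean and the divisor-$n$ (biased) sample variance.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
x = rng.normal(5.0, 3.0, size=500)       # illustrative sample; true theta1 = 5, theta2 = 9

def neg_log_lik(params):
    """-ln L(theta1, theta2) for a normal sample; theta2 is the variance."""
    theta1, theta2 = params
    if theta2 <= 0:                       # keep the optimizer inside the parameter space
        return np.inf
    return 0.5 * len(x) * np.log(2 * np.pi * theta2) + np.sum((x - theta1) ** 2) / (2 * theta2)

res = minimize(neg_log_lik, x0=[0.0, 1.0], method="Nelder-Mead",
               options={"maxiter": 10000, "maxfev": 10000})
theta1_hat, theta2_hat = res.x
print(theta1_hat, x.mean())                       # agrees with the sample mean
print(theta2_hat, np.mean((x - x.mean()) ** 2))   # agrees with the divisor-n variance
```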