
Maximum Likelihood Estimation

The likelihood of the observed data $\mathbb{D}$ under parameter $\theta$: $f(\mathbb{D} \mid \theta)$

$\hat{\theta}_\text{ML} = \operatorname{argmax}_{\theta}\, g(x \mid \theta) = \operatorname{argmax}_{\theta}\, \ln g(x \mid \theta)$

$\hat{\theta}_\text{ML} = \operatorname{argmax}_{\theta}\, g(x_1,\, x_2,\, \cdots,\, x_n \mid \theta) = \operatorname{argmax}_{\theta} \prod\limits_{k=1}^{n} g(x_k \mid \theta)$ (i.i.d. / random sample) $= \operatorname{argmax}_{\theta} \sum\limits_{k=1}^{n} \ln g(x_k \mid \theta)$
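As a sketch of this step, the product-to-sum trick can be checked numerically. The Bernoulli model, sample size, and grid below are illustrative assumptions, not from the notes: maximizing the summed log-likelihood over a grid should recover the closed-form Bernoulli MLE $\hat p = \overline{x}_n$.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.binomial(1, 0.3, size=1000)  # hypothetical Bernoulli(0.3) sample

# Log-likelihood of i.i.d. data: sum_k ln g(x_k | p)
def log_likelihood(p):
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

# Maximize over a coarse grid of candidate parameters
grid = np.linspace(0.001, 0.999, 999)
p_hat_grid = grid[np.argmax([log_likelihood(p) for p in grid])]

p_hat_closed = x.mean()  # closed-form MLE for the Bernoulli parameter
print(abs(p_hat_grid - p_hat_closed) < 1e-2)
```

The grid search and the closed form agree up to the grid spacing, which is the point: the argmax of the log-likelihood sum is the MLE.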

Set the first derivative of the log-likelihood to zero: $\left.\dfrac{\partial L}{\partial \theta}\right|_{\theta = \hat\theta_\text{ML}} = 0$

$\therefore$ Check the second-order condition $\left.\dfrac{\partial^2 L}{\partial \theta^2}\right|_{\theta = \hat\theta_\text{ML}} < 0$ to confirm the stationary point is a maximum.
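Both conditions can be verified with finite differences. The normal-mean model with known $\sigma$ below is an illustrative assumption, chosen because its MLE $\hat\mu = \overline{x}_n$ has a closed form to check against.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 2.0
x = rng.normal(5.0, sigma, size=500)  # hypothetical N(5, 4) sample

# Log-likelihood L(mu) of i.i.d. N(mu, sigma^2) data, sigma treated as known
def L(mu):
    return np.sum(-0.5 * ((x - mu) / sigma) ** 2
                  - np.log(sigma * np.sqrt(2 * np.pi)))

mu_hat = x.mean()  # closed-form MLE of the mean
h = 1e-4

# First derivative at the MLE should vanish (first-order condition)
dL = (L(mu_hat + h) - L(mu_hat - h)) / (2 * h)
# Second derivative should be negative (second-order condition); here -n/sigma^2
d2L = (L(mu_hat + h) - 2 * L(mu_hat) + L(mu_hat - h)) / h**2

print(abs(dL) < 1e-3, d2L < 0)
```

For this model the second derivative is $-n/\sigma^2 = -125$ everywhere, so the log-likelihood is concave and the stationary point is the global maximum.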

Invariance: $\widehat{h(\theta)}_\text{ML} = h(\hat\theta_\text{ML})$

Example: given $x_1, \cdots, x_n \sim \text{Geometric}(p)$, find $\hat{\sigma^2}_\text{ML}$.

$\widehat{\left(\dfrac{1}{p}\right)} = \overline{x}_n \Rightarrow \hat{p} = \dfrac{1}{\overline{x}_n}$

By invariance, plug $\hat p$ into $\sigma^2 = \dfrac{q}{p^2} = \dfrac{1-p}{p^2}$:

$\hat{\sigma^2}_\text{ML} = \dfrac{1 - \hat p}{\hat p^2} = \dfrac{1 - \dfrac{1}{\overline{x}_n}}{\left(\dfrac{1}{\overline{x}_n}\right)^2} = \overline{x}_n\left(\overline{x}_n - 1\right)$
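A quick numerical sanity check of this example (the true $p = 0.25$ and the sample size are made-up values for illustration). NumPy's `Generator.geometric` uses the same support $\{1, 2, \cdots\}$ with mean $1/p$:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.geometric(0.25, size=2000)  # hypothetical Geometric(0.25) sample

p_hat = 1 / x.mean()  # MLE of p via the sample mean

# Invariance: plug p_hat into the variance formula (1 - p) / p^2 ...
var_hat = (1 - p_hat) / p_hat**2
# ... which simplifies algebraically to x_bar (x_bar - 1)
var_hat_simplified = x.mean() * (x.mean() - 1)

print(np.isclose(var_hat, var_hat_simplified))
```

The two expressions agree exactly; the simplified form just avoids computing $\hat p$ as an intermediate step.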

Maximum likelihood picks the parameter whose PDF assigns the highest probability (density) to the observed data.

Write the maximum-likelihood estimate as $\hat\theta$.

$\hat\theta_\text{ML} = \operatorname{argmax}_{\theta}\, f(x_1, x_2, \cdots, x_n \mid \theta) = \operatorname{argmax}_{\theta}\, \ln f(x_1, x_2, \cdots, x_n \mid \theta) = \operatorname{argmax}_{\theta}\, L$

Assuming the samples are i.i.d.:

$L = \ln \prod_{k=1}^{n} f(x_k \mid \theta) = \sum_{k=1}^{n} \ln f(x_k \mid \theta), \quad \hat\theta_\text{ML} = \operatorname{argmax}_{\theta} \sum_{k=1}^{n} \ln f(x_k \mid \theta)$

Properties of the Maximum Likelihood Estimator

  1. Consistent (converges in probability to the true parameter)
  2. Asymptotically normal
  3. Invariance principle: $\widehat{g(\theta)}_\text{ML} = g(\hat\theta_\text{ML})$
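The first two properties can be illustrated with a small simulation. The Geometric(0.4) model, $n = 400$, and 2000 replications below are arbitrary choices for the sketch: standardizing $\hat p$ by the Fisher information $I(p) = 1/(p^2(1-p))$ of the geometric distribution should yield roughly standard-normal values.

```python
import numpy as np

rng = np.random.default_rng(3)
p, n, reps = 0.4, 400, 2000

# MLE p_hat = 1 / x_bar, recomputed on each replication
p_hats = np.array([1 / rng.geometric(p, size=n).mean() for _ in range(reps)])

# Asymptotic normality: sqrt(n) (p_hat - p) ~ N(0, 1 / I(p)),
# with I(p) = 1 / (p^2 (1 - p)) for Geometric(p)
z = np.sqrt(n) * (p_hats - p) / (p * np.sqrt(1 - p))

print(abs(z.mean()) < 0.15, 0.85 < z.std() < 1.15)
```

The standardized estimates have mean near 0 and standard deviation near 1, matching the asymptotic normal approximation; shrinking spread as $n$ grows is the consistency claim.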