Suppose that the problem is to estimate $k$ unknown parameters $\theta_1, \theta_2, \dots, \theta_k$ describing the distribution $f_W(w;\theta)$ of the random variable $W$.[1] Suppose the first $k$ moments of the true distribution (the "population moments") can be expressed as functions of the $\theta$s:
$$
\begin{aligned}
\mu_1 &\equiv \operatorname{E}[W] = g_1(\theta_1, \theta_2, \ldots, \theta_k),\\
\mu_2 &\equiv \operatorname{E}[W^2] = g_2(\theta_1, \theta_2, \ldots, \theta_k),\\
&\;\;\vdots\\
\mu_k &\equiv \operatorname{E}[W^k] = g_k(\theta_1, \theta_2, \ldots, \theta_k).
\end{aligned}
$$

Suppose a sample of size $n$ is drawn, and it leads to the values $w_1, \dots, w_n$. For $j = 1, \dots, k$, let
$$
\widehat{\mu}_j = \frac{1}{n} \sum_{i=1}^{n} w_i^j
$$

be the $j$-th sample moment, an estimate of $\mu_j$. The method of moments estimator for $\theta_1, \theta_2, \ldots, \theta_k$, denoted by $\widehat{\theta}_1, \widehat{\theta}_2, \dots, \widehat{\theta}_k$, is defined as the solution (if there is one) to the equations:
$$
\begin{aligned}
\widehat{\mu}_1 &= g_1(\widehat{\theta}_1, \widehat{\theta}_2, \ldots, \widehat{\theta}_k),\\
\widehat{\mu}_2 &= g_2(\widehat{\theta}_1, \widehat{\theta}_2, \ldots, \widehat{\theta}_k),\\
&\;\;\vdots\\
\widehat{\mu}_k &= g_k(\widehat{\theta}_1, \widehat{\theta}_2, \ldots, \widehat{\theta}_k).
\end{aligned}
$$

Reasons to use it
The method of moments is simple to apply and yields consistent estimators under very weak assumptions. However, these estimators are often biased.
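The procedure can be illustrated with a minimal sketch in Python. Here the distribution is assumed, for illustration only, to be a Gamma distribution with unknown shape $\alpha$ and scale $\beta$, so that $\mu_1 = \alpha\beta$ and $\mu_2 = \alpha(\alpha+1)\beta^2$; solving the two moment equations gives $\widehat{\beta} = (\widehat{\mu}_2 - \widehat{\mu}_1^2)/\widehat{\mu}_1$ and $\widehat{\alpha} = \widehat{\mu}_1/\widehat{\beta}$. The true parameter values and sample size are arbitrary choices for the demonstration.

```python
# Method-of-moments sketch for a Gamma(alpha, beta) distribution
# (alpha = shape, beta = scale). The Gamma assumption is illustrative;
# the general method works the same way for any parametric family
# whose moments can be written as functions of the parameters.
import random

random.seed(42)
alpha_true, beta_true = 2.0, 3.0
sample = [random.gammavariate(alpha_true, beta_true) for _ in range(100_000)]

n = len(sample)
mu1_hat = sum(sample) / n                  # first sample moment
mu2_hat = sum(w * w for w in sample) / n   # second sample moment

# Invert the population-moment equations:
#   mu_1 = alpha*beta,  mu_2 = alpha*(alpha+1)*beta^2
beta_hat = (mu2_hat - mu1_hat ** 2) / mu1_hat
alpha_hat = mu1_hat / beta_hat

print(alpha_hat, beta_hat)  # should be close to 2.0 and 3.0
```

Note that the estimator only requires computing sample averages and solving a small system of equations, which is what makes the method attractive compared with, say, maximizing a likelihood numerically.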
[1] K. O. Bowman and L. R. Shenton, "Estimator: Method of Moments", Encyclopedia of Statistical Sciences, Wiley (1998), pp. 2092–2098.