January 2014 Theory Problems 4–6.


Problem 4
Part (a)
The joint likelihood function of the random sample is
$$L(\theta)=\prod_{i=1}^n\frac{c}{\theta^c}X_i^{c-1}e^{-(X_i/\theta)^c}=\frac{c^n}{\theta^{nc}}\left(\prod_{i=1}^n X_i\right)^{c-1}e^{-\frac{1}{\theta^c}\sum_{i=1}^n X_i^c}$$
$$l(\theta)=\log L(\theta)=-nc\log\theta+n\log c+(c-1)\sum_{i=1}^n\log X_i-\frac{1}{\theta^c}\sum_{i=1}^n X_i^c$$
Take the derivative of the log-likelihood and set it to zero:
$$l'(\theta)=-\frac{nc}{\theta}+\frac{c}{\theta^{c+1}}\sum_{i=1}^n X_i^c=0$$
Solving this equation gives the MLE of $\theta$:
$$\hat\theta=\left(\frac{1}{n}\sum_{i=1}^n X_i^c\right)^{1/c}$$
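As a quick sanity check (not part of the original exam solution), the closed-form MLE can be verified by simulation. The sketch below assumes arbitrary values $\theta=2$, $c=1.5$; Python's `random.weibullvariate(alpha, beta)` draws from exactly this Weibull density with scale `alpha` and shape `beta`.

```python
import random

# Simulation sketch: verify that theta_hat = (mean of X_i^c)^(1/c)
# recovers the true scale parameter theta for a large sample.
random.seed(42)
theta, c, n = 2.0, 1.5, 100_000

# weibullvariate(scale, shape) matches the density c/theta^c * x^(c-1) * exp(-(x/theta)^c)
X = [random.weibullvariate(theta, c) for _ in range(n)]

theta_hat = (sum(x**c for x in X) / n) ** (1 / c)
print(round(theta_hat, 3))  # should be close to theta = 2.0
```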
Part (b)
Compute the density function of $X_1^c$:
$$P(X_1^c\le y)=P(X_1\le y^{1/c})=F_X(y^{1/c})$$
$$f_{X_1^c}(y)=\frac{1}{c}y^{1/c-1}\cdot\frac{c}{\theta^c}\left(y^{1/c}\right)^{c-1}e^{-(y^{1/c}/\theta)^c}=\theta^{-c}e^{-\theta^{-c}y},\quad y>0$$
Obviously $X_1^c\sim\mathrm{Exp}(\theta^c)$, or equivalently $\Gamma(1,\theta^{-c})$ in the shape–rate parameterization. By the additivity of the Gamma distribution, $\sum_{i=1}^n X_i^c\sim\Gamma(n,\theta^{-c})$. By a scale transformation, $\frac{1}{n}\sum_{i=1}^n X_i^c\sim\Gamma(n,n\theta^{-c})$. Let $Y=\frac{1}{n}\sum_{i=1}^n X_i^c$,
$$\begin{aligned}EY^{1/c}&=\int_0^\infty y^{1/c}\frac{(n\theta^{-c})^n}{\Gamma(n)}y^{n-1}e^{-n\theta^{-c}y}\,dy=\frac{(n\theta^{-c})^n}{\Gamma(n)}\int_0^\infty y^{1/c+n-1}e^{-n\theta^{-c}y}\,dy\\&=\frac{(n\theta^{-c})^{-1/c}}{\Gamma(n)}\int_0^\infty (n\theta^{-c}y)^{1/c+n-1}e^{-n\theta^{-c}y}\,d(n\theta^{-c}y)=\frac{(n\theta^{-c})^{-1/c}\Gamma(1/c+n)}{\Gamma(n)}\\&=\frac{\theta\,\Gamma(1/c+n)}{\Gamma(n)\,n^{1/c}}\quad\Rightarrow\quad E\left(\frac{n^{1/c}\,\Gamma(n)}{\Gamma(1/c+n)}Y^{1/c}\right)=\theta\end{aligned}$$
So we get an unbiased estimator of $\theta$. Looking at the joint likelihood $L(\theta)$, by the Neyman–Fisher factorization theorem, $\sum_{i=1}^n X_i^c$ is a sufficient statistic. Moreover,
$$L(\theta)=c^n\left(\prod_{i=1}^n X_i\right)^{c-1}\exp\left(-\frac{1}{\theta^c}\sum_{i=1}^n X_i^c-nc\log\theta\right)$$
indicating that the model belongs to an exponential family, so $\sum_{i=1}^n X_i^c$ is also complete. Since the unbiased estimator above is a function of $\sum_{i=1}^n X_i^c$, by the Lehmann–Scheffé theorem, $\frac{n^{1/c}\Gamma(n)}{\Gamma(1/c+n)}Y^{1/c}$ is the UMVUE of $\theta$.
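The unbiasedness claim can be checked numerically even at small $n$ (the values $\theta=1.5$, $c=2$, $n=5$ below are arbitrary). This sketch, not part of the original solution, averages the estimator over many replications, computing the Gamma-function constant via `math.lgamma` for stability.

```python
import random
import math

# Simulation sketch: the estimator n^(1/c) * Gamma(n) / Gamma(1/c + n) * Y^(1/c),
# with Y the sample mean of X_i^c, should be unbiased for theta.
random.seed(0)
theta, c, n, reps = 1.5, 2.0, 5, 50_000

# log of the constant n^(1/c) * Gamma(n) / Gamma(1/c + n)
log_const = math.log(n) / c + math.lgamma(n) - math.lgamma(1 / c + n)

estimates = []
for _ in range(reps):
    X = [random.weibullvariate(theta, c) for _ in range(n)]
    Y = sum(x**c for x in X) / n
    estimates.append(math.exp(log_const) * Y ** (1 / c))

print(round(sum(estimates) / reps, 3))  # should be close to theta = 1.5
```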
Part (c)
For arbitrary $\theta_1\le\theta_0\le\theta_2$, let us compute the likelihood ratio
$$\frac{L(\theta_1)}{L(\theta_2)}=\frac{\frac{c^n}{\theta_1^{nc}}\left(\prod_{i=1}^n X_i\right)^{c-1}e^{-\frac{1}{\theta_1^c}\sum_{i=1}^n X_i^c}}{\frac{c^n}{\theta_2^{nc}}\left(\prod_{i=1}^n X_i\right)^{c-1}e^{-\frac{1}{\theta_2^c}\sum_{i=1}^n X_i^c}}=\left(\frac{\theta_2}{\theta_1}\right)^{nc}\exp\left(\left(\frac{1}{\theta_2^c}-\frac{1}{\theta_1^c}\right)\sum_{i=1}^n X_i^c\right)$$
where $1/\theta_2^c-1/\theta_1^c<0$. To make this likelihood ratio smaller than some constant, $\sum_{i=1}^n X_i^c$ must exceed some $k_\alpha$ such that
$$P\left(\sum_{i=1}^n X_i^c>k_\alpha\right)=\alpha$$
We showed above that $\sum_{i=1}^n X_i^c\sim\Gamma(n,\theta^{-c})$, so under $\theta=\theta_0$, $2\sum_{i=1}^n X_i^c/\theta_0^c\sim\chi^2_{2n}$. Let $k_\alpha$ be the upper $\alpha$-quantile of $\chi^2_{2n}$. Then the rejection region is $\left\{X_1,X_2,\cdots,X_n:2\sum_{i=1}^n X_i^c/\theta_0^c>k_\alpha\right\}$.
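The $\chi^2_{2n}$ claim can be sanity-checked without quantile tables: a chi-square with $2n$ degrees of freedom has mean $2n$ and variance $4n$, and both moments can be matched by simulation. The sketch below (not from the original solution; $\theta_0=1.2$, $c=0.8$, $n=4$ are arbitrary) does exactly that.

```python
import random

# Simulation sketch: under theta = theta0, the statistic 2 * sum(X_i^c) / theta0^c
# should be chi-square with 2n df, i.e. mean 2n and variance 4n.
random.seed(1)
theta0, c, n, reps = 1.2, 0.8, 4, 100_000

stats = []
for _ in range(reps):
    X = [random.weibullvariate(theta0, c) for _ in range(n)]
    stats.append(2 * sum(x**c for x in X) / theta0**c)

mean = sum(stats) / reps
var = sum((s - mean) ** 2 for s in stats) / reps
print(round(mean, 2), round(var, 2))  # mean near 2n = 8, variance near 4n = 16
```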
Problem 5
Part (a)
The joint likelihood function of the random sample is
$$L(\theta)=\prod_{i=1}^n\theta^{-1}X_i^{(1-\theta)/\theta}=\theta^{-n}\exp\left(\frac{\theta-1}{2\theta}\left[-2\sum_{i=1}^n\log X_i\right]\right)$$
By the Neyman–Fisher factorization theorem, $T(X)=-2\sum_{i=1}^n\log X_i$ is a sufficient statistic. For two different random samples $X$ and $Y$,
$$\frac{L(\theta\mid X)}{L(\theta\mid Y)}=\frac{\theta^{-n}\exp\left(\frac{\theta-1}{2\theta}\left[-2\sum_{i=1}^n\log X_i\right]\right)}{\theta^{-n}\exp\left(\frac{\theta-1}{2\theta}\left[-2\sum_{i=1}^n\log Y_i\right]\right)}=\exp\left(\frac{\theta-1}{2\theta}\left[2\sum_{i=1}^n\log Y_i-2\sum_{i=1}^n\log X_i\right]\right)$$
For this ratio to be free of $\theta$, $T(X)=T(Y)$ must hold. So $T(X)$ is a minimal sufficient statistic.
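The "free of $\theta$" condition can be illustrated with a tiny numerical example (my own, not from the exam): two samples chosen so that their log-sums coincide give a log-likelihood difference of zero at every $\theta$.

```python
import math

# Illustration sketch: samples with equal T = -2 * sum(log X_i) give a
# likelihood ratio that is constant in theta. Density on (0,1):
# f(x; theta) = (1/theta) * x^((1-theta)/theta)
def log_lik(sample, theta):
    return sum(-math.log(theta) + (1 - theta) / theta * math.log(x) for x in sample)

X = [0.2, 0.5]   # log(0.2) + log(0.5) = log(0.1)
Y = [0.1, 1.0]   # log(0.1) + log(1.0) = log(0.1)  -> T(X) = T(Y)

ratios = [log_lik(X, t) - log_lik(Y, t) for t in (0.3, 0.7, 1.5, 4.0)]
print([round(r, 12) for r in ratios])  # numerically zero for every theta
```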
Part (b)
Compute
Let $Y=-2\log X_1$. Then
$$P(Y\le y)=P(-2\log X_1\le y)=P(X_1\ge e^{-y/2})=1-F_X(e^{-y/2})$$
$$f_Y(y)=\frac{1}{2}e^{-y/2}f_X(e^{-y/2})=\frac{1}{2}e^{-y/2}\,\theta^{-1}e^{-y(1-\theta)/2\theta}=\frac{1}{2\theta}e^{-y/2\theta},\quad y>0$$
So $Y\sim\mathrm{Exp}(2\theta)$, or equivalently $\Gamma(1,1/2\theta)$ in the shape–rate parameterization.
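This distributional claim is easy to verify by simulation (a sketch of mine, not part of the solution): by inversion, $X=U^\theta$ with $U\sim\mathrm{Uniform}(0,1)$ has the stated density, so $-2\log X$ should average $2\theta$.

```python
import random
import math

# Simulation sketch: if X = U^theta with U ~ Uniform(0,1), then X has density
# (1/theta) * x^((1-theta)/theta) on (0,1), and Y = -2 log X is Exp with mean 2*theta.
random.seed(7)
theta, n = 1.7, 200_000

Y = [-2 * math.log(random.random() ** theta) for _ in range(n)]
print(round(sum(Y) / n, 2))  # should be close to 2*theta = 3.4
```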
Part (c)
By the additivity of the Gamma distribution, $T(X)\sim\Gamma(n,1/2\theta)$. By a scale transformation, $T(X)/2\theta\sim\Gamma(n,1)$. Let $L$ be the 2.5% quantile and $U$ the 97.5% quantile of $\Gamma(n,1)$; then the 95% confidence interval satisfies
$$L\le T(X)/2\theta\le U\quad\Rightarrow\quad\frac{T(X)}{2U}\le\theta\le\frac{T(X)}{2L}$$
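A coverage check of this interval (my own sketch; $n=5$, $\theta=2$ are arbitrary) needs only $\Gamma(n,1)$ quantiles, which can be estimated empirically as sums of $n$ unit exponentials:

```python
import random

# Simulation sketch: estimate the 2.5% / 97.5% quantiles of Gamma(n, 1),
# then verify that [T/(2U), T/(2L)] covers theta about 95% of the time.
random.seed(3)
n, theta = 5, 2.0

# empirical quantiles of Gamma(n, 1) = sum of n Exp(1) draws
draws = sorted(sum(random.expovariate(1.0) for _ in range(n)) for _ in range(200_000))
L, U = draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))]

reps, covered = 20_000, 0
for _ in range(reps):
    T = 2 * theta * sum(random.expovariate(1.0) for _ in range(n))  # T(X) ~ 2*theta*Gamma(n,1)
    if T / (2 * U) <= theta <= T / (2 * L):
        covered += 1
print(round(covered / reps, 3))  # should be close to 0.95
```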
Part (d)
The expected length of the confidence interval is
$$E\left[T(X)\left(\frac{1}{2L}-\frac{1}{2U}\right)\right]=\frac{n\theta(U-L)}{LU}$$
(I can think of a different approach; I will attach that solution here.)

(See UA MATH564 Probability Theory VI, Foundations of Mathematical Statistics 3: the normal approximation to the chi-square distribution)
Problem 6

Part (a)
The joint likelihood of the random sample is
$$L(\beta)=\prod_{i=1}^n\frac{(x_i\beta)^{Y_i}}{Y_i!}e^{-x_i\beta}=\left(\prod_{i=1}^n\frac{x_i^{Y_i}}{Y_i!}\right)\beta^{n\bar Y}e^{-n\beta\bar x}$$
$$l(\beta)=\log\left(\prod_{i=1}^n\frac{x_i^{Y_i}}{Y_i!}\right)+n\bar Y\log\beta-n\bar x\beta$$
The MLE of $\beta$ is given by $\arg\max l(\beta)$, obtained from
$$l'(\beta)=\frac{n\bar Y}{\beta}-n\bar x=0\quad\Rightarrow\quad\hat\beta=\frac{\bar Y}{\bar x}$$
Part (b)
$$E\hat\beta=\frac{1}{\bar x}E\bar Y=\frac{1}{n\bar x}\sum_{i=1}^n EY_i=\frac{\beta\sum_{i=1}^n x_i}{n\bar x}=\beta$$
$$\mathrm{Var}(\hat\beta)=\frac{1}{n^2\bar x^2}\sum_{i=1}^n\mathrm{Var}\,Y_i=\frac{\beta\sum_{i=1}^n x_i}{n^2\bar x^2}=\frac{\beta}{n\bar x}$$
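Both moments of $\hat\beta$ can be confirmed by simulation. This is a sketch of mine (the covariate values and $\beta=1.5$ are made up), using a small Knuth-style Poisson sampler since the Python standard library has none.

```python
import random
import math

# Simulation sketch: Y_i ~ Poisson(x_i * beta); check that beta_hat = Ybar / xbar
# has mean beta and variance beta / (n * xbar).
random.seed(11)

def poisson(lam):
    # Knuth's method, adequate for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

beta = 1.5
x = [0.5, 1.0, 1.5, 2.0, 2.5]
n, xbar = len(x), sum(x) / len(x)

reps = 50_000
estimates = [sum(poisson(xi * beta) for xi in x) / (n * xbar) for _ in range(reps)]
mean = sum(estimates) / reps
var = sum((b - mean) ** 2 for b in estimates) / reps
print(round(mean, 3), round(var, 3))  # mean near beta = 1.5, var near beta/(n*xbar) = 0.2
```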
Part (c)
The posterior kernel of $\beta$ is
$$\pi(\beta\mid Y)\propto L(\beta)\pi(\beta\mid w,b_0)\propto\beta^{n\bar Y}e^{-n\beta\bar x}\cdot\beta^{wb_0-1}e^{-w\beta}=\beta^{n\bar Y+wb_0-1}e^{-\beta(w+n\bar x)}$$
This is the kernel of $\Gamma\left(n\bar Y+wb_0,\frac{1}{w+n\bar x}\right)$ in the shape–scale parameterization. So the posterior density of $\beta$ is
$$\pi(\beta\mid Y)=\frac{(w+n\bar x)^{n\bar Y+wb_0}}{\Gamma(n\bar Y+wb_0)}\beta^{n\bar Y+wb_0-1}e^{-\beta(w+n\bar x)},\quad\beta>0$$
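The conjugacy computation can be double-checked numerically (a sketch of mine with made-up data and hyperparameters): normalizing the unnormalized kernel on a grid should reproduce the closed-form Gamma density.

```python
import math

# Numeric sketch: normalize exp(log prior + log likelihood kernel) on a grid
# and compare to the Gamma(n*Ybar + w*b0, 1/(w + n*xbar)) posterior density.
w, b0 = 2.0, 1.0
x = [0.5, 1.0, 1.5]
Y = [1, 2, 4]
n, xbar, Ybar = len(x), sum(x) / len(x), sum(Y) / len(Y)

shape, rate = n * Ybar + w * b0, w + n * xbar

def log_kernel(beta):
    # beta^(n*Ybar + w*b0 - 1) * exp(-beta * (w + n*xbar))
    return (shape - 1) * math.log(beta) - rate * beta

def gamma_pdf(beta):
    return math.exp(shape * math.log(rate) - math.lgamma(shape) + log_kernel(beta))

h = 0.001
grid = [h * i for i in range(1, 20_000)]
norm = sum(math.exp(log_kernel(b)) for b in grid) * h
max_err = max(abs(math.exp(log_kernel(b)) / norm - gamma_pdf(b)) for b in grid)
print(max_err < 1e-3)  # the grid-normalized kernel matches the Gamma density
```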
Part (d)
The posterior mean of $\beta$ is
$$E[\beta\mid Y]=\frac{n\bar Y+wb_0}{w+n\bar x}=\frac{n\bar x}{w+n\bar x}\hat\beta+\frac{w}{w+n\bar x}b_0$$
which is exactly a weighted average of $\hat\beta$ and $b_0$. When $w\to0$, $E[\beta\mid Y]\to\hat\beta$.
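The weighted-average identity and the $w\to0$ limit are pure arithmetic and can be checked directly (numbers below are made up for illustration):

```python
# Arithmetic sketch: posterior mean equals the weighted average of beta_hat
# and b0, and tends to beta_hat as the prior weight w -> 0.
b0 = 3.0
x = [0.5, 1.0, 1.5]
Y = [1, 2, 4]
n, xbar, Ybar = len(x), sum(x) / len(x), sum(Y) / len(Y)
beta_hat = Ybar / xbar

for w in (10.0, 1.0, 0.01):
    post_mean = (n * Ybar + w * b0) / (w + n * xbar)
    weighted = n * xbar / (w + n * xbar) * beta_hat + w / (w + n * xbar) * b0
    print(w, round(post_mean, 4), round(weighted, 4))
```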