This page is an excerpt and archive of the corresponding Mitbbs (未名空间) thread. Posts less than a week old are truncated to 50 characters; older posts to 500 characters.
Statistics board - Why are shrinkage estimators preferred?
l*********s
Posts: 5409
1
I often read claims that "estimator xxx has desirable features such as shrinkage", blah blah. What is actually so nice about shrinkage? @__@
F****r
Posts: 151
l*********s
Posts: 5409
3
Two questions.
1. "Shrinkage improves MSE because of the extra information provided by constraints." If so, this implies that true regression coefficients are, in general, dispersed around the origin. But the true parameters are unknown to us, and there is nothing special about the origin.
2. The bias-variance trade-off. The shrunken ML estimator of the mean is fairly convincing; however, I wonder whether the trade-off is inherently linked to shrinkage. That is, why not have estimators with reduced variance that are biased away from the origin?

【Quoting F****r】
: Maybe this ?
: http://en.wikipedia.org/wiki/Shrinkage_estimator
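Question 1 above is exactly what the classical James-Stein result addresses: for a p-dimensional normal mean with p ≥ 3, shrinking the MLE toward the origin (or toward any fixed point) lowers total MSE no matter where the true parameter sits. A minimal simulation sketch; the true mean, dimension, and trial count are arbitrary choices for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n_trials = 10, 5000
theta = np.full(p, 1.0)      # true mean vector (arbitrary; try other values)

# One observation X ~ N(theta, I_p) per trial.
X = rng.normal(theta, 1.0, size=(n_trials, p))

mle = X                                         # maximum-likelihood estimate
norms = np.sum(X**2, axis=1, keepdims=True)
js = (1 - (p - 2) / norms) * X                  # James-Stein shrinkage toward 0

mle_mse = np.mean(np.sum((mle - theta) ** 2, axis=1))
js_mse = np.mean(np.sum((js - theta) ** 2, axis=1))
print(mle_mse, js_mse)  # JS total MSE is smaller, for any theta, when p >= 3
```

Moving `theta` far from the origin shrinks the improvement toward zero, but the James-Stein estimator never does worse in expectation, which is why "nothing special about the origin" is not an objection to its dominance.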

D*****a
Posts: 2847
4
It is essentially a Bayesian idea.
There is nothing special about the origin: you can normalize so that the origin represents the expected value of your prior belief.
For example, someone else's experiment suggests the parameter is about 1.5.
You run your own experiments and, ignoring their result, estimate the parameter as 5.
If you want to take their result into account, you shrink your estimate toward 1.5,
so the result lands somewhere between 1.5 and 5.


【Quoting l*********s】
: Two questions.
: 1. "Shrinkage improves MSE because of the extra information provided by constraints." If so, this implies that true regression coefficients are, in general, dispersed around the origin. But the true parameters are unknown to us, and there is nothing special about the origin.
: 2. The bias-variance trade-off. The shrunken ML estimator of the mean is fairly convincing; however, I wonder whether the trade-off is inherently linked to shrinkage. That is, why not have estimators with reduced variance that are biased away from the origin?
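The 1.5-versus-5 example above can be sketched numerically: with a normal prior and a normal likelihood, the posterior mean is a precision-weighted average of the two, so the combined estimate always lands between them. The two variances below are illustrative assumptions, not values from the thread:

```python
# Normal-normal conjugate shrinkage: the posterior mean is a
# precision-weighted average of the prior mean and the data estimate.
prior_mean, prior_var = 1.5, 1.0   # someone else's result (variance assumed)
data_mean, data_var = 5.0, 2.0     # your own estimate (variance assumed)

w = (1 / prior_var) / (1 / prior_var + 1 / data_var)  # weight on the prior
posterior_mean = w * prior_mean + (1 - w) * data_mean
print(posterior_mean)  # lands strictly between 1.5 and 5
```

Making your own data more precise (smaller `data_var`) moves the result toward 5; a stronger prior moves it toward 1.5. Shrinking toward the origin is just the special case where the prior mean is 0.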

l*********s
Posts: 5409
5
Got it. Thanks!

【Quoting D*****a】
: It is essentially a Bayesian idea.
: There is nothing special about the origin: you can normalize so that the origin represents the expected value of your prior belief.
: For example, someone else's experiment suggests the parameter is about 1.5.
: You run your own experiments and, ignoring their result, estimate the parameter as 5.
: If you want to take their result into account, you shrink your estimate toward 1.5,
: so the result lands somewhere between 1.5 and 5.
