l*********s posts: 5409 | 1 I often read claims that "xxx regression estimator has desirable features
such as shrinkage" and so on. What is so nice about shrinkage anyway? @__@ | F****r posts: 151 | | l*********s posts: 5409 | 3 Two questions.
1. "Shrinkage improves MSE because of the extra information provided as
constraints." If so, this implies that the true regression coefficients
generally cluster around the origin. But the true parameters are not known to us,
nor is there anything special about the origin.
2. Bias-variance trade-off. The example of the ML estimator of the mean is fairly
convincing, but I wonder whether this is inherently tied to shrinkage. I mean,
why not have a reduced-variance estimator that is biased away from the origin
instead?
【Quoting F****r's post】 : Maybe this? : http://en.wikipedia.org/wiki/Shrinkage_estimator
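The MSE claim in question 1 can be checked numerically. Below is a minimal Python sketch (all numbers are made up for illustration, and the "optimal" shrinkage factor uses the true mean, which a real estimator would not know) showing that multiplying the sample mean by a factor c < 1 adds bias but removes enough variance to lower the overall MSE:

```python
import random

random.seed(0)
mu, sigma, n, reps = 1.0, 3.0, 5, 20000  # assumed toy values

# MSE-optimal factor for shrinking toward 0; it depends on the unknown
# true mean mu, so this is purely illustrative of the trade-off.
c = mu**2 / (mu**2 + sigma**2 / n)

mse_plain = mse_shrunk = 0.0
for _ in range(reps):
    xbar = sum(random.gauss(mu, sigma) for _ in range(n)) / n
    mse_plain += (xbar - mu) ** 2       # unbiased sample mean
    mse_shrunk += (c * xbar - mu) ** 2  # shrunken (biased) estimator
mse_plain /= reps
mse_shrunk /= reps

print(mse_plain > mse_shrunk)  # True: shrinkage lowers MSE here
```

Here the variance saved (c² scales the sampling variance down) outweighs the squared bias introduced, which is exactly the bias-variance trade-off question 2 asks about.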
| D*****a posts: 2847 | 4 In essence it is a Bayesian idea.
There is nothing special about the origin; you can normalize so that the origin
represents the expected value of your prior belief.
For example, suppose others' experiments say the parameter is roughly 1.5.
You run your own experiments, and ignoring their results, your estimate is 5.
If you want to take their results into account, you shrink your own result toward 1.5.
The combined estimate ends up between 1.5 and 5.
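The reply above can be written out as a two-line calculation. Assuming a normal prior and likelihood (the variances below are made up; the 1.5 and 5 are the numbers from the post), the posterior mean is a precision-weighted average of the prior mean and your own estimate:

```python
# Prior centered at 1.5 (others' experiments) vs. your own estimate of 5.
prior_mean, prior_var = 1.5, 1.0  # assumed prior uncertainty
data_mean, data_var = 5.0, 1.0    # assumed sampling variance of your estimate

w_prior = 1 / prior_var  # precision of the prior
w_data = 1 / data_var    # precision of the data

posterior_mean = (w_prior * prior_mean + w_data * data_mean) / (w_prior + w_data)
print(posterior_mean)  # 3.25 -- your estimate shrunk toward 1.5
```

With equal precisions the result is the midpoint; increasing `prior_var` (a vaguer prior) moves the answer toward your own estimate of 5, and decreasing it shrinks harder toward 1.5.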
【Quoting l*********s's post above】
| l*********s posts: 5409 | 5 Got it. Thanks!
【Quoting D*****a's post above】
|