z**********i posts: 12276 | 1 It is a repeated-measures model. Before adding the random effects it takes 3-5 minutes; with the random effects it takes days. Any suggestions?
Thank you!! |
f******i posts: 4647 | 2 Large-scale data?
【In reply to z**********i】
|
z**********i posts: 12276 | 3 74K observations, so fairly large.
【In reply to f******i】
|
z**********i posts: 12276 | 4 I kept only one covariate in the model; adding the others makes it take even longer.
Before adding the random effects it takes only 2 minutes, but with them it takes several hours or more.
CODE:
proc nlmixed data=ami1_04Q1 tech=newrap qpoints=50;
   /* starting values: fixed effects, correlation, and (co)variances */
   parms b0=-2.53 b24=-0.055 rho=0.007 varRin=0.90 cov=-0.011
         varRslp=0.0028;
   /* linear predictor with random intercept Rin and random slope Rslp */
   eta    = b0 + (b24 + Rslp)*nqt + Rin;
   expeta = exp(eta);
   p      = expeta/(1 + expeta);   /* inverse logit link */
   n = totdenom;  r = R_totnum;
   /* beta-binomial parameters: mean p, intra-cluster correlation rho */
   A = p*(1 - rho)/rho;
   B = (1 - p)*(1 - rho)/rho;
   /* beta-binomial log-likelihood */
   loglike = (lgamma(n + 1) - lgamma(r + 1) - lgamma(n - r + 1))
           + lgamma(A + r) + lgamma(n + B - r) + lgamma(A + B)
           - lgamma(A + B + n) - lgamma(A) - lgamma(B);
   model r ~ general(loglike);
   random Rin Rslp ~ normal([0, 0], [varRin, cov, varRslp])
          subject=hsp_ID;
run;
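As a sanity check on the likelihood programmed above, the same beta-binomial log-likelihood can be written in plain Python and verified to define a proper distribution (the probabilities over r = 0..n must sum to 1). The values of n, p, and rho below are illustrative, not taken from the data:

```python
from math import lgamma, exp

def betabin_loglike(n, r, p, rho):
    # Same parameterization as the NLMIXED code:
    # mean proportion p, intra-cluster correlation rho.
    a = p * (1 - rho) / rho
    b = (1 - p) * (1 - rho) / rho
    return (lgamma(n + 1) - lgamma(r + 1) - lgamma(n - r + 1)
            + lgamma(a + r) + lgamma(n + b - r) + lgamma(a + b)
            - lgamma(a + b + n) - lgamma(a) - lgamma(b))

# Illustrative values only; the pmf should sum to (numerically) 1 over r = 0..n.
n, p, rho = 50, 0.073, 0.007
total = sum(exp(betabin_loglike(n, r, p, rho)) for r in range(n + 1))
print(total)
```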
【In reply to z**********i】
|
b**********i posts: 1059 | 5 Tough. Maybe you can consider looser convergence criteria. Check the procedure manual to see how to change them. That's what we do in some circumstances. Let us know your progress.
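The trade-off behind loosening convergence criteria can be seen in miniature with a plain Newton iteration (a Python sketch, not SAS; in NLMIXED the corresponding knobs are options such as GCONV= and FCONV=): a larger tolerance stops the optimizer earlier, at the cost of a less precise answer.

```python
# Newton's method for f(x) = x**2 - 2: stop when the step size
# drops below tol. A looser tol means fewer iterations -- the same
# trade-off as relaxing a mixed-model optimizer's convergence test.
def newton_sqrt2(tol):
    x, iters = 2.0, 0
    while True:
        step = (x * x - 2.0) / (2.0 * x)
        x -= step
        iters += 1
        if abs(step) < tol:
            return x, iters

for tol in (1e-2, 1e-12):
    root, iters = newton_sqrt2(tol)
    print(f"tol={tol:g}  iterations={iters}  root={root:.12f}")
```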
【In reply to z**********i】
|
o****o posts: 8077 | 6 How many HSP_ID values do you have? If you have thousands of them, it will be very slow.
Is it a balanced longitudinal dataset? |
z**********i posts: 12276 | 7 I have 4000+ hospitals.
Most hospitals have 22 continuous quarters of data; some are missing a few time points, and a few hospitals reported only a couple of times. So it is not balanced.
【In reply to o****o】
|
z**********i posts: 12276 | 8 Thanks, we will try those options.
【In reply to b**********i】
|
o****o posts: 8077 | 9 If you have 4000+ subjects, there is no way you can finish in minutes.
A single iteration may take many minutes.
【In reply to z**********i】
|
z**********i posts: 12276 | 10 Considering using WORKBENCH. The project is about to wrap up, and we are stuck on this.
【In reply to o****o】
|
s*******2 posts: 499 | 11 I once ran a dataset with 7,500,000 observations and 20 covariates through a mixed-effects model in SAS. It took some time, but only 2 hours.
Maybe your computer is not powerful enough.
Which proc do you use? I know one generalized linear mixed-effects proc is very time-consuming, but the other one is much faster.
【In reply to z**********i】
|
z**********i posts: 12276 | 12 Thanks.
I am using NLMIXED, because the outcome follows a binomial or Poisson distribution.
【In reply to s*******2】
|
s*******2 posts: 499 | 13 Please try GLIMMIX. It is much faster.
【In reply to z**********i】
|
x**g posts: 807 | 14 qpoints=50
That is very demanding.
【In reply to z**********i】
|
z**********i posts: 12276 | 15 I was told "NLMIXED allows you to program your own likelihood, whereas GLIMMIX works with common distributions. I don't think GLIMMIX has the beta-binomial distribution as one of its options, and the last time I looked it also wasn't capable of doing zero-inflation."
【In reply to s*******2】
|
z**********i posts: 12276 | 16 The default qpoints may be 30; the log said it could not achieve convergence, or gave some hint like that, so I increased qpoints. Otherwise I just use the default.
Thanks.
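For context on why qpoints dominates the run time here: with Gauss-Hermite quadrature the integrand is evaluated on a grid whose size is qpoints raised to the number of random effects, once per subject per optimizer iteration. A back-of-the-envelope cost model (plain Python; the counts are illustrative and ignore adaptive refinements):

```python
# Rough cost model: likelihood evaluations per optimizer iteration
# ~ subjects * qpoints ** n_random_effects for Gauss-Hermite quadrature.
def quad_evals(subjects, qpoints, n_random_effects):
    return subjects * qpoints ** n_random_effects

# Numbers matching the thread: 4000+ hospitals, two random effects
# (intercept Rin and slope Rslp).
print(quad_evals(4000, 5, 2))    # 100000 evaluations with a modest grid
print(quad_evals(4000, 50, 2))   # 10000000 evaluations with qpoints=50
```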
【In reply to x**g】
|
o****o posts: 8077 | 17 For beta-binomial, try tweaking the _VARIANCE_ and _MU_ automatic variables in GLIMMIX to change the default link and variance function.
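The variance function in question is the beta-binomial one: for r successes out of n with mean proportion p and intra-cluster correlation rho, Var(r) = n*p*(1-p)*(1 + (n-1)*rho), which is what a custom _VARIANCE_ program would need to encode. A quick numerical check of that formula against the pmf itself (plain Python, illustrative values):

```python
from math import lgamma, exp

def betabin_pmf(n, r, p, rho):
    # Beta-binomial pmf in the same parameterization as the
    # NLMIXED code elsewhere in the thread.
    a = p * (1 - rho) / rho
    b = (1 - p) * (1 - rho) / rho
    return exp(lgamma(n + 1) - lgamma(r + 1) - lgamma(n - r + 1)
               + lgamma(a + r) + lgamma(n + b - r) + lgamma(a + b)
               - lgamma(a + b + n) - lgamma(a) - lgamma(b))

n, p, rho = 40, 0.2, 0.05   # illustrative values
mean = sum(r * betabin_pmf(n, r, p, rho) for r in range(n + 1))
var = sum((r - mean) ** 2 * betabin_pmf(n, r, p, rho) for r in range(n + 1))
print(mean)   # analytically n*p = 8.0
print(var)    # analytically n*p*(1-p)*(1 + (n-1)*rho) = 18.88
```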
【In reply to z**********i】
|
z**********i posts: 12276 | 18 I didn't quite follow; could you give an example?
Thanks a lot!
【In reply to o****o】
|
z**********i posts: 12276 | 19 This is the beta-binomial with random effects; how should I change it? Thanks!
proc nlmixed data=ami1_04Q1 tech=newrap;
   /* starting values: fixed effects, correlation, and (co)variances */
   parms b0=-2.53 b24=-0.055 rho=0.007 varRin=0.90 cov=-0.011
         varRslp=0.0028;
   /* linear predictor with random intercept Rin and random slope Rslp */
   eta    = b0 + (b24 + Rslp)*nqt + Rin;
   expeta = exp(eta);
   p      = expeta/(1 + expeta);   /* inverse logit link */
   n = totdenom;  r = R_totnum;
   /* beta-binomial parameters: mean p, intra-cluster correlation rho */
   A = p*(1 - rho)/rho;
   B = (1 - p)*(1 - rho)/rho;
   /* beta-binomial log-likelihood */
   loglike = (lgamma(n + 1) - lgamma(r + 1) - lgamma(n - r + 1))
           + lgamma(A + r) + lgamma(n + B - r) + lgamma(A + B)
           - lgamma(A + B + n) - lgamma(A) - lgamma(B);
   model r ~ general(loglike);
   random Rin Rslp ~ normal([0, 0], [varRin, cov, varRslp])
          subject=hsp_ID;
run;
【In reply to o****o】
|