c**********e posts: 2007 | 1 To detect a hazard ratio (HR) of 1.5 (comparing the
control group to the treatment group; a 50% improvement in median
survival time, from 6 months for the control group to 9 months
for the treatment group, with 80% power using a 2-sided
log-rank test at the 2.5% significance level), the primary
analysis was planned to be conducted after X deaths.
How was the sample size determined? Could anybody with
oncology experience give a hint?
Some SAS sample code is as follows. The thing was that
some param | e*****n posts: 15 | 2 Assuming a constant hazard, i.e., an
exponential distribution for survival time, the accrual time doesn't matter.
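For reference, the required number of deaths in designs like this typically comes from the Schoenfeld approximation for the log-rank test, which depends only on the hazard ratio, the error rates, and the allocation ratio. A minimal sketch (assuming 1:1 allocation; this is illustrative, not code from the thread):

```python
from math import ceil, log
from statistics import NormalDist

def required_events(hr, power=0.80, alpha=0.025):
    """Schoenfeld approximation to the number of events (deaths) needed
    for a 2-sided log-rank test at level alpha, 1:1 allocation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value
    z_beta = NormalDist().inv_cdf(power)           # power quantile
    return ceil(4 * (z_alpha + z_beta) ** 2 / log(hr) ** 2)

# HR = 1.5, 80% power, 2-sided test at the 2.5% level, as in the question
print(required_events(1.5, power=0.80, alpha=0.025))
```

Note that this gives the number of events X, not the number of patients; translating events into patients is where accrual and follow-up enter.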
| j*****e posts: 182 | 3 The formula for this type of sample size estimation is quite complicated, and
the accrual time definitely matters. First, you need to understand what
accrual time and follow-up time refer to. Will there be any crossover? Are
you willing to assume the time to event is exponential? What is the median
time to event? You can download nQuery for a 30-day trial. Play with the
parameter settings and you will get a better feel for how each parameter
affects the sample size.
Even after you get a sam |
|
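To see why accrual and follow-up matter under the exponential assumption, here is a hypothetical sketch (the 24-month accrual and 12-month follow-up are made-up values, not from the thread): the medians determine the hazards, and a Simpson's-rule approximation (due to Schoenfeld) to the probability that a randomized patient dies by the analysis time converts a required event count into a total sample size.

```python
from math import ceil, exp, log

def hazard_from_median(median):
    # Exponential survival: S(t) = exp(-lam * t); S(median) = 0.5
    # implies lam = ln(2) / median.
    return log(2) / median

def prob_event(lam_c, lam_t, accrual, followup):
    """Simpson's-rule approximation to the probability a randomized
    patient has died by the analysis time, assuming uniform accrual
    over `accrual` and a further `followup` before analysis."""
    def surv_bar(t):  # pooled survival, 1:1 randomization
        return 0.5 * (exp(-lam_c * t) + exp(-lam_t * t))
    a, f = accrual, followup
    return 1 - (surv_bar(f) + 4 * surv_bar(f + a / 2) + surv_bar(f + a)) / 6

lam_c = hazard_from_median(6)   # control: median 6 months
lam_t = hazard_from_median(9)   # treatment: median 9 months
print(lam_c / lam_t)            # ≈ 1.5, consistent with the stated HR

events = 232                    # e.g., from a Schoenfeld events calculation
p = prob_event(lam_c, lam_t, accrual=24, followup=12)
n_total = ceil(events / p)      # total patients across both arms
print(n_total)
```

Shortening the accrual or follow-up lowers the event probability and inflates the required number of patients, which is exactly the sensitivity the reply describes exploring in nQuery.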