survival_adapt {goldilocks}                                    R Documentation

Simulate and execute a single adaptive clinical trial design with a time-to-event endpoint

Description

Simulate and execute a single adaptive clinical trial design with a time-to-event endpoint

Usage

survival_adapt(
  hazard_treatment,
  hazard_control = NULL,
  cutpoints = 0,
  N_total,
  lambda = 0.3,
  lambda_time = 0,
  interim_look = NULL,
  end_of_study,
  prior = c(0.1, 0.1),
  block = 2,
  rand_ratio = c(1, 1),
  prop_loss = 0,
  alternative = "greater",
  h0 = 0,
  Fn = 0.05,
  Sn = 0.9,
  prob_ha = 0.95,
  N_impute = 10,
  N_mcmc = 10,
  method = "logrank",
  imputed_final = FALSE
)

Arguments

hazard_treatment

vector. Constant hazard rates under the treatment arm.

hazard_control

vector. Constant hazard rates under the control arm.

cutpoints

vector. Times at which the baseline hazard changes. Default is cutpoints = 0, which corresponds to a simple (non-piecewise) exponential model. A sketch of how piecewise hazards can be constructed is given after this list of arguments.

N_total

integer. Maximum allowable sample size.

lambda

vector. Enrollment rates across simulated enrollment times. See enrollment for more details.

lambda_time

vector. Enrollment time(s) at which the enrollment rates change. Must be the same length as lambda. See enrollment for more details.

interim_look

vector. Sample size for each interim look. Note: the maximum sample size should not be included.

end_of_study

scalar. Length of the study; i.e. the time at which the endpoint is evaluated.

prior

vector. The prior distributions for the piecewise hazard rate parameters are each Gamma(a_0, b_0), with specified (known) hyper-parameters a_0 and b_0. The default non-informative prior distribution used is Gamma(0.1, 0.1), which is specified by setting prior = c(0.1, 0.1).

block

scalar. Block size for generating the randomization schedule.

rand_ratio

vector. Randomization allocation for the ratio of control to treatment. Integer values that map onto the block size; e.g. the defaults rand_ratio = c(1, 1) with block = 2 give 1:1 allocation. See randomization for more details.

prop_loss

scalar. Overall proportion of subjects lost to follow-up. Defaults to zero.

alternative

character. String specifying the alternative hypothesis; must be one of "greater" (default), "less", or "two.sided".

h0

scalar. Null hypothesis value of p_treatment - p_control when method = "bayes". Default is h0 = 0. The argument is ignored when method = "logrank" or method = "cox"; in those cases the usual test of non-equal hazards is assumed.

Fn

vector of values in [0, 1]. Each element is the probability threshold for stopping early for futility at the i-th interim look. If there are no interim looks (i.e. interim_look = NULL), then Fn is not used in the simulations or analysis. The length of Fn should match that of interim_look; otherwise the values are recycled.

Sn

vector of values in [0, 1]. Each element is the probability threshold for stopping early for expected success at the i-th interim look. If there are no interim looks (i.e. interim_look = NULL), then Sn is not used in the simulations or analysis. The length of Sn should match that of interim_look; otherwise the values are recycled.

prob_ha

scalar in [0, 1]. Posterior probability threshold that must be exceeded to declare success under the alternative hypothesis.

N_impute

integer. Number of imputations for Monte Carlo simulation of missing data.

N_mcmc

integer. Number of samples to draw from the posterior distribution when using a Bayesian test (method = "bayes").

method

character. For an imputed data set (or the final data set after follow-up is complete), whether the analysis should be a log-rank test (method = "logrank"), a Wald test from a Cox proportional hazards regression model (method = "cox"), a fully Bayesian analysis (method = "bayes"), or a chi-square test (method = "chisq"). See the Details section.

imputed_final

logical. Should the final analysis (after all subjects have been followed up to the study end) be based on imputed outcomes for subjects who were LTFU (i.e. right-censored with time < end_of_study)? Default is FALSE. Setting to FALSE means that the final analysis incorporates right-censoring.
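
The hazard_treatment, hazard_control, and cutpoints arguments together define piecewise exponential event-time distributions. As a rough sketch (not part of the package API), constant hazards can be backed out from assumed end-of-study survival probabilities, mirroring the construction used in the Examples below; the survival probabilities, the change-point at 12 months, and all variable names here (S_treatment, hazard_treatment_pw, etc.) are purely illustrative.

# Hypothetical planning values: 36-month survival of 85% (treatment) and
# 70% (control) under a simple exponential model (cutpoints = 0).
S_treatment <- 0.85
S_control   <- 0.70
t_end       <- 36

# Under S(t) = exp(-hazard * t), the constant hazard is -log(S(t)) / t.
hazard_treatment <- -log(S_treatment) / t_end
hazard_control   <- -log(S_control) / t_end

# A piecewise variant: the hazard changes at 12 months, so cutpoints has two
# elements and each hazard argument supplies one rate per interval (the
# second-interval rates below are arbitrary illustrative multiples).
cutpoints           <- c(0, 12)
hazard_treatment_pw <- c(hazard_treatment, 1.2 * hazard_treatment)
hazard_control_pw   <- c(hazard_control,   1.2 * hazard_control)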

Details

Implements the Goldilocks design method described in Broglio et al. (2014). At each interim analysis, two probabilities are computed:

  1. The posterior predictive probability of eventual success. This is calculated as the proportion of imputed datasets at the current sample size that would go on to be successful at the specified threshold. At each interim analysis it is compared to the corresponding element of Sn, and if it exceeds the threshold, accrual/enrollment is suspended and the outstanding follow-up is allowed to complete before conducting the pre-specified final analysis.

  2. The posterior predictive probability of final success. This is calculated as the proportion of imputed datasets at the maximum sample size that would go on to be successful. As above, it is compared to the corresponding element of Fn, and if it is less than the threshold, accrual/enrollment is suspended and the trial terminated. Typically this would be a binding decision. If it is not a binding decision, then one should also explore the simulations with Fn = 0.

Hence, at each interim look, one of three decisions is made:

  1. Stop for expected success

  2. Stop for futility

  3. Continue to enroll new subjects or, if the maximum sample size has been reached, proceed to the final analysis.

At each interim (and final) analysis, each imputed data set (or the final data set) is analysed using the test specified by the method argument; see the description of method in the Arguments section. A rough sketch of the interim decision rule is given below.
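
As a loose illustration (this is not the package's internal code): suppose each of the N_impute imputed datasets yields a posterior probability of efficacy. The predictive probability of eventual success is the proportion of those probabilities exceeding prob_ha when follow-up is completed at the current sample size, and the predictive probability of final success is the analogous proportion when enrollment is imputed up to N_total; the two are compared to Sn and Fn respectively. The posterior probabilities and variable names below are made up for illustration.

# Posterior probabilities of efficacy from N_impute = 10 imputed datasets
# (purely illustrative values):
post_prob_current <- c(0.97, 0.99, 0.91, 0.98, 0.96, 0.88, 0.99, 0.97, 0.95, 0.99)
post_prob_max     <- c(0.98, 0.99, 0.94, 0.99, 0.97, 0.93, 0.99, 0.99, 0.97, 0.99)
prob_ha <- 0.95
Sn      <- 0.90
Fn      <- 0.05

# Proportion of imputed datasets declared successful at the prob_ha threshold
p_success_current <- mean(post_prob_current > prob_ha)  # compared to Sn
p_success_max     <- mean(post_prob_max > prob_ha)      # compared to Fn

if (p_success_current > Sn) {
  decision <- "stop accrual for expected success"
} else if (p_success_max < Fn) {
  decision <- "stop for futility"
} else {
  decision <- "continue enrolling (or run the final analysis if at N_total)"
}
decision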

Value

A data frame containing some of the input parameters (arguments) as well as statistics from the analysis, including the columns listed below. A brief sketch of inspecting these columns follows the list.

N_treatment:

integer. The number of patients enrolled in the treatment arm for each simulation.

N_control:

integer. The number of patients enrolled in the control arm for each simulation.

est_interim:

scalar. The treatment effect that was estimated at the time of the interim analysis. Note this is not actually used in the final analysis.

est_final:

scalar. The treatment effect that was estimated at the final analysis. Final analysis occurs when either the maximum sample size is reached and follow-up complete, or the interim analysis triggered an early stopping of enrollment/accrual and follow-up for those subjects is complete.

post_prob_ha:

scalar. The corresponding posterior probability from the final analysis. If imputed_final is TRUE, this is calculated as the posterior probability of efficacy (or equivalent, depending on how alternative and h0 were specified) for each imputed final analysis dataset, and then averaged over the N_impute imputations. If method = "logrank", post_prob_ha is calculated in the same fashion, but the value represents 1 - P, where P denotes the frequentist P-value.

stop_futility:

integer. A binary (0/1) indicator of whether the trial was stopped early for futility.

stop_expected_success:

integer. A binary (0/1) indicator of whether the trial was stopped early for expected success.
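
A minimal sketch of inspecting these columns, assuming the goldilocks package is loaded. The inputs mirror the illustrative values in the Examples section, with N_impute and N_mcmc deliberately tiny so the call runs quickly; real analyses would use much larger values.

out <- survival_adapt(
  hazard_treatment = -log(0.85) / 36,
  hazard_control = -log(0.7) / 36,
  N_total = 300,
  lambda = 20,
  end_of_study = 36,
  interim_look = 200,
  alternative = "less",
  Fn = 0.05,
  Sn = 0.9,
  prob_ha = 0.95,
  N_impute = 5,
  N_mcmc = 5,
  method = "bayes")

out$N_treatment                 # patients randomized to the treatment arm
out$post_prob_ha >= 0.95        # was success declared at the final analysis?
out$stop_expected_success == 1  # did an interim look stop accrual early?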

References

Broglio KR, Connor JT, Berry SM. Not too big, not too small: a Goldilocks approach to sample size selection. Journal of Biopharmaceutical Statistics, 2014; 24(3): 685–705.

Examples

# RCT with exponential hazard (no piecewise breaks)
# Note: the number of imputations is small to enable this example to run
#       quickly on CRAN tests. In practice, much larger values are needed.
survival_adapt(
 hazard_treatment = -log(0.85) / 36,
 hazard_control = -log(0.7) / 36,
 cutpoints = 0,
 N_total = 600,
 lambda = 20,
 lambda_time = 0,
 interim_look = 400,
 end_of_study = 36,
 prior = c(0.1, 0.1),
 block = 2,
 rand_ratio = c(1, 1),
 prop_loss = 0.30,
 alternative = "less",
 h0 = 0,
 Fn = 0.05,
 Sn = 0.9,
 prob_ha = 0.975,
 N_impute = 10,
 N_mcmc = 10,
 method = "bayes")

[Package goldilocks version 0.4.0 Index]