@CamDavidsonPilon Thanks so much for your work on lifelines. Very cool. I was particularly excited to see your last post on SaaS churn. https://dataorigami.net/blogs/napkin-folding/churn.
I've been trying to follow along but have run into a couple of issues. First, I don't see that PiecewiseExponentialRegressionFitter exists in lifelines. I do see PiecewiseExponentialFitter, however. If I use PiecewiseExponentialFitter, I get an error:
object has no attribute 'predict_cumulative_hazard'
It took me a while to see how it was right-censored, but once I considered that the goal is to measure durations, it became clearer.
Yeah, consider someone in that acquired group: you know that their duration is at least 495 days (now minus Jan 2018), and certainly more. Thus we have a lower bound on their duration -> right censoring.
Hi @bkos, good question. Generally, censoring makes estimation harder, as we lose information. It might not be obvious from the KM equations, but it affects the denominator $n_i$, the number of subjects at risk: a censored individual may not be present in this count.
For parametric models, check out section 2 of this article: https://cran.r-project.org/web/packages/flexsurv/vignettes/flexsurv.pdf
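To make the KM point concrete: in $\hat{S}(t) = \prod_{t_i \le t} (1 - d_i / n_i)$, a censored subject leaves the at-risk count $n_i$ without ever contributing an event $d_i$. Here is a minimal sketch (the toy durations are made up) showing this with lifelines' event_table:

```python
from lifelines import KaplanMeierFitter

# toy data: 5 subjects, two of them right-censored (event_observed = 0)
durations      = [3, 5, 5, 8, 10]
event_observed = [1, 0, 1, 0, 1]

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed)

# 'at_risk' is the n_i in the KM product; censored subjects drop out of it
# at their censoring time without adding to the 'observed' (d_i) column
print(kmf.event_table)
print(kmf.survival_function_)
```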
time = beta * x + e
where e was Gumbel distributed, I ended up with this loss function:

```python
import torch

def gumbel_sa_loss(y_pred, targ):
    # targ packs [event indicator, time]; y_pred is the model's predicted location
    failed = targ[..., :1]        # 1 = event observed, 0 = right-censored
    e = y_pred - targ[..., 1:]    # residual under time = beta * x + e
    exp_e = torch.exp(e)
    # observed events contribute the negative log-density: exp(e) - e
    failed_loss = failed * (exp_e - e)
    # censored events contribute the negative log-survival: -log(1 - exp(-exp(e)))
    censored_loss = -(1 - failed) * torch.log(1 - torch.exp(-exp_e))
    # despite the name, this sum is the negative log-likelihood, so minimize it
    log_lik = failed_loss + censored_loss
    return log_lik.mean()
You'll need to implement the likelihood in pytorch, and I think a good summary of parametric survival likelihoods is in section 2 of this article: https://cran.r-project.org/web/packages/flexsurv/vignettes/flexsurv.pdf
For AFT models, note that it's log(time) = beta * x + e
(the log is important)
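Putting those two comments together, here is a minimal usage sketch of the gumbel_sa_loss above (the tensor values are made-up toys; a real model would produce y_pred from covariates):

```python
import torch

# assumes gumbel_sa_loss from the snippet above is in scope
t      = torch.tensor([[10.0], [25.0], [4.0]])    # raw durations
failed = torch.tensor([[1.0], [0.0], [1.0]])      # 1 = observed, 0 = censored
targ   = torch.cat([failed, torch.log(t)], dim=-1)  # note the log!

# stand-in for the model output beta * x
y_pred = torch.tensor([[2.0], [3.1], [1.4]], requires_grad=True)

loss = gumbel_sa_loss(y_pred, targ)
loss.backward()  # gradients flow back to y_pred (i.e., to a real model's weights)
```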
Hello All,
I would like to ask for some help getting started with lifelines. There were some simple tasks I could not find a direct way of doing; I am particularly interested in Cox models.
1) I was not able to find a way to retrieve the hazard at time t and the survival at time t for a Cox PH model; I only get the information about the baselines and the coefficients (which for some reason are called "hazards_"). Of course I could generate the hazard and the survival from this information, but it would be nice to do it directly. If it is not my fault for failing to find this direct way, I would gladly contribute it to lifelines.
2) How does one generate the adjusted (covariate-dependent) survival curves for a Cox PH model for the data used to fit the model, before doing any prediction? Apparently there is currently only support for plotting the baseline survival function.
```python
from math import exp

def hazard(phdata, coef, baseline, i, t):
    # covariates for subject i (drop the non-covariate columns)
    cov = phdata.iloc[i].drop(labels=['censored?', 'eventdate'])
    # baseline hazard evaluated at time t
    base = baseline.at[float(t), 'baseline hazard']
    # Cox PH: h(t|x) = b0(t) * exp(x . beta)
    haz = base * exp(cov.dot(coef))
    return haz
```
This is what I meant by the hazard at t, for instance. I could not find where it is implemented; is this equivalent to predict_partial_hazard(X)?
It looks like you want the hazard per subject over time (not the cumulative hazard). The most appropriate way is to do something like cph.predict_cumulative_hazard(phdata).diff(), as the cumulative hazard and the hazard are related in that manner. Using predict_cumulative_hazard is helpful since it takes care of any strata arguments and any de-meaning necessary.
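For example, a minimal sketch using the rossi dataset that ships with lifelines (the column names are the real ones from that dataset):

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

rossi = load_rossi()
cph = CoxPHFitter()
cph.fit(rossi, duration_col='week', event_col='arrest')

# cumulative hazard H(t): one column per subject, indexed by the event times
cum_haz = cph.predict_cumulative_hazard(rossi)

# per-subject hazard via first differences, h(t_i) ~ H(t_i) - H(t_{i-1})
haz = cum_haz.diff()
```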
You asked me privately why I subtract the mean of the covariates in the prediction methods. The reason is that the algorithm is computed using de-meaned data, and hence the baseline hazard "accounts" for the de-meaned data and grows or shrinks appropriately (I can't think of a better way to say this without a whiteboard/latex). From the POV of the hazard, then, the values are the same. However, the log-partial hazard and the partial hazard will be different. This is okay, as there is no interpretation of the (log-)partial hazard without the baseline hazard (another way to think about this: it's unit-less). The only use of the (log-)partial hazard is determining rankings. That is, multiplying by the baseline hazard is necessary to recover the hazards.
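For reference, the whiteboard/LaTeX version of that argument is just:

$$h(t \mid x) = b_0(t)\, e^{(x - \bar{x})^\top \beta} = \left[ b_0(t)\, e^{-\bar{x}^\top \beta} \right] e^{x^\top \beta},$$

so the constant factor $e^{-\bar{x}^\top \beta}$ is absorbed into the baseline hazard. The hazard $h(t \mid x)$ is identical either way, but the partial hazard $e^{(x - \bar{x})^\top \beta}$ differs from $e^{x^\top \beta}$ by a constant multiple, which is why it only carries meaning once multiplied by its matching baseline.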
Hi all, I'm having issues with convergence for the CoxPHFitter in lifelines that I'm not seeing in R. It gives me this message:
"ConvergenceError: Convergence halted due to matrix inversion problems. Suspicion is high collinearity. Please see the following tips in the lifelines documentation:
https://lifelines.readthedocs.io/en/latest/Examples.html#problems-with-convergence-in-the-cox-proportional-hazard-model
Matrix is singular."
This happens even when I reduce the dataset to just the time at observation, whether or not the event happened, and a single covariate. Graphically, I have verified that there is not a perfect decision boundary, and I have also modeled this in R, where it worked perfectly. Looking at the algorithm's step output, it does look like the Newton algorithm is diverging:
```
Iteration 1: norm_delta = 2.29694, step_size = 0.9500, ll = -27814.17290, newton_decrement = 3481.08865, seconds_since_start = 0.0
Iteration 2: norm_delta = 5.84762, step_size = 0.9500, ll = -36925.79270, newton_decrement = 37483.21855, seconds_since_start = 0.0
Iteration 3: norm_delta = 108.73423, step_size = 0.7125, ll = -40227.22948, newton_decrement = 210617.17243, seconds_since_start = 0.1
Iteration 4: norm_delta = 14575.06691, step_size = 0.2095, ll = -1076963.03641, newton_decrement = 106456100.74164, seconds_since_start = 0.1
```
You can try passing a smaller step_size into fit. For example, cph.fit(..., step_size=0.30) (or decrease it more if necessary). I would also appreciate it if you could share the dataset with me (privately, at cam.davidson.pilon@gmail.com), as datasets that fail convergence are useful to try new methods against.
@MattB27 the equation on page 7 of the PDF is not what is implemented. Recall the MLE: I take the log, then differentiate. numer and denom in the code refer to the numerator and denominator of the fraction in the resulting equation, after logging and differentiating the equation on page 7.
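For readers without the original image: assuming this refers to the Cox partial likelihood, the score after taking the log and differentiating has the form

$$\frac{\partial \ell}{\partial \beta} = \sum_{i:\ \delta_i = 1} \left( x_i - \frac{\sum_{j \in R(t_i)} x_j\, e^{x_j^\top \beta}}{\sum_{j \in R(t_i)} e^{x_j^\top \beta}} \right),$$

where $R(t_i)$ is the risk set at time $t_i$, and the ratio is the numer / denom fraction referred to in the code.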
What's the best way to save a lifelines model? (or is this not possible)? I'd like to automate the model to make predictions daily, but retrain only weekly. The model in question is a lognormal AFT.
I tried to use joblib, but it threw a PicklingError:

```
PicklingError: Can't pickle <function unary_to_nary.<locals>.nary_operator.<locals>.nary_f at 0x1a378e9f28>: it's not found as autograd.wrap_util.unary_to_nary.<locals>.nary_operator.<locals>.nary_f
```
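One workaround that often helps with closure-pickling errors like this is the third-party dill package, which can serialize closures that the standard pickle/joblib machinery rejects. A minimal sketch, assuming aft is the fitted lognormal AFT model:

```python
import dill

# save the fitted model (aft is, e.g., the fitted lognormal AFT fitter)
with open('lognormal_aft.dill', 'wb') as f:
    dill.dump(aft, f)

# ...later, in the daily prediction job, load it back
with open('lognormal_aft.dill', 'rb') as f:
    aft = dill.load(f)
```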
If you have an individual who has the 'death' event, but then becomes alive again, and then has a 'death' event again, how do you treat this? Should you use a time-based model and record the death event something like this: [t0 - t1, death], [t2 - t3, death]? Or do you not record the first death event but still use a time-based model, recording a gap in between the 'observations': [t0 - t1, t2 - t3, death]?
OR could you use a standard (non-temporal) model and treat them as separate individuals? What would the mathematical ramifications be of using a standard model like this?
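As an illustration of the first encoding, here is a sketch of the counting-process (start/stop) layout (the day numbers are made up). This is the kind of long-format table that lifelines' CoxTimeVaryingFitter consumes (id/start/stop/event columns plus covariates):

```python
import pandas as pd

# one subject who 'dies' at t1, returns at t2, and 'dies' again at t3;
# each row is an interval during which the subject was under observation
df = pd.DataFrame({
    'id':    [1, 1],
    'start': [0, 120],    # t0, t2 (made-up day counts)
    'stop':  [90, 200],   # t1, t3
    'event': [1, 1],      # a 'death' observed at the end of each interval
})
```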