
@gabrown, I was able to replicate what you are seeing locally. If I understand correctly, your definition of churn is "fraction of uncensored users who died before 12 months". I think this is going to bias your churn rate upwards, as you are not taking censoring into account. In an extreme case, where all but one subject is censored, your definition of churn will give either 0% or 100%. But that feels a bit strange, no? If the one uncensored subject died early on, and the other subjects were censored later, we shouldn't conclude that churn is 100%.

Please correct me if I am mistaken, or I am not making sense. Happy to discuss more!

@CamDavidsonPilon can you please help me understand why using

`value_and_grad(negative_log_likelihood)`

in the minimization function, in fitters, helps? Why not simply minimize the `negative_log_likelihood` directly?
In the same file: it seems that

`class ParametericAFTRegressionFitter(ParametricRegressionFitter)`

contains an extra 'e' :D
The value-and-grad function is specifically used in the minimization routine.

The routine does minimize the negative log likelihood, but it also requires information from both f and f-prime. That's what `value_and_grad` provides.
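To illustrate the pattern, here is a hand-written sketch of what `value_and_grad` supplies: a single function returning both the objective and its gradient, which `scipy.optimize.minimize` consumes via `jac=True`. The data and the exponential likelihood are synthetic, chosen for simplicity; lifelines itself derives the gradient automatically with autograd.

```python
import numpy as np
from scipy.optimize import minimize

# synthetic fully-observed exponential durations (illustrative data)
durations = np.array([5.0, 3.0, 9.0, 8.0, 7.0, 4.0])
n, total = len(durations), durations.sum()

def nll_and_grad(theta):
    # parameterize lambda = exp(theta) so the optimization is unconstrained
    lam = np.exp(theta[0])
    nll = -(n * np.log(lam) - lam * total)   # f: negative log likelihood
    grad = np.array([-(n - lam * total)])    # f-prime w.r.t. theta
    return nll, grad                          # the (value, grad) pair

# jac=True tells scipy the objective returns (value, gradient) in one call,
# so f and f' are never computed separately
res = minimize(nll_and_grad, x0=np.array([0.0]), jac=True, method="BFGS")
```

Since the MLE of an exponential rate is n / sum(t), `np.exp(res.x[0])` should come out close to 6/36.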

I have some issues with

`autograd`

and while looking at their documentation I've noticed the note saying that they won't develop it further. Have you thought about migrating to JAX?
Thanks for the quick response, @CamDavidsonPilon. I get your point: by ignoring the censored users who haven't been there 12 months, and since the churn rate is low, we are dropping predominantly non-churners, so this adds a bias. However, in my analysis dataset, even if I only consider users who could have completed 12 months (so there are no censored users with tenure < 12), I still see a systematic difference.

If we consider this in the context of survival, how would you measure the survival after 12 months just from the data? As I think this problem would have exactly the same issues.

Thanks for your input, and also for your awesome package!
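For the "how would you measure survival after 12 months just from the data" question, the standard answer is the Kaplan-Meier estimator (lifelines provides `KaplanMeierFitter`); here is a minimal pure-numpy sketch of the product-limit formula to show how censoring is handled:

```python
import numpy as np

def km_survival_at(t, durations, observed):
    """Product-limit (Kaplan-Meier) estimate of S(t), handling right-censoring."""
    durations = np.asarray(durations, dtype=float)
    observed = np.asarray(observed, dtype=bool)
    s = 1.0
    # multiply (1 - d_i / n_i) over each distinct event time up to t
    for ti in np.sort(np.unique(durations[observed & (durations <= t)])):
        at_risk = np.sum(durations >= ti)              # n_i: at risk just before ti
        deaths = np.sum((durations == ti) & observed)  # d_i: events exactly at ti
        s *= 1.0 - deaths / at_risk
    return s
```

With no censoring this reduces to the empirical survival fraction; with censoring, censored subjects leave the risk set without ever counting as events, which is exactly the correction the naive 12-month churn fraction lacks.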

(and hence have their own variances)

Thank you! So, I have this file. I want to maximize the (log) likelihood there. I get the error:

`ValueError: setting an array element with a sequence`

I've read in their documentation that "Assignment is hard to support...", but at this point I can't imagine how it should rightly be implemented.
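For context, the usual workaround for autograd's lack of array assignment is to accumulate results in a Python list and stack at the end, rather than writing into a preallocated array. An illustrative sketch (plain numpy shown; the same pattern applies inside an autograd-traced function):

```python
import numpy as np

xs = np.arange(5.0)

# autograd cannot differentiate through in-place writes like `out[i] = ...`,
# so instead of preallocating `out = np.empty(5)` and filling it element by
# element, build a list and stack it once at the end:
vals = [x ** 2 for x in xs]
out = np.stack(vals)
```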
@sursu I was able to replicate the problem locally. I made some changes to get it to converge: https://gist.github.com/CamDavidsonPilon/161fc665f6fccc91e21a543d1132a192

1) Instead of one large matrix for the `x_` variables (which may cause problems with autograd), I instead chose a list of small matrices.

2) The `lik` variable is now incrementing as we go.

3) I like BFGS as a first routine to use, feel free to try others though.
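A toy sketch of that pattern (hypothetical data; an exponential log-likelihood used for concreteness, not the gist's actual model): the data is kept as a list of small arrays, and the likelihood is accumulated incrementally rather than computed on one big concatenated matrix.

```python
import numpy as np

# hypothetical: durations kept as a list of small per-group arrays
# rather than one large matrix
groups = [np.array([2.0, 3.0]), np.array([5.0, 1.0, 4.0]), np.array([6.0])]

def negative_log_likelihood(log_lam, groups):
    lam = np.exp(log_lam)
    lik = 0.0  # incremented group by group, like `lik` in the gist
    for t in groups:
        lik += len(t) * np.log(lam) - lam * t.sum()
    return -lik
```

The result is identical to computing the likelihood on the concatenated data; the difference is purely in how the computation graph is built.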

@CamDavidsonPilon hmm, I get your point. I have looked at the Kaplan-Meier distribution and it seems to match for the example I provided; however, when I look at the dataset I am interested in, there is a bias after the first 12 months. Is there any assumption about the hazards? We have very spiky hazards, with a high hazard once every 12 months, while through the rest of the year the hazard is low. Do you think that would affect the performance?

Here is an example of the comparison, where the blue line is the Kaplan-Meier estimate and the red line is the Cox regression.

:wave: Minor lifelines version release: https://github.com/CamDavidsonPilon/lifelines/releases/tag/v0.22.4

@CamDavidsonPilon for what it's worth, here is a short snippet producing a slightly misleading error involving `pandas.DataFrame.apply` that took me a day to debug.

task: use Cox to predict event probability for censored items at the time of their current duration

```
import lifelines as ll
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(0, 100, size=(10, 2)), columns=['regressor', 'duration'])
df['event'] = np.random.choice([True, False], 10)
print(df)
# uncomment to lose the bool dtype and fix the TypeError
# df['event'] = df['event'].astype(int)
cf = ll.CoxPHFitter()
cf.fit(df, duration_col='duration', event_col='event')
# select only the censored items
df = df[df['event'] == 0]
func = lambda row: cf.predict_survival_function(row[['regressor']], times=row['duration'])
df.apply(func, axis=1)
```

'Misleading' because it will say the regressor column is non-numerical...

Minor lifelines release: https://github.com/CamDavidsonPilon/lifelines/releases/tag/v0.22.5

@data-blade I made your example ^ work as well now


Hello, is it possible to use lifelines to predict queue times? In my case, employees check clients' documents; they don't work on weekends, at night, or during lunch, etc. When a client uploads a document, I want to predict how long they will have to wait.

For example, training data - table contains documents verification process

`document_id | start_at | ends_at | duration (ends_at - start_at)`

Hi @dmitryuk, sure, that can be done; queue times fit perfectly into survival analysis. Since you suggest that *when* the client uploaded the doc is important, I would suggest that you use that feature (mapped to a cyclic variable) in a regression model. Ex:

```
import numpy as np
from lifelines import WeibullAFTFitter

seconds_in_day = 24 * 60 * 60
# map_to_seconds: your own helper that converts the timestamp
# to seconds since midnight
df['start_time'] = df['start_time'].map(map_to_seconds)
df['sin_start_time'] = np.sin(2 * np.pi * df['start_time'] / seconds_in_day)
df['cos_start_time'] = np.cos(2 * np.pi * df['start_time'] / seconds_in_day)
df = df.drop('start_time', axis=1)

wf = WeibullAFTFitter().fit(df, "duration")
wf.predict_survival_function(df)
wf.predict_median(df)
```

Since you want *how long left to wait*, you probably want to use the `conditional_after` kwarg in the `predict_*` methods as well.
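For intuition, the math behind conditioning on time already survived can be sketched in plain numpy for a Weibull model (lam = scale, rho = shape; this is an illustrative derivation, not lifelines' internals):

```python
import numpy as np

def median_remaining_wait(lam, rho, s):
    """Median additional wait given survival past s, for Weibull S(t) = exp(-(t/lam)**rho).

    Solves S(s + m) / S(s) = 0.5 for m:
      ((s + m)/lam)**rho = (s/lam)**rho + ln 2
    """
    return lam * ((s / lam) ** rho + np.log(2)) ** (1.0 / rho) - s
```

A sanity check on the formula: with rho = 1 the Weibull reduces to the memoryless exponential, so the remaining median is lam * ln 2 no matter how long the client has already waited.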
@CamDavidsonPilon Thank you for your answer!

This way I prepared the data as:

`id (doc id) | start_from_week_seconds (seconds past from the start of the week when the client uploaded the doc) | duration (seconds spent checking the doc)`

After this line executed:

`wf = WeibullAFTFitter().fit(df, "duration")`

I get: "StatisticalWarning: The diagonal of the variance matrix has negative values. This could be a problem with WeibullFitter's fit to the data."

Could you help me understand what is wrong in the code?

Simple code with data: https://github.com/dmitryuk/lifetime_predict_queue

@dmitryuk ah, ignore it, I need to suppress that. Also, make sure to drop the `id` col in your model.
:wave: minor lifelines release. Better support for pickling! https://github.com/CamDavidsonPilon/lifelines/releases

Hello! I'm trying to predict failure of a few robots with a pretty substantial time-series dataset, and I've been looking at lifelines as a potential method for doing so. The time series data has a few instances of failure, and I'm trying to correlate a number of other variables we have data on (such as forward velocity, number of stationary hours, etc) with failure. In short, I'm trying to get a window in which to predict possible failure based on historical data. Should I be using survival regression for this?

@nravic I think you can use lifelines, but you're in the realm of recurrent events, which lifelines has only a little support for (there may be another package out there?). Since you have daily snapshots, you probably want to use time-varying regression: https://lifelines.readthedocs.io/en/latest/Time%20varying%20survival%20regression.html
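Time-varying regression expects the data in "long" format, one row per (id, interval), with covariates allowed to change between intervals. A hedged sketch with hypothetical columns, which would then be fed to lifelines' `CoxTimeVaryingFitter`:

```python
import pandas as pd

# hypothetical long-format data: one row per robot per interval
long_df = pd.DataFrame({
    "id":               [1, 1, 1, 2, 2],
    "start":            [0, 1, 2, 0, 1],
    "stop":             [1, 2, 3, 1, 2],
    "forward_velocity": [0.3, 0.5, 0.1, 0.7, 0.6],
    "stationary_hours": [10, 2, 14, 1, 3],
    "event":            [0, 0, 1, 0, 0],  # failure occurs only in a subject's last row
})
# this frame would then be fit with, e.g.:
# from lifelines import CoxTimeVaryingFitter
# CoxTimeVaryingFitter().fit(long_df, id_col="id", event_col="event",
#                            start_col="start", stop_col="stop")
```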

Hey, I have a question concerning the concordance index. I want to use my predicted cumulative hazard functions as the predicted scores for computing the concordance index. Is it right to sum up the CHF of each sample and take the negative of that as the predicted score?

Hello. I'm trying to replicate the Weibull AFT model prediction section in the lifelines docs, but `predict_survival_function` returns all NaNs. Any thoughts on this? The code I used is:

```
from lifelines import WeibullAFTFitter
from lifelines.datasets import load_rossi
rossi_dataset = load_rossi()
aft = WeibullAFTFitter()
aft.fit(rossi_dataset, duration_col='week', event_col='arrest')
X = rossi_dataset.loc[:10]
aft.predict_survival_function(X)
```

@d-seki yes, that's right; the null hypothesis is that beta == 0.

@julianspaeth depends on the model. Recall that the c-index *only* depends on the ranking of values. For the Cox model, summing the cumulative hazard won't change the ranking, so it won't matter which you use. For an AFT model, it may change the ranking.
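To see the ranking-only property, here is a simplified sketch of the c-index (lifelines ships the real implementation as `lifelines.utils.concordance_index`; this version skips tied event times for brevity):

```python
import itertools

def c_index(times, scores, observed):
    # fraction of comparable pairs where the higher score goes with
    # the longer survival time (tied scores count as half)
    num, den = 0.0, 0
    for (ti, si, ei), (tj, sj, ej) in itertools.combinations(
            zip(times, scores, observed), 2):
        if ti == tj:
            continue                    # simplified: skip tied times
        if (ti < tj and not ei) or (tj < ti and not ej):
            continue                    # shorter time censored: not comparable
        den += 1
        if (ti < tj) == (si < sj):
            num += 1.0
        elif si == sj:
            num += 0.5
    return num / den
```

Because only rankings matter, any monotone transform of the scores, e.g. negating a summed CHF versus negating the CHF at a fixed time, gives the same c-index whenever it induces the same ordering.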

Alternatively, you can choose a point in time, and use the CHF at that