    Cameron Davidson-Pilon
    @CamDavidsonPilon
    cc @githubhsss ^
    Paul Zivich
    @pzivich
    They are equivalent, but there is a formula to convert between the estimate from the Weibull AFT and the Weibull HR: \beta_{PH} = -\beta_{AFT} \cdot \sigma, where \sigma is the scale (this depends on how you have the Weibull factored; I think lifelines might be slightly different).
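    A concrete version of that conversion, assuming the common factorization S(t \mid x) = \exp(-(t/\lambda)^\rho) with \lambda = \exp(\beta_0 + \beta_{AFT} x) (this factorization is an assumption, not necessarily lifelines' own): the hazard is h(t \mid x) = \rho t^{\rho - 1} \lambda^{-\rho}, so the log hazard ratio per unit of x is \beta_{PH} = -\rho \, \beta_{AFT}. Writing \sigma = 1/\rho for the scale of \log T gives \beta_{PH} = -\beta_{AFT} / \sigma; other factorizations move \sigma around, which is the caveat above.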
    fuyb1992
    @fuyb1992
    I want to get the confidence interval of the predicted median for the Weibull model, and I wrote some code to get it, but I'm not sure if this is correct. Here is my code:
    import numpy as np
    from lifelines import WeibullFitter

    class MyWeibullFitter(WeibullFitter):
        @property
        def median_confidence_interval_(self):
            '''Confidence interval of the median; must be called after fit and plot.'''
            if self.median_ != np.inf:
                # evaluate the survival-function CI at the median time only
                self.timeline = np.linspace(self.median_, self.median_, 1)
                return self.confidence_interval_survival_function_
            return None
    Thank you for your time!
    githubhsss
    @githubhsss
    @pzivich Thanks for your answer~
    @CamDavidsonPilon May I ask which book is the screenshot of Figure 4.1 from? Newbie at survival analysis and want to learn more~
    Cameron Davidson-Pilon
    @CamDavidsonPilon
    @githubhsss it's from a thesis, which is a pretty nice intro to a lot of common models: https://harvest.usask.ca/bitstream/handle/10388/etd-03302009-140638/JiezhiQiThesis.pdf
    githubhsss
    @githubhsss
    @CamDavidsonPilon Thanks a lot!
    Cameron Davidson-Pilon
    @CamDavidsonPilon
    @fuyb1992 you can do something like this:
    from lifelines.utils import median_survival_times 
    
    median_survival_times(self.confidence_interval_survival_function_)
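    A fuller, minimal end-to-end sketch of that suggestion (the durations here are hypothetical):

    import numpy as np
    from lifelines import WeibullFitter
    from lifelines.utils import median_survival_times

    # hypothetical right-censored durations and event indicators
    T = np.array([5.0, 8.0, 12.0, 20.0, 25.0, 7.0, 14.0, 30.0])
    E = np.array([1, 1, 0, 1, 0, 1, 1, 0])

    wf = WeibullFitter().fit(T, E)

    # the medians of the upper/lower survival bounds give a CI for the median
    print(wf.median_)
    print(median_survival_times(wf.confidence_interval_survival_function_))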
    Cameron Davidson-Pilon
    @CamDavidsonPilon
    This actually isn't the most efficient way to compute the confidence intervals, but I think I'll expose a better way in the future
    (though it is pretty efficient, just not the most efficient)
    Cameron Davidson-Pilon
    @CamDavidsonPilon
    efficiency in the statistical sense, not performance
    fuyb1992
    @fuyb1992
    @CamDavidsonPilon Thank you for your answer!!! I tried it: it only works for data with S(t) <= 0.5 and returns an interval in days, but for data with S(t) > 0.5 it returns None.
    fuyb1992
    @fuyb1992
    @CamDavidsonPilon I'm new to survival analysis, so please excuse me if I'm wrong. I'm confused after reading the wiki and some papers about the confidence interval of the survival function for parametric models; it would be a great help if you could give some references or documents about that!! Thanks a lot!
    githubhsss
    @githubhsss
    @CamDavidsonPilon
    Thanks for sharing the thesis again~
    I'm dealing with some repeated-events data (machine failure times). Since a machine may have several failures and different machines have different numbers of failures, I think it's necessary to account for repeated events and heterogeneity. Will frailty models help? Or any other advice? (^_^)/
    Cameron Davidson-Pilon
    @CamDavidsonPilon
    Yuck, Gitter is being messy and posting my edited messages much later than originally posted. sorry sorry
    @fuyb1992 ah yes, you may want to keep your if self.median_ != np.inf check
    @githubhsss frailty is one solution, though it's not in lifelines (it is in R's survival). Another option is to use cluster_col in CoxPHFitter: https://lifelines.readthedocs.io/en/latest/Examples.html#correlations-between-subjects-in-a-cox-model. Another solution is to stratify per machine in the CoxPHFitter.
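    A minimal sketch of those two CoxPHFitter options (the dataframe and column names here are hypothetical):

    import pandas as pd
    from lifelines import CoxPHFitter

    # hypothetical repeated-events data: one row per failure spell,
    # with machine_id identifying which machine the row came from
    df = pd.DataFrame({
        "machine_id": [1, 1, 2, 2, 3, 3],
        "duration":   [5.0, 3.0, 8.0, 2.0, 10.0, 4.0],
        "event":      [1, 1, 1, 0, 1, 1],
        "load":       [0.4, 0.6, 0.3, 0.9, 0.7, 0.5],
    })

    # option 1: cluster_col gives robust (sandwich) standard errors
    # that account for correlation between rows from the same machine
    cph = CoxPHFitter()
    cph.fit(df, duration_col="duration", event_col="event", cluster_col="machine_id")

    # option 2: stratify per machine, so each machine gets its own baseline hazard
    cph_strata = CoxPHFitter()
    cph_strata.fit(df, duration_col="duration", event_col="event", strata=["machine_id"])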
    fuyb1992
    @fuyb1992
    Thanks a lot! I'm trying to understand the confidence interval of the survival function for parametric models. The Taylor-expansion method is mentioned a lot, and the Jacobian-vector product is used in the lifelines code; I'm confused about the relationship between them. It would be a great help if you could give some references or documents about the implementation. Thank you for your time!!
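    The connection, in sketch form (standard MLE asymptotics, not necessarily lifelines' exact code path): the delta method is a first-order Taylor expansion of the survival function in the parameters,
    \operatorname{Var}(S(t; \hat\theta)) \approx \nabla_\theta S(t; \hat\theta)^T \, \hat\Sigma \, \nabla_\theta S(t; \hat\theta),
    where \hat\Sigma is the inverse observed Fisher information (the inverse Hessian of the negative log-likelihood). An autodiff library evaluates the gradient terms \nabla_\theta S(t; \hat\theta) over the whole timeline as Jacobian-vector products, rather than building the Jacobian explicitly, which is why both ideas show up in the code.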
    Cameron Davidson-Pilon
    @CamDavidsonPilon
    I'd be happy to, as it is something I'm really excited about. Let me type something up tomorrow
    fuyb1992
    @fuyb1992
    Thank you so much, I'm looking forward to it!!
    Cameron Davidson-Pilon
    @CamDavidsonPilon
    let me know if you have questions about it
    :wave: A minor release, 0.20.4, is available: bug fixes, improvements for large datasets in AFT models, and left truncation in AFT models.
    https://github.com/CamDavidsonPilon/lifelines/releases/tag/v0.20.4
    Cameron Davidson-Pilon
    @CamDavidsonPilon
    Please let me know if you are having install problems w.r.t. the 0.20.4 release.
    Also, I'm working on a new survival regression model. The original motivation was the predictable behaviour of SaaS companies' customer churn, but it's generally a very flexible model. Have a look here if you're interested; I'm looking for feedback on it: https://nbviewer.jupyter.org/gist/CamDavidsonPilon/ce93dc24947c45b034402edc657aa6eb
    fuyb1992
    @fuyb1992
    Thank you very much for your answer, which explains the delta method for parametric models clearly!!
    githubhsss
    @githubhsss
    @CamDavidsonPilon Thanks~
    githubhsss
    @githubhsss
    @CamDavidsonPilon
    I have been reading the thesis you recommended. It helps a lot, though I still have lots of questions...
    I tried Cox and WeibullAFT, but the concordance was only 0.53. Does this mean the models fit unacceptably? What is the reference for the 0.55-0.7 range? Besides concordance, can I directly compare log-likelihoods? I have no idea about goodness of fit and model selection...
    Cameron Davidson-Pilon
    @CamDavidsonPilon

    Disappointingly, 0.53 is on the low end. Have you tried a LogNormalAFT? It can fit some datasets better.

    What is the reference for the 0.55-0.7 range?

    I think I saw it in Frank H.'s work, maybe his blog?

    You can't compare CoxPH and WeibullAFT log-likelihood values, no, mostly because the CoxPH uses a partial likelihood.

    I recently added some resources here to help with model selection between CoxPH and parametric models: https://lifelines.readthedocs.io/en/latest/Survival%20Regression.html#parametric-vs-semi-parametric-models
    It's also very possible you are missing interactions or non-linear effects in your models.
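    A minimal sketch of comparing two fully parametric regression fits, whose log-likelihoods are comparable since both use the full likelihood on the same data (the dataframe is hypothetical and the attribute names follow recent lifelines versions):

    import pandas as pd
    from lifelines import WeibullAFTFitter, LogNormalAFTFitter

    # hypothetical data: duration "T", event flag "E", one covariate "x"
    df = pd.DataFrame({
        "T": [5.0, 8.0, 12.0, 20.0, 25.0, 7.0, 14.0, 30.0],
        "E": [1, 1, 0, 1, 0, 1, 1, 0],
        "x": [0.4, 0.6, 0.3, 0.9, 0.5, 0.7, 0.2, 0.8],
    })

    wf = WeibullAFTFitter().fit(df, duration_col="T", event_col="E")
    lnf = LogNormalAFTFitter().fit(df, duration_col="T", event_col="E")

    # a higher log-likelihood (or lower AIC) favors that parametric family
    print(wf.log_likelihood_, lnf.log_likelihood_)
    print(wf.concordance_index_, lnf.concordance_index_)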
    githubhsss
    @githubhsss
    @CamDavidsonPilon Thanks for your answer~ I've got to keep working on it...
    Manon Wientjes
    @manonww
    Hi @CamDavidsonPilon How do you ensure that lambda and rho are greater than 0 when you fit a Weibull distribution using the WeibullFitter? You do not set the bounds as in the LogNormalFitter?
    Cameron Davidson-Pilon
    @CamDavidsonPilon
    @manonww good observation. The bounds, when not specified, are set to be always positive
    hgfabc
    @hgfabc
    Hi, I'm quite new to using lifelines and I stumbled upon errors while executing. I was wondering: when using the cph.fit() method, does it omit missing values/NaNs? Or do I have to clean the dataframe first? Thanks.
    Manon Wientjes
    @manonww
    @CamDavidsonPilon Thanks! I was also wondering why the Weibull distribution is not defined at 0. According to Wikipedia it is defined? https://en.m.wikipedia.org/wiki/Weibull_distribution
    Cameron Davidson-Pilon
    @CamDavidsonPilon
    @manonww it is defined at 0, but the probability of an event there is nil (hence why we reject any 0 durations - they are probably malformed data). Is there a place in lifelines where the Weibull is not defined at 0? (Maybe in my docs?)
    @hgfabc welcome! It does not omit or drop NaNs; that's up to you to handle first.
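    A minimal pre-cleaning sketch for that (the column names are hypothetical):

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    # hypothetical dataframe containing NaNs and a zero duration
    df = pd.DataFrame({
        "T": [5.0, 0.0, 12.0, np.nan, 25.0, 9.0, 14.0, 7.0],
        "E": [1, 1, 0, 1, 0, 1, 0, 1],
        "x": [0.4, 0.6, np.nan, 0.9, 0.5, 0.2, 0.8, 0.3],
    })

    df = df.dropna()      # lifelines raises an error on NaNs, so drop or impute first
    df = df[df["T"] > 0]  # zero durations are rejected as malformed
    CoxPHFitter().fit(df, duration_col="T", event_col="E")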
    hgfabc
    @hgfabc
    @CamDavidsonPilon so if my original dataframe contains NaNs, it doesn't raise errors and just proceeds with them?
    Cameron Davidson-Pilon
    @CamDavidsonPilon
    It will raise an error, which you must fix @hgfabc
    hgfabc
    @hgfabc
    My mistake, I didn't read through. Thank you! @CamDavidsonPilon
    Typo, sorry @CamDavidsonPilon
    Manon Wientjes
    @manonww
    @CamDavidsonPilon No, sorry, I didn't read the error message properly. I have another question :). To determine rho and lambda of a Weibull distribution, do you use scipy optimize minimize with the L-BFGS-B method?
    Cameron Davidson-Pilon
    @CamDavidsonPilon
    @manonww yup that is correct!
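    A standalone sketch of that idea (illustrative only, not lifelines' actual internals): minimize the negative censored Weibull log-likelihood with L-BFGS-B, using bounds to keep lambda and rho positive, which is also how the earlier bounds question is handled:

    import numpy as np
    from scipy.optimize import minimize

    def negative_log_likelihood(params, T, E):
        lambda_, rho = params
        # the log-hazard counts only for observed events; log-survival for everyone
        log_hazard = np.log(rho) - np.log(lambda_) + (rho - 1) * (np.log(T) - np.log(lambda_))
        log_survival = -(T / lambda_) ** rho
        return -(E * log_hazard + log_survival).sum()

    T = np.array([5.0, 8.0, 12.0, 20.0, 25.0])  # hypothetical durations
    E = np.array([1, 1, 0, 1, 0])               # 1 = event observed, 0 = censored

    result = minimize(
        negative_log_likelihood,
        x0=np.array([10.0, 1.0]),
        args=(T, E),
        method="L-BFGS-B",
        bounds=((1e-9, None), (1e-9, None)),    # keep lambda and rho strictly positive
    )
    lambda_hat, rho_hat = result.x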
    Cameron Davidson-Pilon
    @CamDavidsonPilon
    :wave: minor release alert! Update to 0.20.5 for some bug fixes. Changelog here