##### Activity
Martin
@Mbompr
If you don't need the latest changes, maybe you can just pip install tick==0.5.0.0; the Linux binaries should be provided there.
Guilherme Borges
@guilhermeresende
Thank you very much @Mbompr ! By D you mean the number of processes or the number squared (the size of the kernel norms matrix)?
Martin
@Mbompr
You're welcome, I mean the number of processes.
I am not 100 percent sure about HawkesEM, though.
dangngohai
@dangngohai
Thank you very much @Mbompr. Does a smaller learning rate mean reducing the step size? And how can I deactivate the line search in tick.hawkes.HawkesExpKern?
I did try a gradient-based method but I could not get it to succeed. I am trying to use evolutionary optimization techniques, such as differential evolution, to maximize the likelihood function. Have you ever tried using this kind of method to estimate a Hawkes process?
Martin
@Mbompr
HawkesExpKern is for higher-level usage; if you want to fine-tune the optimization part I would suggest using the optimization API (see https://x-datainitiative.github.io/tick/modules/solver.html). Anyway, something like hawkes_exp_kern.solver_obj.linesearch = False and hawkes_exp_kern.solver_obj.step = 0.01 might be a hacky way of doing this.
And no, I am not aware of these techniques, I will have a look.
Pierre-Philippe Crepin
@PPCrepin_gitlab
@Mbompr Thanks ! The version 0.5.0.0 is working :)
dangngohai
@dangngohai
Hello @Mbompr again! I am running HawkesExpKern maximum likelihood parametric estimation and sometimes I get this runtime error: "The sum of the influence on someone cannot be negative. Maybe did you forget to add a positive constraint to your proximal operator." Could you explain how I can add a positive constraint to the proximal operator? Thank you very much.
Martin
@Mbompr
By default, when using the learner, you have a positive constraint on your proximal operator. In your case, the error means that your vector of coeffs is full of 0. This happens often when you optimize the likelihood loss. Have you tried optimizing the least-squares loss instead? The likelihood loss does not meet the traditional assumptions that make optimization algorithms converge smoothly (see https://arxiv.org/pdf/1807.03545.pdf).
dangngohai
@dangngohai
Actually, I first did least-squares estimation and then used the obtained parameters as the initial parameters for likelihood estimation. By "the vector of coeffs is full of 0", do you mean the vector of initial coeffs?
Martin
@Mbompr
That could be a good idea indeed.
I mean the vector at the failing iteration. In short, at each step you do w_i <- max(w - step * g(w), 0)_i, where g is the gradient of the loss, w is the vector of parameters, and step is the step size.
If at any step w is full of zeros, then you will get that error message.
You might try deactivating the line search, and/or trying smaller step sizes.
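The projected-gradient step described above can be illustrated in plain numpy. This is a toy sketch, not tick's implementation: a simple quadratic loss stands in for the Hawkes likelihood, and the target values are made up.

```python
import numpy as np

# Toy loss f(w) = 0.5 * ||w - target||^2, so its gradient is g(w) = w - target.
# The target is a made-up stand-in for the (unknown) optimal parameters.
target = np.array([0.5, -0.3, 1.2])

def grad(w):
    return w - target

w = np.zeros(3)   # initial parameter vector
step = 0.1        # fixed step size

for _ in range(200):
    # Projected gradient step with positivity constraint:
    # w_i <- max(w_i - step * g(w)_i, 0)
    w = np.maximum(w - step * grad(w), 0.0)

print(w)  # components with a negative target are clipped to 0
```

Note how the component whose unconstrained optimum is negative gets stuck at exactly 0; when the whole vector ends up there, that is the situation producing the error message above.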
dangngohai
@dangngohai
Thank you very much for your answer. It is clear to me now. Just one more question: how can I deactivate the line search in your library's maximum likelihood estimation code?
dangngohai
@dangngohai
@Mbompr
Martin
@Mbompr

Sorry, I missed your message. Does the previous suggestion not work?

HawkesExpKern is for higher-level usage; if you want to fine-tune the optimization part I would suggest using the optimization API (see https://x-datainitiative.github.io/tick/modules/solver.html). Anyway, something like hawkes_exp_kern.solver_obj.linesearch = False and hawkes_exp_kern.solver_obj.step = 0.01 might be a hacky way of doing this.

Charlotte Dion
@charlottedion_gitlab
Hello! Thank you for the great library! I am trying to use the HawkesEM method on a network of 249 subjects. Do you think the method can handle it? (On my computer it is sooo long.) My main concern is the visualisation of the result (I can't represent all the interaction functions). Is it possible to do a colored matrix as in the financial example? Or do you have another idea? (My goal here is to find the main connections between the 249 subjects, reducing the dimension to around 10 if possible.) Thank you for your help, Charlotte
♦♣♠♥
@PhilipDeegan
Hi Charlotte, I'll see if I can get one of the Hawkes experts to comment
paging @Mbompr
otherwise the github issues section is possibly a better place for this
Charlotte Dion
@charlottedion_gitlab
ok thank you I will go there !
dharmash
@dharmash
Hello, firstly, thanks and appreciation for the library. Two technical questions about the score function in the learner classes, e.g. tick.hawkes.HawkesExpKern. (a) What exactly does it return: log-likelihood or negative log-likelihood? (b) Further, does it average over the number of events, i.e. does it divide the overall (negative?) log-likelihood by the number of all arrivals/events in the data set being scored? Any pointers on where I could see this calculation to convince myself?
dharmash
@dharmash
Any help on the above questions? Thanks!
dharmash
@dharmash
Just to verify the log-likelihood calculation, I created a HawkesExpKern model with a constant decay (float) and all other defaults (gofit, etc.). I called fit on a data set to learn the model, which it did very fast. To check what score is doing, I tried the following: I forced the baseline to be $\lambda_i = N_i/T$, where $N_i$ is the number of events of type i and T is the largest timestamp in the data set (one realization of the event stream spanning, say, 20 event types), forced the adjacency matrix to be zero everywhere, and used a constant value for decay. For this set of parameters the model reduces to 20 independent Poisson processes, so the log-likelihood is simply $\sum_i (-\lambda_i T + N_i \log(\lambda_i))$. But calling score on this HawkesExpKern model, with the baseline and adjacency forced as above and the same data set, gives a different value: the analytical formula gives a large negative number, whereas score gave a small positive number. Do you see any problem with what I describe above? Please let me know.
@Mbompr If possible, kindly let me know
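The analytical Poisson log-likelihood in the message above is easy to evaluate directly. The counts and horizon below are made-up illustration values (not the dataset discussed), just to show that the formula does yield a large negative number:

```python
import numpy as np

# Illustrative event counts for a few independent event types (made up).
N = np.array([120.0, 85.0, 200.0])  # number of events per type
T = 1000.0                          # observation horizon

lam = N / T  # forced baseline lambda_i = N_i / T, as in the message above

# Log-likelihood of independent homogeneous Poisson processes:
#   sum_i ( -lambda_i * T + N_i * log(lambda_i) )
log_lik = np.sum(-lam * T + N * np.log(lam))
print(log_lik)  # a large negative number
```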
dharmash
@dharmash
Is there a way for the user to set the coeffs values on a HawkesExpKern object? The plain way of setting them results in "AttributeError: coeffs is readonly in HawkesExpKern".
sumau
@sumau
Hey @dharmash did you find the solution to the questions you were asking? I think I would find it helpful :-)
Jiawen Liu
@Jasmine2322_gitlab
Hi, I have some problems installing the tick package. I installed it with "pip install tick", but when I import the package I get the error "DLL load failed: The specified module could not be found". I have tried uninstalling and reinstalling tick but it doesn't work. I am using Python 3.7 on Windows. Could anyone please help me? Thanks!
♦♣♠♥
@PhilipDeegan
hmm it's possible we didn't update the library search paths properly for the pip windows install
if you do something like
"set PATH=C:\path\to\tick\lib;%PATH%"
not sure where the library is installed on your system
♦♣♠♥
@PhilipDeegan
alternatively I think the linux pip install should work on WSL (windows subsystem for Linux)
Jiawen Liu
@Jasmine2322_gitlab

if you do something like
"set PATH=C:\path\to\tick\lib;%PATH%"

It works, Thank you so much!!

Voltaire Vergara
@jv11699
Hey everyone, I am just wondering: how would you fit 1D data of housing transactions per day to make predictions? I tried fitting the data with HawkesExpKern, then calling learner._corresponding_simu() with the end time set to a later date (the date I want to predict). Afterwards I ran many simulations and took the average number of events. Apologies if this question is dumb... I am new to tick and Hawkes processes in general. Thanks!
the predictions were way off by the way
Martin
@Mbompr
@jv11699 It is indeed a way to go. I would not say Hawkes processes are meant to be used for prediction, but if you really want to, you can do it the way you suggest; I don't see any problem with it.
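The simulate-and-average idea can be sketched without tick at all. Below, a homogeneous Poisson process (via numpy, with a made-up rate) stands in for the fitted Hawkes model; with a real model the per-simulation counts would come from the corresponding simulation object instead:

```python
import numpy as np

rng = np.random.default_rng(0)

rate = 3.0      # events per day (made-up stand-in for a fitted intensity)
horizon = 7.0   # how many days ahead to predict
n_sims = 10_000

# Monte Carlo estimate: simulate many futures, average the event counts.
counts = rng.poisson(rate * horizon, size=n_sims)
prediction = counts.mean()
print(prediction)  # close to the true expectation rate * horizon = 21
```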
KristenMoore
@KristenMoore_gitlab
Hi everyone,
I'm new to Tick and am wondering if there are any examples of how to fit a marked multivariate Hawkes model? I'm guessing I want to use HawkesConditionalLaw.
KristenMoore
@KristenMoore_gitlab
Hi, just wondering if it's possible to constrain the adjacency matrix values when fitting HawkesSumExpKern (or any of the other Hawkes models)? I want the diagonal elements to be zero and the off-diagonal elements to be non-negative. How could I go about this?
Thanks.
dangngohai
@dangngohai
@Mbompr Hi. If I choose verbose=True in tick.hawkes.HawkesExpKern, what are obj and rel_obj in the information the solver prints? Thank you!
gcampede
@gcampede
Hello everyone. I am playing around with tick, as the R package I am using for Hawkes process modeling provides little flexibility in terms of kernels. Currently, however, I have a single question that may seem extremely dumb: how do I retrieve the estimated \alpha, \beta, and \mu? I cannot find examples in the documentation that address this point. Also, is there a way to compare models, e.g. by comparing AIC scores? Thank you in advance!
Martin
@Mbompr

Hello,
I have not played much with R so I will rather give you a Python example. I guess you can transcribe it with rpy2 afterwards.
The main thing is that we have no learner able to retrieve all these parameters, especially the beta one. But you can try several values of beta and look for the learner with the best likelihood as in the following example:

```python
import numpy as np
import itertools

from tick.dataset import fetch_hawkes_bund_data
from tick.hawkes import HawkesSumExpKern
from tick.plot import plot_hawkes_kernel_norms

timestamps_list = fetch_hawkes_bund_data()

best_score = -1e100
decay_candidates = np.logspace(0, 6, 6)
for i, decays in enumerate(itertools.combinations(decay_candidates, 3)):
    decays = np.array(decays)
    hawkes_learner = HawkesSumExpKern(decays, verbose=False, max_iter=10000,
                                      tol=1e-10)
    hawkes_learner._prox_obj.positive = False
    hawkes_learner.fit(timestamps_list)

    hawkes_score = hawkes_learner.score()
    if hawkes_score > best_score:
        print('obtained {}\n with {}\n'.format(hawkes_score, decays))
        best_hawkes = hawkes_learner
        best_score = hawkes_score

plot_hawkes_kernel_norms(best_hawkes, show=True)
```

This has been discussed earlier in the gitter thread if you want more info or in this issue https://github.com/X-DataInitiative/tick/issues/133#issuecomment-495999841.
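On the AIC question asked above (which the reply does not address): once each fitted model exposes a log-likelihood, e.g. via score(), the comparison is a one-liner. The log-likelihoods and parameter counts below are made-up illustration values, not real tick output:

```python
# AIC = 2k - 2*log_likelihood, lower is better. Made-up illustration values
# for two hypothetical fitted Hawkes models (not real tick output).
models = {
    "exp kernel":     {"log_lik": -1210.4, "n_params": 6},
    "sum-exp kernel": {"log_lik": -1195.7, "n_params": 12},
}

aics = {name: 2 * m["n_params"] - 2 * m["log_lik"] for name, m in models.items()}
for name, aic in sorted(aics.items(), key=lambda kv: kv[1]):
    print(name, round(aic, 1))
```

Here the sum-exp model wins despite having more parameters, because its likelihood gain outweighs the complexity penalty.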

gcampede
@gcampede
@Mbompr thank you very much for your timely answer. Will certainly explore this!
lukeCLarter
@lukeCLarter
Hi, I am reproducing some of the analysis done in this paper (https://royalsocietypublishing.org/doi/full/10.1098/rsif.2016.0296), which models the data as a Hawkes process using a similar approach. The study looks at onset times of calls of 4 birds and models how much influence the calls of each bird have on their neighbours using point process analysis. I have used HawkesEM and have found similar patterns of stimulation of calling among the birds in the dataset. However, in addition to call stimulation, the analysis in the paper is also able to model negative influence that birds have on one another, i.e. when the calls of one bird inhibit calling by another. When using HawkesEM, it seems the influence metric bottoms out at 0. Is there any way to expand the model so that I can model inhibition in the way I have described?
lugueraRepo
@lugueraRepo
Hi all, and thank you for this wonderful package. I want to "predict" x times, so first I build an exponential-kernel learner and fit it with real data. Now, if I want to predict, how can I call _corresponding_simu() and set the end_times to predict? Thank you very much!!
ANittoor
@ANittoor
Hi, I am running the finance data example at https://x-datainitiative.github.io/tick/auto_examples/plot_hawkes_finance_data.html#example-plot-hawkes-finance-data-py with my own dataset. On calling hawkes_learner.fit(), I get "ValueError: Cannot run estimation: not enough events for components [0 1]". What does this mean?
This seems to happen intermittently; I am running it on bivariate data.
ANittoor
@ANittoor
Hi,
Thanks for the great library!
I have a requirement where I need to rank the performance of different kernels on a given dataset of timestamps. The ranking is based on an algorithm that uses, among other things, the likelihood array of each kernel fit. How do I access this likelihood array? I see there is a score function in HawkesExpKern, for example, which gives me a single value of log-likelihood (is it negative log-likelihood?).
What I am looking for is:
Given a dataset of event timestamps [1, 9, 65 ... etc.] of some length n, and the probability distribution from the kernel type, how do I get the array
[NegativeLogLikelihood(events occurring at 1, 9),
NegativeLogLikelihood(events occurring at 1, 9, 65),
NegativeLogLikelihood(events occurring at 1, 9, 65, 72), ... etc.]
of length n-1?
I think score gives me the last value in this array for an exponential fit. How do I get the whole array?
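One way to get such an array is to score each growing prefix of the event list yourself. The sketch below uses a homogeneous Poisson model (with a made-up rate) as a stand-in for a fitted kernel; with tick one could analogously evaluate the learner's score on each prefix, assuming the learner lets you score an arbitrary event set:

```python
import numpy as np

events = np.array([1.0, 9.0, 65.0, 72.0])  # example timestamps from the question
rate = 0.05                                # made-up homogeneous intensity

def poisson_nll(ts, end_time, rate):
    # Negative log-likelihood of the events ts observed on [0, end_time]
    # under a homogeneous Poisson process with the given rate.
    return rate * end_time - len(ts) * np.log(rate)

# One NLL per growing prefix, each evaluated up to its latest event:
nll_array = [poisson_nll(events[:k], events[k - 1], rate)
             for k in range(2, len(events) + 1)]
print(nll_array)  # length n - 1
```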
Fran Paruas
@luguera:matrix.org
Hi all, if I have a list of timestamps and a mark at each timestamp, how can I fit a Hawkes exponential kernel? Thank you very much!!
AA102020
@AA102020
Hi All,
AA102020
@AA102020

I want to use HawkesExpKern to get the estimated parameters of the exponential kernel based on the real data shown below. I would then like to simulate a Hawkes process using hawkes.SimuHawkes.
My challenges are:
(1) How do I get decays and adjacency for HawkesExpKern?
(2) How do I set this up with t_values = the date column in my data, and y_values = the corresponding values in Activity 1 and Activity 2 shown below?

I have two interdependent activities that I want to model using tick's Hawkes. My data is in the form of:
| ID# | Date | Activity 1 | Activity 2 |
|-----|-----------|---|----|
| 0 | 8/26/2006 | 1 | 0 |
| 1 | 3/31/2007 | 5 | 1 |
| 2 | 5/20/2007 | 1 | 1 |
| 3 | 5/25/2007 | 1 | 2 |
| 4 | 6/3/2007 | 1 | 6 |
| 5 | 6/18/2007 | 1 | 3 |
| 6 | 6/19/2007 | 1 | 10 |
| 7 | 7/11/2007 | 1 | 1 |
| 8 | 7/19/2007 | 1 | 0 |
| 9 | 7/26/2007 | 1 | 2 |
| 10 | 7/31/2007 | 1 | 1 |
| 11 | 8/4/2007 | 1 | 2 |
| 12 | 8/8/2007 | 1 | 1 |

Any advice, input, or guidance on the above would be appreciated.

Yuchao-Dong
@Yuchao-Dong
Hi, all. I have some doubt about the computation of the log-likelihood. It seems to be different from the result obtained with another package (hawkeslib). For example, assume there is just one event, happening at t=0.5. I set mu=0.05 and alpha=0. The result should be log(mu) - mu*0.5 = -3.02. This is confirmed by hawkeslib, but tick shows -2.52. Could you explain the difference?
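For reference, the closed-form value quoted in the question above can be checked in two lines (mu = 0.05, alpha = 0, one event at t = 0.5, observed on [0, 0.5], so the model reduces to a homogeneous Poisson process):

```python
import math

mu, T, n_events = 0.05, 0.5, 1  # values from the question above

# Homogeneous Poisson log-likelihood: n * log(mu) - mu * T
log_lik = n_events * math.log(mu) - mu * T
print(round(log_lik, 2))  # -3.02
```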
jona-sch
@jona-sch
Hey. I am using tick to study causality between events. When using HawkesExpKern and its fit() method, it takes a (sometimes very) long time before launching the solver (and printing the line 'Launching the solver AGD...'). My data is around 50MB, so not excessively heavy. Is this normal?
I'm using tick's latest release (0.7.1), installed with pip on a CentOS machine.