Hi everyone!
Maybe this is not the place to ask, but if anyone can point me in the right direction it would be very nice.
I am trying to fit a 5-parameter (a, b, c, d, e) model, where one of the parameters is constrained by another, let's say:
0 < d < 1
e < |d|
I have only created the zfit.Parameters and put the limits such that the ranges accessible to them are valid, again, let's say:
d = zfit.Parameter('d', 0.5, 0.3, 1.0, 0.01)
e = zfit.Parameter('e', 0.1, 0.0, 0.3, 0.01)
It has been working well so far, but I think it is not the right way to do it.
So my question is, what is the correct way to deal with this kind of constraint?
Cheers
Hey, regarding where to ask: if you can, it is preferred to ask on StackOverflow (maybe you can even post this question there and I'll add my answer, just to make it more easily accessible for others).
I would use these limits with caution, as they block the variables; ideally, the limits should be far away from the final value. There are two ways. You can impose the constraint "mathematically", as a logical consequence: define one parameter in terms of another using a composed parameter (a parameter that is a function of other parameters). If possible, this should be the preferred way.
The other option is to impose the restriction on the likelihood with an additional term. This, however, can have repercussions, since you modify the likelihood: the minimizer will find a minimum, but maybe not the one you were looking for. For this you can use SimpleConstraint and add a penalty term to the likelihood if any of the conditions above is violated (e.g. tf.cast(tf.greater(d, 1), tf.float64) * 100.). Also make sure that Minuit is run with use_minuit_grad.
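As a minimal pure-Python sketch of both ideas (the helper names `e_from` and `penalty` are made up for illustration; in zfit the first would live inside a composed parameter and the second inside a SimpleConstraint as a TF expression):

```python
# Option 1 sketch: reparametrize so the constraint holds by construction.
# Instead of fitting e directly, fit s in (0, 1) and define e = s * |d|;
# then e < |d| is satisfied automatically.
def e_from(d, s):
    return s * abs(d)

# Option 2 sketch: a penalty added to the loss when a constraint is violated.
def penalty(d, e, weight=100.0):
    out = 0.0
    if not (0.0 < d < 1.0):   # violates 0 < d < 1
        out += weight
    if not (e < abs(d)):      # violates e < |d|
        out += weight
    return out

print(e_from(0.8, 0.5))   # 0.4   -> always inside the allowed region
print(penalty(0.5, 0.1))  # 0.0   -> no penalty
print(penalty(1.2, 0.1))  # 100.0 -> d out of range
```

The penalty distorts the likelihood near the boundary, which is why the reparametrization is the preferred option when it is possible.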
Hey, the syntax is basically the same. The only difference is that you now need to integrate out two other dimensions: instead of providing 1D limits, you provide 2D limits:
# assuming obs{1,2,3}
obs1 = zfit.Space('obs1', (-1, 1))
obs2 = zfit.Space('obs2', (-10, 3))
obs12 = obs1 * obs2
pdf.partial_integrate(...) # same as 1D case, just with a 2d space as the integration limits
In case this doesn't work, you can also ping me directly
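To illustrate numerically what "integrating out" a dimension means, here is a plain-Python sketch with a toy 2D density (the names `density` and `partial_integral` are made up; this is not zfit's implementation):

```python
import math

def density(x, y):
    # toy unnormalized 2D density standing in for a PDF over (obs1, obs2)
    return math.exp(-x ** 2) * math.exp(-0.5 * y ** 2)

def partial_integral(f, x, lo=-10.0, hi=3.0, n=10_000):
    """Integrate f(x, y) over y in (lo, hi) with the midpoint rule,
    leaving a function of x only (the 'partial integral')."""
    h = (hi - lo) / n
    return sum(f(x, lo + (i + 0.5) * h) for i in range(n)) * h

# At x = 0 the Gaussian in y integrates to sqrt(2*pi) * Phi(3) ~ 2.503.
print(round(partial_integral(density, 0.0), 3))  # 2.503
```

`pdf.partial_integrate` does the same thing, only with the limits given as a multi-dimensional Space such as `obs12` above.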
Hi folks. I'm trying to run an unbinned 3D angular fit in zfit, where the input data is a sample with per-event sWeights assigned from a separate mass-peak fit. I think I'm running into issues with negatively weighted events in some regions of the phase space, as zfit gives the error:
Traceback (most recent call last):
File "python/fitting/unbinned_angular_fit.py", line 443, in <module>
main()
File "python/fitting/unbinned_angular_fit.py", line 440, in main
run_fit(args.Syst, args.Toy)
File "python/fitting/unbinned_angular_fit.py", line 349, in run_fit
result = minimizer.minimize(nll)
File "/home/dhill/miniconda/envs/ana_env/lib/python3.7/site-packages/zfit/minimizers/baseminimizer.py", line 265, in minimize
return self._hook_minimize(loss=loss, params=params)
File "/home/dhill/miniconda/envs/ana_env/lib/python3.7/site-packages/zfit/minimizers/baseminimizer.py", line 274, in _hook_minimize
return self._call_minimize(loss=loss, params=params)
File "/home/dhill/miniconda/envs/ana_env/lib/python3.7/site-packages/zfit/minimizers/baseminimizer.py", line 278, in _call_minimize
return self._minimize(loss=loss, params=params)
File "/home/dhill/miniconda/envs/ana_env/lib/python3.7/site-packages/zfit/minimizers/minimizer_minuit.py", line 179, in _minimize
result = minimizer.migrad(**minimize_options)
File "src/iminuit/_libiminuit.pyx", line 859, in iminuit._libiminuit.Minuit.migrad
RuntimeError: exception was raised in user function
User function arguments:
Hm_amp = +nan
Hm_phi = +0.06
Hp_phi = -0.07
Original python exception in user function:
RuntimeError: Loss starts already with NaN, cannot minimize.
File "/home/dhill/miniconda/envs/ana_env/lib/python3.7/site-packages/zfit/minimizers/minimizer_minuit.py", line 121, in func
values=info_values)
File "/home/dhill/miniconda/envs/ana_env/lib/python3.7/site-packages/zfit/minimizers/baseminimizer.py", line 47, in minimize_nan
return self._minimize_nan(loss=loss, params=params, minimizer=minimizer, values=values)
File "/home/dhill/miniconda/envs/ana_env/lib/python3.7/site-packages/zfit/minimizers/baseminimizer.py", line 107, in _minimize_nan
raise RuntimeError("Loss starts already with NaN, cannot minimize.")
I can avoid this error by restricting one of the angular observable ranges slightly, to avoid the region with few data events where some data is weighted negatively (signal is over-subtracted by the sWeights). But I wondered if there is another way around this in zfit? Perhaps the UnbinnedNLL method explicitly requires positive events, so the negatively weighted data points could be set to zero or a small positive value instead? Any help would be great! Thanks, Donal
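As a pure-Python sketch of how negative weights enter a weighted NLL and what clipping them would do (the helper `weighted_nll` is hypothetical; note that the log term itself is finite for negative weights — the NaN usually appears once the minimizer wanders into a region where the PDF goes non-positive — and clipping changes the statistical meaning, so it is a diagnostic rather than a fix):

```python
import math

def weighted_nll(probs, weights):
    """Weighted negative log-likelihood: -sum_i w_i * log(p_i)."""
    return -sum(w * math.log(p) for p, w in zip(probs, weights))

probs = [0.3, 0.5, 0.2]           # per-event PDF values
weights = [1.2, 0.8, -0.4]        # sWeights; the last event is over-subtracted
clipped = [max(w, 0.0) for w in weights]  # negative weights set to zero

print(round(weighted_nll(probs, weights), 4))  # 1.3555
print(round(weighted_nll(probs, clipped), 4))  # 1.9993
```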
Yes, there is unfortunately a problem with the conda build; the latest version there is 0.5.2 (which I would not recommend). The problem, in short, is that the upstream dependencies are not available in the versions that should be used (tensorflow-probability is stuck at 0.8, and TF-graphics has no support at all).
So unfortunately, currently the golden way is to create a conda env, install e.g. TensorFlow there, and then install zfit with pip.
# PROD PDFs
nSigSig = zfit.Parameter('nSigSig', 10000, 0, len(df))
signal = zfit.pdf.ProductPDF(pdfs=[BsCB, PhiCB])
signalExtended = signal.create_extended(nSigSig)

nSigBkg = zfit.Parameter('nSigBkg', 10000, 0, len(df))
signalbkg = zfit.pdf.ProductPDF(pdfs=[BsCB, PhiBkgComb])
signalBkgExtended = signalbkg.create_extended(nSigBkg)

nBkgSig = zfit.Parameter('nBkgSig', 10000, 0, len(df))
bkgsignal = zfit.pdf.ProductPDF(pdfs=[BsBkgComb, PhiCB])
bkgSignalExtended = bkgsignal.create_extended(nBkgSig)

nBkgBkg = zfit.Parameter('nBkgBkg', 10000, 0, len(df))
bkg = zfit.pdf.ProductPDF(pdfs=[BsBkgComb, PhiBkgComb])
bkgExtended = bkg.create_extended(nBkgBkg)

model = zfit.pdf.SumPDF(pdfs=[signalExtended,
                              signalBkgExtended,
                              bkgSignalExtended,
                              bkgExtended])
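In a sum of extended PDFs like this, each component is weighted by its yield, i.e. component i contributes with fraction n_i / sum(n). A quick plain-Python check of that bookkeeping (the yield values are made up):

```python
# Fractions of a SumPDF built from extended components: n_i / sum(n).
yields = {'SigSig': 12000.0, 'SigBkg': 3000.0, 'BkgSig': 4000.0, 'BkgBkg': 1000.0}
total = sum(yields.values())
fractions = {name: n / total for name, n in yields.items()}

print(total)                # 20000.0  -> total expected events
print(fractions['SigSig'])  # 0.6     -> share of the first component
```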
Hi all. I wish to generate toys in 4D using the product of a 3D angular PDF and a B mass PDF. Then I'd like to fit the B mass only of the toy sample in order to derive sWeights, then use those sWeights to do a 3D fit to the angles. Is there a recommended way to generate toys in N dimensions, then fit a subset of dimensions after?
For toys I have generated previously, I have used:
data = pdf.create_sampler(n=10000, fixed_params=True)
nll = zfit.loss.UnbinnedNLL(model=pdf, data=data)
minimizer = zfit.minimize.Minuit(strategy=DefaultToyStrategy(), tolerance=1e-5)
data.resample()
result = minimizer.minimize(nll)
where I do the resampling after defining the nll in terms of the full ND PDF and dataset. But here I would like to generate in 4D, then define the nll in terms of just 1D and do the fit. Any help would be super :)
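The generate-in-ND-then-fit-a-subset pattern is, at its core, column selection. A self-contained sketch with a fake 4D toy (the names and random numbers are placeholders standing in for the actual PDF sample):

```python
import random

random.seed(42)

# fake 4D toy: (mB, cos_theta_l, cos_theta_k, phi) per event
toy = [(random.gauss(5.28, 0.02),
        random.uniform(-1.0, 1.0),
        random.uniform(-1.0, 1.0),
        random.uniform(-3.1416, 3.1416))
       for _ in range(2000)]

mB = [row[0] for row in toy]        # 1D subset used for the mass fit
angles = [row[1:] for row in toy]   # the 3 angles kept for the later 3D fit

mean_mB = sum(mB) / len(mB)
print(round(mean_mB, 2))  # ~5.28
```

In zfit the subset would be taken from the sampled Data (or from a pandas conversion of it) and fed into a 1D loss.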
Hi, I'm trying to use zfit to fit data in a loop, where in each iteration the data changes (cut at a different point), so I need to create a new NLL each time. Is there a recommended way to do this? At the moment it looks like everything is being stored, as the fits get gradually slower and I can see my memory usage going up. Sorry if this is already explained somewhere; I saw there is an example for fitting toys, but I am not sampling here.
Hey, so one way to do this is to call zfit.run.set_graph_mode(False) at the beginning. This way zfit may run slightly slower overall, but it won't slow down over time; in fact, it then runs like numpy.
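The slowdown pattern can be mimicked without zfit: in graph mode, every newly created loss traces a new graph that stays cached, much like an unbounded memoization cache (a toy analogy, not zfit's actual cache implementation):

```python
import functools

@functools.lru_cache(maxsize=None)
def trace_graph(loss_id):
    # stands in for compiling/tracing a computation graph for a new loss
    return f'graph-{loss_id}'

for i in range(5):
    trace_graph(i)  # each fresh loss adds another cached graph

print(trace_graph.cache_info().currsize)  # 5 -- the cache only ever grows
```

Turning graph mode off avoids building (and retaining) a new graph per loss, at the cost of eager, numpy-like execution speed.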
Hi all. I wish to generate toys in 4D using the product of a 3D angular PDF and a B mass PDF. Then I'd like to fit the B mass only of the toy sample in order to derive sWeights, then use those sWeights to do a 3D fit to the angles. Is there a recommended way to generate toys in N dimensions, then fit a subset of dimensions after?
There are a few ways: zfit.run.set_graph_mode(False) and doing things manually can work for sure. E.g. sample, then take a subset (with_obs) and create a new loss (or even convert it to pandas or something); that should work. If the fit takes a long time, you can also leave graph mode on and clear the cache each time with zfit.run.clear_graph_cache (the same is btw true for @srishtibhasin). Another option is to take the subset with with_obs outside of the loop, create the loss with this smaller dataset and then loop (not sure tbh, as there is some redesigning going on to make things cleaner). You can also write me directly in case it does not work.
Hi @mayou36 - thanks for the info. I have just tried:
data = pdf.create_sampler(n=10000, fixed_params=True)
data.resample()
which generates a toy 4D dataset. Is it OK to do things in this order, i.e. do the resampling directly after the create_sampler? If so, I can just take the mB variable from my 4D toy sample, use it in a B mass fit to calculate the sWeights, then use those along with the angles from the 4D toy. I assume resample() is OK to use in this way, without any definition of the nll beforehand.
Yes, sure! Actually, create_sampler is just a "toy-optimized" version of sample. If you use it inside a loop and just call resample, it's more efficient, but you can also directly use sample in this case: data = pdf.sample(...)
OK that's good to know, thanks!
Hi @mayou36, I love your package! I am using it on LHCb open data (B -> kaon) for fitting purposes. I used a combined background + signal (exp + gauss) model for fitting, and it works without any issue. Now I am trying to get the total background and signal yields from their respective models; for that I used the normalization() method. It didn't work for the gauss model: I was hoping to get a value of 45, but instead I got 1.
Any help would be appreciated <3.
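On why a value of 1 comes back: a PDF is normalized to unity over its norm range, so the event count comes from multiplying the fitted yield by the fraction of the PDF inside the window of interest. A plain-Python sketch with a unit Gaussian (the yield of 45 is taken from the question purely for illustration; this is not zfit's API):

```python
import math

def gauss_cdf(x, mu=0.0, sigma=1.0):
    """CDF of a Gaussian, used to get the fraction of the PDF in a window."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

signal_yield = 45.0                      # fitted yield from the extended fit
frac = gauss_cdf(2.0) - gauss_cdf(-2.0)  # fraction of the PDF in (-2, 2)

print(round(frac, 4))                 # 0.9545 -- the PDF itself integrates to 1
print(round(signal_yield * frac, 1))  # 43.0   -- expected events in the window
```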
Hi, unfortunately at the moment there is no out-of-the-box solution to save a zfit model (however, someone has been working on this for a while; it is on the roadmap of zfit). There are two ways to do it:
If you don't need to do large fits, or many of them, I would go for the first option.