Jonas Eschle
@jonas-eschle

The other thing is the iminuit v2 release that happened yesterday (which is great news!). However, it has a breaking API, so the solution is to install iminuit<2 (or use the development version of zfit for now, which has the requirement updated).

To do the former, install zfit, then run `pip install -U "iminuit<2"` (the quotes keep the shell from interpreting `<` as a redirect); that should do the trick

Жук Михаил
@Shebenkae
Hello, sorry to bother you, but I have another problem. I am using a SumPDF with a Gauss and an exponential, creating data = model.create_sampler(n_sample, limits=obs) and then calling data.resample() inside for i in range(0, 50):. However, I get warnings like WARNING:tensorflow:11 out of the last 11 calls to <function BaseModel.sample.<locals>.run_tf at 0x7f96d2833d08> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings..., and my program eats a lot of RAM (about 12 GB for 30 cycles). This is a big problem for me; can you give me advice on how to handle or control this so the program uses less RAM, since the available amount is limited?
Jonas Eschle
@jonas-eschle
Sure! The warnings should be fine (I assume they only happen the first time, correct?). Does the memory usage increase significantly after the first run?
If so, maybe references to objects are kept in the FitResult. In that case it can be worthwhile to save the information you need from each FitResult and delete it after the run; this should free up memory
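A minimal sketch of that pattern, assuming a toy-study loop; `minimizer`, `nll`, and `sampler` stand in for your actual zfit objects:

```python
import gc

def run_toys(minimizer, nll, n_toys, sampler):
    """Run toy fits, keeping only plain Python numbers from each FitResult."""
    summaries = []
    for _ in range(n_toys):
        sampler.resample()
        result = minimizer.minimize(nll)
        # save only what you need, as plain Python values
        summaries.append({
            "fmin": float(result.fmin),
            "valid": bool(result.valid),
        })
        del result    # drop the FitResult so its references can be released
        gc.collect()  # encourage immediate cleanup
    return summaries
```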
Жук Михаил
@Shebenkae
I get 3 warnings on every resampling. And the memory increases linearly until it is completely filled. I didn't have this problem when I used only the Gauss model, without the exponential
Жук Михаил
@Shebenkae
Can you give me advice on how to delete the objects after every run?
Jonas Eschle
@jonas-eschle

I get 3 warnings on every resampling. And the memory increases linearly until it is completely filled. I didn't have this problem when I used only the Gauss model, without the exponential

The warnings should not occur on every resampling actually...

Jonas Eschle
@jonas-eschle

I am investigating it, many thanks for the example. One thing that is also more generally of interest: especially for smaller fits, you can also try to run things in eager mode, e.g. to do:

zfit.run.set_graph_mode(False)

...

which runs zfit in a NumPy-like mode. This can be faster for just a few fits and usually does not show memory growth (though in your case, with multiple fits, it should be faster with the default graph mode; I am investigating)

Ilya Komarov
@mozgit
Hi @mayou36, thanks for looking at @Shebenkae's issue. I'm also taking a look at it. Your suggestion of switching to eager mode works: the warnings are gone and memory usage grows much more slowly, so we can get the job done. However, it still grows, and I don't see why it should
Jonas Eschle
@jonas-eschle
That's true, I've also checked that. The good thing about eager mode is that any "normal" Python memory profiler can now be used; I can try to investigate with that
XuelongQin
@XuelongQin

In addition, I am not sure how to implement a partial integral. I have seen several examples, but they are all full integrals.

Hi, this is possible indeed! I've added an explanation in the development wiki about the logic. In short, you can register it as you do and there is an example available (also mentioned in the wiki)

Thank you, that really helps!

XuelongQin
@XuelongQin
Hi, I have a question. I am trying to construct a 3-D PDF with a TH3D efficiency. I have successfully written the custom PDF class, but I ran into a problem when trying to add an analytic integral. We calculated the full-space integral for each term in the PDF and saved the results in a TH1D. However, I don't know how to pass the results to the register_analytic_integral function.

Double_t ret = 9./(32*3.14159265) * (
                  0.75*(1-Fl)              * intPart[0]
                + Fl                       * intPart[1]
                + 0.25*(1-Fl)              * intPart[2]
                - Fl                       * intPart[3]
                + 0.5*P1*(1-Fl)            * intPart[4]
                + 0.5*sqrt(Fl-Fl*Fl)*P4p   * intPart[5]
                + sqrt(Fl-Fl*Fl)*P5p       * intPart[6]
                - sqrt(Fl-Fl*Fl)*P6p       * intPart[7]
                + 0.5*sqrt(Fl-Fl*Fl)*P8p   * intPart[8]
                + 2*(1-Fl)*P2              * intPart[9]
                - P3*(1-Fl)                * intPart[10]
                );

I want to implement the integral following this formula, so I need to use the intPart histogram inside the integral function. I want to know how to do this

XuelongQin
@XuelongQin
It seems that I have solved it. I pass the np.array generated from the intPart histogram when I initialize the custom PDF. Then, in the integral function, I can use model.intPart to get the array and define the analytic integral.
Jonas Eschle
@jonas-eschle
Yes, that seems like a good way to go, as you can access the model in the integral function
Colm Harold Murphy
@chm-ipmu
After running memory_profiler.profile (https://pypi.org/project/memory-profiler/) I see that when making ComposedParameters I get significant spikes in memory usage, from a baseline of around 500 MB to 12000 MB. After creating the ComposedParameters, the memory usage returns to normal. However, after the minimization is performed, the memory usage rises again to 12000 MB and stays at that level.
Do you have any ideas what might be causing this?
This is causing issues for me as I need to run several thousand toy fits on a batch system, which will not accept such high memory usage jobs. Cheers
Colm Harold Murphy
@chm-ipmu
I was using zfit 0.5.1; after upgrading to 0.5.5 this issue goes away (memory usage is now an acceptable 2 GB)
Jonas Eschle
@jonas-eschle
Ah great to hear! I couldn't reproduce it, so that explains it
Colm Harold Murphy
@chm-ipmu
Sorry for the wild goose chase! I should've checked my version when I posted the original question.
1 reply
Donal Hill
@donalrinho

Hi all! A collaborator I'm working with just installed zfit, but when doing an import is getting the error message:

AttributeError: module 'iminuit.util' has no attribute 'MigradResult'

Are there changes on the iminuit side that affect zfit? I can ask him for more details of his package versions e.t.c. if required. Thanks a lot!

Donal Hill
@donalrinho
I see in the change log here (https://iminuit.readthedocs.io/en/stable/changelog.html) that MigradResult was removed.
Jonas Eschle
@jonas-eschle
Hi Donal, did he use pip? If he installed zfit today he should have the 0.5.6 release, is that correct? This should install iminuit<2 actually

I've just tried it and it installs the correct version. Otherwise, he can simply downgrade iminuit with `pip install "iminuit<2"`

It's true that iminuit 2 is out and there have been quite a few changes to it

Colm Harold Murphy
@chm-ipmu

Hi all. I'm wanting to save as much output pertaining to the fit result and the validity of the minimum found as possible. I see that I can access the (iminuit) information by doing something like:

minimum_info = dict(result.info["original"])

Where "result" is a FitResult instance.
Am I missing a more direct function implemented in zfit?

Jonas Eschle
@jonas-eschle

Indeed, this is most of what there is, as the common interface is shared by multiple minimizers and therefore does not provide all the detailed information that e.g. iminuit provides. But if you have a good idea about what to add, feel free to propose it; that can indeed be helpful. What else you should currently find is:

  • result.edm
  • result.params_at_limit
  • result.valid (which also checks params_at_limit)
  • result.converged
  • result.fmin (value of function at minimum)

Just let us know (open an issue) if you think there are more things that should be propagated from the minimizer
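As a hedged sketch of collecting those diagnostics into one place, where `result` is any FitResult instance:

```python
def summarize_result(result):
    """Collect the minimizer-independent diagnostics of a FitResult in a dict."""
    return {
        "edm": float(result.edm),                  # estimated distance to minimum
        "fmin": float(result.fmin),                # function value at the minimum
        "valid": bool(result.valid),               # overall validity flag
        "converged": bool(result.converged),       # minimizer reported convergence
        "at_limit": bool(result.params_at_limit),  # any parameter at its limit
    }
```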

zleba
@zleba
Hello, I can see that zFit doesn't work with the newest iminuit 2.4.0, instead an older 1.5.4 version is required.
Do you plan to update this such that zFit can run with the newest iminuit version?
Jonas Eschle
@jonas-eschle

Yes, in fact it is already upgraded in the develop version that will be a new "majorish" release, 0.6.0. If you want, you can install the current dev version with
pip install git+https://github.com/zfit/zfit

Furthermore, there will be a general large upgrade on minimizers, adding SciPy and NLopt with a complete overhaul of the mechanics (currently a PR).

Blaise Delaney
@BlaiseDelaney
Hello devs, I have a quick question on something that I suspect may have been asked already; apologies if that's the case. I wondered if there is a way to perform several fits while looping over arrays within the scope of a .py file, or whether one must write a wrapper of some sort. My attempts so far within a Jupyter notebook don't seem to allow it, as I need to restart the kernel to re-initialize the parameters at each iteration. Thanks a lot in advance for your help!
Jonas Eschle
@jonas-eschle
Hey, what exactly do you mean by "looping over arrays"? Is the array the data or the parameter values?
If it is the latter (e.g. to create a likelihood profile), you don't have to re-initialize the parameters; you can use the set_value method to set them to a new value
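A short sketch of such a scan; `param` stands for any zfit.Parameter and `nll_value` for any callable returning the current loss value (both are placeholders for your setup):

```python
import numpy as np

def profile_scan(param, nll_value, scan_points):
    """Evaluate the loss at each scan point by updating the parameter in place."""
    losses = []
    for value in scan_points:
        param.set_value(value)      # no re-creation or kernel restart needed
        losses.append(nll_value())
    return np.array(losses)
```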
Yanina Biondi
@YaniBion
Hi, newbie question: is there a way to model a PDF with a normalized histogram, like rv_histogram.pdf() from scipy.stats, when one does not know the parametric form?
I'm guessing I can use an object from the custom PDF class; I want to know whether this is stable and whether someone has tried it before, or whether there is a better method
Jonas Eschle
@jonas-eschle
Hi, this has indeed been done already multiple times, here is a snippet on how to do it as an example:
import zfit
from zfit import z
import numpy as np
import tensorflow as tf

zfit.run.set_autograd_mode(False)


class BinnedEfficiencyPDF(zfit.pdf.BasePDF):

    def __init__(self, efficiency, eff_bins, obs, name='BinnedEfficiencyPDF'):
        self.efficiency = efficiency
        self.eff_bins = eff_bins
        super().__init__(obs=obs, name=name)

    def _bin_content(self, x):
        eff_bin = np.digitize(x, self.eff_bins)
        return self.efficiency[eff_bin]

    def _unnormalized_pdf(self, x):  # or even try with PDF
        x = z.unstack_x(x)
        probs = z.py_function(func=self._bin_content, inp=[x], Tout=tf.float64)
        probs.set_shape(x.shape)
        return probs
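A hedged usage sketch for the snippet above: build the histogram inputs with NumPy and construct the PDF from them. The zfit-specific lines are commented out so the NumPy part stands alone, and the names `obs`/`pdf` are assumptions:

```python
import numpy as np

# histogram a sample to get the "efficiency" (here: density) and bin edges
data = np.random.default_rng(seed=0).normal(size=10_000)
counts, edges = np.histogram(data, bins=50, density=True)

# obs = zfit.Space('x', limits=(float(edges[0]), float(edges[-1])))
# pdf = BinnedEfficiencyPDF(efficiency=counts, eff_bins=edges, obs=obs)

# the lookup the PDF performs internally, for a single point:
idx = np.digitize(0.0, edges)
prob = counts[idx]
```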
Yanina Biondi
@YaniBion
Thank you! I couldn't find it
Yanina Biondi
@YaniBion
when I try to make it extended, it fails though :/
Jonas Eschle
@jonas-eschle
Instead of using create_extended, you can also make a PDF extended in place by using pdf.set_yield()
The reason create_extended fails is that the copy function does not properly take into account the attributes you added in your custom class. But set_yield(...) works
This will turn the PDF into an extended one
Yanina Biondi
@YaniBion
thanks a lot
Ryunosuke O'Neil
@ryuwd
Hello, I wondered how to correctly use the convolution tools (e.g. FFTConvPDFV1) in zfit. How might I use FFTConvPDFV1, to create a PDF equivalent to the Voigtian in RooFit? I tried creating a new instance with the kernel set to a Gaussian and func to the RBW from the zfit_physics / bw branch, but it seems I cannot use this straightaway. Is there something I'm missing?
Jonas Eschle
@jonas-eschle
Hi, this should in principle work straight out of the box; if not, it is probably best to open an issue and post any error there. Alternatively, the Voigtian (or erfcx) function was just implemented in TensorFlow Probability (https://www.tensorflow.org/probability/api_docs/python/tfp/math/erfcx) (there is alternatively also an exp-conv-Gauss)
If you're going to use one of these, you can contact us directly (here, private chat, or mail) and you could even contribute it with some hints, as it is anyway on the list of PDFs to be added.
4 replies
Rizwaan Mohammed
@Rizwaan96_twitter
Hi, sorry if this is a silly question, but when doing an UnbinnedNLL fit, is it a problem if the final FCN value is positive? The fit looks good otherwise
Jonas Eschle
@jonas-eschle
Hi, this is no problem at all; in fact, the absolute value of the likelihood is meaningless and should not be relied on(!). It can even be beneficial for the minimization to subtract a constant from the likelihood. Only differences matter (between the same likelihood evaluated at different parameter values)
Rizwaan Mohammed
@Rizwaan96_twitter
Ah that's great, thanks a lot!
Jonas Eschle
@jonas-eschle

We've released the 0.6 series of zfit! The major addition is a lot of new minimizers that all support uncertainty estimation in the same way as before.

They can now be invoked independently of zfit models and used with pure Python functions

The main changes (full changelog here):

  • upgraded to TensorFlow 2.4
  • Added many new minimizers. A full list can be found in :ref:minimize_user_api.

    • IpyoptV1 that wraps the powerful Ipopt large scale minimization library
    • Scipy minimizers now have their own, dedicated wrapper for each instance such as
      ScipyLBFGSBV1, or ScipySLSQPV1
    • NLopt library wrapper that contains many algorithms for local searches such as
      NLoptLBFGSV1, NLoptTruncNewtonV1 or
      NLoptMMAV1 but also includes more global minimizers such as
      NLoptMLSLV1 and NLoptESCHV1.
  • Completely new and overhauled minimizers design, including:

    • minimizers can now be used with arbitrary Python functions and an initial array independent of zfit
    • a minimization can be 'continued' by passing init to minimize
    • more streamlined arguments for minimizers, harmonized names and behavior.
    • Adding a flexible criterion (currently EDM, the same that iminuit uses) that will terminate the minimization.
    • Making the minimizer fully stateless.
    • Moving the loss evaluation and strategy into a LossEval that simplifies the handling of printing and NaNs.
    • Callbacks are added to the strategy.
  • Major overhaul of the FitResult, including:

    • improved zfit_error (equivalent of MINOS)
    • minuit_hesse and minuit_minos are now available with all minimizers as well, thanks to a great
      improvement in iminuit.
    • Added an approx hesse that returns the approximate hessian (if available, otherwise empty)
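As a hedged sketch of the model-independent minimizer usage mentioned above (the minimizer class name follows the release notes; the exact call signature may differ in your version):

```python
import numpy as np

def rosenbrock(x):
    """Classic test function with minimum 0 at x = (1, ..., 1)."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))

# any pure Python function of a parameter array can now be minimized:
# minimizer = zfit.minimize.ScipyLBFGSBV1()
# result = minimizer.minimize(rosenbrock, params=np.array([2.0, 2.0]))
```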
Aman Goel
@amangoel185

Hey @mayou36! :)

I wrote to you regarding GSoC 2021 (via aman.goel185@gmail.com) , and have a doubt regarding the same in the evaluation task.

Can I contact you over private chat?

1 reply
Jonas Eschle
@jonas-eschle
This message was deleted
Anil Panta
@panta-123
Is there any example where I can find code to plot the pull of a fit (ExtendedUnbinnedNLL fit)? Or could anyone provide me with an example script here?
3 replies
Jonas Eschle
@jonas-eschle

We released multiple small releases up to 0.6.3 with a few minor improvements and bugfixes. Make sure to upgrade to the latest version using

pip install -U zfit

Thanks to those who found the bugs. We appreciate any kind of (informal) feedback, ideas, or bug reports; feel free to reach out to us anytime with anything

anthony-correia
@anthony-correia
Hello @mayou36, we would like to try a template fit to some 3D binned data. I've been told that a binned fit is possible with zfit but is still experimental and undocumented. I guess it is the "binned_new" branch of the zfit GitHub repository. Am I right so far?
Is there anything I need to know before trying to use the code there?
34 replies