Jonas Eschle
@jonas-eschle
Hi, this is no problem at all; in fact, the absolute value of the likelihood is meaningless (and should not be relied on!). It can even be beneficial for the minimization to subtract a constant from the likelihood. Only the difference matters (between the same likelihood with different parameter values).
Rizwaan Mohammed
@Rizwaan96_twitter
Ah that's great, thanks a lot!
Jonas Eschle
@jonas-eschle

We've released the 0.6 series of zfit! The major addition is a set of new minimizers that all support uncertainty estimation in the same way as before.

They can now be invoked independently of zfit models altogether and used with pure Python functions.

The main changes (full changelog here):

  • Upgraded to TensorFlow 2.4
  • Added many new minimizers. A full list can be found in the minimize user API documentation.

    • IpyoptV1, which wraps the powerful Ipopt large-scale minimization library
    • Scipy minimizers now have their own dedicated wrapper for each instance, such as
      ScipyLBFGSBV1 or ScipySLSQPV1
    • an NLopt library wrapper that contains many algorithms for local searches, such as
      NLoptLBFGSV1, NLoptTruncNewtonV1 or
      NLoptMMAV1, but also includes more global minimizers such as
      NLoptMLSLV1 and NLoptESCHV1
  • Completely new and overhauled minimizer design, including:

    • minimizers can now be used with arbitrary Python functions and an initial array, independent of zfit (see the sketch after this list)
    • a minimization can be 'continued' by passing init to minimize
    • more streamlined arguments for minimizers, with harmonized names and behavior
    • a flexible convergence criterion (currently EDM, the same that iminuit uses) that terminates the minimization
    • minimizers are now fully stateless
    • the loss evaluation and strategy moved into a LossEval that simplifies the handling of printing and NaNs
    • callbacks are added to the strategy
  • Major overhaul of the FitResult, including:

    • improved zfit_error (the equivalent of MINOS)
    • minuit_hesse and minuit_minos are now available with all minimizers as well, thanks to a great
      improvement in iminuit
    • an approx hesse that returns the approximate Hessian (if available, otherwise empty)
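As a quick illustration of the model-independent use (a minimal sketch, not part of the announcement; it assumes, per the text above, that minimize accepts a plain callable plus an initial array):

import numpy as np
import zfit

# a plain Python/NumPy function; no zfit model or zfit parameters involved
def rosenbrock(x):
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

minimizer = zfit.minimize.Minuit()
result = minimizer.minimize(rosenbrock, params=np.array([1.5, 2.5]))
print(result.params)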
Aman Goel
@amangoel185

Hey @mayou36! :)

I wrote to you regarding GSoC 2021 (via aman.goel185@gmail.com) and have a question about the evaluation task.

Can I contact you over private chat?

1 reply
Jonas Eschle
@jonas-eschle
This message was deleted
Anil Panta
@panta-123
Is there an example where I can find code to plot the pull of a fit (an ExtendedUnbinnedNLL fit)? Or could anyone provide an example script here?
3 replies
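One common way to compute a pull for an extended unbinned fit (a hedged sketch, not taken from the hidden replies; model, data_np and obs are assumed to be a fitted extended pdf, the data as a NumPy array and the observable space, and the ext_pdf method of recent zfit versions is assumed):

import numpy as np
import matplotlib.pyplot as plt

counts, edges = np.histogram(data_np, bins=50, range=obs.limit1d)
centers = (edges[:-1] + edges[1:]) / 2
width = edges[1] - edges[0]
# expected events per bin from the extended pdf (yield times density times bin width)
expected = np.asarray(model.ext_pdf(centers)) * width
pull = (counts - expected) / np.sqrt(np.maximum(counts, 1))  # Poisson errors on the data
plt.bar(centers, pull, width=width)
plt.show()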
Jonas Eschle
@jonas-eschle

We released multiple small releases up to 0.6.3 with a few minor improvements and bugfixes. Make sure to upgrade to the latest version using

pip install -U zfit

Thanks to the finders of the bugs. We appreciate any kind of (informal) feedback, ideas or bug reports; feel free to reach out to us anytime with anything.

anthony-correia
@anthony-correia
Hello @mayou36, we would like to try a template fit to some 3D binned data. I've been told that a binned fit is possible with zfit, but that it is still experimental and not yet documented. I guess it is the "binned_new" branch of the zfit GitHub repository. Am I right so far?
Is there anything I need to know before trying to use the code there?
34 replies
Henrikas Svidras
@henrikas-svidras

Hi. I am using zfit intensively in notebooks, and I have been running into the well-known NameAlreadyTakenErrors. I have found workarounds that work for me, but I just wanted to say that the example presented here does not seem to work: if you try to use it, you get an error when minimising:

~/.local/lib/python3.6/site-packages/zfit/minimizers/minimizer_minuit.py in <listcomp>(.0)
     76         errors = tuple(param.step_size for param in params)
     77         start_values = [p.numpy() for p in params]
---> 78         limits = [(low.numpy(), up.numpy()) for low, up in limits]
     79         errors = [err.numpy() for err in errors]
     80 

AttributeError: 'int' object has no attribute 'numpy'

I guess the problem is that the limits are supposed to be tf.Tensors, but if we simply assign a float or int via param.lower or param.upper, that breaks the code later?

Couldn't there be some sort of method, such as set_limit_lower(value)? Or am I misusing zfit somehow?

6 replies
zleba
@zleba
Hello, do you know the best way to deal with discrete observables? One dimension of my pdf is the charge, with values 1 or -1. The normalisations of pdf(q=1) and pdf(q=-1) are in general different (predicted by the pdf parametrization), and only the sum pdf(q=1) + pdf(q=-1) = 1.
10 replies
Kevin Wang
@LeavesWang

Hi, to re-create the parameters in Jupyter, does the following approach work?

import zfit

iCell = get_ipython().execution_count  # get the current cell number

par1 = zfit.Parameter("par1_" + str(iCell), 8., 0., 20.)
par2 = zfit.Parameter("par2_" + str(iCell), -20., -50., 50.)
par3 = zfit.Parameter("par3_" + str(iCell), 10., 0., 20.)

4 replies
Henrikas Svidras
@henrikas-svidras
Hi experts. What kind of seed does zfit use in the pdf.sample() method? I found zfit.settings.set_seed() (which I assume affects the sample method as well?). Does it work the same way as numpy, where if you don't specify a seed explicitly, a random one is used? Thanks a lot!
Jonas Eschle
@jonas-eschle
Hi, indeed. The seed is just a global seed; setting it simply sets the seed in the backends, meaning numpy and TensorFlow.
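For reference, that boils down to a single call (a minimal sketch; per the explanation above, this seeds the backends):

import zfit

zfit.settings.set_seed(42)  # seeds the numpy and TensorFlow backends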
Henrikas Svidras
@henrikas-svidras
Alright! I guess this means it should be fine not to set it explicitly when doing many toy fits in parallel :) I was thinking it could have some clock/date dependence, so I just had to make sure nothing like that was the case. Thanks a lot.
Henrikas Svidras
@henrikas-svidras

Hi again :) I was wondering if there is a way to have a parameter whose upper limit depends on another parameter. As a naive illustration, imagine you are fitting a parabola ax**2 + bx + c and you want the peak of the parabola to be between 0 and 5. That would mean 0 < -b/2a < 5.

I naively tried something similar to this:

a = zfit.Parameter("a", 5, floating=True)
b = zfit.Parameter("b", 0, lower = -10*a, floating=True)

However, it seems that all this does is set the limit to -10 * the initial value of a. Is there a way to have the limit change as a changes?

Jonas Eschle
@jonas-eschle
Hey, while there are possibilities (you can create a composite parameter, which is effectively just a function of zero or more free parameters, and let it return the minimum of a or b, leaving the lower limit on b and setting a high enough upper limit), it is not advisable. Limits on parameters are in general a bad thing: when minimizing a likelihood, most minimizers look for local minima. This means we have to make sure that we start close enough; limits should, under normal circumstances, not influence the minimization and should be chosen wide enough.
Henrikas Svidras
@henrikas-svidras
Thanks. I understand the caveats, and I guess this makes sense. So, in principle, it would only be available through ComposedParameters. That's what I thought, but I hoped there might be some kind of a secret zfit trick :) In any case, thanks a lot!
Jonas Eschle
@jonas-eschle
Thanks for bringing it up! It is, however, a one-liner with composed parameters. The limits are intentionally kept simple.
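For reference, one possible re-parametrization with a composed parameter (a minimal sketch, not from the thread; the peak parameter is hypothetical and the dict-style ComposedParameter signature is assumed):

import zfit

a = zfit.Parameter("a", 5.)
peak = zfit.Parameter("peak", 2.5, 0., 5.)  # hypothetical: the parabola peak, limited to (0, 5)

# b is now a function of a and peak, so 0 < -b/(2a) < 5 holds by construction
b = zfit.ComposedParameter("b", lambda p: -2. * p["a"] * p["peak"],
                           params={"a": a, "peak": peak})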
Jonas Eschle
@jonas-eschle

A new zfit version, the 0.8.x series, is available, with bugfixes, improved numerical integration and different kernel density estimations (also for large sample sizes).

All changes are listed in the changelog.

The tutorials have also improved in style and now have their own site. They can be run interactively, downloaded, or simply viewed.

5 replies
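A minimal sketch of one of the new KDEs (assuming the 0.8 KDE1DimExact API; the data here is illustrative):

import numpy as np
import zfit

obs = zfit.Space("x", limits=(-5, 5))
sample = np.random.normal(size=1000)
data = zfit.Data.from_numpy(obs=obs, array=sample)
kde = zfit.pdf.KDE1DimExact(data, obs=obs)  # bandwidth is chosen automatically if not given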
Henrikas Svidras
@henrikas-svidras

Hi experts.

I had a question regarding errors. I have made a quick example to highlight my question, hopefully it makes sense.

I have noticed that if I fit my pdf and then refit the same pdf again and again in a loop, the error I get out is not the same but keeps changing. Also, the errors calculated using different methods are not consistent, at least not always, even after the initial fit (i.e. the 0th iteration).

I don't quite understand this. I am not an expert in fitting, so maybe this is expected, but I find it very weird.

I'll try to illustrate this with an example:

import numpy as np
import zfit
from zfit.pdf import Gauss, Exponential


obs = zfit.Space("x", limits=(-5, 5))
minimizer = zfit.minimize.Minuit(use_minuit_grad=True)

mu = zfit.Parameter("muu", 0, step_size=0.01)
sigma = zfit.Parameter("sigma", 1,step_size=0.01)
gauss = zfit.pdf.Gauss(mu=mu, sigma=sigma, obs=obs)
gauss_yield = zfit.Parameter("g_yield", 100, step_size=0.1)
gauss_ext = gauss.create_extended(gauss_yield)

lam = zfit.Parameter("lam", -1,step_size=0.01)
expo  = zfit.pdf.Exponential(lam=lam, obs=obs)
expo_yield = zfit.Parameter("e_yield", 500, step_size=0.1)
expo_ext = expo.create_extended(expo_yield)

gauss_expo = zfit.pdf.SumPDF([expo_ext, gauss_ext])

random_gauss = np.random.normal(size=500)+1
random_exp = np.random.exponential(scale = 5, size=1000)-5
random_data = np.append(random_gauss, random_exp)

# then for each different error method I run this:
lam.set_value(-1)
mu.set_value(1)
sigma.set_value(1)
expo_yield.set_value(100)
gauss_yield.set_value(100)

data = zfit.Data.from_numpy(obs=obs, array=random_data)
nll = zfit.loss.UnbinnedNLL(model=gauss_expo, data=data)

iterations = np.arange(0,20)
yield_error = []

for it in iterations:
    result = minimizer.minimize(nll)
    result.errors() # here I also try result.hesse() with #method='hesse_np', 'approx', 'minuit_hesse'


    yield_error.append(result.params[gauss_yield]['minuit_minos']['upper'])

Each of these iterations produces a different error, particularly when using minuit_hesse and hesse_np:

#minuit_minos upper (lower is almost the same with a negative sign)
array([118.3930117 , 118.13616212, 118.17518686, 118.1589878 ,
       118.17934101, 118.17029047, 118.18437411, 118.17827166,
       118.18786236, 118.1837249 , 118.19027599, 118.1874818 ,
       118.19182178, 118.1899428 , 118.19290011, 118.19164637,
       118.19360303, 118.19276492, 118.19412068, 118.19355127])

#minuit_hesse
array([264.56868112, 263.99099257, 264.03559365, 263.99005815,
       264.0375983 , 264.01347448, 590.9170118 , 264.02200987,
       590.84657398, 264.03923864, 592.87290404, 264.05010254,
       589.78390794, 426.94055222, 588.76391966, 585.73316667,
       591.81967819, 587.7401319 , 591.81833332, 592.84125751])

#hesse_np
array([205.6559601 , 207.41405669,          nan,          nan,
                nan, 737.84227104,          nan,  70.71791776,
       314.54393221,          nan, 110.97088789, 237.15737011,
                nan,          nan,          nan,          nan,
                nan, 149.10569392, 300.7499339 ,          nan])

#approx
array([118.39301118, 118.1361616 , 118.17518634, 118.15898728,
       118.17934049, 118.17028995, 118.18437359, 118.17827114,
       118.18786184, 118.18372437, 118.19027547, 118.18748128,
       118.19182126, 118.18994228, 118.19289958, 118.19164585,
       118.19360251, 118.1927644 , 118.19412016, 118.19355074])

I understand that, generally speaking, I am not supposed to loop an already converged fit again, but what puzzles me is that even if I only look at the very first element of each of these lists, they are not at all consistent. I noticed this in a more complicated fit in an analysis I am doing, and I am a bit puzzled. I prepared this simple mock example to make it easier to reproduce.

Is this expected? Am I doing something crazy here?

Sorry for the long question, and thanks a lot.

Jonas Eschle
@jonas-eschle

Hi, first of all, thanks a lot for bringing this up and making such a good reproducible example. You are also welcome to open an issue. The problem is that you create an unbinned likelihood (=> create an ExtendedUnbinnedNLL instead; this works for me), so you are not constraining the sum of the yields to be the (Poisson-distributed) number of events. There should be a warning displayed like

AdvancedFeatureWarning: Either you're using an advanced feature OR causing unwanted behavior. To turn this warning off, use `zfit.settings.advanced_warnings['extended_in_UnbinnedNLL']` = False`  or 'all' (use with care) with `zfit.settings.advanced_warnings['all'] = False
Extended PDFs are given to a normal UnbinnedNLL. This won't take the yield into account and simply treat the PDFs as non-extended PDFs. To create an extended NLL, use the `ExtendedUnbinnedNLL`.
  warn_advanced_feature("Extended PDFs are given to a normal UnbinnedNLL. This won't take the yield "

So the fit you are doing is equal to defining a sum of two pdfs using two free parameters: we end up with one degree of freedom too many. This is what causes the error to vary each time (at least I suspect so).

To explain the errors: 'minuit_minos' is the built-in minuit error (from iminuit, the minos method). minuit_hesse is the hesse algorithm of iminuit. approx is the minimizer's approximation of the Hessian (it may be unavailable or completely off; it's just "better than nothing", but often accurate enough for some use cases, such as getting the order of magnitude). hesse_np is zfit's implementation of Hesse, and the NaNs are probably pretty accurate: it can't determine the Hessian because it fails for the good reason that the problem is underconstrained.

Just to mention, one method you didn't try is zfit_error, zfit's own implementation of "minos". In my test it gives a comparable error (42 vs 39 from minos) using the ExtendedUnbinnedNLL.
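For concreteness, the suggested fix is a one-line change in the loss construction (using the names from the example above):

# extended loss: constrains the sum of the yields to the observed number of events
nll = zfit.loss.ExtendedUnbinnedNLL(model=gauss_expo, data=data)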

Henrikas Svidras
@henrikas-svidras
Hi, many thanks for the in-depth answer. Yes, in this example UnbinnedNLL was indeed the culprit. I still need to investigate why the fit where I initially spotted this was misbehaving, since there I was using the correct ExtendedUnbinnedNLL. But your answer offers some hints, so I will try them. It also brings a bit more clarity about zfit overall, thanks :)
greennerve
@greennerve
Hi all, I tried to limit the number of CPUs used by zfit with zfit.run.set_n_cpu(8), but it doesn't work; zfit still uses all available cores/threads.
What's the proper way to limit the number of cores/threads used by zfit?
Jonas Eschle
@jonas-eschle
Did you try to reduce it to, say, 1? You can also use strict=True. If that doesn't work, feel free to open an issue.
1 reply
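For reference, the suggested call would look like this (a sketch; strict is mentioned above, the exact keyword usage is an assumption):

import zfit

zfit.run.set_n_cpu(1, strict=True)  # request a single thread and enforce it strictly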
Luke Scantlebury-Smead
@lukessmead_gitlab

Dear Experts,

I am trying to perform a fit to some data using a sum of two crystal balls. Here is a minimal version of the fit:

import zfit
from root_pandas import read_root

path = "data_path"
tree = "ntuple_tree"

dataframe = read_root(f"{path}/data.root", tree)

low_B_M = 5175.0
high_B_M = 5425.0
obs = zfit.Space("B0_M", limits=(low_B_M, high_B_M))

#B0 -> D* 3pi
mu = zfit.Parameter("mu", 5279., low_B_M, high_B_M)
sigma = zfit.Parameter("sigma", 10., 0., 50.)
alpha_L = zfit.Parameter("alpha_L", 1.43, 0., 5.)
n_L = zfit.Parameter("n_L", 1.96, 0., 100.)
alpha_R = zfit.Parameter("alpha_R", -1, -5., 0.)
n_R = zfit.Parameter("n_R", 1., 0., 100.)
frac = zfit.Parameter("frac", 0.5, 0., 1.)

n_sig = zfit.Parameter("n_sig", 0.75 * len(dataframe['B0_M']), 0., 1.1 * len(dataframe['B0_M']))

pdf_sig_L = zfit.pdf.CrystalBall(obs=obs, mu=mu, sigma=sigma, alpha=alpha_L, n=n_L)
pdf_sig_R = zfit.pdf.CrystalBall(obs=obs, mu=mu, sigma=sigma, alpha=alpha_R, n=n_R)
pdf_sig = zfit.pdf.SumPDF([pdf_sig_L, pdf_sig_R], frac).create_extended(n_sig)


# Combinatorial
lamda = zfit.Parameter("lamda", -1e-3, -1., -1e-10)
n_bkg = zfit.Parameter("n_bkg", 0.25 * len(dataframe['B0_M']), 0., 1.1 * len(dataframe['B0_M']))

pdf_bkg = zfit.pdf.Exponential(obs=obs, lam=lamda).create_extended(n_bkg)


pdf = zfit.pdf.SumPDF([pdf_sig, pdf_bkg])
data = zfit.Data.from_pandas(dataframe['B0_M'], obs=obs, weights=dataframe["n_sig_sw"])
nll = zfit.loss.ExtendedUnbinnedNLL(model=pdf, data=data)
minimizer = zfit.minimize.Minuit(tol=1e-5)

result = minimizer.minimize(nll)

print(result.info['original'])

param_hesse = result.hesse(method="minuit_hesse") # Computation of the errors
corr = result.correlation(method="minuit_hesse")
cov = result.covariance(method="minuit_hesse")

print("Fit function minimum:", result.fmin)
print("Fit converged:", result.converged)
print("Fit full minimizer information:", result.info)

params = result.params
print(params)

I am getting the following error message (I can post the full log if needed):

NotImplementedError: Cannot convert a symbolic Tensor (gradient_tape/gradient_tape/sub:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported

I am using the following versions of various packages:

tensorboard                   2.5.0
tensorboard-data-server       0.6.0
tensorboard-plugin-wit        1.8.0
tensorflow                    2.5.0
tensorflow-addons             0.14.0
tensorflow-estimator          2.5.0
tensorflow-probability        0.13.0
zfit                          0.8.2

A colleague is able to run this code without error with the following versions:

tensorboard                   2.6.0
tensorboard-data-server       0.6.1
tensorboard-plugin-wit        1.8.0
tensorflow                    2.5.0
tensorflow-addons             0.13.0
tensorflow-estimator          2.5.0
tensorflow-probability        0.13.0
zfit                          0.8.2

Is there a known issue with the versions of the packages I am using, or is there perhaps something else wrong?

1 reply
Jonas Eschle
@jonas-eschle

!CALL FOR BINNED FIT TESTERS!
After a long time of development, binned fits are finally in the alpha stage! We added the concept of binned data, PDFs and loss functions to zfit.

You can try out the unstable preview online.

An overview of the implemented binned models is also available.

While the implementation is not fully finalized and the concepts may also need some polishing, we're looking for feedback from the community.
An alpha version has been published and can be installed with

pip install -U zfit --pre

(the --pre flag specifies that pre-release versions will also be installed)

What to expect:
There are rough edges and corner cases as well as not-yet-implemented methods. What we would like to hear is mostly feedback on the design. There is a dedicated discussion issue: zfit/zfit-development#80.

But any form of feedback (issue, Gitter, e-mail, Skype, Mattermost, ...) is appreciated.

(attached: Binned_gaussian_fit_with_BinnedChi2.png, Morphing_with_splines__hist_plot.png)
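A rough taste of the binned concepts (a hedged sketch, assuming names close to the binned API as it later stabilized; exact classes may differ in the alpha):

import numpy as np
import zfit

# an unbinned dataset, then binned into a regular binning
obs = zfit.Space("x", limits=(-5, 5))
data = zfit.Data.from_numpy(obs=obs, array=np.random.normal(size=1000))
binning = zfit.binned.RegularBinning(50, -5, 5, name="x")
obs_binned = zfit.Space("x", binning=binning)
binned_data = data.to_binned(obs_binned)

# a template pdf built from the histogram, and a binned loss
pdf = zfit.pdf.HistogramPDF(binned_data)
nll = zfit.loss.BinnedNLL(model=pdf, data=binned_data)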
Stefano Roberto Soleti
@soleti
Hello, I am trying to install zfit in a clean conda environment but I keep getting this error:
clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /opt/miniconda3/envs/zfit/include -arch x86_64 -I/opt/miniconda3/envs/zfit/include -fPIC -O2 -isystem /opt/miniconda3/envs/zfit/include -arch x86_64 -I/private/var/folders/zh/kd49jtns3237x60psw6kqy0r0000gn/T/pip-build-env-e3rpegq3/overlay/lib/python3.9/site-packages/numpy/core/include -I/opt/miniconda3/envs/zfit/include/python3.9 -c src/callback.c -o build/temp.macosx-10.9-x86_64-3.9/src/callback.o
  In file included from src/callback.c:35:
  src/callback.h:32:10: fatal error: 'IpStdCInterface.h' file not found
  #include "IpStdCInterface.h"
           ^~~~~~~~~~~~~~~~~~~
  1 error generated.
  error: command '/usr/bin/clang' failed with exit code 1
  ----------------------------------------
  ERROR: Failed building wheel for ipyopt
1 reply
I just ran pip install zfit, as per the instructions, in a clean conda environment (bare Python 3.9 installed)
I am on macOS Monterey
Stefano Roberto Soleti
@soleti
I fixed it by installing on a Linux machine...
Regarding HistogramPDF: how do I introduce a parameter into the model?
Stefano Roberto Soleti
@soleti
Let's say that I want a scale parameter, and not only a normalization one, e.g. something like:
import numpy as np
import scipy.stats
import tensorflow as tf
import zfit
from zfit import z

class HistPDF(zfit.pdf.BasePDF):

    def __init__(self, data, gain, obs, name='HistPDF'):
        # gain is a plain number here, not a fit parameter
        histo = np.histogram(data / gain, bins=40, range=(0, 20))
        self.rv_hist = scipy.stats.rv_histogram(histo)
        super().__init__(obs=obs, name=name)

    def _normalized_pdf(self, x):
        x = z.unstack_x(x)
        probs = z.py_function(func=self.rv_hist.pdf, inp=[x], Tout=tf.float64)
        probs.set_shape(x.shape)
        return probs
1 reply
Is there a way to do this with HistogramPDF?
1 reply
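One possible way to make the scale a proper fit parameter (a sketch, not from the hidden replies: it registers gain as a pdf parameter, scales x inside the pdf and lets zfit renormalize numerically via _unnormalized_pdf):

class ScaledHistPDF(zfit.pdf.BasePDF):

    def __init__(self, data, gain, obs, name='ScaledHistPDF'):
        # histogram the raw data once; gain is now a zfit.Parameter
        histo = np.histogram(data, bins=40, range=(0, 20))
        self.rv_hist = scipy.stats.rv_histogram(histo)
        super().__init__(obs=obs, params={'gain': gain}, name=name)

    def _unnormalized_pdf(self, x):  # unnormalized: zfit integrates it numerically
        x = z.unstack_x(x)
        gain = self.params['gain']
        probs = z.py_function(func=lambda xv, g: self.rv_hist.pdf(xv / g),
                              inp=[x, gain], Tout=tf.float64)
        probs.set_shape(x.shape)
        return probs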
Jim Pivarski
@jpivarski

I'm trying to include a zfit example in a Software Carpentries session, and I'm finding that zfit has a lot of difficult-to-satisfy dependencies. It makes my conda install churn for a long time and then it wants to downgrade my OS kernels and CUDA installation. (Of course, I said, "No.")

I'll have to make an isolated environment for this, but I'm just saying, this goes against the "small, focused packages" philosophy—it's difficult to "mix" zfit in with the other packages (i.e. not have to make an isolated environment that downloads the world just to use zfit).

Quite likely, a lot of things in this list:

https://github.com/zfit/zfit/blob/develop/requirements.txt

could be optional/runtime dependencies, like this:

https://github.com/scikit-hep/uproot4/blob/main/src/uproot/extras.py

Like, for example, curve-fitting surely doesn't need Uproot: curve-fitting shouldn't care where the data come from, whether it's a ROOT file or something else. A user should install both zfit and Uproot and send data between them as NumPy arrays, Awkward Arrays, Boost/Hist histograms, etc. With zfit's explicit dependence on Uproot, I can't use it with my developer copy because it has a weird version number made by pip install -e ., which doesn't satisfy zfit's uproot<5 constraint. But surely I should be able to mix a wide range of versions of what ought to be loosely coupled things like curve-fitting and file-reading.

Three of the dependencies are about putting colors in terminals.

8 replies
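For what it's worth, the extras.py pattern referenced above boils down to a lazy, optional import (a minimal sketch of the pattern, not zfit code):

def import_uproot():
    # import only when actually needed, with a helpful error if missing
    try:
        import uproot
    except ModuleNotFoundError as err:
        raise ModuleNotFoundError(
            "reading ROOT files requires 'uproot': pip install uproot"
        ) from err
    return uproot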
Nicole Skidmore
@DrNicole1865_twitter
Hi all, on my local machine, should I be able to do something as simple as
mamba create --name zfitenv python ipython matplotlib xrootd jupyterlab zfit?
When I do this and then try to import zfit in the zfitenv, I get:
(zfitenv) C-LOSX2NHP3Y2:~ a05842ns$ python
Python 3.9.9 | packaged by conda-forge | (main, Dec 20 2021, 02:38:53) 
[Clang 11.1.0 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import zfit
/Users/user/mambaforge/envs/zfit/lib/python3.9/site-packages/zfit/__init__.py:28: UserWarning: zfit has moved from TensorFlow 1.x to 2.x, which has some profound implications behind the scenes of zfit
    and minor ones on the user side. Be sure to read the upgrade guide (can be found in the README at the top)
     to have a seemless transition. If this is currently not doable (upgrading is highly recommended though)
     you can downgrade zfit to <0.4. Feel free to contact us in case of problems in order to fix them ASAP.
  warnings.warn(
/Users/user/mambaforge/envs/zfit/lib/python3.9/site-packages/zfit/util/execution.py:62: UserWarning: Not running on Linux. Determining available cpus for thread can failand be overestimated. Workaround (only if too many cpus are used):`zfit.run.set_n_cpu(your_cpu_number)`
  warnings.warn("Not running on Linux. Determining available cpus for thread can fail"
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/user/mambaforge/envs/zfit/lib/python3.9/site-packages/zfit/__init__.py", line 63, in <module>
    from . import z
  File "/Users/user/mambaforge/envs/zfit/lib/python3.9/site-packages/zfit/z/__init__.py", line 49, in <module>
    from . import random, math
  File "/Users/user/mambaforge/envs/zfit/lib/python3.9/site-packages/zfit/z/random.py", line 5, in <module>
    import tensorflow_probability as tfp
  File "/Users/user/mambaforge/envs/zfit/lib/python3.9/site-packages/tensorflow_probability/__init__.py", line 20, in <module>
    from tensorflow_probability import substrates
  File "/Users/user/mambaforge/envs/zfit/lib/python3.9/site-packages/tensorflow_probability/substrates/__init__.py", line 21, in <module>
    from tensorflow_probability.python.internal import all_util
  File "/Users/user/mambaforge/envs/zfit/lib/python3.9/site-packages/tensorflow_probability/python/__init__.py", line 142, in <module>
    dir(globals()[pkg_name])  # Forces loading the package from its lazy loader.
  File "/Users/user/mambaforge/envs/zfit/lib/python3.9/site-packages/tensorflow_probability/python/internal/lazy_loader.py", line 61, in __dir__
    module = self._load()
  File "/Users/user/mambaforge/envs/zfit/lib/python3.9/site-packages/tensorflow_probability/python/internal/lazy_loader.py", line 41, in _load
    self._on_first_access()
  File "/Users/user/mambaforge/envs/zfit/lib/python3.9/site-packages/tensorflow_probability/python/__init__.py", line 63, in _validate_tf_environment
    raise ImportError(
ImportError: This version of TensorFlow Probability requires TensorFlow version >= 2.6; Detected an installation of version 2.4.3. Please upgrade TensorFlow to proceed.
4 replies
Should zfit "grab" the right TF version?
Yanina Biondi
@YaniBion
I have issues installing the new version even in a cleanish environment.
2 replies
/disk/groups/atp/anaconda3_new/envs/yanina/lib/python3.9/site-packages/zfit/__init__.py:37: UserWarning: TensorFlow warnings are by default suppressed by zfit. In order to show them, set the environment variable ZFIT_DISABLE_TF_WARNINGS=0. In order to suppress the TensorFlow warnings AND this warning, set ZFIT_DISABLE_TF_WARNINGS=1.
  warnings.warn("TensorFlow warnings are by default suppressed by zfit."

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
RuntimeError: module compiled against API version 0xe but this version of numpy is 0xd

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
RuntimeError: module compiled against API version 0xe but this version of numpy is 0xd

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
ImportError: numpy.core._multiarray_umath failed to import

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
ImportError: numpy.core.umath failed to import

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/tmp/ipykernel_8153/2220551001.py in <module>
----> 1 import zfit
      2 get_ipython().run_line_magic('pinfo', 'zfit.pdf.HistogramPDF')
      3 
      4 print(zfit.__version__)

/disk/groups/atp/anaconda3_new/envs/yanina/lib/python3.9/site-packages/zfit/__init__.py in <module>
     51 
     52 
---> 53 _maybe_disable_warnings()
     54 
     55 import tensorflow as tf

/disk/groups/atp/anaconda3_new/envs/yanina/lib/python3.9/site-packages/zfit/__init__.py in _maybe_disable_warnings()
     46     os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
     47 
---> 48     import tensorflow as tf
     49 
     50     tf.get_logger().setLevel('ERROR')

/disk/groups/atp/anaconda3_new/envs/yanina/lib/python3.9/site-packages/tensorflow/__init__.py in <module>
     39 import sys as _sys
     40 
---> 41 from tensorflow.python.tools import module_util as _module_util
     42 from tensorflow.python.util.lazy_loader import LazyLoader as _LazyLoader
     43 

/disk/groups/atp/anaconda3_new/envs/yanina/lib/python3.9/site-packages/tensorflow/python/__init__.py in <module>
     44 
     45 # Bring in subpackages.
---> 46 from tensorflow.python import data
     47 from tensorflow.python import distribute
     48 from tensorflow.python import keras

/disk/groups/atp/anaconda3_new/envs/yanina/lib/python3.9/site-packages/tensorflow/python/data/__init__.py in <module>
     23 
     24 # pylint: disable=unused-import
---> 25 from tensorflow.python.data import experimental
     26 from tensorflow.python.data.ops.dataset_ops import AUTOTUNE
     27 from tensorflow.python.data.ops.dataset_ops import Dataset
/disk/groups/atp/anaconda3_new/envs/yanina/lib/python3.9/site-packages/tensorflow/python/data/experimental/__init__.py in <module>
     97 
     98 # pylint: disable=unused-import
---> 99 from tensorflow.python.data.experimental import service
    100 from tensorflow.python.data.experimental.ops.batching import dense_to_ragged_batch
    101 from tensorflow.python.data.experimental.ops.batching import dense_to_sparse_batch

/disk/groups/atp/anaconda3_new/envs/yanina/lib/python3.9/site-packages/tensorflow/python/data/experimental/service/__init__.py in <module>
    138 from __future__ import print_function
    139 
--> 140 from tensorflow.python.data.experimental.ops.data_service_ops import distribute
    141 from tensorflow.python.data.experimental.ops.data_service_ops import from_dataset_id
    142 from tensorflow.python.data.experimental.ops.data_service_ops import register_dataset

/disk/groups/atp/anaconda3_new/envs/yanina/lib/python3.9/site-packages/tensorflow/python/data/experimental/ops/data_service_ops.py in <module>
     23 
     24 from tensorflow.python import tf2
---> 25 from tensorflow.python.data.experimental.ops import compression_ops
     26 from tensorflow.python.data.experimental.ops.distribute_options import AutoShardPolicy
     27 from tensorflow.python.data.experimental.ops.distribute_options import ExternalStatePolicy

/disk/groups/atp/anaconda3_new/envs/yanina/lib/python3.9/site-packages/tensorflow/python/data/experimental/ops/compression_ops.py in <module>
     18 from __future__ import print_function
     19 
---> 20 from tensorflow.python.data.util import structure
     21 from tensorflow.python.ops import gen_experimental_dataset_ops as ged_ops
     22 

/disk/groups/atp/anaconda3_new/envs/yanina/lib/python3.9/site-packages/tensorflow/python/data/util/structure.py in <module>
     24 import wrapt
     25 
---> 26 from tensorflow.python.data.util import nest
     27 from tensorflow.python.framework import composite_tensor
     28 from tensorflow.python.framework import ops

/disk/groups/atp/anaconda3_new/envs/yanina/lib/python3.9/site-packages/tensorflow/python/data/util/nest.py in <module>
     38 import six as _six
     39 
---> 40 from tensorflow.python.framework import sparse_tensor as _sparse_tensor
     41 from tensorflow.python.util import _pywrap_utils
     42 from tensorflow.python.util import nest

/disk/groups/atp/anaconda3_new/envs/yanina/lib/python3.9/site-packages/tensorflow/python/framework/sparse_tensor.py in <module>
     26 from tensorflow.python import tf2
     27 from tensorflow.python.framework import composite_tensor
---> 28 from tensorflow.python.framework import constant_op
     29 from tensorflow.python.framework import dtypes
     30 from tensorflow.python.framework import ops

/disk/groups/atp/anaconda3_new/envs/yanina/lib/python3.9/site-packages/tensorflow/python/framework/constant_op.py in <module>
     27 from tensorflow.core.framework import types_pb2
     28 from tensorflow.python.eager import context
---> 29 from tensorflow.python.eager import execute
     30 from tensorflow.python.framework import dtypes
     31 from tensorflow.python.framework import op_callbacks

/disk/groups/atp/anaconda3_new/envs/yanina/lib/python3.9/site-packages/tensorflow/python/eager/execute.py in <module>
     25 from tensorflow.python import pywrap_tfe
     26 from tensorflow.python.eager import core
---> 27 from tensorflow.python.framework import dtypes
     28 from tensorflow.python.framework import ops
     29 from tensorflow.python.framework import tensor_shape

/disk/groups/atp/anaconda3_new/envs/yanina/lib/python3.9/site-packages/tensorflow/python/framework/dtypes.py in <module>
     30 from tensorflow.python.util.tf_export import tf_export
     31 
---> 32 _np_bfloat16 = _pywrap_bfloat16.TF_bfloat16_type()
     33 
     34 

TypeError: Unable to convert function return value to a Python type! The signature was
    () -> handle
and
ERROR in cling::CIFactory::createCI(): cannot extract standard library include paths!
Invoking:
  LC_ALL=C x86_64-conda-linux-gnu-c++   -DNDEBUG -xc++ -E -v /dev/null 2>&1 | sed -n -e '/^.include/,${' -e '/^ \/.*++/p' -e '}'
Results was:
With exit code 0