I am also cross-posting this here.
I had a discussion about iminuit performance recently, so I ran some benchmarks comparing iminuit + numba_stats (both my libraries) with RooFit.
numba_stats has faster implementations of statistical distributions than scipy. I am very pleased to find that iminuit outperforms RooFit by a large margin on a fit of a Gaussian. I did not expect the gain to be so dramatic. RooFit in parallel mode is very slow compared to numba in parallel mode. You can see this in the bottom right of the following plot. The x-axis is the number of points in the fitted data sample, the y-axis is the runtime of the fit.
mplhep. :+1: https://indico.cern.ch/event/1058838/
v0.7.0, but the pyhf docs now have Pyodide/JupyterLite in them. :) Hopefully this will be the start of interactive docs, where we have static examples with live runnable versions side by side.
Network access is there, but it's async only, so the synchronous stdlib networking doesn't work, and therefore requests doesn't work.
import pyodide, json, pyhf

inp = pyodide.open_url("https://raw.githubusercontent.com/scikit-hep/pyhf/master/docs/examples/json/2-bin_1-channel.json")
wspace = pyhf.Workspace(json.load(inp))
iminuit feature that I would find useful?
mncontour: it would be really nice to have the ability to scan over points that are evenly spaced on a log scale.
iminuit because one fitting parameter was the thermal cross section for a generic WIMP. Of course I wrote a for loop myself to do this, but it would be nice to have it built in, since you already have linear scanning.
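The manual loop could look something like this: fix the parameter of interest at log-spaced values and minimize the rest at each point. The likelihood below is a made-up toy stand-in (a real fit would use the full model), and the single nuisance parameter profiled out with `minimize_scalar` is hypothetical:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def nll(sigma_v, nuisance):
    # toy negative log-likelihood, minimal at sigma_v = 1e-26, nuisance = 1
    return (np.log10(sigma_v) + 26.0) ** 2 + (nuisance - 1.0) ** 2

# evenly spaced in log10(sigma_v), which a linear scan doesn't give you
scan_points = np.logspace(-28.0, -24.0, 9)
profile = [minimize_scalar(lambda nu: nll(sv, nu)).fun for sv in scan_points]
best = scan_points[int(np.argmin(profile))]
print(best)
```

Built-in support would just mean generating the scan grid with `np.logspace` instead of `np.linspace` internally.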
@henryiii @HDembinski @jpivarski I haven't given much thought to whether this would actually be useful or not, but if we could get a package index up somewhere (similar to https://anaconda.org/scipy-wheels-nightly, though not necessarily on Anaconda Cloud, as I don't think they'd give us free space), do you see any benefit in doing nightly builds and uploads of wheels? It would make testing against HEAD for
iminuit easier in general, as people wouldn't need to build anything, and if a similar approach to matplotlib/matplotlib#22733 was used there wouldn't even need to be additional builds.
awkward-wasm-kernels package? I can't remember where we left off with this conversation last time, but maybe if we get the kernel multiplicity down we could look at moving to Rust for the existing CPU kernels too.
0.20 is the first version to support Rust, too, by the way.
Oh, I was thinking of the pure-Python packaging support on e.g. PyPI/conda-forge, but I realise that we're not there yet w.r.t. wheel tags etc.
I took the liberty of activating Scikit-HEP org level discussions https://github.com/orgs/scikit-hep/discussions.
Can be useful for broad topical discussions. (If we really don't like this, my apologies, and we can turn it off.)
SciPy 2022 Birds-of-a-Feather Session proposals are open until June 15. Do we want to have one for Scikit-HEP/PyHEP/general Python in HEP?
This affects the
pyhf docs, but I haven't checked yet if it affects any other Scikit-HEP tools. Probably worth a quick
git grep on all your projects.
Looks like pyhf is the only one. I generally don't use URL shorteners, though I can guess why you might be using one there. There's also an all-repos extension that will replace git.io links automatically (probably has to be done before git.io gets turned off, though, to get the replacement link).
$ all-repos-grep "git.io"
output/scikit-hep/pyhf:README.rst: >>> wspace = pyhf.Workspace(requests.get("https://git.io/JJYDE").json())
output/scikit-hep/pyhf:src/pyhf/cli/infer.py: $ curl -sL https://git.io/JJYDE | pyhf fit --value
output/scikit-hep/pyhf:src/pyhf/cli/infer.py: $ curl -sL https://git.io/JJYDE | pyhf cls