I've started following the
[uproot] tag on StackOverflow, and will follow an
[awkward-array] tag if somebody creates a question about Awkward there.
A few questions have come in on
[uproot] and one of them was actually about
[awkward-array], so I took that opportunity to create the second tag. Apparently, I can only set up email notifications after the tag has existed for a little while (some database needs to sync), so I'm listening to
[uproot] now and will be listening to [awkward-array] soon.
import boost_histogram as bh
import uproot
import zfit
from particle import Particle
from iminuit import Minuit
from decaylanguage import DecFileParser
import ROOT
Not every library needs a two-letter abbreviation. I think that
hepunits loses usefulness if its usage involves weird
u.MeV type things. I think there's an argument for
from hepunits import MeV, GeV, mm, cm

0.510 * MeV   # electron mass
125.2 * GeV   # Higgs mass
0.456 * mm    # B0 ctau
2.685 * cm    # K0 ctau
since these objects exist primarily for readability.
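For scale, hepunits follows the CLHEP-style HEP system of units, in which MeV = 1, so these names are plain numbers and converting between units is just division by the target unit. A minimal sketch:

from hepunits import MeV, GeV

mass = 125.2 * GeV   # stored in HEP base units (MeV = 1)
mass / MeV           # 125200.0 - convert by dividing by the desired unit
mass / GeV           # 125.2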
By the way, Awkward's official two-letter abbreviation will be
import awkward1 as ak
as in "ak! ak! this array is so awkward!". I've seen some
import awkward as awk
in the wild and I want to avoid mental overlap with Awk (which is a rather nice language for its domain, but still).
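To show the recommended spelling in action, a trivial example (array contents made up):

import awkward1 as ak

array = ak.Array([[1.1, 2.2, 3.3], [], [4.4, 5.5]])   # a jagged array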
... as awk is not a good option ...
These are about recommendations. I don't think I'd recommend
import hepunits as u
which is an extra step (
as u) for the sake of having a name around that's more likely to get clobbered than
GeV. For instance, there could easily be a step in a calculation like
u, v = math.cos(theta), math.sin(theta)
that silently clobbers the u, whereas an accidental
MeV = ...
would not be likely at all. It's about the names humans are likely to use and notice as being different. I wouldn't recommend
from hepunits import *
though. A given script is probably only going to use a couple of units, and pulling them into the namespace deliberately is not an abuse.
With pyforest, you can call active_imports() and get the list of imports you need to paste into your script. :)
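For context, a session with pyforest loaded might look like this (output illustrative; assumes pyforest's documented active_imports helper):

from pyforest import *   # installs lazy placeholders like pd, np, ...

df = pd.DataFrame({"x": [1, 2, 3]})   # pandas is really imported here, on first use
active_imports()                      # e.g. ['import pandas as pd']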
It has the
from pylab import * touch, which is now, for good reasons, deprecated. For IPython, I have a startup script full of
import package as pkg with suppressed
ImportError warnings because, yeah, it's otherwise a pain.
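Such a startup script might look like the following (IPython picks up files from the profile's startup directory; the package list is illustrative):

# ~/.ipython/profile_default/startup/00-imports.py
from contextlib import suppress

with suppress(ImportError):   # skip quietly if a package isn't installed
    import numpy as np
with suppress(ImportError):
    import pandas as pd
with suppress(ImportError):
    import boost_histogram as bh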
It’s only for interactive work
I see the use case, and it's nice. But I also see the abuse cases and prefer, e.g., template Python modules combined with an "optimize imports" action in my IDE that removes all unnecessary imports. No magic, just clean...
(or better, what I actually do: an editor that imports modules like
pd with a single shortcut when I use them)
For IPython, I have a startup script full of import package as pkg
This is exactly what it can and should be used for - and your startup script slows down your IPython startup time even if you don't use all the packages. If you copy your startup script to
~/.pyforest/user_imports.py, you will have instant startup times again, and things get imported when you use them. And you gain the ability to quickly see what you have used. I don't use the
from pyforest import * at all; it's just an extension to IPython. I rather wish that weren't there, because of the potential for misuse.
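Going by the description above, that file is just ordinary import statements, which pyforest then defers until first use; for example (contents illustrative):

# ~/.pyforest/user_imports.py
import boost_histogram as bh
import uproot
import zfit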
It contains 2 submodules:
modeling: with the Bayesian Blocks algorithm that moved from the scikit-hep package.
hypotest: aims to provide tools to do likelihood-based hypothesis tests, such as discovery tests and computations of upper limits or confidence intervals. Currently a discovery test
using asymptotic formulae is available. More functionality will be added in the future.
Suggestions are welcome, and feel free to give it a try.
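As a quick illustration of the modeling piece - assuming bayesian_blocks is exposed at the top of that submodule - a first try might look like:

import numpy as np
from hepstats.modeling import bayesian_blocks  # assumed import path

# toy sample: a narrow peak over a flat background
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(5.0, 0.1, 300), rng.uniform(0, 10, 1000)])

edges = bayesian_blocks(data)               # adaptive, data-driven bin edges
counts, _ = np.histogram(data, bins=edges)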
Otherwise we will lose users.
I would argue not many, or maybe not any. If we add a new package, regardless of the requirements, we can't lose a user. If we add a new release of a package that is Python 3 only, existing users will still get the old Python 2 version, and again, won't be lost. If experiments are using our code, they would fall into this category. The only ones I think we would risk losing would be new users who are Python 2 only and want to start using our code - but a) I don't think we have that many, b) they will still be able to use older versions, and c) they already have to be using older versions of numpy, SciPy, matplotlib, and IPython, so our packages would just be one more thing.
Keep in mind, the cost of keeping Python 2 compatibility is non-zero. It requires more complex code, limits use of time-saving features, adds extra checks, increases binary build time, etc. It can also hamper the package API and user experience in Python 3. We could use the time we spend fiddling with Python 2 code, or writing code to be compatible with both, to develop new libraries and features instead.
Obviously, the final decision for any package is made on a per-case basis by the core maintainers of that package. I still support Python 2.6 in Plumbum, for example.
But we cannot afford to be as aggressive as some
I don't think we can afford to be less aggressive than numpy - I don't think the HEP community should take over maintaining numpy. Similarly with Python: over time, the old versions will stop working with anything new, or on new systems, etc. I already had to drop Pandas from Particle/DecayLanguage, because they are missing some variants of Python 2 wheels and will not produce them - their hands are washed clean of Python 2 already.
We also set python_requires in our setups, so that pip 9+ will automatically find the last supported Python 2 version, etc. The idea is not to cause Python 2 to stop working, but to stop producing new features for Python 2 (more or less). It will be very interesting after January, though, to see how long things hold together...
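Concretely, that is a one-line declaration in the packaging metadata; a minimal sketch (package name hypothetical):

from setuptools import setup

setup(
    name="mypackage",         # hypothetical
    version="2.0",
    python_requires=">=3.6",  # pip 9+ on Python 2 skips this release and falls
                              # back to the newest version that still allowed 2.7
)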