Dear colleague,
We are pleased to announce the second "Python in HEP" (PyHEP) workshop organised by the HEP Software Foundation (HSF). The PyHEP workshops aim to provide an environment to discuss and promote the use of Python in the HEP community at large.
PyHEP 2019 will be held in Abingdon, near Oxford, United Kingdom, on 16-18 October 2019.
The workshop will be a forum for the participants and the community at large to discuss developments of Python packages and tools, exchange experiences, and steer where the community needs and wants to go. There will be ample time for discussion.
The agenda will be composed of plenary sessions, highlights of which include the following:
1) A keynote presentation from the Data Science domain.
2) A topical session on histogramming including a talk and a hands-on tutorial.
3) Lightning talks from participants.
4) Presentations following up from topics discussed at PyHEP 2018.
We encourage community members to propose presentations on any topic (email: pyhep2019-organisation@cern.ch). We are particularly interested in new(-ish) packages of broad relevance.
The agenda will be made available on the workshop indico page (https://indico.cern.ch/event/833895/) in due time. It is also linked from the PyHEP WG homepage http://hepsoftwarefoundation.org/activities/pyhep.html.
Registration will open very soon, and we will provide detailed travel and accommodation information at that time.
Travel funds may be available at a modest level; this will be confirmed once registration opens.
You are encouraged to join the PyHEP WG Gitter channel (https://gitter.im/HSF/PyHEP) and/or the HSF forum (https://groups.google.com/forum/#!forum/hsf-forum) to receive further information concerning the organisation of the workshop.
Looking forward to your participation!
Eduardo Rodrigues & Ben Krikler, for the organising committee
plt.hist2d(nMIP, refMult, bins=[150, 50], cmap=plt.cm.jet)
All right, next problem (sorry if I'm overly bugging you all): histogram fitting. I can do this fairly easily in ROOT, but I'm having a lot of trouble in Python. For whatever reason, I can't seem to find a tutorial that includes this. All the fitting tutorials give errors when I try to fit a 2D histogram. I'm making the histogram as I did in the above post:
plt.hist2d(mipVref[0], mipVref[1], bins=[150, 50],
cmap=plt.cm.get_cmap("afmhot"))
I've tried curve_fit and Model, but to no avail. Any pointers to a specific method to fit 2D histos? Thanks!
hist2d returns its data (here): you get (h, xedges, yedges, image). You can define your function f(x, y) and make the points h the dependent variable in your fit using curve_fit (don't forget to convert xedges and yedges to the centers of the bins); here's an example of a 2D Gaussian fit.
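For example, here is a minimal sketch of that recipe (using the mipVref arrays from the question above; the Gaussian shape and starting values are only illustrative):

import numpy as np
from scipy.optimize import curve_fit

# Bin the data ourselves so the counts and edges are directly accessible.
h, xedges, yedges = np.histogram2d(mipVref[0], mipVref[1], bins=[150, 50])
xcenters = 0.5 * (xedges[:-1] + xedges[1:])
ycenters = 0.5 * (yedges[:-1] + yedges[1:])

# Illustrative 2D Gaussian model; xy is a tuple of flattened center grids.
def gauss2d(xy, amp, x0, y0, sx, sy):
    x, y = xy
    return amp * np.exp(-0.5 * (((x - x0) / sx) ** 2 + ((y - y0) / sy) ** 2))

# Grids of bin centers, matching the (x, y) index order of h.
X, Y = np.meshgrid(xcenters, ycenters, indexing="ij")
p0 = [h.max(), xcenters.mean(), ycenters.mean(),
      (xedges[-1] - xedges[0]) / 4, (yedges[-1] - yedges[0]) / 4]
popt, pcov = curve_fit(gauss2d, (X.ravel(), Y.ravel()), h.ravel(), p0=p0)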
I checked out a few things, and am hitting a snag when it comes to getting the function portion down. Here's my relevant code (much of which I adapted from others on Stack Exchange):
import numpy as np
from scipy.interpolate import interp2d
import matplotlib.pyplot as plt

H, xedges, yedges = np.histogram2d(mipVref[0], mipVref[1], bins=[100, 100])

def centers(edges):
    return edges[:-1] + np.diff(edges[:2])/2

xcenters = centers(xedges)
ycenters = centers(yedges)

pdf = interp2d(xcenters, ycenters, H)
plt.pcolor(xedges, yedges, pdf(xedges, yedges), cmap=plt.cm.get_cmap("hot"))
The issue is that the plot is almost an inverse of the actual distribution. It should look like this:
[image 1: expected distribution]
But it looks like this:
[image 2: what the code above produces]
Sorry for the long post!
New question: I have a working model using Keras with a TensorFlow backend. My goal is to get the final function coded into ROOT, as that's what we would need for the metric we're making. So, how do I make sense of the weights? Here's the relevant code:
model = Sequential()
model.add(Dense(16, input_dim=16, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(data[:, 1:], data[:, 0], epochs=30, batch_size=50)
I'm trying it with only one relu layer with 2 neurons in order to get a feel for how it all works, and it returns the following array sizes:
(16, 16), (1, 16), (16, 1), (1, 1)
Just from dimensional analysis (I'm using 16 inputs to get 1 output), I would think that the 16 inputs combine with the (16, 16) array to give a (1, 16) array, which adds the (1, 16) array elements, and then the (16, 1) array reduces that to a single element to use in the sigmoid (with the (1, 1) value being the additional term). Writing weights times inputs as w.x, plus their offsets, this would be:
x1 = w1.x + b1, element-wise over b1
w1 -> (16, 16), b1 -> (1, 16)
output = 1/(1 + exp(-(w2.x1 + b2)))
w2 -> (16, 1), b2 -> (1, 1)
Do I understand this correctly? I'm trying to hand-reproduce the final predictions from the model so I can run them in ROOT.
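A minimal sketch of checking that by hand, assuming the two-layer model above (Keras Dense layers store their biases as 1-D vectors, so model.get_weights() gives shapes (16, 16), (16,), (16, 1), (1,)); note that the hidden layer also applies ReLU before its output is passed on:

import numpy as np

# Kernel and bias of the hidden Dense(16) layer, then of the output Dense(1) layer.
W1, b1, W2, b2 = model.get_weights()

def predict_by_hand(x):
    # x is an (n_samples, 16) array, like data[:, 1:] above.
    hidden = np.maximum(0.0, x @ W1 + b1)              # Dense(16) + ReLU
    return 1.0 / (1.0 + np.exp(-(hidden @ W2 + b2)))   # Dense(1) + sigmoid

# This should agree with model.predict(x) up to floating-point rounding,
# and it is the formula to transcribe into a ROOT/C++ function.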
@henryiii I think that's what I'm looking for; I was trying to reconstruct the mathematics of the handoffs, but I think I was missing a layer or two in the exchange. It seems it's not quite as simple as I had it (I thought ReLU would be like a delta plus and sigmoid like a Boltzmann). I'm reading through that repo now. Thanks!
@tunnell It's not nearly as ambitious as running TensorFlow in ROOT; I'm just looking for a way to execute the prediction model that was generated. I thought it could be exported as a simple mathematical formula (as all it's doing, in the end, is putting weights on inputs and using those for an output). I have the shapes and weights printed out; basically, I'm looking to reconstruct the prediction algorithm (not further refine it or anything like that; I consider all training done once I get out of Python) for use in ROOT. If it's just a mathematical formula, which I should think it would have to be, it ought to be readily programmable in any language without much fuss, no?
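If the goal is just to carry the trained numbers over to ROOT, one hypothetical approach is to dump the weights to a plain file and re-code the sigmoid(relu(x.W1 + b1).W2 + b2) formula there; the output file name below is made up:

import json

# Dump the trained weights so a standalone ROOT/C++ macro can rebuild
# the prediction formula without any Python or TensorFlow dependency.
weights = [w.tolist() for w in model.get_weights()]
with open("model_weights.json", "w") as f:   # hypothetical output file
    json.dump(weights, f)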
datetime first? It may be an issue with your install. You could try uninstalling and reinstalling NumPy.
The asymmetry between h1 and h2 is (h1 - h2)/(h1 + h2). Boost-histogram and Hist have basic math for histograms with the same axes: I know they have addition and subtraction, and they probably have division as well. The histograms are assumed to have Poisson statistics, and though ROOT's documentation doesn't say it, I would assume that h1 and h2 are taken to be independent.
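A small sketch of that asymmetry with boost-histogram, working on the bin contents via .view() rather than relying on histogram-by-histogram division (the axis and fill data are made up):

import numpy as np
import boost_histogram as bh

# Two histograms with identical axes (made-up example data).
axis = bh.axis.Regular(50, -5, 5)
h1 = bh.Histogram(axis)
h2 = bh.Histogram(axis)
h1.fill(np.random.normal(+0.5, 1.0, 10000))
h2.fill(np.random.normal(-0.5, 1.0, 10000))

# Per-bin asymmetry (h1 - h2)/(h1 + h2), guarding against empty bins.
num = h1.view() - h2.view()
den = h1.view() + h2.view()
asym = np.divide(num, den, out=np.zeros_like(num), where=den != 0)

# Assuming independent Poisson counts (no weights), error propagation gives
# sigma_A**2 = 4*n1*n2 / (n1 + n2)**3 per bin, where n1, n2 are the bin counts.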
Hi @jpivarski,
That is indeed true - but a look at the source code reveals that in reality there is also weighting being applied to the result. I was wondering if it had already been implemented somewhere before I "reinvent the wheel", so to speak!
I think it would hopefully see some use, @henryiii. I work in hadron structure, and a lot of observables boil down to some sort of beam-spin asymmetry!
(Sorry that I'm reposting this everywhere; I want everyone to be warned.)
The Awkward/Uproot name transition is done, at least at the level of release candidates. If you do
pip install "awkward>=1.0.0rc1" "uproot>=4.0.0rc1"
you'll get Awkward 1.x and Uproot 4.x. (They don't strictly depend on each other, so you could do one, the other, or both.)
If you do
pip install "awkward1>=1.0.0rc1" "uproot4>4.0.0rc1"
you'll get thin awkward1 and uproot4 packages that just bring in the appropriate awkward and uproot and pass names through. This is so that uproot4.whatever
still works.
If you do
pip install awkward0 uproot3 # or just uproot3
you'll get the old Awkward 0.x and Uproot 3.x, which you can import ... as ... under the old names. This also brings in uproot3-methods, which is a new name just to avoid the compatibility issues with old packages that we saw last week.
All of the above are permanent; they will continue to work after Awkward 1.x and Uproot 4.x are full releases (not release candidates). However, the following will bring in old packages before the full release and new packages after the full release.
pip install awkward uproot
So it is only the full release that will break scripts, and only when users pip install --upgrade. I plan to take that step this weekend, when there might be fewer people actively working. It also gives everyone a chance to provide feedback or to take action with the import ... as ... pattern.
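For anyone pinning to the old versions, the import ... as ... pattern above would look like this (a sketch; the file and tree names are hypothetical):

# Keep old scripts working by importing the old packages under their old names.
import awkward0 as awkward   # Awkward 0.x
import uproot3 as uproot     # Uproot 3.x

# Hypothetical usage, unchanged from pre-transition code:
tree = uproot.open("myfile.root")["Events"]
arrays = tree.arrays()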
(Sorry for the reposting, if you saw this message elsewhere.)
Probably the last message about the Awkward Array/Uproot name transition: it's done. The new versions have moved from release candidates to full releases. Now when you
pip install awkward uproot
without qualification, you get the new ones. I think I've "dotted all the 'i's of packaging" to get the right dependencies and tested all the cases I could think of on a blank AWS instance.
pip install awkward0 uproot3
returns the old versions (Awkward 0.x and Uproot 3.x). The prescription for anyone who needs the old packages is import awkward0 as awkward and import uproot3 as uproot.
pip install awkward1 uproot4
returns thin wrappers of the new ones, which point to whatever the latest awkward and uproot are. They pass through to the new libraries, so scripts written with import awkward1, uproot4 don't need to be changed (though you'll probably want to, for simplicity).
uproot-methods no longer causes trouble because there's an uproot3-methods in the dependency chain: awkward0 → uproot3-methods → uproot3. The latest uproot-methods (no qualification) now excludes Awkward 1.x so that they can't be used together by mistake.

@dano0014 What happens when you try
puls.array() # get it as an Awkward Array (fast, if possible)
or
puls.array(library="np") # get it as a NumPy array of Python objects
That is, try to use the default interpretation (which can be seen without the table limitations using puls.interpretation).
Oh, I see: the Pulse class failed to get an interpretation, but instead of raising an error (as it should), it returned None. That would be a bug. I can investigate it if you post it as a GitHub Issue with the original file (rawdaq_1810251534.root).