Thomas Aarholt
@thomasaarholt
We should add an explanation under "Model Fitting" and add a link to it from "Binned and Unbinned signals".
Sharing this job advert here in case it's of interest to anyone. It's for an image analysis group leader position in Germany (Forschungszentrum Juelich). Unfortunately, I think I'm too much of an experimentalist.
F00lWise
@F00lWise
[image attachment: image.png]
Signal2D.isig[] is messing with the axes
Hi all, I recently discovered HyperSpy and was very inspired by the concept, so I decided to try it out for the evaluation of a four-dimensional spectroscopy dataset I am working on.
But something weird happens with the axes when I use indexing: a simple indexing operation seems to change the order and size of the axes, which produces silent errors in the result.
Does anybody know what is going on here, and how I can ensure consistency between the axes manager and the size of the dataset?
In the image example I posted above, you see how indexing switched the axis information of the navigation dimensions.
FuhuiSHAO
@FuhuiSHAO
When slicing, float values are interpreted on the calibrated axis scale (the axes_manager's scale) to select the data.
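As a rough numpy-only sketch (the calibration numbers here are made up, and the exact rounding HyperSpy uses may differ), float-based slicing maps a calibrated value back to an array index roughly like this:

```python
# Hypothetical calibration of one axis: offset and scale
offset, scale = 100.0, 0.5

def value_to_index(value):
    # A float passed to .isig is converted from calibrated
    # units back to an array index along these lines.
    return int(round((value - offset) / scale))

print(value_to_index(102.0))  # 4
```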
Thomas Aarholt
@thomasaarholt
That's interesting. I can't seem to reproduce it with a minimal example:
import hyperspy.api as hs
import numpy as np

data = np.zeros((63, 51, 400, 975))
s = hs.signals.Signal2D(data)
s
# <Signal2D, title: , dimensions: (51, 63|975, 400)>

s.isig[2:4, 2:3]
# <Signal2D, title: , dimensions: (51, 63|2, 1)>
Any chance you could reproduce in a smaller example?
Thomas Aarholt
@thomasaarholt
@FuhuiSHAO is there any chance you fed hyperspy the wrong (swapped) axes sizes in the dicts in your first line? And then hyperspy computes the correct ones when you use isig?
Thomas Aarholt
@thomasaarholt
I'm pretty sure that is what was happening:
import hyperspy.api as hs
import numpy as np

data = np.zeros((63, 51, 400, 975))

ax1 = {'size' : 63}
ax2 = {'size' : 51}
ax3 = {'size' : 400}
ax4 = {'size' : 975}

s = hs.signals.Signal2D(data, axes=[ax2, ax1, ax3, ax4])
s
# <Signal2D, title: , dimensions: (63, 51|975, 400)>
s.isig[1:2, 3:4]
# <Signal2D, title: , dimensions: (51, 63|1, 1)>
Thomas Aarholt
@thomasaarholt
Here I fed ax1 and ax2 in the wrong order. hyperspy accepts this, but then updates it when slicing the signal.
FuhuiSHAO
@FuhuiSHAO
yes they are always switched.
Thomas Aarholt
@thomasaarholt
You may have misunderstood me.
I mean that you wrote the wrong sizes in a_in_axes_dict and en_in_axis_dict.
FuhuiSHAO
@FuhuiSHAO
I see, thanks!
Thomas Aarholt
@thomasaarholt
You're welcome :)
F00lWise
@F00lWise

Here I fed ax1 and ax2 in the wrong order. hyperspy accepts this, but then updates it when slicing the signal.

Thank you Thomas! I indeed found an error in the sizes of the input axis dictionaries.
This fixed my problem.
Perhaps a quality-of-life improvement could be an assertion checking the axis sizes in the signal's init method? Since I had to define the axes manually (I had them as vectors), it is quite easy to mistype a value or specify the axes in the wrong order.
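The suggested check could look something like the sketch below (a hypothetical standalone helper, not part of the HyperSpy API; it only compares each axis dict's 'size' entry against the data shape):

```python
import numpy as np

def check_axes_sizes(data, axes_dicts):
    # Compare each axis dict's declared 'size' against the
    # corresponding dimension of the data before building a signal.
    shape = np.asarray(data).shape
    for dim, (n, ax) in enumerate(zip(shape, axes_dicts)):
        if ax.get('size', n) != n:
            raise ValueError(
                f"Axis {dim} ({ax.get('name', '?')}): dict says size "
                f"{ax['size']}, but the data has size {n} on that dimension."
            )

data = np.zeros((63, 51))
# Correct order passes silently; swapped sizes would raise ValueError.
check_axes_sizes(data, [{'name': 'y', 'size': 63}, {'name': 'x', 'size': 51}])
```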

Thomas Aarholt
@thomasaarholt
Yes, I raised something similar in #2345 (my second post in that thread, under "Data axis size").
The DataAxis class is getting a big update with the non-uniform axes PR #2399; maybe we can add something there.
Thomas Aarholt
@thomasaarholt
I made #2692. Please provide feedback.
Mingquan Xu
@Mingquan_Xu_twitter
Hi, a simple question: how do I install HyperSpy on a MacBook with an M1 processor?
Katherine E. MacArthur
@k8macarthur
@Mingquan_Xu_twitter I have an Intel processor on my MacBook, but I found the Anaconda install option to work fine. Instructions can be found here: http://hyperspy.org/hyperspy-doc/current/user_guide/install.html
Alternatively pip install should also work.
Do give another shout if you get stuck though.
Mingquan Xu
@Mingquan_Xu_twitter
@k8macarthur, I have tried to install Anaconda, but it seemingly cannot be opened on my MacBook.
SANMoya
@SANMoya

Hi everyone, I am not sure if this is the right place to post this, but I encounter an error when Atomap is refining Gaussians on a second sublattice of a nanoparticle. The error repeats many times over, but the Gaussian fitting does progress to the end.

WARNING:hyperspy.model:Covariance of the parameters could not be estimated. Estimated parameter standard deviations will be np.nan.
WARNING:hyperspy.model:m.fit() did not exit successfully. Reason: 'Number of calls to function has reached maxfev = 5000.'
It works perfectly fine for the first sublattice despite it having more atom columns.
The warning appears at about 30% of the Gaussian refinement. Initially I thought it might be the number of atoms, so I cropped the image to a few rows, but the same thing still happened. I then tried changing the maxfev value in the atom_finding_refining script to 5000; the error persists, just with the updated maxfev = 5000 value being reported.

Your help in any way is much appreciated,

Thanks

Eoghan O'Connell
@PinkShnack
Hey @SANMoya, it might be good for you to create an issue on the Atomap repository here: https://gitlab.com/atomap/atomap/-/issues. I have seen this issue sometimes. It happens (as the warning says) because the function can't be fit to the data points. I've found it usually happens when I've messed up the initial atom finding (overlapping atom positions, incomplete atom intensities). However, it has been happening more lately, so perhaps Atomap/HyperSpy have implemented some changes that have affected the Gaussian fitting. You could install a previous version of the packages, e.g. pip install --upgrade atomap==0.2.1.
Perhaps @magnunor and @thomasaarholt would have a good idea. Regardless, create an issue over at Atomap and provide a minimal working example so that the developers can see what is going wrong.
Magnus Nord
@magnunor
@SANMoya, @PinkShnack, working on a fix for this now :) The error message is from scipy.optimize.leastsq and is due to the least-squares fitter using too many iterations, probably because the atomic columns are not well resolved.
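For context on where that limit lives (this is a made-up toy fit, not the Atomap code path): SciPy's Levenberg–Marquardt routine caps the number of function evaluations at maxfev, and a fit that starts from a sensible guess converges long before hitting it. Badly initialised positions on noisy data are what exhaust the budget and trigger the "reached maxfev" message.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    # 1D Gaussian, the same model shape Atomap fits to atom columns.
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

x = np.linspace(-5.0, 5.0, 200)
rng = np.random.default_rng(0)
y = gaussian(x, 1.0, 0.0, 1.2) + 0.01 * rng.normal(size=x.size)

# With a reasonable starting guess this stays well inside maxfev.
popt, pcov = curve_fit(gaussian, x, y, p0=[0.8, 0.2, 1.0], maxfev=5000)
```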
SANMoya
@SANMoya
@PinkShnack thank you for your response. I will raise the issue on GitLab. The initial refinement works perfectly fine, with no issues locating atoms. Thank you @magnunor. The images are atomically resolved, but they are not STEM-ADF images; they are restored focal-series exit wave phases. The second sublattice is that of O atoms, which may have a phase comparable to the vacuum noise or the phase shift between atoms, hence the failure.
Tom Slater
@TomSlater
Hi all. I'm having some issues with the widget backend when using jupyterlab through a google cloud platform hosted kernel. When plotting, the figure is created but nothing is displayed. There are no error messages either. We've installed ipympl via conda but we may have missed an important step in the setup. Has anyone experienced something similar? It may be an issue with the remote kernel on google cloud platform (currently investigating), but thought I'd check if anyone knew of something we'd missed in the setup.
Eric Prestat
@ericpre
If this is with JupyterLab 3, pip install ipympl or conda install ipympl is enough; there is no need to build the lab extension. Is that what you are asking?
Tom Slater
@TomSlater
Yes, that's what I was asking (in a convoluted way). It is jupyterlab 3 so should have been straightforward. We'll keep investigating.
Eric Prestat
@ericpre
The following works fine for me in jupyterlab 3, matplotlib 3.4.1 and ipympl 0.7:
%matplotlib widget
import matplotlib.pyplot as plt
plt.plot([0, 1, 2])
Tom Slater
@TomSlater
I can get it to work locally with the same installation procedure so it must be an issue with the cloud platform.
Thomas Aarholt
@thomasaarholt
You may want to post on the ipympl repo.
In case others are seeing similar things.
There's also a jupyter-widgets Gitter that is probably even easier/better.
Eric Prestat
@ericpre
Are the ipywidgets working?
Tom Slater
@TomSlater
ipywidgets seem to be working. I can create a basic slider okay. Thanks @thomasaarholt , I'll see if anyone has any experience using different backends on cloud platform or similar.
Thomas Aarholt
@thomasaarholt
Perhaps check the browser's element inspector to see if any JavaScript is throwing errors; then you'd have something concrete to post.
NathalieBrun
@NathalieBrun
Hi everyone. In HyperSpy's cluster analysis, is there a simple way to retrieve scikit-learn's original estimator.labels_ (a list of labels) instead of HyperSpy's learning_results.cluster_labels (a Boolean matrix)?
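If cluster_labels is a Boolean membership matrix with one row per cluster, a numpy argmax along the cluster axis should recover scikit-learn-style integer labels. The toy matrix below is made up, and the axis convention (clusters on axis 0) is an assumption worth checking against the shape of your actual matrix:

```python
import numpy as np

# Toy Boolean membership matrix: 3 clusters x 6 samples (made up).
cluster_labels = np.array([
    [True,  False, False, True,  False, False],
    [False, True,  False, False, True,  False],
    [False, False, True,  False, False, True],
])

# One integer label per sample, like scikit-learn's estimator.labels_.
labels = np.argmax(cluster_labels, axis=0)
print(labels)  # [0 1 2 0 1 2]
```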
rpsankaran
@rpsankaran
Hi, I am getting a MemoryError when trying to open a 512 × 512 × 4096 uint8 dataset (only MBs, right?!). I think I installed the latest HyperSpy. The documentation says that on a MemoryError you should check that you have 64-bit HyperSpy on a 64-bit machine, but I don't know how to check whether the HyperSpy I pip-installed is "64 bit". Anyway, I have previous experience with HyperSpy, I have 32 GB of RAM, and I'm opening something like a 6 MB file. If anyone could help, I would be very grateful.
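On the 64-bit question: what matters is the Python interpreter itself (and hence the numpy it pulls in), not HyperSpy as such. A quick way to check from the interpreter:

```python
import struct
import sys

# A 64-bit interpreter uses 8-byte pointers and a large sys.maxsize.
bits = struct.calcsize("P") * 8
print(f"{bits}-bit Python; sys.maxsize > 2**32 is {sys.maxsize > 2**32}")
```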
rpsankaran
@rpsankaran
Another question: my colleague was interested in the HyperSpy executable. When I looked at its internals, it looked like a Python environment. I am a bit confused about how to use it, as I saw no obvious "exe" to open files.
Katherine E. MacArthur
@k8macarthur
@rpsankaran that is correct. The HyperSpy bundle installs as a completely independent local Python installation, so it is kept separate from any other libraries installed in your PC's main Python environment. There is now more than one bundle type, but you can find the WinPython distribution for direct installation on Windows here:
https://github.com/hyperspy/hyperspy-bundle/releases
Niels Cautaerts
@din14970
@ericpre looks like some issue with the docs again; the math didn't render correctly, at least on http://hyperspy.org/hyperspy-doc/current/user_guide/mva.html
Thomas Aarholt
@thomasaarholt
I notice that model fitting doesn't currently work for complex numbers. Has anyone looked at adding support for this? My friend wants to do curve fitting on dielectric functions.
There are a few issues currently:
  • model estimation assumes float64, not complex128 when needed
  • I don't know how the scipy fitting routines will interact with complex data
  • probably some more :)
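On the second bullet, one common workaround (just a numpy sketch of the general technique, not HyperSpy API) is to split complex residuals into stacked real and imaginary parts so a real-valued least-squares routine can minimise them. For a linear model this reduces to an ordinary lstsq:

```python
import numpy as np

def fit_complex_scale(x, y):
    # Fit y ~ c * x for complex c by stacking real equations:
    # [Re(y); Im(y)] is linear in [Re(c), Im(c)] when x is real.
    A = np.block([[x[:, None], np.zeros((len(x), 1))],
                  [np.zeros((len(x), 1)), x[:, None]]])
    b = np.concatenate([y.real, y.imag])
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[0] + 1j * sol[1]

x = np.linspace(1.0, 10.0, 20)
c_true = 2.0 - 0.5j
c_fit = fit_complex_scale(x, c_true * x)
print(c_fit)  # (2-0.5j)
```

The same real/imaginary stacking trick applies to nonlinear residuals fed to an iterative fitter.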
lnaglecocco
@lnaglecocco

I have imported a Raman map scan into HyperSpy as a Signal1D, with the x-y directions of the map as the navigation axes. This is the code I used:

dict0 = {'name': 'x', 'size': len(data_y)}
dict1 = {'name': 'y', 'size': len(data_y[0])}
dict2 = {'name': 'intensity', 'size': len(data_y[0][0])}
s_hs = hs.signals.Signal1D(np.array(data_y), axes=[dict0, dict1, dict2])
factor = len(s_hs.axes_manager['intensity'].axis) / (max(data_x[0][0]) - min(data_x[0][0]))
s_hs.axes_manager['intensity'].scale = 1 / factor
s_hs.axes_manager['intensity'].offset = min(data_x[0][0])

where data_y is an array of rows along the x axis, each row being a set of arrays, each of which is a Raman spectrum (the intensity). data_x has the same structure, but each pixel holds the wavenumber shifts corresponding to each data point in data_y.

I was able to load the data into Hyperspy and also get the wavenumber axis scaled properly. I have two questions and would be very grateful for assistance. Firstly, is there a better way to synchronise the x axis with the data? Or is the way I've done it fine?

Secondly, when I try to go to a specific region of navigation or signal space, e.g. s_test = s_hs.inav[5, 5], the x-axis data isn't carried along and the new s_test object has an x axis that is just integers from 0 to the size of the data set. Is there a way around this? This latter question is my main concern.
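On the first question: if memory serves, the axis dictionaries passed at construction also accept 'offset', 'scale', and 'units' keys, which keeps the calibration attached to the axis from the start instead of patching it afterwards. A numpy-only sketch of computing those from a wavenumber vector (the numbers are made up; note that for N uniformly spaced points spanning [min, max] the scale is (max - min) / (N - 1), not (max - min) / N as in the snippet above):

```python
import numpy as np

# Hypothetical wavenumber axis for one pixel (uniform spacing assumed).
wavenumbers = np.linspace(200.0, 3200.0, 1001)

offset = wavenumbers[0]
scale = (wavenumbers[-1] - wavenumbers[0]) / (len(wavenumbers) - 1)

axis_dict = {'name': 'intensity', 'units': '1/cm',
             'size': len(wavenumbers), 'offset': offset, 'scale': scale}

# offset + scale * arange should reproduce the original axis exactly.
reconstructed = offset + scale * np.arange(len(wavenumbers))
print(np.allclose(reconstructed, wavenumbers))  # True
```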
