Thomas Aarholt
I really like your approach for creating a mask using random.choice!
  1. The vacuum pixels should indeed be masked in the way you describe. I'm not sure how masking works with samfire!
  2. I suggest you add a comment to that post, and perhaps delve into the source code to see if you can help shed some light on it.
Zezhong Zhang
@thomasaarholt Thanks for the comments! Sure, I will add the request for absolute intensity (the product) to the post, and dive a bit deeper into the source code.
Thomas Aarholt
Brilliant! Let us know how you get on - I'm a bit busy with other things, but can at least comment on it.
Mingquan Xu
When I use 'align_zero_loss_peak' to align the ZLP in my SI dataset, there is an error:
I have used this function before, but this is the first time I have seen this warning. What could cause it?
Could anyone give me suggestions to solve it? Thanks in advance!
Eric Prestat
is your hyperspy up to date?
Mingquan Xu

> is your hyperspy up to date?

The version is 1.6.2.

Eric Prestat
Which means that it is not up to date; the latest is 1.6.4, and this issue was fixed in 1.6.3.
Mingquan Xu

> which means that it is not up to date, latest is 1.6.4 and this issue has been fixed in 1.6.3

Thanks very much for your reply. I will update my HyperSpy and check.

Thomas Aarholt
What is a good way to save artificial lazy signals that are larger than memory? I notice that my ram consumption shoots up when I try saving a dask-created signal, even if I specify the chunks.
import hyperspy.api as hs
from hyperspy.axes import UniformDataAxis
import dask.array as da

from hyperspy.datasets.example_signals import EDS_SEM_Spectrum
from hyperspy._signals.eds_sem import LazyEDSSEMSpectrum
from hyperspy._signals.signal2d import LazySignal2D

s = EDS_SEM_Spectrum()
data = s.data
axis = UniformDataAxis(offset=-0.1, scale=0.01, size=1024, units="eV")

s2 = LazyEDSSEMSpectrum(data, axes=[axis])

nav = LazySignal2D(da.random.random((2500, 1000)))
s = s2 * nav.T

print("Shape:", s.data.shape)  # (2500, 1000, 1024) - ~20 GB
s.save("lazy.hspy", compression=None, overwrite=True, chunks=(100, 1000, 1024))
Håkon Wiik Ånes
Could this be a problem with dask 2021.04.0 and related to https://github.com/dask/dask/issues/7583#issue-863708913? We've pinned dask to below this version in kikuchipy because of sudden memory issues after 2021.04.0.
Thomas Aarholt
Possibly! I'll try with an older dask and see!
Hmm, could someone please check if conda create --name testdask hyperspy results in an error? I'm running mamba on my M1 Mac, and installing hyperspy is giving a really weird error today. Installing jupyter notebook works fine:
(base) ➜  ~ conda create --name testdask hyperspy
Collecting package metadata (current_repodata.json): done
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: |
Found conflicts! Looking for incompatible packages.
This can take several minutes.  Press CTRL-C to abort.

(No info after the UnsatisfiableError)
Thomas Aarholt
I've just gone over the linear fitting PR #2422 once again, and I'm considering squashing most commits so that it is one commit per "category" (Model, Components, Tests and Docs), and then force pushing to the PR branch.
I'm wondering if that makes it easier to review it, or if that just complicates matters. I've taken all comments that were in #2422 into consideration, so I think squashing would be clearer and shouldn't introduce confusion.
I've tested it, so the branch will look like this: https://github.com/thomasaarholt/hyperspy/commits/linear_fit_squash_into_files
Hi, not sure if this is the correct place to ask, but I am new to HyperSpy and was wondering if there is a way of selectively removing element contributions from an EDS_TEM spectrum?
Hi, I noticed that the existing integration for DENSsolutions log files is outdated. I started work on an io_plugin to load log files from our Impulse software, which are in csv format. However, there is already another io_plugin with the file_extension "csv", and if I simply add the new one, the existing one stops working. Is there a proper way in hyperspy to deal with this issue? Thanks!
Thomas Aarholt
Is there any way to distinguish the Impulse csv from others? In my book, a csv file should really only be delimited by commas and newlines. How is the Impulse one different?
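Not from the thread, just a sketch of one way an io_plugin could disambiguate files that share the ".csv" extension: peek at the first line before committing to a parser. The "TimeStamp" header token used here is purely hypothetical; real dispatch would check whatever marker the Impulse format actually guarantees.

```python
import io

def looks_like_impulse(stream):
    """Heuristic dispatch for files sharing the .csv extension.

    Assumes (hypothetically) that Impulse logs begin with a recognizable
    header column; the real marker would be format-specific.
    """
    header = stream.readline()
    stream.seek(0)  # leave the stream where the chosen parser expects it
    return "TimeStamp" in header  # hypothetical Impulse header column

impulse = io.StringIO("TimeStamp,Temperature,Pressure\n0.0,25.0,1.0\n")
generic = io.StringIO("a,b,c\n1,2,3\n")
print(looks_like_impulse(impulse), looks_like_impulse(generic))  # True False
```

The same idea works with a real file handle: sniff the header, then hand the stream to whichever reader claims it.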
I'm interested in HyperSpy for treating my STEM-EELS data, among other things.
I wanted to give HyperSpyUI a try first, but I cannot start the program.
I did the installation with conda and Python 3.7.
Attached is the log:
Apparently there is an issue with scipy.
Any idea what I could do to get it started?
Many thanks
Switching to Python 3.8 fixed the issue.
Maybe an indication of which Python version is needed would be helpful for others here:
Thomas Aarholt
Well done sorting it, and thanks for the good terminal output!
Do you know which aspect of 3.8 affected Scipy?
Eric Prestat
HyperSpyUI does support Python 3.7 and is known to work there. The error message you provided says that the error arises when importing hyperspy, so it seems something was wrong with your install; however, there is nothing obvious in the traceback to give an idea of where the error comes from. If it is working fine now, I would not worry too much about it! :)

> Do you know which aspect of 3.8 affected Scipy?

Sorry Thomas, I have no clue.

My collaborators and I are developing a machine learning algorithm based on the scikit-learn framework. We would like our algorithm to be as easy as possible to use from hyperspy. If I understood correctly, the way to go is to use the decomposition function and pass our estimator with the algorithm= keyword argument.
The problem we face is that our fit method takes some arguments that are not parameters of the estimator object itself, because they depend on the fitted data. (For example, but not limited to that, graph regularization needs a 2D shape input.)
I thought that since decomposition takes **kwargs they would be passed to the fit function of the algorithm object, but it seems I was wrong.
Is there any solution? Am I missing something?
Looking into the code of hyperspy, the **kwargs are not passed to the fit (or fit_transform) function of the custom algorithm. Shall I submit this as an issue on GitHub?
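One workaround (a sketch, not hyperspy API) is to bind the fit-time arguments into the estimator object before handing it to decomposition, so that a framework calling only `estimator.fit(X)` still gets them. `FitArgsWrapper` and `Dummy` are illustrative names, not real classes from any library.

```python
class FitArgsWrapper:
    """Wrap an estimator so extra fit-time arguments are pre-bound.

    Lets frameworks that only call `estimator.fit(X)` pass data-dependent
    arguments (e.g. a graph-regularization shape) fixed at construction time.
    """
    def __init__(self, estimator, **fit_kwargs):
        self.estimator = estimator
        self.fit_kwargs = fit_kwargs

    def fit(self, X, y=None):
        return self.estimator.fit(X, y, **self.fit_kwargs)

    def __getattr__(self, name):
        # delegate everything else (components_, transform, ...) to the estimator
        return getattr(self.estimator, name)

# minimal stand-in estimator to show the mechanism
class Dummy:
    def fit(self, X, y=None, shape=None):
        self.shape_ = shape
        return self

wrapped = FitArgsWrapper(Dummy(), shape=(2, 3))
wrapped.fit([[1.0, 2.0]])
print(wrapped.shape_)  # (2, 3)
```

The wrapped object still exposes the fitted attributes via delegation, so downstream code that reads e.g. `components_` should keep working.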
Jonas Lähnemann
LumiSpy v0.1.2 (https://lumispy.org) is out on PyPI and conda-forge now (and on the AUR for Arch users), for those who work with HyperSpy on e.g. CL and PL data. So far we have a limited number of additional functionalities besides the provision of dedicated signal classes. Conversion to energy or wavenumber axes is already included, and non-uniform axes support has matured enough to be included in the release_next_minor branch of HyperSpy. We're happy for any user feedback, but also for help from people who want to contribute routines to the project.
As a sidenote - while I was at it - I also submitted PKGBUILD files for pyxem and kikuchipy (and missing dependencies) to the AUR, in case any Arch user among you wants to test installing from there.
Felix Utama Kosasih
Hello all, I have a question about the 'ranking' of NMF components in HyperSpy. If I perform NMF on a dataset with n as the desired number of output components, how does HyperSpy decide which component is #0, #1, ..., #(n-1)? Is it also ranked by each component's proportion of the original data's variance, a la PCA?
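For context (not from the thread): as far as I know, the NMF solver returns components in whatever order it converges to - unlike PCA, there is no variance-based sorting built into the factorization itself. If a ranking is needed, one can compute it manually from the factorization; a sketch with a toy W @ H decomposition:

```python
import numpy as np

# Toy factorization X ~ W @ H with k = 3 components
rng = np.random.default_rng(0)
W = rng.random((100, 3))   # loadings (n_samples, k)
H = rng.random((3, 50))    # factors  (k, n_features)

# Energy of each rank-1 term W[:, i] (outer) H[i] as a proxy for its contribution
energy = np.array([np.linalg.norm(np.outer(W[:, i], H[i]))**2 for i in range(3)])
order = np.argsort(energy)[::-1]   # component indices, largest contribution first
```

Reordering `W[:, order]` and `H[order]` then gives a PCA-like "biggest first" presentation without changing the reconstruction.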
Eric R. Hoglund
Is there an argument I need to include for si.decomposition(algorithm='ORPCA') to return the variance? I ran standard SVD, then ORPCA, and am receiving the following:
AttributeError                            Traceback (most recent call last)
<ipython-input-99-659ba5d350c0> in <module>
----> 1 si.plot_explained_variance_ratio(threshold=4, xaxis_type='number')

c:\users\owner\documents\github\hyperspy\hyperspy\learn\mva.py in plot_explained_variance_ratio(self, n, log, threshold, hline, vline, xaxis_type, xaxis_labeling, signal_fmt, noise_fmt, fig, ax, **kwargs)
   1428         """
-> 1429         s = self.get_explained_variance_ratio()
   1431         n_max = len(self.learning_results.explained_variance_ratio)

c:\users\owner\documents\github\hyperspy\hyperspy\learn\mva.py in get_explained_variance_ratio(self)
   1311         target = self.learning_results
   1312         if target.explained_variance_ratio is None:
-> 1313             raise AttributeError(
   1314                 "The explained_variance_ratio attribute is "
   1315                 "`None`, did you forget to perform a PCA "

AttributeError: The explained_variance_ratio attribute is `None`, did you forget to perform a PCA decomposition?

I have a very dumb question:

X = np.random.rand(10,15,20)
si = hs.signals.Signal1D(X)
print(si.axes_manager[0].size, si.axes_manager[1].size)

I obtain:

Why is it that way?

Eric R. Hoglund
HyperSpy's Signal classes use image order for indexing, i.e. [x, y, z, …] (HyperSpy) vs […, z, y, x] (numpy).
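The reply above can be illustrated with pure shape bookkeeping, no HyperSpy needed (this is my sketch of the convention, assuming `axes_manager[0]` is the first navigation axis in HyperSpy's reversed order):

```python
# For hs.signals.Signal1D(np.random.rand(10, 15, 20)):
shape = (10, 15, 20)                       # numpy shape of the data
signal_ndim = 1                            # Signal1D: last numpy axis is the signal
nav_numpy = shape[:-signal_ndim]           # (10, 15) in numpy order
nav_hyperspy = tuple(reversed(nav_numpy))  # (15, 10) in HyperSpy's [x, y] order
print(nav_hyperspy)                        # (15, 10)
```

So `axes_manager[0].size` and `axes_manager[1].size` come out as 15 and 10, the reverse of the first two numpy dimensions.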
Hi All, I am having issues with loading a large EDX stack from hdf5. The following is what I am doing:
import hyperspy.api as hs
import h5py
import dask
import dask.array as da
f = h5py.File('data_path.hdf5', 'r')
edx_da = da.from_array(f['data_edx'], chunks=[1,128,128,4096])
edx_hs = hs.signals.EDSTEMSpectrum(edx_da).as_lazy()
# <LazyEDSTEMSpectrum, title: , dimensions: (512, 512, 98|4096)>
test = edx_hs.inav[:,:,0]
The above is giving me the following error:
ValueError: Not a dataset (not a dataset)
Any ideas why / how to fix?
Eric R. Hoglund
Also need assistance with EDX. I'm getting 0s across the board from non-zero intensities when using CL.
quant_elms = ['O_K', 'Si_K', 'Sc_K', 'Er_L', 'Nd_L', 'Yb_L', 'Lu_L']
quant_sigs = [result[i] for i in [10, -3, -5, 1, 8, -2, 4]]
quant_K = np.array([1.0, 0.885161, 0.98704, 1.45108, 1.61348, 1.6705, 1.69536])
print(pd.DataFrame(hs.stack(quant_sigs).data.T, columns=quant_elms))
quant = sroi_stack.quantification(quant_sigs, method='CL', factors=1/quant_K, composition_units='weight', max_iterations=100)
print(pd.DataFrame(hs.stack(quant).data.T, columns=quant_elms))

[########################################] | 100% Completed |  0.1s
         O_K       Si_K      Sc_K      Er_L      Nd_L      Yb_L      Lu_L
0  10.096292  10.968796  0.046850  0.051064  0.015106  0.048460  0.038284
1   6.665914   5.675549  2.449875  3.110012  1.423448  2.923658  2.596810
2   6.598061   5.350703  2.364560  3.020911  1.404513  2.716718  2.388128
3   7.454355   5.216240  0.247508  1.743997  7.806576  0.876016  0.638009
[########################################] | 100% Completed |  0.1s
   O_K  Si_K  Sc_K  Er_L  Nd_L  Yb_L  Lu_L
0  0.0   0.0   0.0   0.0   0.0   0.0   0.0
1  0.0   0.0   0.0   0.0   0.0   0.0   0.0
2  0.0   0.0   0.0   0.0   0.0   0.0   0.0
3  0.0   0.0   0.0   0.0   0.0   0.0   0.0
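A side note, not from the thread: when quantification returns all zeros, it can help to compute the Cliff-Lorimer result by hand from the same intensities and compare. In its simplest form (no absorption correction) the method just weights intensities by k-factors and normalizes to 100 wt%; a toy sketch with made-up numbers:

```python
# Cliff-Lorimer in its simplest form: C_i proportional to k_i * I_i,
# normalized to 100 wt%. Intensities and k-factors below are toy values.
intensities = [10.1, 11.0, 0.05]
kfactors = [1.0, 0.885, 0.987]
weighted = [k * i for k, i in zip(kfactors, intensities)]
total = sum(weighted)
composition = [100 * w / total for w in weighted]
print(round(sum(composition), 6))  # 100.0
```

If the hand computation gives sensible numbers while `quantification` gives zeros, the problem is likely in how the method pulls settings from the signal rather than in the data.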
Eric R. Hoglund
And even odder: if I iterate over the nav dimension manually, I get a different result. The first and last positions are reasonable, but the rest are still 0s.
weight_percent = [sroi_stack.inav[i].quantification([sig.inav[i] for sig in quant_sigs],
                                                    method='CL', factors=quant_K, composition_units='weight', max_iterations=100, convergence_criterion=0.001)
                  for i in range(0,4)]
print(pd.DataFrame(np.array([[j.data[0] for j in i] for i in weight_percent]), columns=quant_elms))

         O_K       Si_K      Sc_K      Er_L       Nd_L      Yb_L      Lu_L
0  50.240269  48.313808  0.230111  0.368721   0.121285  0.402830  0.322976
1   0.000000   0.000000  0.000000  0.000000   0.000000  0.000000  0.000000
2   0.000000   0.000000  0.000000  0.000000   0.000000  0.000000  0.000000
3  24.858340  15.397206  0.814680  8.439159  42.003570  4.880007  3.607039
Katherine E. MacArthur
@erh3cq I've just seen this. Have you checked that all the elements you want to quantify are in the metadata? What happens when you use 'zeta' or 'cross_section'? Is the error only there with 'CL'?
The quant function definitely takes some things from the metadata, especially when using absorption correction. Might this be the source of your error?
Eric R. Hoglund

@k8macarthur Yes, those are good checks. I have checked the metadata and all edges are included:

azimuth_angle = 0.0
elevation_angle = 35.0
energy_resolution_MnKa = 131.45115116593425
number_of_frames = 58
tilt_alpha = 0.005
tilt_beta = -0.0
x = -0.000321
y = -8.1e-05
z = -2.5e-05
beam_energy = 200.0
camera_length = 73.0
magnification = 20000.0
microscope = Titan
date = 2021-08-18
original_filename = 1032 SI 23500 x HAADF-BF_original.emd
time = 10:32:08-04:00
time_zone = Eastern Daylight Time
title = Stack of EDS
elements = ['C', 'Er', 'Ga', 'Lu', 'Mo', 'N', 'Nd', 'O', 'Pt', 'Sc', 'Si', 'Yb']
xray_lines <list>
[0] = C_Ka
[1] = Er_La
[2] = Er_Mb
[3] = Er_Mz
[4] = Ga_La
[5] = Lu_La
[6] = Lu_Ma
[7] = Mo_La
[8] = N_Ka
[9] = Nd_La
[10] = Nd_Lb1
[11] = Nd_Lb2
[12] = Nd_Lb3
[13] = Nd_Lg1
[14] = Nd_Ma
[15] = Nd_Mg
[16] = Nd_Mz
[17] = O_Ka
[18] = Pt_Ma
[19] = Sc_Ka
[20] = Sc_Kb
[21] = Si_Ka
[22] = Yb_La
[23] = Yb_Ma
[24] = Yb_Mb
binned = True
signal_type = EDS_TEM

I have not yet tried the zeta or cross_section quantifications, but I did try feeding the intensities directly into from hyperspy.misc.eds.utils import quantification_cliff_lorimer, as suggested by @thomasaarholt. Using the util directly gave results for all navigation indices, so there must be something going wrong in quant when the attributes are pulled from the signal.

Hello all, I am trying to sum specific frames while opening JEOL .pts files, but the file opened is always the full integration of frames for the EDSSpectrum signal. If I use sum_frames=False, I can open all the frames, but this is very memory-consuming. Here is an example of what I am doing: S = hs.load("data.pts", sum_frames=True, first_frame=0, last_frame=10). With emd files this works, but not with the pts files. I also tried with the .asw files from JEOL and the problem is the same. Does someone know what I can do to solve this issue? I cannot process the data if I open all frames without summing, because of memory.
In this case I have more than 10 frames (37), and the command integrates all 37 instead of 10.
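Not from the thread, and not the JEOL reader's own API: if the loader will not do the partial sum itself, one workaround is to load with sum_frames=False (ideally lazily) and sum the wanted frame range afterwards. A numpy sketch of that reduction, assuming the frames end up on the leading axis (shapes here are illustrative):

```python
import numpy as np

# Toy stand-in for a stack loaded with sum_frames=False,
# assumed axis order: (frames, y, x, energy)
data = np.ones((37, 4, 4, 8))
first_frame, last_frame = 0, 10
partial_sum = data[first_frame:last_frame].sum(axis=0)
print(partial_sum.shape)  # (4, 4, 8)
```

Done on a lazy (dask-backed) signal instead of an in-memory array, the same slice-then-sum should avoid materializing all 37 frames at once.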
Svetlana K
Hi guys, could you remove the automatic adding of the power-law background to the model? I spent two days trying to figure out why my low-loss fitting with fixed patterns was not working properly. And guess what, there was a power law component.