Weixin Song
@winston-song
Hi all, is there any method to twin the Mn L3 and L2 H-S GOS edge heights?
Magnus Nord
@magnunor
@winston-song, I think they're twinned automatically
Eric Prestat
@ericpre
@winston-song, yes, there is the twin attribute of parameters.
Here is an example adapted from the documentation:
import hyperspy.api as hs

s = hs.datasets.artificial_data.get_core_loss_eels_signal()

m = s.create_model()
Mn_L2_intensity_parameter = m[2].intensity
print(Mn_L2_intensity_parameter.twin)

m.print_current_values(only_free=True)
m.print_current_values(only_free=False)
Eric R. Hoglund
@erh3cq
Loading a Velox EDX SI that has been processed with "Reduce file size" in Velox throws some errors. The reduced file does not have a spectrum stream or multiple detectors, only a single SI. Does anyone have a fix for this?
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-3-51db32dd469f> in <module>
----> 2 SI = hs.load(file+'.emd', select_type='spectrum_image',
      3              sum_frames=False, sum_EDS_detectors=False,
      4              first_frame=1, last_frame=1)
      5 SI

~\Programs\Anaconda\envs\hyperspy\lib\site-packages\hyperspy\io.py in load(filenames, signal_type, stack, stack_axis, new_axis_name, lazy, convert_units, **kwds)
    277         else:
    278             # No stack, so simply we load all signals in all files separately
--> 279             objects = [load_single_file(filename, lazy=lazy,
    280                                         **kwds)
    281                        for filename in filenames]

~\Programs\Anaconda\envs\hyperspy\lib\site-packages\hyperspy\io_plugins\emd.py in file_reader(filename, log_info, lazy, **kwds)
   1379     if fei_check(filename) == True:
   1380         _logger.debug('EMD is FEI format')
-> 1381         emd = FeiEMDReader(filename, lazy=lazy, **kwds)
   1382         dictionaries = emd.dictionaries
   1383     else:

~\Programs\Anaconda\envs\hyperspy\lib\site-packages\hyperspy\io_plugins\emd.py in __init__(self, filename, select_type, first_frame, last_frame, sum_frames, sum_EDS_detectors, rebin_energy, SI_dtype, load_SI_image_stack, load_reduced_SI, lazy)
    588         except Exception as e:
--> 589             raise e
    590         finally:
    591             if not self.lazy:

(...)

-> 1058         spectrum_image_shape = streams[0].shape

IndexError: list index out of range
Eric Prestat
@ericpre
From the user guide:
"Pruned Velox EMD files only contain the spectrum image in a proprietary format that HyperSpy cannot read. Therefore, don’t prune Velox EMD files if you intend to read them with HyperSpy."
Mingquan Xu
Suddenly I cannot save a file in *.msa format, which worked before. What could be the cause?
Thomas Aarholt
@thomasaarholt
@Mingquan_Xu_twitter my guess is that you're trying to save a multidimensional spectrum, while the msa format only supports single spectra (1D). What is the shape of your signal?
Mingquan Xu
@thomasaarholt, thanks very much for your reply. I have checked the dimensions of my data, and this is the cause. Thanks for your suggestion.
Eric R. Hoglund
@erh3cq
@ericpre thank you
Justyna Gruba
@justgruba
Hi, hello :)) I want to ask whether this is the current version of the quantification_cliff_lorimer function: https://github.com/hyperspy/hyperspy/blob/851c9c0687533f429c853873834100aae2e6f92b/hyperspy/misc/eds/utils.py. Thank you in advance! Justyna
Eric Prestat
@ericpre
@justgruba, what do you mean by "current"? You should get an answer by looking at the history of that file on GitHub.
petergraat
@petergraat
I'm playing around with TEM/EDX quantification in HyperSpy, particularly with absorption correction in the Cliff-Lorimer method. I noted that I get a much smaller effect when including absorption correction in HyperSpy than when I do it "manually". Browsing through the code, I noticed that in the get_abs_corr_zeta method the mac is multiplied by 0.1 (line 560 in the misc/eds/utils.py file: mac = stack(material.mass_absorption_mixture(weight_percent=weight_percent)) * 0.1). This seems to be the reason for the difference. What is the reason for this factor of 0.1?
Alexander Skorikov
@petergraat Well it seems to be a conversion factor from cm^2/g to m^2/kg (indicated in the source file as a comment)
petergraat
@petergraat
@askorikov Ah, stupid, I should have seen that! Then there might be an issue with the mass thickness returned by the CL_get_mass_thickness method in the _signals/eds_tem.py file. That method should return the mass thickness in kg/m², but I think it returns it in g/cm². On line 911 the elemental mass thickness is calculated as the product of composition (in %), thickness (in nm), density (in g/cm³) and a factor of 1e-9 to get from % to fraction (1e-2) and from nm to cm (1e-7). Or am I again overlooking something with the units?
Andrew Herzing
@aaherzing_gitlab
Is there any way to link two signals together so that if the navigation axes of the first dataset is cropped then this will automatically crop the navigation axes of the second? Something like this is what I'm looking for:
>>> s = hs.signals.Signal1D(np.zeros([100,100,10]))
>>> print(s)
<Signal1D, title: , dimensions: (100, 100|10)>
>>> s2 = hs.signals.Signal1D(np.zeros([100,100,10]))
>>> print(s2)
<Signal1D, title: , dimensions: (100, 100|10)>
>>> s = s.inav[10:,:]
>>> print(s)
<Signal1D, title: , dimensions: (90, 100|10)>
>>> print(s2)
<Signal1D, title: , dimensions: (90, 100|10)>
Alexander Skorikov
@petergraat Hm, indeed looks like there's a mistake there
Thomas Aarholt
@thomasaarholt
@petergraat @askorikov great that you guys are tracking this :) Could one of you also make a GH issue about it?
petergraat
@petergraat
@thomasaarholt I've just created a new issue at GitHub.
petergraat
@petergraat

@aaherzing_gitlab The following might work:

s = hs.signals.Signal1D(np.zeros([100,100,10]))
s2 = hs.signals.Signal1D(np.zeros([100,100,10]))
s2.axes_manager = s.axes_manager
s.crop(0, 10, None)
print(s)
print(s2)

Using s = s.inav[10:,:] won't work because it creates a new copy of s, and then the axes_manager of s2 isn't the same as the axes_manager of s anymore.

Andrew Herzing
@aaherzing_gitlab
Thanks! This might do it. I'm actually hoping to embed the second signal in the metadata of the first. Is this a bad idea in practice?
Eric Prestat
@ericpre
@aaherzing_gitlab and @petergraat, the example above doesn't work and will break s2. I can't think of a way of doing this automatically.
Would putting it in a for loop work well enough?
s = hs.signals.Signal1D(np.arange(100*100*10).reshape(100, 100, 10))
s2 = s.deepcopy()

for _s in [s, s2]:
    _s.crop(0, 10, None)
petergraat
@petergraat
@ericpre and @aaherzing_gitlab OK, I understand: setting s2.axes_manager = s.axes_manager couples the axes managers, but not the data. Thus s.data.shape is affected by the s.crop() command, but s2.data.shape still has the original shape.
Maybe stacking the signals is another possibility:
Maybe stacking the signals is another possibility:
s = hs.signals.Signal1D(np.zeros([100, 100, 10]))
s2 = hs.signals.Signal1D(np.zeros([100, 100, 10]))
s3 = hs.stack([s, s2])
print(s3)
s3 = s3.inav[10:, :, :]
print(s3)
Katherine E. MacArthur
@k8macarthur
Quick question. Apart from Hyperspy what would people reference when using PCA or one of the other sub algorithms? Specifically I'm doing PCA denoising on my EDX stuff at the moment.
petergraat
@petergraat
@k8macarthur , Do you mean scientific literature? In that case this one might be relevant:
C.M. Parish, 'Multivariate Statistics Applications in Scanning Transmission Electron Microscopy X-Ray Spectrum Imaging', chapter 5 of 'Advances in Imaging and Electron Physics', Volume 168 (Elsevier, 2011).
Katherine E. MacArthur
@k8macarthur
@petergraat Yes I mean scientific literature. Thanks! :)
lukmuk
@lukmuk

Hey all, I am trying to run a Gaussian smoothing kernel over the spatial dimension of my STEM-EDS datacube, i.e. of shape (x, y | E). I am using the map() function for this and have a quick question regarding the signal/navigation dimensions:
If I have a signal s=(x, y | E), then running s.map(my_gaussian_filter) would smooth along the energy channels (signal dimension).
For filtering in the spatial dimensions, I transpose the signal to swap signal/navigation, apply s.map(my_gaussian_filter), and then transpose back, i.e. s.T.map(my_gaussian_filter).T?

Both versions return a smoothed version of my spectrum image, but I just wanted to ask if the ideas above are correct (both versions have a smoothing effect on the spectrum image, so they are hard to compare). Thank you for your help!
For anyone interested, I am trying to mimic the Gaussian kernel filtering from the temDM MSA plugin by Pavel Potapov: https://www.sciencedirect.com/science/article/pii/S0968432816303821?via%3Dihub
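One way to sanity-check spatial-only smoothing against a plain SciPy call: gaussian_filter accepts a per-axis sigma, so smoothing only the (x, y) axes of an (x, y, E) array can be expressed directly (a sketch with dummy data, not the actual filter from the discussion above):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Dummy (x, y, E) datacube with a single non-zero voxel
cube = np.zeros((5, 5, 3))
cube[2, 2, 1] = 1.0

# sigma=(1, 1, 0): smooth over x and y, leave the energy axis untouched
smoothed = gaussian_filter(cube, sigma=(1, 1, 0))

# Energy channels 0 and 2 stay empty; channel 1 keeps its total intensity
print(smoothed[:, :, 0].sum(), smoothed[:, :, 2].sum())
```

With sigma=0 on the energy axis, no intensity leaks between channels, which is exactly what distinguishes spatial filtering from the full-3D gaussian_filter(s) call.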

Eoghan O'Connell
@PinkShnack

Hey @lukmuk. I'm sure someone more experienced will be able to tell you if your method works (I don't use the map function often).

Without having your dataset I can't be sure, but I assume the Scipy multidim gaussian filter function should do the job without needing any transposing? See the example below with some dummy data.

from scipy.ndimage import gaussian_filter
import hyperspy.api as hs
import numpy as np
s = hs.signals.Signal1D(np.zeros([100,100,10]))
gaussian_filter(s, sigma=1)

# general 3D example
a = np.arange(125, step=1).reshape((5,5,5)) #  3D signal
a
gaussian_filter(a, sigma=1)

s = hs.signals.Signal1D(a)
gaussian_filter(s, sigma=1)

See documentation here: https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.gaussian_filter.html
Could also use the scikit-image wrapper version: https://scikit-image.org/docs/dev/api/skimage.filters.html#skimage.filters.gaussian

lukmuk
@lukmuk
@PinkShnack Thank you for the idea and the help! I tested different methods for Gaussian filtering, and the map() method seems to match (relatively) well with the temDM output for separate filtering in the spatial/energy dimensions. Applying gaussian_filter(s) seems to smooth in 3D, i.e. over the spatial and signal dimensions simultaneously. I put the tests in a notebook (https://github.com/lukmuk/em-stuff/tree/main/Spectrum-image-Gaussian-filter). Again, thank you for your help.
Eric Prestat
@ericpre
@lukmuk: your example of s.T.map(my_gaussian_filter).T is brilliant! This is exactly what it is designed for, and it illustrates very well the idea behind the transposition of navigation and signal space!
Jędrzej Morzy
@JMorzy
Since I updated my HyperSpy recently, when using remove_background() or fit_component() with the PowerLaw component, I keep getting the following warnings: 'WARNING:hyperspy.model:Covariance of the parameters could not be estimated. Estimated parameter standard deviations will be np.nan.' and 'WARNING:hyperspy.model:m.fit() did not exit successfully. Reason: Number of calls to function has reached maxfev = 600.' These were not there before, and the fit seems poor. Any ideas of how I can improve this? It doesn't take a maxfev argument, so perhaps the default maxfev value needs changing in the code itself?
Thomas Aarholt
@thomasaarholt
My guess is that increasing the maxfev ("maximum function evaluations") will not improve the fit markedly. Could you share a screenshot or the data, as well as the code you're using for the fit? My guess is that you could probably improve it by changing the fit region or similar.
SataC90
@SataC90
Hi, I'm very new to HyperSpy and Python in general. I need HyperSpy to read .emd Velox files. I have installed Jupyter Notebook and tried to run the code, but I'm getting an error at hs.load("filename"): it says "no filename matches this pattern". Since I'm not an expert, I haven't been able to troubleshoot this error, which is why I'm writing here. Any help would be much appreciated.
Thomas Aarholt
@thomasaarholt
Hi @SataC90 I reckon you are trying to give the load function the wrong path to a file. Make sure that you're opening jupyter notebook in the same folder that you have the files in, or that you are passing the full path to the file you're opening.
Eric R. Hoglund
@erh3cq
@SataC90 and on Windows you need to put an r before your quoted string (r"...") if your file path uses \ instead of /. Easy to check.
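The backslash issue can be seen directly (the path below is made up):

```python
# In a normal string, "\n" inside a Windows path is interpreted as a newline
plain = "C:\new_data\file.emd"
raw = r"C:\new_data\file.emd"

print("\n" in plain)  # True: the '\n' became a newline character
print("\n" in raw)    # False: the raw string keeps the backslash literal
```

Forward slashes (C:/new_data/file.emd) avoid the problem entirely and also work on Windows.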
SataC90
@SataC90
Thanks for the tips @thomasaarholt @erh3cq. It seems that I have to upload each emd file into the Jupyter notebook, and then if I run the code it allows the image to be visualized, so I'm doing it this way for now. But one thing I didn't understand: why didn't the code run when I had already uploaded the entire folder with my data into the notebook? Any tips for that? Thanks again.
Katherine E. MacArthur
@k8macarthur
@SataC90 as a general rule, for specific queries like 'why doesn't this work?' it is far easier for people to help you if you copy the code into this thread. That way we can spot mistakes more easily. Generally, Jupyter notebooks run from the folder where they're stored, so if you type just a file name it expects the file to reside in the same folder. Alternatively, you can use the full file path starting with your drive, e.g. C:. If you have already imported a file as a variable using:
s = hs.load('filename')
then you perform the remaining tasks using s. However, Velox files in particular don't just load one image for a given file name; a whole list of up to 8 items is loaded. Therefore you often need to run your functions on, e.g., s[3], or better still assign the data you wish to work on to another variable name. If you type s and run, your notebook will print a list of what your s variable actually contains.
Jędrzej Morzy
@JMorzy
@thomasaarholt here is the code used. It is a standard O-edge background fitting (the same thing happens when doing remove_background()).
mO = sO.create_model(ll=ll, GOS="Hartree-Slater", auto_add_edges=False)
mO.fit_component(mO["PowerLaw"], bounded=False, fit_independent=True, signal_range=[500.0, 520.0], only_current=True)
mO.assign_current_values_to_all()
mO.fit_component(mO["PowerLaw"], bounded=False, fit_independent=True, signal_range=[500.0, 520.0], only_current=False)
Here is a screenshot of the data and the fit - it does a reasonable job with the background, but I am just concerned about it not converging on the right answer every time
Thomas Aarholt
@thomasaarholt
@JMorzy Your model shouldn't be using a Power law fit. It looks like you already have removed the background.
I wouldn't be concerned about the fit. As these things go, the O-edge fit is really quite good.
Jędrzej Morzy
@JMorzy
You are right, sorry - I should have mentioned that I remove the background separately - the issue is exactly the same there