
Hi everyone, I have recorded a HAADF-STEM video in Velox with 500 image frames. During recording, I changed the magnification, zoomed in and out. I am already able to export each individual frame. However, the scalebar is the same for each frame (whereas in Velox it shows different scale bars for each individual frame). Do you have any solution for that? What did NOT work:

(The type of the complete_dataset is Signal2D.) Thanks a lot for your help! :)

4 replies

Hi everyone, I am having trouble saving a signal or multiple signals loaded from .emd format to nexus format using the following command:


I got the following error:

AttributeError: 'dict' object has no attribute 'dtype'

It works fine using the following command, but then the original metadata is not saved:


Can someone help please, i need to save the metadata also .


Hi all,

I am new to hyperspy and still a bit lost getting the thickness out of an EEL spectrum.
I have tried this:

import hyperspy.api as hs

# s_ll is the low-loss spectrum; s (used below) is the corresponding
# core-loss spectrum, loaded the same way.
s_ll = hs.load("20220315/data.dm3")
s_ll.set_microscope_parameters(beam_energy=200, collection_angle=6.0, convergence_angle=0.021)
s.set_microscope_parameters(beam_energy=200, collection_angle=6.0, convergence_angle=0.021)


s_ll.align_zero_loss_peak(subpixel=True, also_align=[s])

th = s_ll.estimate_elastic_scattering_threshold(window=10)

density = 5.515

s_ll_thick = s_ll.estimate_thickness(threshold=th, density=density)

and as a result I obtain this:

<BaseSignal, title: E8-FeNiO-PS71-T_0043 thickness (nm), dimensions: (1|)>

What does it mean? I would like to obtain a value, not a signal. Sorry if this has an obvious answer. ;-)

Thank you for your help!
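On the result: estimate_thickness returns a signal so it can hold one thickness per pixel of a spectrum image; for a single spectrum it just wraps a size-1 array, and the plain number can be pulled out of `.data`. A numpy-only sketch of that last step (the thickness value here is made up):

```python
import numpy as np

# Stand-in for s_ll_thick.data: a single-spectrum result wraps a size-1 array.
thickness_data = np.array([42.7])

# .item() (or indexing with [0]) unwraps the scalar value in nm.
thickness_nm = thickness_data.item()
```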

2 replies
Eric Leroy
Hi, I have EELS datacubes in the low-loss region. I would like to perform PCA or ICA, and for this it seems better to remove the zero-loss peak first. I searched for an equivalent of the "extract zero loss" function in GMS but didn't find one. Could you help me do this?
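As far as I know there is no single built-in equivalent of GMS's "extract zero loss"; the usual route is to model the ZLP over the elastic window and subtract the model. A numpy-only sketch of the idea on synthetic data (in HyperSpy you would instead fit e.g. a components1D.GaussianHF over a signal range around 0 eV with m.fit() and subtract m.as_signal()):

```python
import numpy as np

E = np.linspace(-5.0, 25.0, 601)                  # energy-loss axis (eV)
zlp = 100.0 * np.exp(-E**2 / 0.5)                 # zero-loss peak
plasmon = 5.0 * np.exp(-(E - 15.0)**2 / 8.0)      # low-loss feature
spectrum = zlp + plasmon

# Fit the ZLP in an elastic window where the plasmon is negligible:
# log of a Gaussian is a parabola in E, so a degree-2 polyfit recovers it.
window = np.abs(E) < 1.0
coeffs = np.polyfit(E[window], np.log(spectrum[window]), 2)
zlp_model = np.exp(np.polyval(coeffs, E))

extracted = spectrum - zlp_model                  # spectrum with the ZLP removed
```

The same log-parabola trick only works for a clean Gaussian ZLP; for a real tail-heavy ZLP a fitted Lorentzian/Voigt component in a model is the more robust option.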
9 replies
Hi, may I ask if there is a way to speed up the multifit process for EELS spectrum imaging?
15 replies
Eric R. Hoglund
Is it possible to set the dtype a signal reads in as? I have some moderately sized spectrum images that could be either int or small-bit floats. HS is defaulting to float32, which takes the data from manageable to unmanageable.
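I don't think most readers expose a dtype argument, but one workaround (hedged, not verified against every reader) is to load lazily and cast before computing: `s = hs.load(path, lazy=True)`, `s.change_dtype("int16")`, then `s.compute()`. The footprint difference is easy to see with plain numpy:

```python
import numpy as np

# 64 x 64 probe positions, 1024-channel spectra, counts that fit in int16.
data = np.random.randint(0, 1000, size=(64, 64, 1024)).astype(np.float32)

# Casting float32 -> int16 halves the memory footprint.
as_int16 = data.astype(np.int16)
```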
1 reply
Can anyone help me look at the Gaussian fit of Ni L3? Why does it have a hump at around 880 eV? So strange
4 replies
Yisong Han

import hyperspy.api as hs

# s_fit is the spectrum (image) being fitted, loaded earlier.
m = s_fit.create_model()
g1 = hs.model.components1D.GaussianHF()
g2 = hs.model.components1D.GaussianHF()
g3 = hs.model.components1D.GaussianHF()
m.extend([g1, g2, g3])
m00 = m.inav[0]

How can I get the fitted data for each component after running m00.fit() (including the sum), so I can plot them myself rather than using s.plot()? Do I have to compute them myself? Thanks very much.
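One option (hedged, from memory of the API) is m.as_signal(component_list=[g1]) per component, or evaluating g1.function(x) on the axis values. The curves are also easy to rebuild by hand, since GaussianHF is just height * exp(-4*ln(2)*(x - centre)**2 / fwhm**2); here with made-up parameter values standing in for g1.height.value etc.:

```python
import numpy as np

def gaussian_hf(x, height, centre, fwhm):
    # Same profile as hs.model.components1D.GaussianHF.
    return height * np.exp(-4.0 * np.log(2.0) * (x - centre) ** 2 / fwhm**2)

x = np.linspace(840.0, 900.0, 601)          # e.g. the energy axis values

# Hypothetical fitted parameter values (g1.height.value, g1.centre.value, ...).
g1_curve = gaussian_hf(x, 1000.0, 855.0, 2.5)
g2_curve = gaussian_hf(x, 600.0, 872.0, 3.0)
total = g1_curve + g2_curve                 # the summed model, ready to plot
```

With the arrays in hand you can plot everything with plain matplotlib rather than s.plot().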

23 replies
Dieter Weber

Hi all, greetings from LiberTEM! With the latest release we implemented full Dask array integration. That means LiberTEM and HyperSpy (lazy) signals can now interoperate. That opens a range of opportunities:

  • Share the same file readers for both projects. See our supported formats. In particular, support for K2IS raw, FRMS6 and Direct Electron SEQ could be added to HyperSpy with little effort.
  • Simplify implementing routines in HyperSpy that would require Dask’s map_blocks() interface by using the LiberTEM UDF interface instead.
  • Use the same implementation for scientific algorithms in both projects. As an example, fast ptychography could be offered in HyperSpy, or strain, phase and orientation mapping in pyxem could be applied to live data streams.

This message is to start a discussion on what could be useful and what steps to take in that direction. 😊
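For anyone wondering what the interchange looks like in practice: both sides speak dask arrays, so (hedged) a chunked array from either project can be wrapped as a lazy HyperSpy signal, and the `.data` of a lazy signal is again a dask array that LiberTEM-style code can consume. A small dask-only sketch:

```python
import dask.array as da

# A chunked 4D-STEM-shaped array, as either project would produce it.
darr = da.ones((8, 8, 64, 64), chunks=(4, 4, 64, 64))

# Both projects defer computation until .compute() / s.compute();
# e.g. a virtual bright-field image is a lazy reduction over detector axes.
virtual_image = darr.sum(axis=(2, 3)).compute()
```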

6 replies
Friedrich Bohr
Hello! I am trying to import EDS data from Aztec (Oxford Instruments). Aztec exports the data in two files: .raw and .rpl. If I load the .rpl file in Python (e.g., s = hs.load('Data.rpl')), it seems that I miss the metadata, as the system cannot identify that the data is EDS data. Is there a command to load both the .rpl and .raw files? Thanks.
1 reply
Hi. I have EELS data from a FEI Tecnai (Remote TCPIP). It is taken over two spatial dimensions, with a grid of pixels where each pixel contains a spectrum. I would like to determine the pixel size used when taking the measurement, but I cannot find this information in the metadata. Does anyone know how I can extract it?
I know I can find out by contacting the TEM technician I'm working with, but I'd quite like to have a self-contained bit of code that automates things as much as possible, so it should get the pixel size from the data automatically.
4 replies
Hi, I am trying robust PCA on my EELS data. I want to know how to determine the "output_dimension" value in the "RPCA" method. Thank you!
Additionally, how to perform the RPCA processed result?
Thomas Aarholt

What do you know about the area you've imaged with EELS? How many different regions do you anticipate? Typically the number is on the order of 2-5. I would just try these and then inspect them with s.plot_decomposition_results().

What do you mean by "perform"?

7 replies
I am trying to refine the background fitting in the EDS model, but it seems off in every way. Is there someone willing to provide some guidance? Is the polynomial background the only option for now?
I tried several window sigma values, but I was hoping for a much tighter fit (I am trying to see Na in my maps! (a few at.%)).

The best I got, but still showing negative peaks:

Worse using the default background:
Figure_EDX_Signal_model_default BG.png

Thomas Aarholt
Pass bounded=True to m.fit() or m.multifit(). The Gaussian peaks that are added from elements in the EDSModel have a minimum bound of 0 (see G.A.bmin, where G is one of the Gaussian components in your model), so you will avoid getting negative peaks.
25 replies


I tried to write an interactive function to obtain chemical maps from EELS spectrum image datasets in hyperspy.

The idea of the following code is to plot the full spectrum, select interactively an energy range on it and then plot interactively the intensity map of the integrated energy range.

import numpy as np
import hyperspy.api as hs

spim = hs.load("file.hspy")

class cst_SpanROI(hs.roi.SpanROI):
    def mapping(self, spim=None, out_map=None):
        out_map.data = spim.isig[self.left:self.right].sum(axis=2).data
        return out_map

# Get safe initial coordinates for the ROI.
half_e = spim.axes_manager[2].offset + spim.axes_manager[2].scale*0.5*spim.axes_manager[2].size
third_e = spim.axes_manager[2].offset + spim.axes_manager[2].scale*0.333*spim.axes_manager[2].size

# Sum over all the spectra of the spim
full_spectrum = spim.integrate1D(axis = (0,1))
# Initialize the ROI
spectrum_ROI = cst_SpanROI(third_e,half_e)

# Plot both the spectrum and the ROI (interactively)

# Initialize a Signal2D which will represent the map
out = hs.signals.Signal2D(np.zeros((spim.axes_manager[0].size,spim.axes_manager[1].size)))
# Link the spim, the map and the SpanROI interactively
map = hs.interactive(spectrum_ROI.mapping, recompute_out_event=spectrum_ROI.events.changed, spim=spim, out_map=out)

I am not sure this is the intended way to do it; however, it worked in hyperspy 1.6.5. It broke in 1.7.0:

  • First: integrate1D no longer takes tuples for axis.
  • Second: even when using spim.sum(axis = (0,1)), the left and right values of the SpanROI do not update when the ROI is changed. Thus it does display a map, but a fixed one, with the starting left and right values.

How can I solve this issue? Did I miss a more straightforward solution? If one does not exist, can we make it a feature?

Thomas Weatherley

Hey everyone. I'm making an animation from a Signal2D object, in which the frames would just tick through the navigation axis (i.e., frame i is the plot generated by map.isig[i].as_signal2D([0, 1]).plot()). Is there a neat way to do this quickly with hyperspy? I've found a way to make it work with matplotlib using the AxesSubplot object returned by hs.plot.plot_images():

import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
import hyperspy.api as hs

plot = hs.plot.plot_images(tcspc.isig[0].as_signal2D([0, 1]))[0]
image = plot.get_images()[0]

def anim_func(frame):
    new_data = tcspc.isig[frame].as_signal2D([0, 1])
    image.set_data(new_data.data)  # push the new frame into the image
    image.set_clim(vmin=new_data.data.min(), vmax=new_data.data.max())

anim_created = FuncAnimation(plt.gcf(), anim_func, frames=30, interval=300)

But I'm just wondering if there is a neater way to do it, as it feels like something people might want to do quite often using hyperspy. Thanks in advance!

4 replies
I tried reading .si files (Noran spectrum imaging files). Some files could be read. Would .si reading be useful?
I read the following topic.
Thermo provides a file-converter tool from .si to .raw and .rpl. My method opens the .si file directly, without the converter, but I'm not sure whether it is correct.
Can anyone share .si and, if possible, .emsa files (exported from .si, to check the metadata) for testing? I'm sorry, but our laboratory prohibits sharing internal data, so I cannot share my own. If someone can share data, I will test the method and share the results.
Ben Gaunt
Hello, I'm trying to load a JEOL .asw file, support for which was added in 1.7.0, but I get the following error:

KeyError Traceback (most recent call last)
Input In [9], in <cell line: 1>()
----> 1 s = hs.load("20220301_test1.asw")

File /ceph/users/gbz45948/miniconda3/envs/hyperspy_env/lib/python3.10/site-packages/hyperspy/io.py:454, in load(filenames, signal_type, stack, stack_axis, new_axis_name, lazy, convert_units, escape_square_brackets, stack_metadata, load_original_metadata, show_progressbar, kwds)
451 objects.append(signal)
452 else:
453 # No stack, so simply we load all signals in all files separately
--> 454 objects = [load_single_file(filename, lazy=lazy,
455 for filename in filenames]
457 if len(objects) == 1:
458 objects = objects[0]

File /ceph/users/gbz45948/miniconda3/envs/hyperspy_env/lib/python3.10/site-packages/hyperspy/io.py:454, in <listcomp>(.0)
451 objects.append(signal)
452 else:
453 # No stack, so simply we load all signals in all files separately
--> 454 objects = [load_single_file(filename, lazy=lazy, **kwds)
455 for filename in filenames]
457 if len(objects) == 1:
458 objects = objects[0]

File /ceph/users/gbz45948/miniconda3/envs/hyperspy_env/lib/python3.10/site-packages/hyperspy/io.py:513, in load_single_file(filename, kwds)
506 raise ValueError(
507 "reader should be one of None, str, "
508 "or a custom file reader object"
509 )
511 try:
512 # Try and load the file
--> 513 return load_with_reader(filename=filename, reader=reader,
515 except BaseException:
516 _logger.error(
517 "If this file format is supported, please "
518 "report this error to the HyperSpy developers."
519 )

File /ceph/users/gbz45948/miniconda3/envs/hyperspy_env/lib/python3.10/site-packages/hyperspy/io.py:533, in load_with_reader(filename, reader, signal_type, convert_units, load_original_metadata, kwds)
531 """Load a supported file with a given reader."""
532 lazy = kwds.get('lazy', False)
--> 533 file_data_list = reader.file_reader(filename,
534 signal_list = []
536 for signal_dict in file_data_list:

File /ceph/users/gbz45948/miniconda3/envs/hyperspy_env/lib/python3.10/site-packages/hyperspy/io_plugins/jeol.py:100, in file_reader(filename, kwds)
98 if sub_ext in extension_to_reader_mapping.keys():
99 reader_function = extension_to_reader_mapping[sub_ext]
--> 100 d = reader_function(file_path, scale,
101 if isinstance(d, list):
102 dictionary.extend(d)

File /ceph/users/gbz45948/miniconda3/envs/hyperspy_env/lib/python3.10/site-packages/hyperspy/io_plugins/jeol.py:159, in _read_img(filename, scale, **kwargs)
139 axes = [
140 {
141 "name": "y",
153 },
154 ]
156 datefile = datetime(1899, 12, 30) + timedelta(
157 days=header_long["Image"]["Created"]
158 )
--> 159 hv = header_long["Instrument"]["AccV"]
160 if hv <= 30.0:
161 mode = "SEM"

KeyError: 'AccV'

Ben Gaunt
Seems like JEOL's software hasn't saved the acceleration voltage? I could add that in later, once the signal object was loaded in, but it doesn't get that far...
6 replies
Eric R. Hoglund
Noticed something odd… I am fitting Gaussians to background-subtracted Ti-L23 data. The edges are already fit and fixed. It was taking 10-20 hours for 75K pixels. I just updated to 1.7 and now it says it will take 191 hours on the same data set. Any thoughts?
7 replies
Hi everyone, I'm new to hyperspy and need some generous help. I am trying to load an image with the load function under version 1.7.0 in a Jupyter notebook; however, it fails with the following message:
ImportError: cannot import name 'version' from 'hyperspy' (unknown location)
5 replies
Hi everyone. I cannot seem to find the algorithm used by the rebin command for EDS signals in the HyperSpy documentation. I am mostly curious about what kind of algorithm HyperSpy uses to do the binning. Can anyone point me in the right direction, or give me the name of the algorithm? Thank you
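For integer factors, rebin (as I understand it) simply sums the counts inside each block of `scale` original bins, which preserves total counts; non-integer factors are handled by interpolation. The integer case in plain numpy:

```python
import numpy as np

spectrum = np.arange(16.0)            # 16 channels of counts

# Rebin by a factor of 2: group channels into (8, 2) blocks and sum each block.
rebinned = spectrum.reshape(8, 2).sum(axis=1)

# Total counts are preserved, which is what you want for Poisson statistics.
```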
2 replies
Yisong Han

Hi Everyone,

I am wondering how I may modify/edit the code for my own needs (and hopefully later contribute via a pull request). I have googled, and it seems there is a lack of a comprehensive guide.

I know from the HyperSpy website, I need to:

  1. Fork HyperSpy and have a local copy in a folder.
  2. I need to install it into Anaconda, ideally in a new environment (using the commands listed on the HyperSpy website).
  3. Then do I edit the code in the Anaconda package folder, and will this take effect automatically?
  4. Or do I edit the code in the local forked folder and reinstall or update it in Anaconda? If yes, how?
  5. Or do I install the package using, for example, "$ python setup.py develop" and edit the Anaconda folder directly, which will be reflected in the original forked folder?
  6. Do I need to update setup.py if I make any changes?

Could you suggest a website or a video that could answer my questions, if it is hard to explain in text? Thanks very much.

4 replies
Natalie Nord
Hi All,
The docs aren't entirely clear on this. When using estimate_shift2D, is the output in pixels or in calibrated distance units? I presume it would be pixels, right, in case you're aligning images that don't have calibrated distances? It's hard to tell at a glance in my data, because the pixel size is very close to one.
Thank you
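For what it's worth, I believe the shifts come out in pixels: estimate_shift2D is cross-correlation based, and the correlation peak lives on the pixel grid regardless of axis calibration. A numpy sketch of why the recovered shift is naturally a pixel count:

```python
import numpy as np

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
shifted = np.roll(ref, (3, 5), axis=(0, 1))    # shift by 3 rows, 5 columns

# Phase correlation: the argmax of the cross-correlation is a grid index,
# so the estimated shift is in pixels, not calibrated units.
corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(shifted))).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
dy = (ref.shape[0] - peak[0]) % ref.shape[0]   # recovered row shift
dx = (ref.shape[1] - peak[1]) % ref.shape[1]   # recovered column shift
```

Converting to calibrated distances is then just a multiplication by the axis scale.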
2 replies

Sorry if this is a too naive question to ask here.

  1. I am trying to plot my EDS spectrum using hyperspy. When I add new elements using add_xray_lines_markers['Cu_Ka'], I cannot find a way to increase the size of the element labels when I plot it.

  2. Also, as a personal request, it would be really helpful for the community if more experienced colleagues could add their Jupyter notebooks to the 'examples' folder in git, with some published data.
    Thank you

7 replies
Joshua F Einsle
Dear HyperSpy community - I am writing some lectures on PCA and ICA at the moment; while I love hyperspy, it is not really the right tool for the course I am teaching.
I want to demonstrate going from PCA to ICA à la the SVD and BSS demo in the documentation. I have been having a peek at the BSS source code here, but I am a little unclear on what hyperspy is doing under the hood, as it takes the factors from the PCA and then performs FastICA on them.
From what I can see, FastICA acts on the factors from the previous decomposition, but I do not understand how we get to the new set of loadings and components. Could someone please clarify? Cheers
Thomas Aarholt
@vivekdevulapalli07 this image is the result of the code I posted in your thread. Couldn't figure out how to upload a picture into the thread reply
Hey people, please have a wonderful day!
Thomas Aarholt

Could someone on Mac try opening a new jupyter notebook instance, and typing import hyperspy.components1d.A<TAB>? Where <TAB> is pressing the tab button. It should autocomplete to Arctan, but for some reason it isn't doing so on my Mac.

Please specify if you're on one of the new M1 Macs if you try this. I'm on an M1 Mac, and it doesn't work here. See #2968 for details.

Hello everyone! I am using HyperSpy to analyse CL datasets generated by Attolight (.sur format). I have 100 scans, with the x and y stage positions available in the metadata, and I want to stitch the scans together. I just wanted to ask whether a method for this is already implemented in hyperspy, as I could not locate one in the documentation.
4 replies
Eric Leroy
is it possible to link an HAADF image to an EELS datacube in order to navigate from the image instead of the datacube?
Eric R. Hoglund
Eric Leroy
@erh3cq thanks! Another basic question: my HAADF image has the first and last columns missing. As a result, the SI dimensions are, for example, 487, 593 | 1033 and the HAADF image dimensions are 485, 593.
I tried to extract a subarray with si2 = si[1:483,,] but I got the error 'EELSSpectrum is not subscriptable'. I thought the EELSSpectrum was an array.
Eric Leroy
I found the answer: si2 = si.inav[1:486] does the job
T. Nemoto
Is there any example of reading an image with separate metadata?
I'm trying to make a reader for JEOL FIB image files. They come as (imagename).BMP for the image and (imagename).txt for the metadata.
3 replies
Thomas Weatherley
Hi everyone! I'm wondering whether I can use the hyperspy multifit method to fit a custom numerical model to some hyperspectral microscopy data (as a Signal1D). I'm trying to think of how I could wrap this custom model as a method in a hyperspy Component subclass, however, the model depends on the spatial position of the spectrum. Is there a way to refer to the current navigation coordinates of the spectrum being fitted by multifit when defining the method inside the Component subclass? Sorry for my complex wording in the question, but I can't think of a simpler way to express the issue!
3 replies
Eric Leroy
Sorry to ask another newbie question. How can I get the values of the dimensions of an EELS datacube?
10 replies
Fanzhi Su
Hi everyone! Has anyone tried the .crop() function in hyperspy? I opened my .emi file and tried to crop out an energy range with the .crop() function, but it outputs: 'list' object has no attribute 'crop'. I was wondering if anyone has met a similar problem and knows how to fix it? Thanks a lot!
2 replies
Wu Michael
Try .isig[your energy range]
Wu Michael
Hi, when using the HDF5 reader plugin for Digital Micrograph, the calibrations in the HDF5 file do not seem to be read by DM automatically. Any possible solution to this? Thank you!
6 replies
Eric Leroy
I have an issue when importing RPL files. My raw data are larger than 1 GB, and when I try to import them, GMS freezes and finally crashes.
The version is GMS 2.32 (64-bit) and the PC is running Windows 10.
Do you have any ideas for exporting data from hyperspy to GMS? I also submitted an issue in hyperspy/ImportRPL
2 replies
Hi all,
I'm using HyperSpyUI to do background subtraction on EELS line scans (.dm3 format) before feeding it into an existing (non-python) analysis routine. Is there a way to export bg-subtracted data to a txt or csv file that keeps the energy axis values? The "Save Data as Text" option only exports the intensity.
Thank you!
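If dropping to a few lines of Python is acceptable, the energy axis can be pulled from the signal and written next to the intensities. A sketch with a fabricated axis standing in for the real one (in HyperSpy the first column would be s.axes_manager[-1].axis and the second s.data):

```python
import os
import tempfile
import numpy as np

# Stand-ins: in HyperSpy, energies = s.axes_manager[-1].axis, counts = s.data.
energies = np.linspace(400.0, 700.0, 301)          # eV
counts = np.exp(-(energies - 532.0) ** 2 / 200.0)  # bg-subtracted intensities

table = np.column_stack([energies, counts])
path = os.path.join(tempfile.gettempdir(), "bg_subtracted.csv")
np.savetxt(path, table, delimiter=",", header="energy_eV,intensity")
```

The resulting two-column CSV keeps the calibrated energy values, so a non-Python routine can consume it directly.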
Andrea Lorenzon
Hello everyone! First of all, thank you for hyperspy, it really speeds up my job!
I've got an issue, so I opened one on the github repo (hyperspy/hyperspy#2976).
I'll ask here too: does anyone know why my fitted model parameters end up outside the provided boundaries?