adriente
@adriente

I have a very dumb question:

import numpy as np
import hyperspy.api as hs

X = np.random.rand(10, 15, 20)
si = hs.signals.Signal1D(X)
print(si.data.shape)
print(si.axes_manager[0].size, si.axes_manager[1].size)

I obtain:

(10, 15, 20)
(15, 10)

Why is it that way?

2 replies
Eric R. Hoglund
@erh3cq
@adriente
HyperSpy's Signal classes use image order for indexing, i.e. [x, y, z, …] (HyperSpy) vs. […, z, y, x] (numpy)
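A minimal sketch of that ordering, using the snippet above:

# numpy shape (10, 15, 20): the last axis becomes the signal axis;
# the remaining axes become navigation axes in reversed order.
print(si.data.shape)                     # (10, 15, 20)  numpy order
print(si.axes_manager.navigation_shape)  # (15, 10)      HyperSpy (x, y) order
print(si.axes_manager.signal_shape)      # (20,)         the signal axis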
Mohsen
@M0hsend
Hi all, I am having issues loading a large EDX stack from HDF5. The following is what I am doing:
import hyperspy.api as hs
import h5py
import dask
import dask.array as da
f = h5py.File('data_path.hdf5', 'r')
edx_da = da.from_array(f['data_edx'], chunks=[1,128,128,4096])
f.close()
edx_hs = hs.signals.EDSTEMSpectrum(edx_da).as_lazy()
# <LazyEDSTEMSpectrum, title: , dimensions: (512, 512, 98|4096)>
test = edx_hs.inav[:,:,0]
test.compute()
The above is giving me the following error:
ValueError: Not a dataset (not a dataset)
Any ideas why / how to fix?
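A likely culprit, judging from the snippet above: f.close() runs before test.compute(), so the dask array ends up pointing at a dataset in a closed file. A minimal sketch that keeps the file open until after the computation:

f = h5py.File('data_path.hdf5', 'r')
edx_da = da.from_array(f['data_edx'], chunks=[1, 128, 128, 4096])
edx_hs = hs.signals.EDSTEMSpectrum(edx_da).as_lazy()
test = edx_hs.inav[:, :, 0]
test.compute()  # read the data while the file is still open
f.close()       # only close once nothing lazy still needs it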
7 replies
Eric R. Hoglund
@erh3cq
Also need assistance with EDX. I'm getting 0s across the board from non-zero intensities when using CL.
In:
quant_elms = ['O_K', 'Si_K', 'Sc_K', 'Er_L', 'Nd_L', 'Yb_L', 'Lu_L']
quant_sigs = [result[i] for i in [10, -3, -5, 1, 8, -2, 4]]
quant_K = np.array([1.0, 0.885161, 0.98704, 1.45108, 1.61348, 1.6705, 1.69536])

print('Intensities')
print(pd.DataFrame(hs.stack(quant_sigs).data.T, columns=quant_elms))

quant = sroi_stack.quantification(quant_sigs, method='CL', factors=1/quant_K, composition_units='weight', max_iterations=100)
print('Wt%')
print(pd.DataFrame(hs.stack(quant).data.T, columns=quant_elms))

Out:
Intensities
[########################################] | 100% Completed |  0.1s
         O_K       Si_K      Sc_K      Er_L      Nd_L      Yb_L      Lu_L
0  10.096292  10.968796  0.046850  0.051064  0.015106  0.048460  0.038284
1   6.665914   5.675549  2.449875  3.110012  1.423448  2.923658  2.596810
2   6.598061   5.350703  2.364560  3.020911  1.404513  2.716718  2.388128
3   7.454355   5.216240  0.247508  1.743997  7.806576  0.876016  0.638009
Wt%
[########################################] | 100% Completed |  0.1s
   O_K  Si_K  Sc_K  Er_L  Nd_L  Yb_L  Lu_L
0  0.0   0.0   0.0   0.0   0.0   0.0   0.0
1  0.0   0.0   0.0   0.0   0.0   0.0   0.0
2  0.0   0.0   0.0   0.0   0.0   0.0   0.0
3  0.0   0.0   0.0   0.0   0.0   0.0   0.0
Eric R. Hoglund
@erh3cq
And even odder, if I iterate the nav dimension manually then I get a different result. The first and last positions are reasonable, but the rest are still 0s.
In:
weight_percent = [sroi_stack.inav[i].quantification([sig.inav[i] for sig in quant_sigs],
                                                    method='CL', factors=quant_K, composition_units='weight', max_iterations=100, convergence_criterion=0.001)
                  for i in range(0,4)]
print(pd.DataFrame(np.array([[j.data[0] for j in i] for i in weight_percent]), columns=quant_elms))

Out:
         O_K       Si_K      Sc_K      Er_L       Nd_L      Yb_L      Lu_L
0  50.240269  48.313808  0.230111  0.368721   0.121285  0.402830  0.322976
1   0.000000   0.000000  0.000000  0.000000   0.000000  0.000000  0.000000
2   0.000000   0.000000  0.000000  0.000000   0.000000  0.000000  0.000000
3  24.858340  15.397206  0.814680  8.439159  42.003570  4.880007  3.607039
Katherine E. MacArthur
@k8macarthur
@erh3cq I've just seen this. Have you checked that all the elements you want to quantify are in the metadata? What happens when you use 'zeta' or 'cross_section'? Is the error just there with 'CL'?
The quant function definitely takes some things from the metadata, especially when using absorption correction. Might this be the source of your error?
Eric R. Hoglund
@erh3cq

@k8macarthur Yes those are good checks.
I have checked the metadata and all edges are included:

Acquisition_instrument
TEM
Detector
EDS
azimuth_angle = 0.0
elevation_angle = 35.0
energy_resolution_MnKa = 131.45115116593425
number_of_frames = 58
Stage
tilt_alpha = 0.005
tilt_beta = -0.0
x = -0.000321
y = -8.1e-05
z = -2.5e-05
beam_energy = 200.0
camera_length = 73.0
magnification = 20000.0
microscope = Titan
General
date = 2021-08-18
original_filename = 1032 SI 23500 x HAADF-BF_original.emd
time = 10:32:08-04:00
time_zone = Eastern Daylight Time
title = Stack of EDS
Sample
elements = ['C', 'Er', 'Ga', 'Lu', 'Mo', 'N', 'Nd', 'O', 'Pt', 'Sc', 'Si', 'Yb']
xray_lines <list>
[0] = C_Ka
[1] = Er_La
[10] = Nd_Lb1
[11] = Nd_Lb2
[12] = Nd_Lb3
[13] = Nd_Lg1
[14] = Nd_Ma
[15] = Nd_Mg
[16] = Nd_Mz
[17] = O_Ka
[18] = Pt_Ma
[19] = Sc_Ka
[2] = Er_Mb
[20] = Sc_Kb
[21] = Si_Ka
[22] = Yb_La
[23] = Yb_Ma
[24] = Yb_Mb
[3] = Er_Mz
[4] = Ga_La
[5] = Lu_La
[6] = Lu_Ma
[7] = Mo_La
[8] = N_Ka
[9] = Nd_La
Signal
binned = True
signal_type = EDS_TEM

I have not yet tried the zeta or cross_section quantifications, but I did try feeding the intensities directly into hyperspy.misc.eds.utils.quantification_cliff_lorimer, as suggested by @thomasaarholt. Using the util directly gave results for all navigation indices. There must be something going wrong in quant when the attributes are pulled from the signal.
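For reference, a hedged sketch of that direct-utility route; the exact signature of quantification_cliff_lorimer may differ between HyperSpy versions:

import numpy as np
from hyperspy.misc.eds.utils import quantification_cliff_lorimer

# Stack the per-line intensity maps and pass the k-factors explicitly,
# bypassing the metadata lookup done by EDSTEMSpectrum.quantification.
intensities = np.array([sig.data for sig in quant_sigs])
weight_frac = quantification_cliff_lorimer(intensities, kfactors=quant_K)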

7 replies
MurilooMoreira
@MurilooMoreira
Hello all, I am trying to sum specific frames while opening JEOL .pts files, but the file opened is always the full integration of frames for the EDSSpectrum signal. If I use sum_frames=False, I can open all the frames, but this is very memory-consuming. Here is an example of what I am doing: S = hs.load("data.pts", sum_frames=True, first_frame=0, last_frame=10). When I do it with emd files the code works, but not with the pts files. I also tried with the .asw from JEOL and the problem is the same. Does someone know what I can do to solve this issue? I cannot process the data if I open all frames without summing, because of memory.
6 replies
MurilooMoreira
@MurilooMoreira
In this case I have more than 10 frames (37), and the command integrates all 37 instead of 10.
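A possible workaround to try (an untested sketch; it assumes the .pts reader supports lazy loading and exposes the frames as a navigation axis when sum_frames=False):

s = hs.load("data.pts", sum_frames=False, lazy=True)
first10 = s.inav[..., 0:10]  # slice the frame axis (its position is assumed here)
summed = first10.sum(axis=first10.axes_manager.navigation_axes[-1])
summed.compute()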
Svetlana K
@Kosvetka_gitlab
Hi guys, could you remove the automatic addition of the power-law background to the model? I spent two days trying to figure out why my low-loss fitting with fixed patterns was not working properly. And guess what, there was a power-law component.
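For reference, the automatic background can be switched off when creating the model; a minimal sketch:

m = s.create_model(auto_background=False)  # no PowerLaw component added
print(m.components)                        # verify what is actually in the model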
17 replies
adriente
@adriente

I am trying to register a new signal type. I have done as it is written here: https://hyperspy.readthedocs.io/en/latest/dev_guide/writing_extensions.html. I have added an entry point in my setup.py and I have created a hyperspy_extension.yaml (in the same folder as setup.py) with all the required info. But when I use hs.print_known_signal_types(), my new signal does not appear.

I found a way to work around this problem by putting

import hyperspy.extensions as e

e.ALL_EXTENSIONS["signals"]["MySignal"] = {'signal_type': 'MySignal',
   'signal_dimension': 1,
   'dtype': 'real',
   'lazy': False,
   'module': 'mymodule.datasets.signal'}

in the signal.py file.

What am I doing wrong?
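
For comparison, the entry-point route from the dev guide looks roughly like this (a sketch; 'mymodule' mirrors the name used above):

# setup.py
from setuptools import setup

setup(
    name='mymodule',
    # ...
    entry_points={'hyperspy.extensions': 'mymodule = mymodule'},
    # ship the yaml inside the installed package, not just next to setup.py:
    package_data={'mymodule': ['hyperspy_extension.yaml']},
)

One thing to check: hyperspy_extension.yaml has to live inside the package directory and be installed with it; if it only sits next to setup.py, HyperSpy cannot find it at import time.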

7 replies
NathalieBrun
@NathalieBrun
Hi all, I am trying to use Pipeline following the indications in the user guide. After executing

pipe = Pipeline([("scaler", MinMaxScaler()), ("PCA", PCA())])
out = s.decomposition(algorithm=pipe, return_info=True)

I want to reconstruct a denoised model. I need to use the inverse_transform of the scaler, but I don't know how to call it. Thanks.
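A hedged sketch of one way to reach the fitted scaler afterwards (this assumes scikit-learn's Pipeline API; factors_scaled is a hypothetical array to be mapped back to the original scale):

# The fitted steps are reachable through named_steps:
scaler = pipe.named_steps["scaler"]
restored = scaler.inverse_transform(factors_scaled)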
3 replies
Eric R. Hoglund
@erh3cq
Anyone use Miniconda? I uninstalled Anaconda because my environments were a mess. I installed Miniconda and followed the dev installation. I am running as admin and have VS Community 2019 with the Python and C++ tools. The pip install -e . command fails with:
(data-analysis) C:\Users\Owner\Documents\GitHub\hyperspy>pip install -e . --no-deps --user
Obtaining file:///C:/Users/Owner/Documents/GitHub/hyperspy
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
    Preparing wheel metadata ... done
Installing collected packages: hyperspy
  Running setup.py develop for hyperspy
    ERROR: Command errored out with exit status 1:
     command: 'C:\Users\Owner\Programs\Miniconda3\envs\data-analysis\python.exe' -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Owner\\Documents\\GitHub\\hyperspy\\setup.py'"'"'; __file__='"'"'C:\\Users\\Owner\\Documents\\GitHub\\hyperspy\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps --user --prefix=
         cwd: C:\Users\Owner\Documents\GitHub\hyperspy\
    Complete output (34 lines):
    test_compilers.c
    running develop
    WARNING: The user site-packages directory is disabled.
    running egg_info
    writing hyperspy.egg-info\PKG-INFO
    writing dependency_links to hyperspy.egg-info\dependency_links.txt
    writing requirements to hyperspy.egg-info\requires.txt
    writing top-level names to hyperspy.egg-info\top_level.txt
    reading manifest file 'hyperspy.egg-info\SOURCES.txt'
    reading manifest template 'MANIFEST.in'
    warning: no files found matching '*.py' under directory 'bin'
    warning: no files found matching '*.rst' under directory 'examples'
    warning: no files found matching '*.txt' under directory 'examples'
    adding license file 'COPYING.txt'
    adding license file 'AUTHORS.txt'
    writing manifest file 'hyperspy.egg-info\SOURCES.txt'
    running build_ext
    building 'hyperspy.io_plugins.unbcf_fast' extension
    C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -IC:\Users\Owner\Programs\Miniconda3\envs\data-analysis\include -IC:\Users\Owner\Programs\Miniconda3\envs\data-analysis\include -IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\ATLMFC\include -IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include -IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um -IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\ucrt -IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\shared -IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\um -IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\winrt -IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\cppwinrt -IC:\Users\Owner\Programs\fftw-3.3.5-dll64 -IC:\Users\Owner\Programs\boost_1_72_0\boost_1_72_0 /Tchyperspy\io_plugins\unbcf_fast.c /Fobuild\temp.win-amd64-3.9\Release\hyperspy\io_plugins\unbcf_fast.obj
    unbcf_fast.c
    hyperspy\io_plugins\unbcf_fast.c(21760): error C2039: 'tp_print': is not a member of '_typeobject'
    C:\Users\Owner\Programs\Miniconda3\envs\data-analysis\include\cpython/object.h(193): note: see declaration of '_typeobject'
    hyperspy\io_plugins\unbcf_fast.c(21771): error C2039: 'tp_print': is not a member of '_typeobject'
    C:\Users\Owner\Programs\Miniconda3\envs\data-analysis\include\cpython/object.h(193): note: see declaration of '_typeobject'
    hyperspy\io_plugins\unbcf_fast.c(21776): error C2039: 'tp_print': is not a member of '_typeobject'
    C:\Users\Owner\Programs\Miniconda
I remember something about setuptools being finicky
Eric R. Hoglund
@erh3cq
I have now also tried Anaconda and downgrading setuptools to 57.4.0, no luck.
and the second half of the output is missing above:
 C:\Users\Owner\Programs\Miniconda3\envs\data-analysis\include\cpython/object.h(193): note: see declaration of '_typeobject'
    hyperspy\io_plugins\unbcf_fast.c(22423): warning C4996: '_PyUnicode_get_wstr_length': deprecated in 3.3
    hyperspy\io_plugins\unbcf_fast.c(22439): warning C4996: '_PyUnicode_get_wstr_length': deprecated in 3.3
    hyperspy\io_plugins\unbcf_fast.c(23650): warning C4996: 'PyUnicode_FromUnicode': deprecated in 3.3
    error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Tools\\MSVC\\14.29.30133\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
    ----------------------------------------
  Rolling back uninstall of hyperspy
  Moving to c:\users\owner\programs\miniconda3\envs\data-analysis\lib\site-packages\hyperspy-1.6.4.dist-info\
   from C:\Users\Owner\Programs\Miniconda3\envs\data-analysis\Lib\site-packages\~yperspy-1.6.4.dist-info
  Moving to c:\users\owner\programs\miniconda3\envs\data-analysis\lib\site-packages\hyperspy\
   from C:\Users\Owner\Programs\Miniconda3\envs\data-analysis\Lib\site-packages\~yperspy
ERROR: Command errored out with exit status 1: 'C:\Users\Owner\Programs\Miniconda3\envs\data-analysis\python.exe' -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Owner\\Documents\\GitHub\\hyperspy\\setup.py'"'"'; __file__='"'"'C:\\Users\\Owner\\Documents\\GitHub\\hyperspy\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps Check the logs for full command output.
100 replies
Eric R. Hoglund
@erh3cq
Not sure what bcf format is.
3 replies
Eric R. Hoglund
@erh3cq
I will try that after a complete uninstall of all things python and VS. Another blank slate might help. I just hope uninstalling VS doesn’t trash some other program…we will see.
Thomas Aarholt
@thomasaarholt
Good luck, and goodnight :)
(image attached: image.png)
New record?
Eric R. Hoglund
@erh3cq
Would deleting gitignore or anything git help?
HA! Going for gold 96
Good night Thomas. Late there
Eric R. Hoglund
@erh3cq
@thomasaarholt ... figured it out ... downgraded to Python 3.8, and then my local repo installed. I think the 3.9 wheels are missing or can't be built. At least it is running now! Now we know the source of the problem, and it might be worth investigating or making a note of.
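Possibly useful background: the tp_print errors come from C sources generated by a Cython release that predates Python 3.9 (tp_print was removed from _typeobject in 3.9). A hedged alternative to downgrading Python is regenerating the C files with a current Cython before reinstalling; the recythonize helper is HyperSpy's own setup.py command and may vary by version:

pip install -U cython
python setup.py recythonize
pip install -e . --no-deps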
1 reply
Eric R. Hoglund
@erh3cq
Why does it fail? Is there a way to fix rather than remove?
6 replies
DENSmerijn
@DENSmerijn

I am trying to run pytest on a local development directory, but I am getting the error

ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: -n --dist loadfile

Do you know what is causing this issue?
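Those two flags come from the pytest-xdist plugin (-n sets the number of parallel workers, --dist loadfile groups tests by file), so a likely fix is installing it in the environment:

pip install pytest-xdist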

2 replies
Eric R. Hoglund
@erh3cq
Anyone know the memory usage per atom in atomap Gaussian fitting? Have 25K atoms and the fitting just ran out of memory on the HPC…ha!
Thomas Aarholt
@thomasaarholt
*Raises hand* My bad!
At least I wrote the optimized version.
What resolution is the image?
The algorithm crops out a small square at a time from the image, centered around the atomic column in question
I'm not sure how memory intensive it is supposed to be though, since it works in a loop. Do you know where it was when it died?
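A rough illustration of the cropping step described above (a generic numpy sketch, not atomap's actual code):

import numpy as np

def crop_around(image, row, col, half_width):
    # Return a small square view centered on one atomic column,
    # clipped at the image borders.
    r0, r1 = max(0, row - half_width), min(image.shape[0], row + half_width + 1)
    c0, c1 = max(0, col - half_width), min(image.shape[1], col + half_width + 1)
    return image[r0:r1, c0:c1]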
Eric R. Hoglund
@erh3cq
This one is unfortunately 4k. Needed to get the full field of view and sampling
Eric R. Hoglund
@erh3cq
For EDX analysis with the super X (4 detectors), I’m assuming each detector’s signal needs to be analyzed independently to account for experimental geometry?
1 reply
Eric R. Hoglund
@erh3cq
Okkie dokkie
Luiz Tizei
@ltizei
Does anyone know if there is a way to implement a component for fitting which uses either np.piecewise() or np.heaviside()? I tried doing as described in the model fitting section of the manual, but I don't think SymPy constructs the function correctly.
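One route that sidesteps SymPy entirely (a sketch using HyperSpy's plain Component base class; the parameter names are hypothetical):

import numpy as np
from hyperspy.component import Component

class Step(Component):
    # A * heaviside(x - x0), defined with numpy rather than sympy.
    def __init__(self, A=1.0, x0=0.0):
        Component.__init__(self, ('A', 'x0'))
        self.A.value = A
        self.x0.value = x0

    def function(self, x):
        return self.A.value * np.heaviside(x - self.x0.value, 0.5)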
4 replies
OliDG
@OliDG

Hello,

The [example code for interactive Line2DROI](hyperspy/hyperspy#2414) triggers the following ValueError for me: "The value is out of the axis limits".
Can anyone reproduce the error, or is it just me? (HyperSpy is version 1.6.2, hyperspyui 1.4.0.)

here is the code:

#https://github.com/hyperspy/hyperspy/pull/2414
import hyperspy.api as hs

im0 = hs.datasets.example_signals.reference_hologram()
im1 = hs.datasets.example_signals.object_hologram()

im0.plot()
im1.plot()

line_profile = hs.roi.Line2DROI(400, 250, 220, 600, 2)
line0 = line_profile.interactive(im0)
line1 = line_profile.interactive(im1)

hs.plot.plot_spectra([line0, line1])

The same happens with the [example from the docs](https://hyperspy.org/hyperspy-doc/current/user_guide/interactive_operations_ROIs.html):

import hyperspy.api as hs
holo = hs.datasets.example_signals.object_hologram()
roi = hs.roi.Line2DROI(x1=465.577, y1=445.15, x2=169.4, y2=387.731, linewidth=0)
holo.plot()
ss = roi.interactive(holo)

Thanks!
Olivier
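A hedged debugging sketch: the error message suggests the ROI coordinates fall outside the signal's calibrated axis ranges (which depend on each axis's offset and scale), so printing the limits shows whether the example coordinates actually fit the dataset:

for ax in holo.axes_manager.signal_axes:
    print(ax.name, ax.low_value, ax.high_value)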

3 replies
Zanetta Pierre-marie
@ZanettaPM

Hi folks!
Has anyone already mentioned that if you pass the limit of 10 X-ray lines, and then try to extract their intensities and save them into the metadata (so the entire signal can be saved in the .hspy format), HyperSpy sorts them in a weird sequence?

import numpy as np
import hyperspy.api as hs
s=hs.datasets.example_signals.EDS_TEM_Spectrum()
s.metadata
s.add_elements(['O','Ca','O','Ti','Al','Si','Mg','C','Cu','Ga','Fe','V','Cr'])
s.add_lines()
s.metadata.Sample.intensities=s.get_lines_intensity()
s.metadata

I tried to use the sort function and other tricks, but I have difficulty making the list match my list of X-ray lines and k-factors for the quantification. Any ideas? Should I open an issue? It would be more convenient if the list matched the X-ray line order. Thanks!
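In the meantime, a minimal workaround sketch: build the k-factor list in whatever order s.metadata.Sample.xray_lines ends up in, instead of re-sorting the intensities; my_kfactors is a hypothetical dict mapping line name to k-factor:

kfactors = [my_kfactors[line] for line in s.metadata.Sample.xray_lines]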

4 replies
Zanetta Pierre-marie
@ZanettaPM
(image attached: Capture.PNG)
lnaglecocco
@lnaglecocco

Hi, I've encountered a strange issue with EELS data analysis. I have a LowLoss data file and a HighLoss data file. I can save them as HyperSpy objects and plot them, and the data is clearly all there, although for the LowLoss the zero-loss peak is not at 0 eV. However, when I try to align it using the code below, I find that both the LowLoss and HighLoss datasets are somehow cut down to only exist over a very narrow range, as if they are getting cropped in signal space. This only happens with some datasets; other datasets work as they should with no issue. Does anyone know the problem here? I am able to share example data files if it would be useful for debugging.

LowLoss.align_zero_loss_peak(calibrate=True, also_align=[HighLoss], subpixel=True)

6 replies
Mingquan Xu
@Mingquan_Xu_twitter
Hi all, I have a question about the PCA algorithm. In my understanding, PCA (or SVD) has an averaging effect when applied to data containing tiny features. When I try to use PCA to de-noise my atomic-resolution EELS, the averaging effect is obvious. In my EELS map of ~30 atoms, only one atom has its unique feature, but after PCA the unique feature disappears and its spectrum becomes similar to the sum spectrum of the other atoms. Is there any other de-noising algorithm that avoids this effect?
Thomas Aarholt
@thomasaarholt
Well, with PCA, how many components are you using? Are you inspecting the loadings/factors before removing them?
7 replies
PCA only works well if you have multiple measurements of a given feature. If your unique atom is resolved only by a single pixel, PCA can't help, because it uses similar information that is present in the other pixels to help reduce the noise.
I can recommend performing curve fitting to capture the feature in question. Can you give us a screenshot of the spectrum?
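For the inspection step, the standard decomposition calls look like this (standard HyperSpy API; the component count 8 is arbitrary):

s.decomposition()                          # SVD by default
s.plot_explained_variance_ratio()          # scree plot
s.plot_decomposition_results()             # browse factors and loadings together
s_denoised = s.get_decomposition_model(8)  # rebuild from the first 8 components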
Mingquan Xu
@Mingquan_Xu_twitter
One atom has 4x4 pixels.
Mingquan Xu
@Mingquan_Xu_twitter
I use "specifying vline=True" to estimate the number of significant components in PCA scree plot, and it usually tells me 6~10 compoments for reconstruction.