Thomas Aarholt
@thomasaarholt
Hi @wstolp! Taking a look at the dev guide now!
I can't reproduce the 404 - where did you find it?
I went to https://hyperspy.org/ and pressed "Documentation" under both Stable and Development (on the right hand side)
And then looked under "Developer Guide"
We're very happy to help new developers like yourself, so just have a go and we'll help with what you need :)
wstolp
@wstolp
Hi Thomas. Yes later I saw there are multiple developer guide type pages. The 404 I saw here https://github.com/hyperspy/hyperspy, scroll down to "contributing guidelines", then "developer guide".
purewendy
@purewendy
Hello, I would like to do EELS fitting on my data. In hs.preferences.gui() the GOS directory is set to C:\Program Files\Gatan, which is the directory of my installed Digital Micrograph, but it shows errors. Any idea how to fix it? Thanks very much!
Weixin Song
@winston-song
Hi All, I use hs.plot.plot_images(atom_percent, scalebar =[0],scalebar_color='white'), is there any way to customize the scalebar length and the fontsize of the scalebar?
Weixin Song
@winston-song

atom_percent is a list

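Since plot_images draws onto regular Matplotlib axes, one possible workaround (a sketch, not HyperSpy's own scalebar API) is to hide the built-in scalebar and draw your own with Matplotlib's AnchoredSizeBar, which gives full control over the bar length and font size. The image, bar length, and label below are made-up values:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.font_manager import FontProperties
from mpl_toolkits.axes_grid1.anchored_artists import AnchoredSizeBar

fig, ax = plt.subplots()
ax.imshow(np.random.rand(64, 64), cmap="gray")

# 20 pixels long, labelled "10 nm", white, 14 pt font (all made-up values)
bar = AnchoredSizeBar(ax.transData, 20, "10 nm", loc="lower right",
                      color="white", frameon=False, size_vertical=1,
                      fontproperties=FontProperties(size=14))
ax.add_artist(bar)
```

With a real plot_images call you would grab the axes it returns and add the bar there instead.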
Thomas Aarholt
@thomasaarholt
@purewendy what is your error?
Jędrzej Morzy
@JMorzy
Hi, I have a ragged (yeah, I know :/) data array (1 spectral dimension, 2 'navigation' dimensions that are actually scan number and time). I'd like to do some PCA on it. At the moment, the empty spots in the ragged array are filled with NaNs. Is there any way of making it work despite the NaNs? I was thinking it should be possible to reshape the array into a 1D array (flattening the scan and scan number away) and then re-reshaping it back and looking at the loadings afterwards, but that sounds like a total botch. Any more elegant ideas?
Sorry, 2D. One very long navigation dimension and one normal signal dimension of course
Thomas Aarholt
@thomasaarholt
Hi @JMorzy, which part of your data is ragged? Are you missing entries in the signal or navigation dimension?
I normally think of ragged arrays as something like:
[
    [1,2,3],
    [1,2,3,4],
]
How does this compare with your data?
Jędrzej Morzy
@JMorzy
Exactly, ragged in the navigation dimension. The scans are different lengths (scan number is my y, time is my x)
Then to make it non-ragged to put it into numpy array/hs signal I filled the gaps with NaNs
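The flatten-and-mask idea can be sketched with plain NumPy (not HyperSpy's decomposition API): drop the NaN-padded rows before the SVD, then scatter the loadings back to their original positions afterwards. The array sizes here are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n_rows, n_chan = 100, 32
data = rng.random((n_rows, n_chan))
data[::7] = np.nan                      # NaN-padded rows from the ragged scans

valid = ~np.isnan(data).any(axis=1)     # keep only fully valid spectra
spectra = data[valid]

# PCA via SVD on the mean-centred valid rows
centred = spectra - spectra.mean(axis=0)
u, s, vt = np.linalg.svd(centred, full_matrices=False)
loadings_valid = u * s                  # scores of the valid rows

# Scatter the loadings back to the original (ragged) positions
loadings = np.full((n_rows, loadings_valid.shape[1]), np.nan)
loadings[valid] = loadings_valid
```

The loadings then reshape back to the original navigation grid, with NaNs marking the positions that were never scanned.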
Corentin Le Guillou
@CorentinLG
Hi all, I am using the non_uniform_axes branch. Since the last update (apparently), m.set_signal_range() no longer accepts explicit axis values, only axis indices. Any idea why? And any suggestion for working around the problem? Thanks
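As a stopgap until that's resolved, converting axis values to indices by hand is straightforward for a uniform axis. A minimal sketch with made-up calibration values (value2index here is a hypothetical helper, not the non_uniform_axes API):

```python
import numpy as np

offset, scale, size = 200.0, 0.5, 1024   # made-up axis calibration (eV)
axis = offset + scale * np.arange(size)

def value2index(value):
    """Nearest index on the uniform axis for a given axis value."""
    return int(round((value - offset) / scale))

# indices to pass where axis values used to work
i1, i2 = value2index(300.0), value2index(450.0)
```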
Mingquan Xu
@Mingquan_Xu_twitter
Hi all, is there any package that can do local low-rank denoising for EELS spectrum images?
Thomas Aarholt
@thomasaarholt
What is low rank denoising? I use PCA and ICA a lot for eels in hyperspy.
I see. I hadn't heard about it before.
Mingquan Xu
@Mingquan_Xu_twitter
Hi @thomasaarholt, yes, I also know this from that article, but I don't know which software can do this process.
I have tried PCA and NMF in HyperSpy on my data (an SI), but the results are not so good.
Thomas Aarholt
@thomasaarholt
Have you thought about what might be causing your data's results to be "not so good"? What sort of EELS is it?
rtangpy
@rtangpy
Hi all, I am doing model fitting with 2D navigation and 1D signal. I have two questions: 1. since I need to fit more than 1 million pixels, multifit takes me 50 minutes. Is there any way to speed this up? 2. Before I learned HyperSpy, I used multiprocessing with 8 cores to run optimize.minimize. Interestingly, HyperSpy's multifit, which only uses one core, is even slightly faster than my multiprocessing code (8 cores). I am curious how HyperSpy reaches such high speed.
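On question 1: when the per-pixel model reduces to linear least squares, the biggest speed-up is usually vectorizing across pixels rather than multiprocessing. A NumPy sketch on simulated data (not HyperSpy's multifit internals) that fits many spectra in a single call:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_chan = 10_000, 50
x = np.linspace(0.0, 1.0, n_chan)
true = np.array([2.0, -1.0, 0.5])       # quadratic shared by all pixels, for simplicity
y = np.polyval(true, x)[:, None] + 0.01 * rng.standard_normal((n_chan, n_pix))

# np.polyfit accepts a 2-D y and fits every column in a single least-squares
# solve, avoiding a Python-level loop over pixels entirely.
coeffs = np.polyfit(x, y, deg=2)        # shape (3, n_pix)
```

Loop overhead per pixel is often what dominates at a million pixels, which may also explain why a single-core vectorized fit beats 8-core multiprocessing.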
adriente
@adriente

I am performing data analysis on EDXS data. For the analysis I need some parameters such as the sample thickness, the elements in the sample, etc. Depending on the microscope that was used (and the corresponding acquisition software), these parameters are not all filled in the metadata.

Is there a way to set metadata parameters so that existing values are not overwritten and only the empty ones are filled?
I know this is possible for elements using s.add_elements(["Si"]), but I couldn't find an equivalent function for the microscope parameters, for example.

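For the non-overwriting part, the idea can be sketched in plain Python (fill_missing is a hypothetical helper, not a HyperSpy function): recursively copy defaults into a nested metadata dict only where keys are absent.

```python
def fill_missing(metadata: dict, defaults: dict) -> dict:
    """Copy entries from defaults into metadata only where they are missing."""
    for key, value in defaults.items():
        if key not in metadata:
            metadata[key] = value
        elif isinstance(metadata[key], dict) and isinstance(value, dict):
            fill_missing(metadata[key], value)  # recurse into nested sections
    return metadata

# Made-up metadata tree: beam_energy already set, the rest missing
meta = {"Acquisition_instrument": {"TEM": {"beam_energy": 200.0}}}
defaults = {"Acquisition_instrument": {"TEM": {"beam_energy": 300.0, "tilt_stage": 0.0}},
            "Sample": {"thickness": 50.0}}
filled = fill_missing(meta, defaults)
```

The same logic would apply to a signal's metadata.as_dictionary() before writing it back.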
Eric Prestat
@ericpre
image.png
@adriente, is it not what you need?
Zezhong Zhang
@zezhong-zhang
samfire_red_chis.png

Hi everyone, I am trying to use SamFire for EELS model fitting. After reading the documentation and the source code a bit, I still have a few questions about how to set it up properly. My current setup is:

# fit 5% of the pixels to estimate the starting values
shape = (s_eels.axes_manager.navigation_axes[1].size, s_eels.axes_manager.navigation_axes[0].size)
mask = np.random.choice([0, 1], size=shape, p=[0.05, 0.95])
m.multifit(mask=mask, optimizer='lm', bounded=True, iterpath='serpentine', kind='smart')
# then start SamFire
samf = m.create_samfire(workers=2, ipyparallel=False)  # create samfire
samf.metadata.goodness_test.tolerance = 0.3  # set a sensible tolerance
samf.refresh_database()  # does this refresh the strategy or the fitted pixels? The documentation and the source code read a bit contradictory here
samf.start(optimizer='lm', loss_function='ls', bounded=True, iterpath='serpentine', kind='smart', optional_components=['Mn_L3', 'O_K', 'PowerLaw'])  # start fitting

The fitting results have following issues:

  1. Only the pixels already fitted with m.multifit() have sensible values; the others do not have a good fit. I also tried fitting some pixels with smart_fit(), which gives similar results. This can be verified with m.red_chisq.plot() (see attached).

  2. The vacuum pixels yield growth in the power-law fit of the pre-edge range due to noise, and the edge components fail as well, since there should be none there. I have therefore made all the components optional, but this is not the solution. Is it possible to switch off the fitting for the vacuum? I guess one could use a mask.

  3. One question about the elemental component intensity for mapping: I saw the discussion in #2562. Is it possible to get the absolute intensity, or to show the H-S cross-section under the given microscope conditions? I want to know their exact product in order to calculate the partial cross-section…

  4. One final question about the fine structure coefficients when using m.enable_fine_structure(): are those a combination of Gaussians? Can we access the Gaussian height, width, and centre? I currently couldn't find docs about the values in fine_structure_coefficient, but I see that they are sometimes negative, and the plot indeed shows a corresponding negative Gaussian fitting the curve (which occurs even after forcing all edge components to be positive). Do the negative values make sense? If it is a combination of Gaussians, it would be really helpful to have access to their values (instead of building Gaussian models oneself), e.g. for computing white-line ratios.

I am happy to provide a minimal example if that would be helpful. Many thanks for your help!

Thomas Aarholt
@thomasaarholt
@zezhong-zhang I'm happy you're using SAMFIRE! I too am unsure of exactly how to set it up. It will be good to get a working example.
I really like your approach for creating a mask, using random.choice!
  1. The vacuum pixels should indeed be masked in the way you describe. I'm not sure how masking works with samfire!
  2. I suggest you add a comment to that post, and perhaps delve into the source code and see if you can help shed light.
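On the vacuum point, one common way to build such a mask is to threshold the total counts per pixel. A NumPy sketch on fake data (the threshold factor is arbitrary, and this is not HyperSpy's masking API):

```python
import numpy as np

rng = np.random.default_rng(0)
si = rng.poisson(5, size=(64, 64, 200)).astype(float)  # fake spectrum image
si[:16] *= 0.01                                        # top rows mimic vacuum

total = si.sum(axis=-1)                  # integrated counts per pixel
vacuum = total < 0.5 * np.median(total)  # True where the pixel is (likely) vacuum
```

The boolean vacuum array could then be combined with the random 5% mask above so those pixels are never fitted.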
Zezhong Zhang
@zezhong-zhang
@thomasaarholt Thanks for the comments! Sure, I will add the request for absolute intensity (the product) to the post, and dive a bit deeper into the source code.
Thomas Aarholt
@thomasaarholt
Brilliant! Let us know how you get on - I'm a bit busy with other things, but I can at least comment on it
Mingquan Xu
@Mingquan_Xu_twitter
When I use ‘align_zero_loss_peak’ to align ZLP in my SI dataset, there is an error:
image.png
I used this function before, but this is the first time I see this warning. What would cause this?
image.png
Could anyone give me any suggestions to solve this? Thanks in advance!
Eric Prestat
@ericpre
is your hyperspy up to date?
Mingquan Xu
@Mingquan_Xu_twitter

is your hyperspy up to date?

the version is 1.6.2

Eric Prestat
@ericpre
which means that it is not up to date; the latest is 1.6.4, and this issue was fixed in 1.6.3
Mingquan Xu
@Mingquan_Xu_twitter

which means that it is not up to date, latest is 1.6.4 and this issue has been fixed in 1.6.3

Thanks very much for your reply. I will update my HyperSpy and have a check.

Thomas Aarholt
@thomasaarholt
What is a good way to save artificial lazy signals that are larger than memory? I notice that my ram consumption shoots up when I try saving a dask-created signal, even if I specify the chunks.
import hyperspy.api as hs
from hyperspy.axes import UniformDataAxis
import dask.array as da

from hyperspy.datasets.example_signals import EDS_SEM_Spectrum
from hyperspy._signals.eds_sem import LazyEDSSEMSpectrum
from hyperspy._signals.signal2d import LazySignal2D

s = EDS_SEM_Spectrum()
data = s.data
axis = UniformDataAxis(offset=-0.1, scale=0.01, size=1024, units="eV")

s2 = LazyEDSSEMSpectrum(data, axes=[axis])
s2.add_elements(s.metadata.Sample.elements)
s2.set_microscope_parameters(beam_energy=10.)

nav = LazySignal2D(da.random.random((2500, 1000)))
s = s2 * nav.T

print("Shape:", s.data.shape) # 2500, 1000, 1024 - ~20GB
s.save("lazy.hspy", compression=None, overwrite=True, chunks=(100, 1000, 1024))
Håkon Wiik Ånes
@hakonanes
Could this be a problem with dask 2021.04.0 and related to https://github.com/dask/dask/issues/7583#issue-863708913? We've pinned dask to below this version in kikuchipy because of sudden memory issues after 2021.04.0.
Thomas Aarholt
@thomasaarholt
Possibly! I'll try with an older dask and see!
Hmm, could someone please check if conda create --name testdask hyperspy results in an error? I'm running mamba on my M1 Mac, and installing hyperspy is giving a really weird error today. Installing jupyter notebook works fine: