ksyao2002
@ksyao2002
Sorry, I misspoke: I meant choosing chunking without saving, for example choosing chunks right before plotting the dataset. If we have to save the data every time to chunk, this would make changing the chunks very inefficient.
Eric Prestat
@ericpre
@ksyao2002: I guess @thomasaarholt suggested to use chunks=(100, 100, 2) when calling save because in your example, you are loading a hspy file (data = hs.load("tests/3d.hspy", lazy = True)) and he may have assumed that you saved the data using hyperspy originally?
Maybe a better question is: when is chunking done for the first time? And you may want to specify the chunk size when the data is created?
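A minimal sketch of rechunking in memory without re-saving, assuming the 3D example file mentioned above (the chunk shape is just the one from the discussion):

import hyperspy.api as hs

# Load lazily; s.data is then a dask array
s = hs.load("tests/3d.hspy", lazy=True)

# Rechunk the dask array in memory, without writing a new file
s.data = s.data.rechunk((100, 100, 2))

s.plot()  # plotting now works block-by-block with the new chunks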
Corentin Le Guillou
@CorentinLG

On Windows 7, after updating all packages using conda, I get the following error when importing hyperspy:
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-3-d7e81a299c63> in <module>
     12 get_ipython().run_line_magic('matplotlib', 'widget')
     13
---> 14 import hyperspy.api as hs
     15 import numpy as np
     16

~\Anaconda3\lib\site-packages\hyperspy\api.py in <module>
----> 1 from hyperspy.api_nogui import *
      2 import logging
      3 _logger = logging.getLogger(__name__)
      4
      5 __doc__ = hyperspy.api_nogui.__doc__

~\Anaconda3\lib\site-packages\hyperspy\api_nogui.py in <module>
     12 from hyperspy.utils import *
     13 from hyperspy.io import load
---> 14 from hyperspy import signals
     15 from hyperspy.Release import version as __version__
     16 from hyperspy import docstrings

~\Anaconda3\lib\site-packages\hyperspy\signals.py in <module>
     48     _g[_signal] = getattr(
     49         importlib.import_module(
---> 50             _specs["module"]), _signal)
     51
     52 del importlib

~\Anaconda3\lib\importlib\__init__.py in import_module(name, package)
    125             break
    126         level += 1
--> 127     return _bootstrap._gcd_import(name[level:], package, level)
    128
    129

~\Anaconda3\lib\site-packages\hyperspy\_signals\signal2d.py in <module>
     24 import logging
     25 from scipy.fftpack import fftn, ifftn
---> 26 from skimage.feature.register_translation import _upsampled_dft
     27
     28 from hyperspy.defaults_parser import preferences

ModuleNotFoundError: No module named 'skimage.feature.register_translation'
I noticed scikit-image just released a new version.
Eric Prestat
@ericpre
scikit-image broke their API in the last release. Until there is a new release of hyperspy, you can downgrade scikit-image: conda install scikit-image=0.16.2
Mohsen
@M0hsend
eels_detector_artefact.png
Hi, when performing decomposition on EELS data, the boundary of the GIF detector quadrants sometimes appears in some of the components (like in the image above). What is the best way to avoid this, other than correcting with a vertical offset after the fact?
Thomas Aarholt
@thomasaarholt
@ericpre aw man, I just realised that this was causing the ImportError my friend was seeing when installing hyperspy for GMS Python today. Just spent an hour over instant messaging trying to work it out, to no avail. (My friend is new to Python.)
Hi @M0hsend, I see the same thing on my data. I've generally just left it in, or used blind source separation to remove it by not including that particular component.
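A minimal sketch of that approach, assuming the artefact is isolated in one component; the filename and the component indices are only illustrative:

import hyperspy.api as hs

s = hs.load("eels_map.hspy")  # illustrative filename

# Decompose and inspect factors/loadings to identify the artefact component
s.decomposition()
s.plot_decomposition_results()

# Rebuild the signal from selected components only, leaving out the one
# showing the detector-quadrant artefact (here assumed to be component 3)
s_clean = s.get_decomposition_model([0, 1, 2, 4])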
Corentin Le Guillou
@CorentinLG
Thanks @ericpre, I should have looked at the open issues...
I have another one: I am trying to use s.align2D(), but I am getting this error message:
image.png
Thomas Aarholt
@thomasaarholt
If you're still trying, @CorentinLG, you could try upgrading scipy and numpy and see if that fixes it. I haven't got a computer to try with right now.
Mohsen
@M0hsend

Hi @M0hsend, I see the same thing on my data. I've generally just left it in, or used blind source separation to remove it by not including that particular component.

Thanks @thomasaarholt !

Corentin Le Guillou
@CorentinLG
@thomasaarholt: I did update numpy and scipy and also tried using the 1.5 dev version from RELEASE_next_minor, but I still get the same error. Any ideas would be appreciated...
Corentin Le Guillou
@CorentinLG
I had the wrong development version. Getting the 1.6 dev version did the trick...
Thomas Aarholt
@thomasaarholt
great :)
Sumo1612
@Sumo1612

Hi, I am a new user of hyperspy and I am specifically learning the EELS quantification module. I was following the tutorial and I am facing the following issues:

  1. How do I extract the individual components of the model and plot them? I am using the "Hexagonal BN" example, but I am not able to separate the individual spectra, and I also cannot add a legend to the plot. This is the small piece of code that I tried; I got the output shown in the attached image.

s = hs.datasets.eelsdb(title="Hexagonal Boron Nitride",
                       spectrum_type="coreloss")[0]

ll = hs.datasets.eelsdb(title="Hexagonal Boron Nitride",
                        spectrum_type="lowloss")[0]

s.set_microscope_parameters(beam_energy=100,
                            convergence_angle=0.2,
                            collection_angle=2.55)

s.add_elements(('B', 'N'))
m = s.create_model(ll=ll)
m.enable_fine_structure()
m.smart_fit()
m.plot(plot_components=True)  # This plots all the components; how can I extract individual components here?
m.enable_adjust_position()

  2. I tried to extract the edges using the following code, but it's giving an error.

edges = ("N_K","B_K")
m.enable_edges(edges_list=edges)
hs.plot.plot_spectra([m[edge].intensity.as_signal("std") for edge in edges], legend=edges)


Could anyone please help me understand this?

Thanks

Figure_Hexagonal_Boron_Nitride_Signal.png
Thomas Aarholt
@thomasaarholt
Hi @Sumo1612. For starters, I suggest you take a look at the m.as_signal() method. That will give you the computed fit of the model in each spectrum in your dataset.
Sumo1612
@Sumo1612
@thomasaarholt I tried using as_signal(), but it only shows the full model plot, not the individual components. It seems like m.as_signal() doesn't include the individual components.
Figure_2.png
Eric Prestat
@ericpre
@Sumo1612: have a read of the docstring of as_signal; it has a component_list argument
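A short sketch of how that can be used; the component names are only the ones assumed to exist in the model above:

# Rebuild a signal from selected components only
bk_only = m.as_signal(component_list=[m['B_K']])
bk_only.plot()

# Or build one signal per component and plot them together with a legend
comps = [m.as_signal(component_list=[c]) for c in m]
hs.plot.plot_spectra(comps, legend=[c.name for c in m])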
Thomas Aarholt
@thomasaarholt
That's a reminder that it would be nice to have as_signal for each component as well. TODO.
Sumo1612
@Sumo1612
@ericpre and @thomasaarholt Thanks. It's working now :)
Thomas Aarholt
@thomasaarholt
Would it be possible (and fun/interesting) to use Azure Pipelines to track the execution speed of various functions, including import hyperspy.api and import hyperspy.api_nogui?
Tom Furnival
@tjof2
Sounds like it would be straightforward to have a timing script, but what would you do with the information? Where would you push it? Track any regressions?
Thomas Aarholt
@thomasaarholt
Yes, I was thinking that it would be interesting to track how long certain processes take and, if possible, flag anything that suddenly takes longer. If we do end up implementing lazy imports (or moving imports inside functions), we will probably accidentally forget about that in a future PR. I was also thinking it would be nice to track the timings of common large and small operations, like m.multifit and s.decomposition (with specified arguments), so that we are aware of any changes.
Say, just as an example (the PR is very nice), that something added in #2441 accidentally slowed down m.fit() or m.multifit() but the tests still pass; we might not notice that.
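As a rough illustration of what such a check could look like (only a sketch; the number of runs and what to time are placeholders, not an agreed benchmark suite):

import subprocess, sys, timeit

# Time the import in a fresh interpreter so module caching doesn't hide the cost
import_time = timeit.timeit(
    lambda: subprocess.run([sys.executable, "-c", "import hyperspy.api"], check=True),
    number=3,
) / 3
print(f"import hyperspy.api: {import_time:.2f} s")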
Nicolas Tappy
@LMSC-NTappy
Is there an issue on GitHub where the signal_range of fit models is discussed? I have come to think it would be useful to pass it as a bool array, akin to the signal masks used when performing PCA or spike removal.
Francisco de la Peña
@francisco-dlp
You can always open a new issue with a feature request. However, isn't set_signal_range what you need?
Nicolas Tappy
@LMSC-NTappy
Indeed, sorry for not going straight to the point ^^. I am using 2D models, where it is currently not implemented. So I started an implementation and now I am stuck with a dilemma: either I extend the signature of set_signal_range to set_signal_range(x11=None, x12=None, x21=None, x22=None), or I set the range with a bool array of the right shape: set_signal_range(mask=None). The former choice has the advantage of being consistent with the 1D implementation; the latter should be generalisable to ND, which is another advantage. So I was wondering whether this discussion had been raised for 1D, and how the community would feel about such a change of paradigm. Maybe there is a way to extend the function to set_signal_range(self, x1=None, x2=None, mask=None), where passing a mask would override x1 and x2 but preserve their use for backward compatibility?
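To make that last option concrete, a purely hypothetical sketch (a design illustration, not existing HyperSpy code; the helper name is made up):

def set_signal_range(self, x1=None, x2=None, mask=None):
    if mask is not None:
        # A boolean array of the signal shape selects the fitted channels
        # directly, overriding x1/x2, and generalises to N dimensions.
        self.channel_switches = mask
    else:
        # Fall back to the existing coordinate-based (1D-style) behaviour,
        # preserving backward compatibility.
        self._set_signal_range_from_coordinates(x1, x2)  # hypothetical helper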
muratyesibolati
@muratyesibolati
Capture.JPG
Hi, will you please help me add an offset to the phase image? I cannot figure out which function I should call. The pixel values should still be in the -pi to +pi range, but with a different initial value near the boundary. I am still very new to hyperspy and could not figure that out. Thank you!
Nicolas Tappy
@LMSC-NTappy
Not sure I understand what you want to do, but try this: (signal + 10).plot()
This will offset the whole image. Then you can assign the result to another variable for further processing.
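If the values also need to stay wrapped into the -pi to +pi range after the offset, a minimal numpy-based sketch (the offset value and variable names are only illustrative):

import numpy as np

offset = 0.5  # radians, illustrative value

shifted = signal.deepcopy()
# Add the offset, then wrap the result back into the [-pi, pi) interval
shifted.data = (shifted.data + offset + np.pi) % (2 * np.pi) - np.pi
shifted.plot()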
Francisco de la Peña
@francisco-dlp
@LMSC-NTappy, that's indeed an interesting question. I think it is best to discuss it in a GitHub issue so that we can easily keep track of the arguments for whatever decision we take.
Nicolas Tappy
@LMSC-NTappy
@francisco-dlp Alright then, I opened #2454 on GitHub.
adriente
@adriente
Hello,
I am trying to use decomposition with the MLPCA algorithm on an 80x80 (pixels) by 1980 (energy channels) EDX dataset. I let it run for more than 3 hours and it did not converge. My computer has an i7-8750H 2.2 GHz CPU and 32 GB of RAM. I know that MLPCA is computationally heavy, but this seems very long given the dataset size. Is that normal behaviour, or is there an issue here?
Tom Furnival
@tjof2
@adriente are you using the release installed from conda/pip (v1.5.2), or are you using the latest code from GitHub? If the latter, it should be much faster than that.
Hopefully v1.6.0 will be released today(?), @francisco-dlp; that will allow you to access the faster code.
Tom Furnival
@tjof2
That said, 6400x1980 is a big dataset. See #2352 for the code change. RAM shouldn't be an issue. I've got a 7-year-old i5 CPU, and I can get a random matrix of size 6400x1980 to converge in 140 seconds (= 6 iterations) using the new code on GitHub.
adriente
@adriente
@tjof2 Thanks for the quick reply. I am indeed using the 1.5.2. I'll check the code change and I'll try again with 1.6.0 then.
Tom Furnival
@tjof2
@adriente no problem! Before 1.6.0 is released, you can check out the development code as described here: https://hyperspy.readthedocs.io/en/latest/user_guide/install.html#install-development-version
Also note that import hyperspy.api as hs; hs.set_log_level("INFO") (called before you run the decomposition) will give you a sense of how fast each iteration runs. There's not a lot of logging, but there is some.
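Put together, that looks something like the following sketch (the filename and output dimension are illustrative, and the exact algorithm string depends on the HyperSpy version):

import hyperspy.api as hs

hs.set_log_level("INFO")  # show progress messages before decomposing

s = hs.load("edx_map.hspy")  # illustrative filename
# MLPCA needs the number of components to keep; 10 is only an example
s.decomposition(algorithm="MLPCA", output_dimension=10)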
adriente
@adriente
@tjof2 I didn't know about hs.set_log_level("INFO"), thanks a lot. From my perspective, it would be nice if this feature were easier to reach when using decomposition (maybe as an argument of the function?). Improving the quantity and content of the displayed info would also be a nice addition.
Tom Furnival
@tjof2
If you're on the latest code you can also pass print_info=True, which will output a little bit more. In general there's not a lot more you can print from decomposition; many of the algorithms don't have much to log. That said, some of them do, e.g. NMF takes a verbose argument. You can also pass your own sklearn estimators as the algorithm argument in the latest code; they sometimes take a verbose argument.
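For instance, a sketch of the sklearn route (the parameters are illustrative, and NMF requires non-negative data):

from sklearn.decomposition import NMF

# A configured sklearn estimator can be passed as the algorithm;
# verbose=1 makes NMF print its own per-iteration output.
estimator = NMF(n_components=10, verbose=1, max_iter=500)
s.decomposition(algorithm=estimator)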