Mingquan Xu
@Mingquan_Xu_twitter
Hi all, is there any package that can do local low-rank denoising for an EELS spectrum image?
Thomas Aarholt
@thomasaarholt
What is low-rank denoising? I use PCA and ICA a lot for EELS in hyperspy.
I see. I hadn't heard about it before.
Mingquan Xu
@Mingquan_Xu_twitter
Hi @thomasaarholt, yes, I also know this from that article, but I do not know which software can do this process.
I have tried PCA and NMF in hyperspy on my data (SI), but the results are not so good.
Thomas Aarholt
@thomasaarholt
Have you thought about what might be causing your data's results to be "not so good"? What sort of EELS is it?
rtangpy
@rtangpy
Hi all, I am doing model fitting with 2D navigation and 1D signal. I have two questions: 1. Since I need to fit more than 1 million pixels, using multifit takes me 50 mins. Is there any method that can speed up the code? 2. Before I learned how to use hyperspy, I used multiprocessing with 8 cores to run optimize.minimize to speed up the code. Interestingly, hyperspy's multifit, which only uses one core, is even slightly faster than my code with multiprocessing (8 cores). I am curious how hyperspy reaches such high speed.
13 replies
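
Part of the answer to the second question may be that multifit() keeps the fitted parameter values between pixels, so each pixel starts from its neighbour's solution and usually converges in very few iterations. A minimal sketch that leans on that behaviour, assuming an existing model m; fit(), multifit(), iterpath='serpentine' and the 'lm' optimizer are all standard HyperSpy:

m.axes_manager.indices = (0, 0)  # move to the first navigation position
m.fit(optimizer='lm')            # one careful fit to seed the parameters
m.multifit(optimizer='lm', iterpath='serpentine')  # visit neighbouring pixels consecutively
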
adriente
@adriente

I am performing data analysis on EDXS data. For the analysis I need some parameters, such as sample thickness, elements in the sample, etc. Depending on the microscope that was used (and the corresponding acquisition software), these parameters are not all filled in the metadata.

Is there a way to set the metadata parameters so that the previous values are not overwritten and only the empty ones are filled?
I know it is possible to do that for elements using s.add_elements(["Si"]), but I couldn't find an equivalent function for the microscope parameters, for example.

2 replies
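
There may be no single built-in for this, but a minimal sketch using the real DictionaryTreeBrowser methods has_item() and set_item() would fill only the missing entries; the default values and the Sample.thickness node below are made-up examples:

defaults = {
    "Acquisition_instrument.TEM.beam_energy": 200.0,  # keV, assumed default
    "Sample.thickness": 50.0,                         # hypothetical node
}
for item, value in defaults.items():
    if not s.metadata.has_item(item):  # only fill what is empty
        s.metadata.set_item(item, value)
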
Eric Prestat
@ericpre
image.png
2 replies
@adriente, is it not what you need?
Zezhong Zhang
@zezhong-zhang
samfire_red_chis.png

Hi everyone, I am trying to use SamFire for EELS model fitting. After reading the documentation and the source code a bit, I still have a few questions about how to set it up properly. I currently have the following setup:

import numpy as np

# fit a random ~5% of the pixels to estimate the starting values
shape = (s_eels.axes_manager.navigation_axes[1].size,
         s_eels.axes_manager.navigation_axes[0].size)
mask = np.random.choice([0, 1], size=shape, p=[0.05, 0.95])
m.multifit(mask=mask, optimizer='lm', bounded=True, iterpath='serpentine', kind='smart')

# then start SamFire
samf = m.create_samfire(workers=2, ipyparallel=False)  # create samfire
samf.metadata.goodness_test.tolerance = 0.3  # set a sensible tolerance
# does this refresh the strategy or the fitted pixels? It reads a bit
# contradictory between the documentation and the source code
samf.refresh_database()
samf.start(optimizer='lm', loss_function='ls', bounded=True, iterpath='serpentine',
           kind='smart', optional_components=['Mn_L3', 'O_K', 'PowerLaw'])  # start fitting

The fitting results have the following issues:

  1. Only the pixels already fitted with m.multifit() have sensible values; the others do not have a good fit. I also tried fitting some pixels with smart_fit(), which gives similar results. This can be verified with m.red_chisq.plot() (see attached).

  2. The vacuum pixels yield growth in the power-law fit of the pre-edge range due to the noise, and the edge components fail as well, as there should be none there. I have therefore made all the components optional, but this is not the solution. Is it possible to switch off the fitting for the vacuum? I guess one can use a mask.

  3. One question about the elemental component intensity for mapping: I saw the discussion in #2562. Is it possible to have the absolute intensity, or to show the H-S cross-section under the given microscope conditions? I want to know their exact product to calculate the partial cross-section…

  4. One final question about the fine structure coefficients when m.enable_fine_structure() is used: are those a combination of Gaussians? Can we access the Gaussian height, width and centre? I currently couldn't find docs about the values in fine_structure_coefficient, but I see that their values are sometimes negative, and the plot indeed shows a corresponding negative Gaussian fitting the curve (which occurs even after forcing all edge components to be positive). Do the negative values make sense? If it is a Gaussian combination, it would be really helpful to have access to their values (instead of building Gaussian models oneself), which could be used to compute white lines, for example.

I am happy to give a minimal example if that could be helpful. Many thanks for your help!
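
For question 2, a rough sketch of building a vacuum mask from the total intensity; the 5% threshold is an arbitrary assumption, and (if I read the docs correctly) mask entries that are True are skipped by multifit():

import numpy as np

total = s_eels.sum(-1).data               # total counts per navigation pixel
vacuum_mask = total < 0.05 * total.max()  # hypothetical threshold for "vacuum"
m.multifit(mask=vacuum_mask, optimizer='lm', bounded=True)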

Thomas Aarholt
@thomasaarholt
@zezhong-zhang I'm happy you're using SAMFIRE! I too am unsure on how exactly to set it up. It will be good to get a working example.
I really like your approach for creating a mask, using random.choice!
  1. The vacuum pixels should indeed be masked in the way you describe. I'm not sure how masking works with samfire!
  2. I suggest you add a comment to that post, and perhaps delve into the source code and see if you can help shed light.
Zezhong Zhang
@zezhong-zhang
@thomasaarholt Thanks for the comments! Sure, I will add the request for absolute intensity (the product) to the post, and dive a bit deeper into the source code.
Thomas Aarholt
@thomasaarholt
Brilliant! Let us know how you get on - I'm a bit busy with other things, but can at least comment on it
Mingquan Xu
@Mingquan_Xu_twitter
When I use ‘align_zero_loss_peak’ to align ZLP in my SI dataset, there is an error:
image.png
I used this function before, but this is the first time I see this warning. What would cause this?
image.png
Could anyone give me any suggestions to solve this? Thanks in advance!
Eric Prestat
@ericpre
is your hyperspy up to date?
Mingquan Xu
@Mingquan_Xu_twitter

is your hyperspy up to date?

the version is 1.6.2

Eric Prestat
@ericpre
which means that it is not up to date; the latest is 1.6.4, and this issue has been fixed in 1.6.3
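
A quick way to check the installed version from Python (1.6.3 or later should include the fix):

import hyperspy
print(hyperspy.__version__)
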
Mingquan Xu
@Mingquan_Xu_twitter

which means that it is not up to date, latest is 1.6.4 and this issue has been fixed in 1.6.3

Thanks very much for your reply. I will update my HyperSpy and have a check.

Thomas Aarholt
@thomasaarholt
What is a good way to save artificial lazy signals that are larger than memory? I notice that my RAM consumption shoots up when I try saving a dask-created signal, even if I specify the chunks.
import hyperspy.api as hs
from hyperspy.axes import UniformDataAxis
import dask.array as da

from hyperspy.datasets.example_signals import EDS_SEM_Spectrum
from hyperspy._signals.eds_sem import LazyEDSSEMSpectrum
from hyperspy._signals.signal2d import LazySignal2D

s = EDS_SEM_Spectrum()
data = s.data
axis = UniformDataAxis(offset=-0.1, scale=0.01, size=1024, units="eV")

s2 = LazyEDSSEMSpectrum(data, axes=[axis])
s2.add_elements(s.metadata.Sample.elements)
s2.set_microscope_parameters(beam_energy=10.)

nav = LazySignal2D(da.random.random((2500, 1000)))
s = s2 * nav.T

print("Shape:", s.data.shape)  # (2500, 1000, 1024) -> ~20 GB
s.save("lazy.hspy", compression=None, overwrite=True, chunks=(100, 1000, 1024))
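
A possible mitigation, as an untested sketch: make the dask chunks match the chunks requested at save time, so that writing any one HDF5 chunk never has to assemble a large contiguous block in memory. rechunk() here is plain dask.array API:

s.data = s.data.rechunk((100, 1000, 1024))
s.save("lazy.hspy", compression=None, overwrite=True, chunks=(100, 1000, 1024))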
Håkon Wiik Ånes
@hakonanes
Could this be a problem with dask 2021.04.0 and related to https://github.com/dask/dask/issues/7583#issue-863708913? We've pinned dask to below this version in kikuchipy because of sudden memory issues after 2021.04.0.
12 replies
Thomas Aarholt
@thomasaarholt
Possibly! I'll try with an older dask and see!
Hmm, could someone please check if conda create --name testdask hyperspy results in an error? I'm running mamba on my M1 Mac, and installing hyperspy is giving a really weird error today. Installing jupyter notebook works fine:
(base) ➜  ~ conda create --name testdask hyperspy
Collecting package metadata (current_repodata.json): done
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: |
Found conflicts! Looking for incompatible packages.
This can take several minutes.  Press CTRL-C to abort.
failed

UnsatisfiableError:
(No info after the UnsatisfiableError)
15 replies
Thomas Aarholt
@thomasaarholt
I've just gone over the linear fitting PR #2422 once again, and I'm considering squashing most commits so that there is one commit per "category" (Model, Components, Tests and Docs), and then force-pushing to the PR branch.
1 reply
I'm wondering if that makes it easier to review, or if it just complicates matters. I've taken all the comments that were in #2422 into consideration, so I think squashing would be clearer and shouldn't introduce confusion.
I've tested it, so the branch will look like this: https://github.com/thomasaarholt/hyperspy/commits/linear_fit_squash_into_files
Ghost
@ghost~610bd8c66da0373984828cc9
Hi, not sure if this is the correct place to ask, but I am new to hyperspy and was wondering if there is a way of selectively removing element contributions from an EDS_TEM spectrum?
1 reply
DENSmerijn
@DENSmerijn
Hi, I noticed that the existing integration for DENSsolutions log files is outdated. I started work on an io_plugin to load log files from our Impulse software, which are in csv format. However, there is already another io_plugin that has the file_extension "csv". If I simply add the new one, the existing one stops working. Is there a proper way in hyperspy to deal with this issue? Thanks!
6 replies
Thomas Aarholt
@thomasaarholt
Is there any way to distinguish the Impulse csv from others? In my book, a csv file should really only be delimited by commas and newlines. How is the Impulse one different?
1 reply
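
One way this could work, as a purely illustrative sketch: have the Impulse reader peek at the first line and decline files it does not recognise. "TimeStamp" standing in for an Impulse-specific column name is an assumption, not the real header, and whether hyperspy falls back to another reader on failure would need verifying:

def looks_like_impulse(filename):
    # sniff the csv header before claiming the file
    with open(filename, "r", errors="replace") as f:
        header = f.readline()
    return "TimeStamp" in header  # hypothetical Impulse column name

def file_reader(filename, **kwargs):  # io_plugin entry point
    if not looks_like_impulse(filename):
        raise IOError("Not an Impulse log; let another csv reader handle it.")
    ...  # parse the Impulse csv here
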
Crystal4276
@Crystal4276
Hello,
I'm interested in HyperSpy for processing my STEM-EELS data, among other things.
I wanted to give HyperSpyUI a try first, but I cannot start the program.
I did the installation with conda and Python 3.7.
The log is attached:
https://pastebin.com/cG0t6trw
Apparently there is an issue with scipy.
Any idea what I could do to get it started?
Many thanks
Crystal4276
@Crystal4276
Follow-up.
Switching to Python 3.8 fixed the issue.
Maybe an indication of which Python version is needed would be helpful for others here:
https://hyperspy.org/hyperspyUI/installation.html
Thomas Aarholt
@thomasaarholt
Well done sorting it, and thanks for the good terminal output!
Do you know which aspect of 3.8 affected Scipy?
Eric Prestat
@ericpre
hyperspyui does support Python 3.7 and is known to work. The error message you provided says that the error arises when importing hyperspy, and it seems that there was something wrong with your install; however, there is nothing obvious in the error traceback to get an idea of where the error could come from. If it is working fine now, I would not worry too much about it! :)
Crystal4276
@Crystal4276

Do you know which aspect of 3.8 affected Scipy?

Sorry Thomas i have no clue

adriente
@adriente
Hello,
My collaborators and I are developing a machine learning algorithm based on the scikit-learn framework. We would like our algorithm to be as easy to use as possible through hyperspy. If I understood correctly, the way to go is to use the decomposition function and pass our estimator via the algorithm= keyword argument.
The problem we face is that our fit method takes some arguments which are not included in the parameters of the estimator object itself, because they depend on the fitted data. (For example, but not limited to that, graph regularization needs a 2D shape as input.)
I thought that since decomposition takes **kwargs they would be passed to the fit function of the algorithm object, but it seems I was wrong.
Is there any solution? Am I missing something?
adriente
@adriente
Looking into the code of hyperspy, the **kwargs are not passed to the fit (or fit_transform) function of the custom algorithm. Shall I submit this as an issue on GitHub?
8 replies
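
Until that is resolved, a possible workaround sketch, assuming decomposition() only calls fit()/fit_transform() with the data: bind the data-dependent arguments to the estimator instance up front, so no extra kwargs are needed at fit time. CustomNMF and shape_2d are hypothetical stand-ins for the actual algorithm:

class CustomNMF:
    # stand-in for the real scikit-learn-style estimator, whose
    # fit_transform needs a data-dependent argument
    def fit_transform(self, X, y=None, shape_2d=None):
        ...

class BoundCustomNMF:
    def __init__(self, shape_2d, **params):
        self.shape_2d = shape_2d
        self.inner = CustomNMF(**params)

    def fit_transform(self, X, y=None):
        # forward the bound argument, so hyperspy never needs to pass it
        return self.inner.fit_transform(X, y, shape_2d=self.shape_2d)

estimator = BoundCustomNMF(shape_2d=s.axes_manager.navigation_shape)
s.decomposition(algorithm=estimator)
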
Jonas Lähnemann
@jlaehne
LumiSpy v0.1.2 (https://lumispy.org) is out on PyPI and conda-forge now (and on the AUR for Arch users) for those who work with HyperSpy for e.g. CL and PL data. So far, we have a limited number of additional functionalities besides the provision of dedicated signal classes. Conversion to energy or wavenumber axes is already included, and the non-uniform axes support has matured far enough to be included in the RELEASE_next_minor branch of HyperSpy. We're happy for any user feedback, but also for the help of people who want to contribute routines to the project.
As a side note - as I was already at it - I also submitted PKGBUILD files for pyxem and kikuchipy (and missing dependencies) to the AUR, in case there is any Arch user among you who wants to test installing from there.
1 reply
Felix Utama Kosasih
@fukosasih_twitter
Hello all, I have a question about the 'ranking' of NMF components in HyperSpy. If I perform NMF on a dataset with n as the desired number of output components, how does HyperSpy decide which component is #0, #1, ..., #(n-1)? Is it also ranked by each component's proportion of the original data's variance, à la PCA?
3 replies
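
Not an authoritative answer, but one way to inspect the ordering yourself is to compute each component's overall contribution from the factors and loadings (get_decomposition_factors() and get_decomposition_loadings() are real HyperSpy methods) and compare it with the stored order:

import numpy as np

factors = s.get_decomposition_factors().data    # (n_components, signal...)
loadings = s.get_decomposition_loadings().data  # (n_components, nav...)
size = [np.linalg.norm(factors[i]) * np.linalg.norm(loadings[i])
        for i in range(factors.shape[0])]
print(np.argsort(size)[::-1])  # components sorted by decreasing contribution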