I am performing data analysis on EDXS data. For the analysis I need some parameters, such as the sample thickness, the elements in the sample, etc. Depending on the microscope that was used (and the corresponding acquisition software), these parameters are not all filled in the metadata.
Is there a way to set the metadata parameters so that the existing values are not overwritten and only the empty ones are filled?
I know it is possible to do that for the elements using `s.add_elements(["Si"])`, but I couldn't find an equivalent function for, e.g., the microscope parameters.
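A minimal sketch of one way to do this, assuming nothing about your acquisition software: HyperSpy's metadata tree (`DictionaryTreeBrowser`) exposes `has_item` and `set_item`, so a small helper (`fill_missing_metadata` is a hypothetical name, not a HyperSpy function) can set a default only where the key is absent. The `FakeTree` stand-in below only mimics those two methods so the sketch runs without HyperSpy installed:

```python
def fill_missing_metadata(metadata, defaults):
    """Set each dotted-path key from `defaults` only if it is absent.

    `metadata` is expected to behave like HyperSpy's DictionaryTreeBrowser,
    which provides `has_item(path)` and `set_item(path, value)`.
    """
    for path, value in defaults.items():
        if not metadata.has_item(path):
            metadata.set_item(path, value)

# Minimal stand-in for DictionaryTreeBrowser so the sketch runs standalone.
class FakeTree:
    def __init__(self):
        self._d = {}
    def has_item(self, path):
        return path in self._d
    def set_item(self, path, value):
        self._d[path] = value
    def get_item(self, path):
        return self._d[path]

tree = FakeTree()
tree.set_item("Acquisition_instrument.TEM.beam_energy", 300.0)  # already present
fill_missing_metadata(tree, {
    "Acquisition_instrument.TEM.beam_energy": 200.0,            # must NOT overwrite
    "Acquisition_instrument.TEM.Detector.EDS.live_time": 30.0,  # filled in
})
print(tree.get_item("Acquisition_instrument.TEM.beam_energy"))  # 300.0
```

With a real signal you would call `fill_missing_metadata(s.metadata, defaults)` directly, since `DictionaryTreeBrowser` provides the same `has_item`/`set_item` interface.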
Hi everyone, I am trying to use SamFire for EELS model fitting. After reading the documentation and the source code a bit, I still have a few questions about how to set it up properly. I currently have the setup as:
```python
# Fit 5% of the pixels to estimate the starting values
shape = (s_eels.axes_manager.navigation_axes[0].size,
         s_eels.axes_manager.navigation_axes[1].size)
mask = np.random.choice([0, 1], size=shape, p=[0.05, 0.95])
m.multifit(mask=mask, optimizer='lm', bounded=True, iterpath='serpentine', kind='smart')

# Then start SamFire
samf = m.create_samfire(workers=2, ipyparallel=False)  # create samfire
samf.metadata.goodness_test.tolerance = 0.3            # set a sensible tolerance
samf.refresh_database()  # is this refreshing the strategy or the fitted pixels?
                         # It reads a bit contradictory between the documentation
                         # and the source code.
samf.start(optimizer='lm', loss_function='ls', bounded=True, iterpath='serpentine',
           kind='smart', optional_components=['Mn_L3', 'O_K', 'PowerLaw'])  # start fitting
```
The fitting results have the following issues:
Only the pixels already fitted with `m.multifit()` have sensible values; the others do not have a good fit. I also tried fitting some pixels with `smart_fit()`, which gives similar results. This can be verified with `m.red_chisq.plot()` (see attached).
The vacuum pixels yield growth for the power-law fit of the pre-edge range, due to the noise, and the edge components fail as well, as there should be none. Thus I made all the components optional, but this is not the solution. Is it possible to switch off the fitting for the vacuum? I guess one can use a mask.
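On the vacuum question: a navigation mask is indeed the usual approach. A sketch on synthetic data (the 20%-of-median threshold is my own heuristic, not a HyperSpy recipe) that builds a boolean mask from the integrated intensity per pixel:

```python
import numpy as np

# Synthetic stand-in for an EELS dataset: (nav_y, nav_x, energy) counts.
rng = np.random.default_rng(0)
data = rng.poisson(5.0, size=(8, 8, 100)).astype(float)
data[:2, :, :] *= 0.05               # first two rows mimic low-count vacuum

total = data.sum(axis=-1)            # integrated intensity per navigation pixel
threshold = 0.2 * np.median(total)   # heuristic cutoff: 20% of the median
vacuum_mask = total < threshold      # True where we do NOT want to fit

print(int(vacuum_mask.sum()))        # 16 pixels flagged as vacuum
```

If I read the docs correctly, `m.multifit(mask=vacuum_mask)` skips the positions where the mask is True, so the low-count vacuum pixels would simply not be fitted.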
One question about the elemental component intensity for mapping: I saw the discussion in #2562. Is it possible to have the absolute intensity, or to show the Hartree-Slater (H-S) cross-section under the given microscope conditions? I want to know their exact product to calculate the partial cross-section…
One final question about the fine-structure coefficients when using `m.enable_fine_structure()`: are those a combination of Gaussians? Can we access the Gaussian height, width, and centre? I currently couldn't find docs about the values in the fine-structure coefficients, but I see that their values are sometimes negative, and the plot indeed shows a corresponding negative Gaussian fitting the curve (which occurs even after forcing all edge components to be positive). Do the negative values make sense? If it is a combination of Gaussians, it would be really helpful to have access to their values (instead of building Gaussian models oneself), which could be used for computing white-line intensities, for example.
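On the white lines: independent of what the fine-structure coefficients represent internally, one workaround is to fit explicit Gaussians to the background-subtracted white-line region yourself. A generic SciPy sketch on synthetic data (all names, energies, and parameter values below are illustrative, not HyperSpy API):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, height, centre, sigma):
    return height * np.exp(-0.5 * ((x - centre) / sigma) ** 2)

def two_gaussians(x, h1, c1, s1, h2, c2, s2):
    # e.g. L3 and L2 white lines on a background-subtracted edge
    return gaussian(x, h1, c1, s1) + gaussian(x, h2, c2, s2)

# Synthetic background-subtracted spectrum with two white lines.
x = np.linspace(630.0, 670.0, 400)
y = two_gaussians(x, 10.0, 640.0, 1.5, 4.0, 651.0, 1.8)
y_noisy = y + np.random.default_rng(1).normal(0.0, 0.05, x.size)

p0 = [8.0, 641.0, 2.0, 3.0, 650.0, 2.0]  # rough initial guesses
popt, _ = curve_fit(two_gaussians, x, y_noisy, p0=p0)
h1, c1, s1, h2, c2, s2 = popt

# Integrated white-line intensity of the first peak: height * |sigma| * sqrt(2*pi)
area_l3 = h1 * abs(s1) * np.sqrt(2.0 * np.pi)
print(round(c1, 1), round(c2, 1))  # fitted centres, approximately 640.0 and 651.0
```

The integrated areas (`area_l3` here) are what you would feed into a white-line ratio, so this avoids depending on the internals of the fine-structure model.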
I am happy to give a minimal example if that would be helpful. Many thanks for your help!
```python
import hyperspy.api as hs
from hyperspy.axes import UniformDataAxis
import dask.array as da
from hyperspy.datasets.example_signals import EDS_SEM_Spectrum
from hyperspy._signals.eds_sem import LazyEDSSEMSpectrum
from hyperspy._signals.signal2d import LazySignal2D

s = EDS_SEM_Spectrum()
data = s.data
axis = UniformDataAxis(offset=-0.1, scale=0.01, size=1024, units="eV")
s2 = LazyEDSSEMSpectrum(data, axes=[axis])
s2.add_elements(s.metadata.Sample.elements)
s2.set_microscope_parameters(beam_energy=10.)
nav = LazySignal2D(da.random.random((2500, 1000)))
s = s2 * nav.T
print("Shape:", s.data.shape)  # (2500, 1000, 1024) - ~20 GB
s.save("lazy.hspy", compression=None, overwrite=True, chunks=(100, 1000, 1024))
```
`conda create --name testdask hyperspy` results in an error? I'm running mamba on my M1 Mac, and installing hyperspy is giving a really weird error today. Installing jupyter notebook works fine:
(No info after the UnsatisfiableError)
```
(base) ➜ ~ conda create --name testdask hyperspy
Collecting package metadata (current_repodata.json): done
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: |
Found conflicts! Looking for incompatible packages.
This can take several minutes.  Press CTRL-C to abort.
failed

UnsatisfiableError:
```
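One thing worth trying (an assumption about the cause, not a confirmed diagnosis): HyperSpy is packaged on conda-forge, and mixing the defaults channel with conda-forge dependencies is a common source of bare `UnsatisfiableError` messages, so pinning the channel may resolve it:

```shell
# Resolve HyperSpy's dependencies from conda-forge only, instead of
# letting the solver mix the defaults and conda-forge channels.
conda create --name testdask -c conda-forge hyperspy

# Or, with mamba's solver (often gives a more informative conflict report):
mamba create --name testdask -c conda-forge hyperspy
```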
`decomposition` function and use our estimator with the `algorithm=` keyword argument. The `fit` method takes some arguments, and they are not included in the parameters of the estimator object itself because they depend on the fitted data. (For example, but not limited to that, for graph regularization a 2D shape input is needed.) I thought that since `decomposition` takes `**kwargs`, they would be passed on to the `fit` function of the algorithm object, but it seems that I was wrong.
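For what it's worth, a sketch of the pattern where everything data-dependent is built inside `fit` rather than passed to the constructor; the estimator name and the SVD placeholder are purely illustrative. If I read the HyperSpy docs correctly, recent versions accept such a scikit-learn-style object directly, e.g. `s.decomposition(algorithm=est)`:

```python
import numpy as np

class GraphRegularizedEstimator:  # hypothetical name, illustrative only
    """Scikit-learn-style estimator: shape-dependent setup happens inside
    fit(), so nothing data-dependent is needed at construction time."""

    def __init__(self, n_components=3):
        self.n_components = n_components

    def fit(self, X):
        # X arrives as (n_samples, n_features); anything that depends on
        # the data shape (e.g. a graph Laplacian) would be built here.
        n_samples, n_features = X.shape
        # Placeholder "decomposition": truncated SVD via NumPy.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        self.components_ = Vt[: self.n_components]
        self._scores = U[:, : self.n_components] * s[: self.n_components]
        return self

    def fit_transform(self, X):
        return self.fit(X)._scores

est = GraphRegularizedEstimator(n_components=2)
scores = est.fit_transform(np.random.default_rng(0).normal(size=(50, 20)))
print(scores.shape)  # (50, 2)
```

Deriving the shape-dependent pieces inside `fit` sidesteps the question of whether `decomposition`'s `**kwargs` reach the algorithm's `fit` at all.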
`si.decomposition(algorithm='ORPCA')` to return the variance? I ran standard SVD, then ORPCA, and am receiving the following:
```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-99-659ba5d350c0> in <module>
----> 1 si.plot_explained_variance_ratio(threshold=4, xaxis_type='number')

c:\users\owner\documents\github\hyperspy\hyperspy\learn\mva.py in plot_explained_variance_ratio(self, n, log, threshold, hline, vline, xaxis_type, xaxis_labeling, signal_fmt, noise_fmt, fig, ax, **kwargs)
   1427
   1428         """
-> 1429         s = self.get_explained_variance_ratio()
   1430
   1431         n_max = len(self.learning_results.explained_variance_ratio)

c:\users\owner\documents\github\hyperspy\hyperspy\learn\mva.py in get_explained_variance_ratio(self)
   1311         target = self.learning_results
   1312         if target.explained_variance_ratio is None:
-> 1313             raise AttributeError(
   1314                 "The explained_variance_ratio attribute is "
   1315                 "`None`, did you forget to perform a PCA "

AttributeError: The explained_variance_ratio attribute is `None`, did you forget to perform a PCA decomposition?
```
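As the traceback says, `learning_results.explained_variance_ratio` is only populated by algorithms that actually compute it (SVD/PCA); as far as I can tell, ORPCA does not. If you need variance-like numbers anyway, you can compute the explained-variance ratio yourself from the singular values of the mean-centred data. A self-contained NumPy sketch of that computation:

```python
import numpy as np

# Explained-variance ratio as SVD/PCA computes it: squared singular
# values of the mean-centred data, normalised to sum to 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))          # stand-in for the unfolded signal
Xc = X - X.mean(axis=0)                 # centre each feature
s = np.linalg.svd(Xc, compute_uv=False) # singular values, descending
explained_variance_ratio = s**2 / np.sum(s**2)
print(explained_variance_ratio.sum())   # 1.0 up to float rounding
```

Plotting `explained_variance_ratio` on a log scale then gives you the same scree-plot picture that `plot_explained_variance_ratio` produces after an SVD decomposition.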