hyperspy.learn.mva module
- class hyperspy.learn.mva.LearningResults
Bases: object
Stores the parameters and results from a decomposition.
- crop_decomposition_dimension(n, compute=False)
Crop the score matrix up to the given number of components.
It is mainly useful to save memory and reduce the storage size.
- load(filename)
Load the results of a previous decomposition and demixing analysis.
- Parameters:
filename (string) – Path to load the results from.
- save(filename, overwrite=None)
Save the result of the decomposition and demixing analysis.
- Parameters:
filename (string) – Path to save the results to.
overwrite ({True, False, None}, default None) – If True, overwrite the file if it exists. If None (default), prompt user if file exists.
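Example
A minimal sketch of saving and re-loading learning results via the signal's learning_results attribute; the synthetic signal and the file name are illustrative assumptions, not part of the API.
>>> import numpy as np
>>> import hyperspy.api as hs
>>> s = hs.signals.Signal1D(np.random.random((10, 10, 100)))  # synthetic data
>>> s.decomposition()                                          # populate learning_results
>>> s.learning_results.save("my_decomposition.npz", overwrite=True)
>>> s2 = hs.signals.Signal1D(np.random.random((10, 10, 100)))
>>> s2.learning_results.load("my_decomposition.npz")           # restore factors, loadings, etc.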
- class hyperspy.learn.mva.MVA
Bases: object
Multivariate analysis capabilities for the Signal1D class.
- _calculate_recmatrix(components=None, mva_type='decomposition')
Rebuilds data from selected components.
- Parameters:
components (None, int, or list of ints) –
If None, rebuilds the signal instance from all components.
If int, rebuilds the signal instance from components 0 to the given int.
If a list of ints, rebuilds the signal instance from only the components in the given list.
mva_type (str {'decomposition', 'bss'}) – Decomposition type (not case sensitive)
- Returns:
Data built from the given components.
- Return type:
Signal instance
- _cluster_analysis(scaled_data, algorithm)
Cluster analysis of scaled data (internal).
- Parameters:
n_clusters (int) – Number of clusters to find.
scaled_data (numpy array of shape (number_of_samples, number_of_features)) –
algorithm (scikit learn clustering object) –
**kwargs – Additional parameters passed to the clustering algorithm. This may include n_init, the number of times the algorithm is restarted to optimize results.
- Returns:
The fitted scikit-learn clustering object.
- Return type:
sklearn.cluster object
- _distances_within_cluster(cluster_data, memberships, squared=True, summed=False)
Return within-cluster distances.
- Parameters:
cluster_data (ndarray) – scaled cluster data
memberships (ndarray) – cluster labels
squared (bool, optional) – If True, use squared distances. The default is True.
summed (bool, optional) – If False returns array showing sum of distance from a given point to all other points in the cluster. If True returns a sum of all distances within a cluster. The results are scaled by 2*number of cluster points. The default is False.
- Returns:
result – List of distances within each cluster.
- Return type:
- _get_cluster_algorithm(algorithm, **kwargs)
Convenience method that looks up the clustering algorithm if algorithm is a string, instantiating it with n_clusters, or, if algorithm is an object, checks that it has a fit method.
- _get_cluster_preprocessing_algorithm(algorithm, **kwargs)
Convenience method that looks up the preprocessing method if algorithm is a string, or, if algorithm is an object, checks that it has a fit_transform method.
- _get_cluster_signal(cluster_source, number_of_components=None, navigation_mask=None, signal_mask=None)
A cluster source can be an external signal, the signal data, or the decomposition or BSS results. Returns a flattened version of the data along with the navigation and signal masks.
- Parameters:
cluster_source (str or BaseSignal) – “decomposition”, “bss”, “signal” or a Signal
number_of_components (int, optional) – Number of components to use with decomposition sources. The default is None.
navigation_mask (ndarray, optional) – mask used to select regions of the cluster_source to use. The default is None.
signal_mask (ndarray, optional) – mask used to select regions of the cluster_source signal. For decomposition or bss this is not used. The default is None.
reproject (bool, optional) – If False and the cluster_source is decomposition or bss, the loadings are returned. If True, the factors @ loadings product is used. The default is False.
- Returns:
toreturn – Unfolded dataset from the selected cluster_source.
- Return type:
ndarray
- _get_number_of_components_for_clustering()
Returns the number of components to use for clustering.
- _scale_data_for_clustering(cluster_signal, preprocessing='norm', preprocessing_kwargs={})
Scale data for cluster analysis
Results are stored in learning_results.
- Parameters:
cluster_signal ({"bss", "decomposition", "signal", Signal}) – If “bss” the blind source separation results are used If “decomposition” the decomposition results are used if “signal” the signal data is used (signal should be unfolded)
preprocessing ({"standard","norm","minmax",None or scikit learn preprocessing method}) – default: ‘norm’ Preprocessing the data before cluster analysis requires preprocessing the data to be clustered to similar scales. Standard preprocessing adjusts each feature to have uniform variation. Norm preprocessing adjusts treats the set of features like a vector and each measurement is scaled to length 1. You can also pass a cikit-learn preprocessing object See scaling methods in scikit-learn preprocessing for further details.
preprocessing_kwargs – Additional parameters passed to the cluster preprocessing algorithm. See sklearn.preprocessing preprocessing methods for further details
See also
cluster_analysis()
get_cluster_labels()
get_cluster_signals()
plot_cluster_results()
plot_cluster_signals()
plot_cluster_labels()
- Returns:
scaled_data – unfolded numpy array of shape (number_of_samples, number_of_features), scaled according to the selected preprocessing method
- blind_source_separation(number_of_components=None, algorithm='sklearn_fastica', diff_order=1, diff_axes=None, factors=None, comp_list=None, mask=None, on_loadings=False, reverse_component_criterion='factors', whiten_method='PCA', return_info=False, print_info=True, **kwargs)
Apply blind source separation (BSS) to the result of a decomposition.
The results are stored in self.learning_results. Read more in the User Guide.
- Parameters:
number_of_components (int or None) – Number of principal components to pass to the BSS algorithm. If None, you must specify the comp_list argument.
algorithm ({"sklearn_fastica", "orthomax", "FastICA", "JADE", "CuBICA", "TDSEP", custom object}, default "sklearn_fastica") – The BSS algorithm to use. If algorithm is an object, it must implement a fit_transform() method or fit() and transform() methods, in the same manner as a scikit-learn estimator.
diff_order (int, default 1) – Sometimes it is convenient to perform the BSS on the derivative of the signal. If diff_order is 0, the signal is not differentiated.
diff_axes (None, or list of ints or strings) –
If None and on_loadings is False, when diff_order is greater than 1 and signal_dimension is greater than 1, the differences are calculated across all signal axes.
If None and on_loadings is True, when diff_order is greater than 1 and navigation_dimension is greater than 1, the differences are calculated across all navigation axes.
Otherwise the axes can be specified in a list.
factors (BaseSignal or numpy array) – Factors to decompose. If None, the BSS is performed on the factors of a previous decomposition. If a Signal instance, the navigation dimension must be 1 and the size greater than 1.
comp_list (None, or list or numpy array) – Choose the components to apply BSS to. Unlike number_of_components, this argument permits non-contiguous components.
mask (BaseSignal or subclass) – If not None, the signal locations marked as True are masked. The mask shape must be equal to the signal shape (navigation shape) when on_loadings is False (True).
on_loadings (bool, default False) – If True, perform the BSS on the loadings of a previous decomposition; otherwise, perform the BSS on the factors.
reverse_component_criterion ({"factors", "loadings"}, default "factors") – Use either the factors or the loadings to determine if the component needs to be reversed.
whiten_method ({"PCA", "ZCA", None}, default "PCA") – How to whiten the data prior to blind source separation. If None, no whitening is applied. See whiten_data() for more details.
return_info (bool, default False) – The result of the decomposition is stored internally. However, some algorithms generate extra information that is not stored. If True, return any extra information if available. In the case of sklearn.decomposition objects, this includes the sklearn Estimator object.
print_info (bool, default True) – If True, print information about the decomposition being performed. In the case of sklearn.decomposition objects, this includes the values of all arguments of the chosen sklearn algorithm.
**kwargs (extra keyword arguments) – Any keyword arguments are passed to the BSS algorithm.
- Returns:
return_info –
If True and ‘algorithm’ is an sklearn Estimator, returns the Estimator object.
Otherwise, returns None
- Return type:
sklearn.Estimator or None
See also
plot_bss_factors()
plot_bss_loadings()
plot_bss_results()
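Example
For orientation, a hedged sketch of a decomposition followed by BSS on the first few components; the synthetic signal and the choice of three components are assumptions for illustration only.
>>> import numpy as np
>>> import hyperspy.api as hs
>>> s = hs.signals.Signal1D(np.random.random((32, 32, 256)))  # synthetic data
>>> s.decomposition(algorithm="SVD")                 # a decomposition must come first
>>> s.blind_source_separation(number_of_components=3)  # unmix the first 3 components
>>> s.plot_bss_results()                             # inspect factors and loadings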
- cluster_analysis(cluster_source, source_for_centers=None, preprocessing=None, preprocessing_kwargs={}, number_of_components=None, navigation_mask=None, signal_mask=None, algorithm=None, return_info=False, **kwargs)
Cluster analysis of a signal or of the decomposition results of a signal. Results are stored in learning_results.
- Parameters:
cluster_source ({"bss", "decomposition", "signal", BaseSignal}) – If “bss” the blind source separation results are used If “decomposition” the decomposition results are used if “signal” the signal data is used Note that using the signal or BaseSignal can be memory intensive and is only recommended if the Signal dimension is small BaseSignal must have the same navigation dimensions as the signal.
source_for_centers ({None,"decomposition","bss","signal",BaseSignal},) – default : None If None the cluster_source is used If “bss” the blind source separation results are used If “decomposition” the decomposition results are used if “signal” the signal data is used BaseSignal must have the same navigation dimensions as the signal.
preprocessing ({"standard","norm","minmax",None or scikit learn preprocessing method}) – default: ‘norm’ Preprocessing the data before cluster analysis requires preprocessing the data to be clustered to similar scales. Standard preprocessing adjusts each feature to have uniform variation. Norm preprocessing adjusts treats the set of features like a vector and each measurement is scaled to length 1. You can also pass one of the scikit-learn preprocessing scale_method = import sklearn.processing.StandadScaler() preprocessing = scale_method See preprocessing methods in scikit-learn preprocessing for further details.
preprocessing_kwargs (dict) – Additional parameters passed to the supported sklearn preprocessing methods. See sklearn.preprocessing scaling methods for further details
number_of_components (int, default None) – If you are getting the cluster centers using the decomposition results (cluster_source_for_centers=”decomposition”) you can define how many components to use. If set to None the method uses the estimate of significant components found in the decomposition step using the elbow method and stored in the
learning_results.number_significant_components
attribute. This applies to both bss and decomposition results.navigation_mask (boolean numpy array) – The navigation locations marked as True are not used.
signal_mask (boolean numpy array) – The signal locations marked as True are not used in the clustering for “signal” or Signals supplied as cluster source. This is not applied to decomposition results or source_for_centers (as it may be a different shape to the cluster source)
algorithm ({ "kmeans" | "agglomerative" | "minibatchkmeans" | "spectralclustering"}) – See scikit-learn documentation. Default “kmeans”
return_info (bool, default False) – The result of the cluster analysis is stored internally. However, the cluster class used contain a number of attributes. If True (the default is False) return the cluster object so the attributes can be accessed.
**kwargs (dict optional, default - empty) – Additional parameters passed to the clustering class for initialization. For example, in case of the “kmeans” algorithm, n_init can be used to define the number of times the algorithm is restarted to optimize results.
n_clusters (int) – Number of clusters to find using the one of the pre-defined methods “kmeans”,”agglomerative”,”minibatchkmeans”,”spectralclustering” See sklearn.cluster for details
See also
get_cluster_labels()
get_cluster_signals()
get_cluster_distances()
plot_cluster_results()
plot_cluster_signals()
plot_cluster_labels()
- Returns:
If 'return_info' is True, returns the scikit-learn cluster object used for clustering. Useful if you wish to examine inertia or other outputs.
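Example
A hedged sketch of clustering the decomposition results with k-means; the number of clusters, components, and the synthetic signal are illustrative assumptions.
>>> import numpy as np
>>> import hyperspy.api as hs
>>> s = hs.signals.Signal1D(np.random.random((20, 20, 128)))  # synthetic data
>>> s.decomposition()
>>> s.cluster_analysis("decomposition", n_clusters=3, algorithm="kmeans",
...                    preprocessing="norm", number_of_components=5)
>>> labels = s.get_cluster_labels()    # cluster membership maps
>>> centers = s.get_cluster_signals()  # representative signal of each cluster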
- decomposition(normalize_poissonian_noise=False, algorithm='SVD', output_dimension=None, centre=None, auto_transpose=True, navigation_mask=None, signal_mask=None, var_array=None, var_func=None, reproject=None, return_info=False, print_info=True, svd_solver='auto', copy=True, **kwargs)
Apply a decomposition to a dataset with a choice of algorithms.
The results are stored in self.learning_results. Read more in the User Guide.
- Parameters:
normalize_poissonian_noise (bool, default False) – If True, scale the signal to normalize Poissonian noise using the approach described in [Keenan2004].
algorithm ({"SVD", "MLPCA", "sklearn_pca", "NMF", "sparse_pca", "mini_batch_sparse_pca", "RPCA", "ORPCA", "ORNMF", custom object}, default "SVD") – The decomposition algorithm to use. If algorithm is an object, it must implement a
fit_transform()
method orfit()
andtransform()
methods, in the same manner as a scikit-learn estimator. For cupy arrays, only “SVD” is supported.output_dimension (None or int) – Number of components to keep/calculate. Default is None, i.e.
min(data.shape)
.centre ({None, "navigation", "signal"}, default None) –
If None, the data is not centered prior to decomposition.
If “navigation”, the data is centered along the navigation axis. Only used by the “SVD” algorithm.
If “signal”, the data is centered along the signal axis. Only used by the “SVD” algorithm.
auto_transpose (bool, default True) – If True, automatically transposes the data to boost performance. Only used by the “SVD” algorithm.
navigation_mask (boolean numpy array or BaseSignal) – The navigation locations marked as True are not used in the decomposition.
signal_mask (boolean numpy array or BaseSignal) – The signal locations marked as True are not used in the decomposition.
var_array (numpy array) – Array of variance for the maximum likelihood PCA algorithm. Only used by the “MLPCA” algorithm.
var_func (None or function or numpy array, default None) –
If None, ignored
If function, applies the function to the data to obtain
var_array
. Only used by the “MLPCA” algorithm.If numpy array, creates
var_array
by applying a polynomial function defined by the array of coefficients to the data. Only used by the “MLPCA” algorithm.
reproject ({None, "signal", "navigation", "both"}, default None) – If not None, the results of the decomposition will be projected in the selected masked area.
return_info (bool, default False) – The result of the decomposition is stored internally. However, some algorithms generate some extra information that is not stored. If True, return any extra information if available. In the case of sklearn.decomposition objects, this includes the sklearn Estimator object.
print_info (bool, default True) – If True, print information about the decomposition being performed. In the case of sklearn.decomposition objects, this includes the values of all arguments of the chosen sklearn algorithm.
svd_solver ({"auto", "full", "arpack", "randomized"}, default "auto") –
- If auto:
The solver is selected by a default policy based on data.shape and output_dimension: if the input data is larger than 500x500 and the number of components to extract is lower than 80% of the smallest dimension of the data, then the more efficient “randomized” method is enabled. Otherwise the exact full SVD is computed and optionally truncated afterwards.
- If full:
run exact SVD, calling the standard LAPACK solver via
scipy.linalg.svd()
, and select the components by postprocessing- If arpack:
use truncated SVD, calling ARPACK solver via
scipy.sparse.linalg.svds()
. It requires strictly 0 < output_dimension < min(data.shape)- If randomized:
use truncated SVD, calling
sklearn.utils.extmath.randomized_svd()
to estimate a limited number of componentsFor cupy arrays, only “full” is supported.
copy (bool, default True) –
If True, stores a copy of the data before any pre-treatments, such as normalization, in s._data_before_treatments. The original data can then be restored by calling s.undo_treatments().
If False, no copy is made. This can be beneficial for memory usage, but care must be taken since data will be overwritten.
**kwargs (extra keyword arguments) – Any keyword arguments are passed to the decomposition algorithm.
- Returns:
return_info –
If True and ‘algorithm’ in [‘RPCA’, ‘ORPCA’, ‘ORNMF’], returns the low-rank (X) and sparse (E) matrices from robust PCA/NMF.
If True and ‘algorithm’ is an sklearn Estimator, returns the Estimator object.
Otherwise, returns None
- Return type:
tuple(numpy array, numpy array) or sklearn.Estimator or None
References
[Keenan2004] M. R. Keenan and P. G. Kotula, “Accounting for Poisson noise in the multivariate analysis of ToF-SIMS spectrum images,” Surface and Interface Analysis, 36(3), pp. 203-212, 2004.
See also
plot_decomposition_factors()
plot_decomposition_loadings()
plot_decomposition_results()
decomposition() for lazy signals
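Example
A hedged sketch of a typical SVD decomposition with Poissonian-noise normalization followed by a scree plot and a truncated reconstruction; the synthetic count data and the choice of 5 components are assumptions.
>>> import numpy as np
>>> import hyperspy.api as hs
>>> s = hs.signals.Signal1D(np.random.poisson(50, (32, 32, 256)).astype(float))
>>> s.decomposition(normalize_poissonian_noise=True, algorithm="SVD")
>>> s.plot_explained_variance_ratio(n=30)     # scree plot of the leading components
>>> s_model = s.get_decomposition_model(5)    # denoised model from the first 5 components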
- estimate_elbow_position(explained_variance_ratio=None, log=True, max_points=20)
Estimate the elbow position of a scree plot curve.
Used to estimate the number of significant components in a PCA variance ratio plot or other “elbow” type curves.
Find a line between first and last point on the scree plot. With a classic elbow scree plot, this line more or less defines a triangle. The elbow should be the point which is the furthest distance from this line. For more details, see [Satopää2011].
- Parameters:
explained_variance_ratio ({None, numpy array}) – Explained variance ratio values that form the scree plot. If None, uses the explained_variance_ratio array stored in s.learning_results, so a decomposition must have been performed first.
max_points (int) – Maximum number of points to consider in the calculation.
- Returns:
elbow position – Index of the elbow position in the input array. Due to zero-based indexing, the number of significant components is elbow_position + 1.
- Return type:
int
References
[Satopää2011] V. Satopää, J. Albrecht, D. Irwin, and B. Raghavan, “Finding a ‘Kneedle’ in a Haystack: Detecting Knee Points in System Behavior,” 31st International Conference on Distributed Computing Systems Workshops, pp. 166-171, June 2011.
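Example
A hedged sketch of estimating the number of significant components from the scree curve after a decomposition; the synthetic signal is an assumption.
>>> import numpy as np
>>> import hyperspy.api as hs
>>> s = hs.signals.Signal1D(np.random.random((16, 16, 200)))
>>> s.decomposition()
>>> elbow = s.estimate_elbow_position()
>>> n_significant = elbow + 1                   # zero-based index, see Returns above
>>> s.plot_explained_variance_ratio(vline=True) # visual check of the estimate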
- estimate_number_of_clusters(cluster_source, max_clusters=10, preprocessing=None, preprocessing_kwargs={}, number_of_components=None, navigation_mask=None, signal_mask=None, algorithm=None, metric='gap', n_ref=4, **kwargs)
Performs cluster analysis of a signal for cluster sizes ranging from n_clusters = 2 to max_clusters (default 10). Note that this can be a slow process for large datasets, so please consider reducing max_clusters in this case. For each cluster size it evaluates a score (e.g. the silhouette score) that measures how well separated the clusters are. Maxima or peaks in the scores indicate good choices for cluster sizes.
- Parameters:
cluster_source ({"bss", "decomposition", "signal" or Signal}) – If “bss” the blind source separation results are used If “decomposition” the decomposition results are used if “signal” the signal data is used Note that using the signal can be memory intensive and is only recommended if the Signal dimension is small. Input Signal must have the same navigation dimensions as the signal instance.
max_clusters (int, default 10) – Max number of clusters to use. The method will scan from 2 to max_clusters.
preprocessing ({"standard","norm","minmax" or sklearn-like preprocessing object}) – default: ‘norm’ Preprocessing the data before cluster analysis requires preprocessing the data to be clustered to similar scales. Standard preprocessing adjusts each feature to have uniform variation. Norm preprocessing adjusts treats the set of features like a vector and each measurement is scaled to length 1. You can also pass an instance of a sklearn preprocessing module. See preprocessing methods in scikit-learn preprocessing for further details.
preprocessing_kwargs (dict, default empty) – Additional parameters passed to the cluster preprocessing algorithm. See sklearn.preprocessing preprocessing methods for further details
number_of_components (int, default None) – If you are getting the cluster centers using the decomposition results (cluster_source_for_centers=”decomposition”) you can define how many PCA components to use. If set to None the method uses the estimate of significant components found in the decomposition step using the elbow method and stored in the
learning_results.number_significant_components
attribute.navigation_mask (boolean numpy array, default : None) – The navigation locations marked as True are not used in the clustering.
signal_mask (boolean numpy array, default : None) – The signal locations marked as True are not used in the clustering. Applies to “signal” or Signal cluster sources only.
metric ({'elbow','silhouette','gap'} default 'gap') – Use distance,silhouette analysis or gap statistics to estimate the optimal number of clusters. Gap is believed to be, overall, the best metric but it’s also the slowest. Elbow measures the distances between points in each cluster as an estimate of how well grouped they are and is the fastest metric. For elbow the optimal k is the knee or elbow point. For gap the optimal k is the first k gap(k)>= gap(k+1)-std_error For silhouette the optimal k will be one of the “maxima” found with this method
n_ref (int, default 4) – Number of references to use in gap statistics method Gap statistics compares the results from clustering the data to clustering uniformly distributed data. As clustering has a random variation it is typically averaged n_ref times to get an statistical average
**kwargs (dict {} default empty) – Parameters passed to the clustering algorithm.
n_clusters (int) – Number of clusters to find using the one of the pre-defined methods “kmeans”,”agglomerative”,”minibatchkmeans”,”spectralclustering” See sklearn.cluster for details
- Returns:
best_k – Estimate of the best cluster size
- Return type:
int
See also
get_cluster_labels()
get_cluster_signals()
get_cluster_distances()
plot_cluster_results()
plot_cluster_signals()
plot_cluster_labels()
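Example
A hedged sketch of scanning cluster sizes on the decomposition results, inspecting the metric, and then clustering with the estimated size; max_clusters and the synthetic data are illustrative assumptions.
>>> import numpy as np
>>> import hyperspy.api as hs
>>> s = hs.signals.Signal1D(np.random.random((20, 20, 128)))
>>> s.decomposition()
>>> best_k = s.estimate_number_of_clusters("decomposition", max_clusters=8)
>>> s.plot_cluster_metric()                           # metric vs. number of clusters
>>> s.cluster_analysis("decomposition", n_clusters=best_k)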
- get_bss_model(components=None, chunks='auto')
Generate model with the selected number of independent components.
- Parameters:
components ({None, int, list of ints}, default None) –
If None, rebuilds the signal instance from all components.
If int, rebuilds the signal instance from components 0 to the given int.
If a list of ints, rebuilds the signal instance from only the components in the given list.
- Returns:
A model built from the given components.
- Return type:
Signal instance
- get_decomposition_model(components=None)
Generate model with the selected number of principal components.
- Parameters:
components ({None, int, list of ints}, default None) –
If None, rebuilds the signal instance from all components.
If int, rebuilds the signal instance from components 0 to the given int.
If a list of ints, rebuilds the signal instance from only the components in the given list.
- Returns:
A model built from the given components.
- Return type:
Signal instance
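Example
A hedged sketch of rebuilding a denoised model from a limited number of components and inspecting the residual; the component count and synthetic data are assumptions.
>>> import numpy as np
>>> import hyperspy.api as hs
>>> s = hs.signals.Signal1D(np.random.random((16, 16, 200)))
>>> s.decomposition()
>>> sc = s.get_decomposition_model(components=4)  # model from the first 4 components
>>> residual = s - sc                             # what the truncated model leaves out
>>> residual.plot()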
- get_explained_variance_ratio()
Return explained variance ratio of the PCA components as a Signal1D.
Read more in the User Guide.
- Returns:
s – Explained variance ratio.
- Return type:
Signal1D
See also
get_decomposition_loadings()
get_decomposition_factors()
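Example
A hedged sketch of retrieving the explained variance ratio as a signal and working with its underlying array; purely illustrative.
>>> import numpy as np
>>> import hyperspy.api as hs
>>> s = hs.signals.Signal1D(np.random.random((16, 16, 200)))
>>> s.decomposition()
>>> evr = s.get_explained_variance_ratio()
>>> evr.isig[:20].plot()        # scree of the first 20 components
>>> float(evr.data[:5].sum())   # variance captured by the first 5 components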
- normalize_bss_components(target='factors', function=<function sum>)
Normalize BSS components.
- Parameters:
target ({"factors", "loadings"}) – Normalize components based on the scale of either the factors or loadings.
function (numpy universal function, default np.sum) – Each target component is divided by the output of function(target). The function must return a scalar when operating on numpy arrays and must have an axis argument.
- normalize_decomposition_components(target='factors', function=<function sum>)
Normalize decomposition components.
- Parameters:
target ({"factors", "loadings"}) – Normalize components based on the scale of either the factors or loadings.
function (numpy universal function, default np.sum) – Each target component is divided by the output of function(target). The function must return a scalar when operating on numpy arrays and must have an axis argument.
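Example
A hedged sketch of normalizing the decomposition factors with the default np.sum; the synthetic data is an assumption, and the loadings are expected to be rescaled accordingly (see the _normalize_components helper below).
>>> import numpy as np
>>> import hyperspy.api as hs
>>> s = hs.signals.Signal1D(np.random.random((16, 16, 200)))
>>> s.decomposition()
>>> s.normalize_decomposition_components(target="factors", function=np.sum)
>>> s.plot_decomposition_factors(3)   # factors now share a common scale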
- normalize_poissonian_noise(navigation_mask=None, signal_mask=None)
Normalize the signal under the assumption of Poisson noise.
Scales the signal to “normalize” the Poisson data for subsequent decomposition analysis [Keenan2004].
- Parameters:
navigation_mask ({None, boolean numpy array}, default None) – Optional mask applied in the navigation axis.
signal_mask ({None, boolean numpy array}, default None) – Optional mask applied in the signal axis.
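Example
A hedged sketch of applying the normalization directly, with an optional navigation mask, before running a decomposition on the rescaled data; the mask construction and synthetic counts are illustrative assumptions.
>>> import numpy as np
>>> import hyperspy.api as hs
>>> s = hs.signals.Signal1D(np.random.poisson(20, (16, 16, 200)).astype(float))
>>> nav_mask = np.zeros((16, 16), dtype=bool)
>>> nav_mask[:2, :] = True                       # exclude the first two rows
>>> s.normalize_poissonian_noise(navigation_mask=nav_mask)
>>> s.decomposition(algorithm="SVD")             # decompose the normalized data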
- plot_cluster_metric()
Plot the cluster metrics calculated using the estimate_number_of_clusters method.
See also
get_cluster_labels()
get_cluster_signals()
plot_cluster_results()
plot_cluster_signals()
plot_cluster_labels()
- plot_cumulative_explained_variance_ratio(n=50)
Plot cumulative explained variance up to n principal components.
- Parameters:
n (int) – Number of principal components to show.
- Returns:
ax – Axes object containing the cumulative explained variance plot.
- Return type:
matplotlib.axes
- plot_explained_variance_ratio(n=30, log=True, threshold=0, hline='auto', vline=False, xaxis_type='index', xaxis_labeling=None, signal_fmt=None, noise_fmt=None, fig=None, ax=None, **kwargs)
Plot the decomposition explained variance ratio vs index number.
This is commonly known as a scree plot.
Read more in the User Guide.
- Parameters:
n (int or None) – Number of components to plot. If None, all components will be plotted.
log (bool, default True) – If True, the y axis uses a log scale.
threshold (float or int) – Threshold used to determine how many components should be highlighted as signal (as opposed to noise). If a float (between 0 and 1), threshold will be interpreted as a cutoff value, defining the variance at which to draw a line showing the cutoff between signal and noise; the number of signal components will be automatically determined by the cutoff value. If an int, threshold is interpreted as the number of components to highlight as signal (and no cutoff line will be drawn).
hline ({'auto', True, False}) – Whether or not to draw a horizontal line illustrating the variance cutoff for signal/noise determination. The default is to draw the line at the value given in threshold (if it is a float) and not to draw it in the case threshold is an int, or not given. If True (and threshold is an int), the line will be drawn through the last component defined as signal. If False, the line will not be drawn in any circumstance.
vline (bool, default False) – Whether or not to draw a vertical line illustrating an estimate of the number of significant components. If True, the line will be drawn at the knee or elbow position of the curve, indicating the number of significant components. If False, the line will not be drawn in any circumstance.
xaxis_type ({'index', 'number'}) – Determines the type of labeling applied to the x-axis. If 'index', the axis will be labeled starting at 0 (i.e. “pythonic index” labeling); if 'number', it will start at 1 (number labeling).
xaxis_labeling ({'ordinal', 'cardinal', None}) – Determines the format of the x-axis tick labels. If 'ordinal', “1st, 2nd, …” will be used; if 'cardinal', “1, 2, …” will be used. If None, an appropriate default will be selected.
signal_fmt (dict) – Dictionary of matplotlib formatting values for the signal components.
noise_fmt (dict) – Dictionary of matplotlib formatting values for the noise components.
fig (matplotlib figure or None) – If None, a default figure will be created, otherwise the plot will be added to fig.
ax (matplotlib ax (subplot) or None) – If None, a default ax will be created, otherwise the plot will be added to ax.
**kwargs – Remaining keyword arguments are passed to matplotlib.figure().
- Returns:
ax – Axes object containing the scree plot
- Return type:
matplotlib.axes
Example
To generate a scree plot with customized symbols for signal vs. noise components and a modified cutoff threshold value:
>>> s = hs.load("some_spectrum_image") >>> s.decomposition() >>> s.plot_explained_variance_ratio(n=40, >>> threshold=0.005, >>> signal_fmt={'marker': 'v', >>> 's': 150, >>> 'c': 'pink'} >>> noise_fmt={'marker': '*', >>> 's': 200, >>> 'c': 'green'})
See also
get_decomposition_loadings()
get_decomposition_factors()
- reverse_bss_component(component_number)
Reverse the independent component.
Examples
>>> s = hs.load('some_file')
>>> s.decomposition(True)  # perform PCA
>>> s.blind_source_separation(3)  # perform ICA on 3 PCs
>>> s.reverse_bss_component(1)  # reverse IC 1
>>> s.reverse_bss_component((0, 2))  # reverse ICs 0 and 2
- reverse_decomposition_component(component_number)
Reverse the decomposition component.
Examples
>>> s = hs.load('some_file')
>>> s.decomposition(True)  # perform PCA
>>> s.reverse_decomposition_component(1)  # reverse component 1
>>> s.reverse_decomposition_component((0, 2))  # reverse components 0 and 2
- undo_treatments()
Undo Poisson noise normalization and other pre-treatments.
Only valid if calling s.decomposition(..., copy=True).
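Example
A hedged sketch showing that the pre-treatment copy made with copy=True allows the original data to be restored after a decomposition with Poissonian-noise normalization; synthetic data for illustration.
>>> import numpy as np
>>> import hyperspy.api as hs
>>> s = hs.signals.Signal1D(np.random.poisson(20, (8, 8, 100)).astype(float))
>>> original = s.data.copy()
>>> s.decomposition(normalize_poissonian_noise=True, copy=True)  # data rescaled in place
>>> s.undo_treatments()                                          # restore the original data
>>> np.allclose(s.data, original)
True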
- hyperspy.learn.mva._get_derivative(signal, diff_axes, diff_order)
Calculate the derivative of a signal.
- hyperspy.learn.mva._normalize_components(target, other, function=<function sum>)
Normalize components according to a function.