Machine Learning#
- class hyperspy.learn.mva.LearningResults#
Bases: object
Stores the parameters and results from a decomposition.
- crop_decomposition_dimension(n, compute=False)#
Crop the score matrix to the first n components.
This is mainly useful for saving memory and reducing the storage size.
- load(filename)#
Load the results of a previous decomposition and demixing analysis.
- Parameters:
  - filename : str
    Path to load the results from.
- save(filename, overwrite=None)#
Save the result of the decomposition and demixing analysis.
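A hedged sketch of the save/load round trip is shown below; the filename is illustrative, and the on-disk .npz format assumed on reload is an assumption, since in practice a LearningResults instance is populated by running a decomposition rather than constructed directly.

```python
# A minimal sketch, not a definitive usage pattern; a LearningResults
# instance is normally populated by a decomposition (e.g. it is accessed
# as a signal's learning_results attribute).
from hyperspy.learn.mva import LearningResults

results = LearningResults()
# ... run a decomposition so that `results` holds factors and loadings ...
results.save("my_decomposition", overwrite=True)  # filename is illustrative

restored = LearningResults()
restored.load("my_decomposition.npz")  # .npz extension is an assumption
```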
- hyperspy.learn.mlpca.mlpca(X, varX, output_dimension, svd_solver='auto', tol=1e-10, max_iter=50000, **kwargs)#
Performs maximum likelihood PCA with missing data and/or heteroskedastic noise.
Standard PCA based on a singular value decomposition (SVD) approach assumes that the data is corrupted with Gaussian, or homoskedastic noise. For many applications, this assumption does not hold. For example, count data from EDS-TEM experiments is corrupted by Poisson noise, where the noise variance depends on the underlying pixel value. Rather than scaling or transforming the data to approximately “normalize” the noise, MLPCA instead uses estimates of the data variance to perform the decomposition.
This function is a transcription of MATLAB code obtained from [Andrews1997].
Read more in the User Guide.
- Parameters:
  - X : numpy.ndarray
    Matrix of observations with shape (m, n).
  - varX : numpy.ndarray
    Matrix of variances associated with X (zeros for missing measurements).
  - output_dimension : int
    The model dimensionality.
  - svd_solver : {"auto", "full", "arpack", "randomized"}, default "auto"
    - If "auto": the solver is selected by a default policy based on data.shape and output_dimension: if the input data is larger than 500x500 and the number of components to extract is lower than 80% of the smallest dimension of the data, then the more efficient "randomized" method is enabled. Otherwise the exact full SVD is computed and optionally truncated afterwards.
    - If "full": run exact SVD, calling the standard LAPACK solver via scipy.linalg.svd(), and select the components by postprocessing.
    - If "arpack": use truncated SVD, calling the ARPACK solver via scipy.sparse.linalg.svds(). It requires strictly 0 < output_dimension < min(data.shape).
    - If "randomized": use truncated SVD, calling sklearn.utils.extmath.randomized_svd() to estimate a limited number of components.
  - tol : float
    Tolerance of the stopping condition.
  - max_iter : int
    Maximum number of iterations before exiting without convergence.
- Returns:
  - numpy.ndarray
    The pseudo-SVD parameters.
  - float
    Value of the objective function.
References
[Andrews1997] Darren T. Andrews and Peter D. Wentzell, “Applications of maximum likelihood principal component analysis: incomplete data sets and calibration transfer”, Analytica Chimica Acta 350, no. 3 (September 19, 1997): 341-352.
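Below is a hedged usage sketch on synthetic Poisson-count data; unpacking the pseudo-SVD parameters as U, S, V followed by the objective value is an assumption based on the Returns section.

```python
# A minimal sketch, not a definitive usage pattern: synthetic Poisson counts,
# with the variance estimated by the counts themselves.
import numpy as np
from hyperspy.learn.mlpca import mlpca

rng = np.random.default_rng(0)
truth = rng.random((200, 3)) @ rng.random((3, 100))  # rank-3 ground truth
X = rng.poisson(truth * 100).astype(float)           # (m, n) count data
varX = np.maximum(X, 1.0)  # Poisson variance ~ mean; avoid zeros, which mark missing data

# Assumed unpacking: pseudo-SVD parameters U, S, V plus the objective value.
U, S, V, s_obj = mlpca(X, varX, output_dimension=3)
```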
- class hyperspy.learn.ornmf.ORNMF(rank, store_error=False, lambda1=1.0, kappa=1.0, method='PGD', subspace_learning_rate=1.0, subspace_momentum=0.5, random_state=None)#
Bases: object
Performs Online Robust NMF with missing or corrupted data.
The ORNMF code is based on a transcription of the online proximal gradient descent (PGD) algorithm MATLAB code obtained from the authors of [Zhao2016]. It has been updated to also include an L2-norm cost function that is able to deal with sparse corruptions and/or outliers slightly faster (see the ORPCA implementation for details). A further modification has been made to allow for a changing subspace W, where X ~= WH^T + E in the ORNMF framework.
Read more in the User Guide.
References
[Zhao2016] Zhao, Renbo, and Vincent Y. F. Tan. “Online nonnegative matrix factorization with outliers.” Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on. IEEE, 2016.
Creates an Online Robust NMF instance that can learn a representation.
- Parameters:
  - rank : int
    The rank of the representation (number of components/factors).
  - store_error : bool, default False
    If True, stores the sparse error matrix.
  - lambda1 : float
    Nuclear norm regularization parameter.
  - kappa : float
    Step-size for projection solver.
  - method : {'PGD', 'RobustPGD', 'MomentumSGD'}, default 'PGD'
    - 'PGD': proximal gradient descent
    - 'RobustPGD': robust proximal gradient descent
    - 'MomentumSGD': stochastic gradient descent with momentum
  - subspace_learning_rate : float
    Learning rate for the 'MomentumSGD' method. Should be a float > 0.0.
  - subspace_momentum : float
    Momentum parameter for the 'MomentumSGD' method. Should be a float between 0 and 1.
  - random_state : None, int or RandomState, default None
    Used to initialize the subspace on the first iteration. See numpy.random.default_rng() for more information.
- finish()#
Return the learnt factors and loadings.
- fit(X, batch_size=None)#
Learn NMF components from the data.
- Parameters:
  - X : array_like
    The matrix of observations with shape (n_samples, n_features), or an iterator that yields samples, each with n_features elements.
  - batch_size : None or int
    If not None, learn the data in batches, each of batch_size samples or less.
- project(X, return_error=False)#
Project the learnt components on the data.
- Parameters:
  - X : array_like
    The matrix of observations with shape (n_samples, n_features), or an iterator that yields samples, each with n_features elements.
  - return_error : bool, default False
    If True, returns the sparse error matrix as well. Otherwise, returns only the weights (loadings).
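The sketch below shows the fit/project/finish workflow on synthetic non-negative data; unpacking finish() as (W, H) is an assumption based on its one-line description.

```python
# A minimal sketch, assuming finish() returns the factors and loadings as a pair.
import numpy as np
from hyperspy.learn.ornmf import ORNMF

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(500, 64)))  # (n_samples, n_features), non-negative

model = ORNMF(rank=4, method='PGD')
model.fit(X, batch_size=64)   # learn the representation in mini-batches
W, H = model.finish()         # assumed unpacking: factors W, loadings H
H_new = model.project(X)      # loadings for (possibly new) observations
```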
- hyperspy.learn.ornmf.ornmf(X, rank, store_error=False, project=False, batch_size=None, lambda1=1.0, kappa=1.0, method='PGD', subspace_learning_rate=1.0, subspace_momentum=0.5, random_state=None)#
Perform online, robust NMF on the data X.
This is a wrapper function for the ORNMF class.
- Parameters:
  - X : numpy.ndarray
    The input data with shape (n_samples, n_features).
  - rank : int
    The rank of the representation (number of components/factors).
  - store_error : bool, default False
    If True, stores the sparse error matrix.
  - project : bool, default False
    If True, project the data X onto the learnt model.
  - batch_size : None or int, default None
    If not None, learn the data in batches, each of batch_size samples or less.
  - lambda1 : float, default 1.0
    Nuclear norm regularization parameter.
  - kappa : float, default 1.0
    Step-size for projection solver.
  - method : {'PGD', 'RobustPGD', 'MomentumSGD'}, default 'PGD'
    - 'PGD': proximal gradient descent
    - 'RobustPGD': robust proximal gradient descent
    - 'MomentumSGD': stochastic gradient descent with momentum
  - subspace_learning_rate : float, default 1.0
    Learning rate for the 'MomentumSGD' method. Should be a float > 0.0.
  - subspace_momentum : float, default 0.5
    Momentum parameter for the 'MomentumSGD' method. Should be a float between 0 and 1.
  - random_state : None, int or RandomState, default None
    Used to initialize the subspace on the first iteration.
- Returns:
  - Xhat : numpy.ndarray
    The non-negative matrix with shape (n_features, n_samples). Only returned if store_error is True.
  - Ehat : numpy.ndarray
    The sparse error matrix with shape (n_features, n_samples). Only returned if store_error is True.
  - W : numpy.ndarray
    The non-negative factors matrix with shape (n_features, rank).
  - H : numpy.ndarray
    The non-negative loadings matrix with shape (rank, n_samples).
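A hedged sketch of the wrapper with default arguments follows; with store_error=False, the Returns section implies that only W and H are returned.

```python
# A minimal sketch with default arguments on synthetic non-negative data.
import numpy as np
from hyperspy.learn.ornmf import ornmf

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(500, 64)))  # (n_samples, n_features)

W, H = ornmf(X, rank=4)  # W: (n_features, rank), H: (rank, n_samples)
```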
- hyperspy.learn.orthomax.orthomax(A, gamma=1.0, tol=1.4901e-07, max_iter=256)#
Calculate orthogonal rotations for a matrix of factors or loadings from PCA.
When gamma=1.0, this is known as varimax rotation, which finds a rotation matrix W that maximizes the variance of the squared components of A @ W. The rotation matrix preserves orthogonality of the components.
Taken from metpy.
- Parameters:
  - A : numpy.ndarray
    The matrix of factors or loadings to rotate.
  - gamma : float, default 1.0
    Rotation parameter; gamma=1.0 corresponds to varimax rotation.
  - tol : float, default 1.4901e-07
    Tolerance of the stopping condition.
  - max_iter : int, default 256
    Maximum number of iterations before exiting without convergence.
- Returns:
  - numpy.ndarray
    The rotated matrix A @ W.
  - numpy.ndarray
    The rotation matrix W.
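A hedged sketch of a varimax rotation follows; the (rotated matrix, rotation matrix) return order is an assumption based on the description above.

```python
# A minimal sketch of varimax rotation (gamma=1.0) on synthetic factors.
import numpy as np
from hyperspy.learn.orthomax import orthomax

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 3))      # e.g. factors from a PCA decomposition
A_rot, W = orthomax(A, gamma=1.0)  # assumed unpacking: rotated matrix, rotation
np.testing.assert_allclose(W.T @ W, np.eye(3), atol=1e-6)  # W preserves orthogonality
```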
- class hyperspy.learn.rpca.ORPCA(rank, store_error=False, lambda1=0.1, lambda2=1.0, method='BCD', init='qr', training_samples=10, subspace_learning_rate=1.0, subspace_momentum=0.5, random_state=None)#
Bases: object
Performs Online Robust PCA with missing or corrupted data.
The ORPCA code is based on a transcription of MATLAB code from [Feng2013]. It has been updated to include a new initialization method based on a QR decomposition of the first n “training” samples of the data. A stochastic gradient descent (SGD) solver is also implemented, along with a MomentumSGD solver for improved convergence and robustness with respect to local minima. More information about the gradient descent methods and choosing appropriate parameters can be found in [Ruder2016].
Read more in the User Guide.
References
[Feng2013] Jiashi Feng, Huan Xu and Shuicheng Yan, “Online Robust PCA via Stochastic Optimization”, Advances in Neural Information Processing Systems 26, (2013), pp. 404-412.
[Ruder2016] Sebastian Ruder, “An overview of gradient descent optimization algorithms”, arXiv:1609.04747, (2016), https://arxiv.org/abs/1609.04747.
Creates an Online Robust PCA instance that can learn a representation.
- Parameters:
  - rank : int
    The rank of the representation (number of components/factors).
  - store_error : bool, default False
    If True, stores the sparse error matrix.
  - lambda1 : float, default 0.1
    Nuclear norm regularization parameter.
  - lambda2 : float, default 1.0
    Sparse error regularization parameter.
  - method : {'CF', 'BCD', 'SGD', 'MomentumSGD'}, default 'BCD'
    - 'CF': closed-form solver
    - 'BCD': block-coordinate descent
    - 'SGD': stochastic gradient descent
    - 'MomentumSGD': stochastic gradient descent with momentum
  - init : numpy.ndarray or {'qr', 'rand'}, default 'qr'
    - 'qr': QR-based initialization
    - 'rand': random initialization
    - A numpy.ndarray of shape (n_features, rank) can also be provided.
  - training_samples : int, default 10
    Specifies the number of training samples to use in the 'qr' initialization.
  - subspace_learning_rate : float, default 1.0
    Learning rate for the 'SGD' and 'MomentumSGD' methods. Should be a float > 0.0.
  - subspace_momentum : float, default 0.5
    Momentum parameter for the 'MomentumSGD' method. Should be a float between 0 and 1.
  - random_state : None, int or RandomState, default None
    Used to initialize the subspace on the first iteration.
- finish(**kwargs)#
Return the learnt factors and loadings.
- fit(X, batch_size=None)#
Learn RPCA components from the data.
- Parameters:
  - X : array_like
    The matrix of observations with shape (n_samples, n_features), or an iterator that yields samples, each with n_features elements.
  - batch_size : None or int
    If not None, learn the data in batches, each of batch_size samples or less.
- project(X, return_error=False)#
Project the learnt components on the data.
- Parameters:
  - X : array_like
    The matrix of observations with shape (n_samples, n_features), or an iterator that yields samples, each with n_features elements.
  - return_error : bool, default False
    If True, returns the sparse error matrix as well. Otherwise, returns only the weights (loadings).
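A hedged sketch of the ORPCA workflow follows; unpacking finish() as (factors, loadings) is an assumption based on its one-line description.

```python
# A minimal sketch, assuming finish() returns the factors and loadings as a pair.
import numpy as np
from hyperspy.learn.rpca import ORPCA

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))  # (n_samples, n_features)

model = ORPCA(rank=5, method='MomentumSGD', subspace_learning_rate=0.5)
model.fit(X, batch_size=100)
factors, loadings = model.finish()  # assumed unpacking
weights = model.project(X)          # loadings for (possibly new) observations
```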
- hyperspy.learn.rpca.orpca(X, rank, store_error=False, project=False, batch_size=None, lambda1=0.1, lambda2=1.0, method='BCD', init='qr', training_samples=10, subspace_learning_rate=1.0, subspace_momentum=0.5, random_state=None, **kwargs)#
Perform online, robust PCA on the data X.
This is a wrapper function for the ORPCA class.
- Parameters:
  - X : array_like
    The matrix of observations with shape (n_features, n_samples), or an iterator that yields samples, each with n_features elements.
  - rank : int
    The rank of the representation (number of components/factors).
  - store_error : bool, default False
    If True, stores the sparse error matrix.
  - project : bool, default False
    If True, project the data X onto the learnt model.
  - batch_size : None or int, default None
    If not None, learn the data in batches, each of batch_size samples or less.
  - lambda1 : float, default 0.1
    Nuclear norm regularization parameter.
  - lambda2 : float, default 1.0
    Sparse error regularization parameter.
  - method : {'CF', 'BCD', 'SGD', 'MomentumSGD'}, default 'BCD'
    - 'CF': closed-form solver
    - 'BCD': block-coordinate descent
    - 'SGD': stochastic gradient descent
    - 'MomentumSGD': stochastic gradient descent with momentum
  - init : numpy.ndarray or {'qr', 'rand'}, default 'qr'
    - 'qr': QR-based initialization
    - 'rand': random initialization
    - A numpy.ndarray of shape (n_features, rank) can also be provided.
  - training_samples : int, default 10
    Specifies the number of training samples to use in the 'qr' initialization.
  - subspace_learning_rate : float, default 1.0
    Learning rate for the 'SGD' and 'MomentumSGD' methods. Should be a float > 0.0.
  - subspace_momentum : float, default 0.5
    Momentum parameter for the 'MomentumSGD' method. Should be a float between 0 and 1.
  - random_state : None, int or RandomState, default None
    Used to initialize the subspace on the first iteration.
- Returns:
  - numpy.ndarray
    If project is True, returns only the low-rank factors and loadings. Otherwise, returns the low-rank and sparse error matrices, as well as the results of a singular value decomposition (SVD) applied to the low-rank matrix.
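A hedged sketch with project=True follows; the (factors, loadings) return pair and the (n_features, n_samples) layout follow the descriptions above.

```python
# A minimal sketch of the wrapper with project=True on synthetic data.
import numpy as np
from hyperspy.learn.rpca import orpca

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 1000))  # (n_features, n_samples)

factors, loadings = orpca(X, rank=5, project=True)  # assumed unpacking
```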
- hyperspy.learn.rpca.rpca_godec(X, rank, lambda1=None, power=0, tol=0.001, maxiter=1000, random_state=None, **kwargs)#
Perform Robust PCA with missing or corrupted data, using the GoDec algorithm.
Decomposes a matrix Y = X + E, where X is low-rank and E is a sparse error matrix. This algorithm is based on the Matlab code from [Zhou2011]. See code here: https://sites.google.com/site/godecomposition/matrix/artifact-1
Read more in the User Guide.
- Parameters:
  - X : numpy.ndarray
    The matrix of observations with shape (n_features, n_samples).
  - rank : int
    The model dimensionality.
  - lambda1 : None or float
    Regularization parameter. If None, set to 1 / sqrt(n_features).
  - power : int, default 0
    The number of power iterations used in the initialization.
  - tol : float, default 1e-3
    Convergence tolerance.
  - maxiter : int, default 1000
    Maximum number of iterations.
  - random_state : None, int or RandomState, default None
    Used to initialize the subspace on the first iteration.
- Returns:
  - Xhat : numpy.ndarray
    The low-rank matrix with shape (n_features, n_samples).
  - Ehat : numpy.ndarray
    The sparse error matrix with shape (n_features, n_samples).
  - U, S, V : numpy.ndarray
    The results of an SVD on Xhat.
References
[Zhou2011] Tianyi Zhou and Dacheng Tao, “GoDec: Randomized Low-rank & Sparse Matrix Decomposition in Noisy Case”, ICML-11, (2011), pp. 33-40.
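The sketch below applies rpca_godec to synthetic data with sparse corruption; the five return values follow the Returns section above.

```python
# A minimal sketch on synthetic data: a low-rank signal plus rare large
# outliers, which GoDec separates into Xhat (low-rank) and Ehat (sparse).
import numpy as np
from hyperspy.learn.rpca import rpca_godec

rng = np.random.default_rng(0)
low_rank = rng.normal(size=(64, 5)) @ rng.normal(size=(5, 1000))
outliers = (rng.random((64, 1000)) < 0.01) * 10.0
X = low_rank + outliers  # (n_features, n_samples)

Xhat, Ehat, U, S, V = rpca_godec(X, rank=5)
```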
- hyperspy.learn.svd_pca.svd_flip_signs(u, v, u_based_decision=True)#
Sign correction to ensure deterministic output from SVD.
Adjusts the columns of u and the rows of v such that the loadings in the columns in u that are largest in absolute value are always positive.
- Parameters:
  - u, v : numpy.ndarray
    u and v are the outputs of a singular value decomposition.
  - u_based_decision : bool, default True
    If True, use the columns of u as the basis for sign flipping. Otherwise, use the rows of v. The choice of which variable to base the decision on is generally algorithm dependent.
- Returns:
  - u, v : numpy.ndarray
    Adjusted outputs with same dimensions as inputs.
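A brief sketch of sign correction after numpy's SVD:

```python
# A minimal sketch: make an SVD's output deterministic by fixing signs.
import numpy as np
from hyperspy.learn.svd_pca import svd_flip_signs

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 10))
u, s, v = np.linalg.svd(X, full_matrices=False)  # v here is numpy's vh
u, v = svd_flip_signs(u, v)  # largest-magnitude loadings in u are now positive
```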
- hyperspy.learn.svd_pca.svd_pca(data, output_dimension=None, svd_solver='auto', centre=None, auto_transpose=True, svd_flip=True, **kwargs)#
Perform PCA using singular value decomposition (SVD).
Read more in the User Guide.
- Parameters:
  - data : numpy.ndarray
    MxN array of input data (M features, N samples).
  - output_dimension : None or int
    Number of components to keep/calculate.
  - svd_solver : {"auto", "full", "arpack", "randomized"}, default "auto"
    - If "auto": the solver is selected by a default policy based on data.shape and output_dimension: if the input data is larger than 500x500 and the number of components to extract is lower than 80% of the smallest dimension of the data, then the more efficient "randomized" method is enabled. Otherwise the exact full SVD is computed and optionally truncated afterwards.
    - If "full": run exact SVD, calling the standard LAPACK solver via scipy.linalg.svd(), and select the components by postprocessing.
    - If "arpack": use truncated SVD, calling the ARPACK solver via scipy.sparse.linalg.svds(). It requires strictly 0 < output_dimension < min(data.shape).
    - If "randomized": use truncated SVD, calling sklearn.utils.extmath.randomized_svd() to estimate a limited number of components.
  - centre : {None, "navigation", "signal"}, default None
    - If None, the data is not centred prior to decomposition.
    - If "navigation", the data is centred along the navigation axis.
    - If "signal", the data is centred along the signal axis.
  - auto_transpose : bool, default True
    If True, automatically transposes the data to boost performance.
  - svd_flip : bool, default True
    If True, adjusts the signs of the loadings and factors such that the loadings that are largest in absolute value are always positive. See svd_flip_signs() for more details.
- Returns:
  - factors : numpy.ndarray
  - loadings : numpy.ndarray
  - explained_variance : numpy.ndarray
  - mean : numpy.ndarray or None
    None if centre is None.
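A hedged usage sketch, keeping the (features, samples) layout described above:

```python
# A minimal sketch of svd_pca on random data laid out as (M features, N samples).
import numpy as np
from hyperspy.learn.svd_pca import svd_pca

rng = np.random.default_rng(0)
data = rng.normal(size=(128, 1000))

factors, loadings, explained_variance, mean = svd_pca(
    data, output_dimension=10, svd_solver="randomized", centre=None
)
# mean is None here because centre is None
```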
- hyperspy.learn.svd_pca.svd_solve(data, output_dimension=None, svd_solver='auto', svd_flip=True, u_based_decision=True, **kwargs)#
Apply singular value decomposition to input data.
- Parameters:
  - data : numpy.ndarray
    Input data array with shape (m, n).
  - output_dimension : None or int
    Number of components to keep/calculate.
  - svd_solver : {"auto", "full", "arpack", "randomized"}, default "auto"
    - If "auto": the solver is selected by a default policy based on data.shape and output_dimension: if the input data is larger than 500x500 and the number of components to extract is lower than 80% of the smallest dimension of the data, then the more efficient "randomized" method is enabled. Otherwise the exact full SVD is computed and optionally truncated afterwards.
    - If "full": run exact SVD, calling the standard LAPACK solver via scipy.linalg.svd(), and select the components by postprocessing.
    - If "arpack": use truncated SVD, calling the ARPACK solver via scipy.sparse.linalg.svds(). It requires strictly 0 < output_dimension < min(data.shape).
    - If "randomized": use truncated SVD, calling sklearn.utils.extmath.randomized_svd() to estimate a limited number of components.
  - svd_flip : bool, default True
    If True, adjusts the signs of the loadings and factors such that the loadings that are largest in absolute value are always positive. See svd_flip_signs() for more details.
  - u_based_decision : bool, default True
    If True, and svd_flip is True, use the columns of u as the basis for sign-flipping. Otherwise, use the rows of v. The choice of which variable to base the decision on is generally algorithm dependent.
- Returns:
  - U, S, V : numpy.ndarray
    Output of SVD such that X = U*S*V.T.
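A brief sketch of a truncated SVD with the ARPACK solver:

```python
# A minimal sketch; ARPACK requires 0 < output_dimension < min(data.shape).
import numpy as np
from hyperspy.learn.svd_pca import svd_solve

rng = np.random.default_rng(0)
data = rng.normal(size=(256, 64))

U, S, V = svd_solve(data, output_dimension=5, svd_solver="arpack")
```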
- hyperspy.learn.whitening.whiten_data(X, centre=True, method='PCA', epsilon=1e-10)#
Centre and whiten the data X.
A whitening transformation is used to decorrelate the variables, such that the new covariance matrix of the whitened data is the identity matrix.
If X is a random vector with non-singular covariance matrix C, and W is a whitening matrix satisfying W^T W = C^-1, then the transformation Y = W X will yield a whitened random vector Y with unit diagonal covariance. In ZCA whitening, the matrix W = C^-1/2, while in PCA whitening, the matrix W is the eigensystem of C. More details can be found in [Kessy2015].
- Parameters:
  - X : numpy.ndarray
    The input data with shape (m, n).
  - centre : bool, default True
    If True, centre the data along the features axis. If False, do not centre the data.
  - method : {"PCA", "ZCA"}, default "PCA"
    How to whiten the data. The default is PCA whitening.
  - epsilon : float, default 1e-10
    Small floating-point value to avoid divide-by-zero errors.
- Returns:
  - Y : numpy.ndarray
    The centred and whitened data with shape (m, n).
  - W : numpy.ndarray
    The whitening matrix with shape (n, n).
References
[Kessy2015] A. Kessy, A. Lewin, and K. Strimmer, “Optimal Whitening and Decorrelation”, arXiv:1512.00809, (2015), https://arxiv.org/pdf/1512.00809.pdf
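A brief sketch of ZCA whitening on correlated synthetic data; per the description above, the covariance of the whitened output should be close to the identity matrix.

```python
# A minimal sketch: whiten correlated 2-D Gaussian samples with ZCA.
import numpy as np
from hyperspy.learn.whitening import whiten_data

rng = np.random.default_rng(0)
X = rng.multivariate_normal([0.0, 0.0], [[2.0, 0.8], [0.8, 1.0]], size=1000)

Y, W = whiten_data(X, centre=True, method="ZCA")
print(np.cov(Y, rowvar=False))  # approximately the 2x2 identity matrix
```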