LedoitWolf

class ibex.sklearn.covariance.LedoitWolf(store_precision=True, assume_centered=False, block_size=1000)

Bases: sklearn.covariance.shrunk_covariance_.LedoitWolf, ibex._base.FrameMixin

Note

The documentation following is of the class wrapped by this class. There are some changes, in particular:

LedoitWolf Estimator

Ledoit-Wolf is a particular form of shrinkage, where the shrinkage coefficient is computed using O. Ledoit and M. Wolf’s formula as described in “A Well-Conditioned Estimator for Large-Dimensional Covariance Matrices”, Ledoit and Wolf, Journal of Multivariate Analysis, Volume 88, Issue 2, February 2004, pages 365-411.

Read more in the User Guide.

Parameters

store_precision : bool, default=True
Specify if the estimated precision is stored.
assume_centered : bool, default=False
If True, data are not centered before computation. Useful when working with data whose mean is almost, but not exactly, zero. If False (default), data are centered before computation.
block_size : int, default=1000
Size of the blocks into which the covariance matrix will be split during its Ledoit-Wolf estimation. This is purely a memory optimization and does not affect results.

Attributes

covariance_ : array-like, shape (n_features, n_features)
Estimated covariance matrix.
precision_ : array-like, shape (n_features, n_features)
Estimated pseudo-inverse matrix (stored only if store_precision is True).
shrinkage_ : float, 0 <= shrinkage <= 1
Coefficient in the convex combination used for the computation of the shrunk estimate.
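
To illustrate how these parameters and attributes interact, here is a minimal sketch (not part of the wrapped documentation; the data and column names are made up, and the DataFrame input reflects the usage the ibex wrapper is designed for):

    import numpy as np
    import pandas as pd
    from ibex.sklearn.covariance import LedoitWolf

    rng = np.random.RandomState(0)
    X = pd.DataFrame(rng.randn(100, 4), columns=list('abcd'))  # illustrative data

    # store_precision=False means the pseudo-inverse is not kept on the estimator;
    # block_size only controls memory use during estimation, not the result.
    est = LedoitWolf(store_precision=False, block_size=1000).fit(X)

    print(est.covariance_.shape)          # (4, 4): estimated covariance matrix
    print(0.0 <= est.shrinkage_ <= 1.0)   # True: shrinkage coefficient lies in [0, 1]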

The regularised covariance is:

    (1 - shrinkage) * cov + shrinkage * mu * np.identity(n_features)

where mu = trace(cov) / n_features and shrinkage is given by the Ledoit and Wolf formula (see References).
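
As a sanity check on this formula, the following sketch (using the underlying scikit-learn estimator directly on a NumPy array; not from the original documentation) rebuilds covariance_ from the empirical covariance, the fitted shrinkage_ coefficient, and mu:

    import numpy as np
    from sklearn.covariance import LedoitWolf, empirical_covariance

    rng = np.random.RandomState(0)
    X = rng.randn(200, 5)

    lw = LedoitWolf().fit(X)

    cov = empirical_covariance(X)              # empirical covariance of the data
    mu = np.trace(cov) / cov.shape[0]          # mu = trace(cov) / n_features
    shrunk = (1 - lw.shrinkage_) * cov + lw.shrinkage_ * mu * np.identity(cov.shape[0])

    print(np.allclose(shrunk, lw.covariance_))  # True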

References

“A Well-Conditioned Estimator for Large-Dimensional Covariance Matrices”, Ledoit and Wolf, Journal of Multivariate Analysis, Volume 88, Issue 2, February 2004, pages 365-411.

fit(X, y=None)

Note

The documentation following is of the class wrapped by this class. There are some changes, in particular:

Fits the Ledoit-Wolf shrunk covariance model according to the given training data and parameters.

X : array-like, shape = [n_samples, n_features]
Training data, where n_samples is the number of samples and n_features is the number of features.

y : not used, present for API consistency purposes.

self : object
Returns self.
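
A minimal fit sketch with the pandas DataFrame input the ibex wrapper is meant for (data and column names are illustrative, not from the original documentation; y is simply omitted since it is ignored):

    import numpy as np
    import pandas as pd
    from ibex.sklearn.covariance import LedoitWolf

    rng = np.random.RandomState(0)
    X = pd.DataFrame(rng.randn(150, 3), columns=['x1', 'x2', 'x3'])

    # fit returns self, so construction and fitting can be chained.
    est = LedoitWolf().fit(X)

    print(est.shrinkage_)   # shrinkage coefficient chosen by the Ledoit-Wolf formula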

score(X_test, y=None)

Note

The documentation following is of the class wrapped by this class. There are some changes, in particular:

Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix.

X_test : array-like, shape = [n_samples, n_features]
Test data of which we compute the likelihood, where n_samples is the number of samples and n_features is the number of features. X_test is assumed to be drawn from the same distribution as the data used in fit (including centering).

y : not used, present for API consistency purposes.

res : float
The log-likelihood of the data set with self.covariance_ as an estimator of its covariance matrix.
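
A brief sketch of score on held-out data (assuming the same DataFrame-style usage as above; the split and data are illustrative only):

    import numpy as np
    import pandas as pd
    from ibex.sklearn.covariance import LedoitWolf

    rng = np.random.RandomState(1)
    columns = ['a', 'b', 'c']
    X_train = pd.DataFrame(rng.randn(200, 3), columns=columns)
    X_test = pd.DataFrame(rng.randn(50, 3), columns=columns)

    est = LedoitWolf().fit(X_train)

    # Log-likelihood of the held-out data under the fitted Gaussian model.
    print(est.score(X_test))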