OrthogonalMatchingPursuit

class ibex.sklearn.linear_model.OrthogonalMatchingPursuit(n_nonzero_coefs=None, tol=None, fit_intercept=True, normalize=True, precompute='auto')

Bases: sklearn.linear_model.omp.OrthogonalMatchingPursuit, ibex._base.FrameMixin

Note

The documentation following is of the class wrapped by this class. There are some changes, in particular in the use of pandas data structures for inputs and outputs.

Note

The documentation following is of the original class wrapped by this class. This class wraps the attribute coef_.

Example:

>>> import pandas as pd
>>> import numpy as np
>>> from ibex.sklearn import datasets
>>> iris = datasets.load_iris()
>>> features = iris['feature_names']
>>> iris = pd.DataFrame(
...     np.c_[iris['data'], iris['target']],
...     columns=features+['class'])
>>> iris[features]
   sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)
0                5.1               3.5                1.4               0.2
1                4.9               3.0                1.4               0.2
2                4.7               3.2                1.3               0.2
3                4.6               3.1                1.5               0.2
4                5.0               3.6                1.4               0.2
...
>>> from ibex.sklearn import linear_model as pd_linear_model
>>>
>>> prd = pd_linear_model.OrthogonalMatchingPursuit().fit(iris[features], iris['class'])
>>>
>>> prd.coef_
sepal length (cm)   ...
sepal width (cm)    ...
petal length (cm)   ...
petal width (cm)    ...
dtype: float64

Example:

>>> from ibex.sklearn import linear_model as pd_linear_model
>>> prd = pd_linear_model.OrthogonalMatchingPursuit().fit(iris[features], iris[['class', 'class']])
>>>
>>> prd.coef_
   sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)
0                ...               ...                ...               ...
1                ...               ...                ...               ...

Note

The documentation following is of the original class wrapped by this class. This class wraps the attribute intercept_.

Example:

>>> import pandas as pd
>>> import numpy as np
>>> from ibex.sklearn import datasets
>>> iris = datasets.load_iris()
>>> features = iris['feature_names']
>>> iris = pd.DataFrame(
...     np.c_[iris['data'], iris['target']],
...     columns=features+['class'])
>>> iris[features]
   sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)
0                5.1               3.5                1.4               0.2
1                4.9               3.0                1.4               0.2
2                4.7               3.2                1.3               0.2
3                4.6               3.1                1.5               0.2
4                5.0               3.6                1.4               0.2
...
>>> from ibex.sklearn import linear_model as pd_linear_model
>>> prd = pd_linear_model.OrthogonalMatchingPursuit().fit(iris[features], iris[['class', 'class']])
>>>
>>> prd.intercept_
class   ...
class   ...
dtype: float64

Orthogonal Matching Pursuit model (OMP)

Read more in the User Guide.

Parameters:

n_nonzero_coefs : int, optional
Desired number of non-zero entries in the solution. If None (the default), this value is set to 10% of n_features.
tol : float, optional
Maximum norm of the residual. If not None, overrides n_nonzero_coefs (see the sketch below).
fit_intercept : boolean, optional
Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (e.g. data is expected to be already centered).
normalize : boolean, optional, default True
This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use sklearn.preprocessing.StandardScaler before calling fit on an estimator with normalize=False.
precompute : {True, False, 'auto'}, default 'auto'
Whether to use a precomputed Gram and Xy matrix to speed up calculations. Improves performance when n_targets or n_samples is very large. Note that if you already have such matrices, you can pass them directly to the fit method.

Attributes:

coef_ : array, shape (n_features,) or (n_targets, n_features)
Parameter vector (w in the formula).
intercept_ : float or array, shape (n_targets,)
Independent term in the decision function.
n_iter_ : int or array-like
Number of active features across every target.
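
For example, the interplay of the two stopping criteria can be sketched as follows (reusing the iris DataFrame built in the examples above; the specific values n_nonzero_coefs=2 and tol=0.1 are illustrative assumptions, and the outputs are elided):

>>> from ibex.sklearn import linear_model as pd_linear_model
>>>
>>> # Cap the support size: at most two non-zero coefficients.
>>> omp_k = pd_linear_model.OrthogonalMatchingPursuit(n_nonzero_coefs=2)
>>> omp_k.fit(iris[features], iris['class']).coef_
sepal length (cm)   ...
sepal width (cm)    ...
petal length (cm)   ...
petal width (cm)    ...
dtype: float64
>>>
>>> # Stop on the residual norm instead; tol overrides n_nonzero_coefs.
>>> omp_tol = pd_linear_model.OrthogonalMatchingPursuit(tol=0.1)
>>> omp_tol.fit(iris[features], iris['class']).n_iter_
...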

Orthogonal matching pursuit was introduced in S. G. Mallat, Z. Zhang, Matching pursuits with time-frequency dictionaries, IEEE Transactions on Signal Processing, Vol. 41, No. 12. (December 1993), pp. 3397-3415. (http://blanche.polytechnique.fr/~mallat/papiers/MallatPursuit93.pdf)

This implementation is based on Rubinstein, R., Zibulevsky, M. and Elad, M., Efficient Implementation of the K-SVD Algorithm using Batch Orthogonal Matching Pursuit Technical Report - CS Technion, April 2008. http://www.cs.technion.ac.il/~ronrubin/Publications/KSVD-OMP-v2.pdf

See also: orthogonal_mp, orthogonal_mp_gram, lars_path, Lars, LassoLars, decomposition.sparse_encode

fit(X, y)

Note

The documentation following is of the class wrapped by this class. There are some changes, in particular in the use of pandas data structures for inputs and outputs.

Fit the model using X, y as training data.

Parameters:

X : array-like, shape (n_samples, n_features)
Training data.
y : array-like, shape (n_samples,) or (n_samples, n_targets)
Target values. Will be cast to X's dtype if necessary.

Returns:

self : object
Returns an instance of self.
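
A minimal fit sketch, assuming the iris DataFrame from the examples above (outputs elided):

>>> from ibex.sklearn import linear_model as pd_linear_model
>>>
>>> # X is a DataFrame of features, y a Series of targets; fit returns self,
>>> # so fitted attributes can be read off the returned object.
>>> prd = pd_linear_model.OrthogonalMatchingPursuit().fit(iris[features], iris['class'])
>>> prd.n_iter_
...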
predict(X)

Note

The documentation following is of the class wrapped by this class. There are some changes, in particular in the use of pandas data structures for inputs and outputs.

Predict using the linear model.

Parameters:

X : {array-like, sparse matrix}, shape = (n_samples, n_features)
Samples.

Returns:

C : array, shape = (n_samples,)
Returns predicted values.
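
A minimal predict sketch, assuming the fitted prd and the iris DataFrame from the examples above; under ibex the predictions are expected to come back as a pandas structure aligned with X's index (values elided):

>>> # Predict for the first five rows; the result follows X's index.
>>> prd.predict(iris[features].head())
0    ...
1    ...
2    ...
3    ...
4    ...
...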
score(X, y, sample_weight=None)

Note

The documentation following is of the class wrapped by this class. There are some changes, in particular in the use of pandas data structures for inputs and outputs.

Returns the coefficient of determination R^2 of the prediction.

The coefficient R^2 is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an R^2 score of 0.0.

Parameters:

X : array-like, shape = (n_samples, n_features)
Test samples.
y : array-like, shape = (n_samples) or (n_samples, n_outputs)
True values for X.
sample_weight : array-like, shape = [n_samples], optional
Sample weights.

Returns:

score : float
R^2 of self.predict(X) w.r.t. y.
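
The definition above can be checked by hand. The following sketch, assuming the fitted prd and the iris DataFrame from the examples above (numbers elided), computes u and v explicitly and compares 1 - u/v with the score method:

>>> # Residual sum of squares u and total sum of squares v, as defined above.
>>> u = ((iris['class'] - prd.predict(iris[features])) ** 2).sum()
>>> v = ((iris['class'] - iris['class'].mean()) ** 2).sum()
>>> 1 - u / v
...
>>> prd.score(iris[features], iris['class'])
...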