LassoLars

class ibex.sklearn.linear_model.LassoLars(alpha=1.0, fit_intercept=True, verbose=False, normalize=True, precompute='auto', max_iter=500, eps=2.220446049250313e-16, copy_X=True, fit_path=True, positive=False)

Bases: sklearn.linear_model.least_angle.LassoLars, ibex._base.FrameMixin
Note

The documentation following is of the class wrapped by this class. There are some changes, in particular:

- A parameter X denotes a pandas.DataFrame.
- A parameter y denotes a pandas.Series.
Note

The documentation following is of the original class wrapped by this class. This class wraps the attribute coef_.

Example:
>>> import pandas as pd
>>> import numpy as np
>>> from ibex.sklearn import datasets
>>> from ibex.sklearn.linear_model import LinearRegression as PdLinearRegression

>>> iris = datasets.load_iris()
>>> features = iris['feature_names']
>>> iris = pd.DataFrame(
...     np.c_[iris['data'], iris['target']],
...     columns=features+['class'])

>>> iris[features]
   sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)
0                5.1               3.5                1.4               0.2
1                4.9               3.0                1.4               0.2
2                4.7               3.2                1.3               0.2
3                4.6               3.1                1.5               0.2
4                5.0               3.6                1.4               0.2
...

>>> from ibex.sklearn import linear_model as pd_linear_model
>>>
>>> prd = pd_linear_model.LassoLars().fit(iris[features], iris['class'])
>>>
>>> prd.coef_
sepal length (cm)    ...
sepal width (cm)     ...
petal length (cm)    ...
petal width (cm)     ...
dtype: float64
Example:

>>> from ibex.sklearn import linear_model as pd_linear_model
>>> prd = pd_linear_model.LassoLars().fit(iris[features], iris[['class', 'class']])
>>>
>>> prd.coef_
      sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)
0...           0.414988          1.461297         -2.262141         -1.029095
1...           0.416640         -1.600833          0.577658         -1.385538
2...          -1.707525         -1.534268          2.470972          2.555382
Note

The documentation following is of the original class wrapped by this class. This class wraps the attribute intercept_.

Example:
>>> import pandas as pd
>>> import numpy as np
>>> from ibex.sklearn import datasets
>>> from ibex.sklearn.linear_model import LinearRegression as PdLinearRegression

>>> iris = datasets.load_iris()
>>> features = iris['feature_names']
>>> iris = pd.DataFrame(
...     np.c_[iris['data'], iris['target']],
...     columns=features+['class'])

>>> iris[features]
   sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)
0                5.1               3.5                1.4               0.2
1                4.9               3.0                1.4               0.2
2                4.7               3.2                1.3               0.2
3                4.6               3.1                1.5               0.2
4                5.0               3.6                1.4               0.2
...

>>> from ibex.sklearn import linear_model as pd_linear_model
>>> prd = pd_linear_model.LassoLars().fit(iris[features], iris[['class', 'class']])
>>>
>>> prd.intercept_
sepal length (cm)    ...
sepal width (cm)     ...
petal length (cm)    ...
petal width (cm)     ...
dtype: float64
Lasso model fit with Least Angle Regression a.k.a. Lars
It is a Linear Model trained with an L1 prior as regularizer.
The optimization objective for Lasso is:
(1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
Read more in the User Guide.
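As a quick illustration (ours, not part of the wrapped docstring; the helper name lasso_objective is hypothetical), the objective above can be evaluated directly with NumPy, which is handy for sanity-checking a fitted model:

>>> import numpy as np
>>> def lasso_objective(X, y, w, alpha=1.0):
...     # (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
...     n_samples = X.shape[0]
...     residual = y - X.dot(w)
...     return residual.dot(residual) / (2.0 * n_samples) + alpha * np.abs(w).sum()

Among candidate coefficient vectors, the coef_ of a fitted LassoLars should attain (approximately) the smallest value of this objective.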
Parameters

- alpha : float
  Constant that multiplies the penalty term. Defaults to 1.0. alpha = 0 is equivalent to an ordinary least square, solved by LinearRegression. For numerical reasons, using alpha = 0 with the LassoLars object is not advised and you should prefer the LinearRegression object.
- fit_intercept : boolean
  Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (e.g. data is expected to be already centered).
- verbose : boolean or integer, optional
  Sets the verbosity amount.
- normalize : boolean, optional, default True
  This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use sklearn.preprocessing.StandardScaler before calling fit on an estimator with normalize=False (a sketch of this follows the attribute list below).
- precompute : True | False | 'auto' | array-like
  Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument.
- max_iter : integer, optional
  Maximum number of iterations to perform.
- eps : float, optional
  The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.
- copy_X : boolean, optional, default True
  If True, X will be copied; else, it may be overwritten.
- fit_path : boolean
  If True the full path is stored in the coef_path_ attribute (a sketch of the path attributes follows the example below). If you compute the solution for a large problem or many targets, setting fit_path to False will lead to a speedup, especially with a small alpha.
- positive : boolean (default=False)
  Restrict coefficients to be >= 0. Be aware that you might want to remove fit_intercept which is set True by default. Under the positive restriction the model coefficients will not converge to the ordinary-least-squares solution for small values of alpha. Only coefficients up to the smallest alpha value (alphas_[alphas_ > 0.].min() when fit_path=True) reached by the stepwise Lars-Lasso algorithm are typically in congruence with the solution of the coordinate descent Lasso estimator.

Attributes

- alphas_ : array, shape (n_alphas + 1,) | list of n_targets such arrays
  Maximum of covariances (in absolute value) at each iteration. n_alphas is either max_iter, n_features, or the number of nodes in the path with correlation greater than alpha, whichever is smaller.
- active_ : list, length = n_alphas | list of n_targets such lists
  Indices of active variables at the end of the path.
- coef_path_ : array, shape (n_features, n_alphas + 1) or list
  If a list is passed it's expected to be one of n_targets such arrays. The varying values of the coefficients along the path. It is not present if the fit_path parameter is False.
- coef_ : array, shape (n_features,) or (n_targets, n_features)
  Parameter vector (w in the formulation formula).
- intercept_ : float | array, shape (n_targets,)
  Independent term in decision function.
- n_iter_ : array-like or int
  The number of iterations taken by lars_path to find the grid of alphas for each target.
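Following up on the normalize note above, a minimal sketch (ours, not from the wrapped docstring) of standardizing explicitly with sklearn.preprocessing.StandardScaler instead of relying on normalize=True:

>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.linear_model import LassoLars
>>> # Scale features explicitly, then fit with normalization turned off.
>>> pipe = make_pipeline(StandardScaler(), LassoLars(alpha=0.01, normalize=False))
>>> _ = pipe.fit([[-1, 1], [0, 0], [1, 1]], [-1, 0, -1])

Note that StandardScaler divides by the standard deviation rather than the l2-norm, so this standardizes the data rather than reproducing normalize=True exactly.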
Examples

>>> from sklearn import linear_model
>>> reg = linear_model.LassoLars(alpha=0.01)
>>> reg.fit([[-1, 1], [0, 0], [1, 1]], [-1, 0, -1])
LassoLars(alpha=0.01, copy_X=True, eps=..., fit_intercept=True,
     fit_path=True, max_iter=500, normalize=True, positive=False,
     precompute='auto', verbose=False)
>>> print(reg.coef_)
[ 0.         -0.963257...]
See also

lars_path, lasso_path, Lasso, LassoCV, LassoLarsCV, sklearn.decomposition.sparse_encode
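As mentioned in the fit_path description, a small sketch (ours) of the path attributes stored when fit_path=True, on the same toy data as the example above:

>>> from sklearn import linear_model
>>> reg = linear_model.LassoLars(alpha=0.01, fit_path=True)
>>> _ = reg.fit([[-1, 1], [0, 0], [1, 1]], [-1, 0, -1])
>>> # alphas_ holds the breakpoints of the path; coef_path_ stores one
>>> # column of coefficients per breakpoint: shape (n_features, n_alphas + 1).
>>> reg.alphas_.shape[0] == reg.coef_path_.shape[1]
True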
fit(X, y, Xy=None)

Note

The documentation following is of the class wrapped by this class. There are some changes, in particular:

- A parameter X denotes a pandas.DataFrame.
- A parameter y denotes a pandas.Series.
Fit the model using X, y as training data.
Parameters

- X : array-like, shape (n_samples, n_features)
  Training data.
- y : array-like, shape (n_samples,) or (n_samples, n_targets)
  Target values.
- Xy : array-like, shape (n_samples,) or (n_samples, n_targets), optional
  Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.

Returns

- self : object
  Returns an instance of self.
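A hedged sketch (ours, using only the arguments documented on this page) of fitting with a precomputed Gram matrix and Xy; centering and normalization are disabled here so that the raw Gram matrix matches the data the solver actually sees:

>>> import numpy as np
>>> from sklearn.linear_model import LassoLars
>>> X = np.array([[-1., 1.], [0., 0.], [1., 1.]])
>>> y = np.array([-1., 0., -1.])
>>> gram = X.T.dot(X)   # precomputed Gram matrix
>>> Xy = X.T.dot(y)     # precomputed np.dot(X.T, y), as described above
>>> reg = LassoLars(alpha=0.01, fit_intercept=False, normalize=False,
...                 precompute=gram)
>>> _ = reg.fit(X, y, Xy=Xy)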
predict(X)

Note

The documentation following is of the class wrapped by this class. There are some changes, in particular:

- A parameter X denotes a pandas.DataFrame.
- A parameter y denotes a pandas.Series.
Predict using the linear model
Parameters

- X : {array-like, sparse matrix}, shape = (n_samples, n_features)
  Samples.

Returns

- C : array, shape = (n_samples,)
  Returns predicted values.
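A short sketch (ours, assuming the iris DataFrame and features list built in the examples at the top of this page): because this class mixes in FrameMixin, predicting from a pandas.DataFrame should yield a pandas.Series aligned with the input index:

>>> from ibex.sklearn import linear_model as pd_linear_model
>>> prd = pd_linear_model.LassoLars().fit(iris[features], iris['class'])
>>> y_hat = prd.predict(iris[features])   # a pandas.Series indexed like iris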
score(X, y, sample_weight=None)

Note

The documentation following is of the class wrapped by this class. There are some changes, in particular:

- A parameter X denotes a pandas.DataFrame.
- A parameter y denotes a pandas.Series.
Returns the coefficient of determination R^2 of the prediction.
The coefficient R^2 is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an R^2 score of 0.0.
Parameters

- X : array-like, shape = (n_samples, n_features)
  Test samples.
- y : array-like, shape = (n_samples) or (n_samples, n_outputs)
  True values for X.
- sample_weight : array-like, shape = [n_samples], optional
  Sample weights.

Returns

- score : float
  R^2 of self.predict(X) wrt. y.
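To make the definition above concrete, a minimal helper (ours; the name r2_by_hand is hypothetical) that computes the unweighted R^2 exactly as described. For a fitted estimator est, r2_by_hand(y, est.predict(X)) should match est.score(X, y):

>>> import numpy as np
>>> def r2_by_hand(y_true, y_pred):
...     u = ((y_true - y_pred) ** 2).sum()          # residual sum of squares
...     v = ((y_true - y_true.mean()) ** 2).sum()   # total sum of squares
...     return 1.0 - u / v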