RandomizedLasso

class ibex.sklearn.linear_model.RandomizedLasso(*args, **kwargs)

Bases: sklearn.linear_model.randomized_l1.RandomizedLasso, ibex._base.FrameMixin

Note

The documentation following is of the class wrapped by this class. There are some changes; in particular, X and y are expected to be pandas objects (a pandas.DataFrame and a pandas.Series, respectively) rather than numpy arrays.

Randomized Lasso.

Randomized Lasso works by subsampling the training data and computing a Lasso estimate where the penalty of a random subset of coefficients has been scaled. By performing this double randomization several times, the method assigns high scores to features that are repeatedly selected across randomizations. This is known as stability selection. In short, features selected more often are considered good features.
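As a rough illustrative sketch of this double randomization (a sketch only, not the library's implementation; the alpha value, weighting scheme, and nonzero threshold below are assumptions made for illustration):

>>> import numpy as np
>>> from sklearn.linear_model import Lasso
>>> rng = np.random.RandomState(0)
>>> X = rng.randn(200, 10)
>>> y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + 0.1 * rng.randn(200)
>>> n_resampling, sample_fraction, scaling = 200, 0.75, 0.5
>>> counts = np.zeros(X.shape[1])
>>> for _ in range(n_resampling):
...     # Randomization 1: subsample the training rows.
...     idx = rng.choice(len(X), int(sample_fraction * len(X)), replace=False)
...     # Randomization 2: shrink a random subset of columns by `scaling`;
...     # shrinking a column by w < 1 multiplies that coefficient's
...     # effective l1 penalty by 1/w.
...     w = np.where(rng.rand(X.shape[1]) < 0.5, scaling, 1.0)
...     coef = Lasso(alpha=0.05).fit(X[idx] * w, y[idx]).coef_
...     counts += np.abs(coef) > 1e-10
>>> scores = counts / n_resampling  # selection frequency, analogous to scores_

Rescaling the columns rather than modifying the Lasso objective directly is an equivalent way to scale the per-feature penalties.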

Parameters

alpha : float, ‘aic’, or ‘bic’, optional
The regularization parameter alpha in the Lasso. Warning: this is not the alpha parameter in the stability selection article; that role is played by scaling.
scaling : float, optional
The s parameter used to randomly scale the penalty of different features. Should be between 0 and 1.
sample_fraction : float, optional
The fraction of samples to be used in each randomized design. Should be between 0 and 1. If 1, all samples are used.
n_resampling : int, optional
Number of randomized models.
selection_threshold : float, optional
The score above which features should be selected.
fit_intercept : boolean, optional
Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (e.g., the data is expected to be already centered).
verbose : boolean or integer, optional
Sets the verbosity amount.
normalize : boolean, optional, default True
If True, the regressors X will be normalized before regression. This parameter is ignored when fit_intercept is set to False. Normalizing the regressors makes the learned hyperparameters more robust and almost independent of the number of samples; the same property does not hold for standardized data. If you wish to standardize, use preprocessing.StandardScaler before calling fit on an estimator with normalize=False.
precompute : True | False | ‘auto’ | array-like
Whether to use a precomputed Gram matrix to speed up calculations. If set to ‘auto’, the estimator decides. The Gram matrix can also be passed as an argument, but it will be used only for the selection of the parameter alpha, if alpha is ‘aic’ or ‘bic’.
max_iter : integer, optional
Maximum number of iterations to perform in the Lars algorithm.
eps : float, optional
The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the ‘tol’ parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
n_jobs : integer, optional
Number of CPUs to use during the resampling. If -1, all CPUs are used.
pre_dispatch : int, or string, optional

Controls the number of jobs that get dispatched during parallel execution. Reducing this number can be useful to avoid an explosion of memory consumption when more jobs get dispatched than CPUs can process. This parameter can be:

  • None, in which case all the jobs are immediately created and spawned. Use this for lightweight and fast-running jobs, to avoid delays due to on-demand spawning of the jobs
  • An int, giving the exact number of total jobs that are spawned
  • A string, giving an expression as a function of n_jobs, as in ‘2*n_jobs’
memory : None, str or object with the joblib.Memory interface, optional (default=None)
Used for internal caching. By default, no caching is done. If a string is given, it is the path to the caching directory.
Attributes

scores_ : array, shape = [n_features]
Feature scores between 0 and 1.
all_scores_ : array, shape = [n_features, n_reg_parameter]
Feature scores between 0 and 1 for all values of the regularization parameter. The reference article suggests scores_ is the max of all_scores_.
Examples

>>> from sklearn.linear_model import RandomizedLasso
>>> randomized_lasso = RandomizedLasso()
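Since this class is the ibex wrapper, X can also be passed as a pandas.DataFrame and y as a pandas.Series; the toy data, column names, and settings below are made up for illustration:

>>> import numpy as np
>>> import pandas as pd
>>> from ibex.sklearn.linear_model import RandomizedLasso
>>> rng = np.random.RandomState(0)
>>> X = pd.DataFrame(rng.randn(100, 4), columns=['a', 'b', 'c', 'd'])
>>> y = pd.Series(3.0 * X['a'] + 0.1 * rng.randn(100))
>>> randomized_lasso = RandomizedLasso(random_state=0).fit(X, y)
>>> scores = randomized_lasso.scores_  # one stability score per column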

References

Meinshausen, N. and Bühlmann, P., “Stability selection”, Journal of the Royal Statistical Society: Series B, Volume 72, Issue 4, pages 417–473, September 2010. DOI: 10.1111/j.1467-9868.2010.00740.x

See also

RandomizedLogisticRegression, Lasso, ElasticNet

fit(X, y)

Note

The documentation following is of the class wrapped by this class. There are some changes; in particular, X is expected to be a pandas.DataFrame and y a pandas.Series.

Fit the model using X, y as training data.

Parameters

X : array-like, shape = [n_samples, n_features]
Training data.
y : array-like, shape = [n_samples]
Target values. Will be cast to X’s dtype if necessary.

Returns

self : object
Returns an instance of self.
fit_transform(X, y=None, **fit_params)

Note

The documentation following is of the class wrapped by this class. There are some changes; in particular, X is expected to be a pandas.DataFrame, y a pandas.Series, and the transformed result is returned as a pandas.DataFrame.

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters

X : numpy array of shape [n_samples, n_features]
Training set.
y : numpy array of shape [n_samples]
Target values.

Returns

X_new : numpy array of shape [n_samples, n_features_new]
Transformed array.
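A minimal sketch, reusing the illustrative DataFrame setup from the class example above (selection_threshold is an arbitrary illustrative value):

>>> import numpy as np
>>> import pandas as pd
>>> from ibex.sklearn.linear_model import RandomizedLasso
>>> rng = np.random.RandomState(0)
>>> X = pd.DataFrame(rng.randn(100, 4), columns=['a', 'b', 'c', 'd'])
>>> y = pd.Series(3.0 * X['a'] + 0.1 * rng.randn(100))
>>> est = RandomizedLasso(random_state=0, selection_threshold=0.25)
>>> X_new = est.fit_transform(X, y)  # same as est.fit(X, y).transform(X)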
inverse_transform(X)

Note

The documentation following is of the class wrapped by this class. There are some changes; in particular, X is expected to be a pandas.DataFrame, and the result is returned as a pandas.DataFrame.

Reverse the transformation operation.

Parameters

X : array of shape [n_samples, n_selected_features]
The input samples.

Returns

X_r : array of shape [n_samples, n_original_features]
X with columns of zeros inserted where features would have been removed by transform. A round-trip sketch appears after the transform method below.
transform(X)

Note

The documentation following is of the class wrapped by this class. There are some changes; in particular, X is expected to be a pandas.DataFrame, and the result is returned as a pandas.DataFrame.

Reduce X to the selected features.

Parameters

X : array of shape [n_samples, n_features]
The input samples.

Returns

X_r : array of shape [n_samples, n_selected_features]
The input samples with only the selected features.
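A round-trip sketch covering transform and inverse_transform, under the same illustrative setup as the earlier examples:

>>> import numpy as np
>>> import pandas as pd
>>> from ibex.sklearn.linear_model import RandomizedLasso
>>> rng = np.random.RandomState(0)
>>> X = pd.DataFrame(rng.randn(100, 4), columns=['a', 'b', 'c', 'd'])
>>> y = pd.Series(3.0 * X['a'] + 0.1 * rng.randn(100))
>>> est = RandomizedLasso(random_state=0, selection_threshold=0.25).fit(X, y)
>>> X_r = est.transform(X)               # only the selected features remain
>>> X_back = est.inverse_transform(X_r)  # zeros inserted where features were removed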