RidgeClassifierCV

class ibex.sklearn.linear_model.RidgeClassifierCV(alphas=(0.1, 1.0, 10.0), fit_intercept=True, normalize=False, scoring=None, cv=None, class_weight=None)

Bases: sklearn.linear_model.ridge.RidgeClassifierCV, ibex._base.FrameMixin

Note

The documentation following is of the class wrapped by this class. There are some changes, in particular:

  • A parameter X denotes a pandas.DataFrame.
  • A parameter y denotes a pandas.Series or pandas.DataFrame.

Note

The documentation following is of the original class wrapped by this class. This class wraps the attribute coef_; as the examples below show, it is returned as a pandas.Series for a single target and as a pandas.DataFrame for multiple targets.

Example:

>>> import numpy as np
>>> from sklearn import datasets
>>> import pandas as pd
>>>
>>> iris = datasets.load_iris()
>>> features, targets, iris = iris['feature_names'], iris['target_names'], pd.DataFrame(
...     np.c_[iris['data'], iris['target']],
...     columns=iris['feature_names']+['class'])
>>> iris['class'] = iris['class'].map(pd.Series(targets))
>>>
>>> iris.head()
   sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)  \
0                5.1               3.5                1.4               0.2
1                4.9               3.0                1.4               0.2
2                4.7               3.2                1.3               0.2
3                4.6               3.1                1.5               0.2
4                5.0               3.6                1.4               0.2

    class
0  setosa
1  setosa
2  setosa
3  setosa
4  setosa
>>>
>>> from ibex.sklearn import linear_model as pd_linear_model
>>>
>>> clf = pd_linear_model.RidgeClassifierCV().fit(iris[features], iris['class'])
>>>
>>> clf.coef_
sepal length (cm)   ...
sepal width (cm)    ...
petal length (cm)   ...
petal width (cm)    ...
dtype: float64

Example:

>>> from ibex.sklearn import linear_model as pd_linear_model
>>>
>>> clf = pd_linear_model.RidgeClassifierCV().fit(iris[features], iris[['class', 'class']])
>>>
>>> clf.coef_
            sepal length (cm)  sepal width (cm)  petal length (cm)  \
setosa              ...
versicolor          ...
virginica           ...

            petal width (cm)
setosa             ...
versicolor         ...
virginica          ...

Note

The documentation following is of the original class wrapped by this class. This class wraps the attribute intercept_; as the examples below show, it is returned as a pandas object rather than a plain numpy array.

Example:

>>> import numpy as np
>>> from sklearn import datasets
>>> import pandas as pd
>>>
>>> iris = datasets.load_iris()
>>> features, targets, iris = iris['feature_names'], iris['target_names'], pd.DataFrame(
...     np.c_[iris['data'], iris['target']],
...     columns=iris['feature_names']+['class'])
>>> iris['class'] = iris['class'].map(pd.Series(targets))
>>>
>>> iris.head()
   sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)  \
0                5.1               3.5                1.4               0.2
1                4.9               3.0                1.4               0.2
2                4.7               3.2                1.3               0.2
3                4.6               3.1                1.5               0.2
4                5.0               3.6                1.4               0.2

    class
0  setosa
1  setosa
2  setosa
3  setosa
4  setosa
>>> from ibex.sklearn import linear_model as pd_linear_model
>>>
>>> clf = pd_linear_model.RidgeClassifierCV().fit(iris[features], iris['class'])
>>>
>>> clf.intercept_
sepal length (cm)   ...
sepal width (cm)    ...
petal length (cm)   ...
petal width (cm)    ...
dtype: float64

Example:

>>> from ibex.sklearn import linear_model as pd_linear_model
>>>
>>> clf = pd_linear_model.RidgeClassifierCV().fit(iris[features], iris[['class', 'class']])
>>>
>>> clf.intercept_
   sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)
0                ...
1                ...
2                ...

Ridge classifier with built-in cross-validation.

By default, it performs Generalized Cross-Validation, which is a form of efficient Leave-One-Out cross-validation. Currently, only the n_features > n_samples case is handled efficiently.

Read more in the User Guide.

alphas : numpy array of shape [n_alphas]
Array of alpha values to try. Regularization strength; must be a positive float. Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization. Alpha corresponds to C^-1 in other linear models such as LogisticRegression or LinearSVC.
fit_intercept : boolean
Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (e.g. the data is expected to be already centered).
normalize : boolean, optional, default False
This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use sklearn.preprocessing.StandardScaler before calling fit on an estimator with normalize=False.
scoring : string, callable or None, optional, default: None
A string (see model evaluation documentation) or a scorer callable object / function with signature scorer(estimator, X, y).
cv : int, cross-validation generator or an iterable, optional

Determines the cross-validation splitting strategy. Possible inputs for cv are:

  • None, to use the efficient Leave-One-Out cross-validation
  • integer, to specify the number of folds.
  • An object to be used as a cross-validation generator.
  • An iterable yielding train/test splits.

Refer to the User Guide for the various cross-validation strategies that can be used here.

class_weight : dict or ‘balanced’, optional

Weights associated with classes in the form {class_label: weight}. If not given, all classes are supposed to have weight one.

The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y))
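
Example (a minimal construction sketch combining the parameters above; the alpha grid, fold count, and class weighting are illustrative choices, not defaults):

>>> from ibex.sklearn import linear_model as pd_linear_model
>>>
>>> clf = pd_linear_model.RidgeClassifierCV(
...     alphas=(0.01, 0.1, 1.0, 10.0),  # candidate regularization strengths
...     cv=5,                           # 5-fold CV instead of the default Leave-One-Out
...     class_weight='balanced')        # reweight classes inversely to their frequency
>>> clf = clf.fit(iris[features], iris['class'])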

cv_values_ : array, shape = [n_samples, n_alphas] or shape = [n_samples, n_responses, n_alphas], optional
Cross-validation values for each alpha (if store_cv_values=True and cv=None). After fit() has been called, this attribute will contain the mean squared errors (by default) or the values of the {loss,score}_func function (if provided in the constructor).

coef_ : array, shape = [n_features] or [n_targets, n_features]
Weight vector(s).
intercept_ : float | array, shape = (n_targets,)
Independent term in decision function. Set to 0.0 if fit_intercept = False.
alpha_ : float
Estimated regularization parameter.
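
Example (a minimal sketch: after fit, alpha_ holds whichever value from alphas scored best, so the exact number is elided):

>>> clf = pd_linear_model.RidgeClassifierCV(alphas=(0.1, 1.0, 10.0)).fit(iris[features], iris['class'])
>>> clf.alpha_
...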

See also

Ridge : Ridge regression.
RidgeClassifier : Ridge classifier.
RidgeCV : Ridge regression with built-in cross-validation.

For multi-class classification, n_class classifiers are trained in a one-versus-all approach. Concretely, this is implemented by taking advantage of the multi-variate response support in Ridge.

decision_function(X)

Note

The documentation following is of the class wrapped by this class. There are some changes, in particular:

  • A parameter X denotes a pandas.DataFrame.

Predict confidence scores for samples.

The confidence score for a sample is the signed distance of that sample to the hyperplane.

X : {array-like, sparse matrix}, shape = (n_samples, n_features)
Samples.
array, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes)
Confidence scores per (sample, class) combination. In the binary case, confidence score for self.classes_[1] where >0 means this class would be predicted.
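
Example (a sketch of calling this through the wrapper; the three one-versus-all classifiers for iris yield one column of confidence scores per class, so whether the result comes back as a numpy array or a pandas.DataFrame its shape is the same):

>>> clf = pd_linear_model.RidgeClassifierCV().fit(iris[features], iris['class'])
>>> scores = clf.decision_function(iris[features])
>>> scores.shape
(150, 3)
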
fit(X, y, sample_weight=None)

Note

The documentation following is of the class wrapped by this class. There are some changes, in particular:

  • A parameter X denotes a pandas.DataFrame.
  • A parameter y denotes a pandas.Series.

Fit the ridge classifier.

X : array-like, shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
y : array-like, shape (n_samples,)
Target values. Will be cast to X’s dtype if necessary.
sample_weight : float or numpy array of shape (n_samples,)
Sample weight.
self : object
Returns self.
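
Example (a sketch with hypothetical uniform sample weights; any nonnegative array of length n_samples can be passed):

>>> import numpy as np
>>>
>>> weights = np.ones(len(iris))  # hypothetical per-sample weights, uniform here
>>> clf = pd_linear_model.RidgeClassifierCV().fit(iris[features], iris['class'], sample_weight=weights)
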
predict(X)

Note

The documentation following is of the class wrapped by this class. There are some changes, in particular:

  • A parameter X denotes a pandas.DataFrame.

Predict class labels for samples in X.

X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Samples.
C : array, shape = [n_samples]
Predicted class label per sample.
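
Example (a sketch; this assumes, consistent with the attribute examples above, that the wrapper returns the labels as a pandas.Series aligned to X’s index):

>>> clf.predict(iris[features]).head()
0    setosa
1    setosa
...
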
score(X, y, sample_weight=None)

Note

The documentation following is of the class wrapped by this class. There are some changes, in particular:

  • A parameter X denotes a pandas.DataFrame.
  • A parameter y denotes a pandas.Series.

Returns the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that each label set be correctly predicted for each sample.

X : array-like, shape = (n_samples, n_features)
Test samples.
y : array-like, shape = (n_samples) or (n_samples, n_outputs)
True labels for X.
sample_weight : array-like, shape = [n_samples], optional
Sample weights.
score : float
Mean accuracy of self.predict(X) wrt. y.
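
Example (the exact accuracy depends on the fitted model, so the value is elided):

>>> clf.score(iris[features], iris['class'])
...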