# StandardScaler

class ibex.sklearn.preprocessing.StandardScaler(copy=True, with_mean=True, with_std=True)

Bases: sklearn.preprocessing.data.StandardScaler, ibex._base.FrameMixin

Note

The documentation following is of the class wrapped by this class. There are some changes; in particular:

- A parameter X denotes a pandas.DataFrame.
- A parameter y denotes a pandas.Series.

Standardize features by removing the mean and scaling to unit variance

Centering and scaling happen independently on each feature by computing the relevant statistics on the samples in the training set. Mean and standard deviation are then stored to be used on later data using the transform method.
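
Concretely, each feature is transformed as z = (x - u) / s, where u is the training-set mean and s the training-set standard deviation. A minimal NumPy sketch of that arithmetic (an illustration of the formula, not the library's internal code path):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[0., 0.], [0., 0.], [1., 1.], [1., 1.]])

scaler = StandardScaler().fit(X)

# Manual equivalent: subtract the per-feature mean, divide by the
# per-feature standard deviation (ddof=0, matching the estimator).
manual = (X - X.mean(axis=0)) / X.std(axis=0)

assert np.allclose(scaler.transform(X), manual)
```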

Standardization of a dataset is a common requirement for many machine learning estimators: they might behave badly if the individual features do not more or less look like standard normally distributed data (e.g. Gaussian with 0 mean and unit variance).

For instance, many elements used in the objective function of a learning algorithm (such as the RBF kernel of Support Vector Machines or the L1 and L2 regularizers of linear models) assume that all features are centered around 0 and have variance of the same order. If a feature has a variance that is orders of magnitude larger than others, it might dominate the objective function and make the estimator unable to learn from other features correctly as expected.

This scaler can also be applied to sparse CSR or CSC matrices by passing with_mean=False to avoid breaking the sparsity structure of the data.
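
A short sketch of the sparse case, using a SciPy CSR matrix (with_mean=False keeps the result sparse; only per-feature scaling is applied):

```python
import numpy as np
from scipy import sparse
from sklearn.preprocessing import StandardScaler

# A sparse CSR matrix: centering would densify it, so disable it.
X_sparse = sparse.csr_matrix(np.array([[1., 0.], [0., 2.], [3., 0.]]))

scaler = StandardScaler(with_mean=False).fit(X_sparse)

# The output preserves the sparsity structure of the input.
X_scaled = scaler.transform(X_sparse)
assert sparse.issparse(X_scaled)
```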

Read more in the User Guide.

copy : boolean, optional, default True
If False, try to avoid a copy and do inplace scaling instead. This is not guaranteed to always work inplace; e.g. if the data is not a NumPy array or scipy.sparse CSR matrix, a copy may still be returned.
with_mean : boolean, True by default
If True, center the data before scaling. This does not work (and will raise an exception) when attempted on sparse matrices, because centering them entails building a dense matrix which in common use cases is likely to be too large to fit in memory.
with_std : boolean, True by default
If True, scale the data to unit variance (or equivalently, unit standard deviation).
scale_ : ndarray, shape (n_features,)

Per feature relative scaling of the data.

New in version 0.17: scale_

mean_ : array of floats with shape [n_features]
The mean value for each feature in the training set.
var_ : array of floats with shape [n_features]
The variance for each feature in the training set. Used to compute scale_
n_samples_seen_ : int
The number of samples processed by the estimator. Will be reset on new calls to fit, but increments across partial_fit calls.
>>> from sklearn.preprocessing import StandardScaler
>>> data = [[0, 0], [0, 0], [1, 1], [1, 1]]
>>> scaler = StandardScaler()
>>> print(scaler.fit(data))
StandardScaler(copy=True, with_mean=True, with_std=True)
>>> print(scaler.mean_)
[ 0.5  0.5]
>>> print(scaler.transform(data))
[[-1. -1.]
 [-1. -1.]
 [ 1.  1.]
 [ 1.  1.]]
>>> print(scaler.transform([[2, 2]]))
[[ 3.  3.]]


See also

scale : Equivalent function without the estimator API.

sklearn.decomposition.PCA : Further removes the linear correlation across features with whiten=True.

For a comparison of the different scalers, transformers, and normalizers, see examples/preprocessing/plot_all_scaling.py.

fit(X, y=None)[source]

Note

The documentation following is of the class wrapped by this class. There are some changes; in particular:

- A parameter X denotes a pandas.DataFrame.
- A parameter y denotes a pandas.Series.

Compute the mean and std to be used for later scaling.

X : {array-like, sparse matrix}, shape [n_samples, n_features]
The data used to compute the mean and standard deviation used for later scaling along the features axis.

y : Passthrough for Pipeline compatibility.
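
The y passthrough is what lets the scaler sit inside a Pipeline: fit ignores y itself but forwards it to downstream steps. A minimal sketch (LogisticRegression is just an illustrative final step):

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0, 0, 1, 1])

# The scaler's fit ignores y; the pipeline passes it to the classifier.
pipe = Pipeline([('scale', StandardScaler()), ('clf', LogisticRegression())])
pipe.fit(X, y)
```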

fit_transform(X, y=None, **fit_params)

Note

The documentation following is of the class wrapped by this class. There are some changes; in particular:

- A parameter X denotes a pandas.DataFrame.
- A parameter y denotes a pandas.Series.

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

X : numpy array of shape [n_samples, n_features]
Training set.
y : numpy array of shape [n_samples]
Target values.
X_new : numpy array of shape [n_samples, n_features_new]
Transformed array.
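
As a sketch of the contract, fit_transform is equivalent to calling fit followed by transform on the same data, though it may be implemented more efficiently:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[0., 0.], [0., 0.], [1., 1.], [1., 1.]])

# One-step and two-step paths produce the same result.
a = StandardScaler().fit_transform(X)
b = StandardScaler().fit(X).transform(X)

assert np.allclose(a, b)
```
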
inverse_transform(X, copy=None)[source]

Note

The documentation following is of the class wrapped by this class. There are some changes; in particular:

- A parameter X denotes a pandas.DataFrame.
- A parameter y denotes a pandas.Series.

Scale back the data to the original representation.

X : array-like, shape [n_samples, n_features]
The data used to scale along the features axis.
copy : bool, optional (default: None)
Copy the input X or not.
X_tr : array-like, shape [n_samples, n_features]
Transformed array.
partial_fit(X, y=None)[source]

Note

The documentation following is of the class wrapped by this class. There are some changes; in particular:

- A parameter X denotes a pandas.DataFrame.
- A parameter y denotes a pandas.Series.

Online computation of mean and std on X for later scaling.

All of X is processed as a single batch. This is intended for cases when fit is not feasible due to a very large number of samples, or because X is read from a continuous stream.

The algorithm for incremental mean and std is given in Equation 1.5a,b in Chan, Tony F., Gene H. Golub, and Randall J. LeVeque. "Algorithms for computing the sample variance: Analysis and recommendations." The American Statistician 37.3 (1983): 242-247.

X : {array-like, sparse matrix}, shape [n_samples, n_features]
The data used to compute the mean and standard deviation used for later scaling along the features axis.

y : Passthrough for Pipeline compatibility.
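
A sketch of the streaming use case: fitting batch by batch with partial_fit yields (to within floating-point tolerance) the same statistics as a single fit on the full data:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.random.RandomState(0).rand(100, 3)

# Accumulate statistics incrementally over four batches.
batched = StandardScaler()
for batch in np.array_split(X, 4):
    batched.partial_fit(batch)

# Compare against a single fit on all the data at once.
full = StandardScaler().fit(X)
assert np.allclose(batched.mean_, full.mean_)
assert np.allclose(batched.scale_, full.scale_)
```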

transform(X, y='deprecated', copy=None)[source]

Note

The documentation following is of the class wrapped by this class. There are some changes; in particular:

- A parameter X denotes a pandas.DataFrame.
- A parameter y denotes a pandas.Series.

Perform standardization by centering and scaling.

X : array-like, shape [n_samples, n_features]
The data used to scale along the features axis.
y : (ignored)

Deprecated since version 0.19: This parameter will be removed in 0.21.

copy : bool, optional (default: None)
Copy the input X or not.