QuadraticDiscriminantAnalysis

class ibex.sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis(priors=None, reg_param=0.0, store_covariance=False, tol=0.0001, store_covariances=None)

Bases: sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis, ibex._base.FrameMixin

Note

The documentation that follows is that of the class wrapped by this class; some details differ.

Quadratic Discriminant Analysis

A classifier with a quadratic decision boundary, generated by fitting class conditional densities to the data and using Bayes’ rule.

The model fits a Gaussian density to each class.

New in version 0.17: QuadraticDiscriminantAnalysis

Read more in the User Guide.

Parameters

priors : array, optional, shape = [n_classes]
Priors on classes.
reg_param : float, optional
Regularizes the covariance estimate as (1 - reg_param) * Sigma + reg_param * np.eye(n_features).
store_covariance : boolean

If True, the covariance matrices are computed and stored in the self.covariance_ attribute.

New in version 0.17.

tol : float, optional, default 1.0e-4

Threshold used for rank estimation.

New in version 0.17.
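
For intuition, the shrinkage controlled by reg_param can be reproduced directly in NumPy. The following sketch is illustrative only (plain NumPy applied to one class's samples, not part of this class's API):

>>> import numpy as np
>>> X_k = np.array([[-1., -1.], [-2., -1.], [-3., -2.]])  # samples of one class
>>> Sigma = np.cov(X_k, rowvar=False)                     # per-class covariance estimate
>>> reg_param = 0.1
>>> Sigma_reg = (1 - reg_param) * Sigma + reg_param * np.eye(X_k.shape[1])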

Attributes

covariance_ : list of array-like, shape = [n_features, n_features]
Covariance matrices of each class.
means_ : array-like, shape = [n_classes, n_features]
Class means.
priors_ : array-like, shape = [n_classes]
Class priors (sum to 1).
rotations_ : list of arrays
For each class k, an array of shape [n_features, n_k], with n_k = min(n_features, number of elements in class k). It is the rotation of the Gaussian distribution, i.e. its principal axis.
scalings_ : list of arrays
For each class k, an array of shape [n_k]. It contains the scaling of the Gaussian distributions along their principal axes, i.e. the variance in the rotated coordinate system.
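
Read together, these two attributes describe each class covariance in its eigenbasis. As a hedged illustration (this reconstruction matches the usual reading of rotations_ and scalings_, but is not documented API):

>>> import numpy as np
>>> from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> y = np.array([1, 1, 1, 2, 2, 2])
>>> qda = QuadraticDiscriminantAnalysis(store_covariance=True).fit(X, y)
>>> R, s = qda.rotations_[0], qda.scalings_[0]
>>> np.allclose(qda.covariance_[0], (R * s) @ R.T)  # R diag(s) R^T
True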
Examples

>>> from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
>>> import numpy as np
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> y = np.array([1, 1, 1, 2, 2, 2])
>>> clf = QuadraticDiscriminantAnalysis()
>>> clf.fit(X, y)
QuadraticDiscriminantAnalysis(priors=None, reg_param=0.0,
                              store_covariance=False,
                              store_covariances=None, tol=0.0001)
>>> print(clf.predict([[-0.8, -1]]))
[1]
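
Because this class also mixes in ibex._base.FrameMixin, the same model can be driven with pandas structures. A minimal sketch, assuming the ibex wrapper accepts a DataFrame X and a Series y and returns pandas objects (the column names here are made up for illustration):

>>> import pandas as pd
>>> from ibex.sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis as PdQDA
>>> X_df = pd.DataFrame({'a': [-1, -2, -3, 1, 2, 3], 'b': [-1, -1, -2, 1, 1, 2]})
>>> y_sr = pd.Series([1, 1, 1, 2, 2, 2])
>>> prd = PdQDA().fit(X_df, y_sr)
>>> prd.predict(pd.DataFrame({'a': [-0.8], 'b': [-1]}))  # a pandas result indexed like its input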
See also

sklearn.discriminant_analysis.LinearDiscriminantAnalysis : Linear Discriminant Analysis
decision_function(X)

Note

The documentation that follows is that of the class wrapped by this class; some details differ.

Apply decision function to an array of samples.

Parameters

X : array-like, shape = [n_samples, n_features]
Array of samples (test vectors).

Returns

C : array, shape = [n_samples, n_classes] or [n_samples,]
Decision function values related to each class, per sample. In the two-class case, the shape is [n_samples,], giving the log likelihood ratio of the positive class.
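
Continuing the plain-sklearn Examples snippet above (a sketch; the exact values depend on the fit, but the two-class case yields a 1-D array):

>>> scores = clf.decision_function(X)  # log likelihood ratio of the positive class
>>> scores.shape
(6,)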
fit(X, y)

Note

The documentation that follows is that of the class wrapped by this class; some details differ.

Fit the model according to the given training data and parameters.

Changed in version 0.19: store_covariances has been moved to the main constructor as store_covariance.

Changed in version 0.19: tol has been moved to the main constructor.

Parameters

X : array-like, shape = [n_samples, n_features]
Training vector, where n_samples is the number of samples and n_features is the number of features.
y : array, shape = [n_samples]
Target values (integers).
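
A brief sketch of the 0.19-style usage implied by the notes above, passing store_covariance at construction time rather than to fit (reusing X and y from the Examples section):

>>> qda = QuadraticDiscriminantAnalysis(store_covariance=True)
>>> _ = qda.fit(X, y)
>>> len(qda.covariance_)  # one covariance matrix per class
2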
predict(X)

Note

The documentation that follows is that of the class wrapped by this class; some details differ.

Perform classification on an array of test vectors X.

The predicted class C for each sample in X is returned.

Parameters

X : array-like, shape = [n_samples, n_features]

Returns

C : array, shape = [n_samples]

predict_log_proba(X)

Note

The documentation that follows is that of the class wrapped by this class; some details differ.

Return posterior probabilities of classification.

Parameters

X : array-like, shape = [n_samples, n_features]
Array of samples/test vectors.

Returns

C : array, shape = [n_samples, n_classes]
Posterior log-probabilities of classification per class.
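
For illustration, with the fitted clf from the Examples section (a sketch; these values are simply the element-wise log of predict_proba):

>>> log_proba = clf.predict_log_proba(X)  # same shape as predict_proba(X), on a log scale
>>> log_proba.shape
(6, 2)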
predict_proba(X)

Note

The documentation that follows is that of the class wrapped by this class; some details differ.

Return posterior probabilities of classification.

Parameters

X : array-like, shape = [n_samples, n_features]
Array of samples/test vectors.

Returns

C : array, shape = [n_samples, n_classes]
Posterior probabilities of classification per class.
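
Again with the fitted clf from the Examples section, a quick sanity check (posteriors have one column per class, and each row sums to 1):

>>> proba = clf.predict_proba(X)
>>> proba.shape
(6, 2)
>>> np.allclose(proba.sum(axis=1), 1.0)
True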
score(X, y, sample_weight=None)

Note

The documentation that follows is that of the class wrapped by this class; some details differ.

Returns the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy, which is a harsh metric since you require for each sample that each label set be correctly predicted.

Parameters

X : array-like, shape = (n_samples, n_features)
Test samples.
y : array-like, shape = (n_samples) or (n_samples, n_outputs)
True labels for X.
sample_weight : array-like, shape = [n_samples], optional
Sample weights.

Returns

score : float
Mean accuracy of self.predict(X) with respect to y.
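
As a closing sketch, scoring the fitted clf from the Examples section on its own training data (the toy classes are well separated, so training accuracy should be perfect here):

>>> clf.score(X, y)  # mean accuracy of clf.predict(X) against y
1.0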