GMM

class ibex.sklearn.mixture.GMM(*args, **kwargs)

Bases: sklearn.mixture.gmm.GMM, ibex._base.FrameMixin

Note

The documentation that follows is of the class wrapped by this class. There are some changes; in particular, a parameter X denotes a pandas.DataFrame, a parameter y denotes a pandas.Series, and results are returned as pandas objects.

Legacy Gaussian Mixture Model

Deprecated since version 0.18: This class will be removed in 0.20. Use sklearn.mixture.GaussianMixture instead.
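A minimal usage sketch (the data, column names, and parameter values below are illustrative, not taken from the documentation); being an ibex wrapper, the estimator accepts and returns pandas objects:

```python
import numpy as np
import pandas as pd

from ibex.sklearn.mixture import GMM

# Illustrative two-column DataFrame; any numeric columns work.
X = pd.DataFrame(
    np.random.RandomState(0).randn(200, 2),
    columns=['x0', 'x1'])

gmm = GMM(n_components=2, covariance_type='full')
gmm.fit(X)
labels = gmm.predict(X)  # component labels, indexed like X
```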

aic(X)

Akaike information criterion for the current model fit and the proposed data.

Parameters:
X : array of shape (n_samples, n_dimensions)

Returns:
aic : float (the lower the better)
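With gmm and X as in the sketch above, the criterion is read directly off the fitted model (an illustrative sketch, not from the original docs):

```python
# Lower AIC indicates a better trade-off between fit quality and model complexity.
aic_value = gmm.aic(X)
```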

bic(X)

Bayesian information criterion for the current model fit and the proposed data.

Parameters:
X : array of shape (n_samples, n_dimensions)

Returns:
bic : float (the lower the better)
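A common use of bic (or aic) is selecting the number of components; a minimal sketch, reusing the DataFrame X from the first example:

```python
# Fit candidate models and keep the one with the lowest BIC.
candidates = [GMM(n_components=k, covariance_type='full').fit(X)
              for k in range(1, 6)]
best = min(candidates, key=lambda m: m.bic(X))
```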

fit(X, y=None)

Estimate model parameters with the EM algorithm.

An initialization step is performed before entering the expectation-maximization (EM) algorithm. To skip this step, set the keyword argument init_params to the empty string '' when creating the GMM object. Likewise, to perform only the initialization, set n_iter=0.

Parameters:
X : array_like, shape (n, n_features)
List of n_features-dimensional data points. Each row corresponds to a single data point.

Returns:
self
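Because fit returns self, calls can be chained; the init_params and n_iter options mentioned above look like this in practice (a sketch using the X defined in the first example):

```python
# fit() returns self, so fitting and predicting can be chained.
labels = GMM(n_components=2).fit(X).predict(X)

# n_iter=0 performs only the initialization step described above.
initialized = GMM(n_components=2, n_iter=0).fit(X)
```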

fit_predict(X, y=None)

Fit and then predict labels for data.

Warning: Due to the final maximization step in the EM algorithm, the prediction may not be 100% accurate when the number of iterations is low.

New in version 0.17: fit_predict method in Gaussian Mixture Model.

Parameters:
X : array-like, shape = [n_samples, n_features]

Returns:
C : array, shape = (n_samples,)
Component memberships.
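A sketch of the one-call variant (X as in the first example):

```python
# Equivalent to fit(X) followed by predict(X), up to the caveat in the warning above.
memberships = GMM(n_components=2).fit_predict(X)
```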

predict(X)

Predict label for data.

Parameters:
X : array-like, shape = [n_samples, n_features]

Returns:
C : array, shape = (n_samples,)
Component memberships.
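A sketch of predicting labels for new observations (gmm and X as in the first example; the columns must match those used during fit):

```python
# Hypothetical new observations with the same columns as the training DataFrame.
X_new = pd.DataFrame([[0.1, -0.2], [2.5, 3.0]], columns=['x0', 'x1'])
labels_new = gmm.predict(X_new)
```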

predict_proba(X)

Predict posterior probability of data under each Gaussian in the model.

Parameters:
X : array-like, shape = [n_samples, n_features]

Returns:
responsibilities : array-like, shape = (n_samples, n_components)
Returns the probability of the sample for each Gaussian (state) in the model.
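A sketch using the fitted gmm from the first example; each row of the returned responsibilities sums to 1:

```python
# Posterior probability of each component for every sample.
resp = gmm.predict_proba(X)  # shape (n_samples, n_components)
assert np.allclose(np.asarray(resp).sum(axis=1), 1.0)
```
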
score(X, y=None)

Compute the log probability under the model.

Parameters:
X : array_like, shape (n_samples, n_features)
List of n_features-dimensional data points. Each row corresponds to a single data point.

Returns:
logprob : array_like, shape (n_samples,)
Log probabilities of each data point in X.
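A sketch with gmm and X as above; unlike the newer GaussianMixture.score, this returns one value per sample rather than a single average:

```python
# Per-sample log-likelihoods under the fitted mixture.
logprob = gmm.score(X)                   # shape (n_samples,)
avg_loglik = np.asarray(logprob).mean()  # handy scalar summary for comparing models
```
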
score_samples(X)

Return the per-sample likelihood of the data under the model.

Compute the log probability of X under the model and return the posterior distribution (responsibilities) of each mixture component for each element of X.

Parameters:
X : array_like, shape (n_samples, n_features)
List of n_features-dimensional data points. Each row corresponds to a single data point.

Returns:
logprob : array_like, shape (n_samples,)
Log probabilities of each data point in X.

responsibilities : array_like, shape (n_samples, n_components)
Posterior probabilities of each mixture component for each observation.
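A sketch with gmm and X as above; both quantities come back from a single call:

```python
# Per-sample log-likelihoods and per-component responsibilities together.
logprob, responsibilities = gmm.score_samples(X)
# logprob: shape (n_samples,); responsibilities: shape (n_samples, n_components)
```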