LatentDirichletAllocation

class ibex.sklearn.decomposition.LatentDirichletAllocation(n_components=10, doc_topic_prior=None, topic_word_prior=None, learning_method=None, learning_decay=0.7, learning_offset=10.0, max_iter=10, batch_size=128, evaluate_every=-1, total_samples=1000000.0, perp_tol=0.1, mean_change_tol=0.001, max_doc_update_iter=100, n_jobs=1, verbose=0, random_state=None, n_topics=None)

Bases: sklearn.decomposition.online_lda.LatentDirichletAllocation, ibex._base.FrameMixin

Note

The documentation following is of the class wrapped by this class. There are some changes; in particular, this class wraps the attribute components_, and it accepts and returns pandas structures, as in the example below.

Example:

>>> import pandas as pd
>>> import numpy as np
>>> from ibex.sklearn import datasets
>>> from ibex.sklearn.decomposition import PCA as PdPCA
>>> iris = datasets.load_iris()
>>> features = iris['feature_names']
>>> iris = pd.DataFrame(
...     np.c_[iris['data'], iris['target']],
...     columns=features+['class'])
>>> iris[features]
   sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)
0                5.1               3.5                1.4               0.2
1                4.9               3.0                1.4               0.2
2                4.7               3.2                1.3               0.2
3                4.6               3.1                1.5               0.2
4                5.0               3.6                1.4               0.2
...
>>> PdPCA(n_components=2).fit(iris[features], iris['class']).transform(iris[features])
    comp_0    comp_1
0   -2.684207 ...0.326607
1   -2.715391 ...0.169557
2   -2.889820 ...0.137346
3   -2.746437 ...0.311124
4   -2.728593 ...0.333925
...

Latent Dirichlet Allocation with online variational Bayes algorithm

New in version 0.17.

Read more in the User Guide.
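The PCA example above illustrates the general wrapper pattern. Here is a minimal LDA-specific sketch; the vocabulary, counts, and variable names are hypothetical toy data, not part of the original documentation:

>>> import pandas as pd
>>> import numpy as np
>>> from ibex.sklearn.decomposition import LatentDirichletAllocation as PdLDA
>>> vocab = ['word_a', 'word_b', 'word_c', 'word_d']
>>> counts = pd.DataFrame(  # toy document-word count matrix, 6 docs x 4 words
...     [[4, 0, 1, 0], [3, 1, 0, 0], [0, 0, 3, 2],
...      [0, 1, 2, 3], [5, 1, 0, 0], [0, 0, 1, 4]],
...     columns=vocab)
>>> lda = PdLDA(n_components=2, learning_method='batch', random_state=0)
>>> doc_topics = lda.fit(counts).transform(counts)

The result is a DataFrame of shape (6, 2), with columns presumably named comp_0 and comp_1 as in the PCA example above.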

Parameters:

n_components : int, optional (default=10)
Number of topics.
doc_topic_prior : float, optional (default=None)
Prior of document topic distribution theta. If the value is None, defaults to 1 / n_components. In the literature, this is called alpha.
topic_word_prior : float, optional (default=None)
Prior of topic word distribution beta. If the value is None, defaults to 1 / n_components. In the literature, this is called eta.
learning_method : ‘batch’ | ‘online’, default=’online’
Method used to update components_. Only used in the fit method. In general, if the data size is large, the online update will be much faster than the batch update. The default learning method will change to ‘batch’ in the 0.20 release. Valid options:

'batch': Batch variational Bayes method. Use all training data in
    each EM update.
    Old `components_` will be overwritten in each iteration.
'online': Online variational Bayes method. In each EM update, use
    mini-batch of training data to update the ``components_``
    variable incrementally. The learning rate is controlled by the
    ``learning_decay`` and the ``learning_offset`` parameters.
learning_decay : float, optional (default=0.7)
A parameter that controls the learning rate in the online learning method. The value should be set between (0.5, 1.0] to guarantee asymptotic convergence. When the value is 0.0 and batch_size is n_samples, the update method is the same as batch learning. In the literature, this is called kappa (see the step-size sketch after this parameter list).
learning_offset : float, optional (default=10.)
A (positive) parameter that downweights early iterations in online learning. It should be greater than 1.0. In the literature, this is called tau_0.
max_iter : integer, optional (default=10)
The maximum number of iterations.
batch_size : int, optional (default=128)
Number of documents to use in each EM iteration. Only used in online learning.
evaluate_every : int, optional (default=-1)
How often to evaluate perplexity. Only used in the fit method. Set it to 0 or a negative number to not evaluate perplexity in training at all. Evaluating perplexity can help you check convergence in the training process, but it will also increase total training time. Evaluating perplexity in every iteration might increase training time up to two-fold.
total_samples : int, optional (default=1e6)
Total number of documents. Only used in the partial_fit method.
perp_tol : float, optional (default=1e-1)
Perplexity tolerance in batch learning. Only used when evaluate_every is greater than 0.
mean_change_tol : float, optional (default=1e-3)
Stopping tolerance for updating document topic distribution in E-step.
max_doc_update_iter : int (default=100)
Max number of iterations for updating document topic distribution in the E-step.
n_jobs : int, optional (default=1)
The number of jobs to use in the E-step. If -1, all CPUs are used. For n_jobs below -1, (n_cpus + 1 + n_jobs) are used.
verbose : int, optional (default=0)
Verbosity level.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
n_topics : int, optional (default=None)
Deprecated since version 0.19: this parameter has been renamed to n_components and will be removed in version 0.21.
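As referenced from learning_decay above, here is a small sketch of the online step size implied by learning_decay (kappa) and learning_offset (tau_0). The weighting rho_t = (tau_0 + t) ** (-kappa) follows reference [1] below; exactly which iteration counter scikit-learn plugs in is an internal detail:

>>> def online_step_size(t, learning_offset=10., learning_decay=0.7):
...     # rho_t = (tau_0 + t) ** (-kappa): decays toward zero, and a larger
...     # learning_offset downweights the earliest mini-batches more strongly.
...     return (learning_offset + t) ** -learning_decay
>>> round(online_step_size(0), 4)
0.1995
>>> round(online_step_size(100), 4)
0.0372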
Attributes:

components_ : array, [n_components, n_features]
Variational parameters for topic word distribution. Since the complete conditional for topic word distribution is a Dirichlet, components_[i, j] can be viewed as a pseudocount representing the number of times word j was assigned to topic i. It can also be viewed as a distribution over the words for each topic after normalization: model.components_ / model.components_.sum(axis=1)[:, np.newaxis] (see the normalization sketch after this list).
n_batch_iter_ : int
Number of iterations of the EM step.
n_iter_ : int
Number of passes over the dataset.
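Continuing the toy sketch from the class example, components_ can be normalized into per-topic word distributions exactly as described above; np.asarray is used because under this wrapper the attribute may come back as a pandas object:

>>> comps = np.asarray(lda.components_)
>>> topic_word = comps / comps.sum(axis=1)[:, np.newaxis]
>>> np.allclose(topic_word.sum(axis=1), 1.0)  # each topic now sums to 1
True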
References:

[1] Matthew D. Hoffman, David M. Blei, and Francis Bach, “Online Learning for Latent Dirichlet Allocation”, 2010.
[2] Matthew D. Hoffman, David M. Blei, Chong Wang, and John Paisley, “Stochastic Variational Inference”, 2013.
[3] Matthew D. Hoffman’s onlineldavb code: http://matthewdhoffman.com//code/onlineldavb.tar
fit(X, y=None)


Learn model for the data X with variational Bayes method.

When learning_method is ‘online’, use mini-batch update. Otherwise, use batch update.

Parameters:

X : array-like or sparse matrix, shape=(n_samples, n_features)
Document word matrix.

y : Ignored.

Returns:

self
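A sketch of both update modes, reusing the hypothetical counts DataFrame from the class example:

>>> lda_batch = PdLDA(n_components=2, learning_method='batch',
...                   random_state=0).fit(counts)
>>> lda_online = PdLDA(n_components=2, learning_method='online',
...                    batch_size=3, random_state=0).fit(counts)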

fit_transform(X, y=None, **fit_params)


Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters:

X : numpy array of shape [n_samples, n_features]
Training set.

y : numpy array of shape [n_samples]
Target values.

Returns:

X_new : numpy array of shape [n_samples, n_features_new]
Transformed array.
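With this wrapper the transformed output is a DataFrame indexed like the input; a one-line sketch with the toy counts from the class example:

>>> doc_topics = PdLDA(n_components=2, random_state=0).fit_transform(counts)
>>> doc_topics.shape
(6, 2)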
partial_fit(X, y=None)


Online VB with Mini-Batch update.

Parameters:

X : array-like or sparse matrix, shape=(n_samples, n_features)
Document word matrix.

y : Ignored.

Returns:

self
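A sketch of incremental fitting in mini-batches, again reusing the toy counts; total_samples is set to the (toy) corpus size since it is only used by partial_fit:

>>> lda_inc = PdLDA(n_components=2, total_samples=6, random_state=0)
>>> for start in range(0, len(counts), 3):
...     lda_inc = lda_inc.partial_fit(counts.iloc[start:start + 3])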

perplexity(X, doc_topic_distr='deprecated', sub_sampling=False)


Calculate approximate perplexity for data X.

Perplexity is defined as exp(-1. * log-likelihood per word)

Changed in version 0.19: the doc_topic_distr argument has been deprecated and is ignored, because the user no longer has access to the unnormalized distribution.

Parameters:

X : array-like or sparse matrix, [n_samples, n_features]
Document word matrix.

doc_topic_distr : None or array, shape=(n_samples, n_components)
Document topic distribution. This argument is deprecated and is currently ignored.

Deprecated since version 0.19.

sub_sampling : bool
Whether to do sub-sampling.

Returns:

score : float
Perplexity score.
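Since perplexity is exp(-1. * log-likelihood per word), lower values are better; a sketch on the toy model from the class example (on real data you would evaluate held-out documents):

>>> perp = lda.perplexity(counts)
>>> perp > 0  # exp(...) is always positive
True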
score(X, y=None)


Calculate approximate log-likelihood as score.

Parameters:

X : array-like or sparse matrix, shape=(n_samples, n_features)
Document word matrix.

y : Ignored.

Returns:

score : float
Approximate bound used as score.
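The score is the approximate log-likelihood bound underlying the perplexity above, so higher (less negative) is better; a sketch on the toy model:

>>> bound = lda.score(counts)  # approximate log-likelihood bound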
transform(X)


Transform data X according to the fitted model.

Changed in version 0.18: doc_topic_distr is now normalized

Parameters:

X : array-like or sparse matrix, shape=(n_samples, n_features)
Document word matrix.

Returns:

doc_topic_distr : array, shape=(n_samples, n_components)
Document topic distribution for X.
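Since the returned distribution is normalized (as of 0.18), each row sums to one; a closing sketch with the toy model from the class example:

>>> doc_topics = lda.transform(counts)
>>> np.allclose(np.asarray(doc_topics).sum(axis=1), 1.0)
True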