# KernelDensity

class ibex.sklearn.neighbors.KernelDensity(bandwidth=1.0, algorithm='auto', kernel='gaussian', metric='euclidean', atol=0, rtol=0, breadth_first=True, leaf_size=40, metric_params=None)

Bases: sklearn.neighbors.kde.KernelDensity, ibex._base.FrameMixin

Note

The documentation following is of the class wrapped by this class. There are some changes, in particular:

Kernel Density Estimation

Read more in the User Guide.

bandwidth : float
The bandwidth of the kernel.
algorithm : string
The tree algorithm to use. Valid options are [‘kd_tree’|’ball_tree’|’auto’]. Default is ‘auto’.
kernel : string
The kernel to use. Valid kernels are [‘gaussian’|’tophat’|’epanechnikov’|’exponential’|’linear’|’cosine’]. Default is ‘gaussian’.
metric : string
The distance metric to use. Note that not all metrics are valid with all algorithms. Refer to the documentation of BallTree and KDTree for a description of available algorithms. Note that the normalization of the density output is correct only for the Euclidean distance metric. Default is ‘euclidean’.
atol : float
The desired absolute tolerance of the result. A larger tolerance will generally lead to faster execution. Default is 0.
rtol : float
The desired relative tolerance of the result. A larger tolerance will generally lead to faster execution. Default is 0.
breadth_first : bool
If true (default), use a breadth-first approach to the problem. Otherwise use a depth-first approach.
leaf_size : int
Specify the leaf size of the underlying tree. See BallTree or KDTree for details. Default is 40.
metric_params : dict
Additional parameters to be passed to the tree for use with the metric. For more information, see the documentation of BallTree or KDTree.
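A minimal sketch of the constructor parameters above, using the wrapped sklearn class directly (the ibex wrapper is constructed with the same arguments):

```python
import numpy as np
from sklearn.neighbors import KernelDensity  # the class this wrapper is based on

# A small 1-D sample; each row is one data point.
X = np.array([[0.0], [0.5], [1.0], [5.0]])

# Non-default bandwidth and an explicit tree algorithm.
kde = KernelDensity(bandwidth=0.5, kernel='gaussian', algorithm='kd_tree')
kde.fit(X)

# Density near the cluster of points is higher than far from it.
density = np.exp(kde.score_samples([[0.5], [10.0]]))
```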
fit(X, y=None)

Note

The documentation following is of the class wrapped by this class. There are some changes, in particular:

Fit the Kernel Density model on the data.

X : array_like, shape (n_samples, n_features)
List of n_features-dimensional data points. Each row corresponds to a single data point.
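A minimal fit example. The sklearn class shown here treats X as a plain array; accepting a pandas DataFrame for X is the kind of change the ibex wrapper adds (an assumption here, following ibex's general pattern):

```python
import numpy as np
import pandas as pd
from sklearn.neighbors import KernelDensity

# Two features, three data points (one per row).
df = pd.DataFrame({'x': [0.0, 1.0, 2.0], 'y': [0.0, 1.0, 2.0]})

# fit returns the fitted estimator itself, so calls can be chained.
kde = KernelDensity(bandwidth=1.0).fit(df)
```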
score(X, y=None)

Note

The documentation following is of the class wrapped by this class. There are some changes, in particular:

Compute the total log probability under the model.

X : array_like, shape (n_samples, n_features)
List of n_features-dimensional data points. Each row corresponds to a single data point.
logprob : float
Total log-likelihood of the data in X.
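As a sketch of the relation between score and score_samples: the total log probability returned by score is the sum of the per-sample log densities (using the wrapped sklearn class):

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.RandomState(0)
X = rng.randn(100, 2)  # 100 points, 2 features

kde = KernelDensity(bandwidth=1.0).fit(X)

# Total log-likelihood of the data under the fitted model.
total = kde.score(X)
```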
score_samples(X)

Note

The documentation following is of the class wrapped by this class. There are some changes, in particular:

Evaluate the density model on the data.

X : array_like, shape (n_samples, n_features)
An array of points to query. Last dimension should match dimension of training data (n_features).
density : ndarray, shape (n_samples,)
The array of log(density) evaluations.
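A small check of the log(density) output, using the wrapped sklearn class: with a single training point and a Gaussian kernel, the estimate is a normal pdf centred on that point, so the density at the point itself is 1/sqrt(2*pi) for bandwidth 1:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# One training point at the origin with a Gaussian kernel: the estimated
# density is a standard normal pdf centred on that point.
kde = KernelDensity(bandwidth=1.0, kernel='gaussian').fit([[0.0]])

log_density = kde.score_samples([[0.0]])  # log(density), shape (1,)
density = np.exp(log_density)
```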