class MultivariateNormalTriL(
    mvn_linear_operator.MultivariateNormalLinearOperator):
  """The multivariate normal distribution on `R^k`.

  The Multivariate Normal distribution is defined over `R^k` and parameterized
  by a (batch of) length-`k` `loc` vector (aka "mu") and a (batch of) `k x k`
  `scale` matrix; `covariance = scale @ scale.T` where `@` denotes
  matrix-multiplication.

  #### Mathematical Details

  The probability density function (pdf) is,

  ```none
  pdf(x; loc, scale) = exp(-0.5 ||y||**2) / Z,
  y = inv(scale) @ (x - loc),
  Z = (2 pi)**(0.5 k) |det(scale)|,
  ```

  where:

  * `loc` is a vector in `R^k`,
  * `scale` is a matrix in `R^{k x k}`, `covariance = scale @ scale.T`,
  * `Z` denotes the normalization constant, and,
  * `||y||**2` denotes the squared Euclidean norm of `y`.

  A (non-batch) `scale` matrix is:

  ```none
  scale = scale_tril
  ```

  where `scale_tril` is a lower-triangular `k x k` matrix with non-zero
  diagonal, i.e., `tf.diag_part(scale_tril) != 0`.

  Additional leading dimensions (if any) will index batches.

  The MultivariateNormal distribution is a member of the [location-scale
  family](https://en.wikipedia.org/wiki/Location-scale_family), i.e., it can be
  constructed as,

  ```none
  X ~ MultivariateNormal(loc=0, scale=1)   # Identity scale, zero shift.
  Y = scale @ X + loc
  ```

  Trainable (batch) lower-triangular matrices can be created with
  `tfp.distributions.matrix_diag_transform()` and/or
  `tfp.math.fill_triangular()`.
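As a brief sketch of how this parameterization is typically used, a `scale_tril` can be obtained as the Cholesky factor of a covariance matrix. The `mu` and `cov` values below are made up purely for illustration; only the `MultivariateNormalTriL(loc=..., scale_tril=...)` constructor and `tf.linalg.cholesky` are taken from the documented API.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Illustrative 3-variate Gaussian: mean vector and a positive-definite
# covariance matrix (values are arbitrary).
mu = [1., 2, 3]
cov = [[ 0.36,  0.12,  0.06],
       [ 0.12,  0.29, -0.13],
       [ 0.06, -0.13,  0.26]]

# `scale_tril` is the lower-triangular Cholesky factor, so that
# covariance = scale_tril @ scale_tril.T.
scale_tril = tf.linalg.cholesky(cov)

mvn = tfd.MultivariateNormalTriL(loc=mu, scale_tril=scale_tril)

mvn.mean()             # ==> [1., 2, 3]
mvn.prob([-1., 0, 1])  # ==> scalar pdf value
```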
class Categorical(distribution.AutoCompositeTensorDistribution):
  """Categorical distribution over integers.

  The Categorical distribution is parameterized by either probabilities or
  log-probabilities of a set of `K` classes. It is defined over the integers
  `{0, 1, ..., K-1}`.

  The Categorical distribution is closely related to the `OneHotCategorical`
  and `Multinomial` distributions. The Categorical distribution can be
  intuited as generating samples according to
  `argmax{ OneHotCategorical(probs) }` itself being identical to
  `argmax{ Multinomial(probs, total_count=1) }`.

  #### Mathematical Details

  The probability mass function (pmf) is,

  ```none
  pmf(k; pi) = prod_j pi_j**[k == j]
  ```

  #### Pitfalls

  The number of classes, `K`, must not exceed:

  - the largest integer representable by `self.dtype`, i.e.,
    `2**(mantissa_bits+1)` (IEEE 754),
  - the maximum `Tensor` index, i.e., `2**31-1`.

  In other words,

  ```python
  K <= min(2**31-1, {
      tf.float16: 2**11,
      tf.float32: 2**24,
      tf.float64: 2**53}[param.dtype])
  ```

  Note: This condition is validated only when `self.validate_args = True`.
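A minimal usage sketch, assuming a 3-class distribution with made-up probabilities; the `Categorical(probs=...)` constructor and the `sample`/`prob`/`log_prob` methods are the documented API.

```python
import tensorflow_probability as tfp

tfd = tfp.distributions

# Illustrative 3-class categorical; the probabilities are arbitrary.
dist = tfd.Categorical(probs=[0.1, 0.5, 0.4])

dist.sample(7)         # ==> 7 integer draws from {0, 1, 2}
dist.prob(1)           # ==> 0.5
dist.log_prob([0, 2])  # ==> log([0.1, 0.4])
```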
class _MixtureSameFamily(distribution.Distribution):
  """Mixture (same-family) distribution.

  The `MixtureSameFamily` distribution implements a (batch of) mixture
  distribution where all components are from different parameterizations of
  the same distribution type. It is parameterized by a `Categorical`
  'selecting distribution' (over `k` components) and a components
  distribution, i.e., a `Distribution` with a rightmost batch shape (equal
  to `[k]`) which indexes each (batch of) component.
  #### Examples
  ```python
  import numpy as np
  import matplotlib.pyplot as plt
  import tensorflow_probability as tfp

  tfd = tfp.distributions

  ### Create a mixture of two scalar Gaussians:

  gm = tfd.MixtureSameFamily(
      mixture_distribution=tfd.Categorical(
          probs=[0.3, 0.7]),
      components_distribution=tfd.Normal(
          loc=[-1., 1],        # One for each component.
          scale=[0.1, 0.5]))   # And same here.

  ### Plot the mixture pdf over a grid of points.
  x = np.linspace(-2., 3., int(1e4), dtype=np.float32)
  plt.plot(x, gm.prob(x));
  ```
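The same pattern extends to vector-valued components. The following sketch (parameter values are illustrative, chosen here only to show the shapes involved) uses `MultivariateNormalDiag` as the components distribution:

```python
import tensorflow_probability as tfp

tfd = tfp.distributions

### Create a mixture of two bivariate Gaussians:

gm = tfd.MixtureSameFamily(
    mixture_distribution=tfd.Categorical(
        probs=[0.3, 0.7]),
    components_distribution=tfd.MultivariateNormalDiag(
        loc=[[-1., 1],   # component 1
             [1, -1]],   # component 2
        scale_diag=[[0.3, 0.3],
                    [0.6, 0.6]]))

gm.batch_shape    # ==> []
gm.event_shape    # ==> [2]
gm.prob([0., 0])  # ==> scalar pdf value
```

Note that the rightmost batch dimension of the components distribution (here `[2]`) is consumed by the mixture, so the resulting distribution has an empty batch shape and a length-2 event shape.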
  def log_prob(self, value, name='log_prob', **kwargs):
    """Log probability density/mass function.

    Args:
      value: `float` or `double` `Tensor`.
      name: Python `str` prepended to names of ops created by this function.
      **kwargs: Named arguments forwarded to subclass implementation.

    Returns:
      log_prob: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
        values of type `self.dtype`.
    """
    return self._call_log_prob(value, name, **kwargs)
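A short sketch of the returned shape, using a hypothetical batch of two `Normal` distributions (the parameter values are arbitrary and serve only to illustrate how `value` broadcasts against the batch shape):

```python
import tensorflow_probability as tfp

tfd = tfp.distributions

# Batch of two scalar Normals: batch_shape == [2], event_shape == [].
dist = tfd.Normal(loc=[0., 1.], scale=[1., 2.])

dist.log_prob(0.)            # shape [2]: sample_shape () + batch_shape [2]
dist.log_prob([[0.], [1.]])  # shape [2, 2]: value of shape [2, 1] broadcast
                             # against the batch shape [2]
```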