Von Mises-Fisher Elliptical Distribution
- URL: http://arxiv.org/abs/2103.07948v1
- Date: Sun, 14 Mar 2021 15:14:04 GMT
- Title: Von Mises-Fisher Elliptical Distribution
- Authors: Shengxi Li, Danilo Mandic
- Abstract summary: We propose to employ the von Mises-Fisher (vMF) distribution to obtain an explicit and simple probability representation of a skewed elliptical distribution.
This is shown not only to allow us to deal with non-symmetric learning systems, but also to provide a physically meaningful way of generalising skewed distributions.
We also demonstrate that the proposed vMF distribution is both easy to generate and stable to estimate, both theoretically and through examples.
- Score: 5.7559253770425425
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A large class of modern probabilistic learning systems assumes symmetric
distributions; however, real-world data tend to obey skewed distributions and
are thus not always adequately modelled through symmetric distributions. To
address this issue, elliptical distributions are increasingly used to
generalise symmetric distributions, and further improvements to skewed
elliptical distributions have recently attracted much attention. However,
existing approaches are either hard to estimate or have complicated and
abstract representations. To this end, we propose to employ the
von Mises-Fisher (vMF) distribution to obtain an explicit and simple
probability representation of the skewed elliptical distribution. This is shown
not only to allow us to deal with non-symmetric learning systems, but also to
provide a physically meaningful way of generalising skewed distributions. For
rigour, our extension is proved to share important and desirable properties
with its symmetric counterpart. We also demonstrate that the proposed vMF
distribution is both easy to generate and stable to estimate, both
theoretically and through examples.
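The vMF density on the unit sphere is proportional to exp(kappa * mu^T x), with mean direction mu and concentration kappa. As a minimal illustration of the "easy to generate and stable to estimate" claim, the sketch below samples from a plain vMF distribution and recovers its parameters by maximum likelihood, assuming SciPy 1.11 or later for scipy.stats.vonmises_fisher; the paper's skewed elliptical construction itself is not reproduced here.

```python
import numpy as np
from scipy.stats import vonmises_fisher  # available from SciPy 1.11

rng = np.random.default_rng(0)

# Mean direction on the unit sphere S^2 and concentration parameter.
mu = np.array([0.0, 0.0, 1.0])
kappa = 20.0
vmf = vonmises_fisher(mu=mu, kappa=kappa)

# "Easy to generate": draw i.i.d. unit vectors concentrated around mu.
samples = vmf.rvs(size=5000, random_state=rng)

# "Stable to estimate": recover (mu, kappa) by maximum likelihood.
mu_hat, kappa_hat = vonmises_fisher.fit(samples)
print("estimated mean direction:", np.round(mu_hat, 3))
print("estimated concentration:", round(kappa_hat, 2))
```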
Related papers
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantees, with explicit dimensional dependence, for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z)
- Distributional Matrix Completion via Nearest Neighbors in the Wasserstein Space [8.971989179518216]
Given a sparsely observed matrix of empirical distributions, we seek to impute the true distributions associated with both observed and unobserved matrix entries.
We utilize tools from optimal transport to generalize the nearest neighbors method to the distributional setting.
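As a minimal 1-D sketch of the two primitives this summary names, the code below computes the Wasserstein-2 distance between empirical samples from matched quantiles and imputes a missing entry as the Wasserstein barycenter (the average quantile function) of its nearest rows; the helper functions and matrix layout are illustrative, not the paper's estimator.

```python
import numpy as np

Q = np.linspace(0.01, 0.99, 99)  # shared quantile grid

def w2(a, b):
    """Wasserstein-2 distance between two 1-D empirical samples via quantiles."""
    return np.sqrt(np.mean((np.quantile(a, Q) - np.quantile(b, Q)) ** 2))

def barycenter(samples_list):
    """1-D Wasserstein barycenter = pointwise average of quantile functions."""
    return np.mean([np.quantile(s, Q) for s in samples_list], axis=0)

def impute_entry(M, i, j, k=2):
    """Impute the missing distribution M[i][j] from the k nearest rows.

    Row distance = mean W2 over columns observed in both rows; the imputed
    entry is the barycenter of the neighbours' column-j distributions.
    """
    cands = []
    for r, row in enumerate(M):
        if r == i or row[j] is None:
            continue
        shared = [c for c in range(len(row))
                  if M[i][c] is not None and row[c] is not None]
        if shared:
            cands.append((np.mean([w2(M[i][c], row[c]) for c in shared]), r))
    nearest = [r for _, r in sorted(cands)[:k]]
    return barycenter([M[r][j] for r in nearest])

# Toy 3x2 matrix of empirical samples; entry (0, 1) is unobserved.
rng = np.random.default_rng(0)
M = [[rng.normal(0, 1, 200), None],
     [rng.normal(0, 1, 200), rng.normal(5, 2, 200)],
     [rng.normal(3, 1, 200), rng.normal(8, 2, 200)]]
est_quantiles = impute_entry(M, 0, 1)  # quantile function of the estimate
```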
arXiv Detail & Related papers (2024-10-17T00:50:17Z)
- Generative Assignment Flows for Representing and Learning Joint Distributions of Discrete Data [2.6499018693213316]
We introduce a novel generative model for the representation of joint probability distributions of a possibly large number of discrete random variables.
The embedding of the flow via the Segre map in the meta-simplex of all discrete joint distributions ensures that any target distribution can be represented in principle.
Our approach has strong motivation from first principles of modeling coupled discrete variables.
arXiv Detail & Related papers (2024-06-06T21:58:33Z)
- SimPro: A Simple Probabilistic Framework Towards Realistic Long-Tailed Semi-Supervised Learning [49.94607673097326]
We propose a highly adaptable framework, named SimPro, which does not rely on any predefined assumptions about the distribution of unlabeled data.
Our framework, grounded in a probabilistic model, innovatively refines the expectation-maximization algorithm.
Our method showcases consistent state-of-the-art performance across diverse benchmarks and data distribution scenarios.
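The summary names the expectation-maximization algorithm without detail; as a generic illustration of the EM step such frameworks build on (not SimPro's specific refinement), the sketch below re-estimates unknown class priors from fixed model posteriors on unlabeled data.

```python
import numpy as np

def em_class_priors(probs, n_iters=50):
    """Standard EM re-estimation of class priors from fixed posteriors.

    probs: (n, c) class probabilities for unlabeled points from a classifier
    trained under a uniform prior. The E-step reweights them by the current
    prior estimate; the M-step averages the responsibilities. This is the
    generic update, not SimPro's refinement of it.
    """
    pi = np.full(probs.shape[1], 1.0 / probs.shape[1])  # start uniform
    for _ in range(n_iters):
        resp = probs * pi                      # E-step: reweight by prior
        resp /= resp.sum(axis=1, keepdims=True)
        pi = resp.mean(axis=0)                 # M-step: mean responsibility
    return pi

# Toy long-tailed setup: true priors (0.7, 0.2, 0.1), noisy posteriors.
rng = np.random.default_rng(0)
labels = rng.choice(3, size=5000, p=[0.7, 0.2, 0.1])
probs = rng.dirichlet(np.ones(3) * 0.5, size=5000)
probs[np.arange(5000), labels] += 2.0          # make the true class likeliest
probs /= probs.sum(axis=1, keepdims=True)
print(np.round(em_class_priors(probs), 3))     # estimate, skewed toward class 0
```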
arXiv Detail & Related papers (2024-02-21T03:39:04Z)
- A Distributional Analogue to the Successor Representation [54.99439648059807]
This paper contributes a new approach for distributional reinforcement learning.
It elucidates a clean separation of transition structure and reward in the learning process.
As an illustration, we show that it enables zero-shot risk-sensitive policy evaluation.
arXiv Detail & Related papers (2024-02-13T15:35:24Z)
- Uncertainty Quantification via Stable Distribution Propagation [60.065272548502]
We propose a new approach for propagating stable probability distributions through neural networks.
Our method is based on local linearization, which we show to be an optimal approximation in terms of total variation distance for the ReLU non-linearity.
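A minimal Gaussian-only sketch of the local-linearization idea: an affine layer transforms a Gaussian exactly, and ReLU is replaced by its Jacobian at the input mean. The paper covers general stable distributions and proves the total-variation optimality, neither of which is reproduced here.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def propagate_affine(m, S, W, b):
    """Exact Gaussian propagation through an affine layer x -> Wx + b."""
    return W @ m + b, W @ S @ W.T

def propagate_relu(m, S):
    """Linearize ReLU at the input mean: its Jacobian there is a 0/1
    diagonal, so the Gaussian stays Gaussian with covariance J S J^T."""
    J = np.diag((m > 0).astype(float))
    return relu(m), J @ S @ J.T

# Toy 2-layer network; input uncertainty is a Gaussian N(m, S).
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)
m, S = np.array([0.5, -0.2, 0.1]), 0.05 * np.eye(3)
m, S = propagate_affine(m, S, W1, b1)
m, S = propagate_relu(m, S)
m, S = propagate_affine(m, S, W2, b2)
print("output mean:", m, "\noutput covariance:\n", S)
```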
arXiv Detail & Related papers (2024-02-13T09:40:19Z)
- New Classes of the Greedy-Applicable Arm Feature Distributions in the Sparse Linear Bandit Problem [34.51168440208439]
We consider the sparse contextual bandit problem where arm feature affects reward through the inner product of sparse parameters.
Recent studies have developed sparsity-agnostic algorithms based on the greedy arm selection policy.
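A minimal sketch of that greedy policy under an assumed Lasso estimator: each round fits sparse parameters to the history and plays the arm with the highest predicted reward, with no exploration bonus. The paper's actual contribution, characterising which arm feature distributions make this sound, is not modelled here.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, s, K, T = 50, 3, 10, 300
theta = np.zeros(d)
theta[:s] = 1.0                                 # sparse true parameter

X_hist, y_hist = [], []
lasso = Lasso(alpha=0.05)
for t in range(T):
    arms = rng.normal(size=(K, d))              # this round's arm features
    if t < 10:                                  # short random warm start
        a = int(rng.integers(K))
    else:                                       # greedy: no exploration bonus
        lasso.fit(np.array(X_hist), np.array(y_hist))
        a = int(np.argmax(arms @ lasso.coef_))
    X_hist.append(arms[a])
    y_hist.append(arms[a] @ theta + 0.1 * rng.normal())

print("parameter error:", np.linalg.norm(lasso.coef_ - theta))
```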
arXiv Detail & Related papers (2023-12-19T18:35:33Z)
- Dr. FERMI: A Stochastic Distributionally Robust Fair Empirical Risk Minimization Framework [12.734559823650887]
In the presence of distribution shifts, fair machine learning models may behave unfairly on test data.
Existing algorithms require full access to the data and cannot be used when only small batches are available.
This paper proposes the first distributionally robust fairness framework with convergence guarantees that do not require knowledge of the causal graph.
arXiv Detail & Related papers (2023-09-20T23:25:28Z)
- Wrapped Distributions on homogeneous Riemannian manifolds [58.720142291102135]
Control over distributions' properties, such as parameters, symmetry and modality, yields a family of flexible distributions.
We empirically validate our approach by utilizing our proposed distributions within a variational autoencoder and a latent space network model.
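The circle is the simplest homogeneous Riemannian manifold, so a wrapped normal on S^1 illustrates the wrapping construction in miniature: push a Euclidean Gaussian through theta -> theta mod 2*pi and sum the density over windings. The paper's general construction and its VAE experiments are not reproduced here.

```python
import numpy as np

def sample_wrapped_normal(mu, sigma, size, rng):
    """Wrapped normal on the circle: push N(mu, sigma^2) through
    the wrapping map theta -> theta mod 2*pi."""
    return np.mod(rng.normal(mu, sigma, size), 2 * np.pi)

def wrapped_normal_pdf(theta, mu, sigma, n_wraps=10):
    """Density on [0, 2*pi): sum the Gaussian density over windings k."""
    k = np.arange(-n_wraps, n_wraps + 1)
    z = np.asarray(theta)[..., None] - mu + 2 * np.pi * k
    return (np.exp(-0.5 * (z / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))).sum(-1)

rng = np.random.default_rng(0)
samples = sample_wrapped_normal(np.pi, 0.7, 10_000, rng)
# The circular mean of the samples should sit near mu = pi.
circ_mean = np.mod(np.angle(np.exp(1j * samples).mean()), 2 * np.pi)
print(f"circular mean: {circ_mean:.3f}")
```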
arXiv Detail & Related papers (2022-04-20T21:25:21Z)
- Distributional Reinforcement Learning via Moment Matching [54.16108052278444]
We formulate a method that learns a finite set of statistics from each return distribution via neural networks.
Our method can be interpreted as implicitly matching all orders of moments between a return distribution and its Bellman target.
Experiments on the suite of Atari games show that our method outperforms the standard distributional RL baselines.
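A NumPy sketch of the moment-matching loss under the usual assumption that it is a squared MMD between return particles and Bellman-target particles; a Gaussian kernel's expansion contains all polynomial features, which is what makes the matching implicit over all moment orders. Networks and training are omitted.

```python
import numpy as np

def mmd2(z, y, bandwidths=(1.0, 4.0, 16.0)):
    """Squared MMD between two particle sets, mixture-of-Gaussian kernel."""
    def k(a, b):
        d2 = (a[:, None] - b[None, :]) ** 2
        return sum(np.exp(-d2 / h) for h in bandwidths)
    n, m = len(z), len(y)
    return (k(z, z).sum() / n**2
            - 2 * k(z, y).sum() / (n * m)
            + k(y, y).sum() / m**2)

# Particles of a predicted return distribution vs. a Bellman target
# r + gamma * z' built from next-state particles (toy numbers).
z_pred = np.array([0.1, 0.4, 0.9, 1.3])
r, gamma = 0.5, 0.99
z_next = np.array([0.0, 0.3, 0.8, 1.2])
target = r + gamma * z_next
loss = mmd2(z_pred, target)  # minimized w.r.t. z_pred during training
print(f"MMD^2 loss: {loss:.4f}")
```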
arXiv Detail & Related papers (2020-07-24T05:18:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.