Sample Complexity Bounds for Estimating Probability Divergences under Invariances
- URL: http://arxiv.org/abs/2311.02868v2
- Date: Thu, 6 Jun 2024 02:00:14 GMT
- Title: Sample Complexity Bounds for Estimating Probability Divergences under Invariances
- Authors: Behrooz Tahmasebi, Stefanie Jegelka
- Abstract summary: Group-invariant probability distributions appear in many data-generative models in machine learning.
In this work, we study how the inherent invariances, with respect to any smooth action of a Lie group on a manifold, improve sample complexity.
Results are completely new for groups of positive dimension and extend recent bounds for finite group actions.
- Score: 31.946304450935628
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Group-invariant probability distributions appear in many data-generative models in machine learning, such as graphs, point clouds, and images. In practice, one often needs to estimate divergences between such distributions. In this work, we study how the inherent invariances, with respect to any smooth action of a Lie group on a manifold, improve sample complexity when estimating the 1-Wasserstein distance, the Sobolev Integral Probability Metrics (Sobolev IPMs), the Maximum Mean Discrepancy (MMD), and also the complexity of the density estimation problem (in the $L^2$ and $L^\infty$ distance). Our results indicate a two-fold gain: (1) reducing the sample complexity by a multiplicative factor corresponding to the group size (for finite groups) or the normalized volume of the quotient space (for groups of positive dimension); (2) improving the exponent in the convergence rate (for groups of positive dimension). These results are completely new for groups of positive dimension and extend recent bounds for finite group actions.
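The practical reading of the finite-group result is that averaging each sample over its group orbit (equivalently, using a group-averaged kernel) yields an estimator that behaves roughly as if it had $|G|$ times more samples. Below is a minimal illustrative sketch, not the paper's construction: it compares a plain unbiased MMD$^2$ estimate with one computed after augmenting each sample set with its orbit under the cyclic rotation group $C_4$ acting on $\mathbb{R}^2$; the Gaussian kernel, the choice of group, and all parameter values are assumptions made for the example.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gaussian RBF kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2_unbiased(X, Y, sigma=1.0):
    """Unbiased estimate of the squared MMD with a Gaussian kernel."""
    Kxx = gaussian_kernel(X, X, sigma)
    Kyy = gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    n, m = len(X), len(Y)
    np.fill_diagonal(Kxx, 0.0)
    np.fill_diagonal(Kyy, 0.0)
    return (Kxx.sum() / (n * (n - 1))
            + Kyy.sum() / (m * (m - 1))
            - 2 * Kxy.mean())

def c4_orbit(X):
    """Augment 2-D samples with their orbit under C4
    (rotations by multiples of 90 degrees), an illustrative finite group."""
    rots = [np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
            for t in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]
    return np.concatenate([X @ R.T for R in rots], axis=0)

rng = np.random.default_rng(0)
# Two C4-invariant distributions on R^2: an isotropic Gaussian and a noisy ring.
X = rng.normal(size=(200, 2))
theta = rng.uniform(0, 2 * np.pi, size=200)
Y = np.stack([2 * np.cos(theta), 2 * np.sin(theta)], 1) + 0.1 * rng.normal(size=(200, 2))

print("plain MMD^2:      ", mmd2_unbiased(X, Y))
print("symmetrized MMD^2:", mmd2_unbiased(c4_orbit(X), c4_orbit(Y)))
```

The symmetrized estimate is computed from orbit-augmented (hence dependent) points, so it is only a heuristic illustration of how exploiting a known invariance reduces estimation error at a fixed sample size.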
Related papers
- Multi-Group Fairness Evaluation via Conditional Value-at-Risk Testing [24.553384023323332]
We propose an approach to test for performance disparities based on Conditional Value-at-Risk.
We show that the sample complexity required for discovering performance violations is reduced exponentially, being upper bounded by the square root of the number of groups.
arXiv Detail & Related papers (2023-12-06T19:25:32Z)
- The Exact Sample Complexity Gain from Invariances for Kernel Regression [37.74032673086741]
In practice, encoding invariances into models improves sample complexity.
We provide minimax optimal rates for kernel ridge regression on a compact manifold.
Our results hold for any smooth compact Lie group action, even groups of positive dimension (a minimal group-averaged kernel sketch follows this list).
arXiv Detail & Related papers (2023-03-24T20:47:31Z)
- Mean-Square Analysis of Discretized Itô Diffusions for Heavy-tailed Sampling [17.415391025051434]
We analyze the complexity of sampling from a class of heavy-tailed distributions by discretizing a natural class of Itô diffusions associated with weighted Poincaré inequalities.
Based on a mean-square analysis, we establish the iteration complexity for obtaining a sample whose distribution is $\epsilon$-close to the target distribution in the Wasserstein-2 metric.
arXiv Detail & Related papers (2023-03-01T15:16:03Z)
- Sample Complexity of Probability Divergences under Group Symmetry [9.478466072397708]
In the cases of the Wasserstein-1 metric and the Lipschitz-regularized $\alpha$-divergences, the reduction of sample complexity is proportional to an ambient-dimension-dependent power of the group size.
For the maximum mean discrepancy (MMD), the improvement of sample complexity is more nuanced, as it depends on not only the group size but also the choice of kernel.
arXiv Detail & Related papers (2023-02-03T18:50:15Z)
- A Unified Framework for Multi-distribution Density Ratio Estimation [101.67420298343512]
Binary density ratio estimation (DRE) provides the foundation for many state-of-the-art machine learning algorithms.
We develop a general framework from the perspective of Bregman divergence minimization.
We show that our framework leads to methods that strictly generalize their counterparts in binary DRE.
arXiv Detail & Related papers (2021-12-07T01:23:20Z)
- Local versions of sum-of-norms clustering [77.34726150561087]
We show that our method can separate arbitrarily close balls in the ball model.
We prove a quantitative bound on the error incurred in the clustering of disjoint connected sets.
arXiv Detail & Related papers (2021-09-20T14:45:29Z)
- Sum-of-norms clustering does not separate nearby balls [49.1574468325115]
We study a continuous version of sum-of-norms clustering, in which the dataset is replaced by a general measure.
We state and prove a local-global characterization of the clustering that seems to be new even in the case of discrete datapoints.
arXiv Detail & Related papers (2021-04-28T13:35:17Z)
- Consistent Estimation of Identifiable Nonparametric Mixture Models from Grouped Observations [84.81435917024983]
This work proposes an algorithm that consistently estimates any identifiable mixture model from grouped observations.
A practical implementation is provided for paired observations, and the approach is shown to outperform existing methods.
arXiv Detail & Related papers (2020-06-12T20:44:22Z)
- Computationally efficient sparse clustering [67.95910835079825]
We provide a finite sample analysis of a new clustering algorithm based on PCA.
We show that it achieves the minimax optimal misclustering rate in the regime $\|\theta\| \rightarrow \infty$.
arXiv Detail & Related papers (2020-05-21T17:51:30Z)
- Statistical power for cluster analysis [0.0]
Cluster algorithms are increasingly popular in biomedical research.
We estimate power and accuracy for common analyses through simulation.
We recommend that researchers only apply cluster analysis when large subgroup separation is expected.
arXiv Detail & Related papers (2020-03-01T02:43:15Z)
- Distributed, partially collapsed MCMC for Bayesian Nonparametrics [68.5279360794418]
Commonly used models such as the Dirichlet process and the beta-Bernoulli process can be expressed via completely random measures; we exploit the fact that such measures are decomposable into independent sub-measures.
We use this decomposition to partition the latent measure into a finite measure containing only instantiated components, and an infinite measure containing all other components.
The resulting hybrid algorithm enables scalable inference without sacrificing convergence guarantees.
arXiv Detail & Related papers (2020-01-15T23:10:13Z)
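As referenced in the kernel-regression entry above, a standard way to encode a finite group invariance in kernel methods (kernel ridge regression, MMD) is to average the kernel over the group, $k_G(x, y) = \frac{1}{|G|}\sum_{g \in G} k(x, g \cdot y)$. The sketch below is illustrative only and not the papers' exact construction: it builds a group-averaged Gaussian kernel for the rotation group $C_4$ on $\mathbb{R}^2$ and compares it to the plain kernel inside a textbook kernel ridge regression; the target function, regularization value, and data are assumptions for the example.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gaussian RBF kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def c4_elements():
    """Rotation matrices for the cyclic group C4 acting on R^2."""
    return [np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
            for t in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]

def group_averaged_kernel(X, Y, sigma=1.0):
    """k_G(x, y) = (1/|G|) * sum_g k(x, g.y): a G-invariant kernel."""
    Gs = c4_elements()
    return sum(gaussian_kernel(X, Y @ R.T, sigma) for R in Gs) / len(Gs)

def kernel_ridge_fit_predict(Xtr, ytr, Xte, kernel, lam=1e-2):
    """Standard kernel ridge regression: alpha = (K + lam*n*I)^{-1} y."""
    K = kernel(Xtr, Xtr)
    alpha = np.linalg.solve(K + lam * len(Xtr) * np.eye(len(Xtr)), ytr)
    return kernel(Xte, Xtr) @ alpha

rng = np.random.default_rng(1)
Xtr = rng.normal(size=(100, 2))
Xte = rng.normal(size=(200, 2))
f = lambda X: np.sin(np.linalg.norm(X, axis=1))   # a C4-invariant target
ytr = f(Xtr) + 0.1 * rng.normal(size=100)

for name, k in [("plain", gaussian_kernel), ("group-averaged", group_averaged_kernel)]:
    err = np.mean((kernel_ridge_fit_predict(Xtr, ytr, Xte, k) - f(Xte)) ** 2)
    print(f"{name} kernel test MSE: {err:.4f}")
```

Because the target is itself $C_4$-invariant, the group-averaged kernel restricts the hypothesis space to invariant functions, which is the mechanism behind the sample-complexity gains discussed in these papers.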
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.