Equivariant Representation Learning for Symmetry-Aware Inference with Guarantees
- URL: http://arxiv.org/abs/2505.19809v2
- Date: Tue, 27 May 2025 13:36:17 GMT
- Title: Equivariant Representation Learning for Symmetry-Aware Inference with Guarantees
- Authors: Daniel Ordoñez-Apraez, Vladimir Kostić, Alek Fröhlich, Vivien Brandt, Karim Lounici, Massimiliano Pontil
- Abstract summary: We introduce an equivariant representation learning framework that simultaneously addresses regression, conditional probability estimation, and uncertainty quantification. Grounded in operator and group representation theory, our framework approximates the spectral decomposition of the conditional expectation operator. Empirical evaluations on synthetic datasets and real-world robotics applications confirm the potential of our approach.
- Score: 20.285132886770146
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In many real-world applications of regression, conditional probability estimation, and uncertainty quantification, exploiting symmetries rooted in physics or geometry can dramatically improve generalization and sample efficiency. While geometric deep learning has made significant empirical advances by incorporating group-theoretic structure, less attention has been given to statistical learning guarantees. In this paper, we introduce an equivariant representation learning framework that simultaneously addresses regression, conditional probability estimation, and uncertainty quantification while providing first-of-its-kind non-asymptotic statistical learning guarantees. Grounded in operator and group representation theory, our framework approximates the spectral decomposition of the conditional expectation operator, building representations that are both equivariant and disentangled along independent symmetry subgroups. Empirical evaluations on synthetic datasets and real-world robotics applications confirm the potential of our approach, matching or outperforming existing equivariant baselines in regression while additionally providing well-calibrated parametric uncertainty estimates.
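The framework itself rests on operator-theoretic machinery, but the symmetry idea it exploits is easy to illustrate. The sketch below is an illustrative toy, not the paper's spectral method: it shows group averaging, where symmetrizing an arbitrary predictor over a finite group yields an exactly equivariant one. The group, the predictor `f`, and all names here are assumptions for illustration.

```python
import numpy as np

def symmetrize(f, group_in, group_out):
    """G-equivariant average of f: x -> mean_g rho_out(g) f(rho_in(g)^-1 x).

    group_in / group_out: index-aligned lists of matrices representing each
    group element on the input and output spaces.
    """
    def f_sym(x):
        vals = [g_out @ f(np.linalg.inv(g_in) @ x)
                for g_in, g_out in zip(group_in, group_out)]
        return np.mean(vals, axis=0)
    return f_sym

# Reflection group C2 acting by sign flip on scalars (1x1 matrices).
C2 = [np.array([[1.0]]), np.array([[-1.0]])]

f = lambda x: x**3 + 0.5          # not equivariant: f(-x) != -f(x)
f_eq = symmetrize(f, C2, C2)      # averaging kills the non-equivariant part

x = np.array([2.0])
print(np.allclose(f_eq(-x), -f_eq(x)))  # True: exact equivariance
```

Here the average `0.5 * (f(x) - f(-x))` projects `f` onto its odd (equivariant) component, recovering `x**3` exactly.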
Related papers
- Conformal and kNN Predictive Uncertainty Quantification Algorithms in Metric Spaces [3.637162892228131]
We develop a conformal prediction algorithm that offers finite-sample coverage guarantees and fast convergence rates of the oracle estimator. In heteroscedastic settings, we forgo these non-asymptotic guarantees to gain statistical efficiency. We demonstrate the practical utility of our approach in personalized-medicine applications involving random response objects.
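Split conformal prediction, the standard construction behind finite-sample coverage guarantees of this kind, can be sketched in a few lines. This is a generic textbook version, not the paper's metric-space algorithm; the data-generating model and the stand-in predictor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: y = x + noise; `predict` stands in for any regressor
# fitted on a separate training split (a fixed function here, so no split needed).
n_cal = 1000
x_cal = rng.uniform(-1, 1, n_cal)
y_cal = x_cal + 0.1 * rng.normal(size=n_cal)
predict = lambda x: x

# Conformity scores on the calibration set: absolute residuals.
scores = np.abs(y_cal - predict(x_cal))
alpha = 0.1
k = int(np.ceil((n_cal + 1) * (1 - alpha)))
q = np.sort(scores)[k - 1]  # finite-sample-valid quantile

# The interval [predict(x) - q, predict(x) + q] covers y with probability
# >= 1 - alpha under exchangeability, with no distributional assumptions.
x_new = rng.uniform(-1, 1, 5000)
y_new = x_new + 0.1 * rng.normal(size=5000)
coverage = np.mean(np.abs(y_new - predict(x_new)) <= q)
print(round(coverage, 3))  # approximately 0.9
```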
arXiv Detail & Related papers (2025-07-21T15:54:13Z)
- Debiased Nonparametric Regression for Statistical Inference and Distributional Robustness [10.470114319701576]
We introduce a model-free debiasing method for smooth nonparametric regression estimators. We obtain a debiased estimator that satisfies pointwise and uniform risk convergence, along with smoothness, under mild conditions.
arXiv Detail & Related papers (2024-12-28T15:01:19Z)
- Extrapolation-Aware Nonparametric Statistical Inference [8.090257544386482]
Extrapolation occurs in many data analysis applications and can invalidate the resulting conclusions if not taken into account.
We extend the nonparametric statistical model to explicitly allow for extrapolation and introduce a class of extrapolation assumptions.
arXiv Detail & Related papers (2024-02-15T07:16:50Z)
- Selective Nonparametric Regression via Testing [54.20569354303575]
We develop an abstention procedure by testing a hypothesis on the value of the conditional variance at a given point.
Unlike existing methods, the proposed one accounts not only for the value of the variance itself but also for the uncertainty of the corresponding variance predictor.
arXiv Detail & Related papers (2023-09-28T13:04:11Z)
- Enriching Disentanglement: From Logical Definitions to Quantitative Metrics [59.12308034729482]
Disentangling the explanatory factors in complex data is a promising approach for data-efficient representation learning.
We establish relationships between logical definitions and quantitative metrics to derive theoretically grounded disentanglement metrics.
We empirically demonstrate the effectiveness of the proposed metrics by isolating different aspects of disentangled representations.
arXiv Detail & Related papers (2023-05-19T08:22:23Z)
- Learning Correspondence Uncertainty via Differentiable Nonlinear Least Squares [47.83169780113135]
We propose a differentiable nonlinear least squares framework to account for uncertainty in relative pose estimation from feature correspondences.
We evaluate our approach on synthetic data as well as on the KITTI and EuRoC real-world datasets.
arXiv Detail & Related papers (2023-05-16T15:21:09Z)
- Statistical optimality and stability of tangent transform algorithms in logit models [6.9827388859232045]
We provide conditions on the data-generating process to derive non-asymptotic upper bounds on the risk incurred by the logistic optima.
In particular, we establish local convergence of the algorithm without any assumptions on the data-generating process.
We explore a special case involving a semi-orthogonal design under which a global convergence is obtained.
arXiv Detail & Related papers (2020-10-25T05:15:13Z)
- Nonparametric Score Estimators [49.42469547970041]
Estimating the score from a set of samples generated by an unknown distribution is a fundamental task in inference and learning of probabilistic models.
We provide a unifying view of these estimators under the framework of regularized nonparametric regression.
We propose score estimators based on iterative regularization that enjoy computational benefits from curl-free kernels and fast convergence.
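As a point of reference for score estimation, here is a minimal plug-in baseline: the gradient of the log of a Gaussian kernel density estimate. It is not the regularized curl-free-kernel estimator the paper proposes, just the simplest member of the nonparametric family such estimators generalize; the bandwidth and data here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Samples from N(0, 1); the true score is d/dx log p(x) = -x.
xs = rng.normal(size=5000)

def kde_score(query, samples, h=0.3):
    """Plug-in score estimate: gradient of log of a Gaussian KDE.

    A simple baseline, not the paper's curl-free-kernel estimator.
    """
    d = query[:, None] - samples[None, :]   # (m, n) pairwise differences
    w = np.exp(-0.5 * (d / h) ** 2)         # Gaussian kernel weights
    # d/dx log sum_i K(x - x_i) = sum_i K'(x - x_i) / sum_i K(x - x_i)
    return (-(d / h**2) * w).sum(axis=1) / w.sum(axis=1)

q = np.array([-1.0, 0.0, 1.0])
print(kde_score(q, xs))  # roughly [1, 0, -1], shrunk slightly by smoothing
```

The kernel smoothing biases the estimate toward `-x / (1 + h**2)`, which is why a regularization/bandwidth trade-off is central to estimators of this type.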
arXiv Detail & Related papers (2020-05-20T15:01:03Z)
- Machine learning for causal inference: on the use of cross-fit estimators [77.34726150561087]
Doubly-robust cross-fit estimators have been proposed to yield better statistical properties.
We conducted a simulation study to assess the performance of several estimators of the average causal effect (ACE).
When used with machine learning, the doubly-robust cross-fit estimators substantially outperformed all of the other estimators in terms of bias, variance, and confidence interval coverage.
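The doubly-robust cross-fit idea can be sketched with the AIPW (augmented inverse-probability-weighting) estimator: nuisance models are fit on one fold and evaluated on the held-out fold, then the fold-wise contributions are averaged. The data-generating process, the simple nuisance fits, and the true effect of 2.0 below are illustrative assumptions, not the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative simulation with known average causal effect (ACE) of 2.0.
n = 4000
x = rng.normal(size=n)
a = rng.binomial(1, 1 / (1 + np.exp(-x)))      # treatment, confounded by x
y = 2.0 * a + x + rng.normal(size=n)           # outcome

def aipw_crossfit(x, a, y, n_folds=2):
    """Cross-fit AIPW: nuisances fit on other folds, evaluated on each fold."""
    folds = np.array_split(rng.permutation(len(y)), n_folds)
    psi = np.zeros(len(y))
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        # Propensity: true logistic form here; an ML learner in practice.
        e = 1 / (1 + np.exp(-x[test]))
        for arm, sign, e_arm in ((1, 1.0, e), (0, -1.0, 1 - e)):
            idx = train[a[train] == arm]
            beta = np.polyfit(x[idx], y[idx], 1)   # per-arm outcome model
            mu = np.polyval(beta, x[test])
            ind = (a[test] == arm).astype(float)
            psi[test] += sign * (mu + ind * (y[test] - mu) / e_arm)
    return psi.mean()

print(round(aipw_crossfit(x, a, y), 2))  # close to the true ACE of 2.0
```

The estimator is consistent if either the outcome model or the propensity model is correct, and cross-fitting removes the own-observation bias that would otherwise arise from flexible machine-learning nuisance fits.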
arXiv Detail & Related papers (2020-04-21T23:09:55Z)
- Asymptotic Analysis of an Ensemble of Randomly Projected Linear Discriminants [94.46276668068327]
In [1], an ensemble of randomly projected linear discriminants is used to classify datasets.
We develop a consistent estimator of the misclassification probability as an alternative to the computationally-costly cross-validation estimator.
We also demonstrate the use of our estimator for tuning the projection dimension on both real and synthetic data.
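The ensemble construction itself can be sketched directly: each member fits an LDA classifier on a random low-dimensional projection of the data, and the ensemble takes a majority vote. This is an illustrative toy (the dimensions, data, and vote rule are assumptions), not the consistent misclassification-probability estimator developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two Gaussian classes in high dimension (synthetic, separated along axis 0).
p, n = 100, 200                  # ambient dimension, samples per class
mu = np.zeros(p); mu[0] = 2.0
X0 = rng.normal(size=(n, p))
X1 = mu + rng.normal(size=(n, p))

def rp_lda_ensemble(X0, X1, x, n_members=25, d=5):
    """Majority vote of LDA classifiers, each fit on a random d-dim projection."""
    votes = 0
    for _ in range(n_members):
        R = rng.normal(size=(p, d)) / np.sqrt(d)   # random projection matrix
        Z0, Z1, z = X0 @ R, X1 @ R, x @ R
        m0, m1 = Z0.mean(axis=0), Z1.mean(axis=0)
        # Pooled within-class covariance, lightly ridged for stability.
        S = np.cov(np.vstack([Z0 - m0, Z1 - m1]).T) + 1e-6 * np.eye(d)
        w = np.linalg.solve(S, m1 - m0)            # LDA direction
        votes += (z @ w > (m0 + m1) @ w / 2)       # member's class-1 vote
    return int(votes > n_members / 2)

print(rp_lda_ensemble(X0, X1, mu))           # class-1 mean -> 1
print(rp_lda_ensemble(X0, X1, np.zeros(p)))  # class-0 mean -> 0
```

Projecting to `d << p` before fitting avoids inverting an ill-conditioned p-by-p covariance, and the vote aggregates the information lost by any single projection.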
arXiv Detail & Related papers (2020-04-17T12:47:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.