Maxitive Donsker-Varadhan Formulation for Possibilistic Variational Inference
- URL: http://arxiv.org/abs/2511.21223v1
- Date: Wed, 26 Nov 2025 09:53:28 GMT
- Title: Maxitive Donsker-Varadhan Formulation for Possibilistic Variational Inference
- Authors: Jasraj Singh, Shelvia Wongso, Jeremie Houssineau, Badr-Eddine Chérief-Abdellatif
- Abstract summary: We develop a principled formulation of possibilistic variational inference. Applying it to a special class of exponential-family functions, we highlight parallels with their probabilistic counterparts.
- Score: 5.621958475334369
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Variational inference (VI) is a cornerstone of modern Bayesian learning, enabling approximate inference in complex models that would otherwise be intractable. However, its formulation depends on expectations and divergences defined through high-dimensional integrals, often rendering analytical treatment impossible and necessitating heavy reliance on approximate learning and inference techniques. Possibility theory, an imprecise probability framework, makes it possible to model epistemic uncertainty directly instead of relying on subjective probabilities. While this framework provides robustness and interpretability under sparse or imprecise information, adapting VI to the possibilistic setting requires rethinking core concepts such as entropy and divergence, which presuppose additivity. In this work, we develop a principled formulation of possibilistic variational inference and apply it to a special class of exponential-family functions, highlighting parallels with their probabilistic counterparts and revealing the distinctive mathematical structures of possibility theory.
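As background (a standard result, not specific to this paper), the classical Donsker-Varadhan formula referenced in the title gives a variational representation of the KL divergence as a supremum over bounded measurable test functions:

```latex
\mathrm{KL}(P \,\|\, Q) \;=\; \sup_{f} \Big\{ \mathbb{E}_{P}\big[f(X)\big] \;-\; \log \mathbb{E}_{Q}\big[e^{f(X)}\big] \Big\}
```

In a maxitive (possibilistic) setting, where additivity is unavailable, one would expect the expectations above to be replaced by supremum-based counterparts taken with respect to possibility functions; the precise maxitive formulation is the subject of the paper itself.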
Related papers
- Actionable Interpretability Must Be Defined in Terms of Symmetries [37.964025348175504]
This paper argues that interpretability research in Artificial Intelligence (AI) is fundamentally ill-posed as existing definitions fail to describe how interpretability can be formally tested or designed for. We posit that actionable definitions of interpretability must be formulated in terms of *symmetries* that inform model design and lead to testable conditions.
arXiv Detail & Related papers (2026-01-19T10:10:17Z) - Bridging the Gap Between Bayesian Deep Learning and Ensemble Weather Forecasts [100.26854618129039]
Weather forecasting is fundamentally challenged by the chaotic nature of the atmosphere. Recent advances in Bayesian Deep Learning (BDL) offer a promising but often disconnected alternative. We bridge these paradigms through a unified hybrid BDL framework for ensemble weather forecasting.
arXiv Detail & Related papers (2025-11-18T07:49:52Z) - Function-coherent gambles [0.0]
This paper introduces function-coherent gambles, a generalization that accommodates non-linear utility. We prove a representation theorem that characterizes acceptable gambles through continuous linear functionals. We demonstrate how these alternatives to constant-rate exponential discounting can be integrated within the function-coherent framework.
arXiv Detail & Related papers (2025-02-22T14:44:54Z) - Towards Understanding Extrapolation: a Causal Lens [53.15488984371969]
We provide a theoretical understanding of when extrapolation is possible and offer principled methods to achieve it. Under this formulation, we cast the extrapolation problem into a latent-variable identification problem. Our theory reveals the intricate interplay between the underlying manifold's smoothness and the shift properties.
arXiv Detail & Related papers (2025-01-15T21:29:29Z) - Geometric Understanding of Discriminability and Transferability for Visual Domain Adaptation [27.326817457760725]
Invariant representation learning for unsupervised domain adaptation (UDA) has made significant advances in computer vision and pattern recognition communities.
Recently, empirical connections between transferability and discriminability have received increasing attention.
In this work, we systematically analyze the essentials of transferability and discriminability from the geometric perspective.
arXiv Detail & Related papers (2024-06-24T13:31:08Z) - Model-agnostic variable importance for predictive uncertainty: an entropy-based approach [1.912429179274357]
We show how existing methods in explainability can be extended to uncertainty-aware models.
We demonstrate the utility of these approaches to understand both the sources of uncertainty and their impact on model performance.
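To illustrate the general idea of entropy-based variable importance (a hypothetical sketch, not the paper's actual method; `entropy_based_importance` and the toy classifier are invented for this example), one can score each feature by how much shuffling it changes the model's mean predictive entropy:

```python
import numpy as np

def predictive_entropy(probs):
    # Shannon entropy of each row of class probabilities (rows sum to 1).
    eps = 1e-12
    return -np.sum(probs * np.log(probs + eps), axis=1)

def entropy_based_importance(predict_proba, X, n_repeats=10, seed=0):
    # Permutation-style importance: change in mean predictive entropy
    # when a feature column is shuffled. This scoring rule is an
    # assumption made for illustration; the paper may define it differently.
    rng = np.random.default_rng(seed)
    base = predictive_entropy(predict_proba(X)).mean()
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        deltas = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # in-place shuffle of one column
            deltas.append(predictive_entropy(predict_proba(Xp)).mean() - base)
        importances[j] = np.mean(deltas)
    return importances

# Toy probabilistic classifier: softmax over a fixed linear map
# that depends on both input features.
W = np.array([[2.0, -2.0], [1.0, -1.0]])  # (n_features, n_classes)

def predict_proba(X):
    z = X @ W
    z -= z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

X = np.random.default_rng(1).normal(size=(200, 2))
imp = entropy_based_importance(predict_proba, X)
```

A feature whose shuffling leaves predictive entropy unchanged contributes little to the model's uncertainty, which is the intuition the abstract describes.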
arXiv Detail & Related papers (2023-10-19T15:51:23Z) - Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z) - Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z) - On the Joint Interaction of Models, Data, and Features [82.60073661644435]
We introduce a new tool, the interaction tensor, for empirically analyzing the interaction between data and model through features.
Based on these observations, we propose a conceptual framework for feature learning.
Under this framework, the expected accuracy for a single hypothesis and agreement for a pair of hypotheses can both be derived in closed-form.
arXiv Detail & Related papers (2023-06-07T21:35:26Z) - Probabilistic computation and uncertainty quantification with emerging covariance [11.79594512851008]
Building robust, interpretable, and secure AI systems requires quantifying and representing uncertainty under a probabilistic perspective.
Probabilistic computation presents significant challenges for most conventional artificial neural networks.
arXiv Detail & Related papers (2023-05-30T17:55:29Z) - The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
arXiv Detail & Related papers (2022-05-20T10:10:32Z) - Modal Uncertainty Estimation via Discrete Latent Representation [4.246061945756033]
We introduce a deep learning framework that learns the one-to-many mappings between the inputs and outputs, together with faithful uncertainty measures.
Our framework demonstrates significantly more accurate uncertainty estimation than the current state-of-the-art methods.
arXiv Detail & Related papers (2020-07-25T05:29:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.