Connecting Jensen-Shannon and Kullback-Leibler Divergences: A New Bound for Representation Learning
- URL: http://arxiv.org/abs/2510.20644v1
- Date: Thu, 23 Oct 2025 15:18:12 GMT
- Title: Connecting Jensen-Shannon and Kullback-Leibler Divergences: A New Bound for Representation Learning
- Authors: Reuben Dorent, Polina Golland, William Wells III
- Abstract summary: Mutual Information is a fundamental measure of statistical dependence widely used in representation learning. We derive a new, tight, and tractable lower bound on KLD as a function of JSD in the general case. Our results provide new theoretical justifications and strong empirical evidence for using discriminative learning in MI-based representation learning.
- Score: 4.946476970294861
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mutual Information (MI) is a fundamental measure of statistical dependence widely used in representation learning. While direct optimization of MI via its definition as a Kullback-Leibler divergence (KLD) is often intractable, many recent methods have instead maximized alternative dependence measures, most notably the Jensen-Shannon divergence (JSD) between the joint distribution and the product of marginals, via discriminative losses. However, the connection between these surrogate objectives and MI remains poorly understood. In this work, we bridge this gap by deriving a new, tight, and tractable lower bound on KLD as a function of JSD in the general case. By specializing this bound to joint and marginal distributions, we demonstrate that maximizing the JSD-based information increases a guaranteed lower bound on mutual information. Furthermore, we revisit the practical implementation of JSD-based objectives and observe that minimizing the cross-entropy loss of a binary classifier trained to distinguish joint from marginal pairs recovers a known variational lower bound on the JSD. Extensive experiments demonstrate that our lower bound is tight when applied to MI estimation. We compare our lower bound to state-of-the-art neural estimators of variational lower bounds across a range of established reference scenarios, where it consistently provides a stable, low-variance estimate of a tight lower bound on MI. We also demonstrate its practical usefulness in the context of the Information Bottleneck framework. Taken together, our results provide new theoretical justifications and strong empirical evidence for using discriminative learning in MI-based representation learning.
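To make the abstract's central mechanism concrete, here is a minimal PyTorch sketch (not the authors' code; the `Critic` architecture, dimensions, and within-batch shuffling are illustrative assumptions) of the known variational lower bound on the JSD between the joint and the product of marginals. Maximizing it is equivalent, up to constants, to minimizing the binary cross-entropy of a classifier that separates joint pairs from shuffled ones.

```python
import math

import torch
import torch.nn.functional as F


class Critic(torch.nn.Module):
    """Illustrative critic mapping an (x, y) pair to a scalar logit."""

    def __init__(self, dim_x: int, dim_y: int, hidden: int = 256):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim_x + dim_y, hidden),
            torch.nn.ReLU(),
            torch.nn.Linear(hidden, 1),
        )

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=-1)).squeeze(-1)


def jsd_lower_bound(critic, x, y):
    """Variational lower bound on JSD(p(x, y) || p(x) p(y)).

    For any critic T,
        JSD >= log 2 + 0.5 * (E_joint[log sigmoid(T)]
                              + E_marginal[log(1 - sigmoid(T))]),
    with equality at the optimal classifier, so maximizing this bound is
    the same (up to constants) as minimizing the binary cross-entropy of
    a classifier separating joint pairs from shuffled (marginal) pairs.
    """
    t_joint = critic(x, y)  # logits on paired (joint) samples
    y_shuffled = y[torch.randperm(y.shape[0])]  # break pairing => product of marginals
    t_marginal = critic(x, y_shuffled)
    return math.log(2.0) + 0.5 * (
        F.logsigmoid(t_joint).mean() + F.logsigmoid(-t_marginal).mean()
    )
```

Training ascends this bound (equivalently, descends the classifier's cross-entropy); the paper's new contribution, passing the resulting JSD estimate through its JSD-to-KLD inequality to obtain a guaranteed lower bound on MI, is not reproduced here.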
Related papers
- Beyond I-Con: Exploring New Dimension of Distance Measures in Representation Learning [7.8851393122408515]
We present Beyond I-Con, a framework that enables systematic discovery of novel loss functions. Our results highlight the importance of considering divergence and similarity kernel choices in representation learning optimization.
arXiv Detail & Related papers (2025-09-05T01:23:59Z)
- The Hidden Link Between RLHF and Contrastive Learning [56.45346439723488]
We show that Reinforcement Learning from Human Feedback (RLHF) and the simple Direct Preference Optimization (DPO) can be interpreted from the perspective of mutual information (MI). Within this framework, both RLHF and DPO can be interpreted as methods that perform contrastive learning based on the positive and negative samples derived from the base model. We propose Mutual Information Optimization (MIO) to mitigate the late-stage decline in chosen-likelihood observed in DPO.
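To make the contrastive reading concrete, here is the standard DPO loss as a short PyTorch sketch (tensor inputs are illustrative; MIO, the paper's proposed remedy, is not reproduced here):

```python
import torch
import torch.nn.functional as F


def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO objective, written to expose its contrastive structure.

    The chosen response plays the role of the positive sample and the
    rejected response the negative one; the loss widens their
    reference-adjusted log-likelihood margin, as a contrastive loss would.
    """
    margin = beta * (
        (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    )
    return -F.logsigmoid(margin).mean()
```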
arXiv Detail & Related papers (2025-06-27T18:51:25Z)
- A Deep Bayesian Nonparametric Framework for Robust Mutual Information Estimation [9.68824512279232]
Mutual Information (MI) is a crucial measure for capturing dependencies between variables. We present a solution for training an MI estimator by constructing the MI loss with a finite representation of the Dirichlet process posterior to incorporate regularization. We explore the application of our estimator in maximizing MI between the data space and the latent space of a variational autoencoder.
arXiv Detail & Related papers (2025-03-11T21:27:48Z)
- Uncertainty quantification for Markov chain induced martingales with application to temporal difference learning [55.197497603087065]
We analyze the performance of the Temporal Difference (TD) learning algorithm with linear function approximation. We establish novel and general high-dimensional concentration inequalities and Berry-Esseen bounds for vector-valued martingales induced by Markov chains.
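As context for the analysis, a minimal sketch of the analyzed algorithm, TD(0) with linear function approximation, is given below (the feature map `phi` and the transition stream are illustrative assumptions, not the paper's setup):

```python
import numpy as np


def td0_linear(transitions, phi, dim, alpha=0.05, gamma=0.9):
    """Estimate the value function V(s) ~ phi(s) @ w from sampled transitions.

    The stochastic update w <- w + alpha * delta * phi(s), with TD error
    delta = r + gamma * phi(s') @ w - phi(s) @ w, generates exactly the kind
    of Markov-chain-induced martingale the paper's bounds concern.
    """
    w = np.zeros(dim)
    for s, r, s_next in transitions:
        delta = r + gamma * phi(s_next) @ w - phi(s) @ w  # TD error
        w += alpha * delta * phi(s)                       # stochastic update
    return w
```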
arXiv Detail & Related papers (2025-02-19T15:33:55Z)
- Tight Mutual Information Estimation With Contrastive Fenchel-Legendre Optimization [69.07420650261649]
We introduce a novel, simple, and powerful contrastive MI estimator named FLO.
Empirically, our FLO estimator overcomes the limitations of its predecessors and learns more efficiently.
The utility of FLO is verified using an extensive set of benchmarks, which also reveals the trade-offs in practical MI estimation.
arXiv Detail & Related papers (2021-07-02T15:20:41Z)
- Reducing the Variance of Variational Estimates of Mutual Information by Limiting the Critic's Hypothesis Space to RKHS [0.0]
Mutual information (MI) is an information-theoretic measure of dependency between two random variables.
Recent methods parameterize probability distributions, or the critic, as a neural network to approximate unknown density ratios.
We argue that the high variance characteristic is due to the uncontrolled complexity of the critic's hypothesis space.
arXiv Detail & Related papers (2020-11-17T14:32:48Z)
- Rethink Maximum Mean Discrepancy for Domain Adaptation [77.2560592127872]
This paper theoretically proves two essential facts: 1) minimizing the Maximum Mean Discrepancy is equivalent to maximizing the source and target intra-class distances respectively while jointly minimizing their variance with some implicit weights, so that feature discriminability degrades.
Experiments on several benchmark datasets not only validate the theoretical results but also demonstrate that our approach substantially outperforms comparable state-of-the-art methods.
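For concreteness, the discrepancy under analysis can be estimated as follows; this is a minimal, illustrative sketch of a biased squared-MMD estimator with an RBF kernel (batch inputs and the kernel bandwidth are assumptions, not the paper's setup):

```python
import torch


def mmd2_rbf(x, y, sigma=1.0):
    """Biased estimate of squared MMD between batches x (source) and y (target).

    MMD^2(P, Q) = E[k(x, x')] - 2 E[k(x, y)] + E[k(y, y')], the discrepancy
    whose minimization the paper argues degrades feature discriminability.
    """
    def rbf(a, b):
        d2 = torch.cdist(a, b) ** 2          # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))

    return rbf(x, x).mean() - 2 * rbf(x, y).mean() + rbf(y, y).mean()
```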
arXiv Detail & Related papers (2020-07-01T18:25:10Z)
- CLUB: A Contrastive Log-ratio Upper Bound of Mutual Information [105.73798100327667]
We propose a novel Contrastive Log-ratio Upper Bound (CLUB) of mutual information.
We provide a theoretical analysis of the properties of CLUB and its variational approximation.
Based on this upper bound, we introduce an MI minimization training scheme and further accelerate it with a negative sampling strategy.
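The CLUB estimate itself is simple once a variational approximation q(y|x) of p(y|x) has been fit; as a hedged illustration (not the authors' code; the tensor inputs and fitted model are assumed), it reduces to a difference of two averages:

```python
import torch


def club_upper_bound(logq_joint, logq_shuffled):
    """CLUB estimate: E_joint[log q(y|x)] - E_marginal[log q(y|x)].

    logq_joint holds log q(y_i | x_i) for paired samples under the learned
    variational model q; logq_shuffled holds log q(y_j | x_i) for mismatched
    pairs, approximating the expectation over the product of marginals.
    """
    return logq_joint.mean() - logq_shuffled.mean()
```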
arXiv Detail & Related papers (2020-06-22T05:36:16Z)
- Joint Contrastive Learning for Unsupervised Domain Adaptation [20.799729748233343]
We propose an alternative upper bound on the target error that explicitly considers the joint error to render it more manageable.
We introduce Joint Contrastive Learning to find class-level discriminative features, which is essential for minimizing the joint error.
Experiments on two real-world datasets demonstrate that JCL outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2020-06-18T06:25:34Z)
- Neural Methods for Point-wise Dependency Estimation [129.93860669802046]
We focus on estimating point-wise dependency (PD), which quantitatively measures how likely two outcomes co-occur.
We demonstrate the effectiveness of our approaches in 1) MI estimation, 2) self-supervised representation learning, and 3) cross-modal retrieval.
arXiv Detail & Related papers (2020-06-09T23:26:15Z)