Towards A Unified PAC-Bayesian Framework for Norm-based Generalization Bounds
- URL: http://arxiv.org/abs/2601.08100v1
- Date: Tue, 13 Jan 2026 00:42:22 GMT
- Title: Towards A Unified PAC-Bayesian Framework for Norm-based Generalization Bounds
- Authors: Xinping Yi, Gaojie Jin, Xiaowei Huang, Shi Jin
- Abstract summary: We propose a unified framework for PAC-Bayesian norm-based generalization. The key to our approach is a sensitivity matrix that quantifies the sensitivity of the network outputs to structured weight perturbations. We derive a family of generalization bounds that recover several existing PAC-Bayesian results as special cases.
- Score: 63.47271262149291
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding the generalization behavior of deep neural networks remains a fundamental challenge in modern statistical learning theory. Among existing approaches, PAC-Bayesian norm-based bounds have demonstrated particular promise due to their data-dependent nature and their ability to capture algorithmic and geometric properties of learned models. However, most existing results rely on isotropic Gaussian posteriors, heavy use of spectral-norm concentration for weight perturbations, and largely architecture-agnostic analyses, which together limit both the tightness and practical relevance of the resulting bounds. To address these limitations, in this work, we propose a unified framework for PAC-Bayesian norm-based generalization by reformulating the derivation of generalization bounds as a stochastic optimization problem over anisotropic Gaussian posteriors. The key to our approach is a sensitivity matrix that quantifies the sensitivity of the network outputs to structured weight perturbations, enabling the explicit incorporation of heterogeneous parameter sensitivities and architectural structures. By imposing different structural assumptions on this sensitivity matrix, we derive a family of generalization bounds that recover several existing PAC-Bayesian results as special cases, while yielding bounds that are comparable to or tighter than state-of-the-art approaches. Such a unified framework provides a principled and flexible route to geometry- and structure-aware, interpretable generalization analysis in deep learning.
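To make the abstract's optimization view concrete, here is a minimal sketch of the two standard ingredients it combines; the notation (posterior Q = N(w, Σ), prior P = N(w_0, σ_0² I), sample size n) is assumed for illustration and is not taken from the paper.

```latex
% McAllester-style PAC-Bayesian bound: with probability at least 1 - \delta
% over an i.i.d. sample of size n, simultaneously for all posteriors Q,
\[
  L(Q) \;\le\; \widehat{L}_n(Q)
  + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}} .
\]
% For an anisotropic Gaussian posterior Q = N(w, \Sigma) and an isotropic
% Gaussian prior P = N(w_0, \sigma_0^2 I) over d parameters, the KL term
% has the closed form
\[
  \mathrm{KL}(Q \,\|\, P)
  = \frac{1}{2}\!\left(
      \frac{\operatorname{tr}\Sigma + \lVert w - w_0 \rVert_2^2}{\sigma_0^2}
      - d + d \ln \sigma_0^2 - \ln \det \Sigma
    \right).
\]
```

On this reading, minimizing the right-hand side over Σ, subject to whatever output perturbation the paper's sensitivity matrix permits, is the stochastic optimization problem the abstract refers to.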
Related papers
- Conjugate Learning Theory: Uncovering the Mechanisms of Trainability and Generalization in Deep Neural Networks [0.0]
We develop a conjugate learning theoretical framework based on convex conjugate duality to characterize this learnability property. We demonstrate that training deep neural networks (DNNs) with mini-batch stochastic gradient descent (SGD) achieves global optima of the empirical risk. We derive deterministic and probabilistic bounds on the generalization error based on conditional generalized entropy measures.
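For reference, the convex conjugate duality that the entry above invokes is built on the Fenchel conjugate; the following is the standard definition, not this paper's notation.

```latex
% Fenchel (convex) conjugate of f : R^d -> R \cup {+\infty}:
\[
  f^{*}(y) \;=\; \sup_{x \in \mathbb{R}^d} \big( \langle x, y \rangle - f(x) \big),
\]
% which immediately gives the Fenchel--Young inequality
\[
  f(x) + f^{*}(y) \;\ge\; \langle x, y \rangle ,
\]
% with equality exactly when y is a subgradient of f at x.
```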
arXiv Detail & Related papers (2026-02-18T04:26:55Z)
- Understanding Generalization from Embedding Dimension and Distributional Convergence [13.491874401333021]
We study generalization from a representation-centric perspective and analyze how the geometry of learned embeddings controls the predictive performance of a fixed trained model. We show that the population risk can be bounded by two factors: (i) the intrinsic dimension of the embedding distribution, which determines the rate at which the empirical embedding distribution converges to the population distribution in Wasserstein distance, and (ii) the sensitivity of the downstream mapping from embeddings to predictions, characterized by Lipschitz constants.
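A plausible way to write out that two-factor decomposition, under assumed notation (a K-Lipschitz downstream map g, embedding distribution μ with empirical counterpart μ̂_n from n samples, intrinsic dimension d):

```latex
% Kantorovich duality: a K-Lipschitz test function cannot separate two
% distributions by more than K times their 1-Wasserstein distance,
\[
  \big| \mathbb{E}_{z \sim \mu}[g(z)] - \mathbb{E}_{z \sim \widehat{\mu}_n}[g(z)] \big|
  \;\le\; K \, W_1(\widehat{\mu}_n, \mu),
\]
% and classical empirical-measure rates (e.g., Weed and Bach, 2019) scale
% with the intrinsic dimension d of the support, not the ambient dimension:
\[
  \mathbb{E}\big[ W_1(\widehat{\mu}_n, \mu) \big] \;\lesssim\; n^{-1/d},
  \qquad d > 2 .
\]
```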
arXiv Detail & Related papers (2026-01-30T09:32:04Z)
- Mutual Information Free Topological Generalization Bounds via Stability [46.63069403118614]
We introduce a novel learning-theoretic framework that departs from existing strategies. We prove that the generalization error of trajectory-stable algorithms can be upper bounded in terms of topological data analysis (TDA) quantities.
arXiv Detail & Related papers (2025-07-09T12:03:25Z)
- Generalization Bounds of Surrogate Policies for Combinatorial Optimization Problems [53.03951222945921]
We analyze smoothed (perturbed) policies, adding controlled random perturbations to the direction used by the linear oracle. Our main contribution is a generalization bound that decomposes the excess risk into perturbation bias, statistical estimation error, and optimization error. We illustrate the scope of the results on applications such as vehicle scheduling, highlighting how smoothing enables both tractable training and controlled generalization.
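A minimal sketch of the smoothing step that entry describes, in the style of perturbed optimizers: the combinatorial solver is treated as a black-box linear oracle, and its solution is averaged over Gaussian perturbations of the score direction. The function names and defaults here are illustrative assumptions, not the paper's API.

```python
import numpy as np

def smoothed_policy(theta, linear_oracle, eps=0.5, n_samples=100, rng=None):
    """Monte Carlo estimate of a perturbation-smoothed policy.

    Averages the linear oracle's solutions over Gaussian perturbations of the
    score direction `theta`; `linear_oracle(d)` is any solver returning
    argmax_{y in Y} <y, d> as a vector.
    """
    rng = np.random.default_rng(rng)
    solutions = [
        linear_oracle(theta + eps * rng.standard_normal(theta.shape))
        for _ in range(n_samples)
    ]
    return np.mean(solutions, axis=0)  # expected (smoothed) solution

# Toy usage: the "oracle" picks the best single item (a one-hot vertex).
if __name__ == "__main__":
    oracle = lambda d: np.eye(len(d))[np.argmax(d)]
    print(smoothed_policy(np.array([0.1, 0.9, 0.5]), oracle, eps=0.3))
```

Larger eps increases the perturbation bias but makes the smoothed policy easier to differentiate through and to analyze, which is the trade-off the bound above makes explicit.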
arXiv Detail & Related papers (2024-07-24T12:00:30Z)
- A PAC-Bayesian Perspective on the Interpolating Information Criterion [54.548058449535155]
We show how a PAC-Bayes bound is obtained for a general class of models, characterizing factors which influence performance in the interpolating regime.
We quantify how the test error of overparameterized models achieving effectively zero training error depends on the quality of the implicit regularization imposed by, e.g., the combination of model and parameter-initialization scheme.
arXiv Detail & Related papers (2023-11-13T01:48:08Z)
- PAC-Chernoff Bounds: Understanding Generalization in the Interpolation Regime [6.645111950779666]
This paper introduces a distribution-dependent PAC-Chernoff bound that exhibits perfect tightness for interpolators. We present a unified theoretical framework revealing why certain interpolators generalize exceptionally well while others falter.
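For context, the Cramér–Chernoff device that such a bound builds on, stated for i.i.d. losses under assumed notation (per-hypothesis log-moment generating function Λ_h and its Legendre transform Λ*_h; this is the standard tail bound, not the paper's refined statement):

```latex
% For i.i.d. losses \ell(h, Z_i), the Chernoff/Cram\'er bound controls the
% deviation of the empirical risk \hat{L}_n(h) from the population risk L(h):
\[
  \Pr\big( L(h) - \widehat{L}_n(h) \ge t \big)
  \;\le\; \exp\!\big( - n \, \Lambda^{*}_{h}(t) \big),
  \qquad
  \Lambda^{*}_{h}(t) = \sup_{\lambda > 0}
    \big( \lambda t - \Lambda_h(\lambda) \big),
\]
% where \Lambda_h(\lambda) = \ln \mathbb{E}\, e^{\lambda ( L(h) - \ell(h, Z) )}.
% The rate function depends on the loss distribution under h, which is what
% makes the resulting bound distribution-dependent rather than worst-case.
```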
arXiv Detail & Related papers (2023-06-19T14:07:10Z)
- Out-of-distributional risk bounds for neural operators with applications to the Helmholtz equation [6.296104145657063]
Existing neural operators (NOs) do not necessarily perform well for all physics problems.
We propose a subfamily of NOs enabling an enhanced empirical approximation of the nonlinear operator mapping wave speed to solution.
Our experiments reveal certain surprises regarding generalization and the relevance of introducing depth.
We conclude by proposing a hypernetwork version of the subfamily of NOs as a surrogate model for the aforementioned forward operator.
arXiv Detail & Related papers (2023-01-27T03:02:12Z)
- Towards Principled Disentanglement for Domain Generalization [90.9891372499545]
A fundamental challenge for machine learning models is generalizing to out-of-distribution (OOD) data.
We first formalize the OOD generalization problem as a constrained optimization problem, called Disentanglement-constrained Domain Generalization (DDG).
Based on this constrained formulation, we propose a primal-dual algorithm for joint representation disentanglement and domain generalization.
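As a reference point for what a primal-dual scheme on a constrained learning problem looks like, here is a generic sketch of gradient descent-ascent on a Lagrangian; this is not DDG's actual algorithm, and all names are illustrative.

```python
import numpy as np

def primal_dual(f_grad, g, g_grad, x0, lr=0.05, steps=2000):
    """Generic primal-dual iteration for: min f(x) subject to g(x) <= 0.

    Descends the Lagrangian L(x, lam) = f(x) + lam * g(x) in the primal
    variable x, ascends it in the multiplier lam, and projects lam back
    onto lam >= 0 after each step.
    """
    x, lam = np.asarray(x0, dtype=float), 0.0
    for _ in range(steps):
        x = x - lr * (f_grad(x) + lam * g_grad(x))   # primal descent
        lam = max(0.0, lam + lr * g(x))              # dual ascent + projection
    return x, lam

# Toy usage: minimize ||x||^2 subject to 1 - x[0] <= 0 (i.e., x[0] >= 1).
if __name__ == "__main__":
    x, lam = primal_dual(
        f_grad=lambda x: 2 * x,
        g=lambda x: 1.0 - x[0],
        g_grad=lambda x: np.array([-1.0, 0.0]),
        x0=np.zeros(2),
    )
    print(x, lam)  # x approaches [1, 0], lam approaches 2
```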
arXiv Detail & Related papers (2021-11-27T07:36:32Z)
- Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of a stochastic optimization algorithm can be bounded in terms of the 'complexity' of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)