Revisiting Generalization Power of a DNN in Terms of Symbolic Interactions
- URL: http://arxiv.org/abs/2502.10162v1
- Date: Fri, 14 Feb 2025 13:46:14 GMT
- Title: Revisiting Generalization Power of a DNN in Terms of Symbolic Interactions
- Authors: Lei Cheng, Junpeng Zhang, Qihan Ren, Quanshi Zhang
- Abstract summary: This paper aims to analyze the generalization power of deep neural networks (DNNs) from the perspective of interactions.
We find that the generalization power of a DNN can be explained as the generalization power of the interactions.
- Score: 24.77155634841885
- Abstract: This paper aims to analyze the generalization power of deep neural networks (DNNs) from the perspective of interactions. Unlike previous analyses of a DNN's generalization power in a high-dimensional feature space, we find that the generalization power of a DNN can be explained as the generalization power of its interactions. We find that generalizable interactions follow a decay-shaped distribution, while non-generalizable interactions follow a spindle-shaped distribution. Furthermore, our theory can effectively disentangle these two types of interactions from a DNN. Experiments verify that our theory closely matches the real interactions in a DNN.
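In this line of work, the effect of a set S of input variables is usually quantified as a Harsanyi interaction, I(S) = sum over T ⊆ S of (-1)^(|S|-|T|) v(x_T), where v(x_T) is the network output when only the variables in T keep their true values and all other variables are masked to a baseline. The sketch below illustrates that computation; the callable v, the baseline masking values, and the choice of scalar output are illustrative assumptions rather than the paper's exact protocol.

```python
from itertools import combinations

def harsanyi_interaction(v, x, baseline, S):
    """Compute I(S) = sum_{T subseteq S} (-1)^(|S|-|T|) * v(x_T).

    x        : the original input sample (a sequence of variables).
    baseline : masking values used for variables that are removed.
    v        : any callable mapping a masked copy of x to a scalar output.
    """
    S = list(S)
    total = 0.0
    for r in range(len(S) + 1):
        for T in combinations(S, r):
            x_T = list(baseline)      # start from the fully masked sample
            for i in T:
                x_T[i] = x[i]         # restore only the variables kept in T
            total += (-1) ** (len(S) - len(T)) * v(x_T)
    return total
```

Here v could be, for instance, a small wrapper that feeds the masked sample through a trained classifier and returns the log-odds of the ground-truth class.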
Related papers
- Generalization of Graph Neural Networks is Robust to Model Mismatch [84.01980526069075]
Graph neural networks (GNNs) have demonstrated their effectiveness in various tasks supported by their generalization capabilities.
In this paper, we examine GNNs that operate on geometric graphs generated from manifold models.
Our analysis reveals the robustness of the GNN generalization in the presence of such model mismatch.
arXiv Detail & Related papers (2024-08-25T16:00:44Z)
- Towards the Dynamics of a DNN Learning Symbolic Interactions [20.493304123269446]
A series of theorems has been proven in recent years showing that, for a given input sample, a small set of interactions between input variables can be considered as primitive inference patterns.
This study proves the two-phase dynamics of interactions, providing a theoretical mechanism for how the generalization power of a DNN changes during the training process.
arXiv Detail & Related papers (2024-07-27T07:34:49Z)
- A Manifold Perspective on the Statistical Generalization of Graph Neural Networks [84.01980526069075]
We take a manifold perspective to establish the statistical generalization theory of GNNs on graphs sampled from a manifold in the spectral domain.
We prove that the generalization bounds of GNNs decrease linearly with the size of the graphs in the logarithmic scale, and increase linearly with the spectral continuity constants of the filter functions.
arXiv Detail & Related papers (2024-06-07T19:25:02Z)
- Two-Phase Dynamics of Interactions Explains the Starting Point of a DNN Learning Over-Fitted Features [68.3512123520931]
We investigate the dynamics of a deep neural network (DNN) learning interactions.
In this paper, we discover that the DNN learns interactions in two phases.
The first phase mainly penalizes interactions of medium and high orders, and the second phase mainly learns interactions of gradually increasing orders.
arXiv Detail & Related papers (2024-05-16T17:13:25Z)
- Defining and Extracting generalizable interaction primitives from DNNs [22.79131582164054]
We develop a new method to extract interactions that are shared by different deep neural networks (DNNs).
Experiments show that the extracted interactions can better reflect common knowledge shared by different DNNs.
arXiv Detail & Related papers (2024-01-29T17:21:41Z)
- Transferability of coVariance Neural Networks and Application to Interpretable Brain Age Prediction using Anatomical Features [119.45320143101381]
Graph convolutional networks (GCNs) leverage topology-driven graph convolutional operations to combine information across the graph for inference tasks.
We have studied GCNs with covariance matrices as graphs in the form of coVariance neural networks (VNNs).
VNNs inherit the scale-free data processing architecture from GCNs, and here we show that VNNs exhibit transferability of performance across datasets whose covariance matrices converge to a limit object.
arXiv Detail & Related papers (2023-05-02T22:15:54Z)
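As a rough illustration of the coVariance filtering that VNNs build on, the sketch below replaces the graph shift operator of a GCN with the sample covariance matrix C, as in the VNN construction; the filter taps h and the ReLU nonlinearity are illustrative choices rather than the exact architecture of the paper.

```python
import numpy as np

def covariance_filter(X, C, h):
    """Apply a coVariance filter z = sum_k h[k] * C^k @ X, the VNN analogue of a
    graph convolution in which the sample covariance matrix C plays the role of
    the graph shift operator.

    X : (num_features, num_samples) data matrix.
    C : (num_features, num_features) sample covariance matrix.
    h : sequence of filter taps (learned weights in a trained VNN).
    """
    z = np.zeros_like(X, dtype=float)
    Ck = np.eye(C.shape[0])           # C^0 = identity
    for hk in h:
        z += hk * (Ck @ X)
        Ck = Ck @ C                   # advance to the next power of C
    return z

def vnn_layer(X, C, h):
    """One VNN layer: coVariance filter followed by a pointwise nonlinearity."""
    return np.maximum(covariance_filter(X, C, h), 0.0)
```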
- Explaining Generalization Power of a DNN Using Interactive Concepts [24.712192363947096]
This paper explains the generalization power of a deep neural network (DNN) from the perspective of interactions.
We also discover the detouring dynamics of learning complex concepts, which explains both the high learning difficulty and the low generalization power of complex concepts.
arXiv Detail & Related papers (2023-02-25T14:44:40Z)
- Confidence Dimension for Deep Learning based on Hoeffding Inequality and Relative Evaluation [44.393256948610016]
We propose to use multiple factors to measure and rank the relative generalization of deep neural networks (DNNs) based on a new concept of confidence dimension (CD).
Our CD yields a consistent and reliable measure and ranking for both full-precision DNNs and binary neural networks (BNNs) on all the tasks.
arXiv Detail & Related papers (2022-03-17T04:43:43Z)
- Discovering and Explaining the Representation Bottleneck of DNNs [21.121270460158712]
This paper explores the bottleneck of feature representations of deep neural networks (DNNs).
We focus on the multi-order interaction between input variables, where the order represents the complexity of interactions.
We discover that a DNN is more likely to encode both overly simple and overly complex interactions, but usually fails to learn interactions of intermediate complexity.
arXiv Detail & Related papers (2021-11-11T14:35:20Z)
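For reference, the multi-order interaction underlying the entry above is commonly defined as I^(m)(i,j) = E over contexts S ⊆ N\{i,j} with |S| = m of [v(S∪{i,j}) - v(S∪{i}) - v(S∪{j}) + v(S)], where the order m (the size of the context S) measures interaction complexity. Below is a minimal Monte-Carlo sketch of this estimate; the masking baseline, the callable v, and the sampling budget are illustrative assumptions rather than the paper's exact protocol.

```python
import random

def masked_output(v, x, baseline, keep):
    """Network output v(x_keep): variables in `keep` take their true values,
    all other variables are masked to the baseline."""
    x_masked = list(baseline)
    for k in keep:
        x_masked[k] = x[k]
    return v(x_masked)

def multi_order_interaction(v, x, baseline, i, j, m, n_samples=100, seed=0):
    """Monte-Carlo estimate of the order-m interaction between variables i and j,
    averaging v(S∪{i,j}) - v(S∪{i}) - v(S∪{j}) + v(S) over random contexts S of size m."""
    rng = random.Random(seed)
    others = [k for k in range(len(x)) if k not in (i, j)]
    total = 0.0
    for _ in range(n_samples):
        S = rng.sample(others, m)
        total += (masked_output(v, x, baseline, S + [i, j])
                  - masked_output(v, x, baseline, S + [i])
                  - masked_output(v, x, baseline, S + [j])
                  + masked_output(v, x, baseline, S))
    return total / n_samples
```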
- On Connections between Regularizations for Improving DNN Robustness [67.28077776415724]
This paper analyzes regularization terms recently proposed for improving the adversarial robustness of deep neural networks (DNNs).
We study possible connections between several effective methods, including input-gradient regularization, Jacobian regularization, curvature regularization, and a cross-Lipschitz functional.
arXiv Detail & Related papers (2020-07-04T23:43:32Z)
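A minimal PyTorch sketch of the first of these regularizers, input-gradient regularization, makes the common structure concrete; the penalty weight lam and the squared-L2 form are illustrative assumptions rather than the setting of any specific method discussed in the paper.

```python
import torch
import torch.nn.functional as F

def input_gradient_regularized_loss(model, x, y, lam=0.1):
    """Cross-entropy loss plus an input-gradient penalty lam * ||grad_x CE(f(x), y)||^2."""
    x = x.clone().detach().requires_grad_(True)
    ce = F.cross_entropy(model(x), y)
    # Gradient of the loss w.r.t. the input, kept in the graph so the penalty
    # itself is differentiable w.r.t. the model parameters.
    grad_x, = torch.autograd.grad(ce, x, create_graph=True)
    penalty = grad_x.pow(2).sum(dim=tuple(range(1, grad_x.dim()))).mean()
    return ce + lam * penalty
```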
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.