Analysis of Dirichlet Energies as Over-smoothing Measures
- URL: http://arxiv.org/abs/2512.09890v1
- Date: Wed, 10 Dec 2025 18:17:33 GMT
- Title: Analysis of Dirichlet Energies as Over-smoothing Measures
- Authors: Anna Bison, Alessandro Sperduti
- Abstract summary: We analyze the distinctions between two functionals often used as over-smoothing measures. We highlight critical distinctions necessary to select the metric that is spectrally compatible with the GNN architecture.
- Score: 48.49843360392601
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We analyze the distinctions between two functionals often used as over-smoothing measures: the Dirichlet energies induced by the unnormalized graph Laplacian and the normalized graph Laplacian. We demonstrate that the latter fails to satisfy the axiomatic definition of a node-similarity measure proposed by Rusch et al. By formalizing fundamental spectral properties of these two definitions, we highlight critical distinctions necessary to select the metric that is spectrally compatible with the GNN architecture, thereby resolving ambiguities in monitoring the dynamics.
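The two functionals contrasted in the abstract can both be written as quadratic forms tr(X^T L X): the unnormalized Dirichlet energy uses L = D - A, while the normalized one uses L_sym = I - D^{-1/2} A D^{-1/2}. A minimal sketch of the difference (not from the paper; the toy path graph and random features below are hypothetical) is that the unnormalized energy vanishes exactly on constant node features, whereas the normalized energy vanishes on degree-scaled constants instead:

```python
import numpy as np

# Hypothetical toy graph: a path on 4 nodes.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)                                  # node degrees
L = np.diag(d) - A                                 # unnormalized Laplacian
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L_sym = np.eye(len(d)) - D_inv_sqrt @ A @ D_inv_sqrt  # normalized Laplacian

def dirichlet_energy(X, lap):
    """Quadratic form tr(X^T L X) induced by a Laplacian."""
    return float(np.trace(X.T @ lap @ X))

X = np.random.default_rng(0).normal(size=(4, 2))   # random node features
E_unnorm = dirichlet_energy(X, L)
E_norm = dirichlet_energy(X, L_sym)

# Kernels differ: L annihilates constants, L_sym annihilates D^{1/2}-scaled
# constants, so the two measures certify "smoothness" of different signals.
assert abs(dirichlet_energy(np.ones((4, 1)), L)) < 1e-12
assert abs(dirichlet_energy(np.sqrt(d)[:, None], L_sym)) < 1e-12
```

This kernel mismatch is one concrete reason the two energies can disagree when used to monitor over-smoothing in a GNN, which is the ambiguity the paper sets out to resolve.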
Related papers
- Analytical topological invariants for 2D non-Hermitian phases using Morse theory [0.0]
We analytically calculate the 2D Zak phase for a 2D non-Hermitian SSH-type Hamiltonian that supports a rich structure and edge currents. Although the band structure breaks down at exceptional points, we show that a specific phase-based topological invariant remains well-defined.
arXiv Detail & Related papers (2026-01-30T18:38:21Z)
- Measuring Over-smoothing beyond Dirichlet energy [0.0]
We propose a family of node similarity measures based on the energy of higher-order feature derivatives. We show that attention-based Graph Neural Networks (GNNs) suffer from over-smoothing when evaluated under these proposed metrics.
arXiv Detail & Related papers (2025-12-07T10:53:22Z)
- $η$ regularisation and the functional measure [0.0]
We revisit Fujikawa's path integral formulation of the chiral anomaly and develop a generalised framework for systematically defining a regularised functional measure. This construction extends the $η$ regularisation scheme to operator language, making the connection between spectral asymmetry and measure transformation fully explicit.
arXiv Detail & Related papers (2025-05-02T14:08:45Z)
- Relative Representations: Topological and Geometric Perspectives [50.85040046976025]
Relative representations are an established approach to zero-shot model stitching. We introduce a normalization procedure in the relative transformation, resulting in invariance to non-isotropic rescalings and permutations. We also propose deploying topological densification when fine-tuning relative representations, a topological regularization loss encouraging clustering within classes.
arXiv Detail & Related papers (2024-09-17T08:09:22Z)
- Nonparametric Partial Disentanglement via Mechanism Sparsity: Sparse Actions, Interventions and Sparse Temporal Dependencies [58.179981892921056]
This work introduces a novel principle for disentanglement we call mechanism sparsity regularization.
We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors.
We show that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
arXiv Detail & Related papers (2024-01-10T02:38:21Z)
- Symmetry & Critical Points for Symmetric Tensor Decomposition Problems [6.123324869194196]
We consider the nonconvex optimization problem associated with the decomposition of a real symmetric tensor into a sum of rank-one terms. Use is made of the rich symmetry structure to construct infinite families of critical points represented by Puiseux series in the problem dimension.
arXiv Detail & Related papers (2023-06-13T16:25:30Z)
- Enriching Disentanglement: From Logical Definitions to Quantitative Metrics [59.12308034729482]
Disentangling the explanatory factors in complex data is a promising approach for data-efficient representation learning.
We establish relationships between logical definitions and quantitative metrics to derive theoretically grounded disentanglement metrics.
We empirically demonstrate the effectiveness of the proposed metrics by isolating different aspects of disentangled representations.
arXiv Detail & Related papers (2023-05-19T08:22:23Z)
- Evaluating the Robustness of Interpretability Methods through Explanation Invariance and Equivariance [72.50214227616728]
Interpretability methods are valuable only if their explanations faithfully describe the explained model.
We consider neural networks whose predictions are invariant under a specific symmetry group.
arXiv Detail & Related papers (2023-04-13T17:59:03Z)
- Boundary theories of critical matchgate tensor networks [59.433172590351234]
Key aspects of the AdS/CFT correspondence can be captured in terms of tensor network models on hyperbolic lattices.
For tensors fulfilling the matchgate constraint, these have previously been shown to produce disordered boundary states.
We show that these Hamiltonians exhibit multi-scale quasiperiodic symmetries captured by an analytical toy model.
arXiv Detail & Related papers (2021-10-06T18:00:03Z)
- Symmetry Breaking in Symmetric Tensor Decomposition [44.181747424363245]
We consider the optimization problem associated with computing the rank decomposition of symmetric tensors. We show that critical points of the loss function are detected by standard optimization methods.
arXiv Detail & Related papers (2021-03-10T18:11:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.