Laplacian-LoRA: Delaying Oversmoothing in Deep GCNs via Spectral Low-Rank Adaptation
- URL: http://arxiv.org/abs/2602.07278v1
- Date: Sat, 07 Feb 2026 00:03:19 GMT
- Title: Laplacian-LoRA: Delaying Oversmoothing in Deep GCNs via Spectral Low-Rank Adaptation
- Authors: Sai Vamsi Alisetti
- Abstract summary: We propose Laplacian-LoRA, a low-rank adaptation of standard graph convolutional networks (GCNs). Rather than redesigning message passing, Laplacian-LoRA introduces a learnable, spectrally anchored correction to the fixed Laplacian propagation operator. We show that Laplacian-LoRA consistently delays the onset of oversmoothing, extending the effective depth of GCNs by up to a factor of two.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Oversmoothing is a fundamental limitation of deep graph convolutional networks (GCNs), causing node representations to collapse as depth increases. While many prior approaches mitigate this effect through architectural modifications or residual mechanisms, the underlying spectral cause of oversmoothing is often left implicit. We propose Laplacian-LoRA, a simple and interpretable low-rank spectral adaptation of standard GCNs. Rather than redesigning message passing, Laplacian-LoRA introduces a learnable, spectrally anchored correction to the fixed Laplacian propagation operator, selectively weakening contraction while preserving stability and the low-pass inductive bias. Across multiple benchmark datasets and depths, Laplacian-LoRA consistently delays the onset of oversmoothing, extending the effective depth of GCNs by up to a factor of two. Embedding variance diagnostics confirm that these gains arise from delayed representational collapse, while learned spectral analysis demonstrates that the correction is smooth, bounded, and well behaved. Our results show that oversmoothing is a depth-dependent spectral phenomenon that can be systematically delayed through modest, low-rank adaptation of the graph propagation operator.
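The core idea in the abstract, adding a learnable low-rank correction to the fixed normalized propagation operator, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the layer shape, initialization scale, and helper names are assumptions.

```python
# Hedged sketch of a GCN-style layer whose fixed normalized propagation
# operator A_hat is augmented with a learnable rank-r correction U V^T,
# in the spirit of Laplacian-LoRA (not the paper's actual code).
import numpy as np

def gcn_layer_lora(X, A_hat, W, U, V):
    """One propagation step: ReLU((A_hat + U V^T) X W).

    X:     (n, d)   node features
    A_hat: (n, n)   fixed symmetric-normalized adjacency (low-pass operator)
    W:     (d, d')  layer weights
    U, V:  (n, r)   rank-r correction factors, r << n
    """
    P = A_hat + U @ V.T          # spectrally corrected propagation operator
    return np.maximum(P @ X @ W, 0.0)

# Toy example: a 4-node cycle graph with self-loops.
n, d, r = 4, 3, 1
A = np.roll(np.eye(n), 1, axis=0) + np.roll(np.eye(n), -1, axis=0)
A_tilde = A + np.eye(n)                          # add self-loops
D_inv_sqrt = np.diag(A_tilde.sum(1) ** -0.5)
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt        # symmetric normalization

rng = np.random.default_rng(0)
X = rng.standard_normal((n, d))
W = rng.standard_normal((d, d)) * 0.1
U = rng.standard_normal((n, r)) * 0.01           # small init: starts near a plain GCN
V = rng.standard_normal((n, r)) * 0.01

H = gcn_layer_lora(X, A_hat, W, U, V)
print(H.shape)
```

With a small initialization of U and V, the layer begins as an ordinary GCN layer and the correction is learned on top, consistent with the abstract's goal of weakening contraction while preserving the low-pass inductive bias.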
Related papers
- The Malignant Tail: Spectral Segregation of Label Noise in Over-Parameterized Networks [0.0]
We experimentally isolate the Malignant Tail, a failure mode where networks functionally segregate signal and noise. We show that untrained networks actively segregate noise, allowing post-hoc Explicit Spectral Truncation to surgically prune the noise-dominated subspace. Our findings suggest that under label noise, excess spectral capacity is not harmless redundancy but a latent structural liability.
arXiv Detail & Related papers (2026-03-02T16:39:42Z) - Spectral Gating Networks [65.9496901693099]
We introduce Spectral Gating Networks (SGN) to bring frequency-rich expressivity to feed-forward networks. SGN augments a standard activation pathway with a compact spectral pathway and learnable gates that allow the model to start from a stable base behavior. It consistently improves accuracy-efficiency trade-offs under comparable computational budgets.
arXiv Detail & Related papers (2026-02-07T20:00:49Z) - Spectral Evolution Search: Efficient Inference-Time Scaling for Reward-Aligned Image Generation [45.717539734334906]
Inference-time scaling offers a versatile paradigm for aligning visual generative models with downstream objectives without parameter updates. We show that existing approaches that optimize the high-dimensional initial noise suffer from severe inefficiency, as many search directions exert negligible influence on the final generation. We propose Spectral Evolution Search (SES), a plug-and-play framework for initial noise optimization that executes gradient-free evolutionary search within a low-frequency subspace.
arXiv Detail & Related papers (2026-02-03T07:19:39Z) - Spectral Gradient Descent Mitigates Anisotropy-Driven Misalignment: A Case Study in Phase Retrieval [13.218607858857295]
Spectral gradient methods modify gradient updates by preserving directional information while discarding scale. We investigate the mechanisms underlying these gains through a dynamical analysis of a nonlinear phase retrieval model.
arXiv Detail & Related papers (2026-01-30T07:12:58Z) - The Homogeneity Trap: Spectral Collapse in Doubly-Stochastic Deep Networks [1.7523718031184992]
We identify a critical spectral degradation phenomenon inherent to structure-preserving deep architectures. We show that the maximum-entropy bias drives the mixing operator towards the uniform barycenter, suppressing the subdominant singular value. We derive a spectral bound linking it to the network's effective depth, showing that high-entropy constraints restrict feature transformation to a shallow receptive field.
arXiv Detail & Related papers (2026-01-05T13:09:42Z) - Frequency Regularization: Unveiling the Spectral Inductive Bias of Deep Neural Networks [21.749207209704906]
We investigate the Spectral Bias of modern Convolutional Neural Networks (CNNs). We introduce a Visual Diagnostic Framework to track the dynamic evolution of weight frequencies during training. We propose a novel metric, the Spectral Suppression Ratio (SSR), to quantify the "low-pass filtering" intensity of different regularizers.
arXiv Detail & Related papers (2025-12-20T11:33:32Z) - SpectrumFM: Redefining Spectrum Cognition via Foundation Modeling [65.65474629224558]
We propose a spectrum foundation model, termed SpectrumFM, which provides a new paradigm for spectrum cognition. An innovative spectrum encoder that exploits convolutional neural networks is proposed to effectively capture both fine-grained local signal structures and high-level global dependencies in the spectrum data. Two novel self-supervised learning tasks, namely masked reconstruction and next-slot signal prediction, are developed for pre-training SpectrumFM, enabling the model to learn rich and transferable representations.
arXiv Detail & Related papers (2025-08-02T14:40:50Z) - Towards Anomaly-Aware Pre-Training and Fine-Tuning for Graph Anomaly Detection [59.042018542376596]
Graph anomaly detection (GAD) has garnered increasing attention in recent years, yet remains challenging due to two key factors. Anomaly-Aware Pre-Training and Fine-Tuning (APF) is a framework to mitigate the challenges in GAD. Comprehensive experiments on 10 benchmark datasets validate the superior performance of APF in comparison to state-of-the-art baselines.
arXiv Detail & Related papers (2025-04-19T09:57:35Z) - Spectral-Spatial Extraction through Layered Tensor Decomposition for Hyperspectral Anomaly Detection [6.292153194561472]
Low-rank tensor representation (LRTR) methods are very useful for hyperspectral anomaly detection (HAD). We first apply non-negative matrix factorization (NMF) to alleviate spectral dimensionality redundancy and extract spectral anomalies. We then employ LRTR to extract spatial anomalies while mitigating spatial redundancy, yielding a highly efficient layered tensor decomposition framework for HAD. Experimental results on the Airport-Beach-Urban and MVTec datasets demonstrate that our approach outperforms state-of-the-art methods in the HAD task.
arXiv Detail & Related papers (2025-03-07T07:08:14Z) - Convergence of mean-field Langevin dynamics: Time and space discretization, stochastic gradient, and variance reduction [49.66486092259376]
The mean-field Langevin dynamics (MFLD) is a nonlinear generalization of the Langevin dynamics that incorporates a distribution-dependent drift.
Recent works have shown that MFLD globally minimizes an entropy-regularized convex functional in the space of measures.
We provide a framework to prove a uniform-in-time propagation of chaos for MFLD that takes into account the errors due to finite-particle approximation, time-discretization, and gradient approximation.
arXiv Detail & Related papers (2023-06-12T16:28:11Z) - Momentum Diminishes the Effect of Spectral Bias in Physics-Informed Neural Networks [72.09574528342732]
Physics-informed neural network (PINN) algorithms have shown promising results in solving a wide range of problems involving partial differential equations (PDEs).
They often fail to converge to desirable solutions when the target function contains high-frequency features, due to a phenomenon known as spectral bias.
In the present work, we exploit neural tangent kernels (NTKs) to investigate the training dynamics of PINNs evolving under stochastic gradient descent with momentum (SGDM).
arXiv Detail & Related papers (2022-06-29T19:03:10Z) - Hyperspectral Image Denoising Using Non-convex Local Low-rank and Sparse Separation with Spatial-Spectral Total Variation Regularization [49.55649406434796]
We propose a novel non-convex approach to robust principal component analysis for HSI denoising.
We develop accurate approximations to both the low-rank and sparse components.
Experiments on both simulated and real HSIs demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-01-08T11:48:46Z)
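The oversmoothing phenomenon that motivates the main paper can be reproduced in a few lines. The toy example below (an illustration under assumed settings, not the paper's experiments) tracks the embedding-variance diagnostic mentioned in the abstract as depth grows under the standard GCN propagation operator.

```python
# Toy illustration of oversmoothing: repeated propagation with the
# symmetric-normalized adjacency collapses node embeddings, the effect
# that Laplacian-LoRA aims to delay. Graph size and depth are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n = 10
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T                                      # random undirected graph
A_tilde = A + np.eye(n)                          # add self-loops
D_inv_sqrt = np.diag(A_tilde.sum(1) ** -0.5)
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt        # GCN propagation operator

X = rng.standard_normal((n, 4))                  # initial node embeddings
variances = []
for depth in range(30):
    variances.append(X.var(axis=0).mean())       # embedding-variance diagnostic
    X = A_hat @ X                                # one (linear) propagation step

# The across-node variance shrinks with depth: representational collapse.
print(round(variances[0], 3), round(variances[-1], 6))
```

Because all non-dominant spectral components of the propagation operator have magnitude below one, deep stacks contract embeddings toward a single dominant direction; the paper's low-rank correction selectively weakens exactly this contraction.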
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.