Decoupling and Damping: Structurally-Regularized Gradient Matching for Multimodal Graph Condensation
- URL: http://arxiv.org/abs/2511.20222v1
- Date: Tue, 25 Nov 2025 11:50:34 GMT
- Title: Decoupling and Damping: Structurally-Regularized Gradient Matching for Multimodal Graph Condensation
- Authors: Lian Shen, Zhendan Chen, Yinhui Jiang, Meijia Song, Ziming Su, Juan Liu, Xiangrong Liu
- Abstract summary: We propose Structurally-Regularized Gradient Matching (SR-GM), a novel condensation framework tailored for multimodal graphs. SR-GM significantly improves accuracy and accelerates convergence compared to baseline methods. This research provides a scalable methodology for multimodal graph-based learning in resource-constrained environments.
- Score: 3.2987327415317895
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In critical web applications such as e-commerce and recommendation systems, multimodal graphs integrating rich visual and textual attributes are increasingly central, yet their large scale introduces substantial computational burdens for training Graph Neural Networks (GNNs). While Graph Condensation (GC) offers a promising solution by synthesizing smaller datasets, existing methods falter in the multimodal setting. We identify a dual challenge causing this failure: (1) conflicting gradients arising from semantic misalignments between modalities, and (2) the GNN's message-passing architecture pathologically amplifying this gradient noise across the graph structure. To address this, we propose Structurally-Regularized Gradient Matching (SR-GM), a novel condensation framework tailored for multimodal graphs. SR-GM introduces two synergistic components: first, a gradient decoupling mechanism that resolves inter-modality conflicts at their source via orthogonal projection; and second, a structural damping regularizer that acts directly on the gradient field. By leveraging the graph's Dirichlet energy, this regularizer transforms the topology from a noise amplifier into a stabilizing force during optimization. Extensive experiments demonstrate that SR-GM significantly improves accuracy and accelerates convergence compared to baseline methods. Ablation studies confirm that addressing both gradient conflict and structural amplification in tandem is essential for achieving superior performance. Moreover, the condensed multimodal graphs exhibit strong cross-architecture generalization and promise to accelerate applications like Neural Architecture Search. This research provides a scalable methodology for multimodal graph-based learning in resource-constrained environments.
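The abstract names two mechanisms: resolving inter-modality gradient conflicts via orthogonal projection, and damping with the graph's Dirichlet energy. The minimal sketch below illustrates both ideas under stated assumptions: the projection follows the generic PCGrad-style rule (project each conflicting gradient onto the normal plane of the other), which may differ from SR-GM's exact formulation, and the function names `decouple_gradients` and `dirichlet_energy` are hypothetical, not from the paper.

```python
import numpy as np

def decouple_gradients(g_a, g_b):
    """If two modality gradients conflict (negative inner product),
    project each onto the normal plane of the other (PCGrad-style).
    A hypothetical sketch; SR-GM's exact projection rule may differ."""
    dot = float(np.dot(g_a, g_b))
    if dot < 0:
        g_a_new = g_a - dot / np.dot(g_b, g_b) * g_b
        g_b_new = g_b - dot / np.dot(g_a, g_a) * g_a
        return g_a_new, g_b_new
    return g_a, g_b

def dirichlet_energy(x, adj):
    """Graph Dirichlet energy tr(X^T L X) = 1/2 * sum_ij a_ij ||x_i - x_j||^2,
    with L the unnormalized Laplacian. Low energy means the signal x
    varies smoothly along edges, penalizing structurally amplified noise."""
    lap = np.diag(adj.sum(axis=1)) - adj  # L = D - A
    return float(np.trace(x.T @ lap @ x))

# Conflicting modality gradients (negative dot product) become
# orthogonal to the other modality's original direction.
g_img, g_txt = np.array([1.0, 0.0]), np.array([-1.0, 1.0])
p_img, p_txt = decouple_gradients(g_img, g_txt)

# A constant signal on a connected graph has zero Dirichlet energy.
adj = np.array([[0.0, 1.0], [1.0, 0.0]])
smooth = dirichlet_energy(np.array([[1.0], [1.0]]), adj)
```

In this sketch the energy term would be added to the gradient-matching loss as a regularizer, turning the topology into the "stabilizing force" the abstract describes.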
Related papers
- LION: A Clifford Neural Paradigm for Multimodal-Attributed Graph Learning [36.90213853456115]
We propose LION to implement alignment-then-fusion in multimodal-attributed graphs. We first construct a modality-aware geometric manifold grounded in Clifford algebra. This geometric-induced high-order graph propagation efficiently achieves modality interaction, facilitating modality alignment.
arXiv Detail & Related papers (2026-01-29T09:30:36Z) - GADPN: Graph Adaptive Denoising and Perturbation Networks via Singular Value Decomposition [6.24191713518868]
GADPN is a graph structure learning framework that adaptively refines graph topology via low-rank denoising and generalized structural perturbation. It achieves state-of-the-art performance while significantly improving efficiency. It shows particularly strong gains on challenging disassortative graphs, validating its ability to robustly learn enhanced graph structures.
arXiv Detail & Related papers (2026-01-13T05:25:32Z) - A General Neural Backbone for Mixed-Integer Linear Optimization via Dual Attention [33.27281529953169]
Mixed-integer linear programming (MILP) is a widely used modeling framework for optimization. Recent advances in deep learning address this challenge by representing MILP instances as variable-constraint bipartite graphs. We present an attention-driven neural architecture that learns expressive representations beyond the pure graph view.
arXiv Detail & Related papers (2026-01-08T02:23:47Z) - Dynamic Deep Graph Learning for Incomplete Multi-View Clustering with Masked Graph Reconstruction Loss [26.31060859315329]
We propose a novel Dynamic Deep Graph Learning for Incomplete Multi-View Clustering with Masked Graph Reconstruction Loss (DGIMVCM). A graph convolutional embedding layer is then designed to extract primary features and refined dynamic view-specific graph structures, leveraging the global graph for imputation of missing views.
arXiv Detail & Related papers (2025-11-14T11:26:38Z) - Dual-Kernel Graph Community Contrastive Learning [14.92920991249099]
Graph Contrastive Learning (GCL) has emerged as a powerful paradigm for training Graph Neural Networks (GNNs). We propose an efficient GCL framework that transforms the input graph into a compact network of interconnected node sets. Our method outperforms state-of-the-art GCL baselines in both effectiveness and scalability.
arXiv Detail & Related papers (2025-11-11T14:20:39Z) - Towards Pre-trained Graph Condensation via Optimal Transport [52.6504753271008]
Graph condensation aims to distill the original graph into a small-scale graph, mitigating redundancy and accelerating GNN training. Conventional GC approaches heavily rely on rigid GNNs and task-specific supervision. Pre-trained Graph Condensation (PreGC) via optimal transport is proposed to transcend the limitations of task- and architecture-dependent GC methods.
arXiv Detail & Related papers (2025-09-18T08:13:24Z) - Fast State-Augmented Learning for Wireless Resource Allocation with Dual Variable Regression [83.27791109672927]
We show how a state-augmented graph neural network (GNN) parametrization for the resource allocation policy circumvents the drawbacks of the ubiquitous dual subgradient methods. Lagrangian-maximizing state-augmented policies are learned during the offline training phase. We prove a convergence result and an exponential probability bound on the excursions of the dual function (iterate) optimality gaps.
arXiv Detail & Related papers (2025-06-23T15:20:58Z) - Graph-Aware Isomorphic Attention for Adaptive Dynamics in Transformers [0.0]
We reformulate the Transformer's attention mechanism as a graph operation. We introduce Sparse GIN-Attention, a fine-tuning approach that employs sparse GINs.
arXiv Detail & Related papers (2025-01-04T22:30:21Z) - Learning Coarse-Grained Dynamics on Graph [4.692217705215042]
We consider a Graph Neural Network (GNN) non-Markovian modeling framework to identify coarse-grained dynamical systems on graphs. Our main idea is to systematically determine the GNN architecture by inspecting how the leading term of the Mori-Zwanzig memory term depends on the coarse-grained interaction coefficients that encode the graph topology.
arXiv Detail & Related papers (2024-05-15T13:25:34Z) - Counterfactual Intervention Feature Transfer for Visible-Infrared Person Re-identification [69.45543438974963]
We find graph-based methods in the visible-infrared person re-identification task (VI-ReID) suffer from bad generalization because of two issues.
The well-trained input features weaken the learning of graph topology, leaving it insufficiently generalized during inference.
We propose a Counterfactual Intervention Feature Transfer (CIFT) method to tackle these problems.
arXiv Detail & Related papers (2022-08-01T16:15:31Z) - ACE-HGNN: Adaptive Curvature Exploration Hyperbolic Graph Neural Network [72.16255675586089]
We propose an Adaptive Curvature Exploration Hyperbolic Graph Neural Network named ACE-HGNN to adaptively learn the optimal curvature according to the input graph and downstream tasks.
Experiments on multiple real-world graph datasets demonstrate significant and consistent improvements in model quality, along with competitive performance and good generalization ability.
arXiv Detail & Related papers (2021-10-15T07:18:57Z) - Adversarial Graph Disentanglement [47.27978741175575]
A real-world graph has a complex topological structure, which is often formed by the interaction of different latent factors.
We propose an Adversarial Disentangled Graph Convolutional Network (ADGCN) for disentangled graph representation learning.
arXiv Detail & Related papers (2021-03-12T14:11:36Z) - Cogradient Descent for Bilinear Optimization [124.45816011848096]
We introduce a Cogradient Descent algorithm (CoGD) to address the bilinear problem.
We solve one variable by considering its coupling relationship with the other, leading to a synchronous gradient descent.
Our algorithm is applied to solve problems with one variable under a sparsity constraint.
arXiv Detail & Related papers (2020-06-16T13:41:54Z) - Tensor Graph Convolutional Networks for Multi-relational and Robust Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs that are represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-to-protein interaction networks.
arXiv Detail & Related papers (2020-03-15T02:33:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.