AMOSL: Adaptive Modality-wise Structure Learning in Multi-view Graph Neural Networks For Enhanced Unified Representation
- URL: http://arxiv.org/abs/2406.02348v1
- Date: Tue, 4 Jun 2024 14:24:30 GMT
- Title: AMOSL: Adaptive Modality-wise Structure Learning in Multi-view Graph Neural Networks For Enhanced Unified Representation
- Authors: Peiyu Liang, Hongchang Gao, Xubin He
- Abstract summary: Multi-view Graph Neural Networks (MVGNNs) excel at leveraging diverse modalities for learning object representation.
Existing methods assume identical local topology structures across modalities, an assumption that overlooks real-world discrepancies.
We propose adaptive modality-wise structure learning (AMoSL) to address these issues.
- Score: 22.84527318463151
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While Multi-view Graph Neural Networks (MVGNNs) excel at leveraging diverse modalities for learning object representations, existing methods assume identical local topology structures across modalities, an assumption that overlooks real-world discrepancies. This causes MVGNNs to struggle with modality fusion and representation denoising. To address these issues, we propose adaptive modality-wise structure learning (AMoSL). AMoSL captures node correspondences between modalities via optimal transport and learns them jointly with graph embeddings. To enable efficient end-to-end training, we employ an efficient solver for the resulting complex bilevel optimization problem. Furthermore, AMoSL adapts to downstream tasks through unsupervised learning on inter-modality distances. The effectiveness of AMoSL is demonstrated by its ability to train more accurate graph classifiers on six benchmark datasets.
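The abstract names the mechanism (optimal-transport node correspondence, learned jointly with graph embeddings) without spelling it out. As a rough illustration only, below is a minimal NumPy sketch of an entropic-OT (Sinkhorn) correspondence between the nodes of two modality graphs, followed by a simple barycentric fusion. The function names, the feature-only cost matrix, and the averaging fusion rule are assumptions made for this sketch, not AMoSL's actual formulation, which also accounts for local topology and optimizes the transport plan end-to-end inside a bilevel problem.

```python
# Minimal sketch of OT-based node correspondence between two modality graphs.
# All names here (sinkhorn, node_correspondence, align_modalities) are
# illustrative placeholders, not AMoSL's actual API.
import numpy as np

def sinkhorn(cost, reg=0.1, n_iters=200):
    """Entropic-OT transport plan between two uniform node distributions.

    cost: (n, m) pairwise cost matrix between modality-A and modality-B nodes.
    Returns an (n, m) transport plan whose rows sum to 1/n and columns to 1/m.
    """
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)  # uniform node marginals
    K = np.exp(-cost / reg)                          # Gibbs kernel
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):                         # Sinkhorn iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

def node_correspondence(emb_a, emb_b, reg=0.1):
    """Soft node correspondence from pairwise embedding distances.

    Assumption: a feature-only cost; AMoSL's cost would also reflect the
    local graph topology of each modality.
    """
    cost = ((emb_a[:, None, :] - emb_b[None, :, :]) ** 2).sum(-1)
    cost = cost / cost.max()  # scale costs so the Gibbs kernel stays stable
    return sinkhorn(cost, reg)

def align_modalities(emb_a, emb_b, plan):
    """Transport modality-B embeddings into A's node order and average."""
    n = emb_a.shape[0]
    barycentric = (plan @ emb_b) * n    # barycentric projection of B onto A
    return 0.5 * (emb_a + barycentric)  # averaging as a stand-in fusion rule

# Toy usage: two modalities of the same 5-node object with 8-dim embeddings.
rng = np.random.default_rng(0)
emb_a, emb_b = rng.normal(size=(5, 8)), rng.normal(size=(5, 8))
plan = node_correspondence(emb_a, emb_b)
unified = align_modalities(emb_a, emb_b, plan)
print(plan.sum(axis=1))  # each row sums to ~0.2 (uniform marginal, n = 5)
print(unified.shape)     # (5, 8) unified representation
```

In the paper's setting the plan is not computed once but learned jointly with the graph embeddings, which is what makes the training problem bilevel; the sketch only isolates the correspondence-then-fuse idea.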
Related papers
- Bridging Large Language Models and Graph Structure Learning Models for Robust Representation Learning [22.993015048941444]
Graph representation learning is crucial for real-world applications but often encounters pervasive noise.
We introduce LangGSL, a framework that integrates the complementary strengths of pre-trained language models and graph structure learning models.
arXiv Detail & Related papers (2024-10-15T22:43:32Z) - AdaRC: Mitigating Graph Structure Shifts during Test-Time [66.40525136929398]
Test-time adaptation (TTA) has attracted attention due to its ability to adapt a pre-trained model to a target domain without re-accessing the source domain.
We propose AdaRC, an innovative framework designed for effective and efficient adaptation to structure shifts in graphs.
arXiv Detail & Related papers (2024-10-09T15:15:40Z) - Language Models are Graph Learners [70.14063765424012]
Language Models (LMs) are challenging the dominance of domain-specific models, including Graph Neural Networks (GNNs) and Graph Transformers (GTs).
We propose a novel approach that empowers off-the-shelf LMs to achieve performance comparable to state-of-the-art GNNs on node classification tasks.
arXiv Detail & Related papers (2024-10-03T08:27:54Z) - Dr.E Bridges Graphs with Large Language Models through Words [12.22063024099311]
We introduce an end-to-end modality-aligning framework for LLM-graph alignment: Dual-Residual Vector Quantized-Variational AutoEncoder.
Our approach is purposefully designed to facilitate token-level alignment with LLMs, enabling an effective translation of the intrinsic 'language' of graphs into comprehensible natural language.
arXiv Detail & Related papers (2024-06-19T16:43:56Z) - Efficient End-to-end Language Model Fine-tuning on Graphs [21.23522552579571]
Learning from Text-Attributed Graphs (TAGs) has attracted significant attention due to its wide range of real-world applications.
We introduce LEADING, a novel and efficient approach for end-to-end fine-tuning of language models on TAGs.
Our proposed approach demonstrates superior performance, achieving state-of-the-art (SOTA) results on the ogbn-arxiv leaderboard.
arXiv Detail & Related papers (2023-12-07T22:35:16Z) - FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z) - Artificial-Spiking Hierarchical Networks for Vision-Language Representation Learning [16.902924543372713]
State-of-the-art methods achieve impressive performance by pre-training on large-scale datasets.
We propose an efficient framework for multimodal alignment by introducing a novel visual semantic module.
Experiments show that the proposed ASH-Nets achieve competitive results.
arXiv Detail & Related papers (2023-08-18T10:40:25Z) - Learning Strong Graph Neural Networks with Weak Information [64.64996100343602]
We develop a principled approach to the problem of graph learning with weak information (GLWI).
We propose D$^2$PT, a dual-channel GNN framework that performs long-range information propagation not only on the input graph with incomplete structure, but also on a global graph that encodes global semantic similarities.
arXiv Detail & Related papers (2023-05-29T04:51:09Z) - Interpolation-based Correlation Reduction Network for Semi-Supervised Graph Learning [49.94816548023729]
We propose a novel graph contrastive learning method, termed Interpolation-based Correlation Reduction Network (ICRN).
In our method, we improve the discriminative capability of the latent features by enlarging the margin of decision boundaries.
By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
arXiv Detail & Related papers (2022-06-06T14:26:34Z) - Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of semi-supervised learning (SSL) and domain adaptation (DA).
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.