Modality-Independent Graph Neural Networks with Global Transformers for Multimodal Recommendation
- URL: http://arxiv.org/abs/2412.13994v1
- Date: Wed, 18 Dec 2024 16:12:26 GMT
- Title: Modality-Independent Graph Neural Networks with Global Transformers for Multimodal Recommendation
- Authors: Jun Hu, Bryan Hooi, Bingsheng He, Yinwei Wei
- Abstract summary: Graph Neural Networks (GNNs) have shown promising performance in multimodal recommendation.
We propose GNNs with Modality-Independent Receptive Fields, which employ separate GNNs with independent receptive fields for different modalities.
Our results indicate that the optimal $K$ for certain modalities on specific datasets can be as low as 1 or 2, which may restrict the GNNs' capacity to capture global information.
- Score: 59.4356484322228
- License:
- Abstract: Multimodal recommendation systems can learn users' preferences from existing user-item interactions as well as the semantics of multimodal data associated with items. Many existing methods model this through a multimodal user-item graph, approaching multimodal recommendation as a graph learning task. Graph Neural Networks (GNNs) have shown promising performance in this domain. Prior research has capitalized on GNNs' capability to capture neighborhood information within certain receptive fields (typically denoted by the number of hops, $K$) to enrich user and item semantics. We observe that the optimal receptive fields for GNNs can vary across different modalities. In this paper, we propose GNNs with Modality-Independent Receptive Fields, which employ separate GNNs with independent receptive fields for different modalities to enhance performance. Our results indicate that the optimal $K$ for certain modalities on specific datasets can be as low as 1 or 2, which may restrict the GNNs' capacity to capture global information. To address this, we introduce a Sampling-based Global Transformer, which utilizes uniform global sampling to effectively integrate global information for GNNs. We conduct comprehensive experiments that demonstrate the superiority of our approach over existing methods. Our code is publicly available at https://github.com/CrawlScript/MIG-GT.
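The abstract describes two components: per-modality GNN propagation whose receptive field (number of hops $K$) is tuned independently for each modality, and a Sampling-based Global Transformer that lets every node attend to a uniform global sample of nodes. The sketch below is a minimal, hedged PyTorch illustration of these two ideas; the names (propagate, SampledGlobalTransformer, ModalityIndependentGNN), the LightGCN-style mean-of-hops propagation, the example hop counts, and the mean fusion across modalities are assumptions made for illustration, not the authors' MIG-GT implementation (see the linked repository for that).

```python
# Minimal sketch (assumptions, not the authors' MIG-GT code): per-modality
# propagation with an independent hop count K, plus a transformer block whose
# keys/values come from a uniform global sample of nodes.
import torch
import torch.nn as nn


def propagate(adj_norm: torch.Tensor, x: torch.Tensor, k: int) -> torch.Tensor:
    """Average of the 0..k-hop representations under a (dense) normalized adjacency."""
    out, h = x, x
    for _ in range(k):
        h = adj_norm @ h              # one hop of neighborhood aggregation
        out = out + h
    return out / (k + 1)


class SampledGlobalTransformer(nn.Module):
    """Single-head attention where every node attends to a uniform sample of nodes."""

    def __init__(self, dim: int, num_samples: int = 64):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.num_samples = num_samples

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        idx = torch.randint(0, x.size(0), (self.num_samples,))  # uniform global sampling
        k, v = self.kv(x[idx]).chunk(2, dim=-1)
        attn = torch.softmax(self.q(x) @ k.t() / x.size(-1) ** 0.5, dim=-1)
        return x + attn @ v           # residual injection of global information


class ModalityIndependentGNN(nn.Module):
    """Separate receptive field (hop count) per modality, then fusion + global attention."""

    def __init__(self, dim: int, hops: dict):
        super().__init__()
        self.hops = hops              # e.g. {"visual": 1, "textual": 2, "id": 3} (assumed values)
        self.global_block = SampledGlobalTransformer(dim)

    def forward(self, adj_norm: torch.Tensor, feats: dict) -> torch.Tensor:
        # feats: modality name -> node features already projected to a common dim (assumption).
        per_modality = [propagate(adj_norm, feats[m], k) for m, k in self.hops.items()]
        fused = torch.stack(per_modality).mean(dim=0)  # simple mean fusion (assumption)
        return self.global_block(fused)
```

Sampling a fixed number of key/value nodes keeps the global attention cost linear in the number of nodes, which is presumably why the paper uses uniform global sampling rather than full all-pairs attention.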
Related papers
- DA-MoE: Addressing Depth-Sensitivity in Graph-Level Analysis through Mixture of Experts [70.21017141742763]
Graph neural networks (GNNs) are gaining popularity for processing graph-structured data.
Existing methods generally use a fixed number of GNN layers to generate representations for all graphs.
We propose the depth-adaptive mixture-of-experts (DA-MoE) method, which incorporates two main improvements to GNNs.
arXiv Detail & Related papers (2024-11-05T11:46:27Z) - All Against Some: Efficient Integration of Large Language Models for Message Passing in Graph Neural Networks [51.19110891434727]
Large Language Models (LLMs), with their pretrained knowledge and strong semantic comprehension, have recently shown a remarkable ability to benefit applications that use vision and text data.
E-LLaGNN is a framework with an on-demand LLM service that enriches the message-passing procedure of graph learning by enhancing a limited fraction of nodes in the graph.
arXiv Detail & Related papers (2024-07-20T22:09:42Z) - Single-Cell Multimodal Prediction via Transformers [36.525050229323845]
We propose scMoFormer to model the complex interactions among different modalities.
scMoFormer won a Kaggle silver medal, ranking 24/1221 (top 2%) without ensembling, in a NeurIPS 2022 competition.
arXiv Detail & Related papers (2023-03-01T05:03:23Z) - SplitGNN: Splitting GNN for Node Classification with Heterogeneous Attention [29.307331758493323]
We propose a split learning-based graph neural network (SplitGNN) for graph computation.
Our SplitGNN allows the isolated heterogeneous neighborhood to be collaboratively utilized.
We demonstrate the effectiveness of our SplitGNN on node classification tasks for two standard public datasets and a real-world dataset.
arXiv Detail & Related papers (2023-01-27T12:08:44Z) - Generalizing Aggregation Functions in GNNs: High-Capacity GNNs via Nonlinear Neighborhood Aggregators [14.573383849211773]
Graph neural networks (GNNs) have achieved great success in many graph learning tasks.
Existing GNNs mainly adopt either linear neighborhood aggregation (mean, sum) or the max aggregator in their message propagation.
We re-think the message propagation mechanism in GNNs and aim to develop the general nonlinear aggregators for neighborhood information aggregation in GNNs.
arXiv Detail & Related papers (2022-02-18T11:49:59Z) - VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization [70.8567058758375]
VQ-GNN is a universal framework for scaling up any convolution-based GNN using Vector Quantization (VQ) without compromising performance.
Our framework avoids the "neighbor explosion" problem of GNNs using quantized representations combined with a low-rank version of the graph convolution matrix.
arXiv Detail & Related papers (2021-10-27T11:48:50Z) - Meta-Aggregator: Learning to Aggregate for 1-bit Graph Neural Networks [127.32203532517953]
We develop a vanilla 1-bit framework that binarizes both the GNN parameters and the graph features.
Despite the lightweight architecture, we observed that this vanilla framework suffered from insufficient discriminative power in distinguishing graph topologies.
This discovery motivates us to devise meta aggregators to improve the expressive power of vanilla binarized GNNs.
arXiv Detail & Related papers (2021-09-27T08:50:37Z) - Policy-GNN: Aggregation Optimization for Graph Neural Networks [60.50932472042379]
Graph neural networks (GNNs) aim to model the local graph structures and capture the hierarchical patterns by aggregating the information from neighbors.
It is a challenging task to develop an effective aggregation strategy for each node, given complex graphs and sparse features.
We propose Policy-GNN, a meta-policy framework that models the sampling procedure and message passing of GNNs into a combined learning process.
arXiv Detail & Related papers (2020-06-26T17:03:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.