Deoscillated Graph Collaborative Filtering
- URL: http://arxiv.org/abs/2011.02100v2
- Date: Fri, 28 May 2021 19:30:24 GMT
- Title: Deoscillated Graph Collaborative Filtering
- Authors: Zhiwei Liu, Lin Meng, Fei Jiang, Jiawei Zhang, Philip S. Yu
- Abstract summary: Collaborative Filtering (CF) signals are crucial for a Recommender System(RS) model to learn user and item embeddings.
Recent Graph Neural Networks(GNNs) propose to stack multiple aggregation layers to propagate high-order signals.
We propose a new RS model, named Deoscillated Graph Collaborative Filtering (DGCF).
- Score: 74.55967586618287
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Collaborative Filtering (CF) signals are crucial for a Recommender
System~(RS) model to learn user and item embeddings. High-order information can
alleviate the cold-start issue of CF-based methods, which is modelled through
propagating the information over the user-item bipartite graph. Recent Graph
Neural Networks~(GNNs) propose to stack multiple aggregation layers to
propagate high-order signals. However, the oscillation problem, the varying
locality of the bipartite graph, and the fixed propagation pattern limit the
ability of the multi-layer structure to propagate information. The oscillation
problem results from the bipartite structure, as information from users
propagates only to items, and vice versa. Besides the oscillation problem,
varying locality suggests that the density of nodes should be considered in the
propagation process. Moreover, the layer-fixed propagation pattern introduces
redundant information between layers. To tackle these problems, we propose a
new RS model, named
\textbf{D}eoscillated \textbf{G}raph \textbf{C}ollaborative
\textbf{F}iltering~(DGCF). We introduce cross-hop propagation layers in it to
break the bipartite propagating structure, thus resolving the oscillation
problem. Additionally, we design innovative locality-adaptive layers which
adaptively propagate information. Stacking multiple cross-hop propagation
layers and locality layers constitutes the DGCF model, which models high-order
CF signals adaptively to the locality of nodes and layers. Extensive
experiments on real-world datasets show the effectiveness of DGCF. Detailed
analyses indicate that DGCF alleviates the oscillation problem, adaptively
learns the locality factor, and exhibits a layer-wise propagation pattern. Our
code is available online at
https://github.com/JimLiu96/DeosciRec.
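The oscillation problem and the cross-hop remedy can be sketched on a toy bipartite graph. This is a minimal illustration only: the interaction matrix, the row normalization, and the 0.5 mixing weight are assumptions for demonstration, not the paper's actual DGCF layer definition.

```python
import numpy as np

# Hypothetical toy user-item bipartite graph: 2 users, 3 items.
# R[u, i] = 1 if user u interacted with item i.
R = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

# Block adjacency over all 5 nodes: users never link to users,
# items never link to items.
A = np.block([[np.zeros((2, 2)), R],
              [R.T, np.zeros((3, 3))]])

# Row-normalized propagation matrix.
P = A / A.sum(axis=1, keepdims=True)

# One-hop propagation oscillates: a signal starting on the user side
# sits entirely on items after one layer and back on users after two.
x = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # signal on user 0
h1 = P @ x   # all mass on the item side (entries 2..4)
h2 = P @ h1  # all mass back on the user side (entries 0..1)

# A cross-hop layer mixes 1-hop and 2-hop propagation, so every layer
# reaches both sides of the bipartite graph and the oscillation breaks.
P_cross = 0.5 * (P + P @ P)
h_cross = P_cross @ x  # nonzero on both users and items
```

Printing `h1` and `h2` shows the mass flipping between the two node types each layer, while `h_cross` spreads over both at once, which is the intuition behind breaking the bipartite propagating structure.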
Related papers
- TSC: A Simple Two-Sided Constraint against Over-Smoothing [17.274727377858873]
We introduce a simple Two-Sided Constraint (TSC) for Graph Convolutional Neural Network (GCN)
The random masking acts on the representation matrix's columns to regulate the degree of information aggregation from neighbors.
The contrastive constraint, applied to the representation matrix's rows, enhances the discriminability of the nodes.
arXiv Detail & Related papers (2024-08-06T12:52:03Z) - Hybrid Convolutional and Attention Network for Hyperspectral Image Denoising [54.110544509099526]
Hyperspectral image (HSI) denoising is critical for the effective analysis and interpretation of hyperspectral data.
We propose a hybrid convolution and attention network (HCANet) to enhance HSI denoising.
Experimental results on mainstream HSI datasets demonstrate the rationality and effectiveness of the proposed HCANet.
arXiv Detail & Related papers (2024-03-15T07:18:43Z) - GIFD: A Generative Gradient Inversion Method with Feature Domain
Optimization [52.55628139825667]
Federated Learning (FL) has emerged as a promising distributed machine learning framework to preserve clients' privacy.
Recent studies find that an attacker can invert the shared gradients and recover sensitive data against an FL system by leveraging pre-trained generative adversarial networks (GAN) as prior knowledge.
We propose Gradient Inversion over Feature Domains (GIFD), which disassembles the GAN model and searches the feature domains of the intermediate layers.
arXiv Detail & Related papers (2023-08-09T04:34:21Z) - Self-Contrastive Graph Diffusion Network [1.14219428942199]
We propose a novel framework called the Self-Contrastive Graph Diffusion Network (SCGDN)
Our framework consists of two main components: the Attentional Module (AttM) and the Diffusion Module (DiFM)
Unlike existing methodologies, SCGDN is an augmentation-free approach that avoids "sampling bias" and semantic drift.
arXiv Detail & Related papers (2023-07-27T04:00:23Z) - Learning Strong Graph Neural Networks with Weak Information [64.64996100343602]
We develop a principled approach to the problem of graph learning with weak information (GLWI)
We propose D$^2$PT, a dual-channel GNN framework that performs long-range information propagation not only on the input graph with incomplete structure but also on a global graph that encodes global semantic similarities.
arXiv Detail & Related papers (2023-05-29T04:51:09Z) - AGNN: Alternating Graph-Regularized Neural Networks to Alleviate
Over-Smoothing [29.618952407794776]
We propose an Alternating Graph-regularized Neural Network (AGNN) composed of Graph Convolutional Layer (GCL) and Graph Embedding Layer (GEL)
GEL is derived from the graph-regularized optimization containing Laplacian embedding term, which can alleviate the over-smoothing problem.
AGNN is evaluated via a large number of experiments including performance comparison with some multi-layer or multi-order graph neural networks.
arXiv Detail & Related papers (2023-04-14T09:20:03Z) - Local Augmentation for Graph Neural Networks [78.48812244668017]
We introduce local augmentation, which enhances node features via their local subgraph structures.
Based on the local augmentation, we further design a novel framework: LA-GNN, which can apply to any GNN models in a plug-and-play manner.
arXiv Detail & Related papers (2021-09-08T18:10:08Z) - Multi-Level Attention Pooling for Graph Neural Networks: Unifying Graph
Representations with Multiple Localities [4.142375560633827]
Graph neural networks (GNNs) have been widely used to learn vector representation of graph-structured data.
A potential cause is that deep GNN models tend to lose the nodes' local information through many message passing steps.
We propose a multi-level attention pooling architecture to solve this so-called oversmoothing problem.
arXiv Detail & Related papers (2021-03-02T05:58:12Z) - Spatio-Temporal Inception Graph Convolutional Networks for
Skeleton-Based Action Recognition [126.51241919472356]
We design a simple and highly modularized graph convolutional network architecture for skeleton-based action recognition.
Our network is constructed by repeating a building block that aggregates multi-granularity information from both the spatial and temporal paths.
arXiv Detail & Related papers (2020-11-26T14:43:04Z) - MG-GCN: Fast and Effective Learning with Mix-grained Aggregators for
Training Large Graph Convolutional Networks [20.07942308916373]
Graph convolutional networks (GCNs) generate the embeddings of nodes by aggregating the information of their neighbors layer by layer.
The high computational and memory cost of GCNs makes it infeasible for training on large graphs.
A new model, named Mix-grained GCN (MG-GCN), achieves state-of-the-art performance in terms of accuracy, training speed, convergence speed, and memory cost.
arXiv Detail & Related papers (2020-11-17T14:51:57Z)
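The layer-by-layer neighbor aggregation that the GCN-related entries above describe can be sketched as a standard symmetrically normalized graph convolution; the toy graph, feature dimensions, and random weights below are hypothetical, and the block is an illustration of the general mechanism rather than any single paper's model.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: add self-loops, symmetrically normalize,
    aggregate neighbor features, transform, apply ReLU."""
    A_hat = A + np.eye(A.shape[0])            # self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = (A_hat * d_inv_sqrt).T * d_inv_sqrt  # D^-1/2 (A+I) D^-1/2
    return np.maximum(A_norm @ H @ W, 0.0)    # ReLU activation

# Hypothetical toy graph: 4 nodes on a path, 3-dim input features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))
W1 = rng.normal(size=(3, 8))
W2 = rng.normal(size=(8, 8))

# Each stacked layer propagates information one extra hop,
# which is also where the memory and compute cost of deep GCNs comes from.
H1 = gcn_layer(A, H, W1)
H2 = gcn_layer(A, H1, W2)
```

After two layers, each node's embedding depends on its 2-hop neighborhood, which makes concrete both the high-order propagation these papers exploit and the per-layer aggregation cost that models like MG-GCN try to reduce.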
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.