Towards Unified Modeling for Positive and Negative Preferences in
Sign-Aware Recommendation
- URL: http://arxiv.org/abs/2403.08246v1
- Date: Wed, 13 Mar 2024 05:00:42 GMT
- Authors: Yuting Liu, Yizhou Dang, Yuliang Liang, Qiang Liu, Guibing Guo,
Jianzhe Zhao, Xingwei Wang
- Abstract summary: We propose a novel Light Signed Graph Convolution Network for Recommendation (LSGRec).
For the negative preferences within high-order heterogeneous interactions, first-order negative preferences are captured by the negative links, while high-order ones are propagated along positive edges.
Recommendation results are generated based on positive preferences and optimized with the negative ones.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, sign-aware graph recommendation has drawn much attention, as it
learns users' negative preferences, in addition to positive ones, from both positive and
negative interactions (i.e., links in a graph) with items. To accommodate the
different semantics of negative and positive links, existing works utilize two
independent encoders to model users' positive and negative preferences,
respectively. However, these approaches cannot learn the negative preferences
from high-order heterogeneous interactions between users and items formed by
multiple links with different signs, resulting in inaccurate and incomplete
negative user preferences. To cope with these intractable issues, we propose a
novel \textbf{L}ight \textbf{S}igned \textbf{G}raph Convolution Network
specifically for \textbf{Rec}ommendation (\textbf{LSGRec}), which adopts a
unified modeling approach to simultaneously model high-order users' positive
and negative preferences on a signed user-item interaction graph. Specifically,
for the negative preferences within high-order heterogeneous interactions,
first-order negative preferences are captured by the negative links, while
high-order negative preferences are propagated along positive edges. Then,
recommendation results are generated based on positive preferences and
optimized with negative ones. Finally, we train representations of users and
items through different auxiliary tasks. Extensive experiments on three
real-world datasets demonstrate that our method outperforms existing baselines
regarding performance and computational efficiency. Our code is available at
\url{https://anonymous.4open.science/r/LSGRec-BB95}.
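The propagation rule described in the abstract (first-order negative preferences read off the negative links, higher-order negative preferences propagated along positive edges, predictions driven by positive preferences) can be sketched as follows. This is a minimal illustration of that idea in numpy, not the authors' implementation; the function name, the layer-averaging readout, and the normalized-adjacency inputs are all assumptions.

```python
import numpy as np

def lsgrec_style_propagate(emb, pos_adj, neg_adj, num_layers=3):
    """Toy sketch of unified sign-aware propagation (not the released LSGRec code).

    emb:     (n, d) initial embeddings for all users and items
    pos_adj: (n, n) normalized adjacency over positive links
    neg_adj: (n, n) normalized adjacency over negative links
    """
    # Positive preferences: LightGCN-style propagation over positive edges,
    # averaged across layers as the final representation.
    pos = emb.copy()
    pos_layers = [pos]
    for _ in range(num_layers):
        pos = pos_adj @ pos
        pos_layers.append(pos)
    positive_pref = np.mean(pos_layers, axis=0)

    # Negative preferences: the first-order signal comes directly from the
    # negative links; higher-order negative preferences are then propagated
    # along POSITIVE edges, as the abstract describes.
    neg = neg_adj @ emb
    neg_layers = [neg]
    for _ in range(num_layers - 1):
        neg = pos_adj @ neg
        neg_layers.append(neg)
    negative_pref = np.mean(neg_layers, axis=0)

    # Downstream, scores would be generated from positive_pref and
    # optimized (e.g., penalized) using negative_pref.
    return positive_pref, negative_pref
```

Under this reading, the two preference channels share one graph and one set of embeddings, which is what distinguishes the unified approach from the two-encoder designs the abstract criticizes.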
Related papers
- Negative-Prompt-driven Alignment for Generative Language Model [34.191590966148816]
We propose NEgative-prompt-driven AlignmenT (NEAT) to guide language models away from undesirable behaviors.
NEAT explicitly penalizes the model for producing harmful outputs, steering it both toward desirable behaviors and away from undesirable, biased responses.
Extensive experiments validate NEAT's effectiveness in significantly enhancing language models' alignment with human values and preferences.
arXiv Detail & Related papers (2024-10-16T03:30:09Z)
- Generating Enhanced Negatives for Training Language-Based Object Detectors [86.1914216335631]
We propose to leverage the vast knowledge built into modern generative models to automatically build negatives that are more relevant to the original data.
Specifically, we use large language models to generate negative text descriptions, and text-to-image diffusion models to generate corresponding negative images.
Our experimental analysis confirms the relevance of the generated negative data, and its use in language-based detectors improves performance on two complex benchmarks.
arXiv Detail & Related papers (2023-12-29T23:04:00Z)
- Topology-aware Debiased Self-supervised Graph Learning for Recommendation [6.893289671937124]
We propose Topology-aware Debiased Self-supervised Graph Learning (TDSGL) for recommendation.
TDSGL constructs contrastive pairs according to the semantic similarity between users (items).
Our results show that the proposed model outperforms the state-of-the-art models significantly on three public datasets.
arXiv Detail & Related papers (2023-10-24T14:16:19Z)
- Efficient Link Prediction via GNN Layers Induced by Negative Sampling [92.05291395292537]
Graph neural networks (GNNs) for link prediction can loosely be divided into two broad categories.
First, node-wise architectures pre-compute individual embeddings for each node that are later combined by a simple decoder to make predictions.
Second, edge-wise methods rely on the formation of edge-specific subgraph embeddings to enrich the representation of pair-wise relationships.
arXiv Detail & Related papers (2023-10-14T07:02:54Z)
- Your Negative May not Be True Negative: Boosting Image-Text Matching with False Negative Elimination [62.18768931714238]
We propose a novel False Negative Elimination (FNE) strategy to select negatives via sampling.
The results demonstrate the superiority of our proposed false negative elimination strategy.
arXiv Detail & Related papers (2023-08-08T16:31:43Z)
- Generating Negative Samples for Sequential Recommendation [83.60655196391855]
We propose to Generate Negative Samples (items) for Sequential Recommendation (SR).
A negative item is sampled at each time step based on the current SR model's learned user preferences toward items.
Experiments on four public datasets verify the importance of providing high-quality negative samples for SR.
arXiv Detail & Related papers (2022-08-07T05:44:13Z)
- Negative Sampling for Recommendation [7.758275614033198]
How to effectively sample high-quality negative instances is important for well training a recommendation model.
We argue that a high-quality negative should be both informative and unbiased.
arXiv Detail & Related papers (2022-04-02T09:50:19Z)
- Sequential Recommendation via Stochastic Self-Attention [68.52192964559829]
Transformer-based approaches embed items as vectors and use dot-product self-attention to measure the relationship between items.
We propose a novel STOchastic Self-Attention (STOSA) model to overcome these issues.
We devise a novel Wasserstein Self-Attention module to characterize item-item position-wise relationships in sequences.
arXiv Detail & Related papers (2022-01-16T12:38:45Z)
- Bootstrapping User and Item Representations for One-Class Collaborative Filtering [24.30834981766022]
One-class collaborative filtering (OCCF) aims to identify user-item pairs that are positively related but have not yet interacted.
This paper proposes a novel OCCF framework, named BUIR, which does not require negative sampling.
arXiv Detail & Related papers (2021-05-13T14:24:13Z)
- Reinforced Negative Sampling over Knowledge Graph for Recommendation [106.07209348727564]
We develop a new negative sampling model, Knowledge Graph Policy Network (kgPolicy), which works as a reinforcement learning agent to explore high-quality negatives.
kgPolicy navigates from the target positive interaction, adaptively receives knowledge-aware negative signals, and ultimately yields a potential negative item to train the recommender.
arXiv Detail & Related papers (2020-03-12T12:44:30Z)
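Several of the entries above (kgPolicy, GenNi, FNE) share one underlying idea: sample negatives that the current model scores highly, because those are the most informative to train against. A generic hard-negative sampler in that spirit can be sketched as follows; this is not kgPolicy itself (which uses a reinforcement-learning agent over a knowledge graph), and the function name and softmax weighting are illustrative assumptions.

```python
import numpy as np

def sample_hard_negative(user_vec, item_embs, interacted, rng):
    """Generic hard-negative sampling sketch (not any paper's exact method).

    Picks a non-interacted item that the current model scores highly,
    so the recommender trains against informative negatives.
    """
    scores = item_embs @ user_vec            # model's preference scores
    scores[list(interacted)] = -np.inf       # never sample observed positives
    # Softmax over the remaining items favors high-scoring (hard) negatives.
    logits = scores - scores.max()
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(item_embs), p=probs)
```

A uniform sampler would instead ignore the scores entirely; the contrast between the two is precisely the "informativeness" criterion argued for in the Negative Sampling for Recommendation entry above.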
This list is automatically generated from the titles and abstracts of the papers in this site.