Adaptive Diffusion-based Augmentation for Recommendation
- URL: http://arxiv.org/abs/2601.01448v1
- Date: Sun, 04 Jan 2026 09:29:45 GMT
- Title: Adaptive Diffusion-based Augmentation for Recommendation
- Authors: Na Li, Fanghui Sun, Yan Zou, Yangfu Zhu, Xiatian Zhu, Ying Ma
- Abstract summary: We propose Adaptive Diffusion-based Augmentation for Recommendation (ADAR) to generate controllable negative samples. ADAR simulates a continuous transition from positive to negative, allowing for fine-grained control over sample hardness. Experiments confirm that ADAR is broadly compatible and boosts the performance of existing recommendation models substantially.
- Score: 43.94507945637665
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recommendation systems often rely on implicit feedback, where only positive user-item interactions can be observed. Negative sampling is therefore crucial to provide proper negative training signals. However, existing methods tend to mislabel potentially positive but unobserved items as negatives and lack precise control over negative sample selection. We aim to address both issues by generating controllable negative samples, rather than sampling from the existing item pool. In this context, we propose Adaptive Diffusion-based Augmentation for Recommendation (ADAR), a novel and model-agnostic module that leverages diffusion to synthesize informative negatives. Inspired by the progressive corruption process in diffusion, ADAR simulates a continuous transition from positive to negative, allowing for fine-grained control over sample hardness. To mine suitable negative samples, we theoretically identify the transition point at which a positive sample turns negative and derive a score-aware function to adaptively determine the optimal sampling timestep. By identifying this transition point, ADAR generates challenging negative samples that effectively refine the model's decision boundary. Experiments confirm that ADAR is broadly compatible and boosts the performance of existing recommendation models substantially, including collaborative filtering and sequential recommendation, without architectural modifications.
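The abstract describes the mechanism only at a high level. As a rough illustration, the forward-corruption step and an adaptive timestep choice might look like the minimal PyTorch sketch below. It assumes item embeddings live in a continuous space and uses a standard DDPM-style linear noise schedule; `pick_timestep` is a placeholder heuristic standing in for the paper's derived score-aware function, which is not reproduced here.

```python
import torch

def make_alpha_bar(num_steps: int, beta_start: float = 1e-4,
                   beta_end: float = 0.02) -> torch.Tensor:
    # Cumulative signal-retention product for a linear DDPM noise schedule.
    betas = torch.linspace(beta_start, beta_end, num_steps)
    return torch.cumprod(1.0 - betas, dim=0)

def diffuse_positive(pos_emb: torch.Tensor, t: torch.Tensor,
                     alpha_bar: torch.Tensor) -> torch.Tensor:
    # Forward process q(x_t | x_0): x_t = sqrt(a_t) x_0 + sqrt(1 - a_t) eps.
    a = alpha_bar[t].unsqueeze(-1)                       # (batch, 1)
    noise = torch.randn_like(pos_emb)
    return a.sqrt() * pos_emb + (1.0 - a).sqrt() * noise

def pick_timestep(user_emb: torch.Tensor, pos_emb: torch.Tensor,
                  num_steps: int) -> torch.Tensor:
    # PLACEHOLDER heuristic, not ADAR's derived score-aware rule: corrupt
    # confidently scored positives further, toward the transition point.
    score = torch.sigmoid((user_emb * pos_emb).sum(-1))  # (batch,)
    return (score * (num_steps - 1)).long()

# Usage: synthesize hard negatives for a batch of user-item positives.
num_steps = 1000
alpha_bar = make_alpha_bar(num_steps)
user_emb = torch.randn(32, 64)
pos_emb = torch.randn(32, 64)
t = pick_timestep(user_emb, pos_emb, num_steps)
hard_negs = diffuse_positive(pos_emb, t, alpha_bar)      # (32, 64)
```

The diffused embeddings can then stand in for sampled negative items in a BPR-style pairwise loss, which is what makes the module model-agnostic.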
Related papers
- Improving LLM-based Recommendation with Self-Hard Negatives from Intermediate Layers [80.55429742713623]
ILRec is a novel preference fine-tuning framework for LLM-based recommender systems. We introduce a lightweight collaborative filtering model to assign token-level rewards for negative signals. Experiments on three datasets demonstrate ILRec's effectiveness in enhancing the performance of LLM-based recommender systems.
arXiv Detail & Related papers (2026-02-19T14:37:43Z)
- Correct and Weight: A Simple Yet Effective Loss for Implicit Feedback Recommendation [36.820719132176315]
This paper introduces a novel and principled loss function, named Corrected and Weighted (CW) loss. CW loss systematically corrects for the impact of false negatives within the training objective. Experiments conducted on four large-scale, sparse benchmark datasets demonstrate the superiority of our proposed loss.
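The summary does not spell out the CW formulation. For orientation only, one established way to correct for false negatives in implicit feedback is a non-negative PU-learning style risk; the sketch below shows that generic idea with an assumed hidden-positive prior `pi`, not the paper's actual loss.

```python
import torch
import torch.nn.functional as F

def fn_corrected_risk(pos_scores: torch.Tensor, unobs_scores: torch.Tensor,
                      pi: float = 0.05) -> torch.Tensor:
    # Non-negative PU-style risk with the logistic loss. `pi` is an ASSUMED
    # prior on how many unobserved items are actually positive; this is a
    # generic false-negative correction, not the paper's CW loss.
    pos_risk = F.softplus(-pos_scores).mean()        # positives labeled +
    unobs_as_neg = F.softplus(unobs_scores).mean()   # unobserved labeled -
    pos_as_neg = F.softplus(pos_scores).mean()       # positives labeled -
    # Remove the hidden positives' contribution; clamp keeps the risk >= 0.
    neg_risk = torch.clamp(unobs_as_neg - pi * pos_as_neg, min=0.0)
    return pi * pos_risk + neg_risk
```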
arXiv Detail & Related papers (2026-01-07T15:20:27Z)
- Causal Negative Sampling via Diffusion Model for Out-of-Distribution Recommendation [7.354459720418281]
Heuristic negative sampling enhances recommendation performance by selecting negative samples of varying hardness levels from predefined candidate pools. Unobserved environmental confounders in candidate pools may cause sampling methods to introduce false hard negatives (FHNS). We propose a novel method named Causal Negative Sampling via Diffusion (CNSDiff) to address this issue.
arXiv Detail & Related papers (2025-08-10T08:55:21Z)
- Diffusion Models with Adaptive Negative Sampling Without External Resources [54.84368884047812]
ANSWER is a training-free technique, applicable to any model that supports classifier-free guidance (CFG), and allows for negative grounding of image concepts without explicit negative prompts. Experiments show that adding ANSWER to existing DMs outperforms the baselines on multiple benchmarks and is preferred by humans 2x more than the other methods.
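For context on what "negative grounding" replaces: standard negative-prompt CFG needs a noise prediction from an explicit negative prompt. The sketch below shows only that baseline mechanism; ANSWER's training-free alternative is not reproduced here.

```python
import torch

def cfg_with_negative_prompt(eps_pos: torch.Tensor, eps_neg: torch.Tensor,
                             guidance: float = 7.5) -> torch.Tensor:
    # Baseline negative-prompt CFG: anchor at the negative prompt's noise
    # prediction and push along (positive - negative). ANSWER reportedly
    # achieves such grounding without supplying eps_neg explicitly.
    return eps_neg + guidance * (eps_pos - eps_neg)
```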
arXiv Detail & Related papers (2025-08-05T00:45:54Z)
- SyNeg: LLM-Driven Synthetic Hard-Negatives for Dense Retrieval [45.971786380884126]
The performance of dense retrieval (DR) is significantly influenced by the quality of negative sampling. Recent advancements in large language models (LLMs) offer an innovative solution by generating contextually rich and diverse negative samples. In this work, we present a framework that harnesses LLMs to synthesize high-quality hard negative samples.
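As a rough illustration of the generation-based approach, the sketch below prompts a hypothetical `llm(prompt) -> str` callable for passages that are topically similar to the query but do not answer it; the prompt wording and parsing are illustrative, not the paper's template.

```python
def synthesize_hard_negatives(query: str, positive: str, llm, n: int = 3):
    # `llm` is a hypothetical callable (prompt string in, completion out).
    prompt = (
        f"Query: {query}\n"
        f"Relevant passage: {positive}\n"
        f"Write {n} passages that share this query's topic and vocabulary "
        f"but do NOT answer it. Number them 1 to {n}."
    )
    raw = llm(prompt)
    # Naive parse: keep numbered lines and drop the "k." prefix.
    negatives = [line.split(".", 1)[1].strip()
                 for line in raw.splitlines()
                 if line.strip()[:1].isdigit() and "." in line]
    return negatives[:n]
```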
arXiv Detail & Related papers (2024-12-23T03:49:00Z)
- Your Negative May not Be True Negative: Boosting Image-Text Matching with False Negative Elimination [62.18768931714238]
We propose a novel False Negative Elimination (FNE) strategy to select negatives via sampling.
The results demonstrate the superiority of our proposed false negative elimination strategy.
arXiv Detail & Related papers (2023-08-08T16:31:43Z)
- Generating Negative Samples for Sequential Recommendation [83.60655196391855]
We propose to Generate Negative Samples (items) for Sequential Recommendation (SR).
A negative item is sampled at each time step based on the current SR model's learned user preferences toward items.
Experiments on four public datasets verify the importance of providing high-quality negative samples for SR.
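A minimal sketch of the per-step idea in that summary: at each sequence position, draw a negative item with probability proportional to the current model's predicted preference, masking out the observed item. The temperature and masking details are assumptions, not the paper's exact procedure.

```python
import torch

def sample_step_negatives(logits: torch.Tensor, targets: torch.Tensor,
                          temperature: float = 1.0) -> torch.Tensor:
    # logits: (batch, seq_len, num_items) model scores at each position;
    # targets: (batch, seq_len) observed next-item ids.
    logits = logits.detach()                          # sampling only, no grad
    probs = torch.softmax(logits / temperature, dim=-1)
    probs.scatter_(-1, targets.unsqueeze(-1), 0.0)    # never sample true item
    flat = probs.reshape(-1, probs.size(-1))          # (batch*seq_len, items)
    negs = torch.multinomial(flat, num_samples=1).squeeze(-1)
    return negs.view_as(targets)                      # (batch, seq_len) ids
```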
arXiv Detail & Related papers (2022-08-07T05:44:13Z)
- Asymptotically Unbiased Estimation for Delayed Feedback Modeling via Label Correction [14.462884375151045]
Delayed feedback is crucial for conversion rate prediction in online advertising.
Previous delayed feedback modeling methods balance the trade-off between waiting for accurate labels and consuming fresh feedback.
We propose a new method, DElayed Feedback modeling with UnbiaSed Estimation (DEFUSE), which aims to correct the importance weights of the immediate positive, fake negative, real negative, and delayed positive samples, respectively.
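A generic illustration of importance-weighted training over those four sample types follows; the per-type weights here are placeholders, whereas DEFUSE derives weights that make the estimate unbiased.

```python
import torch
import torch.nn.functional as F

# Sample types in the delayed-feedback stream (indices illustrative):
# 0 = immediate positive, 1 = fake negative (converts after the window),
# 2 = real negative, 3 = delayed positive.
def importance_weighted_bce(logits: torch.Tensor, labels: torch.Tensor,
                            kind: torch.Tensor,
                            w=(1.0, 1.0, 1.0, 1.0)) -> torch.Tensor:
    # logits, labels: (n,); kind: (n,) ints in {0,1,2,3}. The weights `w`
    # are placeholders, not DEFUSE's derived corrections.
    per_example = F.binary_cross_entropy_with_logits(
        logits, labels.float(), reduction="none")
    weights = torch.tensor(w, dtype=logits.dtype)[kind]
    return (weights * per_example).mean()
```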
arXiv Detail & Related papers (2022-02-14T03:31:09Z)
- Rethinking InfoNCE: How Many Negative Samples Do You Need? [54.146208195806636]
We study how many negative samples are optimal for InfoNCE in different scenarios via a semi-quantitative theoretical framework.
We estimate the optimal negative sampling ratio using the $K$ value that maximizes the training effectiveness function.
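For reference, this is InfoNCE with $K$ negatives per query, showing where $K$ enters the objective; the paper's training effectiveness function itself is not reproduced here.

```python
import torch
import torch.nn.functional as F

def info_nce(query: torch.Tensor, pos: torch.Tensor, negs: torch.Tensor,
             tau: float = 0.1) -> torch.Tensor:
    # query, pos: (batch, dim); negs: (batch, K, dim); tau is the temperature.
    pos_logit = (query * pos).sum(-1, keepdim=True) / tau       # (batch, 1)
    neg_logits = torch.einsum("bd,bkd->bk", query, negs) / tau  # (batch, K)
    logits = torch.cat([pos_logit, neg_logits], dim=1)          # (batch, 1+K)
    labels = torch.zeros(query.size(0), dtype=torch.long)       # pos at idx 0
    return F.cross_entropy(logits, labels)
```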
arXiv Detail & Related papers (2021-05-27T08:38:29Z)
- Reinforced Negative Sampling over Knowledge Graph for Recommendation [106.07209348727564]
We develop a new negative sampling model, Knowledge Graph Policy Network (kgPolicy), which works as a reinforcement learning agent to explore high-quality negatives.
kgPolicy navigates from the target positive interaction, adaptively receives knowledge-aware negative signals, and ultimately yields a potential negative item to train the recommender.
arXiv Detail & Related papers (2020-03-12T12:44:30Z)