PERSCEN: Learning Personalized Interaction Pattern and Scenario Preference for Multi-Scenario Matching
- URL: http://arxiv.org/abs/2506.18382v1
- Date: Mon, 23 Jun 2025 08:15:16 GMT
- Title: PERSCEN: Learning Personalized Interaction Pattern and Scenario Preference for Multi-Scenario Matching
- Authors: Haotong Du, Yaqing Wang, Fei Xiong, Lei Shao, Ming Liu, Hao Gu, Quanming Yao, Zhen Wang
- Abstract summary: Key to effective multi-scenario recommendation lies in capturing both user preferences shared across all scenarios and scenario-aware preferences specific to each scenario. We propose PERSCEN, an innovative approach that incorporates user-specific modeling into multi-scenario matching. PERSCEN constructs a user-specific feature graph based on user characteristics and employs a lightweight graph neural network to capture higher-order interaction patterns.
- Score: 38.829190984763294
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the expansion of business scales and scopes on online platforms, multi-scenario matching has become a mainstream solution to reduce maintenance costs and alleviate data sparsity. The key to effective multi-scenario recommendation lies in capturing both user preferences shared across all scenarios and scenario-aware preferences specific to each scenario. However, existing methods often overlook user-specific modeling, limiting the generation of personalized user representations. To address this, we propose PERSCEN, an innovative approach that incorporates user-specific modeling into multi-scenario matching. PERSCEN constructs a user-specific feature graph based on user characteristics and employs a lightweight graph neural network to capture higher-order interaction patterns, enabling personalized extraction of preferences shared across scenarios. Additionally, we leverage vector quantization techniques to distil scenario-aware preferences from users' behavior sequences within individual scenarios, facilitating user-specific and scenario-aware preference modeling. To enhance efficient and flexible information transfer, we introduce a progressive scenario-aware gated linear unit that allows fine-grained, low-latency fusion. Extensive experiments demonstrate that PERSCEN outperforms existing methods. Further efficiency analysis confirms that PERSCEN effectively balances performance with computational cost, ensuring its practicality for real-world industrial systems.
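The abstract names two building blocks concretely enough to sketch: nearest-neighbour vector quantization against a codebook to derive scenario-aware preferences from behavior embeddings, and a sigmoid-gated linear unit that fuses shared and scenario-specific signals. The NumPy sketch below is illustrative only; the dimensions, the random codebook, and the gate parameterization are assumptions for demonstration, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): embedding dim D, codebook size K.
D, K = 8, 16

# --- Vector quantization: map an in-scenario behavior embedding to the
# nearest entry of a codebook (random here, learned in practice). ---
codebook = rng.normal(size=(K, D))

def quantize(z):
    """Return the codebook vector closest to z in Euclidean distance."""
    idx = int(np.argmin(np.linalg.norm(codebook - z, axis=1)))
    return codebook[idx], idx

# --- Scenario-aware gated linear unit: blend a shared preference vector
# with a scenario-specific one via an element-wise sigmoid gate. ---
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

W_g = rng.normal(size=(D, D))  # gate weights (hypothetical initialization)

def glu_fuse(shared, scenario):
    gate = sigmoid(scenario @ W_g)          # values in (0, 1)
    return gate * shared + (1.0 - gate) * scenario

shared_pref = rng.normal(size=D)            # cross-scenario preference
behaviour = rng.normal(size=D)              # in-scenario behavior embedding
scenario_pref, code_id = quantize(behaviour)
fused = glu_fuse(shared_pref, scenario_pref)
```

In this toy form the gate decides, per dimension, how much of the shared representation versus the quantized scenario-specific one reaches the final user vector; the paper's "progressive" variant stacks such gates across scenario levels.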
Related papers
- Synthetic Interaction Data for Scalable Personalization in Large Language Models [67.31884245564086]
We introduce a high-fidelity synthetic data generation framework called PersonaGym. Unlike prior work that treats personalization as static persona-preference pairs, PersonaGym models a dynamic preference process. We release PersonaAtlas, a large-scale, high-quality, and diverse synthetic dataset of high-fidelity multi-turn personalized interaction trajectories.
arXiv Detail & Related papers (2026-02-12T20:41:22Z) - P-GenRM: Personalized Generative Reward Model with Test-time User-based Scaling [66.55381105691818]
We propose P-GenRM, the first Personalized Generative Reward Model with test-time user-based scaling. P-GenRM transforms preference signals into structured evaluation chains that derive adaptive personas and scoring rubrics. It further clusters users into User Prototypes and introduces a dual-granularity scaling mechanism.
arXiv Detail & Related papers (2026-02-12T16:07:22Z) - Cross-Scenario Unified Modeling of User Interests at Billion Scale [31.293456834853853]
We propose RED-Rec, an advanced Recommender Engine for Diversified scenarios, tailored for industry-level content recommendation systems. RED-Rec unifies user interest representations across multiple behavioral contexts, resulting in comprehensive item and user modeling. We validate RED-Rec through online A/B testing on hundreds of millions of users on RedNote, showing substantial performance gains in both content recommendation and advertisement targeting tasks.
arXiv Detail & Related papers (2025-10-16T15:20:49Z) - Global-Distribution Aware Scenario-Specific Variational Representation Learning Framework [3.531624622201587]
We introduce a Global-Distribution Aware Scenario-Specific Variational Representation Learning Framework (GSVR). Our approach employs a probabilistic model to generate scenario-specific distributions for each user and item in each scenario, estimated through variational inference (VI). We also introduce global knowledge-aware multinomial distributions as prior knowledge to regulate the learning of the posterior user and item distributions.
arXiv Detail & Related papers (2025-08-20T07:31:37Z) - Personas within Parameters: Fine-Tuning Small Language Models with Low-Rank Adapters to Mimic User Behaviors [1.8352113484137629]
A long-standing challenge in developing accurate recommendation models is simulating user behavior, mainly due to the complex nature of user interactions. We propose an approach to extracting robust user representations using frozen Large Language Models (LLMs) and simulating cost-effective, resource-efficient user agents powered by fine-tuned Small Language Models (SLMs). Our experiments provide compelling empirical evidence of the efficacy of our methods, demonstrating that user agents developed using our approach have the potential to bridge the gap between offline metrics and real-world performance of recommender systems.
arXiv Detail & Related papers (2025-08-18T22:14:57Z) - Reinforcing User Interest Evolution in Multi-Scenario Learning for recommender systems [0.7533573796315849]
In real-world recommendation systems, users engage in a variety of scenarios, such as homepages, search pages, and related recommendation pages. User interests may be inconsistent across scenarios, owing to differences in decision-making processes and preference expression. We propose a novel reinforcement learning approach that models user interest evolution across multiple scenarios.
arXiv Detail & Related papers (2025-06-21T11:27:53Z) - Multi-agents based User Values Mining for Recommendation [52.26100802380767]
We propose a zero-shot multi-LLM collaborative framework for effective and accurate user value extraction. We apply text summarization techniques to condense item content while preserving essential meaning. To mitigate hallucinations, we introduce two specialized agent roles: evaluators and supervisors.
arXiv Detail & Related papers (2025-05-02T04:01:31Z) - From 1,000,000 Users to Every User: Scaling Up Personalized Preference for User-level Alignment [41.96246165999026]
Large language models (LLMs) have traditionally been aligned through one-size-fits-all approaches. This paper introduces a comprehensive framework for scalable personalized alignment of LLMs.
arXiv Detail & Related papers (2025-03-19T17:41:46Z) - Client-Centric Federated Adaptive Optimization [78.30827455292827]
Federated Learning (FL) is a distributed learning paradigm where clients collaboratively train a model while keeping their own data private. We propose Federated-Centric Adaptive Optimization, a class of novel federated optimization approaches.
arXiv Detail & Related papers (2025-01-17T04:00:50Z) - Personalized Preference Fine-tuning of Diffusion Models [75.22218338096316]
We introduce PPD, a multi-reward optimization objective that aligns diffusion models with personalized preferences. With PPD, a diffusion model learns the individual preferences of a population of users in a few-shot way. Our approach achieves an average win rate of 76% over Stable Cascade, generating images that more accurately reflect specific user preferences.
arXiv Detail & Related papers (2025-01-11T22:38:41Z) - Few-shot Steerable Alignment: Adapting Rewards and LLM Policies with Neural Processes [50.544186914115045]
Large language models (LLMs) are increasingly embedded in everyday applications. Ensuring their alignment with the diverse preferences of individual users has become a critical challenge. We present a novel framework for few-shot steerable alignment.
arXiv Detail & Related papers (2024-12-18T16:14:59Z) - Transferable and Forecastable User Targeting Foundation Model [37.50233807898246]
We propose FOUND, an industrial-grade, transferable, and forecastable user targeting foundation model. Our framework integrates heterogeneous multi-scenario user data, aligning them with one-sentence targeting demand inputs. Our approach significantly outperforms existing baselines in cross-domain, real-world user targeting scenarios.
arXiv Detail & Related papers (2024-12-17T02:05:09Z) - Scenario-Adaptive Fine-Grained Personalization Network: Tailoring User Behavior Representation to the Scenario Context [3.7566162903515115]
We develop a ranking framework named the Scenario-Adaptive Fine-Grained Personalization Network (SFPNet).
SFPNet provides a fine-grained method for multi-scenario personalized recommendation.
arXiv Detail & Related papers (2024-04-15T12:08:44Z) - Towards Unified Multi-Modal Personalization: Large Vision-Language Models for Generative Recommendation and Beyond [87.1712108247199]
Our goal is to establish a Unified paradigm for Multi-modal Personalization systems (UniMP).
We develop a generic and personalized generative framework that can handle a wide range of personalized needs.
Our methodology enhances the capabilities of foundational language models for personalized tasks.
arXiv Detail & Related papers (2024-03-15T20:21:31Z) - Scenario-Adaptive and Self-Supervised Model for Multi-Scenario Personalized Recommendation [35.4495536683099]
We propose a Scenario-Adaptive and Self-Supervised (SASS) model to solve the three challenges mentioned above.
The model is built symmetrically on both the user side and the item side, so that we can obtain distinguishing representations of items in different scenarios.
The model also achieves more than 8.0% improvement in Average Watching Time Per User in online A/B tests.
arXiv Detail & Related papers (2022-08-24T11:44:00Z)