A Framework for Personalized Persuasiveness Prediction via Context-Aware User Profiling
- URL: http://arxiv.org/abs/2601.05654v1
- Date: Fri, 09 Jan 2026 09:22:31 GMT
- Title: A Framework for Personalized Persuasiveness Prediction via Context-Aware User Profiling
- Authors: Sejun Park, Yoonah Park, Jongwon Lim, Yohan Jo
- Abstract summary: Estimating the persuasiveness of messages is critical in various applications. However, there is no established framework for leveraging a persuadee's past activities to the benefit of a persuasiveness prediction model. We propose a context-aware user profiling framework with two trainable components.
- Score: 21.531813748944383
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Estimating the persuasiveness of messages is critical in various applications, from recommender systems to safety assessment of LLMs. While it is imperative to consider the target persuadee's characteristics, such as their values, experiences, and reasoning styles, there is currently no established systematic framework to optimize leveraging a persuadee's past activities (e.g., conversations) to the benefit of a persuasiveness prediction model. To address this problem, we propose a context-aware user profiling framework with two trainable components: a query generator that generates optimal queries to retrieve persuasion-relevant records from a user's history, and a profiler that summarizes these records into a profile to effectively inform the persuasiveness prediction model. Our evaluation on the ChangeMyView Reddit dataset shows consistent improvements over existing methods across multiple predictor models, with gains of up to +13.77%p in F1 score. Further analysis shows that effective user profiles are context-dependent and predictor-specific, rather than relying on static attributes or surface-level similarity. Together, these results highlight the importance of task-oriented, context-dependent user profiling for personalized persuasiveness prediction.
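The abstract describes a two-stage pipeline: a query generator retrieves persuasion-relevant records from the persuadee's history, and a profiler condenses them into a profile that informs the predictor. A minimal sketch of that data flow is below; the function names, the keyword-overlap retrieval, and the concatenation-based profiler are all illustrative stand-ins for the paper's trainable components, not the authors' implementation.

```python
# Hypothetical sketch of the two-stage profiling pipeline described in the
# abstract. In the paper both stages are trainable models; here they are
# replaced with simple heuristics to show the data flow only.

def generate_query(message: str) -> set[str]:
    # Stand-in for the trainable query generator: crude keyword extraction.
    stopwords = {"the", "a", "is", "of", "to", "and"}
    return {w.lower().strip(".,") for w in message.split()} - stopwords

def retrieve_records(query: set[str], history: list[str], k: int = 2) -> list[str]:
    # Rank past records by keyword overlap with the query; keep the top k.
    scored = sorted(history, key=lambda r: -len(query & generate_query(r)))
    return scored[:k]

def build_profile(records: list[str]) -> str:
    # Stand-in for the trainable profiler: concatenate retrieved records
    # into a profile string for the downstream persuasiveness predictor.
    return " | ".join(records)

# Toy user history and incoming persuasive message.
history = [
    "I value empirical evidence over anecdotes.",
    "My favourite food is pasta.",
    "Statistics convinced me to change my view on vaccination.",
]
message = "Empirical statistics show the policy works."

profile = build_profile(retrieve_records(generate_query(message), history))
print(profile)
```

In this toy run the two evidence-related records are retrieved and the irrelevant one (about food) is filtered out, mirroring the paper's point that useful profiles are context-dependent rather than built from static attributes.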
Related papers
- Synthetic Interaction Data for Scalable Personalization in Large Language Models [67.31884245564086]
We introduce a high-fidelity synthetic data generation framework called PersonaGym. Unlike prior work that treats personalization as static persona-preference pairs, PersonaGym models a dynamic preference process. We release PersonaAtlas, a large-scale, high-quality, and diverse synthetic dataset of high-fidelity multi-turn personalized interaction trajectories.
arXiv Detail & Related papers (2026-02-12T20:41:22Z) - PreferThinker: Reasoning-based Personalized Image Preference Assessment [83.66114370585976]
We propose a reasoning-based personalized image preference assessment framework. It first predicts a user's preference profile from reference images, then provides interpretable, multi-dimensional scores and assessments of candidate images.
arXiv Detail & Related papers (2025-11-01T16:19:51Z) - Pathways of Thoughts: Multi-Directional Thinking for Long-form Personalized Question Answering [57.12316804290369]
Personalization is essential for adapting question answering systems to user-specific information needs. We propose Pathways of Thoughts (PoT), an inference-stage method that applies to any large language model (LLM) without requiring task-specific fine-tuning. PoT consistently outperforms competitive baselines, achieving up to a 13.1% relative improvement.
arXiv Detail & Related papers (2025-09-23T14:44:46Z) - PREFINE: Personalized Story Generation via Simulated User Critics and User-Specific Rubric Generation [2.8324853634693614]
PREFINE is a novel framework that extends the Critique-and-Refine paradigm to personalization. PREFINE constructs a pseudo-user agent from a user's interaction history and generates user-specific rubrics. Our approach holds potential for enabling efficient personalization in broader applications, such as dialogue systems, education, and recommendation.
arXiv Detail & Related papers (2025-09-16T16:39:40Z) - PREF: Reference-Free Evaluation of Personalised Text Generation in LLMs [32.27940625341602]
Personalised text generation is essential for user-centric information systems. We introduce PREF, a Personalised Reference-free Evaluation Framework.
arXiv Detail & Related papers (2025-08-08T14:32:31Z) - NextQuill: Causal Preference Modeling for Enhancing LLM Personalization [82.15961484963256]
We introduce NextQuill, a novel personalization framework grounded in causal preference modeling. NextQuill introduces two complementary alignment strategies. Experiments across multiple personalization benchmarks demonstrate that NextQuill significantly improves personalization quality.
arXiv Detail & Related papers (2025-06-03T02:08:55Z) - What Makes LLMs Effective Sequential Recommenders? A Study on Preference Intensity and Temporal Context [56.590259941275434]
RecPO is a preference optimization framework for sequential recommendation. It exploits adaptive reward margins based on inferred preference hierarchies and temporal signals. It mirrors key characteristics of human decision-making: favoring timely satisfaction, maintaining coherent preferences, and exercising discernment under shifting contexts.
arXiv Detail & Related papers (2025-06-02T21:09:29Z) - Reasoning LLMs for User-Aware Multimodal Conversational Agents [3.533721662684487]
Personalization in social robotics is critical for fostering effective human-robot interactions. This paper proposes a novel framework called USER-LLM R1 for a user-aware conversational agent. Our approach integrates chain-of-thought (CoT) reasoning models to iteratively infer user preferences, together with vision-language models.
arXiv Detail & Related papers (2025-04-02T13:00:17Z) - Context is Key: A Benchmark for Forecasting with Essential Textual Information [87.3175915185287]
"Context is Key" (CiK) is a forecasting benchmark that pairs numerical data with diverse types of carefully crafted textual context. We evaluate a range of approaches, including statistical models, time series foundation models, and LLM-based forecasters. We propose a simple yet effective LLM prompting method that outperforms all other tested methods on our benchmark.
arXiv Detail & Related papers (2024-10-24T17:56:08Z) - Incorporating Group Prior into Variational Inference for Tail-User Behavior Modeling in CTR Prediction [8.213386595519928]
We propose a novel variational inference approach, namely Group Prior Sampler Variational Inference (GPSVI).
GPSVI introduces group preferences as priors to refine latent user interests for tail users.
Rigorous analysis and extensive experiments demonstrate that GPSVI consistently improves the performance of tail users.
arXiv Detail & Related papers (2024-10-19T13:15:36Z) - Quantifying User Coherence: A Unified Framework for Cross-Domain Recommendation Analysis [69.37718774071793]
This paper introduces novel information-theoretic measures for understanding recommender systems.
We evaluate 7 recommendation algorithms across 9 datasets, revealing the relationships between our measures and standard performance metrics.
arXiv Detail & Related papers (2024-10-03T13:02:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality or accuracy of the information and is not responsible for any consequences of its use.