Exploration on Demand: From Algorithmic Control to User Empowerment
- URL: http://arxiv.org/abs/2507.21884v1
- Date: Tue, 29 Jul 2025 14:57:26 GMT
- Title: Exploration on Demand: From Algorithmic Control to User Empowerment
- Authors: Edoardo Bianchi
- Abstract summary: This paper introduces an adaptive clustering framework with user-controlled exploration that effectively balances personalization and diversity in movie recommendations. We propose a novel exploration mechanism that empowers users to control recommendation diversity by strategically sampling from less-engaged clusters. Our Large Language Model-based A/B testing methodology, conducted with 300 simulated users, reveals that 72.7% of long-term users prefer exploratory recommendations over purely exploitative ones.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recommender systems often struggle with over-specialization, which severely limits users' exposure to diverse content and creates filter bubbles that reduce serendipitous discovery. To address this fundamental limitation, this paper introduces an adaptive clustering framework with user-controlled exploration that effectively balances personalization and diversity in movie recommendations. Our approach leverages sentence-transformer embeddings to group items into semantically coherent clusters through an online algorithm with dynamic thresholding, thereby creating a structured representation of the content space. Building upon this clustering foundation, we propose a novel exploration mechanism that empowers users to control recommendation diversity by strategically sampling from less-engaged clusters, thus expanding their content horizons while preserving relevance. Experiments on the MovieLens dataset demonstrate the system's effectiveness, showing that exploration significantly reduces intra-list similarity from 0.34 to 0.26 while simultaneously increasing unexpectedness to 0.73. Furthermore, our Large Language Model-based A/B testing methodology, conducted with 300 simulated users, reveals that 72.7% of long-term users prefer exploratory recommendations over purely exploitative ones, providing strong evidence for the system's ability to promote meaningful content discovery without sacrificing user satisfaction.
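The abstract describes three concrete components: greedy online clustering of sentence-transformer embeddings under a dynamic similarity threshold, exploration by sampling from a user's less-engaged clusters, and evaluation via intra-list similarity. The sketch below illustrates the general shape of such a pipeline; the function names, the threshold-update rule, and the engagement-ranking heuristic are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def online_cluster(embeddings, init_threshold=0.9, decay=0.99):
    """Greedy online clustering with a dynamic similarity threshold.

    Each item joins the nearest existing centroid if cosine similarity
    exceeds the current threshold; otherwise it seeds a new cluster.
    The exponential threshold update is a stand-in for the paper's
    dynamic thresholding.
    """
    centroids, assignments = [], []
    threshold = init_threshold
    for e in embeddings:
        e = np.asarray(e, dtype=float)
        e = e / np.linalg.norm(e)
        if centroids:
            sims = np.array([c @ e for c in centroids])
            best = int(sims.argmax())
            if sims[best] >= threshold:
                # Merge into the best cluster and re-normalize its centroid.
                centroids[best] = centroids[best] + e
                centroids[best] /= np.linalg.norm(centroids[best])
                assignments.append(best)
                # Drift the threshold toward recently observed similarities.
                threshold = decay * threshold + (1 - decay) * sims[best]
                continue
        centroids.append(e)
        assignments.append(len(centroids) - 1)
    return assignments, centroids

def explore_sample(cluster_engagement, k, explore_ratio):
    """Select k clusters, routing a user-set fraction to the least engaged.

    explore_ratio is the user-controlled exploration knob: 0.0 is pure
    exploitation, 1.0 draws only from under-explored clusters.
    """
    order = np.argsort(cluster_engagement)        # ascending engagement
    n_explore = int(round(explore_ratio * k))
    least_engaged = list(order[:n_explore])       # exploration picks
    most_engaged = list(order[::-1][:k - n_explore])  # exploitation picks
    return least_engaged + most_engaged

def intra_list_similarity(item_embeddings):
    """Mean pairwise cosine similarity of a recommendation list (lower = more diverse)."""
    E = np.asarray(item_embeddings, dtype=float)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)
    sims = E @ E.T
    upper = np.triu_indices(len(E), k=1)
    return float(sims[upper].mean())
```

In this sketch, lowering intra-list similarity by raising `explore_ratio` mirrors the reported effect of exploration reducing ILS from 0.34 to 0.26; the real system operates on sentence-transformer embeddings of movie metadata rather than the toy vectors shown here.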
Related papers
- Multi-agents based User Values Mining for Recommendation [52.26100802380767]
We propose a zero-shot multi-LLM collaborative framework for effective and accurate user value extraction. We apply text summarization techniques to condense item content while preserving essential meaning. To mitigate hallucinations, we introduce two specialized agent roles: evaluators and supervisors.
arXiv Detail & Related papers (2025-05-02T04:01:31Z) - Beyond Relevance: An Adaptive Exploration-Based Framework for Personalized Recommendations [0.0]
This paper introduces an exploration-based recommendation framework to promote diversity and novelty without compromising relevance. A user-controlled exploration mechanism enhances diversity by selectively sampling from under-explored clusters. Experiments on the MovieLens dataset show that enabling exploration reduces intra-list similarity from 0.34 to 0.26 and increases unexpectedness to 0.73.
arXiv Detail & Related papers (2025-03-25T10:27:32Z) - Unveiling User Satisfaction and Creator Productivity Trade-Offs in Recommendation Platforms [68.51708490104687]
We show that a purely relevance-driven policy with low exploration strength boosts short-term user satisfaction but undermines the long-term richness of the content pool.
Our findings reveal a fundamental trade-off between immediate user satisfaction and overall content production on platforms.
arXiv Detail & Related papers (2024-10-31T07:19:22Z) - Learning Recommender Systems with Soft Target: A Decoupled Perspective [49.83787742587449]
We propose a novel decoupled soft label optimization framework to consider the objectives as two aspects by leveraging soft labels.
We present a sensible soft-label generation algorithm that models a label propagation algorithm to explore users' latent interests in unobserved feedback via neighbors.
arXiv Detail & Related papers (2024-10-09T04:20:15Z) - Quantifying User Coherence: A Unified Framework for Cross-Domain Recommendation Analysis [69.37718774071793]
This paper introduces novel information-theoretic measures for understanding recommender systems.
We evaluate 7 recommendation algorithms across 9 datasets, revealing the relationships between our measures and standard performance metrics.
arXiv Detail & Related papers (2024-10-03T13:02:07Z) - ECORS: An Ensembled Clustering Approach to Eradicate The Local And Global Outlier In Collaborative Filtering Recommender System [0.0]
Outlier detection is a key research area in recommender systems.
We propose an approach that addresses these challenges by employing various clustering algorithms.
Our experimental results demonstrate that this approach significantly improves the accuracy of outlier detection in recommender systems.
arXiv Detail & Related papers (2024-10-01T05:06:07Z) - Retrieval Augmentation via User Interest Clustering [57.63883506013693]
Industrial recommender systems are sensitive to the patterns of user-item engagement.
We propose a novel approach that efficiently constructs user interest and facilitates low computational cost inference.
Our approach has been deployed in multiple products at Meta, facilitating short-form video related recommendation.
arXiv Detail & Related papers (2024-08-07T16:35:10Z) - UOEP: User-Oriented Exploration Policy for Enhancing Long-Term User Experiences in Recommender Systems [7.635117537731915]
Reinforcement learning (RL) has gained traction for enhancing user long-term experiences in recommender systems.
Modern recommender systems exhibit distinct user behavioral patterns among tens of millions of items, which increases the difficulty of exploration.
We propose User-Oriented Exploration Policy (UOEP), a novel approach facilitating fine-grained exploration among user groups.
arXiv Detail & Related papers (2024-01-17T08:01:18Z) - Hierarchical Reinforcement Learning for Modeling User Novelty-Seeking Intent in Recommender Systems [26.519571240032967]
We propose a novel hierarchical reinforcement learning-based method to model the hierarchical user novelty-seeking intent.
We further incorporate diversity and novelty-related measurement in the reward function of the hierarchical RL (HRL) agent to encourage user exploration.
arXiv Detail & Related papers (2023-06-02T12:02:23Z) - PIE: Personalized Interest Exploration for Large-Scale Recommender Systems [0.0]
We present a framework for exploration in large-scale recommender systems to address these challenges.
Our methodology can be easily integrated into an existing large-scale recommender system with minimal modifications.
Our work has been deployed in production on Facebook Watch, a popular video discovery and sharing platform serving billions of users.
arXiv Detail & Related papers (2023-04-13T22:25:09Z) - Incentivizing Combinatorial Bandit Exploration [87.08827496301839]
Consider a bandit algorithm that recommends actions to self-interested users in a recommendation system.
Users are free to choose other actions and need to be incentivized to follow the algorithm's recommendations.
While the users prefer to exploit, the algorithm can incentivize them to explore by leveraging the information collected from the previous users.
arXiv Detail & Related papers (2022-06-01T13:46:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.