Multi-Objective Recommender Systems: Survey and Challenges
- URL: http://arxiv.org/abs/2210.10309v1
- Date: Wed, 19 Oct 2022 05:51:54 GMT
- Title: Multi-Objective Recommender Systems: Survey and Challenges
- Authors: Dietmar Jannach
- Abstract summary: We focus on developing machine learning algorithms that aim to predict which content is relevant for individual users.
In real-world applications, however, optimizing the accuracy of such relevance predictions as a single objective is not sufficient.
Multiple and often competing objectives have to be considered, leading to a need for more research in multi-objective recommender systems.
- Score: 5.076419064097734
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recommender systems can be characterized as software solutions that provide
users convenient access to relevant content. Traditionally, recommender systems
research predominantly focuses on developing machine learning algorithms that
aim to predict which content is relevant for individual users. In real-world
applications, however, optimizing the accuracy of such relevance predictions as
a single objective in many cases is not sufficient. Instead, multiple and often
competing objectives have to be considered, leading to a need for more research
in multi-objective recommender systems. We can differentiate between several
types of such competing goals, including (i) competing recommendation quality
objectives at the individual and aggregate level, (ii) competing objectives of
different involved stakeholders, (iii) long-term vs. short-term objectives,
(iv) objectives at the user interface level, and (v) system level objectives.
In this paper we review these types of multi-objective recommendation settings
and outline open challenges in this area.
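To make the notion of competing quality objectives concrete, here is a minimal sketch (ours, not the survey's) of a greedy re-ranking step that trades an individual-level objective (predicted relevance) against an aggregate-level one (provider exposure). All names, weights, and the scoring rule are illustrative assumptions.

```python
# Illustrative sketch only: assemble a top-k list by trading per-user relevance
# against a simple provider-exposure bonus. Weights and names are assumptions.
from typing import Dict, List, Set

def rerank_multi_objective(relevance: Dict[str, float],
                           provider: Dict[str, str],
                           k: int = 10,
                           w_relevance: float = 0.8,
                           w_exposure: float = 0.2) -> List[str]:
    """Pick items one by one; items from providers not yet shown get a bonus."""
    selected: List[str] = []
    shown_providers: Set[str] = set()
    candidates = set(relevance)
    while candidates and len(selected) < k:
        def score(item: str) -> float:
            bonus = 0.0 if provider[item] in shown_providers else 1.0
            return w_relevance * relevance[item] + w_exposure * bonus
        best = max(candidates, key=score)
        selected.append(best)
        shown_providers.add(provider[best])
        candidates.remove(best)
    return selected

# Example: with a nonzero exposure weight, item "c" from a new provider can
# outrank a slightly more relevant item from an already-represented provider.
items = {"a": 0.95, "b": 0.90, "c": 0.88}
owners = {"a": "p1", "b": "p1", "c": "p2"}
print(rerank_multi_objective(items, owners, k=2))  # ['a', 'c'] rather than ['a', 'b']
```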
Related papers
- Uncertain Multi-Objective Recommendation via Orthogonal Meta-Learning Enhanced Bayesian Optimization [30.031396809114625]
We introduce a novel framework that categorizes RS autonomy into five distinct levels, ranging from basic rule-based accuracy-driven systems to behavior-aware, uncertain multi-objective RSs.
We propose an approach that dynamically identifies and optimizes multiple objectives based on individual user preferences, fostering more ethical and intelligent user-centric recommendations.
arXiv Detail & Related papers (2025-02-18T08:10:09Z)
- Rethinking Multi-Objective Learning through Goal-Conditioned Supervised Learning [8.593384839118658]
Multi-objective learning aims to optimize multiple objectives simultaneously with a single model.
It suffers from the difficulty of formalizing and conducting the exact learning process.
We propose a general framework for automatically learning to achieve multiple objectives based on existing sequential data.
arXiv Detail & Related papers (2024-12-12T03:47:40Z)
- Deep Pareto Reinforcement Learning for Multi-Objective Recommender Systems [60.91599969408029]
Optimizing multiple objectives simultaneously is an important task for recommendation platforms.
Existing multi-objective recommender systems do not systematically consider the dynamic relationships between such objectives.
arXiv Detail & Related papers (2024-07-04T02:19:49Z)
- Controllable Preference Optimization: Toward Controllable Multi-Objective Alignment [103.12563033438715]
Alignment in artificial intelligence pursues consistency between model responses and human preferences as well as values.
Existing alignment techniques are mostly unidirectional, leading to suboptimal trade-offs and poor flexibility over various objectives.
We introduce controllable preference optimization (CPO), which explicitly specifies preference scores for different objectives.
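As a hedged illustration of what explicitly specified preference scores could look like in practice (this is not the CPO method itself; all names and numbers are invented):

```python
# Hypothetical sketch, not the CPO implementation: scalarizing per-objective
# scores under explicit, user-controllable preference weights.
from typing import Dict

def preference_weighted_score(objective_scores: Dict[str, float],
                              preferences: Dict[str, float]) -> float:
    """Normalize the preference weights and take a weighted sum of the scores."""
    total = sum(preferences.values())
    return sum((preferences[name] / total) * score
               for name, score in objective_scores.items())

# Shifting weight between objectives changes which trade-off is preferred.
scores = {"helpfulness": 0.9, "harmlessness": 0.6}
print(preference_weighted_score(scores, {"helpfulness": 0.7, "harmlessness": 0.3}))  # 0.81
print(preference_weighted_score(scores, {"helpfulness": 0.3, "harmlessness": 0.7}))  # 0.69
```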
arXiv Detail & Related papers (2024-02-29T12:12:30Z)
- Dynamic value alignment through preference aggregation of multiple objectives [0.0]
We present a methodology for dynamic value alignment, in which the values to be aligned with change over time.
We apply this approach to extend Deep $Q$-Learning to accommodate multiple objectives and evaluate this method on a simplified two-leg intersection.
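A rough sketch of one way multiple objectives can enter Q-learning-style action selection, with per-objective value estimates aggregated by preference weights that may shift over time; this is an assumption-laden illustration, not the paper's method.

```python
# Hedged sketch: choose an action from per-objective Q-value vectors,
# aggregated by (possibly changing) preference weights.
from typing import Dict, List

def select_action(q_values: Dict[int, List[float]],
                  preference_weights: List[float]) -> int:
    """q_values maps each action to one Q-estimate per objective; the weights
    collapse that vector into a single scalar used for greedy selection."""
    def aggregate(action: int) -> float:
        return sum(w * q for w, q in zip(preference_weights, q_values[action]))
    return max(q_values, key=aggregate)

# Example: two actions, two objectives (e.g. throughput vs. waiting-time fairness).
q = {0: [1.0, 0.2], 1: [0.6, 0.9]}
print(select_action(q, [0.8, 0.2]))  # weights favor objective 1 -> action 0
print(select_action(q, [0.2, 0.8]))  # weights favor objective 2 -> action 1
```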
arXiv Detail & Related papers (2023-10-09T17:07:26Z)
- Consistent Aggregation of Objectives with Diverse Time Preferences Requires Non-Markovian Rewards [7.9456318392035845]
It is shown that Markovian aggregation of reward functions is not possible when the time preference for each objective may vary.
It follows that optimal multi-objective agents must admit rewards that are non-Markovian with respect to the individual objectives.
This work offers new insights into sequential, multi-objective agency and intertemporal choice, and has practical implications for the design of AI systems deployed to serve multiple generations of principals with varying time preference.
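A compact illustration of the intuition (our notation, not the paper's): if two objectives are discounted at different rates,

$$
J(\pi) = \sum_{t=0}^{\infty} \gamma_1^{t}\, r_1(s_t, a_t) + \sum_{t=0}^{\infty} \gamma_2^{t}\, r_2(s_t, a_t), \qquad \gamma_1 \neq \gamma_2,
$$

then, intuitively, any single reward $r$ with discount $\gamma$ whose terms match $\gamma^{t} r(s_t, a_t) = \gamma_1^{t} r_1(s_t, a_t) + \gamma_2^{t} r_2(s_t, a_t)$ would have to depend on the time step $t$, i.e. on history rather than on the current state-action pair alone, which is the non-Markovian property the paper establishes.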
arXiv Detail & Related papers (2023-09-30T17:06:34Z)
- Multi-Target Multiplicity: Flexibility and Fairness in Target Specification under Resource Constraints [76.84999501420938]
We introduce a conceptual and computational framework for assessing how the choice of target affects individuals' outcomes.
We show that the level of multiplicity that stems from target variable choice can be greater than that stemming from nearly-optimal models of a single target.
arXiv Detail & Related papers (2023-06-23T18:57:14Z)
- Multi-Objective GFlowNets [59.16787189214784]
We study the problem of generating diverse candidates in the context of Multi-Objective Optimization.
In many applications of machine learning such as drug discovery and material design, the goal is to generate candidates which simultaneously optimize a set of potentially conflicting objectives.
We propose Multi-Objective GFlowNets (MOGFNs), a novel method for generating diverse optimal solutions, based on GFlowNets.
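For context, a small generic sketch of what "conflicting objectives" implies for a candidate pool: keeping only the non-dominated (Pareto-optimal) candidates. This is standard Pareto filtering, not the GFlowNet-based sampler the paper proposes; the example objectives are invented.

```python
# Illustrative Pareto-front filter: keep candidates that no other candidate
# beats (or ties) on every objective while beating them on at least one.
from typing import List, Sequence

def pareto_front(scores: List[Sequence[float]]) -> List[int]:
    """Return indices of non-dominated candidates; higher is better everywhere."""
    def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))
    return [i for i, s in enumerate(scores)
            if not any(dominates(other, s) for j, other in enumerate(scores) if j != i)]

# Example with two objectives (e.g. binding affinity vs. synthesizability):
print(pareto_front([(0.9, 0.2), (0.5, 0.8), (0.4, 0.4)]))  # -> [0, 1]
```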
arXiv Detail & Related papers (2022-10-23T16:15:36Z)
- Alleviating Search Bias in Bayesian Evolutionary Optimization with Many Heterogeneous Objectives [9.139734850798124]
We deal with multi-objective optimization problems with heterogeneous objectives (HE-MOPs).
A new acquisition function that mitigates search bias towards the fast objectives is suggested.
We demonstrate the effectiveness of the proposed algorithm by testing it on widely used multi-/many-objective benchmark problems.
arXiv Detail & Related papers (2022-08-25T17:07:40Z)
- Choosing the Best of Both Worlds: Diverse and Novel Recommendations through Multi-Objective Reinforcement Learning [68.45370492516531]
We introduce Scalarized Multi-Objective Reinforcement Learning (SMORL) for the Recommender Systems (RS) setting.
The SMORL agent augments standard recommendation models with additional RL layers that drive it to simultaneously satisfy three principal objectives: accuracy, diversity, and novelty of recommendations.
Our experimental results on two real-world datasets reveal a substantial increase in aggregate diversity, a moderate increase in accuracy, reduced repetitiveness of recommendations, and demonstrate the importance of reinforcing diversity and novelty as complementary objectives.
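To illustrate what a scalarized multi-objective reward could look like for a single recommendation step, here is a sketch under assumed reward components and weights; it is not the SMORL implementation.

```python
# Illustrative only: a vector-valued step reward (accuracy, diversity, novelty)
# collapsed into a scalar with fixed weights, in the spirit of scalarized
# multi-objective RL. Reward definitions and weights are assumptions.
from typing import Sequence, Set

def session_step_reward(clicked: bool,
                        item_category: str,
                        categories_shown: Set[str],
                        item_popularity: float,
                        weights: Sequence[float] = (1.0, 0.5, 0.5)) -> float:
    r_accuracy = 1.0 if clicked else 0.0                              # did the user click?
    r_diversity = 0.0 if item_category in categories_shown else 1.0  # new category in this session?
    r_novelty = 1.0 - item_popularity                                 # less popular => more novel
    w_acc, w_div, w_nov = weights
    return w_acc * r_accuracy + w_div * r_diversity + w_nov * r_novelty

# Example: a clicked item from an unseen category with popularity 0.9
print(session_step_reward(True, "jazz", {"pop", "rock"}, 0.9))  # 1.0 + 0.5 + 0.05 = 1.55
```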
arXiv Detail & Related papers (2021-10-28T13:22:45Z)
- Optimizing Interactive Systems via Data-Driven Objectives [70.3578528542663]
We propose an approach that infers the objective directly from observed user interactions.
These inferences can be made regardless of prior knowledge and across different types of user behavior.
We introduce the Interactive System Optimizer (ISO), a novel algorithm that uses these inferred objectives for optimization.
arXiv Detail & Related papers (2020-06-19T20:49:14Z)
- A Distributional View on Multi-Objective Policy Optimization [24.690800846837273]
We propose an algorithm for multi-objective reinforcement learning that enables setting desired preferences for objectives in a scale-invariant way.
We show that setting different preferences in our framework allows us to trace out the space of nondominated solutions.
arXiv Detail & Related papers (2020-05-15T13:02:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.