Pareto Front Approximation for Multi-Objective Session-Based Recommender Systems
- URL: http://arxiv.org/abs/2407.16828v2
- Date: Tue, 27 Aug 2024 15:07:28 GMT
- Title: Pareto Front Approximation for Multi-Objective Session-Based Recommender Systems
- Authors: Timo Wilm, Philipp Normann, Felix Stepprath
- Abstract summary: MultiTRON is an approach that adapts Pareto front approximation techniques to multi-objective session-based recommender systems.
Our approach optimizes trade-offs between key metrics such as click-through and conversion rates by training on sampled preference vectors.
We validate the model's performance through extensive offline and online evaluation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work introduces MultiTRON, an approach that adapts Pareto front approximation techniques to multi-objective session-based recommender systems using a transformer neural network. Our approach optimizes trade-offs between key metrics such as click-through and conversion rates by training on sampled preference vectors. A significant advantage is that after training, a single model can access the entire Pareto front, allowing it to be tailored to meet the specific requirements of different stakeholders by adjusting an additional input vector that weights the objectives. We validate the model's performance through extensive offline and online evaluation. For broader application and research, the source code is made available at https://github.com/otto-de/MultiTRON. The results confirm the model's ability to manage multiple recommendation objectives effectively, offering a flexible tool for diverse business needs.
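To make the core mechanism concrete, below is a minimal PyTorch sketch of preference-conditioned training with a scalarized loss. It illustrates the general technique only and is not the authors' implementation (that is in the linked repository); the model class, its dimensions, and the two-objective click/order setup are assumptions.

```python
import torch
import torch.nn as nn

class PreferenceConditionedRecommender(nn.Module):
    """Toy session encoder that also consumes a preference vector
    weighting the objectives (hypothetical, for illustration)."""
    def __init__(self, num_items: int, dim: int = 64, num_objectives: int = 2):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim)
        self.pref_proj = nn.Linear(num_objectives, dim)
        self.head = nn.Linear(dim, num_items)

    def forward(self, session: torch.Tensor, pref: torch.Tensor) -> torch.Tensor:
        # Mean-pool the session items and mix in the preference embedding.
        h = self.item_emb(session).mean(dim=1) + self.pref_proj(pref)
        return self.head(h)  # scores over the item catalogue

def training_step(model, optimizer, session, click_target, order_target):
    # Sample one preference vector from the simplex per step, so a single
    # model learns the entire trade-off curve between the objectives.
    pref = torch.distributions.Dirichlet(torch.ones(2)).sample()
    scores = model(session, pref.expand(session.size(0), -1))
    loss_click = nn.functional.cross_entropy(scores, click_target)
    loss_order = nn.functional.cross_entropy(scores, order_target)
    loss = pref[0] * loss_click + pref[1] * loss_order  # scalarization
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference, the same trained model is steered along the Pareto front by fixing the preference vector, e.g. torch.tensor([0.8, 0.2]) to favour click-through over conversions.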
Related papers
- MDAP: A Multi-view Disentangled and Adaptive Preference Learning Framework for Cross-Domain Recommendation [63.27390451208503]
Cross-domain recommendation systems leverage multi-domain user interactions to improve performance.
We propose the Multi-view Disentangled and Adaptive Preference Learning framework.
Our framework uses a multi-view encoder to capture diverse user preferences.
arXiv Detail & Related papers (2024-10-08T10:06:45Z)
- A Unified Graph Transformer for Overcoming Isolations in Multi-modal Recommendation [9.720586396359906]
We argue that existing multi-modal recommender systems typically use isolated processes for both feature extraction and modality modelling.
We propose a novel model, called Unified Multi-modal Graph Transformer (UGT), which leverages a multi-way transformer to extract aligned multi-modal features.
We show that the UGT model can achieve significant effectiveness gains, especially when jointly optimised with the commonly-used multi-modal recommendation losses.
arXiv Detail & Related papers (2024-07-29T11:04:31Z)
- Towards Efficient Pareto Set Approximation via Mixture of Experts Based Model Fusion [53.33473557562837]
Solving multi-objective optimization problems for large deep neural networks is a challenging task due to the complexity of the loss landscape and the expensive computational cost.
We propose a practical and scalable approach to solve this problem via mixture of experts (MoE) based model fusion.
By ensembling the weights of specialized single-task models, the MoE module can effectively capture the trade-offs between multiple objectives (a simplified weight-fusion sketch follows this entry).
arXiv Detail & Related papers (2024-06-14T07:16:18Z)
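As a rough illustration of the weight-ensembling idea in the entry above, this sketch linearly mixes the parameters of single-objective expert models according to a preference vector. It is a heavy simplification of MoE-based fusion, and all names are hypothetical.

```python
import copy

def fuse_experts(experts, weights):
    """Blend single-objective expert models into one network whose weights
    are a convex combination given by `weights` (assumes all parameters
    are floating-point tensors)."""
    fused = copy.deepcopy(experts[0])
    state = fused.state_dict()
    expert_states = [e.state_dict() for e in experts]
    for name in state:
        state[name] = sum(w * es[name] for w, es in zip(weights, expert_states))
    fused.load_state_dict(state)
    return fused

# e.g. a model 70% driven by the click expert, 30% by the order expert:
# model = fuse_experts([click_expert, order_expert], [0.7, 0.3])
```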
- A Model-based Multi-Agent Personalized Short-Video Recommender System [19.03089585214444]
We propose an RL-based industrial short-video recommender ranking framework.
Our proposed framework adopts a model-based learning approach to alleviate the sample selection bias.
Our proposed approach has been deployed on a real large-scale short-video sharing platform.
arXiv Detail & Related papers (2024-05-03T04:34:36Z)
- UCB-driven Utility Function Search for Multi-objective Reinforcement Learning [75.11267478778295]
In Multi-objective Reinforcement Learning (MORL), agents are tasked with optimising decision-making behaviours.
We focus on the case of linear utility functions parameterised by weight vectors w.
We introduce a method based on the Upper Confidence Bound (UCB) to efficiently search for the most promising weight vectors during different stages of the learning process (a bandit-style sketch follows this entry).
arXiv Detail & Related papers (2024-05-01T09:34:42Z)
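For intuition about the entry above, here is a bandit-style sketch of UCB selection over a discrete pool of candidate weight vectors. The paper's staged procedure is more elaborate; train_and_eval is a hypothetical callback that trains the agent under a given weight vector and returns its scalarized return.

```python
import math

def ucb_index(counts, means, t, c=2.0):
    """Return the arm (weight-vector index) with the highest UCB score."""
    for i, n in enumerate(counts):
        if n == 0:
            return i  # try every weight vector once first
    return max(range(len(counts)),
               key=lambda i: means[i] + c * math.sqrt(math.log(t) / counts[i]))

def search_weights(candidates, train_and_eval, rounds=100):
    counts = [0] * len(candidates)
    means = [0.0] * len(candidates)
    for t in range(1, rounds + 1):
        i = ucb_index(counts, means, t)
        reward = train_and_eval(candidates[i])  # scalarized episodic return
        counts[i] += 1
        means[i] += (reward - means[i]) / counts[i]  # running average
    # Return the weight vector with the best observed mean return.
    return candidates[max(range(len(candidates)), key=lambda i: means[i])]
```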
- LiMAML: Personalization of Deep Recommender Models via Meta Learning [13.69036196446634]
We introduce an innovative meta-learning solution tailored to the personalization of models for individual members and other entities.
We leverage the Model-Agnostic Meta Learning (MAML) algorithm to adapt per-task sub-networks using recent user interaction data (a generic MAML sketch follows this entry).
Our approach has enabled the deployment of a range of highly personalized AI models across diverse LinkedIn applications.
arXiv Detail & Related papers (2024-02-23T22:06:36Z)
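As background for the LiMAML entry above, this is a generic first-order MAML step in PyTorch, not LinkedIn's implementation: adapt a copy of the parameters on a member's recent (support) interactions, then score the adapted parameters on held-out (query) interactions.

```python
import torch
from torch.func import functional_call

def maml_step(model, loss_fn, support, query, inner_lr=0.01):
    """One first-order MAML step for a single task (e.g. one member).
    `support` and `query` are (inputs, targets) tuples."""
    params = dict(model.named_parameters())
    # Inner loop: one gradient step on the member's recent interactions.
    s_loss = loss_fn(functional_call(model, params, (support[0],)), support[1])
    grads = torch.autograd.grad(s_loss, list(params.values()))
    adapted = {n: p - inner_lr * g
               for (n, p), g in zip(params.items(), grads)}
    # Outer objective: adapted parameters evaluated on held-out data.
    return loss_fn(functional_call(model, adapted, (query[0],)), query[1])
```

Calling backward() on the returned loss accumulates meta-gradients into the shared model, which the outer optimizer then updates.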
- Interactive Hyperparameter Optimization in Multi-Objective Problems via Preference Learning [65.51668094117802]
We propose a human-centered interactive HPO approach tailored towards multi-objective machine learning (ML).
Instead of relying on the user guessing the most suitable indicator for their needs, our approach automatically learns an appropriate indicator.
arXiv Detail & Related papers (2023-09-07T09:22:05Z)
- Towards All-in-one Pre-training via Maximizing Multi-modal Mutual Information [77.80071279597665]
We propose an all-in-one single-stage pre-training approach, named Maximizing Multi-modal Mutual Information Pre-training (M3I Pre-training).
Our approach achieves better performance than previous pre-training methods on various vision benchmarks, including ImageNet classification, object detection, LVIS long-tailed object detection, and ADE20k semantic segmentation.
arXiv Detail & Related papers (2022-11-17T18:59:49Z)
- Effective Adaptation in Multi-Task Co-Training for Unified Autonomous Driving [103.745551954983]
In this paper, we investigate the transfer performance of various types of self-supervised methods, including MoCo and SimCLR, on three downstream tasks.
We find that their performances are sub-optimal or even lag far behind the single-task baseline.
We propose a simple yet effective pretrain-adapt-finetune paradigm for general multi-task training.
arXiv Detail & Related papers (2022-09-19T12:15:31Z)
- Self-Evolutionary Optimization for Pareto Front Learning [34.17125297176668]
Multi-objective optimization (MOO) approaches have been proposed for multitasking problems.
Recent MOO methods approximate multiple optimal solutions (Pareto front) with a single unified model.
We show that Pareto front learning (PFL) can be re-formulated into another MOO problem with multiple objectives, each of which corresponds to different preference weights for the tasks.
arXiv Detail & Related papers (2021-10-07T13:38:57Z)
- Learning the Pareto Front with Hypernetworks [44.72371822514582]
Multi-objective optimization (MOO) problems are prevalent in machine learning.
These problems have a set of optimal solutions (the Pareto front), where each point on the front represents a different trade-off between possibly conflicting objectives.
Recent MOO methods can target a specific desired ray in loss space; however, most approaches still face two grave limitations (a simplified hypernetwork sketch follows this entry).
arXiv Detail & Related papers (2020-10-08T16:39:20Z)
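To illustrate the hypernetwork idea in the entry above: a hypernetwork maps a preference ray on the simplex to the full weight set of a small target network, so a single trained hypernetwork covers the whole front. Sizes and names below are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class ParetoHypernetwork(nn.Module):
    """Generates the weights of a tiny two-layer MLP from a preference ray."""
    def __init__(self, num_objectives=2, in_dim=10, hidden=32, out_dim=1):
        super().__init__()
        self.shapes = [(hidden, in_dim), (hidden,), (out_dim, hidden), (out_dim,)]
        total = sum(torch.Size(s).numel() for s in self.shapes)
        self.gen = nn.Sequential(
            nn.Linear(num_objectives, 64), nn.ReLU(), nn.Linear(64, total)
        )

    def forward(self, ray: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        flat = self.gen(ray)  # all target-network weights, flattened
        chunks, i = [], 0
        for shape in self.shapes:
            n = torch.Size(shape).numel()
            chunks.append(flat[i:i + n].view(shape))
            i += n
        w1, b1, w2, b2 = chunks
        h = torch.relu(x @ w1.t() + b1)  # run the generated target network
        return h @ w2.t() + b2

# e.g. y = ParetoHypernetwork()(torch.tensor([0.3, 0.7]), torch.randn(8, 10))
```

Training would sample a ray from the simplex each step, scalarize the objective losses with it, and backpropagate through the generated weights into the hypernetwork.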
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.