Multi-Task Recommendations with Reinforcement Learning
- URL: http://arxiv.org/abs/2302.03328v1
- Date: Tue, 7 Feb 2023 09:11:17 GMT
- Title: Multi-Task Recommendations with Reinforcement Learning
- Authors: Ziru Liu, Jiejie Tian, Qingpeng Cai, Xiangyu Zhao, Jingtong Gao,
Shuchang Liu, Dayou Chen, Tonghao He, Dong Zheng, Peng Jiang, Kun Gai
- Abstract summary: Multi-task Learning (MTL) has yielded immense success in Recommender System (RS) applications.
This paper proposes a Reinforcement Learning (RL) enhanced MTL framework, namely RMTL, to combine the losses of different recommendation tasks using dynamic weights.
Experiments on two real-world public datasets demonstrate the effectiveness of RMTL with a higher AUC against state-of-the-art MTL-based recommendation models.
- Score: 20.587553899753903
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, Multi-task Learning (MTL) has yielded immense success in
Recommender System (RS) applications. However, current MTL-based recommendation
models tend to disregard the session-wise patterns of user-item interactions
because they are predominantly constructed based on item-wise datasets.
Moreover, balancing multiple objectives has always been a challenge in this
field, which is typically avoided via linear estimations in existing works. To
address these issues, in this paper, we propose a Reinforcement Learning (RL)
enhanced MTL framework, namely RMTL, to combine the losses of different
recommendation tasks using dynamic weights. Specifically, the RMTL structure
addresses the aforementioned issues by (i) constructing an MTL environment from
session-wise interactions, (ii) training a multi-task actor-critic network
structure that is compatible with most existing MTL-based recommendation
models, and (iii) optimizing and fine-tuning the MTL loss function using the
weights generated by the critic networks. Experiments on
two real-world public datasets demonstrate the effectiveness of RMTL with a
higher AUC against state-of-the-art MTL-based recommendation models.
Additionally, we evaluate and validate RMTL's compatibility and transferability
across various MTL models.
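As a rough illustration of the central mechanism, the sketch below combines per-task losses with dynamic weights produced by critic networks instead of fixed linear coefficients. The shapes, the per-task binary cross-entropy losses, and the sigmoid squashing are illustrative assumptions, not the paper's exact architecture, and the RL training of the critics on the session MDP is omitted.

```python
# Hedged sketch: per-task losses weighted by critic outputs (assumed shapes).
import torch
import torch.nn as nn

class CriticWeightedMTLLoss(nn.Module):
    def __init__(self, state_dim: int, num_tasks: int = 2):
        super().__init__()
        # One small critic per task maps the session state to a loss weight.
        self.critics = nn.ModuleList(
            nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, 1))
            for _ in range(num_tasks)
        )

    def forward(self, state, task_logits, task_labels):
        bce = nn.BCEWithLogitsLoss(reduction="none")
        total = 0.0
        for critic, logits, labels in zip(self.critics, task_logits, task_labels):
            weight = torch.sigmoid(critic(state)).squeeze(-1)  # dynamic weight in (0, 1)
            total = total + (weight * bce(logits, labels)).mean()
        return total
```

In the paper the weights come from critics trained with an RL objective over session-wise interactions; here they are plain feed-forward networks for brevity.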
Related papers
- Empowering Large Language Models in Wireless Communication: A Novel Dataset and Fine-Tuning Framework [81.29965270493238]
We develop a specialized dataset aimed at enhancing the evaluation and fine-tuning of large language models (LLMs) for wireless communication applications.
The dataset includes a diverse set of multi-hop questions, such as true/false and multiple-choice types, spanning difficulty levels from easy to hard.
We introduce a Pointwise V-Information (PVI) based fine-tuning method, providing a detailed theoretical analysis and justification for its use in quantifying the information content of training data.
arXiv Detail & Related papers (2025-01-16T16:19:53Z)
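The PVI criterion mentioned above has a standard closed form; a minimal sketch, assuming log-probabilities from two scoring models (one conditioned on the input, one not):

```python
# Pointwise V-Information: PVI(x -> y) = -log2 p(y | null) + log2 p(y | x).
# The two log-probabilities would come from models fine-tuned with and
# without the input; here they are plain floats for illustration.
import math

def pvi(logprob_y_given_x: float, logprob_y_given_null: float) -> float:
    return (logprob_y_given_x - logprob_y_given_null) / math.log(2)

# The label gets far more likely once the input is seen: ~1.58 bits.
print(pvi(math.log(0.9), math.log(0.3)))
```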
- R-MTLLMF: Resilient Multi-Task Large Language Model Fusion at the Wireless Edge [78.26352952957909]
Multi-task large language models (MTLLMs) are important for many applications at the wireless edge, where users demand specialized models to handle multiple tasks efficiently.
The concept of model fusion via task vectors has emerged as an efficient approach for combining fine-tuning parameters to produce an MTLLM.
In this paper, the problem of enabling edge users to collaboratively craft such MTLLMs via task vectors is studied, under the assumption of worst-case adversarial attacks.
arXiv Detail & Related papers (2024-11-27T10:57:06Z)
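Model fusion via task vectors, as referenced in this entry, has a well-known arithmetic core; a minimal sketch, where the state-dict layout and the scaling coefficient are illustrative assumptions:

```python
# Task vector = fine-tuned weights minus pre-trained weights; fusion adds
# the scaled sum of task vectors back onto the base model.
import torch

def task_vector(finetuned: dict, pretrained: dict) -> dict:
    return {k: finetuned[k] - pretrained[k] for k in pretrained}

def fuse(pretrained: dict, vectors: list, lam: float = 0.3) -> dict:
    fused = {k: v.clone() for k, v in pretrained.items()}
    for tv in vectors:
        for k in fused:
            fused[k] += lam * tv[k]
    return fused
```

The paper's contribution, resilience of this fusion under adversarial attacks, is not modeled here.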
- Pairwise Ranking Loss for Multi-Task Learning in Recommender Systems [8.824514065551865]
In online advertising systems, tasks such as Click-Through Rate (CTR) and Conversion Rate (CVR) prediction are often treated concurrently as an MTL problem.
In this study, exposure labels corresponding to conversions are regarded as definitive indicators.
A novel task-specific loss is introduced by calculating a pairwise ranking (PWiseR) loss between model predictions.
arXiv Detail & Related papers (2024-06-04T09:52:41Z)
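A generic pairwise ranking term of this kind pushes scores of converted impressions above scores of exposed but unconverted ones; the BPR-style logistic form below is a common choice and not necessarily the exact PWiseR formulation:

```python
# Generic pairwise ranking loss over all positive/negative score pairs:
# -log sigmoid(pos - neg), written with softplus for numerical stability.
import torch
import torch.nn.functional as F

def pairwise_ranking_loss(pos_scores: torch.Tensor, neg_scores: torch.Tensor) -> torch.Tensor:
    diff = pos_scores.unsqueeze(1) - neg_scores.unsqueeze(0)  # (P, N) pair matrix
    return F.softplus(-diff).mean()
```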
- MTLComb: multi-task learning combining regression and classification tasks for joint feature selection [3.708475728683911]
Multi-task learning (MTL) is a learning paradigm that enables the simultaneous training of multiple communicating algorithms.
We propose a provable loss weighting scheme that analytically determines the optimal weights for balancing regression and classification tasks.
We introduce MTLComb, an MTL algorithm and software package encompassing optimization procedures, training protocols, and hyperparameter estimation procedures.
arXiv Detail & Related papers (2024-05-16T08:07:25Z)
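MTLComb's analytic weights are derived in the paper; as a generic stand-in for the balancing problem it addresses, the sketch below rescales a regression and a classification loss by their detached magnitudes so that neither objective dominates purely through its scale:

```python
# Generic scale-balancing of heterogeneous losses (not MTLComb's scheme):
# detached magnitudes normalize gradient scale across objectives.
import torch

def balanced_loss(mse_loss: torch.Tensor, ce_loss: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    w_mse = 1.0 / (mse_loss.detach() + eps)
    w_ce = 1.0 / (ce_loss.detach() + eps)
    return w_mse * mse_loss + w_ce * ce_loss
```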
- Multi-task learning via robust regularized clustering with non-convex group penalties [0.0]
Multi-task learning (MTL) aims to improve estimation performance by sharing common information among related tasks.
Existing MTL methods based on this assumption often ignore outlier tasks.
We propose a novel MTL method called multi-task learning via robust regularized clustering (MTLRRC).
arXiv Detail & Related papers (2024-04-04T07:09:43Z)
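The clustered-regularization idea can be illustrated with a simple convex stand-in for the paper's robust non-convex group penalties: task parameters are pulled toward shared cluster centers, so outlier tasks far from every center are distorted less.

```python
# Simple stand-in penalty: each task attaches to its nearest cluster
# center; the paper's robust non-convex group penalties replace the
# squared distance used here.
import torch

def clustering_penalty(task_params: torch.Tensor, centers: torch.Tensor) -> torch.Tensor:
    # task_params: (T, d), one row per task; centers: (C, d).
    dists = torch.cdist(task_params, centers) ** 2  # (T, C)
    return dists.min(dim=1).values.sum()
```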
- MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning [1.4396109429521227]
Adapting models pre-trained on large-scale datasets to a variety of downstream tasks is a common strategy in deep learning.
Parameter-efficient fine-tuning methods have emerged as a promising way to adapt pre-trained models to different tasks while training only a minimal number of parameters.
We introduce MTLoRA, a novel framework for parameter-efficient training of Multi-Task Learning models.
arXiv Detail & Related papers (2024-03-29T17:43:58Z)
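A minimal sketch of low-rank adaptation in a multi-task setting, in the spirit of MTLoRA's split between shared and task-specific modules; the ranks, shapes, and placement are illustrative assumptions rather than the paper's architecture:

```python
# Frozen base weight plus a shared low-rank update and one low-rank
# update per task (LoRA-style: B starts at zero so the delta starts at 0).
import torch
import torch.nn as nn

class MultiTaskLoRALinear(nn.Module):
    def __init__(self, d_in: int, d_out: int, num_tasks: int, r: int = 8):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        for p in self.base.parameters():
            p.requires_grad_(False)  # pre-trained weights stay frozen
        self.shared_A = nn.Parameter(torch.randn(r, d_in) * 0.01)
        self.shared_B = nn.Parameter(torch.zeros(d_out, r))
        self.task_A = nn.Parameter(torch.randn(num_tasks, r, d_in) * 0.01)
        self.task_B = nn.Parameter(torch.zeros(num_tasks, d_out, r))

    def forward(self, x: torch.Tensor, task: int) -> torch.Tensor:
        delta = self.shared_B @ self.shared_A + self.task_B[task] @ self.task_A[task]
        return self.base(x) + x @ delta.T
```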
- Low-Rank Multitask Learning based on Tensorized SVMs and LSSVMs [65.42104819071444]
Multitask learning (MTL) leverages task-relatedness to enhance performance.
We employ high-order tensors, with each mode corresponding to a task index, to naturally represent tasks referenced by multiple indices.
We propose a general framework of low-rank MTL methods with tensorized support vector machines (SVMs) and least squares support vector machines (LSSVMs).
arXiv Detail & Related papers (2023-08-30T14:28:26Z)
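The low-rank sharing principle is easiest to see in the matrix special case: stack per-task weight vectors and factorize them so tasks interact through common factors. The paper generalizes this to higher-order tensors and (LS)SVM losses; the sketch below shows only the rank structure.

```python
# Rank-r coupling of task weights: W = U V^T, so all task weights live
# in a shared r-dimensional subspace of feature space.
import numpy as np

rng = np.random.default_rng(0)
T, d, r = 6, 20, 3                  # tasks, features, shared rank
U = rng.normal(size=(T, r))         # task-side factors
V = rng.normal(size=(d, r))         # feature-side factors
W = U @ V.T                         # (T, d) stacked task weights
print(np.linalg.matrix_rank(W))     # 3
```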
- Online Multi-Task Learning with Recursive Least Squares and Recursive Kernel Methods [50.67996219968513]
We introduce two novel approaches for Online Multi-Task Learning (MTL) Regression Problems.
We achieve exact and approximate recursions with quadratic per-instance cost in the dimension of the input space.
We compare our online MTL methods to other contenders in a real-world wind speed forecasting case study.
arXiv Detail & Related papers (2023-08-03T01:41:34Z)
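The recursive least squares building block the paper extends has a compact standard form with quadratic per-instance cost in the input dimension, matching the recursion cost quoted above; the multi-task coupling itself is not shown here.

```python
# Standard exponentially weighted RLS: gain, error correction, and
# inverse-covariance update for each incoming sample.
import numpy as np

class RLS:
    def __init__(self, dim: int, lam: float = 0.99, delta: float = 1e3):
        self.w = np.zeros(dim)
        self.P = delta * np.eye(dim)  # inverse covariance estimate
        self.lam = lam                # forgetting factor

    def update(self, x: np.ndarray, y: float) -> None:
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)      # gain vector
        self.w += k * (y - self.w @ x)    # correct by prediction error
        self.P = (self.P - np.outer(k, Px)) / self.lam
```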
- Feature Decomposition for Reducing Negative Transfer: A Novel Multi-task Learning Method for Recommender System [35.165907482126464]
We propose a novel multi-task learning method termed Feature Decomposition Network (FDN).
The key idea of the proposed FDN is reducing the phenomenon of feature redundancy by explicitly decomposing features into task-specific features and task-shared features with carefully designed constraints.
Experimental results show that our proposed FDN can outperform the state-of-the-art (SOTA) methods by a noticeable margin.
arXiv Detail & Related papers (2023-02-10T03:08:37Z)
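The decomposition idea can be sketched directly: separate encoders produce task-shared and task-specific features, with a penalty that discourages redundancy between them. The encoder shapes and the cosine-overlap constraint below are illustrative assumptions; FDN's actual constraints and tower design may differ.

```python
# Explicit shared/specific feature split with an overlap penalty that
# pushes the two feature sets to stay complementary.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecomposedFeatures(nn.Module):
    def __init__(self, d_in: int, d_feat: int, num_tasks: int):
        super().__init__()
        self.shared = nn.Linear(d_in, d_feat)
        self.specific = nn.ModuleList(nn.Linear(d_in, d_feat) for _ in range(num_tasks))

    def forward(self, x: torch.Tensor, task: int):
        s, t = self.shared(x), self.specific[task](x)
        overlap = (F.normalize(s, dim=-1) * F.normalize(t, dim=-1)).sum(-1).pow(2).mean()
        return torch.cat([s, t], dim=-1), overlap  # features, redundancy penalty
```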
- M$^3$ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design [95.41238363769892]
Multi-task learning (MTL) encapsulates multiple learned tasks in a single model and often lets those tasks learn better jointly.
Current MTL regimes have to activate nearly the entire model even to just execute a single task.
We present a model-accelerator co-design framework to enable efficient on-device MTL.
arXiv Detail & Related papers (2022-10-26T15:40:24Z)
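Task-conditioned sparse routing is what lets a single task avoid activating the whole model; a toy layer follows, with the gating scheme, expert count, and top-k choice as assumptions (the transformer placement and accelerator co-design are beyond this sketch):

```python
# Each task owns a gate that activates only its top-k experts, so
# executing one task touches a fraction of the parameters.
import torch
import torch.nn as nn

class TaskMoE(nn.Module):
    def __init__(self, d: int, num_experts: int = 8, num_tasks: int = 2, k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(d, d) for _ in range(num_experts))
        self.gates = nn.Parameter(torch.zeros(num_tasks, num_experts))
        self.k = k

    def forward(self, x: torch.Tensor, task: int) -> torch.Tensor:
        weights, idx = torch.softmax(self.gates[task], -1).topk(self.k)
        # Only the k selected experts run; the rest stay inactive.
        return sum(w * self.experts[i](x) for w, i in zip(weights, idx.tolist()))
```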
- Multi-Task Learning as a Bargaining Game [63.49888996291245]
In Multi-task learning (MTL), a joint model is trained to simultaneously make predictions for several tasks.
Since the gradients of these different tasks may conflict, training a joint model for MTL often yields lower performance than its corresponding single-task counterparts.
We propose viewing the gradients combination step as a bargaining game, where tasks negotiate to reach an agreement on a joint direction of parameter update.
arXiv Detail & Related papers (2022-02-02T13:21:53Z)
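The bargaining view has a clean characterization: with per-task gradients stacked as the rows of G, the agreed weights alpha satisfy (G G^T) alpha = 1/alpha elementwise. The damped fixed-point iteration below is only a crude illustrative solver for this condition; the paper uses a proper optimization procedure.

```python
# Nash bargaining weights satisfy (G G^T) alpha = 1 / alpha (elementwise),
# with one task gradient per row of G; this damped fixed point is a rough
# illustration for well-behaved (non-conflicting) gradients.
import numpy as np

def nash_update_direction(G: np.ndarray, iters: int = 200) -> np.ndarray:
    M = G @ G.T                      # (T, T) Gram matrix of task gradients
    alpha = np.ones(G.shape[0])
    for _ in range(iters):
        alpha = 0.5 * alpha + 0.5 / np.maximum(M @ alpha, 1e-8)
    return alpha @ G                 # joint direction sum_i alpha_i g_i
```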
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.