Feature Decomposition for Reducing Negative Transfer: A Novel Multi-task
Learning Method for Recommender System
- URL: http://arxiv.org/abs/2302.05031v1
- Date: Fri, 10 Feb 2023 03:08:37 GMT
- Title: Feature Decomposition for Reducing Negative Transfer: A Novel Multi-task
Learning Method for Recommender System
- Authors: Jie Zhou, Qian Yu, Chuan Luo, Jing Zhang
- Abstract summary: We propose a novel multi-task learning method termed Feature Decomposition Network (FDN)
The key idea of the proposed FDN is to reduce feature redundancy by explicitly decomposing features into task-specific features and task-shared features with carefully designed constraints.
Experimental results show that our proposed FDN can outperform the state-of-the-art (SOTA) methods by a noticeable margin.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, thanks to the rapid development of deep learning (DL),
DL-based multi-task learning (MTL) has made significant progress, and it has
been successfully applied to recommendation systems (RS). However, in a
recommender system, the correlations among the involved tasks are complex.
Therefore, the existing MTL models designed for RS suffer from negative
transfer to varying degrees, which harms optimization in MTL. We find
that the root cause of negative transfer is feature redundancy: features
learned for different tasks interfere with each other. To alleviate the issue
of negative transfer, we propose a novel multi-task learning method termed
Feature Decomposition Network (FDN). The key idea of the proposed FDN is
to reduce feature redundancy by explicitly decomposing
features into task-specific features and task-shared features with carefully
designed constraints. We demonstrate the effectiveness of the proposed method
on two datasets: a synthetic dataset and a public dataset (i.e., Ali-CCP).
Experimental results show that our proposed FDN can outperform the
state-of-the-art (SOTA) methods by a noticeable margin.
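To make the decomposition idea concrete, below is a minimal PyTorch sketch of shared and task-specific encoders kept apart by an orthogonality penalty. The two-task setup, layer sizes, and the exact penalty form are illustrative assumptions, not the authors' published FDN architecture.

```python
# Minimal sketch of feature decomposition: a shared encoder and per-task
# encoders whose outputs are pushed apart by an orthogonality penalty.
# All dimensions and the penalty form are illustrative assumptions.
import torch
import torch.nn as nn

class FDNSketch(nn.Module):
    def __init__(self, in_dim=64, hid=32, n_tasks=2):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.specific = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU()) for _ in range(n_tasks)]
        )
        self.towers = nn.ModuleList([nn.Linear(2 * hid, 1) for _ in range(n_tasks)])

    def forward(self, x):
        s = self.shared(x)                      # task-shared features
        outs, penalty = [], 0.0
        for spec, tower in zip(self.specific, self.towers):
            t = spec(x)                         # task-specific features
            # Penalize overlap between shared and task-specific features to
            # reduce redundancy (one simple way to realize such a constraint).
            penalty = penalty + (s * t).sum(dim=1).pow(2).mean()
            outs.append(tower(torch.cat([s, t], dim=1)))
        return outs, penalty

model = FDNSketch()
outs, penalty = model(torch.randn(8, 64))
loss = sum(o.pow(2).mean() for o in outs) + 0.1 * penalty  # dummy task losses
loss.backward()
```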
Related papers
- Intuition-aware Mixture-of-Rank-1-Experts for Parameter Efficient Finetuning [50.73666458313015]
Large Language Models (LLMs) have demonstrated significant potential in performing multiple tasks in multimedia applications.
MoE has emerged as a promising solution, with its sparse architecture enabling effective task decoupling.
Intuition-MoR1E achieves superior efficiency and 2.15% overall accuracy improvement across 14 public datasets.
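As a rough illustration of the rank-1-expert idea, the sketch below gates a set of rank-1 updates u_i v_i^T with a learned router. Dimensions, initialization, and the routing rule are assumptions for illustration, not the paper's Intuition-MoR1E design.

```python
# Sketch of a mixture of rank-1 experts: each expert applies the rank-1
# map (x . v_i) u_i, and a router mixes the experts per input.
import torch
import torch.nn as nn

class Rank1MoE(nn.Module):
    def __init__(self, dim=8, n_experts=4):
        super().__init__()
        self.u = nn.Parameter(torch.randn(n_experts, dim) * 0.01)
        self.v = nn.Parameter(torch.randn(n_experts, dim) * 0.01)
        self.router = nn.Linear(dim, n_experts)

    def forward(self, x):                           # x: (batch, dim)
        gates = torch.softmax(self.router(x), dim=-1)   # (batch, n_experts)
        xv = x @ self.v.t()                         # (batch, n_experts): x . v_i
        expert_out = xv.unsqueeze(-1) * self.u      # rank-1 outputs (batch, E, dim)
        return x + (gates.unsqueeze(-1) * expert_out).sum(dim=1)

print(Rank1MoE()(torch.randn(2, 8)).shape)  # torch.Size([2, 8])
```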
arXiv Detail & Related papers (2024-04-13T12:14:58Z) - Task-Distributionally Robust Data-Free Meta-Learning [99.56612787882334]
Data-Free Meta-Learning (DFML) aims to efficiently learn new tasks by leveraging multiple pre-trained models without requiring their original training data.
For the first time, we reveal two major challenges hindering their practical deployment: Task-Distribution Shift (TDS) and Task-Distribution Corruption (TDC).
arXiv Detail & Related papers (2023-11-23T15:46:54Z) - A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical
Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both inference accuracy and mean square error without requiring additional training data.
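A minimal sketch of the shared-backbone, multi-head pattern the paper builds on; the ensemble rule (simple averaging) and all dimensions are illustrative assumptions rather than the MEMTL specifics.

```python
# Shared backbone with several prediction heads whose outputs are averaged.
import torch
import torch.nn as nn

class MultiHeadEnsemble(nn.Module):
    def __init__(self, in_dim=16, hid=32, out_dim=4, n_heads=3):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hid, out_dim) for _ in range(n_heads)])

    def forward(self, x):
        h = self.backbone(x)
        preds = torch.stack([head(h) for head in self.heads])  # (n_heads, B, out)
        return preds.mean(dim=0)  # simple ensemble: average the head predictions

y = MultiHeadEnsemble()(torch.randn(8, 16))
print(y.shape)  # torch.Size([8, 4])
```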
arXiv Detail & Related papers (2023-09-02T11:01:16Z) - Mitigating Negative Transfer with Task Awareness for Sexism, Hate
Speech, and Toxic Language Detection [7.661927086611542]
This paper proposes a new approach to mitigate the negative transfer problem based on the task awareness concept.
The proposed approach diminishes negative transfer while improving performance over the classic MTL solution.
The proposed approach has been implemented in two unified architectures to detect Sexism, Hate Speech, and Toxic Language in text comments.
arXiv Detail & Related papers (2023-07-07T04:10:37Z) - Multi-Task Recommendations with Reinforcement Learning [20.587553899753903]
Multi-task Learning (MTL) has yielded immense success in Recommender System (RS) applications.
This paper proposes a Reinforcement Learning (RL) enhanced MTL framework, namely RMTL, to combine the losses of different recommendation tasks using dynamic weights.
Experiments on two real-world public datasets demonstrate the effectiveness of RMTL, which achieves higher AUC than state-of-the-art MTL-based recommendation models.
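The core mechanism, combining task losses under dynamic weights, can be sketched as follows; the RL critic that RMTL actually learns is replaced here by a hypothetical placeholder policy.

```python
# Dynamically weighted multi-task loss for two recommendation tasks
# (e.g., CTR and CVR). The weighting function is a stand-in assumption
# for the RL-learned policy, not the paper's actual critic.
import torch

def dynamic_weights(state, n_tasks=2):
    # Hypothetical placeholder: a real policy would map the session state
    # to per-task weights; here we just draw a random softmax.
    return torch.softmax(torch.randn(n_tasks), dim=0)

ctr_loss, cvr_loss = torch.tensor(0.7), torch.tensor(1.3)
w = dynamic_weights(state=None)
total_loss = w[0] * ctr_loss + w[1] * cvr_loss  # dynamically weighted MTL loss
print(total_loss)
```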
arXiv Detail & Related papers (2023-02-07T09:11:17Z) - Task Aware Feature Extraction Framework for Sequential Dependence
Multi-Task Learning [1.0765359420035392]
We analyze sequential dependence MTL from a rigorous mathematical perspective.
We propose a Task Aware Feature Extraction (TAFE) framework for sequential dependence MTL.
arXiv Detail & Related papers (2023-01-06T13:12:59Z) - Multi-Task Learning as a Bargaining Game [63.49888996291245]
In Multi-task learning (MTL), a joint model is trained to simultaneously make predictions for several tasks.
Since the gradients of these different tasks may conflict, training a joint model for MTL often yields lower performance than its corresponding single-task counterparts.
We propose viewing the gradients combination step as a bargaining game, where tasks negotiate to reach an agreement on a joint direction of parameter update.
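For intuition about the gradient-conflict problem this bargaining view addresses, the snippet below applies a simpler, well-known fix (a PCGrad-style projection), plainly not the paper's Nash bargaining solution.

```python
# Two task gradients that conflict (negative dot product). A PCGrad-style
# projection removes the conflicting component before combining them.
import torch

g1 = torch.tensor([1.0, 0.5])
g2 = torch.tensor([-0.8, 0.6])

if torch.dot(g1, g2) < 0:  # the tasks pull the parameters in opposing directions
    # Project g1 onto the normal plane of g2 so the conflict vanishes.
    g1 = g1 - torch.dot(g1, g2) / torch.dot(g2, g2) * g2

update = g1 + g2  # combined parameter-update direction
print(update, torch.dot(g1, g2))  # dot product is now 0: no conflict
```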
arXiv Detail & Related papers (2022-02-02T13:21:53Z) - Multi-task Over-the-Air Federated Learning: A Non-Orthogonal
Transmission Approach [52.85647632037537]
We propose a multi-task over-the-air federated learning (MOAFL) framework, where multiple learning tasks share edge devices for data collection and learning models under the coordination of an edge server (ES).
Both the convergence analysis and numerical results demonstrate that the MOAFL framework can significantly reduce the uplink bandwidth consumption of multiple tasks without causing substantial learning performance degradation.
arXiv Detail & Related papers (2021-06-27T13:09:32Z) - Task Uncertainty Loss Reduce Negative Transfer in Asymmetric Multi-task
Feature Learning [0.0]
Multi-task learning (MTL) can improve task performance overall relative to single-task learning (STL), but can hide negative transfer (NT).
Asymmetric multitask feature learning (AMTFL) is an approach that tries to address this by allowing tasks with higher loss values to have smaller influence on feature representations for learning other tasks.
We present examples of NT in two datasets (image recognition and pharmacogenomics) and tackle this challenge by using aleatoric homoscedastic uncertainty to capture the relative confidence between tasks and to set weights for the task losses.
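The homoscedastic-uncertainty weighting referred to here is commonly written as L = Σ_i exp(-s_i) L_i + s_i with learned log-variances s_i (the Kendall et al. form); a minimal sketch with two tasks, dummy losses, and assumed shapes follows.

```python
# Uncertainty-based task weighting: each task's loss is scaled by a learned
# confidence term. Dummy losses stand in for real task losses here.
import torch
import torch.nn as nn

log_vars = nn.Parameter(torch.zeros(2))  # learned log task variances s_i

def weighted_loss(task_losses, log_vars):
    total = 0.0
    for loss, s in zip(task_losses, log_vars):
        # exp(-s) down-weights high-uncertainty tasks; +s keeps s bounded.
        total = total + torch.exp(-s) * loss + s
    return total

l = weighted_loss([torch.tensor(0.9), torch.tensor(2.1)], log_vars)
l.backward()
print(log_vars.grad)  # the weights receive gradients and adapt per task
```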
arXiv Detail & Related papers (2020-12-17T13:30:45Z) - Task-Feature Collaborative Learning with Application to Personalized
Attribute Prediction [166.87111665908333]
We propose a novel multi-task learning method called Task-Feature Collaborative Learning (TFCL).
Specifically, we first propose a base model with a heterogeneous block-diagonal structure regularizer to leverage the collaborative grouping of features and tasks.
As a practical extension, we extend the base model by allowing overlapping features and differentiating the hard tasks.
arXiv Detail & Related papers (2020-04-29T02:32:04Z)
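A toy sketch of a block-diagonal structure regularizer in the spirit of the TFCL base model above: entries of a task-by-feature weight matrix outside each group's block are penalized. The grouping and penalty form are illustrative assumptions, not the paper's exact regularizer.

```python
# Encourage a block-diagonal task-feature structure by driving weights
# outside the assumed task/feature blocks toward zero.
import torch

W = torch.randn(4, 12, requires_grad=True)  # 4 tasks x 12 features
# Hypothetical grouping: tasks {0,1} use features 0-5, tasks {2,3} use 6-11.
mask = torch.zeros_like(W)
mask[:2, :6] = 1.0
mask[2:, 6:] = 1.0

off_block_penalty = ((1 - mask) * W).abs().sum()  # L1 on off-block weights
off_block_penalty.backward()  # gradients shrink only the off-block entries
```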