CDR-Adapter: Learning Adapters to Dig Out More Transferring Ability for
Cross-Domain Recommendation Models
- URL: http://arxiv.org/abs/2311.02398v1
- Date: Sat, 4 Nov 2023 13:03:24 GMT
- Title: CDR-Adapter: Learning Adapters to Dig Out More Transferring Ability for
Cross-Domain Recommendation Models
- Authors: Yanyu Chen, Yao Yao, Wai Kin Victor Chan, Li Xiao, Kai Zhang, Liang
Zhang, Yun Ye
- Abstract summary: Cross-domain recommendation (CDR) is a promising solution that utilizes knowledge from the source domain to improve the recommendation performance in the target domain.
Previous CDR approaches have mainly followed the Embedding and Mapping (EMCDR) framework, which involves learning a mapping function to facilitate knowledge transfer.
We present a scalable and efficient paradigm to address data sparsity and cold-start issues in CDR, named CDR-Adapter.
- Score: 15.487701831604767
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data sparsity and cold-start problems are persistent challenges in
recommendation systems. Cross-domain recommendation (CDR) is a promising
solution that utilizes knowledge from the source domain to improve the
recommendation performance in the target domain. Previous CDR approaches have
mainly followed the Embedding and Mapping (EMCDR) framework, which involves
learning a mapping function to facilitate knowledge transfer. However, these
approaches necessitate re-engineering and re-training the network structure to
incorporate transferrable knowledge, which can be computationally expensive and
may result in catastrophic forgetting of the original knowledge. In this paper,
we present a scalable and efficient paradigm to address data sparsity and
cold-start issues in CDR, named CDR-Adapter, by decoupling the original
recommendation model from the mapping function, without requiring
re-engineering the network structure. Specifically, CDR-Adapter is a novel
plug-and-play module that employs adapter modules to align feature
representations, allowing for flexible knowledge transfer across different
domains and efficient fine-tuning with minimal training costs. We conducted
extensive experiments on the benchmark dataset, which demonstrated the
effectiveness of our approach over several state-of-the-art CDR approaches.
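The abstract describes CDR-Adapter only at a high level. As a hedged illustration of the general idea (frozen source and target recommenders, plus a small trainable adapter that aligns source-domain embeddings with the target domain's representation space), a minimal PyTorch sketch might look like the following; the module name, bottleneck design, and alignment objective are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class CDRAdapter(nn.Module):
    """Hypothetical plug-and-play adapter: maps frozen source-domain user
    embeddings into the target domain's embedding space via a small
    bottleneck MLP with a residual connection."""
    def __init__(self, emb_dim: int = 64, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(emb_dim, bottleneck)
        self.up = nn.Linear(bottleneck, emb_dim)
        self.act = nn.ReLU()

    def forward(self, src_user_emb: torch.Tensor) -> torch.Tensor:
        # Residual bottleneck keeps the original representation while
        # learning a lightweight cross-domain correction.
        return src_user_emb + self.up(self.act(self.down(src_user_emb)))

# Only the adapter is trained; both recommenders stay frozen, so the original
# knowledge is preserved and the added training cost stays small.
adapter = CDRAdapter()
src_emb = torch.randn(32, 64)   # embeddings from the frozen source model
tgt_emb = torch.randn(32, 64)   # embeddings from the frozen target model
loss = nn.functional.mse_loss(adapter(src_emb), tgt_emb)  # illustrative alignment objective
loss.backward()
```

Freezing both recommenders and training only the adapter is what keeps fine-tuning cheap and avoids overwriting the original knowledge, which is the catastrophic-forgetting concern raised above.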
Related papers
- Hyperbolic Knowledge Transfer in Cross-Domain Recommendation System [28.003142450569452]
Cross-Domain Recommendation (CDR) seeks to utilize knowledge from different domains to alleviate the problem of data sparsity in the target recommendation domain.
Most current methods represent users and items in Euclidean space, which is not ideal for handling long-tail distributed data.
We introduce a new framework called Hyperbolic Contrastive Learning (HCTS), designed to capture the unique features of each domain.
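The summary does not say how HCTS places embeddings in hyperbolic space. A common choice in hyperbolic recommendation work (assumed here, not confirmed by the paper) is to project Euclidean embeddings onto the Poincaré ball with the exponential map at the origin before computing contrastive similarities:

```python
import torch

def expmap0(v: torch.Tensor, c: float = 1.0, eps: float = 1e-6) -> torch.Tensor:
    """Exponential map at the origin of the Poincare ball with curvature -c:
    maps a Euclidean (tangent-space) vector v onto the ball."""
    sqrt_c = c ** 0.5
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

euclidean_emb = torch.randn(8, 32)
hyperbolic_emb = expmap0(euclidean_emb)  # now lies inside the unit ball
```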
arXiv Detail & Related papers (2024-06-25T05:35:02Z)
- Transfer Learning Under High-Dimensional Graph Convolutional Regression Model for Node Classification [20.18595334666282]
We propose a Graph Convolutional Multinomial Logistic Regression (GCR) model and a transfer learning method based on the GCR model, called Trans-GCR.
We provide theoretical guarantees of the estimate obtained under GCR model in high-dimensional settings.
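The summary does not give the model form; one natural reading of a graph convolutional multinomial logistic regression (purely an assumption here) is a single graph-convolution step feeding a softmax over classes,

$$\hat{P}(Y_i = k \mid X, A) = \mathrm{softmax}_k\big((\tilde{A} X B)_{i}\big), \qquad \tilde{A} = D^{-1/2}(A + I)D^{-1/2},$$

where X holds node features, B is the coefficient matrix estimated under high-dimensional assumptions, and Trans-GCR presumably transfers information about B from source data to the target graph.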
arXiv Detail & Related papers (2024-05-26T19:30:14Z)
- Diffusion Cross-domain Recommendation [0.0]
We propose Diffusion Cross-domain Recommendation (DiffCDR) to give high-quality outcomes to cold-start users.
We first adopt the theory of Diffusion Probabilistic Models (DPM) and design a Diffusion Module (DIM), which generates the user's embedding in the target domain.
In addition, we consider the label data of the target domain and form the task-oriented loss function, which enables our DiffCDR to adapt to specific tasks.
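As a hedged sketch of how the two terms mentioned above might be combined (the actual DiffCDR objective and weighting are not given in this summary, so the names and weights below are hypothetical):

```python
import torch
import torch.nn as nn

def diffcdr_style_loss(pred_noise, true_noise, pred_rating, true_rating, alpha=0.5):
    """Joint objective: a DPM-style denoising term for the diffusion module
    that generates target-domain user embeddings, plus a task-oriented term
    on target-domain labels."""
    diffusion_loss = nn.functional.mse_loss(pred_noise, true_noise)  # denoising term
    task_loss = nn.functional.mse_loss(pred_rating, true_rating)     # target-domain supervision
    return diffusion_loss + alpha * task_loss

pred_noise, true_noise = torch.randn(16, 32), torch.randn(16, 32)
pred_rating, true_rating = torch.randn(16), torch.randn(16)
loss = diffcdr_style_loss(pred_noise, true_noise, pred_rating, true_rating)
```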
arXiv Detail & Related papers (2024-02-03T15:14:51Z)
- MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion Control in Real Networks [63.24965775030673]
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
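For context, the maximum-entropy objective that Soft Actor-Critic optimizes (the standard formulation, not MARLIN's exact setup) is

$$J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}\big[\, r(s_t, a_t) + \alpha\, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \,\big],$$

where the temperature α trades off return against policy entropy.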
arXiv Detail & Related papers (2023-02-02T18:27:20Z)
- Model-based Deep Learning Receiver Design for Rate-Splitting Multiple Access [65.21117658030235]
This work proposes a novel design for a practical RSMA receiver based on model-based deep learning (MBDL) methods.
The MBDL receiver is evaluated in terms of uncoded Symbol Error Rate (SER), throughput performance through Link-Level Simulations (LLS) and average training overhead.
Results reveal that the MBDL receiver outperforms the SIC receiver with imperfect CSIR by a significant margin.
arXiv Detail & Related papers (2022-05-02T12:23:55Z)
- DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation [99.88539409432916]
We study the unsupervised domain adaptation (UDA) process.
We propose a novel UDA method, DAFormer, based on the benchmark results.
DAFormer significantly improves the state-of-the-art performance by 10.8 mIoU for GTA->Cityscapes and 5.4 mIoU for Synthia->Cityscapes.
arXiv Detail & Related papers (2021-11-29T19:00:46Z)
- Transfer-Meta Framework for Cross-domain Recommendation to Cold-Start Users [31.949188328354854]
Cross-domain recommendation (CDR) uses rich information from an auxiliary (source) domain to improve the performance of recommender system in the target domain.
We propose a transfer-meta framework for CDR (TMCDR) which has a transfer stage and a meta stage.
arXiv Detail & Related papers (2021-05-11T05:15:53Z)
- Towards Accurate Knowledge Transfer via Target-awareness Representation Disentanglement [56.40587594647692]
We propose a novel transfer learning algorithm, introducing the idea of Target-awareness REpresentation Disentanglement (TRED).
TRED disentangles the knowledge relevant to the target task from the original source model and uses it as a regularizer when fine-tuning the target model.
Experiments on various real-world datasets show that our method consistently improves over standard fine-tuning by more than 2% on average.
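A rough sketch of how a disentangled source representation could act as a fine-tuning regularizer (the disentanglement step itself, which is TRED's core contribution, is abstracted away here, and all names are hypothetical):

```python
import torch
import torch.nn as nn

def finetune_loss(logits, labels, target_feat, relevant_source_feat, lam=0.1):
    """Task loss plus a penalty pulling the fine-tuned features toward the
    task-relevant part of the source representation."""
    task = nn.functional.cross_entropy(logits, labels)
    reg = nn.functional.mse_loss(target_feat, relevant_source_feat.detach())
    return task + lam * reg

logits, labels = torch.randn(8, 5), torch.randint(0, 5, (8,))
target_feat, relevant_source_feat = torch.randn(8, 128), torch.randn(8, 128)
loss = finetune_loss(logits, labels, target_feat, relevant_source_feat)
```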
arXiv Detail & Related papers (2020-10-16T17:45:08Z)
- A Deep Framework for Cross-Domain and Cross-System Recommendations [18.97641276417075]
Cross-Domain Recommendation (CDR) and Cross-System Recommendations (CSR) are promising solutions to address the data sparsity problem in recommender systems.
We propose a Deep framework for both Cross-Domain and Cross-System Recommendations, called DCDCSR, based on Matrix Factorization (MF) models and a fully connected Deep Neural Network (DNN).
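A minimal sketch of the kind of mapping this describes, with MF-style embeddings from the two domains/systems fed through a fully connected DNN (layer sizes and the concatenation scheme are assumptions, not DCDCSR's actual design):

```python
import torch
import torch.nn as nn

class MappingDNN(nn.Module):
    """Illustrative only: combines MF embeddings from two domains/systems
    into a single embedding used for recommendation in the target domain."""
    def __init__(self, emb_dim: int = 32, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, emb_dim),
        )

    def forward(self, src_emb, tgt_emb):
        return self.net(torch.cat([src_emb, tgt_emb], dim=-1))

mapper = MappingDNN()
combined = mapper(torch.randn(4, 32), torch.randn(4, 32))  # (4, 32)
```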
arXiv Detail & Related papers (2020-09-14T06:11:17Z)
- Optimization-driven Deep Reinforcement Learning for Robust Beamforming in IRS-assisted Wireless Communications [54.610318402371185]
Intelligent reflecting surface (IRS) is a promising technology to assist downlink information transmissions from a multi-antenna access point (AP) to a receiver.
We minimize the AP's transmit power by a joint optimization of the AP's active beamforming and the IRS's passive beamforming.
We propose a deep reinforcement learning (DRL) approach that can adapt the beamforming strategies from past experiences.
arXiv Detail & Related papers (2020-05-25T01:42:55Z)
- Deep Adaptive Inference Networks for Single Image Super-Resolution [72.7304455761067]
Single image super-resolution (SISR) has witnessed tremendous progress in recent years owing to the deployment of deep convolutional neural networks (CNNs).
In this paper, we take a step forward to address this issue by leveraging adaptive inference networks for deep SISR (AdaDSR).
Our AdaDSR involves an SISR model as the backbone and a lightweight adapter module that takes image features and a resource constraint as input and predicts a map of local network depth.
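A hedged sketch of such a depth-predicting adapter (channel counts, the way the resource constraint is injected, and the output scaling are all assumptions for illustration):

```python
import torch
import torch.nn as nn

class DepthAdapter(nn.Module):
    """Hypothetical adapter: given backbone features and a scalar resource
    budget, predicts a per-pixel depth map that controls how many residual
    blocks each location passes through."""
    def __init__(self, channels: int = 64, max_depth: int = 16):
        super().__init__()
        self.max_depth = max_depth
        self.head = nn.Sequential(
            nn.Conv2d(channels + 1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, feat, budget):
        # Broadcast the scalar budget as an extra feature plane.
        b = torch.full_like(feat[:, :1], float(budget))
        return self.head(torch.cat([feat, b], dim=1)) * self.max_depth

adapter = DepthAdapter()
depth_map = adapter(torch.randn(1, 64, 48, 48), budget=0.5)  # (1, 1, 48, 48)
```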
arXiv Detail & Related papers (2020-04-08T10:08:20Z)