NeuSE: A Neural Snapshot Ensemble Method for Collaborative Filtering
- URL: http://arxiv.org/abs/2104.07269v1
- Date: Thu, 15 Apr 2021 06:43:40 GMT
- Title: NeuSE: A Neural Snapshot Ensemble Method for Collaborative Filtering
- Authors: Dongsheng Li, Haodong Liu, Chao Chen, Yingying Zhao, Stephen M. Chu,
Bo Yang
- Abstract summary: In collaborative filtering (CF) algorithms, the optimal models are usually learned by globally minimizing the empirical risks averaged over all the observed data.
In this paper, we show that the proposed method can significantly improve accuracy (up to 15.9% relatively) when applied to a variety of existing collaborative filtering methods.
- Score: 16.347327867397443
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In collaborative filtering (CF) algorithms, the optimal models are usually
learned by globally minimizing the empirical risks averaged over all the
observed data. However, the global models are often obtained via a performance
tradeoff among users/items, i.e., not all users/items are perfectly fitted by
the global models due to the hard non-convex optimization problems in CF
algorithms. Ensemble learning can address this issue by learning multiple
diverse models but usually suffers from efficiency issues on large datasets or
complex algorithms. In this paper, we keep the intermediate models obtained
during global model learning as the snapshot models, and then adaptively
combine the snapshot models for individual user-item pairs using a memory
network-based method. Empirical studies on three real-world datasets show that
the proposed method can extensively and significantly improve the accuracy (up
to 15.9% relatively) when applied to a variety of existing collaborative
filtering methods.
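The per-pair combination step can be pictured as a small attention module over the snapshot predictions. Below is a minimal numpy sketch of that idea; it is illustrative only (all names, shapes, and the random embeddings are assumed, not taken from the paper):

```python
import numpy as np

# Hedged sketch of NeuSE's combination step (not the authors' code): a memory
# of per-snapshot keys is queried with a user-item pair, and the resulting
# attention weights adaptively combine the snapshot models' predictions.

rng = np.random.default_rng(0)
K, d = 5, 16                                   # snapshots, embedding size
snapshot_preds = rng.normal(3.5, 0.5, size=K)  # each snapshot's rating prediction
snapshot_keys = rng.normal(size=(K, d))        # one memory slot per snapshot

def combine_snapshots(user_emb, item_emb, W):
    """Attention over snapshot models for a single user-item pair."""
    query = W @ np.concatenate([user_emb, item_emb])  # project pair to a query
    scores = snapshot_keys @ query                    # relevance per snapshot
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                          # softmax attention
    return weights @ snapshot_preds                   # adaptive ensemble output

user_emb, item_emb = rng.normal(size=8), rng.normal(size=8)
W = 0.1 * rng.normal(size=(d, d))                     # learned in practice
print(combine_snapshots(user_emb, item_emb, W))
```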
Related papers
- FedDRL: A Trustworthy Federated Learning Model Fusion Method Based on Staged Reinforcement Learning [7.846139591790014]
We propose FedDRL, a two-stage model fusion approach based on reinforcement learning.
In the first stage, our method filters out malicious models and selects trusted client models to participate in the model fusion.
In the second stage, the FedDRL algorithm adaptively adjusts the weights of the trusted client models and aggregates the optimal global model.
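A toy sketch of this two-stage fusion, with a simple distance-based filter and similarity weights standing in for the RL-learned policy (illustrative assumptions throughout, not the paper's algorithm):

```python
import numpy as np

# Stage 1: drop clients whose updates deviate strongly from the median.
# Stage 2: aggregate survivors with adaptive weights (here similarity-based,
# standing in for FedDRL's reinforcement-learned weights).

rng = np.random.default_rng(1)
client_models = [rng.normal(0, 1, size=10) for _ in range(4)]
client_models.append(rng.normal(8, 1, size=10))    # one "malicious" outlier

median = np.median(np.stack(client_models), axis=0)
dists = [np.linalg.norm(m - median) for m in client_models]
trusted = [m for m, d in zip(client_models, dists) if d < 2 * np.median(dists)]

sims = np.array([1.0 / (1e-8 + np.linalg.norm(m - median)) for m in trusted])
weights = sims / sims.sum()                        # adaptive, trust-based weights
global_model = sum(w * m for w, m in zip(weights, trusted))
print(f"kept {len(trusted)}/{len(client_models)} clients")
```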
arXiv Detail & Related papers (2023-07-25T17:24:32Z)
- Dataless Knowledge Fusion by Merging Weights of Language Models [51.8162883997512]
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models.
This creates a barrier to fusing knowledge across individual models to yield a better single model.
We propose a dataless knowledge fusion method that merges models in their parameter space.
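A minimal sketch of parameter-space merging, assuming a uniform convex combination (the paper's merging coefficients are more sophisticated):

```python
import numpy as np

# Dataless merging in parameter space: combine fine-tuned models layer by
# layer, with no training data involved. Uniform averaging is the simplest
# instance; `merge_weights` and the toy state dicts are illustrative.

def merge_weights(state_dicts, coeffs=None):
    """Merge models layer-by-layer as a convex combination of parameters."""
    coeffs = coeffs or [1.0 / len(state_dicts)] * len(state_dicts)
    keys = state_dicts[0].keys()
    return {k: sum(c * sd[k] for c, sd in zip(coeffs, state_dicts)) for k in keys}

rng = np.random.default_rng(2)
model_a = {"linear.weight": rng.normal(size=(4, 4))}
model_b = {"linear.weight": rng.normal(size=(4, 4))}
merged = merge_weights([model_a, model_b])
```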
arXiv Detail & Related papers (2022-12-19T20:46:43Z)
- Deep Negative Correlation Classification [82.45045814842595]
Existing deep ensemble methods naively train many different models and then aggregate their predictions.
We propose deep negative correlation classification (DNCC)
DNCC yields a deep classification ensemble where the individual estimator is both accurate and negatively correlated.
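The classic negative-correlation loss gives the flavor: each member fits the target while being rewarded for deviating from the ensemble mean. A toy sketch (not the paper's exact objective):

```python
import numpy as np

# Negative-correlation idea behind DNCC-style training (illustrative): each
# estimator is trained for accuracy while being pushed away from the
# ensemble mean, so the members' errors decorrelate.

def ncl_loss(preds, target, lam=0.5):
    """preds: (K,) predictions of K ensemble members for one sample."""
    mean = preds.mean()
    accuracy_term = np.mean((preds - target) ** 2)   # fit the target
    diversity_term = np.mean((preds - mean) ** 2)    # spread of members
    return accuracy_term - lam * diversity_term      # reward disagreement

preds = np.array([0.9, 1.1, 0.7])
print(ncl_loss(preds, target=1.0))
```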
arXiv Detail & Related papers (2022-12-14T07:35:20Z)
- DRFLM: Distributionally Robust Federated Learning with Inter-client Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework to solve the above two challenges simultaneously.
We provide a comprehensive theoretical analysis, covering robustness, convergence, and generalization ability.
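The local mixup ingredient can be sketched generically: each client trains on convex combinations of its own sample pairs. A minimal version (generic mixup, not the paper's code):

```python
import numpy as np

# Local mixup for inter-client noise, DRFLM-style (illustrative): each client
# mixes pairs of its own samples before running local updates.

def local_mixup(x, y, alpha=0.2, rng=None):
    """Return convex combinations of shuffled sample pairs (mixup)."""
    if rng is None:
        rng = np.random.default_rng(3)
    lam = rng.beta(alpha, alpha)       # mixing coefficient
    idx = rng.permutation(len(x))      # random pairing of samples
    return lam * x + (1 - lam) * x[idx], lam * y + (1 - lam) * y[idx]

x = np.arange(8, dtype=float).reshape(4, 2)
y = np.array([0.0, 1.0, 1.0, 0.0])
x_mix, y_mix = local_mixup(x, y)
```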
arXiv Detail & Related papers (2022-04-16T08:08:29Z)
- Efficient Data-specific Model Search for Collaborative Filtering [56.60519991956558]
Collaborative filtering (CF) is a fundamental approach for recommender systems.
In this paper, motivated by the recent advances in automated machine learning (AutoML), we propose to design a data-specific CF model.
Key here is a new framework that unifies state-of-the-art (SOTA) CF methods and splits them into disjoint stages of input encoding, embedding function, interaction and prediction function.
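A toy illustration of such a staged search space; the stage names follow the abstract, while the concrete options are assumed for illustration:

```python
import itertools

# Disjoint stages of a CF model, each with a few candidate operators
# (the options below are hypothetical examples, not taken from the paper).
search_space = {
    "input_encoding":      ["one_hot", "multi_hot"],
    "embedding_function":  ["lookup", "mlp"],
    "interaction":         ["inner_product", "concat", "elementwise_max"],
    "prediction_function": ["identity", "mlp"],
}

# AutoML then searches over the cross product of per-stage choices.
candidate_models = list(itertools.product(*search_space.values()))
print(len(candidate_models), "candidate CF architectures")
```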
arXiv Detail & Related papers (2021-06-14T14:30:32Z)
- Robust Finite Mixture Regression for Heterogeneous Targets [70.19798470463378]
We propose an FMR model that finds sample clusters and jointly models multiple incomplete mixed-type targets.
We provide non-asymptotic oracle performance bounds for our model under a high-dimensional learning framework.
The results show that our model can achieve state-of-the-art performance.
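A toy finite-mixture-regression sketch via EM on a two-component example (illustrative only; the paper's model further handles incomplete, mixed-type targets):

```python
import numpy as np

# EM for a two-component mixture of linear regressions: alternately
# soft-assign samples to clusters, then refit one regressor per cluster.

rng = np.random.default_rng(6)
x = rng.uniform(-1, 1, size=(200, 1))
comp = rng.integers(0, 2, size=200)                  # hidden cluster labels
y = np.where(comp == 0, 3 * x[:, 0], -3 * x[:, 0]) + rng.normal(0, 0.1, 200)

X = np.hstack([x, np.ones((200, 1))])                # add intercept column
betas = rng.normal(size=(2, 2))
for _ in range(20):
    # E-step: responsibility of each component for each sample
    resid = y[:, None] - X @ betas.T
    resp = np.exp(-0.5 * resid ** 2)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: weighted least squares per component
    for k in range(2):
        w = resp[:, k]
        betas[k] = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
```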
arXiv Detail & Related papers (2020-10-12T03:27:07Z)
- FedBE: Making Bayesian Model Ensemble Applicable to Federated Learning [23.726336635748783]
Federated learning aims to collaboratively train a strong global model by accessing users' locally trained models but not their own data.
A crucial step is therefore to aggregate local models into a global model, which has been shown challenging when users have non-i.i.d. data.
We propose a novel aggregation algorithm named FedBE, which takes a Bayesian inference perspective by sampling higher-quality global models.
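A minimal sketch of the Bayesian-ensemble idea, assuming a diagonal Gaussian fitted over client weights (illustrative, not FedBE's exact procedure):

```python
import numpy as np

# Treat client models as samples from a distribution over global weights,
# fit a Gaussian to them, and sample extra candidate global models from it.

rng = np.random.default_rng(4)
client_weights = rng.normal(0.0, 1.0, size=(5, 10))    # 5 clients, 10-dim models

mu = client_weights.mean(axis=0)
sigma = client_weights.std(axis=0) + 1e-8
sampled_globals = rng.normal(mu, sigma, size=(3, 10))  # sampled global models

ensemble = np.vstack([client_weights, sampled_globals])
# In FedBE, the ensemble's predictions are then distilled into one global model.
```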
arXiv Detail & Related papers (2020-09-04T01:18:25Z)
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
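A minimal k-means-style sketch of multi-center aggregation, alternating client-to-center matching with per-center averaging (illustrative assumptions throughout):

```python
import numpy as np

# Learn multiple global models: assign each client model to its nearest
# center, then recompute each center as the mean of its assigned clients.

rng = np.random.default_rng(5)
clients = rng.normal(size=(6, 8))          # 6 client models, 8 parameters each
centers = clients[rng.choice(6, size=2, replace=False)].copy()

for _ in range(10):
    # E-step: match every client to its closest global model
    assign = np.argmin(((clients[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    # M-step: each center becomes the average of its matched clients
    for k in range(2):
        if (assign == k).any():
            centers[k] = clients[assign == k].mean(axis=0)
```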
arXiv Detail & Related papers (2020-05-03T09:14:31Z)
- Auto-Ensemble: An Adaptive Learning Rate Scheduling based Deep Learning Model Ensembling [11.324407834445422]
This paper proposes Auto-Ensemble (AE) to collect checkpoints of deep learning model and ensemble them automatically.
The advantage of this method is that it makes the model converge to various local optima by scheduling the learning rate within a single training run.
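A minimal sketch of that schedule: a cosine-annealed learning rate that restarts each cycle, with a snapshot kept at every cycle's end (illustrative; the actual update step is elided):

```python
import numpy as np

# Checkpoint collection under a cyclic learning-rate schedule, in the spirit
# of Auto-Ensemble / snapshot ensembling: repeated annealing lets training
# visit several local optima, and a snapshot is kept at each cycle's end.

def cyclic_lr(step, cycle_len=100, lr_max=0.1):
    """Cosine-annealed learning rate that restarts every cycle."""
    t = (step % cycle_len) / cycle_len
    return 0.5 * lr_max * (1 + np.cos(np.pi * t))

snapshots = []
for step in range(500):
    lr = cyclic_lr(step)
    # ... one SGD update with `lr` would happen here ...
    if step % 100 == 99:             # end of a cycle: LR is near zero
        snapshots.append(f"checkpoint@{step}")
print(snapshots)
```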
arXiv Detail & Related papers (2020-03-25T08:17:31Z)