Consensus Learning from Heterogeneous Objectives for One-Class Collaborative Filtering
- URL: http://arxiv.org/abs/2202.13140v1
- Date: Sat, 26 Feb 2022 13:34:29 GMT
- Title: Consensus Learning from Heterogeneous Objectives for One-Class Collaborative Filtering
- Authors: SeongKu Kang, Dongha Lee, Wonbin Kweon, Junyoung Hwang, Hwanjo Yu
- Abstract summary: This paper proposes a novel framework, named ConCF, that exploits the complementarity from heterogeneous objectives throughout the training process.
Our experiments on real-world datasets demonstrate that ConCF significantly improves the generalization of the model.
- Score: 30.17063272667769
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the past decades, many learning objectives for One-Class
Collaborative Filtering (OCCF) have been studied, based on a variety of
underlying probabilistic models. From our analysis, we observe that models
trained with different OCCF objectives capture distinct aspects of user-item
relationships, which in turn produces complementary recommendations. This
paper proposes a novel OCCF framework, named ConCF, that exploits the
complementarity of heterogeneous objectives throughout the training process,
generating a more generalizable model. ConCF constructs a multi-branch
variant of a given target model by adding auxiliary heads, each of which is
trained with a different objective. It then generates a consensus by
consolidating the various views from the heads, and guides the heads based on
that consensus. The heads evolve collaboratively based on their
complementarity throughout training, which in turn yields an increasingly
accurate consensus. After training, we convert the multi-branch architecture
back to the original target model by removing the auxiliary heads, so
deployment incurs no extra inference cost. Our extensive experiments on
real-world datasets demonstrate that ConCF significantly improves the
generalization of the model by exploiting the complementarity of
heterogeneous objectives.
Related papers
- UNCO: Towards Unifying Neural Combinatorial Optimization through Large Language Model [21.232626415696267]
We propose a unified neural optimization framework (UNCO) to solve different types of combinatorial optimization problems (COPs) with a single model.
We use natural language to formulate text-attributed instances for different COPs and encode them in the same embedding space with a large language model (LLM).
Experiments show that the UNCO model can solve multiple COPs after single-session training, achieving performance comparable to several traditional and learning-based baselines.
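As a rough illustration of encoding text-attributed instances of different COPs in one embedding space, here is a minimal sketch using an off-the-shelf sentence-embedding model as a stand-in for the paper's LLM encoder; the model choice and instance texts are assumptions, not UNCO's setup:

```python
from sentence_transformers import SentenceTransformer

# Text-attributed instances of two different COPs, embedded in one space.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
instances = [
    "TSP: visit cities at (0,0), (3,4), (6,1) minimizing total tour length.",
    "Knapsack: capacity 10; items (weight, value) = (4,7), (5,8), (3,3).",
]
embeddings = encoder.encode(instances)
print(embeddings.shape)  # (2, 384): a shared embedding space for both COPs
```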
arXiv Detail & Related papers (2024-08-22T08:42:44Z)
- Beyond Similarity: Personalized Federated Recommendation with Composite Aggregation [22.359428566363945]
Federated recommendation aims to collect global knowledge by aggregating local models from massive devices.
Current methods mainly leverage aggregation functions invented by the federated vision community to aggregate parameters from similar clients.
We propose a personalized Federated recommendation model with Composite Aggregation (FedCA).
arXiv Detail & Related papers (2024-06-06T10:17:52Z)
- Task Groupings Regularization: Data-Free Meta-Learning with Heterogeneous Pre-trained Models [83.02797560769285]
Data-Free Meta-Learning (DFML) aims to derive knowledge from a collection of pre-trained models without accessing their original data.
Current methods often overlook the heterogeneity among pre-trained models, which leads to performance degradation due to task conflicts.
We propose Task Groupings Regularization, a novel approach that benefits from model heterogeneity by grouping and aligning conflicting tasks.
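A common proxy for task conflict is the cosine similarity between per-task gradients; the greedy grouping below is a minimal illustrative sketch of grouping non-conflicting tasks, not the paper's exact procedure:

```python
import torch

def grad_cosine(g1, g2):
    # Cosine similarity between two flattened per-task gradients.
    return torch.dot(g1, g2) / (g1.norm() * g2.norm() + 1e-12)

def group_tasks(task_grads, threshold=0.0):
    # Greedy grouping: place each task in the first group whose members it
    # does not conflict with (negative cosine similarity = conflict).
    groups = []
    for t, g in enumerate(task_grads):
        for group in groups:
            if all(grad_cosine(g, task_grads[m]) >= threshold for m in group):
                group.append(t)
                break
        else:
            groups.append([t])
    return groups

grads = [torch.randn(1000) for _ in range(5)]
print(group_tasks(grads))
```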
arXiv Detail & Related papers (2024-05-26T13:11:55Z)
- Spectral Co-Distillation for Personalized Federated Learning [69.97016362754319]
We propose a novel distillation method based on model spectrum information to better capture generic versus personalized representations.
We also introduce a co-distillation framework that establishes a two-way bridge between generic and personalized model training.
We demonstrate the superior performance and efficacy of our proposed spectral co-distillation method, as well as our wait-free training protocol.
arXiv Detail & Related papers (2024-01-29T16:01:38Z)
- Universal Semi-supervised Model Adaptation via Collaborative Consistency Training [92.52892510093037]
We introduce a realistic and challenging domain adaptation problem called Universal Semi-supervised Model Adaptation (USMA).
We propose a collaborative consistency training framework that regularizes the prediction consistency between two models.
Experimental results demonstrate the effectiveness of our method on several benchmark datasets.
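Prediction-consistency regularization between two models is often implemented as a symmetric KL term; here is a minimal sketch (the symmetric form is an assumption, not necessarily the paper's exact loss):

```python
import torch
import torch.nn.functional as F

def consistency_loss(logits_a, logits_b):
    # Symmetric KL divergence between the two models' predictive
    # distributions; minimizing it regularizes them toward agreement.
    p = F.log_softmax(logits_a, dim=-1)
    q = F.log_softmax(logits_b, dim=-1)
    return 0.5 * (
        F.kl_div(p, q, log_target=True, reduction="batchmean")
        + F.kl_div(q, p, log_target=True, reduction="batchmean")
    )

logits_a = torch.randn(8, 10)
logits_b = torch.randn(8, 10)
print(consistency_loss(logits_a, logits_b))
```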
arXiv Detail & Related papers (2023-07-07T08:19:40Z)
- Joint Training of Deep Ensembles Fails Due to Learner Collusion [61.557412796012535]
Ensembles of machine learning models have been well established as a powerful method of improving performance over a single model.
Traditionally, ensembling algorithms train their base learners independently or sequentially with the goal of optimizing their joint performance.
We observe that directly minimizing the loss of the ensemble is rarely applied in practice, and we show that such joint training fails because the base learners collude.
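The contrast can be made concrete: training members on their own losses versus directly minimizing the loss of the averaged ensemble output. A minimal sketch:

```python
import torch
import torch.nn.functional as F

# Two ways to train an ensemble of M learners (illustrative sketch):
# (a) common practice: sum each member's own loss, training them independently;
# (b) "joint training": minimize the loss of the averaged ensemble output,
#     which the paper argues fails due to learner collusion.
members = [torch.nn.Linear(20, 3) for _ in range(4)]
x, y = torch.randn(16, 20), torch.randint(0, 3, (16,))

outputs = [m(x) for m in members]
independent_loss = sum(F.cross_entropy(o, y) for o in outputs)  # (a)
ensemble_logits = torch.stack(outputs).mean(dim=0)
joint_loss = F.cross_entropy(ensemble_logits, y)                # (b)
```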
arXiv Detail & Related papers (2023-01-26T18:58:07Z)
- Federated Learning Aggregation: New Robust Algorithms with Guarantees [63.96013144017572]
Federated learning has recently been proposed for distributed model training at the edge.
This paper presents a complete general mathematical convergence analysis to evaluate aggregation strategies in a federated learning framework.
We derive novel aggregation algorithms that differentiate client contributions according to the value of their losses.
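Here is a minimal sketch of loss-aware aggregation, weighting each client's parameters by its reported loss; the inverse-loss rule is an illustrative assumption, not the paper's exact algorithm:

```python
import torch

def aggregate(client_states, client_losses):
    # Weight each client's parameters inversely to its loss, so clients
    # fitting the data well contribute more (illustrative weighting rule).
    losses = torch.tensor(client_losses)
    weights = 1.0 / (losses + 1e-8)
    weights = weights / weights.sum()
    agg = {}
    for key in client_states[0]:
        agg[key] = sum(w * s[key] for w, s in zip(weights, client_states))
    return agg

clients = [torch.nn.Linear(4, 2).state_dict() for _ in range(3)]
global_state = aggregate(clients, client_losses=[0.9, 0.4, 0.7])
```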
arXiv Detail & Related papers (2022-05-22T16:37:53Z)
- An attention model for the formation of collectives in real-world domains [78.1526027174326]
We consider the problem of forming collectives of agents for real-world applications aligned with Sustainable Development Goals.
We propose a general approach for the formation of collectives based on a novel combination of an attention model and an integer linear program.
arXiv Detail & Related papers (2022-04-30T09:15:36Z)
- CD$^2$-pFed: Cyclic Distillation-guided Channel Decoupling for Model Personalization in Federated Learning [24.08509828106899]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to collaboratively learn a shared global model.
We propose CD2-pFed, a novel Cyclic Distillation-guided Channel Decoupling framework, to personalize the global model in FL.
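Channel decoupling can be sketched as splitting a layer's output channels into a globally shared part and a locally personalized part; the 50/50 split below is illustrative, not the paper's exact design:

```python
import torch
import torch.nn as nn

class DecoupledConv(nn.Module):
    # Split output channels into a shared half (aggregated by the server)
    # and a personal half (kept on the client). Illustrative sketch only.
    def __init__(self, in_ch, out_ch, personal_ratio=0.5):
        super().__init__()
        p = int(out_ch * personal_ratio)
        self.shared = nn.Conv2d(in_ch, out_ch - p, 3, padding=1)
        self.personal = nn.Conv2d(in_ch, p, 3, padding=1)

    def forward(self, x):
        return torch.cat([self.shared(x), self.personal(x)], dim=1)

layer = DecoupledConv(3, 16)
out = layer(torch.randn(1, 3, 32, 32))  # (1, 16, 32, 32)
```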
arXiv Detail & Related papers (2022-04-08T07:13:30Z)
- Generalized Adversarially Learned Inference [42.40405470084505]
We develop methods of inference of latent variables in GANs by adversarially training an image generator along with an encoder to match two joint distributions of image and latent vector pairs.
We incorporate multiple layers of feedback on reconstructions, self-supervision, and other forms of supervision based on prior or learned knowledge about the desired solutions.
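The underlying ALI/BiGAN setup trains a discriminator on (image, latent) pairs to tell (x, E(x)) apart from (G(z), z); fooling it aligns the two joint distributions. A minimal sketch with made-up dimensions:

```python
import torch
import torch.nn as nn

dim_x, dim_z = 784, 64
G = nn.Sequential(nn.Linear(dim_z, 256), nn.ReLU(), nn.Linear(256, dim_x))  # z -> x
E = nn.Sequential(nn.Linear(dim_x, 256), nn.ReLU(), nn.Linear(256, dim_z))  # x -> z
# Discriminator over joint (image, latent) pairs.
D = nn.Sequential(nn.Linear(dim_x + dim_z, 256), nn.ReLU(), nn.Linear(256, 1))

x = torch.rand(32, dim_x)
z = torch.randn(32, dim_z)
real_pair = torch.cat([x, E(x)], dim=1)       # data x with inferred latent
fake_pair = torch.cat([G(z), z], dim=1)       # generated x with its latent
bce = nn.BCEWithLogitsLoss()
d_loss = bce(D(real_pair), torch.ones(32, 1)) + bce(D(fake_pair), torch.zeros(32, 1))
```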
arXiv Detail & Related papers (2020-06-15T02:18:13Z)
- Multiview Representation Learning for a Union of Subspaces [38.68763142172997]
We show that the proposed model and a set of simple mixtures yield improvements over standard CCA.
Our correlation-based objective meaningfully generalizes the CCA objective to a mixture of CCA models.
arXiv Detail & Related papers (2019-12-30T00:44:13Z)
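For reference, the standard CCA baseline the summary mentions finds projections of two views whose images are maximally correlated; the paper generalizes this objective to a mixture of CCA models. A toy sketch with scikit-learn:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Two views driven by a common 2-dim latent signal plus noise.
rng = np.random.default_rng(0)
shared = rng.normal(size=(200, 2))
X = shared @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(200, 5))
Y = shared @ rng.normal(size=(2, 6)) + 0.1 * rng.normal(size=(200, 6))

cca = CCA(n_components=2)
Xc, Yc = cca.fit_transform(X, Y)
# Per-component canonical correlations (close to 1 for this toy data).
corrs = [np.corrcoef(Xc[:, k], Yc[:, k])[0, 1] for k in range(2)]
print(corrs)
```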