Fast Adaptively Weighted Matrix Factorization for Recommendation with
Implicit Feedback
- URL: http://arxiv.org/abs/2003.01892v1
- Date: Wed, 4 Mar 2020 04:50:44 GMT
- Title: Fast Adaptively Weighted Matrix Factorization for Recommendation with
Implicit Feedback
- Authors: Jiawei Chen, Can Wang, Sheng Zhou, Qihao Shi, Jingbang Chen, Yan Feng,
Chun Chen
- Abstract summary: How to assign confidence weights and how to handle the large amount of unobserved data are two key problems for implicit recommendation models.
We propose a fast adaptively weighted matrix factorization (FAWMF) based on a variational auto-encoder.
Experiments on real-world datasets demonstrate the superiority of the proposed FAWMF and its learning algorithm fBGD.
- Score: 28.30678887024847
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recommendation from implicit feedback is a highly challenging task due to the
lack of reliably observed negative data. A popular and effective approach
for implicit recommendation is to treat unobserved data as negative but
downweight their confidence. Naturally, how to assign confidence weights and
how to handle the large amount of unobserved data are two key problems for
implicit recommendation models. However, existing methods either pursue fast
learning by manually assigning simple confidence weights, which lacks
flexibility and may introduce empirical bias when evaluating users' preferences,
or adaptively infer personalized confidence weights but suffer from low
efficiency. To achieve both adaptive weight assignment and efficient model
learning, we propose a fast adaptively weighted matrix factorization (FAWMF)
based on the variational auto-encoder. The personalized data confidence weights are
adaptively assigned by a parameterized neural network (function), and the
network can be inferred from the data. Further, to support fast and stable
learning of FAWMF, a new batch-based learning algorithm, fBGD, has been
developed, which trains on all feedback data but whose complexity is linear in
the number of observed data. Extensive experiments on real-world datasets
demonstrate the superiority of the proposed FAWMF and its learning algorithm
fBGD.
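
To make the adaptive weighting concrete, here is a minimal, hypothetical PyTorch sketch (not the authors' released code) of an adaptively weighted matrix factorization loss: a small inference network maps each user's implicit-feedback row to per-item confidence weights, which then scale a squared reconstruction loss over all entries. The class and layer names (FAWMFSketch, weight_net) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FAWMFSketch(nn.Module):
    """Illustrative adaptively weighted MF; not the paper's implementation."""
    def __init__(self, n_users, n_items, dim=32, hidden=64):
        super().__init__()
        self.user = nn.Embedding(n_users, dim)   # user latent factors
        self.item = nn.Embedding(n_items, dim)   # item latent factors
        # Hypothetical inference network: maps a user's 0/1 feedback row
        # to per-item confidence weights in (0, 1) for unobserved entries.
        self.weight_net = nn.Sequential(
            nn.Linear(n_items, hidden), nn.ReLU(),
            nn.Linear(hidden, n_items), nn.Sigmoid())

    def loss(self, u_idx, feedback):
        # u_idx: (batch,) user indices; feedback: (batch, n_items) 0/1 rows.
        scores = self.user(u_idx) @ self.item.weight.T  # predicted preferences
        conf = self.weight_net(feedback)                # adaptive confidences
        # Observed entries keep full weight; unobserved entries are
        # down-weighted by a learned, personalized confidence rather
        # than a hand-picked constant.
        w = feedback + (1.0 - feedback) * conf
        return (w * (feedback - scores) ** 2).sum()
```

Evaluated naively, this loss touches every (user, item) pair; the paper's fBGD instead reorganizes the all-data gradient so that training cost stays linear in the number of observed interactions, in the spirit of the caching tricks used by eALS-style solvers.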
Related papers
- Graph-based Confidence Calibration for Large Language Models [22.394717844099684]
We propose a novel method to develop a well-calibrated confidence estimation model.
We use a weighted graph to represent the consistency among the large language models' responses to a question.
We then train a graph neural network to estimate the probability of correct responses.
arXiv Detail & Related papers (2024-11-03T20:36:44Z)
- Efficient and Robust Regularized Federated Recommendation [52.24782464815489]
The recommender system (RS) addresses both user preference and privacy concerns.
We propose a novel method that incorporates non-uniform gradient descent to improve communication efficiency.
Experiments demonstrate RFRecF's superior robustness compared to diverse baselines.
arXiv Detail & Related papers (2024-11-03T12:10:20Z)
- Towards Robust and Cost-Efficient Knowledge Unlearning for Large Language Models [25.91643745340183]
Large Language Models (LLMs) have demonstrated strong reasoning and memorization capabilities via pretraining on massive textual corpora.
This poses risks of privacy and copyright violations, highlighting the need for efficient machine unlearning methods.
We propose two novel techniques for robust and efficient unlearning for LLMs.
arXiv Detail & Related papers (2024-08-13T04:18:32Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Boosting Differentiable Causal Discovery via Adaptive Sample Reweighting [62.23057729112182]
Differentiable score-based causal discovery methods learn a directed acyclic graph from observational data.
We propose a model-agnostic framework to boost causal discovery performance by dynamically learning the adaptive weights for the Reweighted Score function, ReScore.
arXiv Detail & Related papers (2023-03-06T14:49:59Z)
- WSLRec: Weakly Supervised Learning for Neural Sequential Recommendation Models [24.455665093145818]
We propose a novel model-agnostic training approach called WSLRec, which adopts a three-stage framework: pre-training, top-$k$ mining, and fine-tuning.
WSLRec resolves the incompleteness problem by pre-training models on extra weak supervisions from model-free methods like BR and ItemCF, while resolving the inaccuracy problem by leveraging the top-$k$ mining to screen out reliable user-item relevance from weak supervisions for fine-tuning.
arXiv Detail & Related papers (2022-02-28T08:55:12Z)
- CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep Learning [55.733193075728096]
Modern deep neural networks can easily overfit to biased training data containing corrupted labels or class imbalance.
Sample re-weighting methods are commonly used to alleviate this data bias issue.
We propose a meta-model capable of adaptively learning an explicit weighting scheme directly from data.
arXiv Detail & Related papers (2022-02-11T13:49:51Z)
- Auto-weighted Robust Federated Learning with Corrupted Data Sources [7.475348174281237]
Federated learning provides a communication-efficient and privacy-preserving training process.
Standard federated learning techniques that naively minimize an average loss function are vulnerable to data corruptions.
We propose Auto-weighted Robust Federated Learning (arfl) to provide robustness against corrupted data sources.
arXiv Detail & Related papers (2021-01-14T21:54:55Z)
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over model parameters, and propose an effective and efficient method to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
- Grasping Detection Network with Uncertainty Estimation for Confidence-Driven Semi-Supervised Domain Adaptation [17.16216430459064]
This paper presents an approach that enables easy domain adaptation through a novel grasping detection network with confidence-driven semi-supervised learning.
The proposed grasping detection network provides a prediction uncertainty estimation mechanism by leveraging the Feature Pyramid Network (FPN), and the mean-teacher semi-supervised learning uses this uncertainty information to emphasize the consistency loss only for unlabelled data with high confidence (a minimal illustrative sketch of this weighting appears after this list).
Our results show that the proposed network achieves a high success rate on the Cornell grasping dataset, and that for domain adaptation with very limited data, the confidence-driven mean teacher outperforms the original mean teacher and direct training by more than 10% in evaluation.
arXiv Detail & Related papers (2020-08-20T07:42:45Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for metric-based few-shot approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence of each query sample so as to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
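
As referenced in the grasping-detection entry above, here is a minimal sketch of confidence-driven consistency weighting in a mean-teacher setup; the function name, the squared-error form, and the fixed uncertainty threshold are illustrative assumptions rather than that paper's exact formulation.

```python
import torch

def confidence_weighted_consistency(student_logits, teacher_logits,
                                    teacher_uncertainty, threshold=0.2):
    """Mean-teacher consistency loss kept only where the teacher is confident.

    student_logits, teacher_logits: (batch, n_outputs) predictions.
    teacher_uncertainty: (batch,) per-sample uncertainty estimates.
    """
    # Keep only unlabelled samples whose teacher uncertainty is low,
    # so high-confidence predictions alone drive the consistency term.
    mask = (teacher_uncertainty < threshold).float()
    per_sample = ((student_logits - teacher_logits.detach()) ** 2).mean(dim=1)
    return (mask * per_sample).sum() / mask.sum().clamp(min=1.0)
```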
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and accepts no responsibility for any consequences of its use.