When Newer is Not Better: Does Deep Learning Really Benefit
Recommendation From Implicit Feedback?
- URL: http://arxiv.org/abs/2305.01801v1
- Date: Tue, 2 May 2023 22:03:49 GMT
- Title: When Newer is Not Better: Does Deep Learning Really Benefit
Recommendation From Implicit Feedback?
- Authors: Yushun Dong, Jundong Li, Tobias Schnabel
- Abstract summary: We compare recent neural recommendation models against traditional ones in top-n recommendation from implicit data.
Our work illuminates the relative advantages and disadvantages of neural models in recommendation.
- Score: 34.04060791716633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, neural models have been repeatedly touted to exhibit
state-of-the-art performance in recommendation. Nevertheless, multiple recent
studies have revealed that the reported state-of-the-art results of many neural
recommendation models cannot be reliably replicated. A primary reason is that
existing evaluations are performed under various inconsistent protocols.
Correspondingly, these replicability issues make it difficult to understand how
much benefit we can actually gain from these neural models. It then becomes
clear that a fair and comprehensive performance comparison between traditional
and neural models is needed.
Motivated by these issues, we perform a large-scale, systematic study to
compare recent neural recommendation models against traditional ones in top-n
recommendation from implicit data. We propose a set of evaluation strategies
for measuring memorization performance, generalization performance, and
subgroup-specific performance of recommendation models. We conduct extensive
experiments with 13 popular recommendation models (including two neural models
and 11 traditional ones as baselines) on nine commonly used datasets. Our
experiments demonstrate that even with extensive hyper-parameter searches,
neural models do not dominate traditional models in all aspects, e.g., they
fare worse in terms of average HitRate. We further find that there are areas
where neural models seem to outperform non-neural models, for example, in
recommendation diversity and robustness between different subgroups of users
and items. Our work illuminates the relative advantages and disadvantages of
neural models in recommendation and is therefore an important step towards
building better recommender systems.
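To make the headline metric concrete, here is a minimal sketch of how HitRate@n is typically computed for top-n recommendation from implicit feedback under a leave-one-out protocol. This is not the authors' evaluation code; the function and variable names are illustrative.

```python
# Minimal HitRate@n sketch for implicit-feedback top-n recommendation.
# Assumes a leave-one-out split: one held-out item per user.

def hit_rate_at_n(recommendations, held_out, n=10):
    """Fraction of users whose held-out item appears in their top-n list.

    recommendations: dict mapping user id -> ranked list of item ids
    held_out: dict mapping user id -> the single item withheld for testing
    """
    if not held_out:
        return 0.0
    hits = sum(
        1 for user, target in held_out.items()
        if target in recommendations.get(user, [])[:n]
    )
    return hits / len(held_out)

# Toy usage: u1's held-out item is in the top 3, u2's is not.
recs = {"u1": ["i3", "i7", "i1"], "u2": ["i2", "i9", "i4"]}
test = {"u1": "i7", "u2": "i8"}
print(hit_rate_at_n(recs, test, n=3))  # 0.5
```

Averaging this quantity over users, or over user and item subgroups separately for the subgroup-specific evaluation, yields the kind of comparison the paper performs across its 13 models.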
Related papers
- Rethinking negative sampling in content-based news recommendation [1.5416095780642964]
News recommender systems are hindered by the brief lifespan of articles, as they undergo rapid relevance decay.
Recent studies have demonstrated the potential of content-based neural techniques in tackling this problem.
In this study, we posit that the careful sampling of negative examples has a significant impact on model performance (see the sketch after this list).
arXiv Detail & Related papers (2024-11-13T15:42:13Z)
- Optimizing Dense Feed-Forward Neural Networks [0.0]
We propose a novel feed-forward neural network construction method based on pruning and transfer learning.
Our approach can reduce the number of parameters by more than 70%.
We also evaluate the benefit of transfer learning by comparing the refined model against the same network trained from scratch.
arXiv Detail & Related papers (2023-12-16T23:23:16Z)
- Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
arXiv Detail & Related papers (2023-01-30T12:38:31Z)
- Cross-Model Comparative Loss for Enhancing Neuronal Utility in Language Understanding [82.46024259137823]
We propose a cross-model comparative loss for a broad range of tasks.
We demonstrate the universal effectiveness of comparative loss through extensive experiments on 14 datasets from 3 distinct NLU tasks.
arXiv Detail & Related papers (2023-01-10T03:04:27Z)
- Non-neural Models Matter: A Re-evaluation of Neural Referring Expression Generation Systems [6.651864489482537]
In recent years, neural models have often outperformed rule-based and classic Machine Learning approaches in NLG.
We argue that non-neural approaches should not be overlooked, since for some tasks well-designed non-neural approaches achieve better performance than neural ones.
arXiv Detail & Related papers (2022-03-15T21:47:25Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Have you tried Neural Topic Models? Comparative Analysis of Neural and Non-Neural Topic Models with Application to COVID-19 Twitter Data [11.199249808462458]
We conduct a comparative study examining state-of-the-art neural versus non-neural topic models.
We show that neural topic models outperform their classical counterparts on standard evaluation metrics.
We also propose a novel regularization term for neural topic models, which is designed to address the well-documented problem of mode collapse.
arXiv Detail & Related papers (2021-05-21T07:24:09Z)
- A Survey on Neural Recommendation: From Collaborative Filtering to Content and Context Enriched Recommendation [70.69134448863483]
Research in recommendation has shifted to inventing new recommender models based on neural networks.
In recent years, we have witnessed significant progress in developing neural recommender models.
arXiv Detail & Related papers (2021-04-27T08:03:52Z)
- The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do and adjust their parameters based on how well those predictions match reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
- S^3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization [104.87483578308526]
We propose the model S3-Rec, which stands for Self-Supervised learning for Sequential Recommendation.
For our task, we devise four auxiliary self-supervised objectives to learn the correlations among attribute, item, subsequence, and sequence.
Extensive experiments conducted on six real-world datasets demonstrate the superiority of our proposed method over existing state-of-the-art methods.
arXiv Detail & Related papers (2020-08-18T11:44:10Z)
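As flagged in the negative-sampling entry above, the following is a minimal sketch of uniform negative sampling for implicit feedback, the usual baseline that such work refines. It is not the method proposed in that paper; all names are illustrative.

```python
import random

def sample_negatives(user_items, all_items, k=4, seed=0):
    """For each observed (user, positive) pair, draw up to k items the
    user has never interacted with to serve as negative examples.

    user_items: dict mapping user id -> set of interacted item ids
    all_items: set of every item id in the catalog
    """
    rng = random.Random(seed)
    for user, positives in user_items.items():
        candidates = list(all_items - positives)  # unseen items only
        for pos in sorted(positives):
            negs = rng.sample(candidates, min(k, len(candidates)))
            yield user, pos, negs

# Toy usage.
interactions = {"u1": {"i1", "i2"}, "u2": {"i3"}}
items = {"i1", "i2", "i3", "i4", "i5"}
for triple in sample_negatives(interactions, items, k=2):
    print(triple)
```

Work like the entry above studies how replacing this uniform draw with more careful sampling strategies affects the final model, which is the design choice that paper argues matters.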