On the Generalizability and Predictability of Recommender Systems
- URL: http://arxiv.org/abs/2206.11886v1
- Date: Thu, 23 Jun 2022 17:51:42 GMT
- Title: On the Generalizability and Predictability of Recommender Systems
- Authors: Duncan McElfresh, Sujay Khandagale, Jonathan Valverde, John P.
Dickerson, Colin White
- Abstract summary: We give the first large-scale study of recommender system approaches.
We create RecZilla, a meta-learning approach to recommender systems.
- Score: 33.46314108814183
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While other areas of machine learning have seen more and more automation,
designing a high-performing recommender system still requires a high level of
human effort. Furthermore, recent work has shown that modern recommender system
algorithms do not always improve over well-tuned baselines. A natural follow-up
question is, "how do we choose the right algorithm for a new dataset and
performance metric?" In this work, we start by giving the first large-scale
study of recommender system approaches by comparing 18 algorithms and 100 sets
of hyperparameters across 85 datasets and 315 metrics. We find that the best
algorithms and hyperparameters are highly dependent on the dataset and
performance metric; however, there are also strong correlations between the
performance of each algorithm and various meta-features of the datasets.
Motivated by these findings, we create RecZilla, a meta-learning approach to
recommender systems that uses a model to predict the best algorithm and
hyperparameters for new, unseen datasets. By using far more meta-training data
than prior work, RecZilla is able to substantially reduce the level of human
involvement when faced with a new recommender system application. We not only
release our code and pretrained RecZilla models, but also all of our raw
experimental results, so that practitioners can train a RecZilla model for
their desired performance metric: https://github.com/naszilla/reczilla.
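The meta-learning idea can be sketched in miniature: predict which algorithm will perform best on an unseen dataset from dataset-level meta-features. The meta-features, scores, and algorithm names below are illustrative placeholders, not RecZilla's actual feature set or model (RecZilla trains a richer predictor on far more meta-training data):

```python
import numpy as np

# Hypothetical meta-training data: one row of meta-features per dataset
# (e.g. num users, num items, density), plus the observed score of each
# candidate algorithm on that dataset. All numbers are illustrative.
meta_features = np.array([
    [1e3, 5e2, 0.05],
    [1e5, 2e4, 0.001],
    [5e3, 1e3, 0.02],
])
algo_scores = np.array([   # rows: datasets, cols: algorithms
    [0.31, 0.28, 0.25],
    [0.12, 0.19, 0.22],
    [0.27, 0.26, 0.21],
])
algos = ["ItemKNN", "MF", "SLIM"]  # placeholder algorithm names

def predict_best_algorithm(new_meta, meta_features, algo_scores, algos):
    """1-nearest-neighbour meta-learner: find the most similar known
    dataset in (log-scaled) meta-feature space and recommend the
    algorithm that performed best on it."""
    logged = np.log1p(meta_features)
    query = np.log1p(np.asarray(new_meta, dtype=float))
    nearest = np.argmin(np.linalg.norm(logged - query, axis=1))
    return algos[int(np.argmax(algo_scores[nearest]))]

print(predict_best_algorithm([8e4, 3e4, 0.002], meta_features, algo_scores, algos))
```

A real meta-learner would replace the nearest-neighbour lookup with a trained regression model over many datasets, but the interface — meta-features in, recommended algorithm and hyperparameters out — is the same.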
Related papers
- No learning rates needed: Introducing SALSA -- Stable Armijo Line Search Adaptation [4.45108516823267]
We identify problems of current state-of-the-art line search methods, propose enhancements, and rigorously assess their effectiveness.
We evaluate these methods on data domains orders of magnitude larger and more complex than in previous work.
Our work is publicly available as a Python package, which provides a simple PyTorch implementation.
arXiv Detail & Related papers (2024-07-30T08:47:02Z)
- Monte Carlo Tree Search Boosts Reasoning via Iterative Preference Learning [55.96599486604344]
We introduce an approach aimed at enhancing the reasoning capabilities of Large Language Models (LLMs) through an iterative preference learning process.
We use Monte Carlo Tree Search (MCTS) to iteratively collect preference data, utilizing its look-ahead ability to break down instance-level rewards into more granular step-level signals.
The proposed algorithm employs Direct Preference Optimization (DPO) to update the LLM policy using this newly generated step-level preference data.
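The DPO update at the heart of this approach has a simple closed form. The sketch below shows the standard pairwise DPO loss; the paper's contribution is applying it to MCTS-derived step-level preference pairs, which is not modeled here:

```python
import math

def dpo_loss(logp_w_policy, logp_l_policy, logp_w_ref, logp_l_ref, beta=0.1):
    """Direct Preference Optimization loss for one (chosen, rejected) pair.

    logp_* are summed token log-probabilities of the chosen (w) and
    rejected (l) responses under the current policy and the frozen
    reference model. Minimising this pushes the policy to prefer the
    chosen response relative to the reference."""
    margin = beta * ((logp_w_policy - logp_w_ref) - (logp_l_policy - logp_l_ref))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)

# When the policy already prefers the chosen response, the loss is small:
print(dpo_loss(-10.0, -30.0, -20.0, -20.0))  # margin = 0.1 * 20 = 2.0, loss ≈ 0.1269
```

With zero margin the loss is ln 2, and it decays toward zero as the policy's preference for the chosen response grows relative to the reference.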
arXiv Detail & Related papers (2024-05-01T11:10:24Z)
- Back to Basics: A Simple Recipe for Improving Out-of-Domain Retrieval in Dense Encoders [63.28408887247742]
We study whether training procedures can be improved to yield better generalization capabilities in the resulting models.
We recommend a simple recipe for training dense encoders: Train on MSMARCO with parameter-efficient methods, such as LoRA, and opt for using in-batch negatives unless given well-constructed hard negatives.
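The in-batch-negatives part of this recipe is easy to sketch: each query's positive passage is the matching row of the batch, and every other passage in the batch acts as a negative. A minimal NumPy version, illustrative rather than the authors' training code:

```python
import numpy as np

def in_batch_negatives_loss(q, d):
    """Contrastive loss with in-batch negatives: for a batch of query
    embeddings q and matching passage embeddings d (row i of d is the
    positive for row i of q), every other passage in the batch serves as
    a negative. Standard softmax cross-entropy over dot-product scores."""
    scores = q @ d.T                                 # (B, B) similarity matrix
    scores = scores - scores.max(axis=1, keepdims=True)   # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))              # positives on the diagonal

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))
loss_matched = in_batch_negatives_loss(q, q.copy())       # aligned positives
loss_shuffled = in_batch_negatives_loss(q, q[::-1].copy())  # misaligned positives
print(loss_matched, loss_shuffled)
```

Aligned positives yield a much lower loss than shuffled ones, which is the signal the encoder is trained on.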
arXiv Detail & Related papers (2023-11-16T10:42:58Z)
- RPLKG: Robust Prompt Learning with Knowledge Graph [11.893917358053004]
We propose a new method, robust prompt learning with knowledge graph (RPLKG)
Based on the knowledge graph, we automatically design diverse interpretable and meaningful prompt sets.
RPLKG shows a significant performance improvement compared to zero-shot learning.
arXiv Detail & Related papers (2023-04-21T08:22:58Z)
- Making Look-Ahead Active Learning Strategies Feasible with Neural Tangent Kernels [6.372625755672473]
We propose a new method for approximating active learning acquisition strategies that are based on retraining with hypothetically-labeled candidate data points.
Although this is usually infeasible with deep networks, we use the neural tangent kernel to approximate the result of retraining.
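A minimal sketch of the lookahead idea, using a plain linear kernel as a stand-in for the neural tangent kernel: under a fixed kernel, "retraining" amounts to kernel ridge regression, which has a closed form, so hypothetically labeling a candidate never requires actually retraining a network. Function names and the acquisition score below are illustrative choices, not the paper's exact method:

```python
import numpy as np

def kernel_ridge_predict(K_train, y_train, K_cross, reg=1e-3):
    """Closed-form 'retrained' predictions under a fixed kernel: kernel
    ridge regression, which NTK theory says wide-network retraining
    approximately computes."""
    alpha = np.linalg.solve(K_train + reg * np.eye(len(y_train)), y_train)
    return K_cross @ alpha

def lookahead_score(X_lab, y_lab, x_cand, X_pool, kernel):
    """Expected-change acquisition: for each hypothetical label of the
    candidate, compute the retrained predictions on the pool *without*
    retraining, and score the candidate by how much the pool predictions
    move on average."""
    base = kernel_ridge_predict(kernel(X_lab, X_lab), y_lab, kernel(X_pool, X_lab))
    X_aug = np.vstack([X_lab, x_cand])
    change = 0.0
    for y_hyp in (-1.0, 1.0):                       # average over both labels
        y_aug = np.append(y_lab, y_hyp)
        new = kernel_ridge_predict(kernel(X_aug, X_aug), y_aug,
                                   kernel(X_pool, X_aug))
        change += 0.5 * np.abs(new - base).mean()
    return change

linear_kernel = lambda A, B: A @ B.T                # stand-in for a true NTK

rng = np.random.default_rng(1)
X_lab = rng.normal(size=(5, 3)); y_lab = np.sign(X_lab[:, 0])
X_pool = rng.normal(size=(10, 3))
scores = [lookahead_score(X_lab, y_lab, x[None, :], X_pool, linear_kernel)
          for x in X_pool]
print(int(np.argmax(scores)))  # index of the most informative pool point
```

Swapping `linear_kernel` for an empirical NTK of a deep network is what makes this tractable for deep models.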
arXiv Detail & Related papers (2022-06-25T06:13:27Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
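The core loop of iterative imputation is compact. The sketch below uses a fixed least-squares model for every column, whereas HyperImpute's point is to select each column's model automatically:

```python
import numpy as np

def iterative_impute(X, n_iters=5):
    """Minimal column-wise iterative imputation (a sketch of the general
    idea, not HyperImpute itself): initialise missing entries with column
    means, then repeatedly re-fit a least-squares model per column on the
    other columns and overwrite that column's missing entries."""
    X = X.astype(float).copy()
    mask = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[mask] = np.take(col_means, np.where(mask)[1])   # mean initialisation
    for _ in range(n_iters):
        for j in range(X.shape[1]):
            miss = mask[:, j]
            if not miss.any():
                continue
            others = np.delete(X, j, axis=1)
            A = np.hstack([others, np.ones((len(X), 1))])  # add intercept
            coef, *_ = np.linalg.lstsq(A[~miss], X[~miss, j], rcond=None)
            X[miss, j] = A[miss] @ coef
    return X

# Toy check: column 1 is an exact linear function of column 0,
# so the imputed entry should recover it closely.
X = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, np.nan], [4.0, 8.0]])
print(iterative_impute(X)[2, 1])  # ≈ 6.0
```

Replacing the per-column `lstsq` fit with a per-column model search is what "automatic model selection" adds on top of this loop.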
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- Adaptive Optimization with Examplewise Gradients [23.504973357538418]
We propose a new, more general approach to the design of gradient-based optimization methods for machine learning.
In this new framework, iterations assume access to a batch of estimates per parameter, rather than a single estimate.
This better reflects the information that is actually available in typical machine learning setups.
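A toy illustration of the framework: expose the optimizer to the full batch of per-example gradient estimates rather than only their mean. The specific variance-damped update rule below is our own illustrative choice, not the paper's algorithm:

```python
import numpy as np

def examplewise_gradients(w, X, y):
    """Per-example gradients of squared error for a linear model: row i
    is the gradient contributed by example i alone. A classical optimizer
    would only ever see the mean over rows."""
    residual = X @ w - y                    # (B,)
    return residual[:, None] * X            # (B, D) examplewise gradients

def variance_adapted_step(w, X, y, lr=0.1):
    """Illustrative update in the spirit of the framework: use the spread
    of the per-example gradient estimates (not just their mean) to scale
    each parameter's step, damping coordinates with noisy estimates."""
    G = examplewise_gradients(w, X, y)
    mean = G.mean(axis=0)
    std = G.std(axis=0)
    return w - lr * mean / (std + 1.0)

rng = np.random.default_rng(2)
X = rng.normal(size=(32, 4))
w_true = np.array([1.0, -2.0, 0.5, 0.0])
y = X @ w_true
w = np.zeros(4)
for _ in range(500):
    w = variance_adapted_step(w, X, y)
print(np.round(w, 2))
```

The optimizer still converges on this noiseless toy problem, but it now has access to the per-parameter estimate spread, which a mean-only interface discards.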
arXiv Detail & Related papers (2021-11-30T23:37:01Z)
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
- How much progress have we made in neural network training? A New Evaluation Protocol for Benchmarking Optimizers [86.36020260204302]
We propose a new benchmarking protocol to evaluate both end-to-end efficiency and data-addition training efficiency.
A human study is conducted to show that our evaluation protocol matches human tuning behavior better than the random search.
We then apply the proposed benchmarking framework to 7 optimizers on various tasks, including computer vision, natural language processing, reinforcement learning, and graph mining.
arXiv Detail & Related papers (2020-10-19T21:46:39Z)
- Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters [10.279748604797911]
Key problems in machine learning and data science are routinely modeled as optimization problems and solved via optimization algorithms.
With the increase of the volume of data and the size and complexity of the statistical models used to formulate these often ill-conditioned optimization tasks, there is a need for new efficient algorithms able to cope with these challenges.
In this thesis, we deal with each of these sources of difficulty in a different way. To efficiently address the big data issue, we develop new methods which in each iteration examine a small random subset of the training data only.
To handle the big model issue, we develop methods which in each iteration update a small random subset of the model parameters only.
arXiv Detail & Related papers (2020-08-26T21:15:18Z)
- S^3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization [104.87483578308526]
We propose the model S3-Rec, which stands for Self-Supervised learning for Sequential Recommendation.
For our task, we devise four auxiliary self-supervised objectives to learn the correlations among attribute, item, subsequence, and sequence.
Extensive experiments conducted on six real-world datasets demonstrate the superiority of our proposed method over existing state-of-the-art methods.
arXiv Detail & Related papers (2020-08-18T11:44:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.