Contextual Bandits for adapting to changing User preferences over time
- URL: http://arxiv.org/abs/2009.10073v2
- Date: Wed, 23 Sep 2020 06:01:59 GMT
- Title: Contextual Bandits for adapting to changing User preferences over time
- Authors: Dattaraj Rao
- Abstract summary: Contextual bandits provide an effective way to model the dynamic data problem in ML by leveraging online (incremental) learning.
We build a novel algorithm to solve this problem using an array of action-based learners.
We apply this approach to predicting movie ratings over time by different users from the standard MovieLens dataset.
- Score: 0.4061135251278187
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contextual bandits provide an effective way to model the dynamic
data problem in ML by leveraging online (incremental) learning to continuously
adjust predictions to a changing environment. We explore contextual bandits,
an extension of the traditional reinforcement learning (RL) problem, and build
a novel algorithm to solve it using an array of action-based learners. We
apply this approach to model an article recommendation system, using an array
of stochastic gradient descent (SGD) learners to predict rewards for the
actions taken. We then extend the approach to the publicly available MovieLens
dataset and explore the findings. First, we make available a simplified
simulated dataset that exhibits varying user preferences over time, and we
show how it can be evaluated with static and dynamic learning algorithms. The
dataset released as part of this research is intentionally simulated with a
limited number of features and can be used to evaluate different
problem-solving strategies. We build a classifier on a static snapshot of the
data and evaluate its performance on this dataset, showing the limitations of
a static learner: it is tied to the context at a fixed point in time, and
changing that context brings down its accuracy. Next, we develop a novel
algorithm for solving the contextual bandit problem. Like linear bandits, this
algorithm maps the reward as a function of the context vector, but it uses an
array of learners to capture the variation between actions/arms. We build the
bandit algorithm from an array of stochastic gradient descent (SGD) learners,
with a separate learner per arm. Finally, we apply this contextual bandit
algorithm to predicting movie ratings over time by different users from the
standard MovieLens dataset and demonstrate the results.
Related papers
- Experiment Planning with Function Approximation [49.50254688629728]
We study the problem of experiment planning with function approximation in contextual bandit problems.
We propose two experiment planning strategies compatible with function approximation.
We show that a uniform sampler achieves competitive optimality rates in the setting where the number of actions is small.
arXiv Detail & Related papers (2024-01-10T14:40:23Z)
- MomentDiff: Generative Video Moment Retrieval from Random to Real [71.40038773943638]
We provide a generative diffusion-based framework called MomentDiff.
MomentDiff simulates a typical human retrieval process from random browsing to gradual localization.
We show that MomentDiff consistently outperforms state-of-the-art methods on three public benchmarks.
arXiv Detail & Related papers (2023-07-06T09:12:13Z)
- Performance Evaluation and Comparison of a New Regression Algorithm [4.125187280299247]
We compare the performance of a newly proposed regression algorithm against four conventional machine learning algorithms.
The reader is free to replicate our results since we have provided the source code in a GitHub repository.
arXiv Detail & Related papers (2023-06-15T13:01:16Z)
- Context-Aware Ensemble Learning for Time Series [11.716677452529114]
We introduce a new approach in which a meta learner combines the base models using a superset of features, the union of the base models' feature vectors, rather than their predictions.
Our model does not feed the base models' predictions into a machine learning algorithm, but instead chooses the best possible combination at each time step based on the state of the problem.
arXiv Detail & Related papers (2022-11-30T10:36:13Z)
- What learning algorithm is in-context learning? Investigations with linear models [87.91612418166464]
We investigate the hypothesis that transformer-based in-context learners implement standard learning algorithms implicitly.
We show that trained in-context learners closely match the predictors computed by gradient descent, ridge regression, and exact least-squares regression.
We present preliminary evidence that in-context learners share algorithmic features with these predictors.
arXiv Detail & Related papers (2022-11-28T18:59:51Z)
- Contextual Bandits in a Survey Experiment on Charitable Giving: Within-Experiment Outcomes versus Policy Learning [21.9468085255912]
We design and implement an adaptive experiment (a "contextual bandit") to learn a targeted treatment assignment policy.
The goal is to use a participant's survey responses to determine which charity to expose them to in a donation solicitation.
We evaluate alternative experimental designs by collecting pilot data and then conducting a simulation study.
arXiv Detail & Related papers (2022-11-22T04:44:17Z)
- Making Look-Ahead Active Learning Strategies Feasible with Neural Tangent Kernels [6.372625755672473]
We propose a new method for approximating active learning acquisition strategies that are based on retraining with hypothetically-labeled candidate data points.
Although this is usually infeasible with deep networks, we use the neural tangent kernel to approximate the result of retraining.
arXiv Detail & Related papers (2022-06-25T06:13:27Z)
- Information Theoretic Meta Learning with Gaussian Processes [74.54485310507336]
We formulate meta learning using information theoretic concepts; namely, mutual information and the information bottleneck.
By making use of variational approximations to the mutual information, we derive a general and tractable framework for meta learning.
arXiv Detail & Related papers (2020-09-07T16:47:30Z)
- Meta-learning with Stochastic Linear Bandits [120.43000970418939]
We consider a class of bandit algorithms that implement a regularized version of the well-known OFUL algorithm, where the regularization is a squared Euclidean distance to a bias vector.
We show both theoretically and experimentally, that when the number of tasks grows and the variance of the task-distribution is small, our strategies have a significant advantage over learning the tasks in isolation.
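To make the regularization concrete, a hedged sketch of the biased objective this summary describes, in our own notation rather than the paper's (x_s contexts, y_s rewards, lambda a regularization weight, h the bias vector):

```latex
\hat{\theta}_t = \arg\min_{\theta} \sum_{s=1}^{t} \left( y_s - \langle x_s, \theta \rangle \right)^2 + \lambda \, \lVert \theta - h \rVert_2^2
```

Setting h = 0 recovers the usual OFUL/ridge regularization; meta-learning across tasks then amounts to estimating a good shared h.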
arXiv Detail & Related papers (2020-05-18T08:41:39Z)
- Evaluating Models' Local Decision Boundaries via Contrast Sets [119.38387782979474]
We propose a new annotation paradigm for NLP that helps to close systematic gaps in the test data.
We demonstrate the efficacy of contrast sets by creating them for 10 diverse NLP datasets.
Although our contrast sets are not explicitly adversarial, model performance is significantly lower on them than on the original test sets.
arXiv Detail & Related papers (2020-04-06T14:47:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.