Bandit Data-Driven Optimization
- URL: http://arxiv.org/abs/2008.11707v2
- Date: Fri, 14 Jan 2022 21:28:55 GMT
- Title: Bandit Data-Driven Optimization
- Authors: Zheyuan Ryan Shi, Zhiwei Steven Wu, Rayid Ghani, Fei Fang
- Abstract summary: There are four major pain points that a machine learning pipeline must overcome in order to be useful in these settings.
We introduce bandit data-driven optimization, the first iterative prediction-prescription framework to address these pain points.
We propose PROOF, a novel algorithm for this framework, and formally prove that it enjoys no regret.
- Score: 62.01362535014316
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Applications of machine learning in the non-profit and public sectors often
feature an iterative workflow of data acquisition, prediction, and optimization
of interventions. There are four major pain points that a machine learning
pipeline must overcome in order to be actually useful in these settings: small
data, data collected only under the default intervention, unmodeled objectives
due to communication gap, and unforeseen consequences of the intervention. In
this paper, we introduce bandit data-driven optimization, the first iterative
prediction-prescription framework to address these pain points. Bandit
data-driven optimization combines the advantages of online bandit learning and
offline predictive analytics in an integrated framework. We propose PROOF, a
novel algorithm for this framework, and formally prove that it enjoys no regret.
Using numerical simulations, we show that PROOF outperforms existing baselines.
We also apply PROOF in a detailed case study of food
rescue volunteer recommendation, and show that PROOF as a framework works well
with the intricacies of ML models in real-world AI for non-profit and public
sector applications.
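The online-learning half of the framework can be illustrated with a standard UCB1 bandit loop. This is a textbook sketch, not the PROOF algorithm itself; the Bernoulli arm means and horizon below are illustrative assumptions, not values from the paper.

```python
import math
import random

def ucb1(true_means, horizon, seed=0):
    """Minimal UCB1 loop: pull each arm once, then repeatedly pull the
    arm with the highest upper confidence bound on its mean reward.
    Rewards are simulated as Bernoulli draws from `true_means`."""
    rng = random.Random(seed)
    k = len(true_means)
    counts = [0] * k          # number of pulls per arm
    totals = [0.0] * k        # cumulative reward per arm
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1       # initialisation: try every arm once
        else:
            # exploitation term (empirical mean) + exploration bonus
            arm = max(
                range(k),
                key=lambda a: totals[a] / counts[a]
                + math.sqrt(2 * math.log(t) / counts[a]),
            )
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        totals[arm] += reward
    return counts

# Over a long horizon, the best arm (mean 0.8) should dominate the pulls,
# which is what the no-regret guarantee formalises asymptotically.
counts = ucb1([0.2, 0.5, 0.8], horizon=2000)
```

In bandit data-driven optimization, the exploration bonus would be combined with an offline predictive model rather than raw empirical means; the sketch above shows only the generic bandit mechanism.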
Related papers
- Forecasting Outside the Box: Application-Driven Optimal Pointwise Forecasts for Stochastic Optimization [0.0]
We present an integrated learning and optimization procedure that yields pointwise forecasts tailored to the downstream stochastic optimization problem.
Numerical experiments on inventory problems from the literature, as well as a bike-sharing problem with real data, demonstrate that the proposed approach performs well.
arXiv Detail & Related papers (2024-11-05T21:54:50Z) - Data Shapley in One Training Run [88.59484417202454]
Data Shapley provides a principled framework for attributing data's contribution within machine learning contexts.
Existing approaches require re-training models on different data subsets, which is computationally intensive.
This paper introduces In-Run Data Shapley, which addresses these limitations by offering scalable data attribution for a target model of interest.
arXiv Detail & Related papers (2024-06-16T17:09:24Z) - Self-Augmented Preference Optimization: Off-Policy Paradigms for Language Model Alignment [104.18002641195442]
We introduce Self-Augmented Preference Optimization (SAPO), an effective and scalable training paradigm that does not require existing paired data.
Building on the self-play concept, which autonomously generates negative responses, we further incorporate an off-policy learning pipeline to enhance data exploration and exploitation.
arXiv Detail & Related papers (2024-05-31T14:21:04Z) - Implicitly Guided Design with PropEn: Match your Data to Follow the Gradient [52.2669490431145]
PropEn is inspired by 'matching', which enables implicit guidance without training a discriminator.
We show that training with a matched dataset approximates the gradient of the property of interest while remaining within the data distribution.
arXiv Detail & Related papers (2024-05-28T11:30:19Z) - A Unified Linear Programming Framework for Offline Reward Learning from Human Demonstrations and Feedback [6.578074497549894]
Inverse Reinforcement Learning (IRL) and Reinforcement Learning from Human Feedback (RLHF) are pivotal methodologies in reward learning.
This paper introduces a novel linear programming (LP) framework tailored for offline reward learning.
arXiv Detail & Related papers (2024-05-20T23:59:26Z) - Functional Graphical Models: Structure Enables Offline Data-Driven Optimization [111.28605744661638]
We show how structure can enable sample-efficient data-driven optimization.
We also present a data-driven optimization algorithm that infers the FGM structure itself.
arXiv Detail & Related papers (2024-01-08T22:33:14Z) - Communication-Efficient Federated Non-Linear Bandit Optimization [26.23638987873429]
We propose a new algorithm, named Fed-GO-UCB, for federated bandit optimization with generic non-linear objective function.
Under mild conditions, we rigorously prove that Fed-GO-UCB achieves sub-linear rates for both cumulative regret and communication cost.
arXiv Detail & Related papers (2023-11-03T03:50:31Z) - Building Resilience to Out-of-Distribution Visual Data via Input Optimization and Model Finetuning [13.804184845195296]
We propose a preprocessing model that learns to optimise input data for a specific target vision model.
We investigate several out-of-distribution scenarios in the context of semantic segmentation for autonomous vehicles.
We demonstrate that our approach can enable performance on such data comparable to that of a finetuned model.
arXiv Detail & Related papers (2022-11-29T14:06:35Z) - Deep Active Ensemble Sampling For Image Classification [8.31483061185317]
Active learning frameworks aim to reduce the cost of data annotation by actively requesting the labeling for the most informative data points.
Proposed approaches include uncertainty-based techniques, geometric methods, and implicit combinations of the two.
We present an innovative integration of recent progress in both uncertainty-based and geometric frameworks to enable an efficient exploration/exploitation trade-off in sample selection strategy.
Our framework provides two advantages: (1) accurate posterior estimation, and (2) tune-able trade-off between computational overhead and higher accuracy.
arXiv Detail & Related papers (2022-10-11T20:20:20Z) - Provably Efficient Causal Reinforcement Learning with Confounded Observational Data [135.64775986546505]
We study how to incorporate observational data collected offline, which is often abundantly available in practice, to improve sample efficiency in the online setting.
We propose the deconfounded optimistic value iteration (DOVI) algorithm, which incorporates the confounded observational data in a provably efficient manner.
arXiv Detail & Related papers (2020-06-22T14:49:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.