From Function to Distribution Modeling: A PAC-Generative Approach to
Offline Optimization
- URL: http://arxiv.org/abs/2401.02019v1
- Date: Thu, 4 Jan 2024 01:32:50 GMT
- Title: From Function to Distribution Modeling: A PAC-Generative Approach to
Offline Optimization
- Authors: Qiang Zhang, Ruida Zhou, Yang Shen and Tie Liu
- Abstract summary: This paper considers the problem of offline optimization, where the objective function is unknown except for a collection of ``offline'' data examples.
Instead of learning and then optimizing the unknown objective function, we take on a less intuitive but more direct view that optimization can be thought of as a process of sampling from a generative model.
- Score: 30.689032197123755
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper considers the problem of offline optimization, where the objective
function is unknown except for a collection of ``offline'' data examples. While
recent years have seen a flurry of work on applying various machine learning
techniques to the offline optimization problem, the majority of these works
focused on learning a surrogate of the unknown objective function and then
applying existing optimization algorithms. While the idea of modeling the
unknown objective function is intuitive and appealing, from the learning point
of view it also makes it very difficult to tune the objective of the learner
according to the objective of optimization. Instead of learning and then
optimizing the unknown objective function, in this paper we take on a less
intuitive but more direct view that optimization can be thought of as a process
of sampling from a generative model. To learn an effective generative model
from the offline data examples, we consider the standard technique of
``re-weighting'', and our main technical contribution is a probably
approximately correct (PAC) lower bound on the natural optimization objective,
which allows us to jointly learn a weight function and a score-based generative
model. The robustly competitive performance of the proposed approach is
demonstrated via empirical studies using the standard offline optimization
benchmarks.
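The ``optimization as sampling'' view can be illustrated with a minimal sketch: re-weight the offline examples by their scores and fit a simple weighted generator to the re-weighted data. The exponential weight, the temperature, and the diagonal Gaussian below are all illustrative stand-ins; the paper instead learns the weight function jointly with a score-based generative model via its PAC lower bound.

```python
import numpy as np

def reweight(values, temperature=0.5):
    # Exponential re-weighting: higher-scoring offline examples get larger
    # weights.  The exponential form and the temperature are illustrative;
    # the paper learns the weight function jointly via its PAC lower bound.
    shifted = (values - values.max()) / temperature  # stabilize the exponent
    w = np.exp(shifted)
    return w / w.sum()

def fit_weighted_generator(xs, weights):
    # Fit a diagonal Gaussian to the re-weighted data -- a toy stand-in
    # for the score-based generative model learned in the paper.
    mean = np.average(xs, axis=0, weights=weights)
    var = np.average((xs - mean) ** 2, axis=0, weights=weights)
    return mean, np.sqrt(var) + 1e-8

rng = np.random.default_rng(0)
xs = rng.normal(size=(200, 2))          # offline designs
ys = -np.sum((xs - 1.0) ** 2, axis=1)   # toy objective, maximized at (1, 1)

w = reweight(ys)
mean, std = fit_weighted_generator(xs, w)
candidates = rng.normal(mean, std, size=(10, 2))  # optimization as sampling
```

Because the weights concentrate on high-scoring examples, the fitted generator shifts toward the region where the objective is large, so sampling from it directly proposes good candidates.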
Related papers
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z)
- Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses.
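As a rough illustration of blending logistic and exponential losses on a preference margin (the reward difference between the chosen and rejected responses), the sketch below uses a fixed convex mixing coefficient; DiscoPOP's discovered loss adapts the blend as a function of the margin, so `mix` here is only a hypothetical stand-in.

```python
import math

def logistic_loss(margin):
    # DPO-style logistic loss: -log sigmoid(margin).
    return math.log1p(math.exp(-margin))

def exponential_loss(margin):
    return math.exp(-margin)

def blended_loss(margin, mix=0.5):
    # Fixed convex blend of the two losses.  DiscoPOP's discovered loss
    # adapts the blend based on the margin rather than using a constant.
    return mix * logistic_loss(margin) + (1.0 - mix) * exponential_loss(margin)
```

Both component losses decrease as the margin grows, so any convex blend still rewards widening the gap between chosen and rejected responses.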
arXiv Detail & Related papers (2024-06-12T16:58:41Z)
- Offline Model-Based Optimization via Policy-Guided Gradient Search [30.87992788876113]
We introduce a new learning-to-search perspective for offline optimization by reformulating it as an offline reinforcement learning problem.
Our proposed policy-guided search approach explicitly learns the best policy for a given surrogate model created from the offline data.
arXiv Detail & Related papers (2024-05-08T18:27:37Z)
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
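An Ordered Weighted Averaging objective attaches weights to ranks rather than to individual objectives, which is what makes it nondifferentiable (it involves a sort) and suitable for fairness. A minimal sketch, with hypothetical fairness weights:

```python
import numpy as np

def owa(values, weights):
    # Ordered Weighted Averaging: sort the objective values and take a
    # weighted sum, so weights attach to *ranks*, not to individual
    # objectives.  The sort makes the objective nondifferentiable.
    return float(np.dot(weights, np.sort(values)))  # ascending: worst first

# Hypothetical fairness weights: decreasing over ranks, so the worst-off
# objective counts the most.
fair_w = np.array([0.5, 0.3, 0.2])
```

With decreasing weights on the ascending sort, OWA is permutation-invariant and never exceeds the plain mean, so it penalizes unbalanced outcomes.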
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
- Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and Optimization [59.386153202037086]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This approach can be inefficient and requires handcrafted, problem-specific rules for backpropagation through the optimization step.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by predictive models.
arXiv Detail & Related papers (2023-11-22T01:32:06Z)
- Teaching Networks to Solve Optimization Problems [13.803078209630444]
We propose to replace the iterative solvers altogether with a trainable parametric set function.
We show the feasibility of learning such parametric (set) functions to solve various classic optimization problems.
arXiv Detail & Related papers (2022-02-08T19:13:13Z)
- Conservative Objective Models for Effective Offline Model-Based Optimization [78.19085445065845]
Computational design problems arise in a number of settings, from synthetic biology to computer architectures.
We propose a method that learns a model of the objective function that lower bounds the actual value of the ground-truth objective on out-of-distribution inputs.
These conservative objective models (COMs) are simple to implement and outperform a number of existing methods on a wide range of model-based optimization (MBO) problems.
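The core idea, a regression loss plus a penalty that pushes the surrogate's predictions down on out-of-distribution inputs, can be sketched as a loss function. How the OOD inputs are chosen (COMs finds them by gradient ascent on the surrogate itself) and the penalty weight `alpha` are illustrative simplifications here.

```python
import numpy as np

def conservative_loss(pred_data, y_data, pred_ood, alpha=0.5):
    # Squared-error fit on the offline data, plus a penalty that pushes the
    # surrogate's predictions *down* on out-of-distribution inputs, so the
    # trained surrogate lower-bounds the true objective there.  `alpha` is
    # an illustrative knob; the OOD search is elided.
    mse = float(np.mean((pred_data - y_data) ** 2))
    return mse + alpha * float(np.mean(pred_ood))
```

Minimizing this loss trades predictive accuracy on the data against pessimism off the data, which keeps a downstream optimizer from exploiting spurious peaks of the surrogate.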
arXiv Detail & Related papers (2021-07-14T17:55:28Z)
- Bayesian Optimization for Selecting Efficient Machine Learning Models [53.202224677485525]
We present a unified Bayesian Optimization framework for jointly optimizing models for both prediction effectiveness and training efficiency.
Experiments on model selection for recommendation tasks indicate that models selected this way significantly improve training efficiency.
arXiv Detail & Related papers (2020-08-02T02:56:30Z)
- Sample-Efficient Optimization in the Latent Space of Deep Generative Models via Weighted Retraining [1.5293427903448025]
We introduce an improved method for efficient black-box optimization, which performs the optimization in the low-dimensional, continuous latent manifold learned by a deep generative model.
We achieve this by periodically retraining the generative model on the data points queried along the optimization trajectory, as well as weighting those data points according to their objective function value.
This weighted retraining can be easily implemented on top of existing methods, and is empirically shown to significantly improve their efficiency and performance on synthetic and real-world optimization problems.
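A toy version of the weighted-retraining loop: a weighted Gaussian stands in for the deep generative model, and a rank-based weighting is one concrete reading of ``weighting those data points according to their objective function value'' (the objective, the weighting constant `k`, and the generator are all illustrative assumptions).

```python
import numpy as np

def objective(x):
    # Hypothetical black-box objective (queried, not known, in practice).
    return -np.sum((x - 2.0) ** 2, axis=-1)

def rank_weights(values, k=1e-3):
    # Rank-based weights: w_i proportional to 1 / (k*N + rank_i), with
    # rank 0 for the current best point, so good points dominate.
    order = np.argsort(-values)            # best first
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(values))
    w = 1.0 / (k * len(values) + ranks)
    return w / w.sum()

rng = np.random.default_rng(0)
xs = rng.normal(size=(50, 2))              # initial offline data
ys = objective(xs)

for _ in range(5):                         # periodic weighted retraining
    w = rank_weights(ys)
    mean = np.average(xs, axis=0, weights=w)   # "retrain" the toy generator
    std = np.sqrt(np.average((xs - mean) ** 2, axis=0, weights=w)) + 1e-3
    new_x = rng.normal(mean, std, size=(10, 2))    # sample candidates
    xs = np.vstack([xs, new_x])                    # query objective, append
    ys = np.concatenate([ys, objective(new_x)])
```

Each round the generator is refit on data that increasingly emphasizes high-scoring points, so sampling concentrates near the incumbent optimum while the residual spread keeps exploring.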
arXiv Detail & Related papers (2020-06-16T14:34:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.