Conservative Objective Models for Effective Offline Model-Based
Optimization
- URL: http://arxiv.org/abs/2107.06882v1
- Date: Wed, 14 Jul 2021 17:55:28 GMT
- Title: Conservative Objective Models for Effective Offline Model-Based
Optimization
- Authors: Brandon Trabucco, Aviral Kumar, Xinyang Geng, Sergey Levine
- Abstract summary: Computational design problems arise in a number of settings, from synthetic biology to computer architectures.
We propose a method that learns a model of the objective function that lower bounds the actual value of the ground-truth objective on out-of-distribution inputs.
COMs are simple to implement and outperform a number of existing methods on a wide range of MBO problems.
- Score: 78.19085445065845
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computational design problems arise in a number of settings, from synthetic
biology to computer architectures. In this paper, we aim to solve data-driven
model-based optimization (MBO) problems, where the goal is to find a design
input that maximizes an unknown objective function, given access only to a
static dataset of prior experiments. Such data-driven optimization procedures
are the only practical methods in many real-world domains where active data
collection is expensive (e.g., when optimizing over proteins) or dangerous
(e.g., when optimizing over aircraft designs). Typical methods for MBO that
optimize the design against a learned model suffer from distributional shift:
it is easy to find a design that "fools" the model into predicting a high
value. To overcome this, we propose conservative objective models (COMs), a
method that learns a model of the objective function that lower bounds the
actual value of the ground-truth objective on out-of-distribution inputs, and
uses it for optimization. Structurally, COMs resemble adversarial training
methods used to overcome adversarial examples. COMs are simple to implement and
outperform a number of existing methods on a wide range of MBO problems,
including optimizing protein sequences, robot morphologies, neural network
weights, and superconducting materials.
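
The core training idea admits a compact sketch. Below is a minimal, hypothetical PyTorch rendering of a conservative objective model, not the authors' released implementation: an inner gradient-ascent loop plays the "attacker" that finds designs the current surrogate over-scores, and the training loss pushes predictions down on those designs while keeping the regression accurate on the data. The architecture, penalty weight `alpha`, and ascent schedule are illustrative assumptions.

```python
import torch

def gradient_ascent_designs(model, x, steps=10, step_size=0.05):
    """Inner 'attacker': find designs that the current surrogate scores highly."""
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        score = model(x_adv).sum()
        (grad,) = torch.autograd.grad(score, x_adv)
        x_adv = (x_adv + step_size * grad).detach().requires_grad_(True)
    return x_adv.detach()

def conservative_loss(model, x, y, alpha=1.0):
    """Regression loss plus a conservatism penalty: push predicted values down
    on optimizer-found designs and up on dataset designs, so the surrogate
    tends to lower-bound the true objective away from the data."""
    mse = ((model(x) - y) ** 2).mean()
    x_adv = gradient_ascent_designs(model, x)
    penalty = model(x_adv).mean() - model(x).mean()
    return mse + alpha * penalty

# Illustrative usage on synthetic data (shapes and hyperparameters are assumptions).
model = torch.nn.Sequential(torch.nn.Linear(8, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(128, 8), torch.randn(128, 1)
for _ in range(100):
    opt.zero_grad()
    conservative_loss(model, x, y).backward()
    opt.step()
```

The structure mirrors adversarial training: the inner loop supplies the adversarial examples, and the outer loop regularizes against them. After training, final designs would be produced by running the same gradient ascent against the now-conservative model.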
Related papers
- Cliqueformer: Model-Based Optimization with Structured Transformers [102.55764949282906]
We develop a model that learns the structure of an MBO task and empirically leads to improved designs.
We evaluate Cliqueformer on various tasks, ranging from high-dimensional black-box functions to real-world tasks of chemical and genetic design.
arXiv Detail & Related papers (2024-10-17T00:35:47Z)
- Compositional Generative Inverse Design [69.22782875567547]
Inverse design, where we seek to design input variables in order to optimize an underlying objective function, is an important problem.
We show that by instead optimizing over the learned energy function captured by the diffusion model, we can avoid such adversarial examples.
In an N-body interaction task and a challenging 2D multi-airfoil design task, we demonstrate that by composing the learned diffusion model at test time, our method allows us to design initial states and boundary shapes.
arXiv Detail & Related papers (2024-01-24T01:33:39Z)
- Functional Graphical Models: Structure Enables Offline Data-Driven Optimization [111.28605744661638]
We show how structure can enable sample-efficient data-driven optimization.
We also present a data-driven optimization algorithm that infers the FGM structure itself.
arXiv Detail & Related papers (2024-01-08T22:33:14Z)
- From Function to Distribution Modeling: A PAC-Generative Approach to Offline Optimization [30.689032197123755]
This paper considers the problem of offline optimization, where the objective function is unknown except for a collection of "offline" data examples.
Instead of learning and then optimizing the unknown objective function, we take on a less intuitive but more direct view that optimization can be thought of as a process of sampling from a generative model.
arXiv Detail & Related papers (2024-01-04T01:32:50Z)
- ROMO: Retrieval-enhanced Offline Model-based Optimization [14.277672372460785]
Data-driven black-box model-based optimization (MBO) problems arise in a number of practical application scenarios.
We propose retrieval-enhanced offline model-based optimization (ROMO)
ROMO is simple to implement and outperforms state-of-the-art approaches in the CoMBO setting.
arXiv Detail & Related papers (2023-10-11T15:04:33Z)
- Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
arXiv Detail & Related papers (2021-02-16T06:04:27Z)
- High-Dimensional Bayesian Optimization via Tree-Structured Additive Models [40.497123136157946]
We consider generalized additive models in which low-dimensional functions with overlapping subsets of variables are composed to model a high-dimensional target function.
Our goal is to lower the computational resources required and facilitate faster model learning.
We demonstrate and discuss the efficacy of our approach via a range of experiments on synthetic functions and real-world datasets.
arXiv Detail & Related papers (2020-12-24T03:56:44Z)
- Sample-Efficient Optimization in the Latent Space of Deep Generative Models via Weighted Retraining [1.5293427903448025]
We introduce an improved method for efficient black-box optimization, which performs the optimization in the low-dimensional, continuous latent manifold learned by a deep generative model.
We achieve this by periodically retraining the generative model on the data points queried along the optimization trajectory, as well as weighting those data points according to their objective function value.
This weighted retraining can be easily implemented on top of existing methods, and is empirically shown to significantly improve their efficiency and performance on synthetic and real-world optimization problems (a toy sketch of the reweight-and-retrain loop follows after this list).
arXiv Detail & Related papers (2020-06-16T14:34:40Z)
- Model Inversion Networks for Model-Based Optimization [110.24531801773392]
We propose model inversion networks (MINs), which learn an inverse mapping from scores to inputs.
MINs can scale to high-dimensional input spaces and leverage offline logged data for both contextual and non-contextual optimization problems.
We evaluate MINs on tasks from the Bayesian optimization literature, high-dimensional model-based optimization problems over images and protein designs, and contextual bandit optimization from logged data.
arXiv Detail & Related papers (2019-12-31T18:06:49Z)
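
The weighted-retraining idea summarized above (Sample-Efficient Optimization in the Latent Space of Deep Generative Models) reduces to a short loop: reweight the dataset toward high-scoring designs, refit the generative model, sample new candidates, query them, and repeat. The following is a minimal toy sketch; the rank-based weighting and the weighted Gaussian "generator" are stand-ins for the paper's deep generative model and latent-space optimizer, introduced here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rank_weights(ys, k=1e-3):
    # Higher-scoring points get more training mass; k sets how sharply
    # the weights concentrate on the best designs (illustrative choice).
    ranks = np.argsort(np.argsort(-np.asarray(ys)))  # rank 0 = current best
    w = 1.0 / (k * len(ys) + ranks)
    return w / w.sum()

def fit_generator(xs, weights):
    # Toy stand-in for retraining a deep generative model: a weighted
    # Gaussian fit over the design space.
    mu = np.average(xs, weights=weights, axis=0)
    var = np.average((xs - mu) ** 2, weights=weights, axis=0)
    return mu, np.sqrt(var) + 1e-6

def objective(x):
    # Unknown and expensive to evaluate in practice; a toy quadratic here.
    return -np.sum((x - 2.0) ** 2)

xs = [rng.normal(size=3) for _ in range(20)]
ys = [objective(x) for x in xs]

for _ in range(30):  # reweight, retrain, sample, query
    mu, sd = fit_generator(np.array(xs), rank_weights(ys))
    x_new = rng.normal(mu, sd)
    xs.append(x_new)
    ys.append(objective(x_new))

print("best objective value found:", max(ys))
```

The loop concentrates the generator's mass on the best designs seen so far while the periodic refit keeps it consistent with all queried data, which is the mechanism the abstract credits for the method's sample efficiency.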