Offline Multi-Objective Optimization
- URL: http://arxiv.org/abs/2406.03722v1
- Date: Thu, 6 Jun 2024 03:35:09 GMT
- Title: Offline Multi-Objective Optimization
- Authors: Ke Xue, Rong-Xi Tan, Xiaobin Huang, Chao Qian
- Abstract summary: Offline optimization aims to maximize a black-box objective function with a static dataset and has wide applications.
We propose a first benchmark for offline MOO, covering a range of problems from synthetic to real-world tasks.
Empirical results show improvements over the best value of the training set, demonstrating the effectiveness of offline MOO methods.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Offline optimization aims to maximize a black-box objective function with a static dataset and has wide applications. In addition to the objective function being black-box and expensive to evaluate, numerous complex real-world problems entail optimizing multiple conflicting objectives, i.e., multi-objective optimization (MOO). Nevertheless, offline MOO has not progressed as much as offline single-objective optimization (SOO), mainly due to the lack of benchmarks like Design-Bench for SOO. To bridge this gap, we propose a first benchmark for offline MOO, covering a range of problems from synthetic to real-world tasks. This benchmark provides tasks, datasets, and open-source examples, which can serve as a foundation for method comparisons and advancements in offline MOO. Furthermore, we analyze how the current related methods can be adapted to offline MOO from four fundamental perspectives, including data, model architecture, learning algorithm, and search algorithm. Empirical results show improvements over the best value of the training set, demonstrating the effectiveness of offline MOO methods. As no particular method stands out significantly, there is still an open challenge in further enhancing the effectiveness of offline MOO. We finally discuss future challenges for offline MOO, with the hope of shedding some light on this emerging field. Our code is available at https://github.com/lamda-bbo/offline-moo.
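The four perspectives above map naturally onto a minimal pipeline: a static dataset (data), one surrogate per objective (model architecture), a fitting procedure (learning algorithm), and an optimizer run purely against the surrogates (search algorithm). The sketch below illustrates one such adaptation of a single-objective method via random weighted-sum scalarization; every name and shape in it is hypothetical, and it does not reflect the actual offline-moo API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical static dataset: N designs, D features, M = 2 conflicting
# objectives to maximize. Shapes and names are illustrative stand-ins,
# not the actual offline-moo task format.
N, D, M = 512, 8, 2
X = rng.normal(size=(N, D))
Y = np.stack([X @ rng.normal(size=D), -(X @ rng.normal(size=D))], axis=1)

# Model: one cheap surrogate per objective, fit once on the static
# dataset (linear least squares standing in for a neural surrogate).
W = np.linalg.lstsq(X, Y, rcond=None)[0]            # (D, M) weights

# Search: adapt a single-objective offline optimizer to MOO by
# maximizing random weighted-sum scalarizations of the surrogates,
# with no new queries to the true objectives.
designs = []
for _ in range(16):
    w = rng.dirichlet(np.ones(M))                   # random trade-off
    x = X[np.argmax(Y @ w)].copy()                  # best known start
    for _ in range(100):
        x += 0.01 * (W @ w)                         # ascend grad of (x @ W) @ w
    designs.append(x)

preds = np.asarray(designs) @ W                     # predicted objective values
print(preds.round(2))
```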
Related papers
- Optima: Optimizing Effectiveness and Efficiency for LLM-Based Multi-Agent System [75.25394449773052]
Large Language Model (LLM) based multi-agent systems (MAS) show remarkable potential in collaborative problem-solving.
Yet they still face critical challenges: low communication efficiency, poor scalability, and a lack of effective parameter-updating optimization methods.
We present Optima, a novel framework that addresses these issues by significantly enhancing both communication efficiency and task effectiveness.
arXiv Detail & Related papers (2024-10-10T17:00:06Z)
- Offline Model-Based Optimization via Policy-Guided Gradient Search [30.87992788876113]
We introduce a new learning-to-search perspective for offline optimization by reformulating it as an offline reinforcement learning problem.
Our proposed policy-guided search approach explicitly learns the best policy for a given surrogate model created from the offline data.
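Read as an offline RL problem, search itself becomes an episodic task. The sketch below is one assumed reading of that reformulation, with the state as the current design, the action as a design increment, and the reward as the frozen surrogate's improvement; it is not the paper's implementation, which learns the policy from offline data rather than hand-coding it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch of the offline-RL reformulation of search (an assumed reading
# of the abstract): state = current design, action = design increment,
# reward = gain in a frozen surrogate fit on the offline data.
D = 8
w_hat = rng.normal(size=D)                 # frozen surrogate weights
surrogate = lambda x: float(x @ w_hat)

def rollout(policy, x0, steps=32):
    """Roll out a search policy from a starting design."""
    x, total = x0.copy(), 0.0
    for _ in range(steps):
        a = policy(x)                              # action: proposed increment
        total += surrogate(x + a) - surrogate(x)   # reward: surrogate gain
        x = x + a
    return x, total

# Hand-coded stand-in policy; the paper instead *learns* the policy
# from the offline data with an offline RL algorithm.
policy = lambda x: 0.05 * w_hat / np.linalg.norm(w_hat)

x_final, ret = rollout(policy, rng.normal(size=D))
print(f"return (total surrogate improvement): {ret:.2f}")
```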
arXiv Detail & Related papers (2024-05-08T18:27:37Z)
- Query-Dependent Prompt Evaluation and Optimization with Offline Inverse RL [62.824464372594576]
We aim to enhance arithmetic reasoning ability of Large Language Models (LLMs) through zero-shot prompt optimization.
We identify a previously overlooked objective of query dependency in such optimization.
We introduce Prompt-OIRL, which harnesses offline inverse reinforcement learning to draw insights from offline prompting demonstration data.
arXiv Detail & Related papers (2023-09-13T01:12:52Z)
- Large-Batch, Iteration-Efficient Neural Bayesian Design Optimization [37.339567743948955]
We present a novel Bayesian optimization (BO) framework specifically tailored to address the limitations of conventional BO.
Our key contribution is a highly scalable, sample-based acquisition function that performs a non-dominated sorting of objectives.
We show that our acquisition function in combination with different Bayesian neural network surrogates is effective in data-intensive environments with a minimal number of iterations.
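Non-dominated sorting, the core of such an acquisition step, ranks a batch of sampled objective vectors into successive Pareto fronts so that lower ranks can be preferred. A minimal version for maximization (illustrative only, not the paper's implementation):

```python
import numpy as np

def non_dominated_sort(F):
    """Rank objective vectors (rows of F, maximization) into Pareto
    fronts: rank 0 = non-dominated; rank 1 = non-dominated once rank 0
    is removed; and so on. Simple O(M * n^2) pass per front."""
    F = np.asarray(F, dtype=float)
    ranks = np.full(len(F), -1)
    remaining = np.arange(len(F))
    rank = 0
    while len(remaining):
        sub = F[remaining]
        dominated = np.array([
            # f is dominated if some row is >= everywhere and > somewhere
            bool(np.any(np.all(sub >= f, axis=1) & np.any(sub > f, axis=1)))
            for f in sub
        ])
        ranks[remaining[~dominated]] = rank
        remaining = remaining[dominated]
        rank += 1
    return ranks

# Two-objective example; lower rank = better acquisition score.
F = [[1.0, 4.0], [2.0, 3.0], [3.0, 1.0], [1.0, 3.0], [0.5, 0.5]]
print(non_dominated_sort(F))   # -> [0 0 0 1 2]
```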
arXiv Detail & Related papers (2023-06-01T19:10:57Z)
- PASTA: Pessimistic Assortment Optimization [25.51792135903357]
We consider a class of assortment optimization problems in an offline data-driven setting.
We propose an algorithm referred to as Pessimistic ASsortment opTimizAtion (PASTA) based on the principle of pessimism.
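The principle of pessimism penalizes options whose offline estimates rest on little data, so the optimizer favors well-supported choices. A generic lower-confidence-bound illustration with toy numbers (the exact penalty in PASTA differs):

```python
import numpy as np

rng = np.random.default_rng(2)

# Offline revenue observations for 4 candidate assortments: some are
# observed often, some rarely. Toy numbers for illustration only.
obs = [rng.normal(1.0, 0.5, size=n) for n in (200, 50, 5, 2)]

means = np.array([o.mean() for o in obs])
counts = np.array([len(o) for o in obs])

# Pessimistic value: penalize the empirical mean by an uncertainty
# term that shrinks with sample size (generic LCB form).
beta = 1.0
pessimistic = means - beta / np.sqrt(counts)

print("greedy pick     :", int(np.argmax(means)))
print("pessimistic pick:", int(np.argmax(pessimistic)))
```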
arXiv Detail & Related papers (2023-02-08T01:11:51Z)
- Efficient Online Reinforcement Learning with Offline Data [78.92501185886569]
We show that we can simply apply existing off-policy methods to leverage offline data when learning online.
We extensively ablate these design choices, demonstrating the key factors that most affect performance.
We see that correct application of these simple recommendations can provide a 2.5× improvement over existing approaches.
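One simple design choice of this kind is symmetric sampling, where each training batch mixes offline and online transitions in equal parts. The sketch below is a generic version of that idea, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(3)

def symmetric_batch(offline, online, batch_size=256):
    """Draw half of each training batch from the static offline dataset
    and half from the growing online replay buffer, so offline data
    keeps shaping updates as online experience accumulates."""
    half = batch_size // 2
    idx_off = rng.integers(len(offline), size=half)
    idx_on = rng.integers(len(online), size=batch_size - half)
    return np.concatenate([offline[idx_off], online[idx_on]])

offline = rng.normal(size=(10_000, 4))   # toy (s, a, r, s') rows
online = rng.normal(size=(128, 4))       # freshly collected transitions
batch = symmetric_batch(offline, online)
print(batch.shape)                       # (256, 4)
```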
arXiv Detail & Related papers (2023-02-06T17:30:22Z)
- Design-Bench: Benchmarks for Data-Driven Offline Model-Based Optimization [82.02008764719896]
Black-box model-based optimization problems are ubiquitous in a wide range of domains, such as the design of proteins, DNA sequences, aircraft, and robots.
We present Design-Bench, a benchmark for offline MBO with a unified evaluation protocol and reference implementations of recent methods.
Our benchmark includes a suite of diverse and realistic tasks derived from real-world optimization problems in biology, materials science, and robotics.
arXiv Detail & Related papers (2022-02-17T05:33:27Z)
- JUMBO: Scalable Multi-task Bayesian Optimization using Offline Data [86.8949732640035]
We propose JUMBO, an MBO algorithm that sidesteps the limitations of standard Bayesian optimization by querying additional offline data.
We show that it achieves no regret under conditions analogous to those of GP-UCB.
Empirically, we demonstrate significant performance improvements over existing approaches on two real-world optimization problems.
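GP-UCB-style guarantees rest on acquisitions of the form mu(x) + sqrt(beta) * sigma(x), which trade the posterior mean off against its uncertainty. A generic sketch of that score (not JUMBO's actual acquisition):

```python
import numpy as np

def ucb(mu, sigma, beta=2.0):
    """GP-UCB style score: posterior mean plus an exploration bonus
    proportional to the posterior standard deviation."""
    return mu + np.sqrt(beta) * sigma

# Toy posterior over 5 candidate points (stand-in for a GP fit on
# both offline auxiliary data and online queries).
mu = np.array([0.9, 1.0, 0.4, 0.7, 0.2])
sigma = np.array([0.05, 0.02, 0.50, 0.30, 0.60])

x_next = int(np.argmax(ucb(mu, sigma)))
print("query candidate", x_next)   # high mean or high uncertainty wins
```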
arXiv Detail & Related papers (2021-06-02T05:03:38Z)
- Offline Meta-Reinforcement Learning with Advantage Weighting [125.21298190780259]
This paper introduces the offline meta-reinforcement learning (offline meta-RL) problem setting and proposes an algorithm that performs well in this setting.
Offline meta-RL is analogous to the widely successful supervised learning strategy of pre-training a model on a large batch of fixed, pre-collected data.
We propose Meta-Actor Critic with Advantage Weighting (MACAW), an optimization-based meta-learning algorithm that uses simple, supervised regression objectives for both the inner and outer loop of meta-training.
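Advantage weighting of this kind typically turns policy improvement into weighted supervised regression: each sample is weighted by exp(advantage / temperature), so both meta-training loops stay simple regression objectives. The snippet below shows the generic advantage-weighted regression form, not MACAW's exact loss:

```python
import numpy as np

rng = np.random.default_rng(5)

def awr_weights(returns, values, temperature=1.0, clip=20.0):
    """Advantage-weighted regression weights exp(A / T), clipped for
    numerical stability. Generic AWR form, not MACAW's exact loss."""
    adv = returns - values
    return np.minimum(np.exp(adv / temperature), clip)

# Toy batch: weight behavior-cloning targets by estimated advantage.
returns = rng.normal(size=8)
values = rng.normal(size=8)
w = awr_weights(returns, values)

# Weighted regression loss toward the actions in the offline batch.
pred_actions = rng.normal(size=(8, 2))
data_actions = rng.normal(size=(8, 2))
loss = np.mean(w[:, None] * (pred_actions - data_actions) ** 2)
print(f"advantage-weighted regression loss: {loss:.3f}")
```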
arXiv Detail & Related papers (2020-08-13T17:57:14Z)
- DeepCO: Offline Combinatorial Optimization Framework Utilizing Deep Learning [1.2183405753834562]
We propose DeepCO, an offline optimization framework utilizing deep learning.
We also design an offline variant of the Travelling Salesman Problem (TSP) to model the warehouse operation sequence optimization problem.
With only limited historical data, the proposed distribution-regularized optimization outperforms the existing baseline method in offline experiments, reducing route length by 5.7% on average.
arXiv Detail & Related papers (2020-07-20T04:17:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.