In-Context Multi-Objective Optimization
- URL: http://arxiv.org/abs/2512.11114v1
- Date: Thu, 11 Dec 2025 20:56:42 GMT
- Title: In-Context Multi-Objective Optimization
- Authors: Xinyu Zhang, Conor Hassan, Julien Martinelli, Daolang Huang, Samuel Kaski,
- Abstract summary: TAMO is a fully amortized, universal policy for multi-objective black-box optimization.
- Score: 24.738414334054358
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Balancing competing objectives is omnipresent across disciplines, from drug design to autonomous systems. Multi-objective Bayesian optimization is a promising solution for such expensive, black-box problems: it fits probabilistic surrogates and selects new designs via an acquisition function that balances exploration and exploitation. In practice, it requires tailored choices of surrogate and acquisition that rarely transfer to the next problem, is myopic when multi-step planning is often required, and adds refitting overhead, particularly in parallel or time-sensitive loops. We present TAMO, a fully amortized, universal policy for multi-objective black-box optimization. TAMO uses a transformer architecture that operates across varying input and objective dimensions, enabling pretraining on diverse corpora and transfer to new problems without retraining: at test time, the pretrained model proposes the next design with a single forward pass. We pretrain the policy with reinforcement learning to maximize cumulative hypervolume improvement over full trajectories, conditioning on the entire query history to approximate the Pareto frontier. Across synthetic benchmarks and real tasks, TAMO produces fast proposals, reducing proposal time by 50-1000x versus alternatives while matching or improving Pareto quality under tight evaluation budgets. These results show that transformers can perform multi-objective optimization entirely in-context, eliminating per-task surrogate fitting and acquisition engineering, and open a path to foundation-style, plug-and-play optimizers for scientific discovery workflows.
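The training objective above rewards cumulative hypervolume improvement, i.e. how much each proposed design grows the region dominated by the current Pareto front. The following is a minimal sketch of that quantity for two maximized objectives; the function names (`hypervolume_2d`, `hvi`) are illustrative and not taken from the TAMO paper.

```python
# Hedged sketch: 2D hypervolume (HV) and hypervolume improvement (HVI)
# under maximization of both objectives. Illustrative only; not the
# paper's implementation.

def hypervolume_2d(points, ref):
    """Area dominated by `points` relative to reference point `ref`."""
    # Keep only points that strictly dominate the reference point.
    pts = [p for p in points if p[0] > ref[0] and p[1] > ref[1]]
    # Reduce to the non-dominated (Pareto) set.
    front = [p for p in pts
             if not any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in pts)]
    # Sweep right-to-left in the first objective, summing rectangular slabs.
    front.sort(key=lambda p: p[0], reverse=True)
    hv, prev_y = 0.0, ref[1]
    for x, y in front:
        hv += (x - ref[0]) * (y - prev_y)
        prev_y = y
    return hv

def hvi(points, candidate, ref):
    """Hypervolume improvement contributed by a new candidate design."""
    return hypervolume_2d(points + [candidate], ref) - hypervolume_2d(points, ref)
```

For example, with reference point (0, 0), the front {(3, 1), (1, 3)} dominates area 5, so adding (1, 3) to the singleton front {(3, 1)} yields an HVI of 2; the RL objective sums such increments over the whole query trajectory.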
Related papers
- Task-free Adaptive Meta Black-box Optimization [55.461814601130044]
We propose the Adaptive meta Black-box Optimization Model (ABOM), which performs online parameter adaptation using solely optimization data from the target task. Unlike conventional metaBBO frameworks that decouple meta-training and optimization phases, ABOM introduces a closed-loop parameter learning mechanism in which parameterized evolutionary operators continuously self-update. This paradigm shift enables zero-shot optimization: ABOM achieves competitive performance on synthetic BBO benchmarks and realistic unmanned aerial vehicle path planning problems without any handcrafted training tasks.
arXiv Detail & Related papers (2026-01-29T09:54:10Z) - Neural Nonmyopic Bayesian Optimization in Dynamic Cost Settings [73.44599934855067]
LookaHES is a nonmyopic BO framework designed for dynamic, history-dependent cost environments. LookaHES combines a multi-step variant of $H$-Entropy Search with pathwise sampling and neural policy optimization. Our innovation is the integration of neural policies, including large language models, to effectively navigate structured, domain-specific action spaces.
arXiv Detail & Related papers (2026-01-10T09:49:45Z) - Flow Density Control: Generative Optimization Beyond Entropy-Regularized Fine-Tuning [59.11663802446183]
Flow and diffusion generative models can be adapted to optimize task-specific objectives while preserving prior information. We introduce Flow Density Control (FDC), a simple algorithm that reduces this complex problem to a specific sequence of simpler fine-tuning tasks. We derive convergence guarantees for the proposed scheme under realistic assumptions by leveraging recent understanding of mirror flows.
arXiv Detail & Related papers (2025-11-27T17:19:01Z) - Parametric Expensive Multi-Objective Optimization via Generative Solution Modeling [34.344228998247225]
This paper introduces the first parametric multi-objective Bayesian optimization method that learns this inverse model by alternating between acquisition-driven search and generative models. We theoretically justify the faster convergence by leveraging inter-task synergies through task-aware Gaussian processes.
arXiv Detail & Related papers (2025-11-12T15:13:27Z) - A Unified Multi-Task Learning Framework for Generative Auto-Bidding with Validation-Aligned Optimization [51.27959658504722]
Multi-task learning offers a principled framework to train these tasks jointly through shared representations. Existing multi-task optimization strategies are primarily guided by training dynamics and often generalize poorly in volatile bidding environments. We present Validation-Aligned Multi-task Optimization (VAMO), which adaptively assigns task weights based on the alignment between per-task training gradients and a held-out validation gradient.
arXiv Detail & Related papers (2025-10-09T03:59:51Z) - FoMEMO: Towards Foundation Models for Expensive Multi-objective Optimization [19.69959362934787]
We propose a new paradigm named FoMEMO, which enables the establishment of a foundation model conditioned on any domain trajectory and user preference. Rather than relying on extensive real-world domain experiments, we demonstrate that pre-training the foundation model with a diverse set of hundreds of millions of synthetic data points can lead to superior adaptability to unknown problems.
arXiv Detail & Related papers (2025-09-03T12:00:24Z) - MPFormer: Adaptive Framework for Industrial Multi-Task Personalized Sequential Retriever [22.507173183511153]
MPFormer is a dynamic multi-task Transformer framework for industrial recommendation systems. It is successfully integrated into the Kuaishou short video recommendation system, serving over 400 million daily active users.
arXiv Detail & Related papers (2025-08-28T03:53:55Z) - Preference-Guided Diffusion for Multi-Objective Offline Optimization [64.08326521234228]
We propose a preference-guided diffusion model for offline multi-objective optimization. Our guidance is a preference model trained to predict the probability that one design dominates another. Our results highlight the effectiveness of classifier-guided diffusion models in generating diverse and high-quality solutions.
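The dominance relation that this preference model is trained to predict is the standard Pareto one. A minimal sketch, assuming maximization of all objectives (the function name `dominates` is illustrative, not from the paper):

```python
# Hedged sketch of Pareto dominance under maximization: design `a`
# dominates `b` if it is at least as good in every objective and
# strictly better in at least one. Illustrative only.

def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (maximization)."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))
```

The preference model replaces this hard binary relation with a learned probability, which provides a smooth guidance signal for the diffusion sampler.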
arXiv Detail & Related papers (2025-03-21T16:49:38Z) - Continual Optimization with Symmetry Teleportation for Multi-Task Learning [73.28772872740744]
Multi-task learning (MTL) enables the simultaneous learning of multiple tasks using a single model. We propose a novel approach based on Continual Optimization with Symmetry Teleportation (COST). COST seeks an alternative loss-equivalent point on the loss landscape to reduce conflicting gradients.
arXiv Detail & Related papers (2025-03-06T02:58:09Z) - Multi-Fidelity Bayesian Optimization With Across-Task Transferable Max-Value Entropy Search [36.14499894307206]
This paper introduces a novel information-theoretic acquisition function that balances the need to acquire information about the current task with the goal of collecting information transferable to future tasks. Results show that the proposed acquisition strategy can significantly improve the optimization efficiency as soon as a sufficient number of tasks is processed.
arXiv Detail & Related papers (2024-03-14T17:00:01Z) - MORL-Prompt: An Empirical Analysis of Multi-Objective Reinforcement Learning for Discrete Prompt Optimization [45.410121761165634]
RL-based techniques can be employed to search for prompts that, when fed into a target language model, maximize a set of user-specified reward functions.
Current techniques focus on maximizing the average of reward functions, which does not necessarily lead to prompts that achieve balance across rewards.
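The imbalance problem described above is easy to see numerically: mean aggregation can prefer a reward vector that sacrifices one objective entirely, while a min-based (egalitarian) aggregate does not. A minimal sketch with illustrative numbers, not data from the paper:

```python
# Hedged sketch: mean aggregation vs. min aggregation of per-objective
# rewards for two hypothetical prompts. Numbers are illustrative.

unbalanced = [1.0, 0.2]   # high average, poor balance across rewards
balanced   = [0.5, 0.5]

mean_u, mean_b = sum(unbalanced) / 2, sum(balanced) / 2
min_u,  min_b  = min(unbalanced), min(balanced)

assert mean_u > mean_b    # averaging favors the unbalanced prompt
assert min_b > min_u      # min aggregation favors the balanced one
```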
arXiv Detail & Related papers (2024-02-18T21:25:09Z) - Studying Evolutionary Solution Adaption Using a Flexibility Benchmark Based on a Metal Cutting Process [39.05320053926048]
We consider optimizing for different production requirements from the viewpoint of a bio-inspired framework for system flexibility. We study the flexibility of NSGA-II, which we extend by two variants: 1) varying goals, which optimizes solutions for two tasks simultaneously to obtain in-between source solutions expected to be more adaptable, and 2) active-inactive genotype, which accommodates different possibilities that can be activated or deactivated.
arXiv Detail & Related papers (2023-05-31T12:07:50Z) - Multi-Fidelity Multi-Objective Bayesian Optimization: An Output Space Entropy Search Approach [44.25245545568633]
We study the novel problem of blackbox optimization of multiple objectives via multi-fidelity function evaluations.
Our experiments on several synthetic and real-world benchmark problems show that MF-OSEMO, with both approximations, significantly improves over the state-of-the-art single-fidelity algorithms.
arXiv Detail & Related papers (2020-11-02T06:59:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.