Distributional Multi-objective Black-box Optimization for Diffusion-model Inference-time Multi-Target Generation
- URL: http://arxiv.org/abs/2510.26278v1
- Date: Thu, 30 Oct 2025 09:00:42 GMT
- Title: Distributional Multi-objective Black-box Optimization for Diffusion-model Inference-time Multi-Target Generation
- Authors: Kim Yong Tan, Yueming Lyu, Ivor Tsang, Yew-Soon Ong
- Abstract summary: We propose the Inference-time Multi-target Generation (IMG) algorithm, which optimizes the diffusion process at inference time. Specifically, IMG performs weighted resampling during the diffusion generation process according to the expected aggregated multi-objective values. Experiments show that IMG, requiring only a single generation pass, achieves a significantly higher hypervolume than baseline optimization algorithms.
- Score: 42.27623008321844
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diffusion models have been successful in learning complex data distributions. This capability has driven their application to high-dimensional multi-objective black-box optimization problems. Existing approaches often wrap an external optimization loop, such as an evolutionary algorithm, around the diffusion model. However, these approaches treat the diffusion model as a black-box refiner, which overlooks the internal distribution transition of the diffusion generation process and limits their efficiency. To address these challenges, we propose the Inference-time Multi-target Generation (IMG) algorithm, which optimizes the diffusion process at inference time to generate samples that simultaneously satisfy multiple objectives. Specifically, IMG performs weighted resampling during the diffusion generation process according to the expected aggregated multi-objective values. This weighted resampling strategy ensures that the diffusion-generated samples are distributed according to our desired multi-target Boltzmann distribution. We further show that the multi-target Boltzmann distribution admits a log-likelihood interpretation: it is the optimal solution to the distributional multi-objective optimization problem. We implement IMG for a multi-objective molecule generation task. Experiments show that IMG, requiring only a single generation pass, achieves a significantly higher hypervolume than baseline optimization algorithms that often require hundreds of diffusion generations. Notably, our algorithm can be viewed as an optimized diffusion process and can be integrated into existing methods to further improve their performance.
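As a hedged illustration of the mechanism (a minimal sketch, not the authors' implementation: `denoise_step`, `predict_x0`, the objectives, and all hyperparameters below are placeholder assumptions), weighted resampling of a particle population during a single reverse-diffusion pass might look like this:

```python
import numpy as np

def img_style_resampling(denoise_step, predict_x0, objectives, lams,
                         n_particles=64, n_steps=50, beta=0.1, dim=8, seed=0):
    """One reverse-diffusion pass with Boltzmann-weighted resampling.

    Sketch only: targets p*(x) ∝ p(x) · exp(Σ_k λ_k f_k(x) / β), where p is
    the diffusion model's distribution and f_k are black-box objectives.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_particles, dim))      # x_T ~ N(0, I)
    for t in reversed(range(n_steps)):
        x = denoise_step(x, t)                       # one reverse diffusion step
        x0_hat = predict_x0(x, t)                    # predicted clean samples
        # Expected aggregated multi-objective value of each particle.
        scores = sum(lam * np.apply_along_axis(f, 1, x0_hat)
                     for lam, f in zip(lams, objectives))
        w = np.exp((scores - scores.max()) / beta)   # stable Boltzmann weights
        w /= w.sum()
        # Resample: high-value particles are duplicated, low-value ones dropped.
        x = x[rng.choice(n_particles, size=n_particles, p=w)]
    return x

# Toy stand-ins so the sketch runs end to end: a "denoiser" that contracts
# toward the origin, plus two competing quadratic objectives.
def shrink(x, t):
    rng = np.random.default_rng(t)
    return 0.95 * x + 0.01 * rng.standard_normal(x.shape)

f1 = lambda x: -np.sum((x - 1.0) ** 2)   # prefers x near +1
f2 = lambda x: -np.sum((x + 1.0) ** 2)   # prefers x near -1

samples = img_style_resampling(shrink, lambda x, t: x, [f1, f2], lams=[0.5, 0.5])
print(samples.mean(axis=0))              # population pulled toward the trade-off region
```

Multinomial resampling is the simplest choice here; a systematic (SMC-style) resampler would reduce variance without changing the targeted distribution.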
Related papers
- Discrete Diffusion Trajectory Alignment via Stepwise Decomposition [80.66701883088935]
We propose an offline preference optimization method to approach trajectory alignment for discrete diffusion models. We decompose the problem into a set of stepwise alignment objectives by matching the per-step posterior. Experiments across multiple domains, including DNA sequence design, protein inverse folding, and language modeling, consistently demonstrate the superiority of our approach.
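Read generically (a sketch of the idea, not the paper's exact objective), matching the per-step posterior replaces one trajectory-level alignment target with a sum of per-step terms:

```latex
\mathcal{L}(\theta) \;=\; \sum_{t=1}^{T}
\mathbb{E}_{x_t}\Big[\,
\mathrm{KL}\big(p^{*}(x_{t-1}\mid x_t)\,\big\|\,p_\theta(x_{t-1}\mid x_t)\big)
\Big],
```

where $p^{*}(x_{t-1}\mid x_t)$ denotes the preference-aligned posterior at step $t$.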
arXiv Detail & Related papers (2025-07-07T09:52:56Z)
- Test-time Alignment of Diffusion Models without Reward Over-optimization [8.981605934618349]
Diffusion models excel in generative tasks, but aligning them with specific objectives remains challenging. We propose a training-free, test-time method based on Sequential Monte Carlo (SMC) to sample from the reward-aligned target distribution. We demonstrate its effectiveness in single-reward optimization, multi-objective scenarios, and online black-box optimization.
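For context, the reward-aligned target that such SMC samplers typically approximate is the standard reward-tilted distribution (the notation below is generic, not taken from the paper):

```latex
p^{*}(x_0) \;\propto\; p_{\text{pre}}(x_0)\,\exp\!\big(r(x_0)/\alpha\big),
```

where $p_{\text{pre}}$ is the pretrained diffusion model's distribution, $r$ the reward, and $\alpha$ a temperature controlling how aggressively the reward is pursued (and hence the risk of over-optimization).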
arXiv Detail & Related papers (2025-01-10T09:10:30Z)
- DiffSG: A Generative Solver for Network Optimization with Diffusion Model [75.27274046562806]
Generative diffusion models are popular in various cross-domain applications. These models hold promise for tackling complex network optimization problems. We propose a new framework for generative diffusion models called Diffusion Model-based Solution Generation (DiffSG).
arXiv Detail & Related papers (2024-08-13T07:56:21Z)
- Understanding Reinforcement Learning-Based Fine-Tuning of Diffusion Models: A Tutorial and Review [63.31328039424469]
This tutorial provides a comprehensive survey of methods for fine-tuning diffusion models to optimize downstream reward functions.
We explain the application of various RL algorithms, including PPO, differentiable optimization, reward-weighted MLE, value-weighted sampling, and path consistency learning.
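To make one of the listed methods concrete, reward-weighted MLE (in its generic textbook form; the tutorial's exact weighting may differ) fine-tunes the model on its own samples, weighted by their reward:

```latex
\max_{\theta}\;\; \mathbb{E}_{x \sim p_{\theta_{\text{old}}}}
\big[\, w\!\left(r(x)\right) \log p_{\theta}(x) \,\big],
\qquad w(r) = \exp(r/\beta).
```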
arXiv Detail & Related papers (2024-07-18T17:35:32Z)
- Covariance-Adaptive Sequential Black-box Optimization for Diffusion Targeted Generation [60.41803046775034]
We show how to perform user-preferred targeted generation via diffusion models using only users' black-box target scores.
Experiments on both numerical test problems and target-guided 3D-molecule generation tasks show the superior performance of our method in achieving better target scores.
arXiv Detail & Related papers (2024-06-02T17:26:27Z)
- Diffusion Models as Constrained Samplers for Optimization with Unknown Constraints [55.39203337683045]
We propose to perform optimization within the data manifold using diffusion models. Depending on the differentiability of the objective function, we propose two different sampling methods. Our method achieves performance better than or comparable to previous state-of-the-art baselines.
arXiv Detail & Related papers (2024-02-28T03:09:12Z)
- Diffusion Model-Based Multiobjective Optimization for Gasoline Blending Scheduling [30.040728803996256]
Gasoline blending scheduling uses resource allocation and operation sequencing to meet a refinery's production requirements.
The presence of nonlinearity, integer constraints, and a large number of decision variables adds complexity to this problem.
This paper introduces a novel multiobjective optimization approach driven by a diffusion model, named DMO.
arXiv Detail & Related papers (2024-02-04T05:46:28Z)
- Where to Diffuse, How to Diffuse, and How to Get Back: Automated Learning for Multivariate Diffusions [22.04182099405728]
Diffusion-based generative models (DBGMs) perturb data to a target noise distribution and reverse this inference diffusion process to generate samples.
We show how to maximize a lower-bound on the likelihood for any number of auxiliary variables.
We then demonstrate how to parameterize the diffusion for a specified target noise distribution.
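The likelihood lower bound referenced here is, in its generic variational form (the paper's multivariate-diffusion bound with auxiliary variables is a specialization of this sketch):

```latex
\log p_\theta(x) \;\ge\;
\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log \frac{p_\theta(x, z)}{q_\phi(z \mid x)}\right],
```

where $z$ collects the auxiliary variables (e.g., velocity variables in a multivariate diffusion).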
arXiv Detail & Related papers (2023-02-14T18:57:04Z)
- Fast Inference in Denoising Diffusion Models via MMD Finetuning [23.779985842891705]
We present MMD-DDM, a novel method for fast sampling of diffusion models.
Our approach is based on the idea of using the Maximum Mean Discrepancy (MMD) to finetune the learned distribution with a given budget of timesteps.
Our findings show that the proposed method is able to produce high-quality samples in a fraction of the time required by widely-used diffusion models.
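The MMD criterion itself is standard; a minimal self-contained sketch of the unbiased squared-MMD estimator with an RBF kernel (the bandwidth `sigma` and the sample shapes are assumptions, not values from the paper) is:

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 sigma^2)), computed pairwise.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2_unbiased(x, y, sigma=1.0):
    # Unbiased estimate of MMD^2(P, Q) from samples x ~ P, y ~ Q:
    #   E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)], with diagonals excluded.
    m, n = len(x), len(y)
    kxx = rbf_kernel(x, x, sigma)
    kyy = rbf_kernel(y, y, sigma)
    kxy = rbf_kernel(x, y, sigma)
    term_x = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * kxy.mean()

# Usage: compare few-step model samples against data samples.
rng = np.random.default_rng(0)
print(mmd2_unbiased(rng.standard_normal((128, 8)),
                    rng.standard_normal((128, 8)) + 0.5))
```

Minimizing such a discrepancy between few-step model samples and real data is the finetuning signal described in the abstract.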
arXiv Detail & Related papers (2023-01-19T09:48:07Z)
- A Survey on Generative Diffusion Model [75.93774014861978]
Diffusion models are an emerging class of deep generative models.
They have certain limitations, including a time-consuming iterative generation process and confinement to high-dimensional Euclidean space.
This survey reviews a wide range of advanced techniques aimed at enhancing diffusion models.
arXiv Detail & Related papers (2022-09-06T16:56:21Z)