Diffusion Model-Based Multiobjective Optimization for Gasoline Blending
Scheduling
- URL: http://arxiv.org/abs/2402.14600v1
- Date: Sun, 4 Feb 2024 05:46:28 GMT
- Title: Diffusion Model-Based Multiobjective Optimization for Gasoline Blending
Scheduling
- Authors: Wenxuan Fang and Wei Du and Renchu He and Yang Tang and Yaochu Jin and
Gary G. Yen
- Abstract summary: Gasoline blending scheduling uses resource allocation and operation sequencing to meet a refinery's production requirements.
The presence of nonlinearity, integer constraints, and a large number of decision variables adds complexity to this problem.
This paper introduces a novel multiobjective optimization approach driven by a diffusion model (named DMO).
- Score: 30.040728803996256
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gasoline blending scheduling uses resource allocation and operation
sequencing to meet a refinery's production requirements. The presence of
nonlinearity, integer constraints, and a large number of decision variables
adds complexity to this problem, posing challenges for traditional and
evolutionary algorithms. This paper introduces a novel multiobjective
optimization approach driven by a diffusion model (named DMO), which is
designed specifically for gasoline blending scheduling. To address integer
constraints and generate feasible schedules, the diffusion model creates
multiple intermediate distributions between Gaussian noise and the feasible
domain. Through iterative processes, the solutions transition from Gaussian
noise to feasible schedules while optimizing the objectives using the gradient
descent method. DMO achieves simultaneous objective optimization and constraint
adherence. Comparative tests are conducted to evaluate DMO's performance across
various scales. The experimental results demonstrate that DMO surpasses
state-of-the-art multiobjective evolutionary algorithms in terms of efficiency
when solving gasoline blending scheduling problems.
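As a rough illustration of the mechanism described above, the sketch below interleaves a reverse-diffusion (denoising) step with a gradient-descent step on the objectives, then rounds part of the solution vector to satisfy integer constraints. It is a minimal toy under stated assumptions, not the paper's implementation: the denoiser, objective, problem dimensions, and integer handling are all hypothetical placeholders.

```python
# Minimal sketch of guided reverse diffusion for schedule generation.
# The denoiser, objective, and dimensions are hypothetical stand-ins;
# DMO's actual formulation may differ substantially.
import numpy as np

T = 50       # number of diffusion steps (assumed)
DIM = 32     # flattened schedule length (assumed)
POP = 16     # number of candidate schedules refined in parallel
STEP = 0.05  # gradient step size for objective guidance (assumed)

def denoise(x, t):
    """Placeholder for a trained denoiser that pulls noisy samples toward
    the feasible domain; here it simply shrinks samples toward a unit box."""
    return x * (1.0 - 1.0 / (t + 1)) + np.clip(x, 0.0, 1.0) / (t + 1)

def objective_grad(x):
    """Gradient of a toy scalarized objective (e.g. blending cost);
    the real problem is multiobjective."""
    target = np.full_like(x, 0.5)
    return 2.0 * (x - target)

def project_integer(x):
    """Crude handling of integer constraints by rounding a subset of
    variables; DMO instead shapes intermediate distributions for this."""
    x_int = x.copy()
    x_int[:, :DIM // 2] = np.round(x_int[:, :DIM // 2])
    return x_int

rng = np.random.default_rng(0)
x = rng.standard_normal((POP, DIM))    # start from Gaussian noise

for t in reversed(range(T)):
    x = denoise(x, t)                  # move toward the feasible domain
    x = x - STEP * objective_grad(x)   # gradient step on the objectives

schedules = project_integer(x)         # integer-feasible candidate schedules
print(schedules.shape)                 # (16, 32)
```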
Related papers
- Harnessing the Power of Gradient-Based Simulations for Multi-Objective Optimization in Particle Accelerators [5.565261874218803]
This paper demonstrates the power of differentiability for solving MOO problems using a Deep Differentiable Reinforcement Learning algorithm in particle accelerators.
The underlying problem enforces strict constraints on both individual states and actions, as well as a cumulative (global) constraint on the energy requirements of the beam.
arXiv Detail & Related papers (2024-11-07T15:55:05Z) - DiffSG: A Generative Solver for Network Optimization with Diffusion Model [75.27274046562806]
Diffusion generative models can consider a broader range of solutions and exhibit stronger generalization by learning the underlying solution distribution.
We propose a new framework, which leverages intrinsic distribution learning of diffusion generative models to learn high-quality solutions.
arXiv Detail & Related papers (2024-08-13T07:56:21Z) - Optimizing Diffusion Models for Joint Trajectory Prediction and Controllable Generation [49.49868273653921]
Diffusion models are promising for joint trajectory prediction and controllable generation in autonomous driving.
We introduce Optimal Gaussian Diffusion (OGD) and Estimated Clean Manifold (ECM) Guidance.
Our methodology streamlines the generative process, enabling practical applications with reduced computational overhead.
arXiv Detail & Related papers (2024-08-01T17:59:59Z) - TMPQ-DM: Joint Timestep Reduction and Quantization Precision Selection for Efficient Diffusion Models [40.5153344875351]
We introduce TMPQ-DM, which jointly optimizes timestep reduction and quantization to achieve a superior performance-efficiency trade-off.
For timestep reduction, we devise a non-uniform grouping scheme tailored to the non-uniform nature of the denoising process.
In terms of quantization, we adopt a fine-grained layer-wise approach to allocate varying bit-widths to different layers based on their respective contributions to the final generative performance.
arXiv Detail & Related papers (2024-04-15T07:51:40Z) - M-HOF-Opt: Multi-Objective Hierarchical Output Feedback Optimization via Multiplier Induced Loss Landscape Scheduling [4.499391876093543]
We address the online choice of weight multipliers for multi-objective optimization of many loss terms parameterized by neural networks.
Our method is multiplier-free and operates at the timescale of epochs.
It also circumvents the excessive memory requirements and heavy computational burden of existing multi-objective deep learning methods.
arXiv Detail & Related papers (2024-03-20T16:38:26Z) - Gaussian Mixture Solvers for Diffusion Models [84.83349474361204]
We introduce a novel class of SDE-based solvers called GMS for diffusion models.
Our solver outperforms numerous SDE-based solvers in terms of sample quality in image generation and stroke-based synthesis.
arXiv Detail & Related papers (2023-11-02T02:05:38Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes algorithms for distributed federated learning.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - AdaDiff: Accelerating Diffusion Models through Step-Wise Adaptive Computation [32.74923906921339]
Diffusion models achieve great success in generating diverse and high-fidelity images, yet their widespread application is hampered by their inherently slow generation speed.
We propose AdaDiff, an adaptive framework that dynamically allocates computation resources in each sampling step to improve the generation efficiency of diffusion models.
arXiv Detail & Related papers (2023-09-29T09:10:04Z) - Multi-Agent Deep Reinforcement Learning in Vehicular OCC [14.685237010856953]
We introduce a spectral efficiency optimization approach in vehicular OCC.
We model the optimization problem as a Markov decision process (MDP) to enable the use of solutions that can be applied online.
We verify the performance of our proposed scheme through extensive simulations and compare it with various variants of our approach and a random method.
arXiv Detail & Related papers (2022-05-05T14:25:54Z) - Combining Deep Learning and Optimization for Security-Constrained
Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of APR within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z) - GACEM: Generalized Autoregressive Cross Entropy Method for Multi-Modal
Black Box Constraint Satisfaction [69.94831587339539]
We present a modified Cross-Entropy Method (CEM) that uses a masked auto-regressive neural network for modeling uniform distributions over the solution space.
Our algorithm is able to express complicated solution spaces, thus allowing it to track a variety of different solution regions.
arXiv Detail & Related papers (2020-02-17T20:21:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.