Taming Lagrangian Chaos with Multi-Objective Reinforcement Learning
- URL: http://arxiv.org/abs/2212.09612v1
- Date: Mon, 19 Dec 2022 16:50:58 GMT
- Title: Taming Lagrangian Chaos with Multi-Objective Reinforcement Learning
- Authors: Chiara Calascibetta, Luca Biferale, Francesco Borra, Antonio Celani and Massimo Cencini
- Abstract summary: We consider the problem of two active particles in 2D complex flows with the multi-objective goals of minimizing both the dispersion rate and the energy consumption of the pair.
We approach the problem by means of Multi-Objective Reinforcement Learning (MORL), combining scalarization techniques with a Q-learning algorithm, for Lagrangian drifters that have variable swimming velocity.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of two active particles in 2D complex flows with the
multi-objective goals of minimizing both the dispersion rate and the energy
consumption of the pair. We approach the problem by means of Multi-Objective
Reinforcement Learning (MORL), combining scalarization techniques with a
Q-learning algorithm, for Lagrangian drifters that have variable swimming
velocity. We show that MORL is able to find a set of trade-off solutions
forming an optimal Pareto frontier. As a benchmark, we show that a set of
heuristic strategies is dominated by the MORL solutions. We consider the
situation in which the agents cannot update their control variables
continuously, but only after a discrete (decision) time, $\tau$. We show that
there is a range of decision times, in between the Lyapunov time and the
continuous updating limit, where Reinforcement Learning finds strategies that
significantly improve over heuristics. In particular, we discuss how large
decision times require enhanced knowledge of the flow, whereas for smaller
$\tau$ all a priori heuristic strategies become Pareto optimal.
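The combination of linear scalarization and tabular Q-learning lends itself to a compact illustration. Below is a minimal Python sketch: the random toy environment, the state/action discretization, and the two cost signals are illustrative assumptions standing in for the actual 2D flow and swimming controls, not the paper's implementation.

```python
import numpy as np

# Scalarized Q-learning sketch for two objectives: dispersion rate vs.
# energy consumption. Environment dynamics and costs are toy stand-ins.
rng = np.random.default_rng(0)
n_states, n_actions = 16, 4          # coarse flow states, swim controls
alpha, gamma, eps = 0.1, 0.99, 0.1   # learning rate, discount, exploration

def step(s, a):
    """Hypothetical environment: next state plus the two costs."""
    s_next = rng.integers(n_states)
    dispersion_cost = rng.random()   # stand-in for pair-separation growth
    energy_cost = 0.25 * a           # stand-in for swimming effort
    return s_next, np.array([dispersion_cost, energy_cost])

def train(w, episodes=300, horizon=50):
    """Q-learning on the scalarized reward r = -w . costs."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = rng.integers(n_states)
        for _ in range(horizon):
            a = rng.integers(n_actions) if rng.random() < eps else Q[s].argmax()
            s_next, costs = step(s, a)
            Q[s, a] += alpha * (-w @ costs + gamma * Q[s_next].max() - Q[s, a])
            s = s_next
    return Q

# Sweeping the weight traces out candidate Pareto trade-offs.
policies = {lam: train(np.array([lam, 1 - lam])) for lam in np.linspace(0, 1, 5)}
```

Each weight vector yields one policy; collecting the non-dominated (dispersion, energy) outcomes across the sweep approximates the Pareto frontier.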
Related papers
- Q-VLM: Post-training Quantization for Large Vision-Language Models [73.19871905102545]
We propose a post-training quantization framework for large vision-language models (LVLMs) for efficient multi-modal inference.
We mine the cross-layer dependency that significantly influences discretization errors of the entire vision-language model, and embed this dependency into the optimal quantization strategy.
Experimental results demonstrate that the method compresses memory by 2.78x and increases generation speed by 1.44x on the 13B LLaVA model without performance degradation.
arXiv Detail & Related papers (2024-10-10T17:02:48Z)
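Q-VLM's cross-layer dependency mining is the paper's contribution and is not reproduced here; the sketch below shows only the generic post-training quantization primitive such a framework builds on, with a grid search for an error-minimizing scale. Bit-width and tensor shapes are arbitrary assumptions.

```python
import numpy as np

# Per-tensor post-training quantization sketch: choose the scale that
# minimizes round-trip (discretization) error on a weight tensor.
def quantize(w, n_bits=4):
    qmax = 2 ** (n_bits - 1) - 1
    best_scale, best_err = None, np.inf
    # Grid-search candidate scales around the naive max-abs choice.
    for frac in np.linspace(0.5, 1.0, 20):
        scale = frac * np.abs(w).max() / qmax
        q = np.clip(np.round(w / scale), -qmax - 1, qmax)
        err = np.mean((w - q * scale) ** 2)
        if err < best_err:
            best_scale, best_err = scale, err
    q = np.clip(np.round(w / best_scale), -qmax - 1, qmax).astype(np.int8)
    return q, best_scale

w = np.random.default_rng(0).normal(size=(256, 256)).astype(np.float32)
q, scale = quantize(w)
print("reconstruction mse:", np.mean((w - q * scale) ** 2))
```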
- M$^{2}$M: Learning controllable Multi of experts and multi-scale operators are the Partial Differential Equations need [43.534771810528305]
This paper introduces a framework of multi-scale and multi-expert (M$^2$M) neural operators to simulate and learn PDEs efficiently.
We employ a divide-and-conquer strategy to train a multi-expert gated network for the dynamic router policy.
Our method incorporates a controllable prior gating mechanism that determines the selection rights of experts, enhancing the model's efficiency.
arXiv Detail & Related papers (2024-10-01T15:42:09Z)
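A minimal sketch of a gated mixture of experts with a controllable prior blended into the routing scores. The uniform prior, linear experts, and top-k routing below are illustrative assumptions, not the M$^2$M architecture.

```python
import numpy as np

# Gated mixture of experts with a fixed prior mixed into the router scores.
rng = np.random.default_rng(0)
d, n_experts = 8, 4
W_gate = rng.normal(size=(d, n_experts))          # router parameters
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
prior = np.full(n_experts, 1.0 / n_experts)       # controllable prior gate

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x, prior_weight=0.3, top_k=2):
    scores = softmax(x @ W_gate)
    # The prior biases expert selection; prior_weight controls its strength.
    scores = (1 - prior_weight) * scores + prior_weight * prior
    top = np.argsort(scores)[-top_k:]             # route to top-k experts only
    gate = scores[top] / scores[top].sum()
    return sum(g * (x @ experts[i]) for g, i in zip(gate, top))

y = forward(rng.normal(size=d))
```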
- A Re-solving Heuristic for Dynamic Assortment Optimization with Knapsack Constraints [14.990988698038686]
We consider a multi-stage dynamic assortment optimization problem with multinomial logit (MNL) choice modeling under resource knapsack constraints.
With the exact optimal dynamic assortment solution being computationally intractable, a practical strategy is to adopt the re-solving technique that periodically re-optimizes deterministic linear programs.
We propose a new epoch-based re-solving algorithm that effectively transforms the denominator of the objective into the constraint.
arXiv Detail & Related papers (2024-07-08T02:40:20Z)
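The re-solving idea is easy to sketch in a simplified quantity-based setting, ignoring the MNL choice model and the paper's epoch construction: each epoch re-optimizes a deterministic LP over the remaining inventory and horizon and acts on its fractional solution. All problem data below are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Epoch-based re-solving sketch (toy; ignores within-epoch stockouts).
rng = np.random.default_rng(0)
n_products, n_resources, T = 3, 2, 100
revenue = np.array([1.0, 1.5, 2.0])
A = rng.uniform(0.1, 0.5, size=(n_resources, n_products))  # resource use
inventory = np.array([20.0, 15.0])

t, epoch_len = 0, 10
while t < T and inventory.min() > 0:
    remaining = T - t
    # Deterministic LP: maximize revenue . x  s.t.  A x <= inventory,
    # 0 <= x <= remaining periods. Re-solved at every epoch start.
    res = linprog(-revenue, A_ub=A, b_ub=inventory,
                  bounds=[(0, remaining)] * n_products, method="highs")
    rates = res.x / remaining              # offer intensity per period
    for _ in range(min(epoch_len, T - t)):
        sold = rng.random(n_products) < rates   # toy demand realization
        inventory -= A @ sold
        t += 1
```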
- Towards Efficient Pareto Set Approximation via Mixture of Experts Based Model Fusion [53.33473557562837]
Solving multi-objective optimization problems for large deep neural networks is a challenging task due to the complexity of the loss landscape and the expensive computational cost.
We propose a practical and scalable approach to solve this problem via mixture of experts (MoE) based model fusion.
By ensembling the weights of specialized single-task models, the MoE module can effectively capture the trade-offs between multiple objectives.
arXiv Detail & Related papers (2024-06-14T07:16:18Z)
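A hedged sketch of the underlying weight-fusion idea: interpolating the parameters of two single-task specialists with a preference coefficient produces one candidate trade-off model per preference. The parameter dictionaries are hypothetical, and the paper's MoE gating is not reproduced.

```python
import numpy as np

# Weight-space fusion of two single-task specialists (same architecture).
rng = np.random.default_rng(0)
model_task_a = {"W": rng.normal(size=(4, 4)), "b": rng.normal(size=4)}
model_task_b = {"W": rng.normal(size=(4, 4)), "b": rng.normal(size=4)}

def fuse(lam):
    """Interpolate parameters; lam=1 recovers task A's specialist."""
    return {k: lam * model_task_a[k] + (1 - lam) * model_task_b[k]
            for k in model_task_a}

# Sweeping lam yields a family of models trading off the two tasks.
pareto_candidates = [fuse(lam) for lam in np.linspace(0, 1, 11)]
```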
- Towards Geometry-Aware Pareto Set Learning for Neural Multi-Objective Combinatorial Optimization [19.631213689157995]
Multi-objective combinatorial optimization (MOCO) problems are prevalent in various real-world applications.
Most existing neural MOCO methods rely on problem decomposition to transform an MOCO problem into a series of single-objective combinatorial optimization (SOCO) problems.
These methods often approximate only partial regions of the Pareto front, because precise hypervolume calculation is ambiguous and time-consuming.
arXiv Detail & Related papers (2024-05-14T13:42:19Z)
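Decomposition itself is simple to illustrate: a weighted-Tchebycheff scalarization turns the MOCO problem into one single-objective subproblem per weight vector. The candidate objective vectors below are random stand-ins for real solutions.

```python
import numpy as np

# Weighted-Tchebycheff decomposition sketch: one SOCO subproblem per weight.
rng = np.random.default_rng(0)
candidates = rng.random((50, 2))     # objective vectors of candidate solutions
z_ideal = candidates.min(axis=0)     # ideal point (best value per objective)

def tchebycheff(f, w):
    return np.max(w * np.abs(f - z_ideal))

# Solving each scalarized subproblem approximates the Pareto front.
front = []
for lam in np.linspace(0.01, 0.99, 9):
    w = np.array([lam, 1 - lam])
    front.append(min(candidates, key=lambda f: tchebycheff(f, w)))
```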
- UCB-driven Utility Function Search for Multi-objective Reinforcement Learning [75.11267478778295]
In Multi-objective Reinforcement Learning (MORL), agents are tasked with optimising decision-making behaviours that trade off multiple, possibly conflicting objectives.
We focus on the case of linear utility functions parameterised by weight vectors w.
We introduce a method based on Upper Confidence Bound to efficiently search for the most promising weight vectors during different stages of the learning process.
arXiv Detail & Related papers (2024-05-01T09:34:42Z)
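A minimal bandit-style sketch of the idea: each arm is a candidate weight vector, its payoff the noisy scalarized return of a policy trained under it, and UCB balances exploring weights against exploiting promising ones. The evaluation function is a hypothetical stand-in for a full MORL training run.

```python
import numpy as np

# UCB over a discrete set of candidate utility weight vectors w.
rng = np.random.default_rng(0)
weights = [np.array([lam, 1 - lam]) for lam in np.linspace(0, 1, 6)]
counts = np.zeros(len(weights))
means = np.zeros(len(weights))

def evaluate(w):
    """Stand-in for training/evaluating a policy under utility w."""
    return w @ np.array([0.7, 0.4]) + 0.1 * rng.normal()

for t in range(1, 200):
    if counts.min() == 0:                       # play each arm once first
        arm = int(np.argmin(counts))
    else:                                       # then UCB exploration bonus
        arm = int(np.argmax(means + np.sqrt(2 * np.log(t) / counts)))
    r = evaluate(weights[arm])
    counts[arm] += 1
    means[arm] += (r - means[arm]) / counts[arm]

best_weight = weights[int(np.argmax(means))]
```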
- FlowPG: Action-constrained Policy Gradient with Normalizing Flows [14.98383953401637]
Action-constrained reinforcement learning (ACRL) is a popular approach for solving safety-critical and resource-allocation-related decision-making problems.
A major challenge in ACRL is to ensure that the agent takes a valid action satisfying the constraints at each step.
arXiv Detail & Related papers (2024-02-07T11:11:46Z)
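The flow idea can be illustrated with a degenerate fixed bijection: an affine-tanh map sends any unconstrained latent sample into a box of valid actions, so feasibility holds by construction. FlowPG learns this map with normalizing flows for general constraint sets; the bounds below are hypothetical.

```python
import numpy as np

# Fixed invertible map R^d -> box (low, high): a one-layer "flow" that
# guarantees every emitted action is feasible for a box constraint.
low, high = np.array([0.0, 0.0]), np.array([1.0, 2.0])

def flow(z):
    """Bijection from unconstrained latents to the open box (low, high)."""
    return low + (high - low) * (np.tanh(z) + 1) / 2

def flow_inverse(a):
    u = 2 * (a - low) / (high - low) - 1
    return np.arctanh(u)

z = np.random.default_rng(0).normal(size=2)   # latent policy sample
action = flow(z)                              # feasible by construction
```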
- Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness [59.948529997062586]
It is unclear whether existing robust training methods effectively increase the margin for each vulnerable point during training.
We propose a continuous-time framework for quantifying the relative speed of the decision boundary with respect to each individual point.
We propose Dynamics-aware Robust Training (DyART), which encourages the decision boundary to engage in movement that prioritizes increasing smaller margins.
arXiv Detail & Related papers (2023-02-06T18:54:58Z)
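DyART's continuous-time formulation is not reproduced here; the sketch shows the prioritization principle on a linear classifier, where margins are available in closed form and smaller margins receive exponentially larger loss weights. The data and weighting schedule are invented.

```python
import numpy as np

# Margin-prioritized training on a linear classifier: points with smaller
# signed margins get larger loss weights, so updates concentrate on moving
# the boundary away from the most vulnerable points.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])
w, b, lr = rng.normal(size=2), 0.0, 0.1

for _ in range(100):
    margins = y * (X @ w + b) / np.linalg.norm(w)   # signed distances
    weights = np.exp(-2.0 * margins)                # prioritize small margins
    p = 1 / (1 + np.exp(y * (X @ w + b)))           # logistic misclass. prob.
    w -= lr * (-(weights * p * y) @ X / len(X))     # weighted gradient step
    b -= lr * (-(weights * p * y).mean())
```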
- Adversarially Robust Learning for Security-Constrained Optimal Power Flow [55.816266355623085]
We tackle the problem of N-k security-constrained optimal power flow (SCOPF).
N-k SCOPF is a core problem for the operation of electrical grids.
Inspired by methods in adversarially robust training, we frame N-k SCOPF as a minimax optimization problem.
arXiv Detail & Related papers (2021-11-12T22:08:10Z)
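The minimax framing reduces to an inner maximization over perturbations and an outer minimization of the resulting worst-case cost. Below is a toy quadratic stand-in for the SCOPF objective with a sign-gradient inner loop in the style of adversarial training; none of it reflects real grid constraints.

```python
import numpy as np

# Minimax sketch: inner loop finds a worst-case perturbation of conditions,
# outer step updates the decision against that worst case.
rng = np.random.default_rng(0)
x = np.zeros(4)                      # operating decision
load = rng.normal(size=4)            # nominal conditions
eps_ball, lr = 0.5, 0.05

def grad_x(x, delta):
    """Gradient of the toy cost sum((x - (load + delta))**2) w.r.t. x."""
    return 2 * (x - (load + delta))

for _ in range(200):
    # Inner maximization: sign-gradient ascent within an eps-ball
    # (a common adversarial-training heuristic).
    delta = np.zeros(4)
    for _ in range(5):
        delta = np.clip(delta + 0.1 * np.sign(-grad_x(x, delta)),
                        -eps_ball, eps_ball)
    # Outer minimization against the worst case found.
    x -= lr * grad_x(x, delta)
```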
- Provable Multi-Objective Reinforcement Learning with Generative Models [98.19879408649848]
We study the problem of single policy MORL, which learns an optimal policy given the preference of objectives.
Existing methods require strong assumptions such as exact knowledge of the multi-objective decision process.
We propose a new algorithm called model-based envelope value iteration (EVI), which generalizes the enveloped multi-objective $Q$-learning algorithm.
arXiv Detail & Related papers (2020-11-19T22:35:31Z)
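A hedged tabular sketch of the envelope idea: the bootstrap target optimizes the scalarized value over both next actions and a set of candidate preferences. The toy environment, preference grid, and tabular form are assumptions; the actual EVI algorithm is model-based and comes with provable guarantees.

```python
import numpy as np

# Envelope-style multi-objective Q-update: one vector-valued Q-table per
# training preference; targets maximize w . Q over next actions AND prefs.
rng = np.random.default_rng(0)
n_states, n_actions, n_obj = 8, 3, 2
prefs = [np.array([lam, 1 - lam]) for lam in np.linspace(0, 1, 5)]
Q = np.zeros((len(prefs), n_states, n_actions, n_obj))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(s, a):
    """Hypothetical environment with a 2-dimensional reward vector."""
    return rng.integers(n_states), rng.random(n_obj)

s = rng.integers(n_states)
for _ in range(5000):
    i = rng.integers(len(prefs))                 # sample a training preference
    w = prefs[i]
    a = rng.integers(n_actions) if rng.random() < eps else (Q[i, s] @ w).argmax()
    s_next, r = step(s, a)
    # Envelope target: best scalarized value over next actions and prefs.
    flat = Q[:, s_next].reshape(-1, n_obj)       # (n_prefs * n_actions, n_obj)
    target = r + gamma * flat[(flat @ w).argmax()]
    Q[i, s, a] += alpha * (target - Q[i, s, a])
    s = s_next
```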