Controlling Topological Defects in Polar Fluids via Reinforcement Learning
- URL: http://arxiv.org/abs/2507.19298v1
- Date: Fri, 25 Jul 2025 14:12:11 GMT
- Title: Controlling Topological Defects in Polar Fluids via Reinforcement Learning
- Authors: Abhinav Singh, Petros Koumoutsakos
- Abstract summary: We investigate closed-loop steering of integer-charged defects in a confined active fluid. We show that localized control of active stress induces flow fields that can reposition and direct defects along prescribed trajectories. Results highlight how AI agents can learn the underlying dynamics and spatially structure activity to manipulate topological excitations.
- Score: 1.523267496998255
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Topological defects in active polar fluids exhibit complex dynamics driven by internally generated stresses, reflecting the deep interplay between topology, flow, and non-equilibrium hydrodynamics. Feedback control offers a powerful means to guide such systems, enabling transitions between dynamic states. We investigated closed-loop steering of integer-charged defects in a confined active fluid by modulating the spatial profile of activity. Using a continuum hydrodynamic model, we show that localized control of active stress induces flow fields that can reposition and direct defects along prescribed trajectories by exploiting non-linear couplings in the system. A reinforcement learning framework is used to discover effective control strategies that produce robust defect transport across both trained and novel trajectories. The results highlight how AI agents can learn the underlying dynamics and spatially structure activity to manipulate topological excitations, offering insights into the controllability of active matter and the design of adaptive, self-organized materials.
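To make the control setup concrete, here is a minimal sketch of a closed-loop defect-steering environment in the spirit of the abstract. The kinematic drift law, class name, and all parameters are illustrative stand-ins for the paper's continuum hydrodynamic model, not the authors' code:

```python
import numpy as np

class DefectSteeringEnv:
    """Toy closed-loop defect-steering environment. The drift law below is
    an illustrative stand-in for the paper's continuum hydrodynamic model:
    the defect is advected toward the centre of a localized activity bump,
    mimicking activity-induced flow. Names and parameters are assumptions."""

    def __init__(self, radius=1.0, gain=0.5, noise=0.01, seed=0):
        self.radius, self.gain, self.noise = radius, gain, noise
        self.rng = np.random.default_rng(seed)
        self.defect = np.zeros(2)

    def reset(self):
        self.defect = self.rng.uniform(-0.5, 0.5, size=2)
        return self.defect.copy()

    def step(self, action, target):
        # action: (x, y) centre of the activity bump placed by the agent
        drift = self.gain * (np.asarray(action) - self.defect)
        self.defect += drift + self.noise * self.rng.standard_normal(2)
        self.defect = np.clip(self.defect, -self.radius, self.radius)  # confinement
        reward = -np.linalg.norm(self.defect - target)  # track the prescribed waypoint
        return self.defect.copy(), reward
```

An RL agent would choose where to place the activity bump at each step so as to maximize the tracking reward along a prescribed sequence of waypoints.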
Related papers
- Langevin Flows for Modeling Neural Latent Dynamics [81.81271685018284]
We introduce LangevinFlow, a sequential Variational Auto-Encoder where the time evolution of latent variables is governed by the underdamped Langevin equation. Our approach incorporates physical priors -- such as inertia, damping, a learned potential function, and forces -- to represent both autonomous and non-autonomous processes in neural systems. Our method outperforms state-of-the-art baselines on synthetic neural populations generated by a Lorenz attractor.
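As a sketch of the latent transition this posits, one Euler-Maruyama step of the underdamped Langevin equation might look as follows; in the paper the potential and the force are learned networks, so here they are caller-supplied stand-ins with illustrative parameter values:

```python
import numpy as np

def underdamped_langevin_step(z, v, grad_U, force, gamma=0.5, sigma=0.1,
                              dt=0.01, rng=None):
    """One Euler-Maruyama step of the underdamped Langevin equation,
        dz = v dt,
        dv = (-gamma * v - grad_U(z) + force) dt + sigma dW,
    with inertia, damping, a potential, and an external force as in the
    summary above. grad_U and force are stand-ins for learned networks."""
    rng = rng or np.random.default_rng()
    z_next = z + v * dt
    v_next = v + (-gamma * v - grad_U(z) + force) * dt \
             + sigma * np.sqrt(dt) * rng.standard_normal(np.shape(v))
    return z_next, v_next
```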
arXiv Detail & Related papers (2025-07-15T17:57:48Z)
- Model-Based Reinforcement Learning for Control of Strongly-Disturbed Unsteady Aerodynamic Flows [0.0]
We propose a model-based reinforcement learning (MBRL) approach by incorporating a novel reduced-order model as a surrogate for the full environment. The accuracy and robustness of the model are demonstrated in the scenario of a pitching airfoil within a highly disturbed environment. An application to a vertical-axis wind turbine in a disturbance-free environment is discussed in the Appendix.
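The surrogate-based training loop this describes can be sketched generically; every callable below is a placeholder, not the authors' API:

```python
def mbrl_loop(env, fit_surrogate, train_policy, policy, n_iters=10, horizon=200):
    """Generic model-based RL loop of the kind the summary describes: a
    reduced-order surrogate replaces the expensive flow environment for
    policy training. All callables are placeholders."""
    dataset = []
    for _ in range(n_iters):
        # 1. Roll out the current policy in the full (expensive) environment.
        obs = env.reset()
        for _ in range(horizon):
            act = policy(obs)
            nxt, rew = env.step(act)
            dataset.append((obs, act, rew, nxt))
            obs = nxt
        # 2. Refit the reduced-order model on all data gathered so far.
        surrogate = fit_surrogate(dataset)
        # 3. Improve the policy cheaply inside the surrogate.
        policy = train_policy(surrogate, policy)
    return policy
```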
arXiv Detail & Related papers (2024-08-26T23:21:44Z)
- Inferring Relational Potentials in Interacting Systems [56.498417950856904]
We propose Neural Interaction Inference with Potentials (NIIP) as an alternative approach to discovering such interactions.
NIIP assigns low energy to the subset of trajectories which respect the relational constraints observed.
It allows trajectory manipulation, such as interchanging interaction types across separately trained models, as well as trajectory forecasting.
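A minimal sketch of this energy view, assuming a hypothetical `potentials` map from edges to learned pairwise terms:

```python
import torch

def trajectory_energy(traj, potentials, edges):
    """Energy of a multi-agent trajectory as a sum of pairwise potential
    terms, in the spirit of NIIP: low energy means the trajectory respects
    the inferred relational constraints. `traj` is a (T, N, D) tensor and
    `potentials` a hypothetical map from an edge to a learned callable."""
    energy = traj.new_zeros(())
    for i, j in edges:
        rel = traj[:, i] - traj[:, j]              # relative displacement over time
        energy = energy + potentials[(i, j)](rel).sum()
    return energy

# Trajectory manipulation/forecasting then amounts to descending this energy
# with respect to the trajectory itself:
#   traj.requires_grad_(True)
#   trajectory_energy(traj, potentials, edges).backward()
#   traj.data -= lr * traj.grad
```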
arXiv Detail & Related papers (2023-10-23T00:44:17Z)
- Amortized Network Intervention to Steer the Excitatory Point Processes [8.15558505134853]
Excitatory point processes (i.e., event flows) occurring over dynamic graphs provide a fine-grained model to capture how discrete events may spread over time and space.
How to effectively steer the event flows by modifying the dynamic graph structures presents an interesting problem, motivated by curbing the spread of infectious diseases.
We design an Amortized Network Interventions framework, allowing for the pooling of optimal policies from history and other contexts.
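Excitatory event flows of this kind are commonly modeled as multivariate Hawkes processes; a minimal intensity sketch, with the adjacency matrix as the intervention target and an illustrative exponential kernel, is:

```python
import numpy as np

def hawkes_intensity(t, history, adjacency, mu, beta=1.0):
    """Intensity of a multivariate Hawkes process on a graph, a standard
    model of excitatory event flows: an event at node j excites node i with
    weight adjacency[i, j], decaying at rate beta. Editing `adjacency` is
    the network intervention; kernel and parameters are illustrative."""
    lam = mu.astype(float).copy()                  # baseline rates, shape (n_nodes,)
    for t_k, j in history:                         # (event time, source node) pairs
        if t_k < t:
            lam += adjacency[:, j] * beta * np.exp(-beta * (t - t_k))
    return lam
```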
arXiv Detail & Related papers (2023-10-06T11:17:28Z)
- How to Control Hydrodynamic Force on Fluidic Pinball via Deep Reinforcement Learning [3.1635451288803638]
We present a DRL-based real-time feedback strategy to control the hydrodynamic force on fluidic pinball.
With adequately designed reward functions and encoded historical observations, the DRL-based control was shown to make reasonable and valid control decisions.
One of the learned control policies was further analyzed with a machine learning model, which shed light on the basis of its decision-making and on the physical mechanisms of the force-tracking process.
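A hedged sketch of the two ingredients named above, with an assumed stacking depth and an assumed reward form:

```python
import numpy as np
from collections import deque

class HistoryEncoder:
    """Stacks the last k sensor readings into one observation so the agent
    can infer unobserved flow states, in the spirit of 'encoding historical
    observations'. The depth k and the reward form below are assumptions,
    not the paper's settings."""

    def __init__(self, k=8):
        self.buf = deque(maxlen=k)

    def __call__(self, sensors):
        self.buf.append(np.asarray(sensors, dtype=float))
        while len(self.buf) < self.buf.maxlen:     # pad at episode start
            self.buf.appendleft(self.buf[0])
        return np.concatenate(self.buf)

def force_tracking_reward(force, target_force, action, w=0.1):
    # Tracking error plus an actuation penalty that discourages aggressive
    # control inputs -- a common reward-shaping choice.
    return -abs(force - target_force) - w * float(np.sum(np.square(action)))
```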
arXiv Detail & Related papers (2023-04-23T03:39:50Z)
- Model-Based Reinforcement Learning with Isolated Imaginations [61.67183143982074]
We propose Iso-Dream++, a model-based reinforcement learning approach.
We perform policy optimization based on the decoupled latent imaginations.
This enables long-horizon visuomotor control tasks to benefit from isolating mixed dynamics sources in the wild.
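A minimal sketch of imagination over decoupled latents in this spirit; all modules are assumed callables rather than the paper's interfaces:

```python
import torch

def decoupled_imagination(s_ctrl, s_free, policy, ctrl_dyn, free_dyn,
                          reward_head, horizon=15):
    """Latent imagination over decoupled dynamics in the spirit of
    Iso-Dream++: an action-conditioned branch for controllable state and an
    action-free branch for everything else. All modules are assumed
    callables, not the paper's interfaces."""
    rewards = []
    for _ in range(horizon):
        action = policy(torch.cat([s_ctrl, s_free], dim=-1))
        s_ctrl = ctrl_dyn(s_ctrl, action)          # responds to the agent
        s_free = free_dyn(s_free)                  # rolls forward on its own
        rewards.append(reward_head(torch.cat([s_ctrl, s_free], dim=-1)))
    return torch.stack(rewards).sum(dim=0)         # imagined return to optimize
```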
arXiv Detail & Related papers (2023-03-27T02:55:56Z)
- Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling a quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning a quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions.
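A minimal dilated causal TCN of the kind this describes, in PyTorch; layer sizes and the readout head are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

class Chomp(nn.Module):
    """Trim the right-side overhang left by causal padding."""
    def __init__(self, n):
        super().__init__()
        self.n = n

    def forward(self, x):
        return x[..., :-self.n] if self.n > 0 else x

class CausalTCN(nn.Module):
    """Minimal dilated causal temporal-convolution stack of the kind PI-TCN
    builds on: a history of states and motor commands in, a predicted state
    derivative out. Sizes and the output head are illustrative."""
    def __init__(self, in_ch, hidden=64, out_dim=6, levels=3, k=3):
        super().__init__()
        layers, ch = [], in_ch
        for i in range(levels):
            d = 2 ** i                                   # growing receptive field
            pad = (k - 1) * d
            layers += [nn.Conv1d(ch, hidden, k, dilation=d, padding=pad),
                       Chomp(pad), nn.ReLU()]
            ch = hidden
        self.tcn = nn.Sequential(*layers)
        self.head = nn.Linear(hidden, out_dim)           # dense feed-forward readout

    def forward(self, x):                                # x: (batch, channels, time)
        return self.head(self.tcn(x)[..., -1])           # predict from last step
```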
arXiv Detail & Related papers (2022-06-07T13:51:35Z)
- Isolating and Leveraging Controllable and Noncontrollable Visual Dynamics in World Models [65.97707691164558]
We present Iso-Dream, which improves the Dream-to-Control framework in two aspects.
First, by optimizing inverse dynamics, we encourage the world model to learn controllable and noncontrollable sources.
Second, we optimize the behavior of the agent on the decoupled latent imaginations of the world model.
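The inverse-dynamics objective in the first point can be sketched as follows, assuming continuous actions and a hypothetical `inv_model` MLP:

```python
import torch
import torch.nn.functional as F

def inverse_dynamics_loss(inv_model, s_ctrl_t, s_ctrl_next, actions):
    """Inverse-dynamics objective in the spirit of Iso-Dream: the
    controllable latent branch must retain enough information to recover
    the action behind each transition, which pushes action-dependent
    factors into that branch. `inv_model` is an assumed MLP over
    consecutive controllable states; continuous actions are assumed."""
    pred = inv_model(torch.cat([s_ctrl_t, s_ctrl_next], dim=-1))
    return F.mse_loss(pred, actions)
```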
arXiv Detail & Related papers (2022-05-27T08:07:39Z)
- Reinforcement Learning reveals fundamental limits on the mixing of active particles [2.294014185517203]
In active materials, non-linear dynamics and long-range interactions between particles prohibit closed-form descriptions of the system's dynamics.
We show that RL can find effective strategies for the canonical active-matter task of mixing only in systems that combine attractive and repulsive particle interactions.
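A hedged sketch of a mixedness reward of the kind such RL mixing tasks use, assuming two particle species; the paper's exact measure may differ:

```python
import numpy as np

def mixing_reward(positions, labels, bins=8):
    """Score mixedness by binning two particle species on a grid and taking
    the entropy of each cell's local composition (1 bit when perfectly
    mixed). Binary species labels are assumed; this is an illustrative
    measure, not necessarily the paper's."""
    H_a, xe, ye = np.histogram2d(*positions[labels == 0].T, bins=bins)
    H_b, _, _ = np.histogram2d(*positions[labels == 1].T, bins=(xe, ye))
    total = H_a + H_b
    p = np.divide(H_a, total, out=np.full_like(H_a, 0.5), where=total > 0)
    p = np.clip(p, 1e-9, 1 - 1e-9)                 # avoid log(0) in pure cells
    h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    return float(h[total > 0].mean())              # average over occupied cells
```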
arXiv Detail & Related papers (2021-05-28T21:04:55Z)
- Trajectory Tracking of Underactuated Sea Vessels With Uncertain Dynamics: An Integral Reinforcement Learning Approach [2.064612766965483]
An online machine learning mechanism based on integral reinforcement learning is proposed to find a solution for a class of nonlinear tracking problems.
The solution is implemented using an online value iteration process which is realized by employing means of the adaptive critics and gradient descent approaches.
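A minimal sketch of one such adaptive-critic update, with a linear value approximator and an illustrative feature map:

```python
import numpy as np

def critic_update(w, phi_t, phi_next, integral_reward, lr=1e-2):
    """One gradient-descent step of an adaptive-critic value iteration of
    the kind the summary describes: with a linear value approximator
    V(x) = w @ phi(x), drive the integral Bellman residual accumulated over
    one sampling interval toward zero. The feature map phi and the learning
    rate are illustrative."""
    residual = integral_reward + w @ phi_next - w @ phi_t   # integral Bellman error
    grad = residual * (phi_next - phi_t)                    # gradient of residual**2 / 2
    return w - lr * grad
```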
arXiv Detail & Related papers (2021-04-01T01:41:49Z)
- An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the group O(d).
This nested system of two flows provides stability and effectiveness of training and provably solves the gradient vanishing-explosion problem.
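A minimal sketch of a norm-preserving matrix flow on O(d), using a Cayley-transform integrator as an illustrative choice:

```python
import numpy as np

def odetoode_step(W, A, dt=0.01):
    """One step of a matrix flow on the orthogonal group O(d), the mechanism
    behind ODEtoODE's stability claim: dW/dt = W @ S with S skew-symmetric
    keeps W orthogonal, so repeated application can neither blow up nor
    collapse gradients. The Cayley-transform integrator is illustrative."""
    S = A - A.T                                    # skew-symmetric generator
    I = np.eye(W.shape[0])
    cayley = np.linalg.solve(I - 0.5 * dt * S, I + 0.5 * dt * S)
    return W @ cayley                              # orthogonality preserved

# Sanity check:
# W = np.eye(4); A = np.random.randn(4, 4)
# W = odetoode_step(W, A)
# print(np.linalg.norm(W.T @ W - np.eye(4)))       # ~1e-16
```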
arXiv Detail & Related papers (2020-06-19T22:05:19Z)