CHyLL: Learning Continuous Neural Representations of Hybrid Systems
- URL: http://arxiv.org/abs/2512.10117v1
- Date: Wed, 10 Dec 2025 22:07:16 GMT
- Title: CHyLL: Learning Continuous Neural Representations of Hybrid Systems
- Authors: Sangli Teng, Hang Liu, Jingyu Song, Koushil Sreenath,
- Abstract summary: We propose CHyLL, which learns a continuous neural representation of a hybrid system without trajectory segmentation, event functions, or mode switching. We showcase that CHyLL predicts the flows of hybrid systems with superior accuracy and identifies their topological invariants.
- Score: 11.771902164764514
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning the flows of hybrid systems that have both continuous and discrete time dynamics is challenging. Existing methods learn the dynamics of each discrete mode separately, and therefore suffer from the combination of mode switching and discontinuities in the flows. In this work, we propose CHyLL (Continuous Hybrid System Learning in Latent Space), which learns a continuous neural representation of a hybrid system without trajectory segmentation, event functions, or mode switching. The key insight of CHyLL is that the reset map glues the state space at the guard surface, reformulating the state space as a piecewise smooth quotient manifold where the flow becomes spatially continuous. Building upon these insights and embedding theorems grounded in differential topology, CHyLL concurrently learns a singularity-free neural embedding in a higher-dimensional space and the continuous flow in it. We showcase that CHyLL predicts the flows of hybrid systems with superior accuracy and identifies their topological invariants. Finally, we apply CHyLL to the stochastic optimal control problem.
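The paper's architecture is not reproduced here, but the core idea of the abstract, embedding the glued state space into a higher-dimensional latent space and learning one continuous vector field there, can be sketched roughly as follows. This is a minimal illustrative sketch in PyTorch; all module names, dimensions, and the forward-Euler integrator are assumptions, not the authors' implementation:

```python
# Hypothetical sketch: embed hybrid-system states into a higher-dimensional
# latent space and learn a single continuous vector field there, so that no
# event detection or mode switching is needed at prediction time.
import torch
import torch.nn as nn

class CHyLLSketch(nn.Module):
    def __init__(self, state_dim=2, latent_dim=8, hidden=64):
        super().__init__()
        # Encoder: embeds the piecewise-smooth state space into a latent
        # space large enough for a singularity-free embedding (cf. the
        # embedding theorems the abstract refers to).
        self.encode = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(), nn.Linear(hidden, latent_dim))
        # A single continuous vector field on the latent space.
        self.field = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.Tanh(), nn.Linear(hidden, latent_dim))
        # Decoder back to the original state space.
        self.decode = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.Tanh(), nn.Linear(hidden, state_dim))

    def forward(self, x0, steps=100, dt=0.01):
        # Integrate the latent ODE with forward Euler (any solver could be
        # substituted) and decode the resulting trajectory.
        z = self.encode(x0)
        traj = []
        for _ in range(steps):
            z = z + dt * self.field(z)
            traj.append(self.decode(z))
        return torch.stack(traj, dim=1)

model = CHyLLSketch()
pred = model(torch.randn(16, 2))   # (batch, steps, state_dim)
print(pred.shape)
```

Because the discontinuity is absorbed into the geometry of the latent space rather than handled by an event function, a single smooth field suffices at prediction time.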
Related papers
- KoopGen: Koopman Generator Networks for Representing and Predicting Dynamical Systems with Continuous Spectra [65.11254608352982]
We introduce a generator-based neural Koopman framework that models dynamics through a structured, state-dependent representation of Koopman generators. By exploiting the intrinsic Cartesian decomposition into skew-adjoint and self-adjoint components, KoopGen separates conservative transport from irreversible dissipation.
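The Cartesian decomposition the summary refers to is the standard split of a matrix generator into skew-adjoint (conservative) and self-adjoint (dissipative) parts. A small NumPy illustration of this split, not KoopGen's actual state-dependent parametrization:

```python
# Illustrative only: Cartesian decomposition G = K + S of a generator into a
# skew-adjoint part K (norm-preserving transport) and a self-adjoint part S
# (symmetric, responsible for dissipation or growth).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
G = rng.standard_normal((4, 4))
K = 0.5 * (G - G.T)   # skew-adjoint: K.T == -K
S = 0.5 * (G + G.T)   # self-adjoint: S.T == S
assert np.allclose(G, K + S)

# The skew part alone generates a norm-preserving (orthogonal) flow:
x = rng.standard_normal(4)
x_t = expm(1.7 * K) @ x
print(np.linalg.norm(x), np.linalg.norm(x_t))   # equal up to rounding
```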
arXiv Detail & Related papers (2026-02-15T06:32:23Z)
- Foundations of Diffusion Models in General State Spaces: A Self-Contained Introduction [54.95522167029998]
This article is a self-contained primer on diffusion over general state spaces. We develop the discrete-time view (forward noising via Markov kernels and learned reverse dynamics) alongside its continuous-time limits. A common variational treatment yields the ELBO that underpins standard training losses.
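For reference, the discrete-time objects the summary mentions are, in the standard notation of the diffusion literature (not necessarily the article's own), the forward Markov factorization and the resulting ELBO:

```latex
% Forward noising as a Markov chain of kernels, and the standard ELBO.
q(x_{1:T} \mid x_0) = \prod_{t=1}^{T} q(x_t \mid x_{t-1}),
\qquad
\log p_\theta(x_0) \ge
\mathbb{E}_q\!\big[\log p_\theta(x_0 \mid x_1)\big]
- \mathrm{KL}\big(q(x_T \mid x_0) \,\|\, p(x_T)\big)
- \sum_{t=2}^{T} \mathbb{E}_q\,
  \mathrm{KL}\big(q(x_{t-1} \mid x_t, x_0) \,\|\, p_\theta(x_{t-1} \mid x_t)\big).
```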
arXiv Detail & Related papers (2025-12-04T18:55:36Z)
- Curly Flow Matching for Learning Non-gradient Field Dynamics [49.480209466896035]
We introduce Curly Flow Matching (Curly-FM), a novel approach that learns non-gradient field dynamics by designing and solving a Schrödinger bridge problem. Curly-FM can learn trajectories that better match both the reference process and population marginals.
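Curly-FM's Schrödinger-bridge construction is not reproduced here, but the flow-matching skeleton it builds on fits in a few lines. The following sketch shows plain conditional flow matching, onto which a reference drift with non-zero curl would be imposed; it is a simplified baseline, not the paper's method:

```python
# Plain conditional flow matching skeleton (illustrative baseline only; the
# Curly-FM objective additionally matches a reference process with curl).
import torch
import torch.nn as nn

v = nn.Sequential(nn.Linear(3, 64), nn.SiLU(), nn.Linear(64, 2))  # input: (t, x)
opt = torch.optim.Adam(v.parameters(), lr=1e-3)

for step in range(1000):
    x0 = torch.randn(256, 2)            # source samples
    x1 = torch.randn(256, 2) + 3.0      # target samples (toy marginal)
    t = torch.rand(256, 1)
    xt = (1 - t) * x0 + t * x1          # linear interpolant
    u = x1 - x0                         # its velocity
    loss = ((v(torch.cat([t, xt], dim=1)) - u) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```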
arXiv Detail & Related papers (2025-10-30T16:11:39Z)
- Temporal Lifting as Latent-Space Regularization for Continuous-Time Flow Models in AI Systems [0.0]
We present a latent-space formulation of adaptive temporal reparametrization for continuous-time dynamical systems. From the standpoint of machine-learning dynamics, temporal lifting acts as a continuous-time normalization or time-warping operator.
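Concretely, a time-warping operator of this kind can be written as a state-dependent reparametrization dτ = g(x) dt, so that dx/dτ = f(x)/g(x). A toy numerical sketch under that reading (an assumption for illustration, not the paper's formulation):

```python
# Illustrative time lifting: integrate dx/dt = f(x) in warped time tau with
# dtau = g(x) dt, i.e. dx/dtau = f(x) / g(x). A suitable g equalizes the
# "speed" of the dynamics, acting like a continuous-time normalization.
import numpy as np

f = lambda x: np.array([x[1], -x[0]])       # harmonic oscillator (example)
g = lambda x: 1.0 + np.linalg.norm(f(x))    # slows fast regions down

x, dtau = np.array([1.0, 0.0]), 0.01
for _ in range(2000):
    x = x + dtau * f(x) / g(x)              # Euler step in lifted time
print(x)
```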
arXiv Detail & Related papers (2025-10-10T19:06:32Z)
- Equilibrium flow: From Snapshots to Dynamics [4.741100658955037]
We introduce the Equilibrium flow method, a framework that learns continuous dynamics that preserve a given pattern distribution. For high-dimensional Turing patterns from the Gray-Scott model, we develop an efficient, training-free variant that achieves high fidelity to the ground truth. This capability extends beyond recovering known systems, enabling a new paradigm of inverse design for Artificial Life.
arXiv Detail & Related papers (2025-09-22T16:33:20Z)
- Generative System Dynamics in Recurrent Neural Networks [56.958984970518564]
We investigate the continuous-time dynamics of Recurrent Neural Networks (RNNs). We show that skew-symmetric weight matrices are fundamental to enable stable limit cycles in both linear and nonlinear configurations. Numerical simulations showcase how nonlinear activation functions not only maintain limit cycles, but also enhance the numerical stability of the system integration process.
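The role of skew symmetry is easy to see in the linear case: if W is skew-symmetric, then d‖x‖²/dt = 2xᵀWx = 0, so trajectories of ẋ = Wx stay on spheres and oscillate rather than decay or blow up. A minimal NumPy check of this property (illustrative, not the paper's experiments):

```python
# Skew-symmetric W makes x' = W x norm-preserving, since x.T @ W @ x == 0.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
W = A - A.T                        # skew-symmetric by construction
x0 = rng.standard_normal(4)
for t in (0.0, 1.0, 5.0, 20.0):
    print(t, np.linalg.norm(expm(t * W) @ x0))  # constant norm -> oscillation
```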
arXiv Detail & Related papers (2025-04-16T10:39:43Z)
- Safe Multi-agent Learning via Trapping Regions [89.24858306636816]
We apply the concept of trapping regions, known from qualitative theory of dynamical systems, to create safety sets in the joint strategy space for decentralized learning.
We propose a binary partitioning algorithm for verifying that candidate sets form trapping regions in systems with known learning dynamics, and a sampling algorithm for scenarios where the learning dynamics are not known.
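For intuition, a set is trapping when the dynamics point inward everywhere on its boundary. A toy sampling-based version of that check for a box in 2-D, under obvious simplifying assumptions and with none of the paper's guarantees, might look like:

```python
# Toy sampling check: a box is (empirically) trapping for x' = f(x) if the
# field has negative dot product with the outward normal at sampled boundary
# points. Sampling only suggests, it does not verify -- the paper's binary
# partitioning algorithm gives guarantees when the dynamics are known.
import numpy as np

f = lambda x: -x + 0.1 * np.array([x[1], -x[0]])   # stable spiral (example)
lo, hi, n = np.array([-1.0, -1.0]), np.array([1.0, 1.0]), 200

rng = np.random.default_rng(0)
for axis in (0, 1):
    for side, normal_sign in ((lo, -1.0), (hi, 1.0)):
        pts = rng.uniform(lo, hi, size=(n, 2))
        pts[:, axis] = side[axis]                  # project onto one face
        normal = np.zeros(2); normal[axis] = normal_sign
        assert (f(pts.T).T @ normal < 0).all()     # field points inward
print("candidate box passed the sampled inward-flow test")
```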
arXiv Detail & Related papers (2023-02-27T14:47:52Z) - Formal Controller Synthesis for Markov Jump Linear Systems with
Uncertain Dynamics [64.72260320446158]
We propose a method for synthesising controllers for Markov jump linear systems.
Our method is based on a finite-state abstraction that captures both the discrete (mode-jumping) and continuous (stochastic linear) behaviour of the MJLS.
We apply our method to multiple realistic benchmark problems, in particular, a temperature control and an aerial vehicle delivery problem.
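As background, an MJLS couples a continuous linear state with a discrete Markov mode. A bare-bones simulation, with toy dynamics chosen purely for illustration and unrelated to the paper's benchmarks:

```python
# Minimal Markov jump linear system: the active matrix A[m] is selected by a
# discrete Markov chain while the continuous state evolves stochastically.
import numpy as np

A = [np.array([[0.9, 0.2], [0.0, 0.9]]),     # mode 0 dynamics
     np.array([[0.7, -0.3], [0.3, 0.7]])]    # mode 1 dynamics
P = np.array([[0.95, 0.05],                  # mode transition probabilities
              [0.10, 0.90]])

rng = np.random.default_rng(0)
x, m = np.array([1.0, 0.0]), 0
for t in range(50):
    x = A[m] @ x + 0.01 * rng.standard_normal(2)   # stochastic linear step
    m = rng.choice(2, p=P[m])                      # Markov mode jump
print(m, x)
```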
arXiv Detail & Related papers (2022-12-01T17:36:30Z) - Guaranteed Conservation of Momentum for Learning Particle-based Fluid
Dynamics [96.9177297872723]
We present a novel method for guaranteeing linear momentum in learned physics simulations.
We enforce conservation of momentum with a hard constraint, which we realize via antisymmetrical continuous convolutional layers.
In combination, the proposed method allows us to increase the physical accuracy of the learned simulator substantially.
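The mechanism is Newton's third law in network form: if every pairwise interaction satisfies f_ij = -f_ji, the predicted forces sum to zero and total linear momentum is conserved by construction, for any learned function. A NumPy sketch of the antisymmetrization trick (illustrative; the paper realizes it inside continuous convolution layers):

```python
# Antisymmetrized pairwise interactions: f_ij = g(x_i, x_j) - g(x_j, x_i)
# guarantees f_ij = -f_ji, hence sum_ij f_ij = 0 (momentum conserved),
# regardless of what the learned function g computes.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((16, 4)), rng.standard_normal((2, 16))
g = lambda xi, xj: W2 @ np.tanh(W1 @ np.concatenate([xi, xj]))  # toy network

X = rng.standard_normal((5, 2))            # 5 particles in 2-D
F = np.zeros_like(X)
for i in range(len(X)):
    for j in range(len(X)):
        if i != j:
            F[i] += g(X[i], X[j]) - g(X[j], X[i])   # antisymmetric message
print(F.sum(axis=0))                        # ~[0, 0]: zero net force
```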
arXiv Detail & Related papers (2022-10-12T09:12:59Z) - Semi-supervised Learning of Partial Differential Operators and Dynamical
Flows [68.77595310155365]
We present a novel method that combines a hyper-network solver with a Fourier Neural Operator architecture.
We test our method on various time evolution PDEs, including nonlinear fluid flows in one, two, and three spatial dimensions.
The results show that the new method improves learning accuracy at the supervised time points and is able to interpolate the solutions to any intermediate time.
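The hyper-network idea can be sketched as one network that maps the query time t to the weights of a second network representing the solution at that time; this is a drastically simplified stand-in for the paper's hyper-network plus Fourier Neural Operator combination:

```python
# Toy hyper-network: h(t) emits the weights of a small solution network
# u_t(x), so the solution can be queried at ANY time t, including times
# between supervision points. (Illustrative; the paper pairs this idea
# with a Fourier Neural Operator backbone.)
import torch
import torch.nn as nn

HID = 32
n_params = 1 * HID + HID + HID * 1 + 1      # weights of a 1-HID-1 MLP

hyper = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, n_params))

def u(t, x):
    p = hyper(t.view(1, 1)).squeeze(0)       # flat weight vector for u_t
    w1, i = p[:HID].view(HID, 1), HID
    b1, i = p[i:i + HID], i + HID
    w2, i = p[i:i + HID].view(1, HID), i + HID
    b2 = p[i:]
    return torch.tanh(x @ w1.T + b1) @ w2.T + b2   # u_t evaluated at x

print(u(torch.tensor(0.5), torch.linspace(0, 1, 8).view(-1, 1)).shape)
```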
arXiv Detail & Related papers (2022-07-28T19:59:14Z) - Neural Hybrid Automata: Learning Dynamics with Multiple Modes and
Stochastic Transitions [36.81150424798492]
We introduce Neural Hybrid Automata (NHAs), a recipe for learning stochastic hybrid system (SHS) dynamics without a priori knowledge of the number of modes and inter-modal transition dynamics.
NHAs provide a systematic inference method based on normalizing flows, neural differential equations and self-supervision.
We showcase NHAs on several tasks, including mode recovery and flow learning in systems with transitions, and end-to-end learning of hierarchical robot controllers.
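A stripped-down caricature of the per-mode structure: a classifier infers the discrete mode from the state, and the corresponding learned vector field advances the continuous state. The actual method uses normalizing flows and self-supervision for mode identification; everything below is an illustrative assumption:

```python
# Caricature of a Neural Hybrid Automaton step: a classifier picks the
# discrete mode, and the matching per-mode vector field drives the state.
import torch
import torch.nn as nn

N_MODES, DIM = 3, 2
classify = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, N_MODES))
fields = nn.ModuleList(
    nn.Sequential(nn.Linear(DIM, 32), nn.Tanh(), nn.Linear(32, DIM))
    for _ in range(N_MODES))

def step(x, dt=0.01):
    mode = classify(x).argmax(dim=-1)            # hard mode assignment
    dx = torch.stack([fields[int(m)](xi) for m, xi in zip(mode, x)])
    return x + dt * dx                           # advance in the active mode

x = torch.randn(4, DIM)
print(step(x).shape)
```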
arXiv Detail & Related papers (2021-06-08T08:04:39Z) - Deep Learning of Conjugate Mappings [2.9097303137825046]
Henri Poincaré first connected continuous-time flows to discrete-time mappings by tracking consecutive intersections of the continuous flow with a lower-dimensional, transverse subspace.
This work proposes a method for obtaining explicit Poincaré mappings by using deep learning to construct an invertible coordinate transformation into a conjugate representation.
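For concreteness, a Poincaré map is built by recording successive crossings of a trajectory through a transverse section; consecutive crossings define the discrete map whose conjugate representation the paper learns. A simple numerical version for the Van der Pol oscillator (an illustrative setup, not the paper's learned conjugacy):

```python
# Numerical Poincare map: integrate a flow and record the state each time the
# trajectory crosses the section {x[1] = 0, increasing}.
import numpy as np

def f(x):                                   # Van der Pol oscillator
    return np.array([x[1], (1 - x[0] ** 2) * x[1] - x[0]])

x, dt, crossings = np.array([2.0, 0.0]), 1e-3, []
for _ in range(200_000):
    x_new = x + dt * f(x)                   # Euler step (crude but adequate)
    if x[1] < 0 <= x_new[1]:                # upward crossing of x[1] = 0
        crossings.append(x_new[0])
    x = x_new
print(crossings[:5])                        # iterates of the Poincare map
```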
arXiv Detail & Related papers (2021-04-01T16:29:41Z)