Scalable Formal Verification via Autoencoder Latent Space Abstraction
- URL: http://arxiv.org/abs/2512.13593v2
- Date: Tue, 16 Dec 2025 16:58:02 GMT
- Title: Scalable Formal Verification via Autoencoder Latent Space Abstraction
- Authors: Robert Reed, Luca Laurenti, Morteza Lahijanian
- Abstract summary: We provide a formal approach to reduce the dimensionality of systems via convex autoencoders and learn the dynamics in the latent space through a kernel-based method. We show that the verification results in the latent space can be mapped back to the original system.
- Score: 13.740416712776456
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Finite Abstraction methods provide a powerful formal framework for proving that systems satisfy their specifications. However, these techniques face scalability challenges for high-dimensional systems, as they rely on state-space discretization which grows exponentially with dimension. Learning-based approaches to dimensionality reduction, utilizing neural networks and autoencoders, have shown great potential to alleviate this problem. However, ensuring the correctness of the resulting verification results remains an open question. In this work, we provide a formal approach to reduce the dimensionality of systems via convex autoencoders and learn the dynamics in the latent space through a kernel-based method. We then construct a finite abstraction from the learned model in the latent space and guarantee that the abstraction contains the true behaviors of the original system. We show that the verification results in the latent space can be mapped back to the original system. Finally, we demonstrate the effectiveness of our approach on multiple systems, including a 26D system controlled by a neural network, showing significant scalability improvements without loss of rigor.
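To make the pipeline in the abstract concrete, below is a minimal, hedged sketch of the three stages it describes: encode the system into a low-dimensional latent space, learn the latent dynamics with a kernel method, and build a finite abstraction whose transitions over-approximate the learned dynamics. This is not the authors' implementation: a linear PCA projection stands in for their convex autoencoder, plain RBF kernel ridge regression stands in for their kernel-based learner, and the inflation radius `eps` is a placeholder constant rather than a formally derived error bound.
```python
# Hedged sketch of the abstract's pipeline; simplifications are noted inline.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# 1. Toy high-dimensional system: stable 2D dynamics lifted to 10D observations.
A = np.array([[0.9, 0.2], [-0.2, 0.9]])
C = rng.normal(size=(10, 2))
def step(x):                                # one step of the 10D system
    return C @ (A @ (np.linalg.pinv(C) @ x))

# 2. "Encoder": top-2 PCA directions (placeholder for the paper's convex autoencoder).
X = rng.normal(size=(400, 2)) @ C.T
_, _, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
E = Vt[:2]
encode = lambda x: E @ x

# 3. Kernel ridge regression of the latent dynamics z_{k+1} ~ f(z_k).
Z  = np.array([encode(x) for x in X])
Zn = np.array([encode(step(x)) for x in X])
def rbf(P, Q, gamma=0.5):
    return np.exp(-gamma * ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1))
alpha = np.linalg.solve(rbf(Z, Z) + 1e-3 * np.eye(len(Z)), Zn)
f_hat = lambda z: (rbf(z[None, :], Z) @ alpha)[0]

# 4. Finite abstraction: grid the latent box; each cell transitions to every cell
#    its eps-inflated predicted successors can touch (an over-approximation).
lo, hi, n = Z.min(0) - 0.5, Z.max(0) + 0.5, 8
edges = [np.linspace(lo[d], hi[d], n + 1) for d in range(2)]
eps = 0.2                                   # assumed model-error bound, not derived here
def cell_of(z):
    return tuple(int(np.clip(np.searchsorted(edges[d], z[d]) - 1, 0, n - 1)) for d in range(2))
trans = {}
for c in product(range(n), repeat=2):
    corners = [np.array([edges[d][c[d] + b[d]] for d in range(2)]) for b in product((0, 1), repeat=2)]
    succs = set()
    for p in (f_hat(q) for q in corners):
        (i0, j0), (i1, j1) = cell_of(p - eps), cell_of(p + eps)
        succs |= {(i, j) for i in range(i0, i1 + 1) for j in range(j0, j1 + 1)}
    trans[c] = succs

# 5. Reachability on the abstraction from the cell containing a sampled initial state.
frontier, reach = {cell_of(encode(rng.normal(size=10)))}, set()
while frontier:
    c = frontier.pop()
    if c not in reach:
        reach.add(c)
        frontier |= trans[c] - reach
print(f"{len(reach)} of {n * n} abstract states reachable")
```
In the paper, soundness comes from the convexity of the autoencoder and formal error bounds on the kernel model, which is what lets abstract-level verification results transfer back to the original system; the sketch only mimics that structure with an assumed `eps`.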
Related papers
- Certified Neural Approximations of Nonlinear Dynamics [51.01318247729693]
In safety-critical contexts, the use of neural approximations requires formal bounds on their closeness to the underlying system. We propose a novel, adaptive, and parallelizable verification method based on certified first-order models.
arXiv Detail & Related papers (2025-05-21T13:22:20Z) - Physics-Informed Latent Neural Operator for Real-time Predictions of time-dependent parametric PDEs [0.0]
The deep operator network (DeepONet) has shown promise as a surrogate model for systems governed by partial differential equations (PDEs). In this work, we propose PI-Latent-NO, a physics-informed latent neural operator framework that integrates governing physics directly into the learning process.
arXiv Detail & Related papers (2025-01-14T20:38:30Z) - Input-to-State Stable Coupled Oscillator Networks for Closed-form Model-based Control in Latent Space [2.527926867319859]
We argue that a promising avenue is to leverage powerful and well-understood closed-form strategies from control theory literature.
We identify three fundamental shortcomings in existing latent-space models that have so far prevented this powerful combination.
We propose a novel Coupled Oscillator Network (CON) model that simultaneously tackles all these issues.
arXiv Detail & Related papers (2024-09-13T00:11:09Z) - Latent Dynamics Networks (LDNets): learning the intrinsic dynamics of
spatio-temporal processes [2.3694122563610924]
Latent Dynamics Network (LDNet) is able to discover low-dimensional intrinsic dynamics of possibly non-Markovian dynamical systems.
LDNets are lightweight and easy-to-train, with excellent accuracy and generalization properties, even in time-extrapolation regimes.
arXiv Detail & Related papers (2023-04-28T21:11:13Z) - On Optimizing Back-Substitution Methods for Neural Network Verification [1.4394939014120451]
We present an approach for making back-substitution produce tighter bounds.
Our technique is general, in the sense that it can be integrated into numerous existing symbolic-bound propagation techniques.
arXiv Detail & Related papers (2022-08-16T11:16:44Z) - Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z) - Deep Learning Approximation of Diffeomorphisms via Linear-Control
Systems [91.3755431537592]
We consider a control system of the form $\dot{x} = \sum_{i=1}^{l} F_i(x)\, u_i$, with linear dependence in the controls.
We use the corresponding flow to approximate the action of a diffeomorphism on a compact ensemble of points.
arXiv Detail & Related papers (2021-10-24T08:57:46Z) - Pure Exploration in Kernel and Neural Bandits [90.23165420559664]
We study pure exploration in bandits, where the dimension of the feature representation can be much larger than the number of arms.
To overcome the curse of dimensionality, we propose to adaptively embed the feature representation of each arm into a lower-dimensional space.
arXiv Detail & Related papers (2021-06-22T19:51:59Z) - Learning on Abstract Domains: A New Approach for Verifiable Guarantee in
Reinforcement Learning [9.428825075908131]
We propose an abstraction-based approach to train DRL systems on finite abstract domains.
It yields neural networks whose input states are finite, making the hosting DRL systems directly verifiable.
arXiv Detail & Related papers (2021-06-13T06:28:40Z) - Learning High-Dimensional Distributions with Latent Neural Fokker-Planck
Kernels [67.81799703916563]
We introduce new techniques to formulate the problem as solving the Fokker-Planck equation in a lower-dimensional latent space.
Our proposed model consists of latent-distribution morphing, a generator and a parameterized Fokker-Planck kernel function.
arXiv Detail & Related papers (2021-05-10T17:42:01Z) - Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z) - Explicitly Encouraging Low Fractional Dimensional Trajectories Via
Reinforcement Learning [6.548580592686076]
We show that the dimensionality of trajectories induced by model-free reinforcement learning agents can be influenced by adding a post-processing function to the agent's reward signal.
We verify that the dimensionality reduction is robust to noise added to the system, and show that the modified agents are in fact more robust to noise and push disturbances in general for the systems we examined.
arXiv Detail & Related papers (2020-12-21T20:09:17Z) - MetaSDF: Meta-learning Signed Distance Functions [85.81290552559817]
Generalizing across shapes with neural implicit representations amounts to learning priors over the respective function space.
We formalize learning of a shape space as a meta-learning problem and leverage gradient-based meta-learning algorithms to solve this task.
arXiv Detail & Related papers (2020-06-17T05:14:53Z)