Using Neural Implicit Flow To Represent Latent Dynamics Of Canonical Systems
- URL: http://arxiv.org/abs/2404.17535v1
- Date: Fri, 26 Apr 2024 17:01:38 GMT
- Title: Using Neural Implicit Flow To Represent Latent Dynamics Of Canonical Systems
- Authors: Imran Nasim, João Lucas de Sousa Almeida
- Abstract summary: We present the capabilities of Neural Implicit Flow (NIF), a recently developed mesh-agnostic neural operator.
NIF represents the latent dynamics of canonical systems such as the Kuramoto-Sivashinsky (KS), forced Korteweg-de Vries (fKdV), and Sine-Gordon (SG) equations.
We also conduct a comparative analysis with another widely recognized family of neural operators, known as Deep Operator Networks (DeepONets).
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recently introduced class of architectures known as Neural Operators has emerged as highly versatile tools applicable to a wide range of tasks in the field of Scientific Machine Learning (SciML), including data representation and forecasting. In this study, we investigate the capabilities of Neural Implicit Flow (NIF), a recently developed mesh-agnostic neural operator, for representing the latent dynamics of canonical systems such as the Kuramoto-Sivashinsky (KS), forced Korteweg-de Vries (fKdV), and Sine-Gordon (SG) equations, as well as for extracting dynamically relevant information from them. Finally, we assess the applicability of NIF as a dimensionality reduction algorithm and conduct a comparative analysis with another widely recognized family of neural operators, known as Deep Operator Networks (DeepONets).
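As context for the comparison, below is a minimal NumPy sketch contrasting the two designs: NIF conditions a coordinate network on time through a hypernetwork (a ParameterNet that emits the weights of a ShapeNet mapping spatial coordinates to the field), while DeepONet combines a branch network and a trunk network through an inner product. Layer sizes, names, and the use of time as the sole conditioning input are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(sizes):
    """Random MLP parameters (illustrative initialization only)."""
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

# --- NIF-style hypernetwork: ParameterNet(t) emits ShapeNet's weights ---
shape_sizes = [1, 16, 1]                      # ShapeNet: x -> u(x, t)
n_shape = sum(m * n + n for m, n in zip(shape_sizes[:-1], shape_sizes[1:]))
param_net = mlp_params([1, 32, n_shape])      # ParameterNet: t -> ShapeNet weights

def nif(t, x):
    theta = mlp(param_net, np.array([[t]])).ravel()
    # Unpack the flat parameter vector into ShapeNet layers, then evaluate at x.
    params, i = [], 0
    for m, n in zip(shape_sizes[:-1], shape_sizes[1:]):
        W = theta[i:i + m * n].reshape(m, n); i += m * n
        b = theta[i:i + n]; i += n
        params.append((W, b))
    return mlp(params, x.reshape(-1, 1))

# --- DeepONet: branch(t) . trunk(x), a shared p-dimensional basis ---
p = 16
branch = mlp_params([1, 32, p])
trunk = mlp_params([1, 32, p])

def deeponet(t, x):
    b = mlp(branch, np.array([[t]]))          # (1, p) coefficients
    tr = mlp(trunk, x.reshape(-1, 1))         # (n, p) basis at query points
    return tr @ b.T

x = np.linspace(0.0, 1.0, 5)
print(nif(0.3, x).ravel(), deeponet(0.3, x).ravel())
```

Because both models take pointwise spatial coordinates as inputs, either can be queried on an arbitrary mesh, which is the mesh-agnostic property the paper exploits.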
Related papers
- Certified Neural Approximations of Nonlinear Dynamics [52.79163248326912]
In safety-critical contexts, the use of neural approximations requires formal bounds on their closeness to the underlying system.
We propose a novel, adaptive, and parallelizable verification method based on certified first-order models.
arXiv Detail & Related papers (2025-05-21T13:22:20Z)
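As a hedged illustration of the idea behind certified first-order models, the sketch below bounds a toy nonlinear map by its Taylor expansion plus a remainder ball; the Lipschitz constant `L` is an assumed bound, not one derived by the paper's method.

```python
import numpy as np

def f(x):
    """Toy nonlinear dynamics; stands in for a trained network."""
    return np.array([np.sin(x[0]) + 0.1 * x[1], -0.5 * x[1] + x[0] ** 2])

def jacobian(x, eps=1e-6):
    """Finite-difference Jacobian of f at x."""
    n = x.size
    J = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n); e[i] = eps
        J[:, i] = (f(x + e) - f(x - e)) / (2 * eps)
    return J

x0 = np.array([0.2, -0.1])
J = jacobian(x0)
L = 2.0   # assumed Lipschitz constant of the Jacobian (hypothetical)

def certified_first_order(x):
    """First-order model plus a ball that provably contains f(x)."""
    r = np.linalg.norm(x - x0)
    approx = f(x0) + J @ (x - x0)
    return approx, 0.5 * L * r ** 2     # |f(x) - approx| <= radius

x = x0 + np.array([0.05, -0.02])
approx, radius = certified_first_order(x)
print(approx, radius, np.linalg.norm(f(x) - approx) <= radius)
```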
- High-order expansion of Neural Ordinary Differential Equations flows [4.4569182855550755]
We introduce Event Transitions, a framework based on high-order differentials that provides a rigorous mathematical description of neural ODE dynamics on event manifolds.
Our findings contribute to a deeper theoretical foundation for event-triggered neural differential equations and provide a mathematical construct for explaining complex system dynamics.
arXiv Detail & Related papers (2025-04-02T08:57:34Z)
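The entry above concerns high-order differentials of neural ODE flows. A minimal sketch of a second-order expansion of a flow map, x(h) ≈ x + h f(x) + (h^2/2) J_f(x) f(x), using a toy vector field rather than a trained network:

```python
import numpy as np

def f(x):
    """Toy autonomous vector field (Van der Pol-like)."""
    return np.array([x[1], (1 - x[0] ** 2) * x[1] - x[0]])

def jac(x, eps=1e-6):
    """Finite-difference Jacobian of the vector field."""
    n = x.size
    J = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n); e[i] = eps
        J[:, i] = (f(x + e) - f(x - e)) / (2 * eps)
    return J

def flow_taylor2(x, h):
    """Second-order expansion of the flow map: x + h f + (h^2/2) J f."""
    fx = f(x)
    return x + h * fx + 0.5 * h ** 2 * (jac(x) @ fx)

x0 = np.array([1.0, 0.0])
print(flow_taylor2(x0, 0.01))
```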
- A Low-complexity Structured Neural Network to Realize States of Dynamical Systems [0.0]
This paper uses data-driven learning to advance the states of dynamical systems with a structured neural network (StNN).
We present numerical simulations to solve dynamical systems utilizing the StNN based on the Hankel operator.
We show that the proposed StNN paves the way for realizing state-space dynamical systems with low-complexity learning, enabling prediction and understanding of future states.
arXiv Detail & Related papers (2025-03-31T03:52:38Z)
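As a rough illustration of Hankel-operator-based state realization, the sketch below builds a delay-embedding (Hankel) matrix from a scalar series and fits a one-step linear predictor on the delay coordinates; this linear map is a stand-in, not the StNN architecture itself.

```python
import numpy as np

def hankel(series, depth):
    """Stack `depth` delayed copies of the series into a Hankel matrix."""
    n = len(series) - depth
    return np.stack([series[i:i + n] for i in range(depth)])

rng = np.random.default_rng(1)
t = np.linspace(0, 20, 400)
series = np.sin(t) + 0.05 * rng.standard_normal(t.size)

depth = 8
H = hankel(series, depth)            # (depth, n) delay coordinates
X, y = H[:, :-1].T, H[-1, 1:]        # predict the next sample from a window
w, *_ = np.linalg.lstsq(X, y, rcond=None)

pred = X[-1] @ w                     # one-step-ahead forecast
print(pred, series[-1])
```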
- A Mathematical Framework, a Taxonomy of Modeling Paradigms, and a Suite of Learning Techniques for Neural-Symbolic Systems [22.42431063362667]
We introduce Neural-Symbolic Energy-Based Models (NeSy-EBMs), a unifying mathematical framework for discriminative and generative modeling.
We utilize NeSy-EBMs to develop a taxonomy of modeling paradigms focusing on a system's neural-symbolic interface and reasoning capabilities.
We also present Neural Probabilistic Soft Logic (NeuPSL), an open-source NeSy-EBM library designed for scalability and expressivity.
arXiv Detail & Related papers (2024-07-12T21:26:21Z)
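A minimal sketch of the energy-based view of neural-symbolic inference: a joint energy sums a neural score and a weighted penalty for violated symbolic rules, and prediction is an argmin over label assignments. The rule format and brute-force inference are illustrative simplifications, not NeuPSL's interface.

```python
import numpy as np

def neural_energy(scores, y):
    """Negative log-score of the chosen labels under a neural predictor."""
    return -np.sum(np.log(scores[np.arange(len(y)), y]))

def symbolic_energy(y, rules, weight=2.0):
    """Penalty for violated rules of the form (i, j): y[i] == y[j]."""
    return weight * sum(y[i] != y[j] for i, j in rules)

def map_inference(scores, rules):
    """Brute-force argmin of the joint energy over all label assignments."""
    n, k = scores.shape
    best, best_e = None, np.inf
    for flat in range(k ** n):
        y = [(flat // k ** i) % k for i in range(n)]
        e = neural_energy(scores, y) + symbolic_energy(y, rules)
        if e < best_e:
            best, best_e = y, e
    return best, best_e

scores = np.array([[0.7, 0.3], [0.4, 0.6], [0.9, 0.1]])  # softmax outputs
rules = [(0, 1)]                                         # "entities 0 and 1 agree"
print(map_inference(scores, rules))
```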
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
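The entry above ties input periodicity to activation periodicity. A small sketch that estimates the dominant period of a signal and of its convolutional activation via the FFT; the kernel and signal are toys, not an LKCNN.

```python
import numpy as np

def dominant_period(signal):
    """Dominant period (in samples) from the FFT peak, DC excluded."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    k = 1 + np.argmax(spectrum[1:])
    return len(signal) / k

n = 512
x = np.sin(2 * np.pi * np.arange(n) / 32)       # input period: 32 samples
kernel = np.array([0.25, 0.5, 0.25])            # toy convolutional filter
activation = np.maximum(np.convolve(x, kernel, mode="same"), 0.0)  # ReLU

print(dominant_period(x), dominant_period(activation))
```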
- Resolution-Invariant Image Classification based on Fourier Neural Operators [1.3190581566723918]
We investigate the use of Fourier Neural Operators (FNOs) for image classification in comparison to standard Convolutional Neural Networks (CNNs).
We derive the FNO architecture as an example of continuous and Fréchet-differentiable neural operators on Lebesgue spaces.
arXiv Detail & Related papers (2023-04-02T10:23:36Z)
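A minimal sketch of a single Fourier layer of the kind FNOs stack: transform to Fourier space, linearly mix a fixed number of low-frequency modes with learned (here random) complex weights, transform back, and add a pointwise linear path. Because the weights live on Fourier modes rather than grid points, the same layer applies at any resolution; widths and mode counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
modes, width = 8, 4                   # retained Fourier modes, channel width
W_spec = (rng.standard_normal((modes, width, width))
          + 1j * rng.standard_normal((modes, width, width))) * 0.1
W_point = rng.standard_normal((width, width)) * 0.1

def fourier_layer(v):
    """v: (n_grid, width) channels on a uniform 1-D grid."""
    vh = np.fft.rfft(v, axis=0)                   # (n_grid//2+1, width)
    out = np.zeros_like(vh)
    k = min(modes, vh.shape[0])
    out[:k] = np.einsum("kio,ki->ko", W_spec[:k], vh[:k])   # mix low modes
    spectral = np.fft.irfft(out, n=v.shape[0], axis=0)
    return np.maximum(spectral + v @ W_point, 0.0)  # ReLU(spectral + pointwise)

# The same weights apply at any resolution -- the resolution-invariant property.
for n in (64, 256):
    grid = np.linspace(0, 1, n, endpoint=False)
    v = np.stack([np.sin(2 * np.pi * grid)] * width, axis=1)
    print(n, fourier_layer(v).shape)
```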
- Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a Polynomial Net Study [55.12108376616355]
The study of the Neural Tangent Kernel (NTK) has been devoted to typical neural network architectures, but it is incomplete for neural networks with Hadamard products (NNs-Hp).
In this work, we derive the finite-width NTK formulation for a special class of NNs-Hp, i.e., polynomial neural networks.
We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of NTK.
arXiv Detail & Related papers (2022-09-16T06:36:06Z)
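For background on the NTK objects the entry manipulates, here is a sketch computing the empirical NTK of a tiny one-hidden-layer network, Θ(x, x') = ∇_θ f(x) · ∇_θ f(x'), with analytic parameter gradients. This is the standard construction, not the NNs-Hp extension derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
width = 64
w = rng.standard_normal(width)
b = rng.standard_normal(width)
a = rng.standard_normal(width) / np.sqrt(width)

def grad_theta(x):
    """Gradient of f(x) = a . tanh(w x + b) w.r.t. all parameters (a, w, b)."""
    h = np.tanh(w * x + b)
    dh = 1.0 - h ** 2
    return np.concatenate([h, a * dh * x, a * dh])

def ntk(x1, x2):
    """Empirical NTK: inner product of parameter gradients."""
    return grad_theta(x1) @ grad_theta(x2)

xs = np.linspace(-1, 1, 5)
K = np.array([[ntk(u, v) for v in xs] for u in xs])
print(np.round(K, 3))
```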
- Reachability Analysis of a General Class of Neural Ordinary Differential Equations [7.774796410415387]
Continuous deep learning models, referred to as Neural Ordinary Differential Equations (Neural ODEs), have received considerable attention over the last several years.
Despite their burgeoning impact, there is a lack of formal analysis techniques for these systems.
We introduce a novel reachability framework that allows for the formal analysis of their behavior.
arXiv Detail & Related papers (2022-07-13T22:05:52Z)
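As a hedged illustration of reachability analysis for neural ODEs, the sketch below propagates an interval box through Euler steps of a small randomly weighted vector field, using standard interval arithmetic for affine layers and the monotonicity of tanh. This is a generic over-approximation, not the framework proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
W1, b1 = rng.standard_normal((8, 2)) * 0.5, np.zeros(8)
W2, b2 = rng.standard_normal((2, 8)) * 0.5, np.zeros(2)

def linear_interval(lo, hi, W, b):
    """Sound interval propagation through an affine layer."""
    c, r = (lo + hi) / 2, (hi - lo) / 2
    c2, r2 = W @ c + b, np.abs(W) @ r
    return c2 - r2, c2 + r2

def field_interval(lo, hi):
    """Interval enclosure of f(x) = W2 tanh(W1 x + b1) + b2."""
    lo, hi = linear_interval(lo, hi, W1, b1)
    lo, hi = np.tanh(lo), np.tanh(hi)      # tanh is monotone elementwise
    return linear_interval(lo, hi, W2, b2)

def reach(lo, hi, h=0.01, steps=100):
    """Over-approximate the reachable box under Euler steps of the neural ODE."""
    for _ in range(steps):
        flo, fhi = field_interval(lo, hi)
        lo, hi = lo + h * flo, hi + h * fhi
    return lo, hi

print(reach(np.array([-0.1, -0.1]), np.array([0.1, 0.1])))
```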
- Learning Trajectories of Hamiltonian Systems with Neural Networks [81.38804205212425]
We propose to enhance Hamiltonian neural networks with an estimation of a continuous-time trajectory of the modeled system.
We demonstrate that the proposed integration scheme works well for HNNs, especially with low sampling rates and noisy, irregular observations.
arXiv Detail & Related papers (2022-04-11T13:25:45Z)
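A minimal sketch of the kind of continuous-time integration HNNs pair with learned Hamiltonians: leapfrog (symplectic) integration of Hamilton's equations, here with an analytic toy Hamiltonian standing in for a trained network H_theta.

```python
import numpy as np

def dH_dq(q, p):
    """Gradient of a toy Hamiltonian H = (p^2 + q^2) / 2; in an HNN this
    would come from differentiating a learned network H_theta."""
    return q

def dH_dp(q, p):
    return p

def leapfrog(q, p, h, steps):
    """Symplectic (leapfrog) integration of Hamilton's equations."""
    for _ in range(steps):
        p = p - 0.5 * h * dH_dq(q, p)   # half kick
        q = q + h * dH_dp(q, p)         # drift
        p = p - 0.5 * h * dH_dq(q, p)   # half kick
    return q, p

q, p = 1.0, 0.0
q, p = leapfrog(q, p, h=0.1, steps=63)   # ~one period of the oscillator
print(q, p, 0.5 * (q ** 2 + p ** 2))     # energy stays near 0.5
```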
- The Spectral Bias of Polynomial Neural Networks [63.27903166253743]
Polynomial neural networks (PNNs) have been shown to be particularly effective at image generation and face recognition, where high-frequency information is critical.
Previous studies have revealed that neural networks demonstrate a spectral bias towards low-frequency functions, which yields faster learning of low-frequency components during training.
Inspired by such studies, we conduct a spectral analysis of the Neural Tangent Kernel (NTK) of PNNs.
We find that the Π-Net family, i.e., a recently proposed parametrization of PNNs, speeds up the learning of higher frequencies.
arXiv Detail & Related papers (2022-02-27T23:12:43Z)
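For reference, a sketch of a polynomial-network forward pass in which each layer's Hadamard product with a projection of the input raises the polynomial degree by one. The weights are random and the parametrization is an illustrative reading of the Π-Net idea, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(5)
d, width, degree = 4, 16, 3
A = [rng.standard_normal((width, d)) * 0.3 for _ in range(degree)]
C = rng.standard_normal((1, width)) * 0.3

def pi_net(z):
    """Polynomial net: Hadamard products raise the degree per layer,
    so the output is a degree-`degree` polynomial of the input z."""
    x = A[0] @ z
    for k in range(1, degree):
        x = (A[k] @ z) * x + x     # Hadamard product + skip connection
    return (C @ x).item()

z = rng.standard_normal(d)
print(pi_net(z), pi_net(2 * z))    # non-linear (polynomial) scaling
```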
- Learn Like The Pro: Norms from Theory to Size Neural Computation [3.848947060636351]
We investigate how dynamical systems with nonlinearities can inform the design of neural systems that seek to emulate them.
We propose a Learnability metric and relate its associated features to the near-equilibrium behavior of learning dynamics.
It reveals exact sizing for a class of neural networks with multiplicative nodes that mimic continuous- or discrete-time dynamics.
arXiv Detail & Related papers (2021-06-21T20:58:27Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized Structural Equation Models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
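A hedged toy version of the min-max formulation: both "players" are reduced to scalars (the paper parameterizes them with neural networks), and gradient descent-ascent drives an instrumental-variable moment condition to zero.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 2000
z = rng.standard_normal(n)                       # instrument
u = rng.standard_normal(n)                       # confounder
x = z + 0.5 * u
y = 1.5 * x + u + 0.1 * rng.standard_normal(n)   # true effect: 1.5

theta, omega, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    resid = y - theta * x
    m = np.mean(z * resid)                  # moment E[Z (Y - theta X)]
    # Objective L(theta, omega) = omega * m - omega^2 / 2
    theta -= lr * (-omega * np.mean(z * x)) # descent for the estimator
    omega += lr * (m - omega)               # ascent for the adversary
print(theta)                                # approaches ~1.5
```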
- FastONN -- Python based open-source GPU implementation for Operational Neural Networks [25.838282412957675]
This work introduces FastONN, a fast GPU-enabled library for training Operational Neural Networks (ONNs).
FastONN is based on a novel vectorized formulation of the operational neurons.
Bundled auxiliary modules offer interfaces for performance tracking and checkpointing across different data partitions and customized metrics.
arXiv Detail & Related papers (2020-06-03T13:33:35Z)
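As a rough illustration of operational neurons, the sketch below replaces the convolutional neuron's fixed multiply-and-sum with configurable nodal and pool operators in vectorized NumPy; the operator choices are examples, not FastONN's API.

```python
import numpy as np

rng = np.random.default_rng(7)

# Operational neurons generalize the convolutional neuron's fixed
# multiply/accumulate into configurable nodal and pool operators.
NODAL = {"mult": lambda w, x: w * x,
         "exp":  lambda w, x: np.exp(w * x) - 1.0}
POOL = {"sum": lambda v: v.sum(axis=-1),
        "median": lambda v: np.median(v, axis=-1)}

def operational_neuron(x, w, nodal="exp", pool="median"):
    """One vectorized operational neuron over a batch of input windows."""
    return np.tanh(POOL[pool](NODAL[nodal](w, x)))

batch = rng.standard_normal((32, 9))        # 32 sliding windows of 9 samples
w = rng.standard_normal(9) * 0.5
print(operational_neuron(batch, w).shape)   # (32,)
```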
This list is automatically generated from the titles and abstracts of the papers on this site.