Fold Bifurcation Identification through Scientific Machine Learning
- URL: http://arxiv.org/abs/2312.14210v2
- Date: Thu, 30 Jan 2025 17:08:18 GMT
- Title: Fold Bifurcation Identification through Scientific Machine Learning
- Authors: Giuseppe Habib, Ádám Horváth
- Abstract summary: This study employs scientific machine learning to identify transient time series of dynamical systems near a fold bifurcation of periodic solutions. A convolutional neural network (CNN) is trained with a relatively small amount of data and on a single, very simple system. The CNN was able to correctly classify transient trajectories near a fold for a mass-on-moving-belt system.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This study employs scientific machine learning to identify transient time series of dynamical systems near a fold bifurcation of periodic solutions. The unique aspect of this work is that a convolutional neural network (CNN) is trained with a relatively small amount of data and on a single, very simple system, yet it is tested on much more complicated systems. This task requires strong generalization capabilities, which are achieved by incorporating physics-based information. This information is provided through a specific pre-processing of the input data, which includes transformation into polar coordinates, normalization, transformation into the logarithmic scale, and filtering through a moving mean. The results demonstrate that such data pre-processing enables the CNN to grasp the important features related to transient time-series near a fold bifurcation, namely, the trend of the oscillation amplitude, and disregard other characteristics that are not particularly relevant, such as the vibration frequency. The developed CNN was able to correctly classify transient trajectories near a fold for a mass-on-moving-belt system, a van der Pol-Duffing oscillator with an attached tuned mass damper, and a pitch-and-plunge wing profile. The results contribute to the progress towards the development of similar CNNs effective in real-life applications such as safety monitoring of dynamical systems.
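For illustration, the pre-processing described above can be sketched in a few lines of Python. This is a minimal sketch under assumptions not stated in the abstract (the transient is given as displacement and velocity samples, the amplitude is normalized by its maximum, and the moving-mean window length is arbitrary); it is not the authors' exact implementation.

```python
import numpy as np

def preprocess_transient(x, v, window=50):
    """Sketch of the pre-processing described in the abstract:
    polar-coordinate transform, normalization, log scale, moving mean.
    The normalization reference and window length are assumptions."""
    # Polar coordinates: instantaneous oscillation amplitude from the state (x, v)
    r = np.sqrt(x**2 + v**2)
    # Normalize so the CNN sees the amplitude trend, not the absolute scale
    r = r / np.max(r)
    # Logarithmic scale emphasizes slow growth/decay of the amplitude
    log_r = np.log(r + 1e-12)
    # Moving mean filters out the fast oscillation, keeping the amplitude trend
    kernel = np.ones(window) / window
    return np.convolve(log_r, kernel, mode="valid").astype(np.float32)

# Toy usage: a slowly growing oscillation, as may occur while a fold is approached
t = np.linspace(0.0, 100.0, 5000)
x = np.exp(0.02 * t) * np.sin(2.0 * np.pi * t)
v = np.gradient(x, t)
features = preprocess_transient(x, v)   # 1D amplitude trend fed to the classifier
```

Per the abstract, a CNN classifier is then trained on such pre-processed trends; the architecture details are not specified here.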
Related papers
- Heterogeneous quantization regularizes spiking neural network activity [0.0]
We present a data-blind neuromorphic signal conditioning strategy whereby analog data are normalized and quantized into spike phase representations.
We extend this mechanism by adding a data-aware calibration step whereby the range and density of the quantization weights adapt to accumulated input statistics.
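As a rough illustration of the idea only (not the authors' implementation; the quantile-based calibration and all names below are assumptions), the data-blind encoding and the data-aware calibration step could look like:

```python
import numpy as np

class AdaptivePhaseQuantizer:
    """Hypothetical sketch: normalize analog samples, quantize them into spike
    phases, then adapt the bin edges to accumulated input statistics."""
    def __init__(self, n_levels=8):
        self.n_levels = n_levels
        self.edges = np.linspace(0.0, 1.0, n_levels + 1)  # data-blind, uniform bins
        self.history = []

    def encode(self, x):
        # Normalize to [0, 1] and map each sample to a discrete phase in [0, 2*pi)
        xn = (x - x.min()) / (x.max() - x.min() + 1e-12)
        self.history.append(xn)
        levels = np.digitize(xn, self.edges[1:-1])
        return 2.0 * np.pi * levels / self.n_levels

    def calibrate(self):
        # Data-aware step: move bin edges to quantiles of the inputs seen so far,
        # so bin density follows the accumulated input statistics
        data = np.concatenate(self.history)
        self.edges = np.quantile(data, np.linspace(0.0, 1.0, self.n_levels + 1))
```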
arXiv Detail & Related papers (2024-09-27T02:25:44Z) - Localized Gaussians as Self-Attention Weights for Point Clouds Correspondence [92.07601770031236]
We investigate semantically meaningful patterns in the attention heads of an encoder-only Transformer architecture.
We find that fixing the attention weights not only accelerates the training process but also enhances the stability of the optimization.
arXiv Detail & Related papers (2024-09-20T07:41:47Z) - Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy [55.014926694758195]
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
arXiv Detail & Related papers (2023-12-04T01:32:42Z) - Learning noise-induced transitions by multi-scaling reservoir computing [2.9170682727903863]
We develop a machine learning model, reservoir computing as a type of recurrent neural network, to learn noise-induced transitions.
The trained model generates accurate statistics of transition time and the number of transitions.
It is also aware of the asymmetry of the double-well potential, the rotational dynamics caused by non-detailed balance, and transitions in multi-stable systems.
arXiv Detail & Related papers (2023-09-11T12:26:36Z) - Machine learning in and out of equilibrium [58.88325379746631]
Our study uses a Fokker-Planck approach, adapted from statistical physics, to explore these parallels.
We focus in particular on the stationary state of the system in the long-time limit, which in conventional SGD is out of equilibrium.
We propose a new variation of stochastic gradient Langevin dynamics (SGLD) that harnesses without-replacement minibatching.
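For context, plain SGLD with without-replacement (shuffled, single-pass) minibatching looks roughly like the sketch below; the specific variation proposed in that paper is not reproduced here, and the learning rate, temperature, and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgld_epoch(theta, X, y, grad_loss, lr=1e-3, temperature=1.0, batch_size=32):
    """One epoch of stochastic gradient Langevin dynamics (SGLD) with
    without-replacement minibatching: shuffle once, visit every sample once."""
    order = rng.permutation(len(X))                 # without replacement
    for start in range(0, len(X), batch_size):
        batch = order[start:start + batch_size]
        g = grad_loss(theta, X[batch], y[batch])    # minibatch gradient estimate
        noise = rng.normal(size=theta.shape)        # injected Gaussian noise
        theta = theta - lr * g + np.sqrt(2.0 * lr * temperature) * noise
    return theta
```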
arXiv Detail & Related papers (2023-06-06T09:12:49Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Learning Flow Functions from Data with Applications to Nonlinear Oscillators [0.0]
We show that learning the flow function is equivalent to learning the input-to-state map of a discrete-time dynamical system.
This motivates the use of an RNN together with encoder and decoder networks which map the state of the system to the hidden state of the RNN and back.
arXiv Detail & Related papers (2023-03-29T13:04:04Z) - Neuronal architecture extracts statistical temporal patterns [1.9662978733004601]
We show how higher-order temporal (co-)fluctuations can be employed to represent and process information.
A simple biologically inspired feedforward neuronal model is able to extract information from up to the third order cumulant to perform time series classification.
arXiv Detail & Related papers (2023-01-24T18:21:33Z) - Interpreting convolutional neural networks' low dimensional approximation to quantum spin systems [1.631115063641726]
Convolutional neural networks (CNNs) have been employed along with Variational Monte Carlo methods for finding the ground state of quantum many-body spin systems.
We provide a theoretical and experimental analysis of how the CNN optimizes learning for spin systems, and investigate the CNN's low-dimensional approximation.
Our results allow us to gain a comprehensive, improved understanding of how CNNs successfully approximate quantum spin Hamiltonians.
arXiv Detail & Related papers (2022-10-03T02:49:16Z) - Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling a quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning a quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions.
arXiv Detail & Related papers (2022-06-07T13:51:35Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Supervised DKRC with Images for Offline System Identification [77.34726150561087]
Modern dynamical systems are becoming increasingly non-linear and complex.
There is a need for a framework to model these systems in a compact and comprehensive representation for prediction and control.
Our approach learns these basis functions using a supervised learning approach.
arXiv Detail & Related papers (2021-09-06T04:39:06Z) - Estimating permeability of 3D micro-CT images by physics-informed CNNs based on DNS [1.6274397329511197]
This paper presents a novel methodology for permeability prediction from micro-CT scans of geological rock samples.
The training data set for CNNs dedicated to permeability prediction consists of permeability labels that are typically generated by classical lattice Boltzmann methods (LBM).
We instead perform direct numerical simulation (DNS) by solving the stationary Stokes equation in an efficient and distributed-parallel manner.
arXiv Detail & Related papers (2021-09-04T08:43:19Z) - Adaptive Machine Learning for Time-Varying Systems: Low Dimensional Latent Space Tuning [91.3755431537592]
We present a recently developed method of adaptive machine learning for time-varying systems.
Our approach is to map very high (N>100k) dimensional inputs into the low-dimensional (N=2) latent space at the output of the encoder section of an encoder-decoder CNN.
This method allows us to learn correlations within the data and to track their evolution in real time based on feedback, without interrupting operation.
arXiv Detail & Related papers (2021-07-13T16:05:28Z) - Adaptive Latent Space Tuning for Non-Stationary Distributions [62.997667081978825]
We present a method for adaptive tuning of the low-dimensional latent space of deep encoder-decoder style CNNs.
We demonstrate our approach for predicting the properties of a time-varying charged particle beam in a particle accelerator.
arXiv Detail & Related papers (2021-05-08T03:50:45Z) - Transfer Learning with Convolutional Networks for Atmospheric Parameter Retrieval [14.131127382785973]
The Infrared Atmospheric Sounding Interferometer (IASI) on board the MetOp satellite series provides important measurements for Numerical Weather Prediction (NWP).
Retrieving accurate atmospheric parameters from the raw data provided by IASI is a large challenge, but necessary in order to use the data in NWP models.
We show how features extracted from the IASI data by a CNN trained to predict a physical variable can be used as inputs to another statistical method designed to predict a different physical variable at low altitude.
arXiv Detail & Related papers (2020-12-09T09:28:42Z) - Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embedding of a CNN using anti-aliasing or low-pass filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
arXiv Detail & Related papers (2020-03-03T07:27:44Z) - The use of Convolutional Neural Networks for signal-background classification in Particle Physics experiments [0.4301924025274017]
We present an extensive convolutional neural architecture search, achieving high accuracy for signal/background discrimination for a HEP classification use-case.
We demonstrate, among other things, that we can achieve the same accuracy as complex ResNet architectures with CNNs that have fewer parameters.
arXiv Detail & Related papers (2020-02-13T19:54:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.