On the reproducibility of fully convolutional neural networks for
modeling time-space evolving physical systems
- URL: http://arxiv.org/abs/2105.05482v1
- Date: Wed, 12 May 2021 07:39:30 GMT
- Title: On the reproducibility of fully convolutional neural networks for
modeling time-space evolving physical systems
- Authors: Wagner Gonçalves Pinto, Antonio Alguacil and Michaël Bauerheim
- Abstract summary: The reproducibility of a deep-learning fully convolutional neural network is evaluated by training the same network several times under identical conditions.
Training with double floating-point precision provides slightly better estimations and a significant reduction in the variability of both the network parameters and the testing error range.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The reproducibility of a deep-learning fully convolutional neural network is
evaluated by training the same network several times under identical conditions
(database, hyperparameters, hardware) with non-deterministic Graphics
Processing Unit (GPU) operations. The propagation of two-dimensional acoustic
waves, typical of time-space evolving physical systems, is studied on both
recursive and non-recursive tasks. Significant changes in model properties
(weights, featured fields) are observed. When tested on various propagation
benchmarks, these models systematically returned estimations with a high level
of deviation, especially for the recurrent analysis, which strongly amplifies
the variability due to the non-determinism. Trainings performed with double
floating-point precision provide slightly better estimations and a significant
reduction in the variability of both the network parameters and the testing
error range.
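To make the protocol concrete, the following is a minimal sketch of this kind of repeated-training reproducibility check, assuming PyTorch on a CUDA device; the tiny convolutional model, dummy data, and hyperparameters are illustrative placeholders rather than the authors' setup.

```python
# Hedged sketch of a repeated-training reproducibility check (PyTorch assumed).
# Model, data, and hyperparameters are illustrative stand-ins, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"  # effect appears on GPU

def train_once(seed: int, dtype: torch.dtype) -> torch.Tensor:
    """Train an identical tiny conv net once; return its flattened weights."""
    torch.manual_seed(seed)                      # identical init and data per run
    model = nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
        nn.Conv2d(8, 1, 3, padding=1),
    ).to(device=DEVICE, dtype=dtype)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(16, 1, 64, 64, device=DEVICE, dtype=dtype)
    y = torch.randn_like(x)
    for _ in range(100):                         # identical training schedule
        opt.zero_grad()
        F.mse_loss(model(x), y).backward()       # non-deterministic GPU kernels
        opt.step()
    return torch.cat([p.detach().flatten().cpu() for p in model.parameters()])

# Identical seed, data, and hardware: any spread comes from GPU non-determinism.
runs = torch.stack([train_once(seed=0, dtype=torch.float32) for _ in range(5)])
print("max per-weight std across runs (float32):", runs.std(dim=0).max().item())
# Per the paper's finding, dtype=torch.float64 should shrink this spread.
```

Repeating the same loop with `dtype=torch.float64` (and, for reference, with `torch.use_deterministic_algorithms(True)`) separates the precision effect from the kernel-level non-determinism.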
Related papers
- Reconstruction of neuromorphic dynamics from a single scalar time series using variational autoencoder and neural network map
A model of a physiological neuron based on the Hodgkin-Huxley formalism is considered.
A single time series of one of its variables is shown to be enough to train a neural network that can operate as a discrete-time dynamical system (a generic sketch follows below).
arXiv Detail & Related papers (2024-11-11T15:15:55Z)
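A network operating as a discrete-time dynamical system can be illustrated generically: fit a map from a delay-embedded window of one variable to the next value, then iterate that map on its own outputs. The toy signal, embedding depth, and small MLP below are assumptions, not the paper's architecture.

```python
# Illustrative sketch (not the paper's model): learn a discrete-time map from a
# single scalar series via delay embedding, then iterate it autonomously.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

series = np.sin(np.linspace(0, 60, 3000)).astype(np.float32)  # stand-in signal
d = 8                                                         # embedding depth
X = torch.from_numpy(np.stack([series[i:i + d] for i in range(len(series) - d)]))
Y = torch.from_numpy(series[d:]).unsqueeze(1)                 # next value

f = nn.Sequential(nn.Linear(d, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(f.parameters(), lr=1e-3)
for _ in range(2000):                           # fit x[t+1] = f(past window)
    opt.zero_grad()
    F.mse_loss(f(X), Y).backward()
    opt.step()

# Iterate the learned map as an autonomous discrete-time dynamical system.
window = [float(v) for v in series[:d]]         # one seed window
for _ in range(100):
    inp = torch.tensor(window[-d:], dtype=torch.float32).unsqueeze(0)
    window.append(f(inp).item())
```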
- How neural networks learn to classify chaotic time series
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- A critical look at deep neural network for dynamic system modeling
This paper questions the capability of (deep) neural networks for the modeling of dynamic systems using input-output data.
For the identification of linear time-invariant (LTI) dynamic systems, two representative neural network models are compared.
For the LTI system, both LSTM and CFNN fail to deliver consistent models even in noise-free cases (a toy version of this experiment is sketched below).
arXiv Detail & Related papers (2023-01-27T09:03:05Z)
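A toy version of that LTI identification experiment might look as follows; the system matrices, network sizes, and training loop are assumptions for illustration, not the paper's protocol.

```python
# Assumed sketch, not the paper's setup: identify a discrete LTI system
# x[k+1] = A x[k] + B u[k], y[k] = C x[k] from input-output data with an LSTM.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
A = torch.tensor([[0.9, 0.1], [0.0, 0.8]])   # stable 2-state system
B = torch.tensor([[0.0], [1.0]])
C = torch.tensor([[1.0, 0.0]])

def simulate(u: torch.Tensor) -> torch.Tensor:
    """Noise-free LTI response; u and the returned y have shape (T, 1)."""
    x, ys = torch.zeros(2, 1), []
    for uk in u:
        ys.append((C @ x).squeeze(0))
        x = A @ x + B @ uk.unsqueeze(1)
    return torch.stack(ys)

u = torch.randn(500, 1)                      # excitation signal
y = simulate(u)

lstm = nn.LSTM(input_size=1, hidden_size=16, batch_first=True)
head = nn.Linear(16, 1)
opt = torch.optim.Adam([*lstm.parameters(), *head.parameters()], lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    h, _ = lstm(u.unsqueeze(0))              # (1, T, 16)
    F.mse_loss(head(h).squeeze(0), y).backward()
    opt.step()
# Re-running this fit from different seeds is what exposes the inconsistency
# reported in the summary, even though the data are noise-free.
```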
- Data-driven emergence of convolutional structure in neural networks
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Multi-scale Feature Learning Dynamics: Insights for Double Descent
We study the phenomenon of "double descent" of the generalization error.
We find that double descent can be attributed to distinct features being learned at different scales.
arXiv Detail & Related papers (2021-12-06T18:17:08Z)
- Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks
Small neural networks with a constrained number of trainable parameters can be suitable resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks (a generic version of this parameterization is sketched below).
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
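The per-time-stamp mean/variance idea can be sketched with a generic Gaussian output head; the GRU encoder, toy data, and loss below are assumptions and not the SISVAE implementation.

```python
# Generic sketch (not the SISVAE implementation): a sequence model that emits a
# mean and a variance for every time-stamp, trained with a Gaussian likelihood.
import torch
import torch.nn as nn

class GaussianHead(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.mu = nn.Linear(hidden, 1)       # per-step mean
        self.log_var = nn.Linear(hidden, 1)  # per-step log-variance

    def forward(self, x):                    # x: (batch, T, 1)
        h, _ = self.rnn(x)
        return self.mu(h), self.log_var(h)

def nll(x, mu, log_var):
    """Per-step Gaussian negative log-likelihood, up to an additive constant."""
    return (0.5 * (log_var + (x - mu) ** 2 / log_var.exp())).mean()

model = GaussianHead()
x = torch.sin(torch.linspace(0, 12, 200)).reshape(1, 200, 1)  # toy series
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    mu, log_var = model(x)
    nll(x, mu, log_var).backward()
    opt.step()
# Time-stamps whose observations fall far outside the predicted (mu, var)
# band are the anomaly candidates.
```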
- Learning Continuous-Time Dynamics by Stochastic Differential Networks
We propose a flexible continuous-time recurrent neural network named Variational Stochastic Differential Network (VSDN).
VSDN embeds the complicated dynamics of sporadic time series using neural Stochastic Differential Equations (SDEs).
We show that VSDNs outperform state-of-the-art continuous-time deep learning models and achieve remarkable performance on prediction and interpolation tasks for sporadic time series.
arXiv Detail & Related papers (2020-06-11T01:40:34Z)
- Liquid Time-constant Networks
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations (a simplified sketch of the cell dynamics follows below).
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
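A heavily simplified take on such first-order dynamics is sketched below: each hidden unit follows a linear first-order ODE whose effective time constant is modulated by a learned gate, integrated with an explicit Euler step. The gating form and sizes are assumptions, not the authors' exact cell.

```python
# Simplified, assumed sketch of a liquid time-constant style update (not the
# paper's exact cell): linear first-order dynamics per unit, with the effective
# time constant modulated by a learned gate, integrated by explicit Euler.
import torch
import torch.nn as nn

class LTCStyleCell(nn.Module):
    def __init__(self, n_in: int, n_hidden: int, tau: float = 1.0, dt: float = 0.1):
        super().__init__()
        self.gate = nn.Linear(n_in + n_hidden, n_hidden)  # learned gate f(x, input)
        self.a = nn.Parameter(torch.zeros(n_hidden))      # learned target state
        self.tau, self.dt = tau, dt

    def forward(self, inp: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        f = torch.sigmoid(self.gate(torch.cat([inp, x], dim=-1)))
        # dx/dt = -(1/tau + f) * x + f * a: bounded, stable first-order dynamics
        dx = -(1.0 / self.tau + f) * x + f * self.a
        return x + self.dt * dx

cell = LTCStyleCell(n_in=3, n_hidden=8)
x = torch.zeros(1, 8)
for _ in range(50):                     # roll the dynamics over an input stream
    x = cell(torch.randn(1, 3), x)
```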
- Comparing recurrent and convolutional neural networks for predicting wave propagation
We investigate the performance of recurrent and convolutional deep neural network architectures to predict surface waves.
We improve on the long-term prediction over previous methods while keeping the inference time at a fraction of numerical simulations.
We also show that convolutional networks perform at least as well as recurrent networks in this task (a generic recursive rollout is sketched below).
arXiv Detail & Related papers (2020-02-20T19:15:04Z)
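This comparison mirrors the recursive task of the main paper. A generic autoregressive rollout with a convolutional one-step predictor can be sketched as follows; the architecture and shapes are assumed for illustration and are not taken from either paper.

```python
# Generic sketch of recursive (autoregressive) wave prediction with a CNN: the
# network maps a few past frames to the next frame, and its output is fed back
# as input. Architecture and shapes are assumptions, not the papers' networks.
import torch
import torch.nn as nn

n_past = 4                                      # frames given as input
step = nn.Sequential(                           # one-step predictor
    nn.Conv2d(n_past, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

frames = torch.randn(1, n_past, 64, 64)         # stand-in initial condition
preds = []
with torch.no_grad():
    for _ in range(50):                         # long rollouts amplify errors
        nxt = step(frames)                      # (1, 1, 64, 64)
        preds.append(nxt)
        frames = torch.cat([frames[:, 1:], nxt], dim=1)  # slide the window
```

Because each predicted frame becomes the next input, small run-to-run weight differences compound over the rollout, which is exactly why the recurrent analysis in the main paper amplifies the training variability.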
This list is automatically generated from the titles and abstracts of the papers on this site.