Discovery of slow variables in a class of multiscale stochastic systems via neural networks
- URL: http://arxiv.org/abs/2104.13911v1
- Date: Wed, 28 Apr 2021 17:48:25 GMT
- Title: Discovery of slow variables in a class of multiscale stochastic systems via neural networks
- Authors: Przemyslaw Zielinski and Jan S. Hesthaven
- Abstract summary: We propose a new method to encode in an artificial neural network a map that extracts the slow representation from the system.
We test the method on a number of examples that illustrate the ability to discover a correct slow representation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Finding a reduction of complex, high-dimensional dynamics to its essential,
low-dimensional "heart" remains a challenging yet necessary prerequisite for
designing efficient numerical approaches. Machine learning methods have the
potential to provide a general framework to automatically discover such
representations. In this paper, we consider multiscale stochastic systems with
local slow-fast time scale separation and propose a new method to encode in an
artificial neural network a map that extracts the slow representation from the
system. The architecture of the network consists of an encoder-decoder pair
that we train in a supervised manner to learn the appropriate low-dimensional
embedding in the bottleneck layer. We test the method on a number of examples
that illustrate the ability to discover a correct slow representation.
Moreover, we provide an error measure to assess the quality of the embedding
and demonstrate that pruning the network can pinpoint the essential coordinates
of the system from which to build the slow representation.
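To make the setup concrete, here is a minimal sketch of such an encoder-decoder with a low-dimensional bottleneck, assuming a PyTorch implementation; the layer widths, the one-dimensional bottleneck, and the synthetic training pairs are illustrative placeholders, not the authors' exact architecture or supervision signal.

```python
# Minimal sketch of the encoder-decoder idea (assumption: PyTorch; the
# layer widths, 1-D bottleneck, and synthetic data are illustrative only).
import torch
import torch.nn as nn

class BottleneckAutoencoder(nn.Module):
    def __init__(self, dim_in=3, dim_slow=1, width=64):
        super().__init__()
        # Encoder maps the full state to a candidate slow variable.
        self.encoder = nn.Sequential(
            nn.Linear(dim_in, width), nn.Tanh(),
            nn.Linear(width, dim_slow),
        )
        # Decoder reconstructs a target from the bottleneck value.
        self.decoder = nn.Sequential(
            nn.Linear(dim_slow, width), nn.Tanh(),
            nn.Linear(width, dim_in),
        )

    def forward(self, x):
        z = self.encoder(x)           # low-dimensional embedding
        return self.decoder(z), z

# Supervised training on (state, target) pairs; the "target" here is a
# stand-in for whatever supervision signal the method prescribes.
model = BottleneckAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 3)               # hypothetical sampled states
target = x                            # placeholder supervision
for _ in range(200):
    recon, _ = model(x)
    loss = nn.functional.mse_loss(recon, target)
    opt.zero_grad(); loss.backward(); opt.step()
```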
Related papers
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- HGFF: A Deep Reinforcement Learning Framework for Lifetime Maximization in Wireless Sensor Networks [5.4894758104028245]
We propose a new framework combining heterogeneous graph neural network with deep reinforcement learning to automatically construct the movement path of the sink.
We design ten types of static and dynamic maps to simulate different wireless sensor networks in the real world.
Our approach consistently outperforms the existing methods on all types of maps.
arXiv Detail & Related papers (2024-04-11T13:09:11Z) - NeuralFastLAS: Fast Logic-Based Learning from Raw Data [54.938128496934695]
Symbolic rule learners generate interpretable solutions; however, they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
arXiv Detail & Related papers (2023-10-08T12:33:42Z) - A Proper Orthogonal Decomposition approach for parameters reduction of
Single Shot Detector networks [0.0]
We propose a dimensionality reduction framework based on Proper Orthogonal Decomposition, a classical model order reduction technique.
We apply this framework to the SSD300 architecture on the PASCAL VOC dataset, demonstrating a reduction of the network dimension and a remarkable speedup in fine-tuning the network in a transfer-learning context.
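For intuition, POD amounts to a truncated SVD of a snapshot matrix; the NumPy sketch below shows the generic mechanics on random data and is not tied to SSD300 or the paper's pipeline. The snapshot matrix and the number of retained modes are assumptions.

```python
# Minimal POD sketch with NumPy: truncated SVD of a snapshot matrix.
# The snapshots and the number of retained modes are illustrative.
import numpy as np

snapshots = np.random.randn(1000, 50)   # 50 snapshots of a 1000-dim quantity
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)

k = 5                                   # retained POD modes (assumption)
basis = U[:, :k]                        # reduced basis
coeffs = basis.T @ snapshots            # reduced coordinates
reconstruction = basis @ coeffs         # lift back to full dimension

rel_err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
print(f"relative reconstruction error with {k} modes: {rel_err:.3e}")
```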
arXiv Detail & Related papers (2022-07-27T14:43:14Z) - FuNNscope: Visual microscope for interactively exploring the loss
landscape of fully connected neural networks [77.34726150561087]
We show how to explore high-dimensional landscape characteristics of neural networks.
We generalize observations on small neural networks to more complex systems.
An interactive dashboard opens up a number of possible application scenarios.
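A common way to visualize such landscapes, shown here as a generic sketch rather than FuNNscope's own code, is to evaluate the loss on a two-dimensional plane spanned by random directions in weight space; the toy network, data, and grid are illustrative.

```python
# Generic 2-D loss-landscape slice of a tiny fully connected network,
# evaluated on a plane spanned by two random weight-space directions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))                 # toy inputs
y = (X[:, 0] * X[:, 1] > 0).astype(float)    # toy binary labels

def loss(wflat):
    W1 = wflat[:16].reshape(2, 8); W2 = wflat[16:24]
    h = np.tanh(X @ W1)
    p = 1 / (1 + np.exp(-(h @ W2)))          # sigmoid output
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

w0 = rng.normal(size=24) * 0.5               # center of the slice
d1, d2 = rng.normal(size=24), rng.normal(size=24)
d1 /= np.linalg.norm(d1); d2 /= np.linalg.norm(d2)

grid = np.linspace(-1, 1, 25)
surface = np.array([[loss(w0 + a * d1 + b * d2) for a in grid] for b in grid])
print(surface.min(), surface.max())          # feed into any 2-D plotting tool
```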
arXiv Detail & Related papers (2022-04-09T16:41:53Z) - Dimensionality Reduction in Deep Learning via Kronecker Multi-layer
Architectures [4.836352379142503]
We propose a new deep learning architecture based on fast matrix multiplication of a Kronecker product decomposition.
We show that this architecture allows a neural network to be trained and implemented with a significant reduction in computational time and resources.
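The speedup such architectures exploit comes from the identity (A ⊗ B) vec(X) = vec(B X Aᵀ): the Kronecker operator is applied through two small multiplications without ever materializing it. A minimal NumPy check, with illustrative matrix sizes:

```python
# Kronecker trick: (A kron B) vec(X) = vec(B X A^T), so one big dense
# multiply is replaced by two small ones. Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(32, 32))
B = rng.normal(size=(32, 32))
X = rng.normal(size=(32, 32))            # input, as a matrix

# Naive path: materialize the 1024 x 1024 operator.
naive = np.kron(A, B) @ X.flatten(order="F")

# Fast path: two 32 x 32 multiplies, never forming the Kronecker product.
fast = (B @ X @ A.T).flatten(order="F")

assert np.allclose(naive, fast)
```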
arXiv Detail & Related papers (2022-04-08T19:54:52Z)
- An error-propagation spiking neural network compatible with neuromorphic processors [2.432141667343098]
We present a spike-based learning method that approximates back-propagation using local weight update mechanisms.
We introduce a network architecture that enables synaptic weight update mechanisms to back-propagate error signals.
This work represents a first step towards the design of ultra-low power mixed-signal neuromorphic processing systems.
arXiv Detail & Related papers (2021-04-12T07:21:08Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
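As a rough illustration of the node-versus-neighborhood contrastive idea (not the paper's actual model), the untrained sketch below scores each node by how well its embedding agrees with a summary of its own neighborhood relative to a random one; the weights and the aggregation scheme are assumptions.

```python
# Illustrative node-vs-subgraph contrastive scoring (untrained, random
# weights; not the paper's model). Low scores flag anomaly candidates.
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 4
X = rng.normal(size=(n, d))                  # node attributes
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.maximum(A, A.T); np.fill_diagonal(A, 0)

W = rng.normal(size=(d, d))                  # shared projection
M = rng.normal(size=(d, d))                  # bilinear discriminator

def subgraph_embedding(i):
    nbrs = np.nonzero(A[i])[0]
    pool = X[nbrs].mean(axis=0) if len(nbrs) else np.zeros(d)
    return np.tanh(pool @ W)                 # one-hop mean aggregation

def agreement(i, j):
    # How well node i's embedding matches node j's neighborhood summary.
    hi = np.tanh(X[i] @ W)
    return float(hi @ M @ subgraph_embedding(j))

# Score: positive-pair agreement minus a random negative pair's agreement.
scores = [agreement(i, i) - agreement(i, rng.integers(n)) for i in range(n)]
print(np.round(scores, 2))
```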
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Progressive Spatio-Temporal Graph Convolutional Network for Skeleton-Based Human Action Recognition [97.14064057840089]
We propose a method to automatically find a compact, problem-specific graph convolutional network in a progressive manner.
Experimental results on two datasets for skeleton-based human action recognition indicate that the proposed method has competitive or even better classification performance.
arXiv Detail & Related papers (2020-11-11T09:57:49Z)
- Supervised Learning with First-to-Spike Decoding in Multilayer Spiking Neural Networks [0.0]
We propose a new supervised learning method that can train multilayer spiking neural networks to solve classification problems.
The proposed learning rule supports multiple spikes fired by hidden neurons, yet remains stable by relying on the first-spike responses generated by a deterministic output layer.
We also explore several distinct spike-based encoding strategies in order to form compact representations of input data.
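First-to-spike decoding itself is simple to state: the predicted class is the output neuron that fires earliest. A minimal sketch with hand-made spike trains (illustrative, not the paper's network):

```python
# First-to-spike decoding: the predicted class is the output neuron that
# fires earliest; neurons that never fire get an infinite spike time.
import numpy as np

spikes = np.array([                 # rows: output neurons, cols: time steps
    [0, 0, 0, 1, 0, 1],
    [0, 1, 0, 0, 1, 0],             # fires first -> predicted class 1
    [0, 0, 0, 0, 0, 0],             # silent neuron
])

def first_spike_times(trains):
    t = np.argmax(trains, axis=1).astype(float)
    t[trains.sum(axis=1) == 0] = np.inf   # never fired
    return t

print(int(np.argmin(first_spike_times(spikes))))  # -> 1
```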
arXiv Detail & Related papers (2020-08-16T15:34:48Z)
- MetaSDF: Meta-learning Signed Distance Functions [85.81290552559817]
Generalizing across shapes with neural implicit representations amounts to learning priors over the respective function space.
We formalize learning of a shape space as a meta-learning problem and leverage gradient-based meta-learning algorithms to solve this task.
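A hedged sketch of a gradient-based meta-learning step in this spirit: one MAML-style inner update of a tiny implicit SDF network on synthetic circle data. The architecture, learning rates, and single inner step are illustrative assumptions, not MetaSDF's exact procedure.

```python
# MAML-style inner/outer step for a tiny implicit SDF network
# (illustrative: toy MLP, one inner step, synthetic circle SDF data).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
inner_lr = 1e-2
meta_opt = torch.optim.Adam(net.parameters(), lr=1e-4)

def sdf_circle(x):                       # ground-truth SDF of a unit circle
    return x.norm(dim=-1, keepdim=True) - 1.0

x_sup, x_qry = torch.randn(128, 2), torch.randn(128, 2)

# Inner loop: adapt a copy of the weights to this shape's support samples.
loss_sup = nn.functional.mse_loss(net(x_sup), sdf_circle(x_sup))
grads = torch.autograd.grad(loss_sup, list(net.parameters()), create_graph=True)
adapted = [p - inner_lr * g for p, g in zip(net.parameters(), grads)]

# Outer loss: evaluate the adapted weights on query samples by hand.
h = torch.relu(x_qry @ adapted[0].T + adapted[1])
pred = h @ adapted[2].T + adapted[3]
meta_loss = nn.functional.mse_loss(pred, sdf_circle(x_qry))
meta_opt.zero_grad(); meta_loss.backward(); meta_opt.step()
```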
arXiv Detail & Related papers (2020-06-17T05:14:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.