Deep Learning-based Analysis of Basins of Attraction
- URL: http://arxiv.org/abs/2309.15732v2
- Date: Wed, 14 Feb 2024 13:04:57 GMT
- Title: Deep Learning-based Analysis of Basins of Attraction
- Authors: David Valle, Alexandre Wagemakers, Miguel A. F. Sanjuán
- Abstract summary: This research addresses the challenge of characterizing the complexity and unpredictability of basins within various dynamical systems.
The main focus is on demonstrating the efficiency of convolutional neural networks (CNNs) in this field.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This research addresses the challenge of characterizing the complexity and
unpredictability of basins of attraction in various dynamical systems, with a focus
on demonstrating the efficiency of convolutional neural networks (CNNs) for this
task. Conventional methods become computationally demanding when analyzing
multiple basins of attraction across different parameters of dynamical systems.
We present an approach that employs CNN architectures for this purpose and show
that it outperforms conventional methods. We conduct a comparative analysis of
several CNN models, highlighting the effectiveness of the proposed
characterization method while acknowledging the validity of prior approaches.
The findings demonstrate the potential of CNNs and their significance for
exploring the diverse behaviors of dynamical systems.
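The abstract describes classifying basin-of-attraction images with CNNs but does not specify how such images are generated. As an illustration only, the sketch below builds a basin image for the Newton map of f(z) = z^3 - 1, a standard textbook system that is an assumption here, not one taken from the paper: each grid point is labeled by the root its Newton iteration converges to. The grid size, tolerance, and iteration budget are arbitrary illustrative choices.

```python
# Illustrative sketch (not the paper's code): label each point of a grid by the
# root its Newton iteration reaches, producing the kind of basin image a CNN
# could take as input.
import cmath

# The three cube roots of unity, i.e. the roots of f(z) = z^3 - 1.
ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

def basin_label(z, max_iter=50, tol=1e-6):
    """Iterate Newton's method for f(z) = z^3 - 1 from z; return the index of
    the root reached, or -1 if the iteration does not converge in time."""
    for _ in range(max_iter):
        if abs(z) < tol:  # f'(z) = 3z^2 ~ 0: Newton step undefined
            return -1
        z = z - (z**3 - 1) / (3 * z**2)
        for i, root in enumerate(ROOTS):
            if abs(z - root) < tol:
                return i
    return -1

def basin_image(n=64, extent=2.0):
    """Return an n x n grid of basin labels over the square [-extent, extent]^2."""
    step = 2 * extent / (n - 1)
    return [[basin_label(complex(-extent + x * step, -extent + y * step))
             for x in range(n)]
            for y in range(n)]
```

Stacking such label grids across many systems or parameter values yields an image dataset; the paper presumably trains its CNNs on analogous basin images, though its exact systems, resolutions, and encodings may differ.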
Related papers
- Exploiting Chaotic Dynamics as Deep Neural Networks [1.9282110216621833]
We show that the essence of chaos can be found in various state-of-the-art deep neural networks.
Our framework presents superior results in terms of accuracy, convergence speed, and efficiency.
This study offers a new path for the integration of chaos, which has long been overlooked in information processing.
arXiv Detail & Related papers (2024-05-29T22:03:23Z) - Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z) - Direct Learning-Based Deep Spiking Neural Networks: A Review [17.255056657521195]
The spiking neural network (SNN) is a promising brain-inspired computational model with a binary spike-based information transmission mechanism.
In this paper, we present a survey of direct learning-based deep SNN works, mainly categorized into accuracy improvement methods, efficiency improvement methods, and temporal dynamics utilization methods.
arXiv Detail & Related papers (2023-05-31T10:32:16Z) - Learning low-dimensional dynamics from whole-brain data improves task capture [2.82277518679026]
We introduce a novel approach to learning low-dimensional approximations of neural dynamics by using a sequential variational autoencoder (SVAE).
Our method finds smooth dynamics that can predict cognitive processes with accuracy higher than classical methods.
We evaluate our approach on various task-fMRI datasets, including motor, working memory, and relational processing tasks.
arXiv Detail & Related papers (2023-05-18T18:43:13Z) - Stretched and measured neural predictions of complex network dynamics [2.1024950052120417]
Data-driven approximations of differential equations present a promising alternative to traditional methods for uncovering a model of dynamical systems.
A recently employed machine learning tool for studying dynamics is neural networks, which can be used for data-driven solution finding or discovery of differential equations.
We show that extending the model's generalizability beyond traditional statistical learning theory limits is feasible.
arXiv Detail & Related papers (2023-01-12T09:44:59Z) - Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both tractable variational learning algorithm and effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z) - Neural DAEs: Constrained neural networks [4.212663349859165]
We implement related methods in residual neural networks, despite some fundamental scenario differences.
We show when to use which method based on experiments involving simulations of multi-body pendulums and molecular dynamics scenarios.
Several of our methods are easy to implement in existing code and have limited impact on training performance while giving significant gains at inference time.
arXiv Detail & Related papers (2022-11-25T18:58:28Z) - Constructing Neural Network-Based Models for Simulating Dynamical Systems [59.0861954179401]
Data-driven modeling is an alternative paradigm that seeks to learn an approximation of the dynamics of a system using observations of the true system.
This paper provides a survey of the different ways to construct models of dynamical systems using neural networks.
In addition to the basic overview, we review the related literature and outline the most significant challenges from numerical simulations that this modeling paradigm must overcome.
arXiv Detail & Related papers (2021-11-02T10:51:42Z) - Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z) - Inter-layer Information Similarity Assessment of Deep Neural Networks Via Topological Similarity and Persistence Analysis of Data Neighbour Dynamics [93.4221402881609]
The quantitative analysis of information structure through a deep neural network (DNN) can unveil new insights into the theoretical performance of DNN architectures.
Inspired by both LS and ID strategies for quantitative information structure analysis, we introduce two novel complementary methods for inter-layer information similarity assessment.
We demonstrate their efficacy in this study by performing analysis on a deep convolutional neural network architecture on image data.
arXiv Detail & Related papers (2020-12-07T15:34:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.