A Deep Learning Approach to Analyzing Continuous-Time Systems
- URL: http://arxiv.org/abs/2209.12128v2
- Date: Wed, 19 Apr 2023 18:54:36 GMT
- Title: A Deep Learning Approach to Analyzing Continuous-Time Systems
- Authors: Cory Shain and William Schuler
- Abstract summary: We show that deep learning can be used to analyze complex processes.
Our approach relaxes standard assumptions that are implausible for many natural systems.
We demonstrate substantial improvements on behavioral and neuroimaging data.
- Score: 20.89961728689037
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Scientists often use observational time series data to study complex natural
processes, but regression analyses often assume simplistic dynamics. Recent
advances in deep learning have yielded startling improvements to the
performance of models of complex processes, but deep learning is generally not
used for scientific analysis. Here we show that deep learning can be used to
analyze complex processes, providing flexible function approximation while
preserving interpretability. Our approach relaxes standard simplifying
assumptions (e.g., linearity, stationarity, and homoscedasticity) that are
implausible for many natural systems and may critically affect the
interpretation of data. We evaluate our model on incremental human language
processing, a domain with complex continuous dynamics. We demonstrate
substantial improvements on behavioral and neuroimaging data, and we show that
our model enables discovery of novel patterns in exploratory analyses, controls
for diverse confounds in confirmatory analyses, and opens up research questions
that are otherwise hard to study.
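A minimal, hedged sketch can make the idea concrete. The PyTorch code below is not the authors' implementation; the class and parameter names are invented for illustration. It shows one way a small network could map a continuous time offset and event-level predictors to an impulse-response weight while also predicting an input-dependent log-variance, thereby relaxing linearity (nonlinear kernel network), stationarity (the response depends on event properties), and homoscedasticity (predicted rather than fixed variance).

```python
# Illustrative sketch only (not the paper's code): a neural continuous-time
# impulse-response model with an input-dependent output variance.
import torch
import torch.nn as nn

class ContinuousTimeResponse(nn.Module):
    def __init__(self, n_predictors: int, hidden: int = 32):
        super().__init__()
        # Kernel network: (time offset, event predictors) -> response weight.
        self.kernel = nn.Sequential(
            nn.Linear(1 + n_predictors, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )
        # Heteroscedastic head: log-variance from the same features.
        self.log_var = nn.Sequential(
            nn.Linear(1 + n_predictors, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, offsets, predictors):
        # offsets: (batch, n_events, 1), time elapsed since each past event
        # predictors: (batch, n_events, n_predictors), event properties
        feats = torch.cat([offsets, predictors], dim=-1)
        mean = self.kernel(feats).sum(dim=1).squeeze(-1)       # sum over events
        log_var = self.log_var(feats).mean(dim=1).squeeze(-1)  # pooled variance
        return mean, log_var

def gaussian_nll(mean, log_var, target):
    # Variance is predicted per input, so homoscedasticity is not assumed.
    return 0.5 * (log_var + (target - mean) ** 2 / log_var.exp()).mean()
```

Fitting such a model with the negative log-likelihood above yields both an inspectable response kernel and uncertainty estimates that vary with the input.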
Related papers
- Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond [61.18736646013446]
In pursuit of a deeper understanding of deep learning's surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network.
Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena.
arXiv Detail & Related papers (2024-10-31T22:54:34Z)
- Deep End-to-End Survival Analysis with Temporal Consistency [49.77103348208835]
We present a novel Survival Analysis algorithm designed to efficiently handle large-scale longitudinal data.
A central idea in our method is temporal consistency, a hypothesis that past and future outcomes in the data evolve smoothly over time.
Our framework uniquely incorporates temporal consistency into training on large datasets, providing a stable training signal.
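Read literally, "past and future outcomes evolve smoothly over time" suggests a smoothness penalty on consecutive predictions. The sketch below is a guess at such a training signal, not the paper's algorithm; `lambda_tc` is an invented weight.

```python
# Hypothetical sketch: temporal consistency as a smoothness regularizer.
import torch

def temporal_consistency_loss(preds: torch.Tensor) -> torch.Tensor:
    # preds: (batch, time) per-step outputs, e.g. survival probabilities.
    # Penalize large jumps between consecutive time steps.
    return ((preds[:, 1:] - preds[:, :-1]) ** 2).mean()

# Combined objective (lambda_tc is a tunable weight, assumed here):
# loss = task_loss + lambda_tc * temporal_consistency_loss(preds)
```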
arXiv Detail & Related papers (2024-10-09T11:37:09Z)
- Temporal receptive field in dynamic graph learning: A comprehensive analysis [15.161255747900968]
We present a comprehensive analysis of the temporal receptive field in dynamic graph learning.
Our results demonstrate that an appropriately chosen temporal receptive field can significantly enhance model performance.
For some models, overly large windows may introduce noise and reduce accuracy.
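As a concrete, hedged picture of what the temporal receptive field controls, the snippet below restricts a dynamic-graph model's input to events inside a window before prediction time; the event format is an assumption, not the paper's interface.

```python
# Hypothetical sketch: selecting a temporal receptive field of width `window`.
def temporal_receptive_field(events, t, window):
    # events: iterable of (timestamp, src, dst) edge events (assumed format).
    # Keep only events inside the window [t - window, t).
    return [e for e in events if t - window <= e[0] < t]

# A wider window admits more history but may also admit stale, noisy events:
recent = temporal_receptive_field([(1, 0, 1), (30, 1, 2)], t=31, window=24)
# -> [(30, 1, 2)]
```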
arXiv Detail & Related papers (2024-07-17T07:46:53Z)
- Improving Self-supervised Molecular Representation Learning using Persistent Homology [6.263470141349622]
Self-supervised learning (SSL) has great potential for molecular representation learning.
In this paper, we study SSL based on persistent homology (PH), a mathematical tool for modeling topological features of data that persist across multiple scales.
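As a hedged illustration of how PH could feed an SSL objective, the sketch below computes persistence diagrams with the ripser package and compares them with a persim diagram distance; the package choice and the exact role of the distance are assumptions, not the paper's pipeline.

```python
import numpy as np
from ripser import ripser        # pip install ripser persim
from persim import wasserstein

def persistence_diagram(points: np.ndarray) -> np.ndarray:
    # Vietoris-Rips persistent homology; dimension-1 features (loops).
    return ripser(points, maxdim=1)['dgms'][1]

# Random stand-ins for two molecular conformations (20 atoms in 3-D):
conf_a, conf_b = np.random.rand(20, 3), np.random.rand(20, 3)

# A topological distance an SSL objective could try to preserve in the
# learned embedding space:
d_topo = wasserstein(persistence_diagram(conf_a), persistence_diagram(conf_b))
```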
arXiv Detail & Related papers (2023-11-29T02:58:30Z)
- Learning Latent Dynamics via Invariant Decomposition and (Spatio-)Temporal Transformers [0.6767885381740952]
We propose a method for learning dynamical systems from high-dimensional empirical data.
We focus on the setting in which data are available from multiple different instances of a system.
We study behaviour through simple theoretical analyses and extensive experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2023-06-21T07:52:07Z)
- Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z)
- Critical Learning Periods for Multisensory Integration in Deep Networks [112.40005682521638]
We show that the ability of a neural network to integrate information from diverse sources hinges critically on being exposed to properly correlated signals during the early phases of training.
We show that critical periods arise from the complex and unstable early transient dynamics, which are decisive for the final performance of the trained system and its learned representations.
arXiv Detail & Related papers (2022-10-06T23:50:38Z)
- An Information-Theoretic Framework for Supervised Learning [22.280001450122175]
We propose a novel information-theoretic framework with its own notions of regret and sample complexity.
We study the sample complexity of learning from data generated by deep neural networks with ReLU activation units.
We conclude by corroborating our theoretical results with experimental analysis of random single-hidden-layer neural networks.
arXiv Detail & Related papers (2022-03-01T05:58:28Z)
- Learning continuous models for continuous physics [94.42705784823997]
We develop a test based on numerical analysis theory to validate machine learning models for science and engineering applications.
Our results illustrate how principled numerical analysis methods can be coupled with existing ML training/testing methodologies to validate models for science and engineering applications.
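One classical numerical-analysis check in this spirit is a self-convergence test: a model that has truly learned continuous dynamics should produce rollouts that converge as the step size shrinks. The sketch below illustrates the idea under an assumed `step(x, dt)` interface; it is not the paper's test.

```python
import numpy as np

def rollout(step, x0, t_end, dt):
    # Integrate the (learned) dynamics from x0 to t_end with step size dt.
    x = np.asarray(x0, dtype=float)
    for _ in range(int(round(t_end / dt))):
        x = step(x, dt)
    return x

def convergence_ratios(step, x0, t_end, dts):
    # Successive refinement: for a convergent order-p method the error
    # ratios approach 2**p when each dt halves the previous one.
    finals = [rollout(step, x0, t_end, dt) for dt in dts]
    errs = [np.linalg.norm(a - b) for a, b in zip(finals, finals[1:])]
    return [e1 / e2 for e1, e2 in zip(errs, errs[1:])]

# Example: forward Euler on dx/dt = -x as a stand-in for a learned model;
# ratios near 2 indicate clean first-order convergence.
ratios = convergence_ratios(lambda x, dt: x + dt * (-x),
                            x0=[1.0], t_end=1.0, dts=[0.1, 0.05, 0.025])
```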
arXiv Detail & Related papers (2022-02-17T07:56:46Z)
- Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
arXiv Detail & Related papers (2021-12-15T20:09:20Z)
- Deep learning of contagion dynamics on complex networks [0.0]
We propose a complementary approach based on deep learning to build effective models of contagion dynamics on networks.
By allowing simulations on arbitrary network structures, our approach makes it possible to explore the properties of the learned dynamics beyond the training data.
Our results demonstrate how deep learning offers a new and complementary perspective to build effective models of contagion dynamics on networks.
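The transferability claim can be pictured with a local update rule: if the model maps a node's state and its infected-neighbor count to an infection probability, the same rule can be simulated on any graph. In the hedged sketch below, a hand-written rule stands in for the trained network; the interface is an assumption.

```python
import random

def infection_prob(state, n_infected_neighbors, beta=0.3):
    # Stand-in for a learned local rule: a susceptible node is infected with
    # probability 1 - (1 - beta)**k given k infected neighbors (SI dynamics).
    if state == 1:  # already infected
        return 1.0
    return 1.0 - (1.0 - beta) ** n_infected_neighbors

def simulate_step(adj, states):
    # adj: {node: [neighbors]}; states: {node: 0 (susceptible), 1 (infected)}.
    # Because the rule is local, it runs on arbitrary network structures.
    new_states = {}
    for node, nbrs in adj.items():
        k = sum(states[n] for n in nbrs)
        new_states[node] = int(random.random() < infection_prob(states[node], k))
    return new_states

# Example on a small path graph:
states = simulate_step({0: [1], 1: [0, 2], 2: [1]}, {0: 1, 1: 0, 2: 0})
```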
arXiv Detail & Related papers (2020-06-09T17:18:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.