Joint trajectory and network inference via reference fitting
- URL: http://arxiv.org/abs/2409.06879v1
- Date: Tue, 10 Sep 2024 21:49:57 GMT
- Title: Joint trajectory and network inference via reference fitting
- Authors: Stephen Y Zhang,
- Abstract summary: We propose an approach for leveraging both dynamical and perturbational single cell data to jointly learn cellular trajectories and power network inference.
Our approach is motivated by min-entropy estimation for stochastic dynamics and can infer directed and signed networks from time-stamped single cell snapshots.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Network inference, the task of reconstructing interactions in a complex system from experimental observables, is a central yet extremely challenging problem in systems biology. While much progress has been made in the last two decades, network inference remains an open problem. For systems observed at steady state, limited insights are available since temporal information is unavailable and thus causal information is lost. Two common avenues for gaining causal insights into system behaviour are to leverage temporal dynamics in the form of trajectories, and to apply interventions such as knock-out perturbations. We propose an approach for leveraging both dynamical and perturbational single cell data to jointly learn cellular trajectories and power network inference. Our approach is motivated by min-entropy estimation for stochastic dynamics and can infer directed and signed networks from time-stamped single cell snapshots.
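As a rough, hypothetical illustration of the kind of pipeline the abstract describes (not the paper's reference-fitting algorithm), the NumPy sketch below pairs consecutive time-stamped snapshots with entropic optimal transport, a common proxy for min-entropy trajectory estimation, and then fits a signed, directed interaction matrix under an assumed linear drift dx/dt = A x. All function names and parameters are illustrative, and the perturbational (knock-out) side of the data is omitted.

```python
import numpy as np

def sinkhorn_coupling(X0, X1, eps=0.5, n_iter=200):
    """Entropic OT coupling between two unpaired snapshots (uniform marginals)."""
    C = ((X0[:, None, :] - X1[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    K = np.exp(-C / eps)                                   # eps is data-scale dependent
    a = np.full(len(X0), 1.0 / len(X0))
    b = np.full(len(X1), 1.0 / len(X1))
    v = np.ones(len(X1))
    for _ in range(n_iter):                                # Sinkhorn iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]                     # coupling matrix P

def fit_interaction_matrix(snapshots, times, eps=0.5, ridge=1e-2):
    """Regress OT-implied cell velocities on states, assuming linear drift dx/dt = A x."""
    X_all, V_all = [], []
    for k in range(len(snapshots) - 1):
        X0, X1 = snapshots[k], snapshots[k + 1]
        dt = times[k + 1] - times[k]
        P = sinkhorn_coupling(X0, X1, eps)
        X1_matched = (P @ X1) / P.sum(axis=1, keepdims=True)  # barycentric projection
        V_all.append((X1_matched - X0) / dt)                  # finite-difference velocity
        X_all.append(X0)
    X, V = np.vstack(X_all), np.vstack(V_all)
    d = X.shape[1]
    B = np.linalg.solve(X.T @ X + ridge * np.eye(d), X.T @ V)  # ridge least squares
    return B.T                                  # A[i, j]: signed effect of gene j on gene i

# Toy usage: three unpaired snapshots of cells driven by a known interaction matrix.
rng = np.random.default_rng(0)
A_true = np.array([[-1.0, 2.0], [0.0, -1.0]])               # gene 2 activates gene 1
X = rng.normal(size=(300, 2))
snaps, ts = [X.copy()], [0.0]
for t in (0.5, 1.0):
    X = X + 0.5 * (X @ A_true.T) + 0.1 * rng.normal(size=X.shape)  # Euler step of the SDE
    snaps.append(rng.permutation(X))            # shuffle rows: snapshots carry no pairing
    ts.append(t)
A_hat = fit_interaction_matrix(snaps, ts)       # estimated signed, directed matrix
```

This toy only traces the data flow from unpaired snapshots to a signed, directed matrix; plain OT matching has known identifiability limits for directionality, which is part of the motivation for combining temporal snapshots with interventional data as the paper proposes.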
Related papers
- Inferring the time-varying coupling of dynamical systems with temporal convolutional autoencoders [0.0]
We introduce Temporal Autoencoders for Causal Inference (TACI)
TACI combines a new surrogate data metric for assessing causal interactions with a novel two-headed machine learning architecture.
We demonstrate TACI's ability to accurately quantify dynamic causal interactions across a variety of systems.
arXiv Detail & Related papers (2024-06-05T12:51:20Z) - LINOCS: Lookahead Inference of Networked Operators for Continuous Stability [4.508868068781057]
We introduce Lookahead-driven Inference of Networked Operators for Continuous Stability (LINOCS)
LINOCS is a robust learning procedure for identifying hidden dynamical interactions in noisy time-series data.
We demonstrate LINOCS' ability to recover the ground truth dynamical operators underlying synthetic time-series data.
arXiv Detail & Related papers (2024-04-28T18:16:58Z) - Inferring Relational Potentials in Interacting Systems [56.498417950856904]
We propose Neural Interaction Inference with Potentials (NIIP) as an alternative approach to discover such interactions.
NIIP assigns low energy to the subset of trajectories which respect the relational constraints observed.
It allows trajectory manipulation, such as interchanging interaction types across separately trained models, as well as trajectory forecasting.
arXiv Detail & Related papers (2023-10-23T00:44:17Z) - Interactive System-wise Anomaly Detection [66.3766756452743]
Anomaly detection plays a fundamental role in various applications.
Existing methods struggle in scenarios where each instance is an entire system whose characteristics are not directly observed as data.
We develop an end-to-end approach which includes an encoder-decoder module that learns system embeddings.
arXiv Detail & Related papers (2023-04-21T02:20:24Z) - Critical Learning Periods for Multisensory Integration in Deep Networks [112.40005682521638]
We show that the ability of a neural network to integrate information from diverse sources hinges critically on being exposed to properly correlated signals during the early phases of training.
We show that critical periods arise from the complex and unstable early transient dynamics, which are decisive for the final performance of the trained system and its learned representations.
arXiv Detail & Related papers (2022-10-06T23:50:38Z) - Data-Driven Blind Synchronization and Interference Rejection for Digital Communication Signals [98.95383921866096]
We study the potential of data-driven deep learning methods for separation of two communication signals from an observation of their mixture.
We show that capturing high-resolution temporal structures (nonstationarities) leads to substantial performance gains.
We propose a domain-informed neural network (NN) design that is able to improve upon both "off-the-shelf" NNs and classical detection and interference rejection methods.
arXiv Detail & Related papers (2022-09-11T14:10:37Z) - Input correlations impede suppression of chaos and learning in balanced rate networks [58.720142291102135]
Information encoding and learning in neural circuits depend on how well time-varying stimuli can control spontaneous network activity.
We show that in firing-rate networks in the balanced state, external control of recurrent dynamics strongly depends on correlations in the input.
arXiv Detail & Related papers (2022-01-24T19:20:49Z) - Divide and Rule: Recurrent Partitioned Network for Dynamic Processes [25.855428321990328]
Many dynamic processes involve interacting variables, from physical systems to sociological analysis.
Our goal is to represent a system with a part-whole hierarchy and discover the implied dependencies among intra-system variables.
The proposed architecture consists of (i) a perceptive module that extracts a hierarchical and temporally consistent representation of the observation at multiple levels, (ii) a deductive module for determining the relational connection between neurons at each level, and (iii) a statistical module that can predict the future by conditioning on the temporal distributional estimation.
arXiv Detail & Related papers (2021-06-01T06:45:56Z) - Learning Contact Dynamics using Physically Structured Neural Networks [81.73947303886753]
We use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects.
We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations.
Our results indicate that an idealised form of touch feedback is a key component of making this learning problem tractable.
arXiv Detail & Related papers (2021-02-22T17:33:51Z) - Estimating Linear Dynamical Networks of Cyclostationary Processes [0.0]
We present a novel algorithm for guaranteed topology learning in networks excited by cyclostationary processes.
Unlike prior work, the framework applies to linear dynamical systems with complex-valued dependencies.
In the second part of the article, we analyze conditions for consistent topology learning for bidirected radial networks when a subset of the network is unobserved.
arXiv Detail & Related papers (2020-09-26T18:54:50Z) - Detecting structural perturbations from time series with deep learning [0.0]
We present a graph neural network approach to infer structural perturbations from functional time series.
We show our data-driven approach outperforms typical reconstruction methods.
This work uncovers a practical avenue to study the resilience of real-world complex systems.
arXiv Detail & Related papers (2020-06-09T13:08:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.