Towards Learned Simulators for Cell Migration
- URL: http://arxiv.org/abs/2210.01123v1
- Date: Sun, 2 Oct 2022 14:01:09 GMT
- Title: Towards Learned Simulators for Cell Migration
- Authors: Koen Minartz, Yoeri Poels, Vlado Menkovski
- Abstract summary: A neural simulator for cellular dynamics can augment lab experiments and traditional methods to enhance our understanding of a cell's interaction with its physical environment.
We propose an autoregressive probabilistic model that can reproduce dynamics of single cell migration.
We observe that standard single-step training methods not only lead to inconsistent rollout stability, but also fail to accurately capture the stochastic aspects of the dynamics.
- Score: 2.5331228143087565
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Simulators driven by deep learning are gaining popularity as a tool for
efficiently emulating accurate but expensive numerical simulators. Successful
applications of such neural simulators can be found in the domains of physics,
chemistry, and structural biology, amongst others. Likewise, a neural simulator
for cellular dynamics can augment lab experiments and traditional computational
methods to enhance our understanding of a cell's interaction with its physical
environment. In this work, we propose an autoregressive probabilistic model
that can reproduce spatiotemporal dynamics of single cell migration,
traditionally simulated with the Cellular Potts model. We observe that standard
single-step training methods not only lead to inconsistent rollout
stability, but also fail to accurately capture the stochastic aspects of the
dynamics, and we propose training strategies to mitigate these issues. Our
evaluation on two proof-of-concept experimental scenarios shows that neural
methods have the potential to faithfully simulate stochastic cellular dynamics
at least an order of magnitude faster than a state-of-the-art implementation of
the Cellular Potts model.
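To make the abstract's contrast between single-step and rollout training concrete, the sketch below compares a teacher-forced single-step likelihood loss with a loss computed on the model's own rollouts. It is a minimal illustration under assumptions, not the authors' model: the Gaussian next-step network, the flattened state vector, and all shapes and hyperparameters are placeholders.

```python
# Minimal sketch (not the paper's architecture): an autoregressive probabilistic
# next-step model p(x_{t+1} | x_t) trained either on single steps (teacher
# forcing) or on its own multi-step rollouts. The flattened state vector is a
# stand-in for a cell configuration; all shapes here are illustrative.
import torch
import torch.nn as nn


class NextStepModel(nn.Module):
    def __init__(self, state_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * state_dim),  # predicts mean and log-variance
        )

    def forward(self, x: torch.Tensor) -> torch.distributions.Normal:
        mu, log_var = self.net(x).chunk(2, dim=-1)
        return torch.distributions.Normal(mu, torch.exp(0.5 * log_var))


def single_step_loss(model: NextStepModel, traj: torch.Tensor) -> torch.Tensor:
    # traj: (T, B, D); every prediction conditions on the ground-truth state.
    dist = model(traj[:-1])
    return -dist.log_prob(traj[1:]).mean()


def rollout_loss(model: NextStepModel, traj: torch.Tensor) -> torch.Tensor:
    # Feed the model its own samples so training matches autoregressive use.
    x, nll = traj[0], torch.zeros(())
    for t in range(1, traj.shape[0]):
        dist = model(x)
        nll = nll - dist.log_prob(traj[t]).mean()
        x = dist.rsample()  # reparameterised sample keeps gradients flowing
    return nll / (traj.shape[0] - 1)


if __name__ == "__main__":
    T, B, D = 10, 4, 32
    traj = torch.randn(T, B, D)           # placeholder trajectories
    model = NextStepModel(D)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss = rollout_loss(model, traj)      # swap in single_step_loss to compare
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(float(loss))
```

Training on rollouts exposes the model to its own sampled histories rather than only ground-truth ones, which is one common way to target the stability and stochasticity issues the abstract mentions; the paper's specific training strategies are not reproduced here.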
Related papers
- Neural Material Adaptor for Visual Grounding of Intrinsic Dynamics [48.99021224773799]
We propose the Neural Material Adaptor (NeuMA), which integrates existing physical laws with learned corrections.
We also propose Particle-GS, a particle-driven 3D Gaussian Splatting variant that bridges simulation and observed images.
arXiv Detail & Related papers (2024-10-10T17:43:36Z)
- Learning Quadruped Locomotion Using Differentiable Simulation [31.80380408663424]
Differentiable simulation promises fast convergence and stable training, but applying it to legged locomotion poses challenges.
This work proposes a new differentiable simulation framework to overcome these challenges.
Our framework enables learning quadruped walking in simulation in minutes without parallelization.
arXiv Detail & Related papers (2024-03-21T22:18:59Z)
- A Multi-Grained Symmetric Differential Equation Model for Learning Protein-Ligand Binding Dynamics [73.35846234413611]
In drug discovery, molecular dynamics (MD) simulation provides a powerful tool for predicting binding affinities, estimating transport properties, and exploring pocket sites.
We propose NeuralMD, the first machine learning (ML) surrogate that can facilitate numerical MD and provide accurate simulations in protein-ligand binding dynamics.
We demonstrate the efficiency and effectiveness of NeuralMD, achieving over 1K$\times$ speedup compared to standard numerical MD simulations.
arXiv Detail & Related papers (2024-01-26T09:35:17Z)
- Rethinking materials simulations: Blending direct numerical simulations with neural operators [1.6874375111244329]
We develop a new method that blends numerical solvers with neural operators to accelerate such simulations.
We demonstrate the effectiveness of this framework on simulations of microstructure evolution during physical vapor deposition.
arXiv Detail & Related papers (2023-12-08T23:44:54Z)
- Real-time simulation of viscoelastic tissue behavior with physics-guided deep learning [0.8250374560598492]
We propose a deep learning method for predicting displacement fields of soft tissues with viscoelastic properties.
The proposed method achieves better accuracy than conventional CNN models.
It is hoped that this investigation will help fill the gap in applying deep learning to virtual reality.
arXiv Detail & Related papers (2023-01-11T18:17:10Z)
- Continual learning autoencoder training for a particle-in-cell simulation via streaming [52.77024349608834]
The upcoming exascale era will provide a new generation of physics simulations with high resolution.
This high resolution will impact the training of machine learning models, since storing such large amounts of simulation data on disk is nearly impossible.
This work presents an approach that trains a neural network concurrently with a running simulation, without storing data on disk.
arXiv Detail & Related papers (2022-11-09T09:55:14Z)
- Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamics and/or simulator model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamics models, simulations, and actual hardware.
arXiv Detail & Related papers (2022-09-07T15:15:12Z)
- Constraint-based graph network simulator [9.462808515258464]
We present a framework for constraint-based learned simulation.
We implement our method using a graph neural network as the constraint function and gradient descent as the constraint solver (a simplified sketch of this idea appears after this list).
Our model achieves better or comparable performance to top learned simulators.
arXiv Detail & Related papers (2021-12-16T19:15:11Z)
- Mapping and Validating a Point Neuron Model on Intel's Neuromorphic Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z)
- Deep Bayesian Active Learning for Accelerating Stochastic Simulation [74.58219903138301]
Interactive Neural Process (INP) is a deep Bayesian active learning framework for stochastic simulations.
For active learning, we propose a novel acquisition function, Latent Information Gain (LIG), calculated in the latent space of NP-based models.
The results demonstrate that STNP outperforms the baselines in the learning setting and that LIG achieves state-of-the-art performance for active learning.
arXiv Detail & Related papers (2021-06-05T01:31:51Z)
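The constraint-based graph network simulator entry above describes its mechanism concretely enough to sketch: a learned function scores how well a proposed next state satisfies the dynamics, and the next state is obtained by gradient descent on that score. The snippet below is only an illustration of this idea, with a small MLP standing in for the paper's graph neural network and an untrained constraint; all names, dimensions, and hyperparameters are assumptions.

```python
# Simplified illustration of constraint-based learned simulation: a learned
# scalar "violation" c(x_t, x_{t+1}) is minimised by gradient descent on the
# proposed next state. An MLP replaces the paper's graph network here.
import torch
import torch.nn as nn


class ConstraintNet(nn.Module):
    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x_t: torch.Tensor, x_next: torch.Tensor) -> torch.Tensor:
        # Non-negative violation score for a (current, proposed) state pair.
        return self.net(torch.cat([x_t, x_next], dim=-1)).pow(2).mean()


def solve_next_state(constraint: ConstraintNet, x_t: torch.Tensor,
                     n_iters: int = 50, lr: float = 0.1) -> torch.Tensor:
    # Inner solver: start from the current state and descend the violation.
    x_next = x_t.clone().requires_grad_(True)
    for _ in range(n_iters):
        violation = constraint(x_t, x_next)
        (grad,) = torch.autograd.grad(violation, x_next)
        x_next = (x_next - lr * grad).detach().requires_grad_(True)
    return x_next.detach()


if __name__ == "__main__":
    D = 16
    constraint = ConstraintNet(D)   # untrained, for illustration only
    x_t = torch.randn(1, D)
    print(solve_next_state(constraint, x_t).shape)
```

In the paper, the constraint function is trained so that observed transitions incur low violation; the sketch only shows the inference-time solve and omits how the solver interacts with training.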
This list is automatically generated from the titles and abstracts of the papers on this site.