Dynamic Structured Illumination Microscopy with a Neural Space-time Model
- URL: http://arxiv.org/abs/2206.01397v1
- Date: Fri, 3 Jun 2022 05:24:06 GMT
- Title: Dynamic Structured Illumination Microscopy with a Neural Space-time Model
- Authors: Ruiming Cao, Fanglin Linda Liu, Li-Hao Yeh, Laura Waller
- Abstract summary: We propose a new method, Speckle Flow SIM, that models sample motion during the data capture in order to reconstruct dynamic scenes with super-resolution.
We demonstrated that Speckle Flow SIM can reconstruct a dynamic scene with deformable motion at 1.88x the diffraction-limited resolution in experiment.
- Score: 5.048742886625779
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Structured illumination microscopy (SIM) reconstructs a super-resolved image
from multiple raw images; hence, acquisition speed is limited, making it
unsuitable for dynamic scenes. We propose a new method, Speckle Flow SIM, that
models sample motion during the data capture in order to reconstruct dynamic
scenes with super-resolution. Speckle Flow SIM uses fixed speckle illumination
and relies on sample motion to capture a sequence of raw images. Then, the
spatio-temporal relationship of the dynamic scene is modeled using a neural
space-time model with coordinate-based multi-layer perceptrons (MLPs), and the
motion dynamics and the super-resolved scene are jointly recovered. We
validated Speckle Flow SIM in simulation and built a simple, inexpensive
experimental setup with off-the-shelf components. We demonstrated that Speckle
Flow SIM can reconstruct a dynamic scene with deformable motion and 1.88x the
diffraction-limited resolution in experiment.
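The neural space-time model described in the abstract can be sketched with coordinate-based MLPs: one network maps a space-time coordinate (x, y, t) to a motion displacement, and a second maps the motion-corrected coordinate to scene intensity, so both are recovered jointly. The sketch below is a minimal illustration of that idea in PyTorch; the network sizes, Fourier-feature settings, and displacement-based warping are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class FourierFeatures(nn.Module):
    """Random Fourier features, commonly used so coordinate MLPs
    can fit high-frequency image content."""
    def __init__(self, in_dim, num_features=64, scale=10.0):
        super().__init__()
        self.register_buffer("B", torch.randn(in_dim, num_features) * scale)

    def forward(self, coords):
        proj = 2 * torch.pi * coords @ self.B
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

def mlp(in_dim, out_dim, hidden=128, depth=3):
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

class NeuralSpaceTimeModel(nn.Module):
    """Motion MLP: (x, y, t) -> displacement; scene MLP: warped (x, y) -> intensity."""
    def __init__(self):
        super().__init__()
        self.motion_ff = FourierFeatures(3)   # encodes (x, y, t)
        self.motion_mlp = mlp(128, 2)         # predicts displacement (dx, dy)
        self.scene_ff = FourierFeatures(2)    # encodes motion-corrected (x, y)
        self.scene_mlp = mlp(128, 1)          # predicts scene intensity

    def forward(self, xy, t):
        xyt = torch.cat([xy, t], dim=-1)
        disp = self.motion_mlp(self.motion_ff(xyt))   # motion at (x, y, t)
        warped = xy + disp                            # motion-corrected coordinates
        return self.scene_mlp(self.scene_ff(warped))  # shared super-resolved scene

model = NeuralSpaceTimeModel()
xy = torch.rand(8, 2)     # spatial coordinates in [0, 1)^2
t = torch.rand(8, 1)      # time stamps in [0, 1)
intensity = model(xy, t)  # (8, 1) predicted intensities
```

In training, such a model would be rendered through the (fixed) speckle illumination and imaging forward model at each frame's time stamp and fit to the raw images, so the motion and scene networks are optimized jointly.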
Related papers
- Physics-guided Shape-from-Template: Monocular Video Perception through Neural Surrogate Models [4.529832252085145]
We propose a novel SfT reconstruction algorithm for cloth using a pre-trained neural surrogate model.
Differentiable rendering of the simulated mesh enables pixel-wise comparisons between the reconstruction and a target video sequence.
This allows retaining a precise, stable, and smooth reconstructed geometry while reducing the runtime by a factor of 400-500 compared to $\phi$-SfT.
arXiv Detail & Related papers (2023-11-21T18:59:58Z)
- DynaMoN: Motion-Aware Fast and Robust Camera Localization for Dynamic Neural Radiance Fields [71.94156412354054]
We propose DynaMoN to handle dynamic content for initial camera pose estimation and statics-focused ray sampling for fast and accurate novel-view synthesis.
Our novel iterative learning scheme switches between training the NeRF and updating the pose parameters for an improved reconstruction and trajectory estimation quality.
arXiv Detail & Related papers (2023-09-16T08:46:59Z)
- SceNeRFlow: Time-Consistent Reconstruction of General Dynamic Scenes [75.9110646062442]
We propose SceNeRFlow to reconstruct a general, non-rigid scene in a time-consistent manner.
Our method takes multi-view RGB videos and background images from static cameras with known camera parameters as input.
We show experimentally that, unlike prior work that only handles small motion, our method enables the reconstruction of studio-scale motions.
arXiv Detail & Related papers (2023-08-16T09:50:35Z)
- Robust Dynamic Radiance Fields [79.43526586134163]
Dynamic radiance field reconstruction methods aim to model the time-varying structure and appearance of a dynamic scene.
Existing methods, however, assume that accurate camera poses can be reliably estimated by Structure from Motion (SfM) algorithms.
We address this robustness issue by jointly estimating the static and dynamic radiance fields along with the camera parameters.
arXiv Detail & Related papers (2023-01-05T18:59:51Z)
- Spatio-temporal Vision Transformer for Super-resolution Microscopy [2.8348950186890467]
Structured illumination microscopy (SIM) is an optical super-resolution technique that enables live-cell imaging beyond the diffraction limit.
We propose a new transformer-based reconstruction method, VSR-SIM, that uses shifted 3-dimensional window multi-head attention.
We demonstrate a use case enabled by VSR-SIM referred to as rolling SIM imaging, which increases temporal resolution in SIM by a factor of 9.
arXiv Detail & Related papers (2022-02-28T19:01:10Z)
- MoCo-Flow: Neural Motion Consensus Flow for Dynamic Humans in Stationary Monocular Cameras [98.40768911788854]
We introduce MoCo-Flow, a representation that models the dynamic scene using a 4D continuous time-variant function.
At the heart of our work lies a novel optimization formulation, which is constrained by a motion consensus regularization on the motion flow.
We extensively evaluate MoCo-Flow on several datasets that contain human motions of varying complexity.
arXiv Detail & Related papers (2021-06-08T16:03:50Z)
- Machine learning for rapid discovery of laminar flow channel wall modifications that enhance heat transfer [56.34005280792013]
We present a combination of accurate numerical simulations of arbitrary, flat, and non-flat channels and machine learning models predicting drag coefficient and Stanton number.
We show that convolutional neural networks (CNN) can accurately predict the target properties at a fraction of the time of numerical simulations.
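The CNN surrogate described above maps a channel-wall geometry to two scalar targets, the drag coefficient and the Stanton number. A minimal sketch of such a regression network is shown below; the input resolution and layer sizes are illustrative assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class WallSurrogateCNN(nn.Module):
    """Small CNN regressing [drag coefficient, Stanton number]
    from a single-channel wall height map."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),   # global pooling -> one vector per sample
        )
        self.head = nn.Linear(32, 2)   # two scalar targets

    def forward(self, height_map):
        h = self.features(height_map).flatten(1)
        return self.head(h)

model = WallSurrogateCNN()
height_map = torch.rand(4, 1, 64, 64)  # batch of hypothetical wall geometries
pred = model(height_map)               # (4, 2) predicted coefficients
```

Once trained against numerical simulations, a forward pass like this replaces a full CFD run, which is the source of the claimed speedup.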
arXiv Detail & Related papers (2021-01-19T16:14:02Z)
- Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes [70.76742458931935]
We introduce a new representation that models the dynamic scene as a time-variant continuous function of appearance, geometry, and 3D scene motion.
Our representation is optimized through a neural network to fit the observed input views.
We show that our representation can be used for complex dynamic scenes, including thin structures, view-dependent effects, and natural degrees of motion.
arXiv Detail & Related papers (2020-11-26T01:23:44Z)
- Learning a Generative Motion Model from Image Sequences based on a Latent Motion Matrix [8.774604259603302]
We learn a probabilistic motion model from simulated temporal registration in a sequence of images.
We show improved registration accuracy and temporally smoother consistency compared to three state-of-the-art registration algorithms.
We also demonstrate the model's applicability for motion analysis, simulation and super-resolution by an improved motion reconstruction from sequences with missing frames.
arXiv Detail & Related papers (2020-11-03T14:44:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.