Untrained, physics-informed neural networks for structured illumination microscopy
- URL: http://arxiv.org/abs/2207.07705v1
- Date: Fri, 15 Jul 2022 19:02:07 GMT
- Title: Untrained, physics-informed neural networks for structured illumination microscopy
- Authors: Zachary Burns, Zhaowei Liu
- Abstract summary: We show that we can combine a deep neural network with the forward model of the structured illumination process to reconstruct sub-diffraction images without training data.
The resulting physics-informed neural network (PINN) can be optimized on a single set of diffraction-limited sub-images.
- Score: 0.456877715768796
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, there has been great interest in using deep neural networks (DNNs) for super-resolution image reconstruction, including for structured illumination microscopy (SIM). While these methods have shown very promising results, they all rely on data-driven, supervised training strategies that require a large number of ground-truth images, which are experimentally difficult to obtain. For SIM imaging, there is a need for a flexible, general, and open-source reconstruction method that can be readily adapted to different forms of structured illumination. We demonstrate that we can combine a deep neural network with the forward model of the structured illumination process to reconstruct sub-diffraction images without training data. The resulting physics-informed neural network (PINN) can be optimized on a single set of diffraction-limited sub-images and thus does not require any training set. We show with simulated and experimental data that this PINN can be applied to a wide variety of SIM methods simply by changing the known illumination patterns used in the loss function, and that it achieves resolution improvements that match well with theoretical expectations.
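As an illustration of the idea described in the abstract, the following is a minimal sketch, assuming a PyTorch environment: an untrained convolutional network is optimized so that its output, pushed through an assumed SIM forward model (modulation by the known illumination patterns followed by low-pass filtering with the microscope OTF), reproduces the measured diffraction-limited sub-images. All names (sim_forward, raw_stack, patterns, otf) and the network architecture are illustrative placeholders, not the authors' released code.

```python
import torch
import torch.fft
import torch.nn as nn

def sim_forward(sample, pattern, otf):
    # Assumed forward model: modulate the sample estimate by a known
    # illumination pattern, then low-pass filter with the microscope OTF
    # (the diffraction limit), yielding a simulated raw sub-image.
    spectrum = torch.fft.fft2(sample * pattern)
    return torch.fft.ifft2(spectrum * otf).real

# Untrained CNN mapping a fixed random input to the super-resolved estimate.
net = nn.Sequential(
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1), nn.Sigmoid(),
)

H, W = 256, 256
z = torch.randn(1, 1, H, W)                    # fixed random input to the network
raw_stack = torch.rand(9, H, W)                # measured diffraction-limited sub-images (placeholder)
patterns = torch.rand(9, H, W)                 # known illumination patterns (placeholder)
otf = torch.ones(H, W, dtype=torch.complex64)  # microscope OTF (placeholder)

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    optimizer.zero_grad()
    estimate = net(z)[0, 0]                    # current super-resolution estimate
    # Physics-informed loss: the estimate, passed through the forward model
    # with each known pattern, should match the corresponding measurement.
    loss = sum(
        torch.mean((sim_forward(estimate, p, otf) - d) ** 2)
        for p, d in zip(patterns, raw_stack)
    )
    loss.backward()
    optimizer.step()

reconstruction = net(z).detach()[0, 0]         # sub-diffraction image estimate
```

Because the known illumination patterns enter only through the loss, adapting such a sketch to a different SIM modality would, in principle, only require swapping the pattern stack and forward model, which is the flexibility the abstract emphasizes.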
Related papers
- Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond [61.18736646013446]
In pursuit of a deeper understanding of its surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network.
Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena.
arXiv Detail & Related papers (2024-10-31T22:54:34Z)
- Super-Resolution and Image Re-projection for Iris Recognition [67.42500312968455]
Convolutional Neural Networks (CNNs) using different deep learning approaches attempt to recover realistic texture and fine-grained details from low-resolution images.
In this work we explore the viability of these approaches for iris Super-Resolution (SR) in an iris recognition environment.
Results show that CNNs and image re-projection can improve the results, especially the accuracy of recognition systems.
arXiv Detail & Related papers (2022-10-20T09:46:23Z)
- Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z)
- Self-Denoising Neural Networks for Few Shot Learning [66.38505903102373]
We present a new training scheme that adds noise at multiple stages of an existing neural architecture while simultaneously learning to be robust to this added noise.
This architecture, which we call a Self-Denoising Neural Network (SDNN), can be applied easily to most modern convolutional neural architectures.
arXiv Detail & Related papers (2021-10-26T03:28:36Z)
- BIM Hyperreality: Data Synthesis Using BIM and Hyperrealistic Rendering for Deep Learning [3.4461633417989184]
We present a concept of a hybrid system for training a neural network for building object recognition in photos.
For the specific case study presented in this paper, our results show that a neural network trained with synthetic data can be used to identify building objects from photos without using photos in the training data.
arXiv Detail & Related papers (2021-05-10T04:08:24Z)
- Model-inspired Deep Learning for Light-Field Microscopy with Application to Neuron Localization [27.247818386065894]
We propose a model-inspired deep learning approach to perform fast and robust 3D localization of sources using light-field microscopy images.
This is achieved by developing a deep network that efficiently solves a convolutional sparse coding problem.
Experiments on localization of mammalian neurons from light-fields show that the proposed approach simultaneously provides enhanced performance, interpretability and efficiency.
arXiv Detail & Related papers (2021-03-10T16:24:47Z)
- Deep learning-based super-resolution fluorescence microscopy on small datasets [20.349746411933495]
Deep learning has shown the potential to reduce the technical barrier and obtain super-resolution from diffraction-limited images.
We demonstrate a new convolutional neural network-based approach that is successfully trained with small datasets and super-resolution images.
This model can be applied to other biomedical imaging modalities such as MRI and X-ray imaging, where obtaining large training datasets is challenging.
arXiv Detail & Related papers (2021-03-07T03:17:47Z)
- Sill-Net: Feature Augmentation with Separated Illumination Representation [35.25230715669166]
We propose a novel neural network architecture called Separating-Illumination Network (Sill-Net).
Sill-Net learns to separate illumination features from images, and then during training we augment training samples with these separated illumination features in the feature space.
Experimental results demonstrate that our approach outperforms current state-of-the-art methods in several object classification benchmarks.
arXiv Detail & Related papers (2021-02-06T09:00:10Z)
- NAS-DIP: Learning Deep Image Prior with Neural Architecture Search [65.79109790446257]
Recent work has shown that the structure of deep convolutional neural networks can be used as a structured image prior.
We propose to search for neural architectures that capture stronger image priors.
We search for an improved network by leveraging an existing neural architecture search algorithm.
arXiv Detail & Related papers (2020-08-26T17:59:36Z)
- Deep neural networks for the evaluation and design of photonic devices [0.0]
This review discusses how deep neural networks can learn from training sets and operate as high-speed surrogate electromagnetic solvers.
Fundamental data science concepts framed within the context of photonics are also discussed.
arXiv Detail & Related papers (2020-06-30T19:52:54Z)
- Two-shot Spatially-varying BRDF and Shape Estimation [89.29020624201708]
We propose a novel deep learning architecture with a stage-wise estimation of shape and SVBRDF.
We create a large-scale synthetic training dataset with domain-randomized geometry and realistic materials.
Experiments on both synthetic and real-world datasets show that our network trained on a synthetic dataset can generalize well to real-world images.
arXiv Detail & Related papers (2020-04-01T12:56:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.