Physics-enhanced machine learning for virtual fluorescence microscopy
- URL: http://arxiv.org/abs/2004.04306v2
- Date: Tue, 21 Apr 2020 22:19:15 GMT
- Title: Physics-enhanced machine learning for virtual fluorescence microscopy
- Authors: Colin L. Cooke, Fanjie Kong, Amey Chaware, Kevin C. Zhou, Kanghyun
Kim, Rong Xu, D. Michael Ando, Samuel J. Yang, Pavan Chandra Konda, Roarke
Horstmeyer
- Abstract summary: This paper introduces a new method of data-driven microscope design for virtual fluorescence microscopy.
By including a model of illumination within the first layers of a deep convolutional neural network, it is possible to learn task-specific LED patterns.
- Score: 3.7817498232858857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces a new method of data-driven microscope design for
virtual fluorescence microscopy. Our results show that by including a model of
illumination within the first layers of a deep convolutional neural network, it
is possible to learn task-specific LED patterns that substantially improve the
ability to infer fluorescence image information from unstained transmission
microscopy images. We validated our method on two different experimental
setups, with different magnifications and different sample types, to show a
consistent improvement in performance as compared to conventional illumination
methods. Additionally, to understand the importance of learned illumination to the
inference task, we varied the dynamic range of the fluorescent image targets
(from one to seven bits), and showed that the margin of improvement for learned
patterns increased with the information content of the target. This work
demonstrates the power of programmable optical elements in enabling better
machine learning algorithm performance and in providing physical insight into the
next generation of machine-controlled imaging systems.
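The core idea — treating the LED illumination pattern as the trainable first, physically realizable layer of the network — can be sketched in a few lines. The toy example below is an illustrative assumption, not the authors' code: it models the captured image as a weighted sum of single-LED images (a common approximation for LED-array microscopes) and learns the LED weights by plain gradient descent against a stand-in target; in the paper the gradient signal would instead come from the downstream fluorescence-inference loss.

```python
import random

random.seed(0)

N_LEDS, N_PIX = 8, 16

# Toy stack of single-LED images: per_led[k][x] = pixel x captured with only LED k on.
per_led = [[random.random() for _ in range(N_PIX)] for _ in range(N_LEDS)]

def illuminate(weights, stack):
    """Physical layer: the image formed when the LEDs fire with these brightnesses."""
    n_pix = len(stack[0])
    return [sum(w * stack[k][x] for k, w in enumerate(weights)) for x in range(n_pix)]

def mse_loss(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)

# Hidden "task-optimal" pattern standing in for the downstream fluorescence task.
target_w = [1.0 if k % 2 == 0 else 0.1 for k in range(N_LEDS)]
target_img = illuminate(target_w, per_led)

# Treat the LED weights as trainable parameters and run gradient descent.
w = [0.5] * N_LEDS
lr = 0.1
mse0 = mse_loss(illuminate(w, per_led), target_img)
for _ in range(1000):
    img = illuminate(w, per_led)
    # dLoss/dw_k = (2/N) * sum_x (img[x] - target[x]) * per_led[k][x]
    grad = [2.0 / N_PIX * sum((img[x] - target_img[x]) * per_led[k][x]
                              for x in range(N_PIX))
            for k in range(N_LEDS)]
    w = [wk - lr * g for wk, g in zip(w, grad)]

mse = mse_loss(illuminate(w, per_led), target_img)
print(f"illumination-pattern MSE: {mse0:.4f} -> {mse:.4f}")
```

Because the forward model is linear in the LED weights, the learned pattern is both differentiable (so it can be trained end-to-end with the network) and directly realizable on a programmable LED array.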
Related papers
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z)
- A Non-Uniform Low-Light Image Enhancement Method with Multi-Scale Attention Transformer and Luminance Consistency Loss [11.585269110131659]
Low-light image enhancement aims to improve the perception of images collected in dim environments.
Existing methods cannot adaptively extract the differentiated luminance information, which will easily cause over-exposure and under-exposure.
We propose a multi-scale attention Transformer named MSATr, which sufficiently extracts local and global features for light balance to improve the visual quality.
arXiv Detail & Related papers (2023-12-27T10:07:11Z)
- Gravitational cell detection and tracking in fluorescence microscopy data [0.18828620190012021]
We present a novel approach based on gravitational force fields that can compete with, and potentially outperform, modern machine learning models.
This method includes detection, segmentation, and tracking elements, with the results demonstrated on a Cell Tracking Challenge dataset.
arXiv Detail & Related papers (2023-12-06T14:08:05Z)
- LUCYD: A Feature-Driven Richardson-Lucy Deconvolution Network [0.31402652384742363]
This paper proposes LUCYD, a novel method for the restoration of volumetric microscopy images.
LUCYD combines the Richardson-Lucy deconvolution formula and the fusion of deep features obtained by a fully convolutional network.
Our experiments indicate that LUCYD can significantly improve resolution, contrast, and overall quality of microscopy images.
arXiv Detail & Related papers (2023-07-16T10:34:23Z)
- Deep Learning Methods for Calibrated Photometric Stereo and Beyond [86.57469194387264]
Photometric stereo recovers the surface normals of an object from multiple images with varying shading cues.
Deep learning methods have shown strong performance on photometric stereo for non-Lambertian surfaces.
arXiv Detail & Related papers (2022-12-16T11:27:44Z)
- Untrained, physics-informed neural networks for structured illumination microscopy [0.456877715768796]
We show that we can combine a deep neural network with the forward model of the structured illumination process to reconstruct sub-diffraction images without training data.
The resulting physics-informed neural network (PINN) can be optimized on a single set of diffraction limited sub-images.
arXiv Detail & Related papers (2022-07-15T19:02:07Z)
- Learning multi-scale functional representations of proteins from single-cell microscopy data [77.34726150561087]
We show that simple convolutional networks trained on localization classification can learn protein representations that encapsulate diverse functional information.
We also propose a robust evaluation strategy to assess quality of protein representations across different scales of biological function.
arXiv Detail & Related papers (2022-05-24T00:00:07Z)
- Computational ghost imaging for transmission electron microscopy [4.8776835876287805]
We explore using computational ghost imaging techniques in electron microscopy to reduce the total required intensity.
The lack of an equivalent high-resolution optical spatial light modulator for electrons means that a different approach must be pursued.
We show a beam shaping technique based on the use of a distribution of electrically charged metal needles to structure the beam, alongside a novel reconstruction method to handle the resulting highly non-orthogonal patterns.
arXiv Detail & Related papers (2022-04-21T09:43:54Z)
- Controllable Data Augmentation Through Deep Relighting [75.96144853354362]
We explore how to augment a varied set of image datasets through relighting so as to improve the ability of existing models to be invariant to illumination changes.
We develop a tool, based on an encoder-decoder network, that is able to quickly generate multiple variations of the illumination of various input scenes.
We demonstrate that by training models on datasets that have been augmented with our pipeline, it is possible to achieve higher performance on localization benchmarks.
arXiv Detail & Related papers (2021-10-26T20:02:51Z)
- Global Voxel Transformer Networks for Augmented Microscopy [54.730707387866076]
We introduce global voxel transformer networks (GVTNets), an advanced deep learning tool for augmented microscopy.
GVTNets are built on global voxel transformer operators (GVTOs), which are able to aggregate global information.
We apply the proposed methods on existing datasets for three different augmented microscopy tasks under various settings.
arXiv Detail & Related papers (2020-08-05T20:11:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.