Physics-informed generative real-time lens-free imaging
- URL: http://arxiv.org/abs/2403.07786v4
- Date: Sun, 15 Jun 2025 05:15:47 GMT
- Title: Physics-informed generative real-time lens-free imaging
- Authors: Ronald B. Liu, Zhe Liu, Max G. A. Wolf, Krishna P. Purohit, Gregor Fritz, Yi Feng, Carsten G. Hansen, Pierre O. Bagnaninchi, Xavier Casadevall i Solvas, Yunjie Yang
- Abstract summary: We introduce GenLFI, combining a generative unsupervised physics-informed neural network (PINN) with a large field-of-view (FOV) setup for straightforward holographic image reconstruction. We demonstrate a real-time FOV exceeding 550 mm$^2$, over 20 times larger than current real-time LFI systems.
- Score: 8.474666653683638
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Advancements in high-throughput biomedical applications require real-time, large field-of-view (FOV) imaging. While current 2D lens-free imaging (LFI) systems improve FOV, they are often hindered by time-consuming multi-position measurements, extensive data pre-processing, and strict optical parameterization, limiting their application to static, thin samples. To overcome these limitations, we introduce GenLFI, combining a generative unsupervised physics-informed neural network (PINN) with a large FOV LFI setup for straightforward holographic image reconstruction, without multi-measurement. GenLFI enables real-time 2D imaging for 3D samples, such as droplet-based microfluidics and 3D cell models, in dynamic complex optical fields. Unlike previous methods, our approach decouples the reconstruction algorithm from optical setup parameters, enabling a large FOV limited only by hardware. We demonstrate a real-time FOV exceeding 550 mm$^2$, over 20 times larger than current real-time LFI systems. This framework unlocks the potential of LFI systems, providing a robust tool for advancing automated high-throughput biomedical applications.
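The abstract does not include code, but the general shape of a physics-informed, self-supervised reconstruction loss for lens-free holography can be sketched as follows. This is a minimal illustration assuming an angular-spectrum free-space propagation model and a placeholder `generator` network that predicts object amplitude and phase; the wavelength, pixel pitch, propagation distance, and network interface are illustrative assumptions, not the authors' actual GenLFI configuration.

```python
import torch

def angular_spectrum_propagate(field, wavelength, pixel_pitch, z):
    """Propagate a complex field by distance z using the angular spectrum method."""
    ny, nx = field.shape[-2:]
    fy = torch.fft.fftfreq(ny, d=pixel_pitch)
    fx = torch.fft.fftfreq(nx, d=pixel_pitch)
    FY, FX = torch.meshgrid(fy, fx, indexing="ij")
    # Propagation kernel; evanescent components are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * torch.pi / wavelength * torch.sqrt(torch.clamp(arg, min=0.0))
    H = torch.exp(1j * kz * z) * (arg > 0)
    return torch.fft.ifft2(torch.fft.fft2(field) * H)

def physics_informed_loss(generator, hologram, wavelength=532e-9,
                          pixel_pitch=1.85e-6, z=2e-3):
    """Self-supervised loss: the predicted object, numerically propagated to the
    sensor plane, should reproduce the measured hologram intensity."""
    amp, phase = generator(hologram)          # placeholder: network predicts amplitude/phase
    obj = amp * torch.exp(1j * phase)
    sensor_field = angular_spectrum_propagate(obj, wavelength, pixel_pitch, z)
    return torch.mean((sensor_field.abs() ** 2 - hologram) ** 2)

if __name__ == "__main__":
    # Toy check with a dummy "generator" that returns flat amplitude and phase.
    holo = torch.rand(256, 256)
    dummy = lambda h: (torch.ones_like(h), torch.zeros_like(h))
    print(physics_informed_loss(dummy, holo).item())
```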
Related papers
- Deep Learning for Optical Misalignment Diagnostics in Multi-Lens Imaging Systems [0.0]
We present two complementary deep learning-based inverse-design methods for diagnosing misalignments in multi-element lens systems. First, we use ray-traced spot diagrams to predict five-degree-of-freedom (5-DOF) errors in a 6-lens photographic prime, achieving a mean absolute error of 0.031 mm in lateral translation and 0.011$^\circ$ in tilt. We also introduce a physics-based simulation pipeline that utilizes grayscale synthetic camera images, enabling a deep learning model to estimate 4-DOF decenter and tilt errors in both two- and six-lens systems.
arXiv Detail & Related papers (2025-06-29T10:13:40Z) - Fourier-Based 3D Multistage Transformer for Aberration Correction in Multicellular Specimens [1.288373532663608]
We introduce AOViFT, a machine learning-based aberration sensing framework built around a 3D multistage Vision Transformer. AOViFT infers aberrations and restores diffraction-limited performance in puncta-labeled specimens. We validated AOViFT on live gene-edited zebrafish embryos, demonstrating its ability to correct spatially varying aberrations.
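AOViFT's architecture is not reproduced here, but the underlying physics of aberration sensing, namely that pupil-plane phase errors reshape the point-spread function, can be sketched with a toy scalar model. The Zernike-like modes, pupil size, and coefficients below are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

def aberrated_psf(coeffs, n=128, na_ratio=0.4):
    """Toy scalar model: build a pupil with a few low-order aberration modes
    (defocus, astigmatism, coma-like terms) and FFT it into a PSF."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r = np.hypot(x, y) / na_ratio            # normalized pupil radius
    theta = np.arctan2(y, x)
    pupil = (r <= 1.0).astype(float)
    # Unnormalized Zernike-like modes: defocus, astigmatism, coma.
    modes = [2 * r**2 - 1, r**2 * np.cos(2 * theta), (3 * r**3 - 2 * r) * np.cos(theta)]
    phase = sum(c * m for c, m in zip(coeffs, modes))
    field = pupil * np.exp(1j * 2 * np.pi * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))) ** 2
    return psf / psf.sum()

# A sensing network in the spirit of AOViFT would be trained to map such PSFs
# (or images of puncta) back to the coefficient vector.
psf = aberrated_psf([0.3, 0.1, 0.05])
```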
arXiv Detail & Related papers (2025-03-16T17:59:20Z) - Neuromorphic Retina: An FPGA-based Emulator [1.6444558948529873]
We emulate a neuromorphic retina model on an FPGA.
Phasic and tonic cells are realized in the emulated retina in the simplest way possible.
arXiv Detail & Related papers (2025-01-15T16:45:45Z) - ZoomLDM: Latent Diffusion Model for multi-scale image generation [57.639937071834986]
We present ZoomLDM, a diffusion model tailored for generating images across multiple scales.
Central to our approach is a novel magnification-aware conditioning mechanism that utilizes self-supervised learning (SSL) embeddings.
ZoomLDM synthesizes coherent histopathology images that remain contextually accurate and detailed at different zoom levels.
arXiv Detail & Related papers (2024-11-25T22:39:22Z) - Enhancing Fluorescence Lifetime Parameter Estimation Accuracy with Differential Transformer Based Deep Learning Model Incorporating Pixelwise Instrument Response Function [0.3441582801949978]
Fluorescence Lifetime Imaging (FLI) provides unique information about the tissue microenvironment. Recent advancements in deep learning have enabled improved fluorescence lifetime parameter estimation. We present MFliNet, a novel DL architecture that integrates the Instrument Response Function (IRF) as an additional input alongside experimental photon time-of-arrival histograms.
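The role of the IRF follows from the standard FLI forward model: the measured histogram is the instrument response convolved with an exponential decay. The snippet below is a generic illustration of that model with arbitrary lifetime, IRF width, and binning; it is not MFliNet itself.

```python
import numpy as np

def simulate_fli_histogram(tau_ns=2.0, irf_sigma_ns=0.2, n_bins=256,
                           bin_width_ns=0.039, amplitude=1.0):
    """Measured photon time-of-arrival histogram = IRF (*) exponential decay.
    A pixelwise IRF means the IRF shape varies from pixel to pixel."""
    t = np.arange(n_bins) * bin_width_ns
    decay = amplitude * np.exp(-t / tau_ns)
    irf = np.exp(-0.5 * ((t - 5 * irf_sigma_ns) / irf_sigma_ns) ** 2)
    irf /= irf.sum()
    measured = np.convolve(decay, irf)[:n_bins]   # truncate to the histogram length
    return t, measured

t, hist = simulate_fli_histogram()
# An estimator in the spirit of MFliNet takes `hist` together with the per-pixel
# IRF as input and regresses lifetime parameters such as tau.
```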
arXiv Detail & Related papers (2024-11-25T20:03:41Z) - Leveraging Computational Pathology AI for Noninvasive Optical Imaging Analysis Without Retraining [3.6835809728620634]
Noninvasive optical imaging modalities can probe a patient's tissue in 3D and, over time, generate gigabytes of clinically relevant data per sample.
There is a need for AI models to analyze this data and assist clinical workflow.
In this paper we introduce FoundationShift, a method to apply any AI model from computational pathology without retraining.
arXiv Detail & Related papers (2024-11-18T14:35:01Z) - A Unified Model for Compressed Sensing MRI Across Undersampling Patterns [69.19631302047569]
We propose a unified MRI reconstruction model robust to various measurement undersampling patterns and image resolutions.
Our model improves SSIM by 11% and PSNR by 4 dB over a state-of-the-art CNN (End-to-End VarNet), with 600$\times$ faster inference than diffusion methods.
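For context on what varying "undersampling patterns" means in practice, the sketch below simulates retrospective Cartesian undersampling of a single-coil image (FFT, line mask, zero-filled reconstruction). It is a generic illustration, not the proposed unified model; the acceleration factor and mask design are arbitrary.

```python
import numpy as np

def undersample(image, acceleration=4, center_fraction=0.08, seed=0):
    """Simulate Cartesian undersampling: keep a fully sampled low-frequency band
    plus a random subset of the remaining k-space lines, then zero-fill reconstruct."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fftshift(np.fft.fft2(image))
    n_rows = image.shape[0]
    mask = rng.random(n_rows) < 1.0 / acceleration
    center = int(n_rows * center_fraction / 2)
    mask[n_rows // 2 - center:n_rows // 2 + center] = True   # keep center lines
    kspace_us = kspace * mask[:, None]
    zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace_us)))
    return zero_filled, mask

recon, mask = undersample(np.random.rand(256, 256))
```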
arXiv Detail & Related papers (2024-10-05T20:03:57Z) - FLex: Joint Pose and Dynamic Radiance Fields Optimization for Stereo Endoscopic Videos [79.50191812646125]
Reconstruction of endoscopic scenes is an important asset for various medical applications, from post-surgery analysis to educational training.
We address the challenging setup of a moving endoscope within a highly dynamic environment of deforming tissue.
We propose an implicit scene separation into multiple overlapping 4D neural radiance fields (NeRFs) and a progressive optimization scheme jointly optimizing for reconstruction and camera poses from scratch.
This improves ease of use and allows reconstruction to scale in time to surgical videos of 5,000 frames and more, an improvement of more than ten times over the state of the art, while remaining agnostic to external tracking information.
arXiv Detail & Related papers (2024-03-18T19:13:02Z) - Towards a clinically accessible radiology foundation model: open-access and lightweight, with automated evaluation [113.5002649181103]
We train open-source small multimodal models (SMMs) to bridge competency gaps for unmet clinical needs in radiology.
For training, we assemble a large dataset of over 697 thousand radiology image-text pairs.
For evaluation, we propose CheXprompt, a GPT-4-based metric for factuality evaluation, and demonstrate its parity with expert evaluation.
Inference with LLaVA-Rad is fast and can be performed on a single V100 GPU in private settings, offering a promising state-of-the-art tool for real-world clinical applications.
arXiv Detail & Related papers (2024-03-12T18:12:02Z) - Calibration-free quantitative phase imaging in multi-core fiber endoscopes using end-to-end deep learning [49.013721992323994]
We demonstrate a learning-based MCF phase imaging method that significantly reduces the phase reconstruction time to 5.5 ms.
We also introduce an innovative optical system that automatically generated the first open-source dataset tailored for MCF phase imaging.
Our trained deep neural network (DNN) demonstrates robust phase reconstruction performance in experiments with a mean fidelity of up to 99.8%.
arXiv Detail & Related papers (2023-12-12T09:30:12Z) - ResFields: Residual Neural Fields for Spatiotemporal Signals [61.44420761752655]
ResFields is a novel class of networks specifically designed to effectively represent complex temporal signals.
We conduct a comprehensive analysis of the properties of ResFields and propose a matrix factorization technique to reduce the number of trainable parameters.
We demonstrate the practical utility of ResFields by showcasing its effectiveness in capturing dynamic 3D scenes from sparse RGBD cameras.
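The mentioned matrix factorization can be illustrated with a minimal time-conditioned linear layer whose weight residual is a low-rank combination of shared basis matrices. This is a sketch in the spirit of the abstract, with made-up sizes and initialization, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class ResFieldLinear(nn.Module):
    """Linear layer whose weights get a time-dependent, low-rank residual:
    W(t) = W + sum_r v_r(t) * B_r, with per-timestep coefficients v and shared bases B."""
    def __init__(self, in_dim, out_dim, rank=8, n_timesteps=100):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)
        self.coeffs = nn.Parameter(torch.zeros(n_timesteps, rank))            # v_r(t)
        self.bases = nn.Parameter(torch.randn(rank, out_dim, in_dim) * 1e-3)  # B_r

    def forward(self, x, t_idx):
        residual = torch.einsum("r,roi->oi", self.coeffs[t_idx], self.bases)
        return nn.functional.linear(x, self.base.weight + residual, self.base.bias)

layer = ResFieldLinear(3, 64)
out = layer(torch.randn(5, 3), t_idx=42)
```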
arXiv Detail & Related papers (2023-09-06T16:59:36Z) - Fast light-field 3D microscopy with out-of-distribution detection and adaptation through Conditional Normalizing Flows [16.928404625892625]
Real-time 3D fluorescence microscopy is crucial for the analysis of live organisms.
We propose a novel architecture to perform fast 3D reconstructions of live immobilized zebrafish neural activity.
arXiv Detail & Related papers (2023-06-10T10:42:49Z) - Neural Lens Modeling [50.57409162437732]
NeuroLens is a neural lens model for distortion and vignetting that can be used for point projection and ray casting.
It can be used to perform pre-capture calibration using classical calibration targets, and can later be used to perform calibration or refinement during 3D reconstruction.
The model generalizes across many lens types and is trivial to integrate into existing 3D reconstruction and rendering systems.
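As a point of reference for what such a lens model replaces, the snippet below implements classical pinhole projection with radial (Brown-Conrady-style) distortion only; the intrinsics and distortion coefficients are made-up example values, and NeuroLens itself is a learned generalization of this kind of fixed formula.

```python
import numpy as np

def project_with_radial_distortion(points_cam, fx, fy, cx, cy, k1, k2):
    """Pinhole projection followed by classical radial distortion (radial terms only).
    points_cam: (N, 3) points in the camera frame, z > 0."""
    x = points_cam[:, 0] / points_cam[:, 2]
    y = points_cam[:, 1] / points_cam[:, 2]
    r2 = x**2 + y**2
    d = 1 + k1 * r2 + k2 * r2**2            # radial distortion factor
    u = fx * x * d + cx
    v = fy * y * d + cy
    return np.stack([u, v], axis=1)

pix = project_with_radial_distortion(np.array([[0.1, -0.05, 2.0]]),
                                     fx=800, fy=800, cx=320, cy=240,
                                     k1=-0.2, k2=0.05)
```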
arXiv Detail & Related papers (2023-04-10T20:09:17Z) - Physics Embedded Machine Learning for Electromagnetic Data Imaging [83.27424953663986]
Electromagnetic (EM) imaging is widely applied in sensing for security, biomedicine, geophysics, and various industries.
It is an ill-posed inverse problem whose solution is usually computationally expensive. Machine learning (ML) techniques, and especially deep learning (DL), show potential for fast and accurate imaging.
This article surveys various schemes to incorporate physics in learning-based EM imaging.
arXiv Detail & Related papers (2022-07-26T02:10:15Z) - FOF: Learning Fourier Occupancy Field for Monocular Real-time Human Reconstruction [73.85709132666626]
Existing representations, such as parametric models, voxel grids, meshes and implicit neural representations, have difficulties achieving high-quality results and real-time speed at the same time.
We propose the Fourier Occupancy Field (FOF), a novel, powerful, efficient, and flexible 3D representation for monocular real-time and accurate human reconstruction.
A FOF can be stored as a multi-channel image, which is compatible with 2D convolutional neural networks and can bridge the gap between 3D and 2D images.
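The storage trick can be sketched directly: occupancy along the viewing (z) axis is expanded in a truncated Fourier series, so each pixel holds a small coefficient vector. The channel layout, number of terms, and basis normalization below are illustrative assumptions rather than the paper's exact definition.

```python
import numpy as np

def occupancy_from_fof(fof, z):
    """Decode occupancy at depth z in [-1, 1] from a Fourier Occupancy Field image.
    fof has shape (H, W, 2N+1): channel 0 is the DC term, then N cosine and
    N sine coefficients of the occupancy function along the z (viewing) axis."""
    n_terms = (fof.shape[-1] - 1) // 2
    k = np.arange(1, n_terms + 1)
    basis = np.concatenate([[0.5], np.cos(np.pi * k * z), np.sin(np.pi * k * z)])
    return fof @ basis        # (H, W); typically thresholded (e.g. at 0.5) for the surface

fof = np.random.rand(64, 64, 2 * 8 + 1)     # dummy coefficient image, N = 8
slice_at_z0 = occupancy_from_fof(fof, z=0.0)
```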
arXiv Detail & Related papers (2022-06-05T14:45:02Z) - Physics to the Rescue: Deep Non-line-of-sight Reconstruction for High-speed Imaging [13.271762773872476]
We present a novel deep model that incorporates the complementary physics priors of wave propagation and volume rendering into a neural network for high-quality and robust NLOS reconstruction.
Our method outperforms prior physics and learning based approaches on both synthetic and real measurements.
arXiv Detail & Related papers (2022-05-03T02:47:02Z) - Deep Learning for Ultrasound Beamforming [120.12255978513912]
Beamforming, the process of mapping received ultrasound echoes to the spatial image domain, lies at the heart of the ultrasound image formation chain.
Modern ultrasound imaging leans heavily on innovations in powerful digital receive channel processing.
Deep learning methods can play a compelling role in the digital beamforming pipeline.
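As background for where learned components can slot into this pipeline, the snippet below sketches conventional delay-and-sum beamforming for a single image point with a linear array and a plane-wave transmit assumption; element pitch, sound speed, and sampling rate are generic example values.

```python
import numpy as np

def delay_and_sum_point(rf, element_x, point, c=1540.0, fs=40e6):
    """Classic delay-and-sum: for one image point, compute the receive delay to
    each array element, sample the RF channel data at that delay, and sum.
    rf: (n_elements, n_samples) receive data; element_x: element positions (m)."""
    px, pz = point
    dist = np.sqrt((element_x - px) ** 2 + pz ** 2)     # element-to-point distance
    delays = (pz + dist) / c                            # normal-incidence plane-wave transmit
    idx = np.clip((delays * fs).astype(int), 0, rf.shape[1] - 1)
    return rf[np.arange(rf.shape[0]), idx].sum()

rf = np.random.randn(128, 2048)
elements = (np.arange(128) - 63.5) * 0.3e-3             # 0.3 mm pitch
val = delay_and_sum_point(rf, elements, point=(0.0, 0.02))
```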
arXiv Detail & Related papers (2021-09-23T15:15:21Z) - Universal and Flexible Optical Aberration Correction Using Deep-Prior Based Deconvolution [51.274657266928315]
We propose a PSF-aware plug-and-play deep network, which takes the aberrant image and PSF map as input and produces the latent high-quality version by incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
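For reference, the classical, non-learned counterpart of PSF-aware restoration is Wiener deconvolution with a known PSF; the sketch below is that baseline with a made-up Gaussian PSF and noise level, not the proposed deep-prior network.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Wiener deconvolution with a known PSF, regularized by an assumed
    noise-to-signal ratio (nsr)."""
    # Embed the (centered) PSF in a full-size frame, then shift its peak to (0, 0).
    pad = np.zeros_like(blurred)
    ph, pw = psf.shape
    pad[:ph, :pw] = psf
    pad = np.roll(pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))
    H = np.fft.fft2(pad)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))

# Toy PSF: small isotropic Gaussian.
y, x = np.mgrid[-15:16, -15:16]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()
restored = wiener_deconvolve(np.random.rand(128, 128), psf)
```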
arXiv Detail & Related papers (2021-04-07T12:00:38Z) - Deep learning-based super-resolution fluorescence microscopy on small datasets [20.349746411933495]
Deep learning has shown the potential to reduce the technical barrier and obtain super-resolution from diffraction-limited images.
We demonstrate a new convolutional neural network-based approach that is successfully trained with small datasets to produce super-resolution images.
This model can be applied to other biomedical imaging modalities such as MRI and X-ray imaging, where obtaining large training datasets is challenging.
arXiv Detail & Related papers (2021-03-07T03:17:47Z) - Miniscope3D: optimized single-shot miniature 3D fluorescence microscopy [8.3011168382078]
Miniature fluorescence microscopes capture only 2D information, and modifications that enable 3D capabilities increase the size and weight.
Here, we achieve the 3D capability by replacing the tube lens of a conventional 2D Miniscope with an optimized multifocal phase mask at the objective's aperture stop.
We demonstrate a prototype that is 17 mm tall and weighs 2.5 grams, achieving 2.76 $\mu$m lateral and 15 $\mu$m axial resolution across most of the 900$\times$700$\times$390 $\mu$m$^3$ volume at 40 volumes per second.
arXiv Detail & Related papers (2020-10-12T01:19:31Z) - Learning to Reconstruct Confocal Microscopy Stacks from Single Light Field Images [19.24428734909019]
We introduce the LFMNet, a novel neural network architecture inspired by the U-Net design.
It is able to reconstruct with high accuracy a 112$\times$112$\times$57.6 $\mu$m$^3$ volume in 50 ms given a single light field image of 1287$\times$1287 pixels.
Because of the drastic reduction in scan time and storage space, our setup and method are directly applicable to real-time in vivo 3D microscopy.
arXiv Detail & Related papers (2020-03-24T17:46:03Z) - Microscopy with undetected photons in the mid-infrared [0.0]
We show how nonlinear interferometry with entangled light can provide a powerful tool for mid-IR microscopy.
In this proof-of-principle implementation, we demonstrate intensity imaging over a broad wavelength range covering 3.4-4.3 $\mu$m.
We demonstrate that our technique is fit for purpose, acquiring microscopic images of biological tissue samples in the mid-IR.
arXiv Detail & Related papers (2020-02-14T10:40:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.