Fourier-Based 3D Multistage Transformer for Aberration Correction in Multicellular Specimens
- URL: http://arxiv.org/abs/2503.12593v1
- Date: Sun, 16 Mar 2025 17:59:20 GMT
- Title: Fourier-Based 3D Multistage Transformer for Aberration Correction in Multicellular Specimens
- Authors: Thayer Alshaabi, Daniel E. Milkie, Gaoxiang Liu, Cyna Shirazinejad, Jason L. Hong, Kemal Achour, Frederik Görlitz, Ana Milunovic-Jevtic, Cat Simmons, Ibrahim S. Abuzahriyeh, Erin Hong, Samara Erin Williams, Nathanael Harrison, Evan Huang, Eun Seok Bae, Alison N. Killilea, David G. Drubin, Ian A. Swinburne, Srigokul Upadhyayula, Eric Betzig
- Abstract summary: We introduce AOViFT -- a machine learning-based aberration sensing framework built around a 3D multistage Vision Transformer. AOViFT infers aberrations and restores diffraction-limited performance in puncta-labeled specimens. We validated AOViFT on live gene-edited zebrafish embryos, demonstrating its ability to correct spatially varying aberrations.
- Score: 1.288373532663608
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: High-resolution tissue imaging is often compromised by sample-induced optical aberrations that degrade resolution and contrast. While wavefront sensor-based adaptive optics (AO) can measure these aberrations, such hardware solutions are typically complex, expensive to implement, and slow when serially mapping spatially varying aberrations across large fields of view. Here, we introduce AOViFT (Adaptive Optical Vision Fourier Transformer) -- a machine learning-based aberration sensing framework built around a 3D multistage Vision Transformer that operates on Fourier domain embeddings. AOViFT infers aberrations and restores diffraction-limited performance in puncta-labeled specimens with substantially reduced computational cost, training time, and memory footprint compared to conventional architectures or real-space networks. We validated AOViFT on live gene-edited zebrafish embryos, demonstrating its ability to correct spatially varying aberrations using either a deformable mirror or post-acquisition deconvolution. By eliminating the need for the guide star and wavefront sensing hardware and simplifying the experimental workflow, AOViFT lowers technical barriers for high-resolution volumetric microscopy across diverse biological samples.
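As a rough illustration of the Fourier-domain sensing idea described in the abstract, the sketch below patch-embeds the 3D FFT of a puncta-labeled volume and regresses Zernike coefficients with a small transformer encoder. It is a simplified, single-stage stand-in written in PyTorch; the module names, token sizes, and the choice of 15 Zernike modes are illustrative assumptions, not the authors' AOViFT implementation.

```python
# Minimal, hypothetical sketch of Fourier-domain aberration sensing
# (single-stage stand-in; NOT the authors' AOViFT implementation).
import torch
import torch.nn as nn

class FourierPatchEmbed3D(nn.Module):
    def __init__(self, patch=8, dim=128):
        super().__init__()
        # Two channels: log-magnitude and phase of the volume's 3D FFT.
        self.proj = nn.Conv3d(2, dim, kernel_size=patch, stride=patch)

    def forward(self, vol):                          # vol: (B, 1, D, H, W)
        spec = torch.fft.fftshift(
            torch.fft.fftn(vol, dim=(-3, -2, -1)), dim=(-3, -2, -1))
        feats = torch.cat([torch.log1p(spec.abs()), spec.angle()], dim=1)
        tokens = self.proj(feats)                    # (B, dim, D', H', W')
        return tokens.flatten(2).transpose(1, 2)     # (B, N, dim)

class AberrationRegressor(nn.Module):
    def __init__(self, n_zernike=15, dim=128, depth=4, heads=4):
        super().__init__()
        self.embed = FourierPatchEmbed3D(dim=dim)
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, n_zernike)        # Zernike mode amplitudes

    def forward(self, vol):
        x = self.encoder(self.embed(vol))
        return self.head(x.mean(dim=1))              # pool tokens, regress modes

coeffs = AberrationRegressor()(torch.randn(1, 1, 64, 64, 64))  # shape (1, 15)
```

The predicted coefficients could then drive a deformable mirror or parameterize a PSF for post-acquisition deconvolution, as the abstract describes.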
Related papers
- A Value Mapping Virtual Staining Framework for Large-scale Histological Imaging [36.95712533471744]
We introduce a general virtual staining framework that is adaptable to various conditions. We propose a loss function based on the value mapping constraint to ensure the accuracy of virtual staining between different pathological modalities.
arXiv Detail & Related papers (2025-01-07T07:45:21Z) - Enhancing Fluorescence Lifetime Parameter Estimation Accuracy with Differential Transformer Based Deep Learning Model Incorporating Pixelwise Instrument Response Function [0.3441582801949978]
Fluorescence Lifetime Imaging (FLI) provides unique information about the tissue microenvironment. Recent advancements in Deep Learning have enabled improved fluorescence lifetime parameter estimation. We present MFliNet, a novel DL architecture that integrates the Instrument Response Function (IRF) as an additional input alongside experimental photon time-of-arrival histograms.
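The key architectural point in this summary is feeding the pixelwise IRF to the network alongside the photon time-of-arrival histogram. A minimal sketch of one way to do that is shown below, stacking the two curves as input channels of a small 1D CNN regressor; MFliNet itself uses a differential transformer, and the layer choices, bin count, and output parameters here are assumptions for illustration only.

```python
# Hypothetical sketch: per-pixel decay histogram and IRF stacked as two
# input channels of a small 1D CNN that regresses lifetime parameters.
import torch
import torch.nn as nn

class LifetimeHead(nn.Module):
    def __init__(self, n_bins=256, hidden=128, n_params=3):
        super().__init__()
        # 2 channels: measured decay histogram and pixelwise IRF.
        self.net = nn.Sequential(
            nn.Conv1d(2, 32, 7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, 7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, hidden), nn.ReLU(),
            nn.Linear(hidden, n_params),   # e.g. tau1, tau2, amplitude ratio
        )

    def forward(self, histogram, irf):                # each: (B, n_bins)
        x = torch.stack([histogram, irf], dim=1)      # (B, 2, n_bins)
        return self.net(x)

params = LifetimeHead()(torch.rand(4, 256), torch.rand(4, 256))  # (4, 3)
```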
arXiv Detail & Related papers (2024-11-25T20:03:41Z) - Unveil Benign Overfitting for Transformer in Vision: Training Dynamics, Convergence, and Generalization [88.5582111768376]
We study the optimization of a Transformer composed of a self-attention layer with softmax followed by a fully connected layer under gradient descent on a certain data distribution model.
Our results establish a sharp condition that can distinguish between the small test error phase and the large test error regime, based on the signal-to-noise ratio in the data model.
arXiv Detail & Related papers (2024-09-28T13:24:11Z) - Generalizable Non-Line-of-Sight Imaging with Learnable Physical Priors [52.195637608631955]
Non-line-of-sight (NLOS) imaging has attracted increasing attention due to its potential applications.
Existing NLOS reconstruction approaches are constrained by their reliance on empirical physical priors.
We introduce a novel learning-based solution comprising two key designs: Learnable Path Compensation (LPC) and Adaptive Phasor Field (APF).
arXiv Detail & Related papers (2024-09-21T04:39:45Z) - CVT-xRF: Contrastive In-Voxel Transformer for 3D Consistent Radiance Fields from Sparse Inputs [65.80187860906115]
We propose a novel approach to improve NeRF's performance with sparse inputs.
We first adopt a voxel-based ray sampling strategy to ensure that the sampled rays intersect with a certain voxel in 3D space.
We then randomly sample additional points within the voxel and apply a Transformer to infer the properties of other points on each ray, which are then incorporated into the volume rendering.
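A hypothetical sketch of the in-voxel transformer step is given below: features of points sampled inside one voxel attend to each other, and the outputs are decoded to per-point density and color for volume rendering. This is not the CVT-xRF code; feature dimensions and head counts are placeholder values.

```python
# Hypothetical sketch of an in-voxel transformer for sparse-input NeRF:
# point features from one voxel attend to each other, then are decoded to
# per-point density (sigma) and color (rgb) used by volume rendering.
import torch
import torch.nn as nn

class InVoxelTransformer(nn.Module):
    def __init__(self, feat_dim=64, heads=4, depth=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(feat_dim, heads, feat_dim * 4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.to_sigma = nn.Linear(feat_dim, 1)   # volume density per point
        self.to_rgb = nn.Linear(feat_dim, 3)     # color per point

    def forward(self, point_feats):              # (B, n_points, feat_dim)
        h = self.encoder(point_feats)
        return self.to_sigma(h), torch.sigmoid(self.to_rgb(h))

sigma, rgb = InVoxelTransformer()(torch.randn(2, 16, 64))  # (2,16,1), (2,16,3)
```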
arXiv Detail & Related papers (2024-03-25T15:56:17Z) - Aperture Diffraction for Compact Snapshot Spectral Imaging [27.321750056840706]
We demonstrate a compact, cost-effective snapshot spectral imaging system named the Aperture Diffraction Imaging Spectrometer (ADIS).
A new optical design is introduced in which each point in object space is multiplexed to discrete encoding locations on the mosaic filter sensor.
The Cascade Shift-Shuffle Spectral Transformer (CSST) with strong perception of the diffraction degeneration is designed to solve a sparsity-constrained inverse problem.
arXiv Detail & Related papers (2023-09-27T16:48:46Z) - On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts because such artifacts shift the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to inject robustness into model performance against varying image artifacts.
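A minimal sketch of that idea, assuming a PyTorch CNN whose BatchNorm layers are swapped for GroupNorm so that normalization no longer depends on batch statistics that drift with acquisition artifacts (the paper's own models and settings may differ):

```python
# Illustrative sketch (not the paper's code): replace BatchNorm2d layers in
# an existing CNN with GroupNorm, which normalizes per sample rather than
# relying on batch statistics that shift with imaging artifacts.
import torch.nn as nn

def batchnorm_to_groupnorm(model: nn.Module, groups: int = 8) -> nn.Module:
    for name, child in model.named_children():
        if isinstance(child, nn.BatchNorm2d):
            setattr(model, name, nn.GroupNorm(groups, child.num_features))
        else:
            batchnorm_to_groupnorm(child, groups)
    return model

# Hypothetical usage on a torchvision backbone (channel counts divisible by 8):
# import torchvision
# model = batchnorm_to_groupnorm(torchvision.models.resnet18())
```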
arXiv Detail & Related papers (2023-06-23T03:09:03Z) - SwinVFTR: A Novel Volumetric Feature-learning Transformer for 3D OCT Fluid Segmentation [0.23967405016776386]
We propose SwinVFTR, a transformer architecture for precise fluid segmentation in 3D OCT images.
SwinVFTR employs channel-wise volumetric sampling and a shifted window transformer block to improve fluid localization.
It outperforms existing models on Spectralis, Cirrus, and Topcon OCT datasets.
arXiv Detail & Related papers (2023-03-16T11:16:02Z) - OADAT: Experimental and Synthetic Clinical Optoacoustic Data for Standardized Image Processing [62.993663757843464]
Optoacoustic (OA) imaging is based on excitation of biological tissues with nanosecond-duration laser pulses followed by detection of ultrasound waves generated via light-absorption-mediated thermoelastic expansion.
OA imaging features a powerful combination of rich optical contrast and high resolution in deep tissues.
However, no standardized datasets generated with different types of experimental setups and associated processing methods are available to facilitate advances in broader clinical applications of OA.
arXiv Detail & Related papers (2022-06-17T08:11:26Z) - Intriguing Properties of Vision Transformers [114.28522466830374]
Vision transformers (ViT) have demonstrated impressive performance across various machine vision problems.
We systematically study the properties of ViTs via an extensive set of experiments and comparisons with a high-performing convolutional neural network (CNN).
We show that the effective features of ViTs are due to the flexible and dynamic receptive fields made possible by the self-attention mechanism.
arXiv Detail & Related papers (2021-05-21T17:59:18Z) - Universal and Flexible Optical Aberration Correction Using Deep-Prior Based Deconvolution [51.274657266928315]
We propose a PSF-aware plug-and-play deep network that takes the aberrant image and PSF map as input and produces the latent high-quality version by incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
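A hypothetical sketch of the two-input formulation is shown below: the aberrant image and a per-pixel PSF-parameter map are concatenated along the channel axis, a small CNN predicts the restored image, and lens adaptation is done by briefly fine-tuning a pre-trained base model. Channel counts and the checkpoint path are placeholders, not the paper's network.

```python
# Hypothetical sketch of a PSF-aware restorer (not the paper's network):
# aberrant image and PSF-parameter map are concatenated channel-wise.
import torch
import torch.nn as nn

class PSFAwareRestorer(nn.Module):
    def __init__(self, img_ch=3, psf_ch=5, width=48):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_ch + psf_ch, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, img_ch, 3, padding=1),
        )

    def forward(self, image, psf_map):            # (B,3,H,W), (B,5,H,W)
        return self.net(torch.cat([image, psf_map], dim=1))

# Lens adaptation as described in the summary: load a base model pre-trained
# on diverse lenses, then fine-tune briefly on data from the target lens.
model = PSFAwareRestorer()
# model.load_state_dict(torch.load("base_model.pt"))  # hypothetical checkpoint
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```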
arXiv Detail & Related papers (2021-04-07T12:00:38Z)