Foreground-aware Virtual Staining for Accurate 3D Cell Morphological Profiling
- URL: http://arxiv.org/abs/2507.05383v1
- Date: Mon, 07 Jul 2025 18:11:56 GMT
- Title: Foreground-aware Virtual Staining for Accurate 3D Cell Morphological Profiling
- Authors: Alexandr A. Kalinin, Paula Llanos, Theresa Maria Sommer, Giovanni Sestini, Xinhai Hou, Jonathan Z. Sexton, Xiang Wan, Ivo D. Dinov, Brian D. Athey, Nicolas Rivron, Anne E. Carpenter, Beth Cimini, Shantanu Singh, Matthew J. O'Meara
- Abstract summary: We introduce Spotlight, a virtual staining approach that guides the model to focus on relevant cellular structures. Spotlight improves morphological representation while preserving pixel-level accuracy, resulting in virtual stains better suited for downstream tasks such as segmentation and profiling.
- Score: 44.74519211679785
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Microscopy enables direct observation of cellular morphology in 3D, with transmitted-light methods offering low-cost, minimally invasive imaging and fluorescence microscopy providing specificity and contrast. Virtual staining combines these strengths by using machine learning to predict fluorescence images from label-free inputs. However, training of existing methods typically relies on loss functions that treat all pixels equally, thus reproducing background noise and artifacts instead of focusing on biologically meaningful signals. We introduce Spotlight, a simple yet powerful virtual staining approach that guides the model to focus on relevant cellular structures. Spotlight uses histogram-based foreground estimation to mask pixel-wise loss and to calculate a Dice loss on soft-thresholded predictions for shape-aware learning. Applied to a 3D benchmark dataset, Spotlight improves morphological representation while preserving pixel-level accuracy, resulting in virtual stains better suited for downstream tasks such as segmentation and profiling.
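The two loss components described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: an Otsu-style threshold stands in for the paper's histogram-based foreground estimator, and the sigmoid sharpness `k`, soft-threshold `tau`, and weighting `lam` are assumed values for illustration.

```python
import numpy as np

def foreground_mask(target, bins=256):
    """Histogram-based foreground estimate via Otsu's threshold
    (a stand-in for the paper's histogram-based estimator)."""
    hist, edges = np.histogram(target, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                       # cumulative class probability
    w1 = 1.0 - w0
    mu = np.cumsum(p * centers)
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    t = centers[np.nanargmax(between)]      # maximize between-class variance
    return (target > t).astype(float)

def masked_mse(pred, target, mask):
    """Pixel-wise loss restricted to the estimated foreground."""
    return np.sum(mask * (pred - target) ** 2) / (mask.sum() + 1e-8)

def soft_dice(pred, mask, tau=0.5, k=50.0, eps=1e-8):
    """Dice loss on a soft (sigmoid) thresholding of the prediction."""
    soft_fg = 1.0 / (1.0 + np.exp(-k * (pred - tau)))
    inter = np.sum(soft_fg * mask)
    return 1.0 - (2.0 * inter + eps) / (soft_fg.sum() + mask.sum() + eps)

def spotlight_loss(pred, target, lam=1.0):
    mask = foreground_mask(target)
    return masked_mse(pred, target, mask) + lam * soft_dice(pred, mask)
```

Masking the pixel-wise term keeps background noise from dominating the objective, while the sigmoid soft threshold keeps the Dice term differentiable for shape-aware training.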
Related papers
- From Fibers to Cells: Fourier-Based Registration Enables Virtual Cresyl Violet Staining From 3D Polarized Light Imaging [32.73124984242397]
Comprehensive assessment of the various aspects of the brain's microstructure requires the use of complementary imaging techniques. The gold standard for cytoarchitectonic analysis is light microscopic imaging of cell-body stained tissue sections. We take advantage of deep learning methods for image-to-image translation to generate a virtual staining of 3D-PLI that is spatially aligned at the cellular level.
arXiv Detail & Related papers (2025-05-16T15:59:15Z)
- Hierarchical Sparse Attention Framework for Computationally Efficient Classification of Biological Cells [0.0]
We present SparseAttnNet, a new hierarchical attention-driven framework for efficient image classification. For biological cell images, we demonstrate that SparseAttnNet can process approximately 15% of the pixels instead of the full image.
arXiv Detail & Related papers (2025-05-12T15:29:08Z)
- Volumetric Mapping with Panoptic Refinement via Kernel Density Estimation for Mobile Robots [2.8668675011182967]
Mobile robots usually use lightweight networks to segment objects on RGB images and then localize them via depth maps. We address the problem of panoptic segmentation quality in 3D scene reconstruction by refining segmentation errors using non-parametric statistical methods. We map the predicted masks into a depth frame to estimate their distribution via kernel densities. The outliers in depth perception are then rejected without the need for additional parameters.
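The kernel-density refinement summarized above might be sketched as follows: a hypothetical 1-D NumPy version that scores each masked depth sample under a Gaussian KDE with Silverman's rule-of-thumb bandwidth. The function name and the `frac` density cutoff are illustrative assumptions, not details from the paper.

```python
import numpy as np

def kde_refine_mask(depths, frac=0.1):
    """Keep depth samples in high-density regions of a KDE fit;
    reject low-density samples as segmentation outliers."""
    d = np.asarray(depths, dtype=float)
    n = d.size
    # Silverman's rule-of-thumb bandwidth (no tuned parameters)
    h = 1.06 * d.std() * n ** (-1 / 5) + 1e-12
    # density of every sample under the KDE fit on all samples
    diffs = (d[:, None] - d[None, :]) / h
    dens = np.exp(-0.5 * diffs ** 2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))
    return dens >= frac * dens.max()   # boolean keep-mask per sample
```

Because the bandwidth comes from a rule of thumb and the cutoff is relative to the density peak, the rejection step needs no per-scene tuning, which matches the parameter-free spirit of the summary.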
arXiv Detail & Related papers (2024-12-15T16:46:23Z)
- MM3DGS SLAM: Multi-modal 3D Gaussian Splatting for SLAM Using Vision, Depth, and Inertial Measurements [59.70107451308687]
We show for the first time that using 3D Gaussians for map representation with unposed camera images and inertial measurements can enable accurate SLAM.
Our method, MM3DGS, addresses the limitations of prior rendering by enabling faster scale awareness, and improved trajectory tracking.
We also release a multi-modal dataset, UT-MM, collected from a mobile robot equipped with a camera and an inertial measurement unit.
arXiv Detail & Related papers (2024-04-01T04:57:41Z)
- Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives [70.32817882783608]
We present an approach that produces a simple, compact, and actionable 3D world representation by means of 3D primitives.
Unlike existing primitive decomposition methods that rely on 3D input data, our approach operates directly on images.
We show that the resulting textured primitives faithfully reconstruct the input images and accurately model the visible 3D points.
arXiv Detail & Related papers (2023-07-11T17:58:31Z)
- Neural Point-based Volumetric Avatar: Surface-guided Neural Points for Efficient and Photorealistic Volumetric Head Avatar [62.87222308616711]
We propose a method that adopts the neural point representation and the neural volume rendering process.
Specifically, the neural points are strategically constrained around the surface of the target expression via a high-resolution UV displacement map.
By design, our method is better equipped to handle topologically changing regions and thin structures while also ensuring accurate expression control when animating avatars.
arXiv Detail & Related papers (2023-07-11T03:40:10Z)
- 3D shape reconstruction of semi-transparent worms [0.950214811819847]
3D shape reconstruction typically requires identifying object features or textures in multiple images of a subject.
Here we overcome these challenges by rendering a candidate shape with adaptive blurring and transparency for comparison with the images.
We model the slender Caenorhabditis elegans as a 3D curve using an intrinsic parametrisation that naturally admits biologically-informed constraints and regularisation.
arXiv Detail & Related papers (2023-04-28T13:29:36Z)
- Estimation of Optical Aberrations in 3D Microscopic Bioimages [1.588193964339148]
We describe an extension of PhaseNet enabling its use on 3D images of biological samples.
We add a Python-based restoration of images via Richardson-Lucy deconvolution.
We demonstrate that the deconvolution with the predicted PSF can not only remove the simulated aberrations but also improve the quality of the real raw microscopic images with unknown residual PSF.
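The Richardson-Lucy restoration mentioned above can be illustrated with a minimal 1-D NumPy sketch of the multiplicative update; the paper applies the same idea to 3D microscopy volumes with a predicted PSF, and the initialization and iteration count here are illustrative choices.

```python
import numpy as np

def richardson_lucy(observed, psf, iters=30, eps=1e-12):
    """Richardson-Lucy deconvolution in 1-D: iteratively rescale the
    estimate by the back-projected ratio of observed to re-blurred data."""
    est = np.full_like(observed, observed.mean())  # flat, positive start
    psf_flipped = psf[::-1]
    for _ in range(iters):
        blurred = np.convolve(est, psf, mode="same")
        ratio = observed / (blurred + eps)          # data-fidelity ratio
        est = est * np.convolve(ratio, psf_flipped, mode="same")
    return est
```

The multiplicative form keeps the estimate non-negative and approximately flux-conserving for a normalized PSF, which is why the scheme is a standard choice for photon-limited microscopy data.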
arXiv Detail & Related papers (2022-09-16T13:22:25Z)
- Cycle and Semantic Consistent Adversarial Domain Adaptation for Reducing Simulation-to-Real Domain Shift in LiDAR Bird's Eye View [110.83289076967895]
We present a BEV domain adaptation method based on CycleGAN that uses prior semantic classification in order to preserve the information of small objects of interest during the domain adaptation process.
The quality of the generated BEVs has been evaluated using a state-of-the-art 3D object detection framework on the KITTI 3D Object Detection Benchmark.
arXiv Detail & Related papers (2021-04-22T12:47:37Z)
- Self-Supervised Linear Motion Deblurring [112.75317069916579]
Deep convolutional neural networks are state-of-the-art for image deblurring.
We present a differentiable reblur model for self-supervised motion deblurring.
Our experiments demonstrate that self-supervised single-image deblurring is feasible.
arXiv Detail & Related papers (2020-02-10T20:15:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.