Semi- and Self-Supervised Multi-View Fusion of 3D Microscopy Images
using Generative Adversarial Networks
- URL: http://arxiv.org/abs/2108.02743v1
- Date: Thu, 5 Aug 2021 17:21:01 GMT
- Title: Semi- and Self-Supervised Multi-View Fusion of 3D Microscopy Images
using Generative Adversarial Networks
- Authors: Canyu Yang, Dennis Eschweiler, Johannes Stegmaier
- Abstract summary: Recent developments in fluorescence microscopy allow capturing high-resolution 3D images over time for living model organisms.
To be able to image even large specimens, techniques like multi-view light-sheet imaging record different orientations at each time point.
We investigate CNN-based multi-view deconvolution and fusion on two synthetic data sets that mimic developing embryos and involve either two or four complementary 3D views.
- Score: 0.11719282046304678
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent developments in fluorescence microscopy allow capturing
high-resolution 3D images over time for living model organisms. To be able to
image even large specimens, techniques like multi-view light-sheet imaging
record different orientations at each time point that can then be fused into a
single high-quality volume. Based on measured point spread functions (PSF),
deconvolution and content fusion are able to largely revert the inevitable
degradation occurring during the imaging process. Classical multi-view
deconvolution and fusion methods mainly use iterative procedures and
content-based averaging. Lately, Convolutional Neural Networks (CNNs) have been
deployed to approach 3D single-view deconvolution microscopy, but the
multi-view case remains to be studied. We investigated the efficacy of CNN-based
multi-view deconvolution and fusion with two synthetic data sets that mimic
developing embryos and involve either two or four complementary 3D views.
Compared with classical state-of-the-art methods, the proposed semi- and
self-supervised models achieve competitive and superior deconvolution and
fusion quality in the two-view and quad-view cases, respectively.
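For reference, below is a minimal sketch of the classical iterative baseline the abstract contrasts the CNN models against: multi-view Richardson-Lucy deconvolution with measured PSFs. The function name and parameters (`views`, `psfs`, `n_iter`) are illustrative assumptions, not the paper's code.

```python
import numpy as np
from scipy.signal import fftconvolve

def multiview_richardson_lucy(views, psfs, n_iter=30, eps=1e-12):
    """Deconvolve and fuse complementary 3D views using their measured PSFs.

    views : list of 3D arrays, one blurred volume per orientation
    psfs  : list of 3D arrays, the measured PSF of each view
    """
    estimate = np.mean(views, axis=0)              # start from the plain average
    for _ in range(n_iter):
        for observed, psf in zip(views, psfs):     # cycle through the views
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = observed / np.maximum(blurred, eps)
            # Richardson-Lucy back-projection: correlate with the flipped PSF
            estimate = estimate * fftconvolve(ratio, psf[::-1, ::-1, ::-1],
                                              mode="same")
            estimate = np.clip(estimate, 0.0, None)  # FFT ringing can go negative
    return estimate
```

Each inner step blurs the current estimate with one view's PSF, compares the result against that view's observation, and back-projects the correction; cycling over the complementary views is what fuses them into a single volume.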
Related papers
- MagicMan: Generative Novel View Synthesis of Humans with 3D-Aware Diffusion and Iterative Refinement [23.707586182294932]
Existing works in single-image human reconstruction suffer from weak generalizability due to insufficient training data or 3D inconsistencies stemming from a lack of comprehensive multi-view knowledge.
We introduce MagicMan, a human-specific multi-view diffusion model designed to generate high-quality novel view images from a single reference image.
arXiv Detail & Related papers (2024-08-26T12:10:52Z)
- CryoSPIN: Improving Ab-Initio Cryo-EM Reconstruction with Semi-Amortized Pose Inference [30.195615398809043]
Cryo-EM is an increasingly popular method for determining the atomic resolution 3D structure of macromolecular complexes.
Recent developments in cryo-EM have focused on deep learning for which amortized inference has been used to predict pose.
Here, we propose a new semi-amortized method, cryoSPIN, in which reconstruction begins with amortized inference and then switches to a form of auto-decoding.
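The semi-amortized idea can be illustrated with a toy: an amortized predictor proposes a pose, then per-image optimization (standing in for the auto-decoding stage) refines it. The 2D in-plane rotation setting, the grid search, and all names below are simplifying assumptions, not the paper's actual pipeline.

```python
import numpy as np
from scipy.ndimage import rotate

def refine_pose(image, template, amortized_angle, half_window=10.0, step=0.5):
    """Refine an encoder-predicted in-plane rotation by local per-image search."""
    best_angle, best_err = amortized_angle, np.inf
    for angle in np.arange(amortized_angle - half_window,
                           amortized_angle + half_window + step, step):
        rotated = rotate(template, angle, reshape=False)   # candidate pose
        err = float(np.sum((rotated - image) ** 2))        # per-image fit
        if err < best_err:
            best_angle, best_err = angle, err
    return best_angle
```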
arXiv Detail & Related papers (2024-06-15T00:44:32Z)
- SDR-Former: A Siamese Dual-Resolution Transformer for Liver Lesion Classification Using 3D Multi-Phase Imaging [59.78761085714715]
This study proposes a novel Siamese Dual-Resolution Transformer (SDR-Former) framework for liver lesion classification.
The proposed framework has been validated through comprehensive experiments on two clinical datasets.
To support the scientific community, we are releasing our extensive multi-phase MR dataset for liver lesion analysis to the public.
arXiv Detail & Related papers (2024-02-27T06:32:56Z)
- EpiDiff: Enhancing Multi-View Synthesis via Localized Epipolar-Constrained Diffusion [60.30030562932703]
EpiDiff is a localized interactive multiview diffusion model.
It generates 16 multiview images in just 12 seconds.
It surpasses previous methods in quality evaluation metrics.
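The geometric relation behind any epipolar-constrained attention is the classical constraint l' = F x: a pixel's correspondences in another view lie on its epipolar line. A minimal sketch, assuming a known fundamental matrix F and an illustrative pixel threshold:

```python
import numpy as np

def epipolar_mask(F, x, height, width, thresh=2.0):
    """Pixels in the second view within `thresh` px of the epipolar line of x.

    F : 3x3 fundamental matrix between the two views
    x : (u, v) pixel coordinates of a point in the first view
    """
    a, b, c = F @ np.array([x[0], x[1], 1.0])           # epipolar line l' = F x
    v, u = np.mgrid[0:height, 0:width]                  # (row, col) pixel grids
    dist = np.abs(a * u + b * v + c) / np.hypot(a, b)   # point-line distance
    return dist < thresh
```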
arXiv Detail & Related papers (2023-12-11T05:20:52Z)
- Wonder3D: Single Image to 3D using Cross-Domain Diffusion [105.16622018766236]
Wonder3D is a novel method for efficiently generating high-fidelity textured meshes from single-view images.
To holistically improve the quality, consistency, and efficiency of image-to-3D tasks, we propose a cross-domain diffusion model.
arXiv Detail & Related papers (2023-10-23T15:02:23Z)
- Progressive Multi-view Human Mesh Recovery with Self-Supervision [68.60019434498703]
Existing solutions typically suffer from poor generalization performance to new settings.
We propose a novel simulation-based training pipeline for multi-view human mesh recovery.
arXiv Detail & Related papers (2022-12-10T06:28:29Z)
- Multi-View Photometric Stereo Revisited [100.97116470055273]
Multi-view photometric stereo (MVPS) is a preferred method for detailed and precise 3D acquisition of an object from images.
We present a simple, practical approach to MVPS, which works well for isotropic as well as other object material types such as anisotropic and glossy.
The proposed approach shows state-of-the-art results when tested extensively on several benchmark datasets.
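The textbook building block underneath any photometric stereo pipeline is the per-pixel Lambertian least-squares solve I = L (albedo * n). The sketch below is only that single-view, calibrated building block, not the paper's method, which also handles anisotropic and glossy materials:

```python
import numpy as np

def photometric_stereo(images, lights):
    """images: (k, h, w) intensities; lights: (k, 3) unit light directions.

    Solves the Lambertian model I = L (albedo * n) per pixel in least squares.
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                             # (k, h*w)
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)        # G = albedo * n, (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / np.maximum(albedo, 1e-12)).T.reshape(h, w, 3)
    return normals, albedo.reshape(h, w)
```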
arXiv Detail & Related papers (2022-10-14T09:46:15Z)
- Mutual Attention-based Hybrid Dimensional Network for Multimodal Imaging Computer-aided Diagnosis [4.657804635843888]
We propose a novel mutual attention-based hybrid dimensional network for MultiModal 3D medical image classification (MMNet).
The hybrid dimensional network integrates 2D CNN with 3D convolution modules to generate deeper and more informative feature maps.
We further design a mutual attention framework in the network to build the region-wise consistency in similar stereoscopic regions of different image modalities.
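A toy sketch of mutual (cross-modal) attention between region features of two imaging modalities, the general mechanism named above; the plain scaled dot-product form and the shapes are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mutual_attention(feat_a, feat_b):
    """feat_a, feat_b: (n_regions, channels) features from two modalities."""
    d = feat_a.shape[1]
    scores = feat_a @ feat_b.T / np.sqrt(d)     # region-wise cross-modal similarity
    enriched_a = feat_a + softmax(scores, axis=1) @ feat_b    # A attends to B
    enriched_b = feat_b + softmax(scores.T, axis=1) @ feat_a  # B attends to A
    return enriched_a, enriched_b
```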
arXiv Detail & Related papers (2022-01-24T02:31:25Z)
- IMAGINE: Image Synthesis by Image-Guided Model Inversion [79.4691654458141]
We introduce an inversion based method, denoted as IMAge-Guided model INvErsion (IMAGINE), to generate high-quality and diverse images.
We leverage the knowledge of image semantics from a pre-trained classifier to achieve plausible generations.
IMAGINE enables the synthesis procedure to simultaneously 1) enforce semantic specificity constraints during the synthesis, 2) produce realistic images without generator training, and 3) give users intuitive control over the generation process.
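A toy sketch of classifier-guided model inversion in the spirit of this summary: gradient ascent on a pre-trained classifier's score for a target class, with no generator training. The linear stand-in classifier keeps the gradient analytic; a real model would need autodiff, and every name here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_classes, target = 64 * 64, 10, 3
W = rng.normal(size=(n_classes, n_pixels))   # stands in for a pre-trained classifier

x = rng.uniform(size=n_pixels) * 0.1         # start the synthesis from noise
for _ in range(200):
    grad = W[target] - 1e-3 * x              # d/dx of class score minus an L2 prior
    x = np.clip(x + 0.1 * grad, 0.0, 1.0)    # ascend, stay in valid pixel range
```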
arXiv Detail & Related papers (2021-04-13T02:00:24Z)
- Deeply-Supervised Density Regression for Automatic Cell Counting in Microscopy Images [9.392002197101965]
We propose a new density regression-based method for automatically counting cells in microscopy images.
The proposed method introduces two innovations compared to other state-of-the-art regression-based methods.
Experimental studies evaluated on four datasets demonstrate the superior performance of the proposed method.
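A minimal sketch of the density-regression idea behind such cell counters: the training target is a density map with a small Gaussian at each annotated cell centre, so the predicted map's integral is the count. The sigma and shapes below are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def density_target(points, shape, sigma=2.0):
    """points: (n, 2) (row, col) cell centres -> density map summing to ~n."""
    target = np.zeros(shape, dtype=np.float64)
    for r, c in np.round(points).astype(int):
        target[r, c] += 1.0                     # one unit of mass per cell
    return gaussian_filter(target, sigma)       # smoothing preserves the total mass

def count_cells(predicted_density):
    return float(predicted_density.sum())       # counting = integrating the map
```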
arXiv Detail & Related papers (2020-11-07T04:02:47Z)
- Semi-Automatic Generation of Tight Binary Masks and Non-Convex Isosurfaces for Quantitative Analysis of 3D Biological Samples [0.2711107673793059]
Current microscopy allows imaging complete organisms in 3D over time (3D+t), offering insights into their development at the cellular level.
Even though imaging speed and quality are steadily improving, fully automatic segmentation and analysis methods are often not accurate enough.
This is especially true when imaging large specimens (100 µm - 1 mm) and deep inside the specimen.
We developed a system for the quantitative analysis of 3D+t light-sheet microscopy images of embryos.
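A minimal sketch of one step such a pipeline needs: turning a 3D volume into a tight binary mask and a triangulated isosurface. Otsu thresholding and marching cubes are standard building blocks assumed here for illustration, not necessarily the authors' exact method.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import marching_cubes

def mask_and_isosurface(volume):
    """volume: 3D intensity array -> binary mask plus a triangulated isosurface."""
    mask = volume > threshold_otsu(volume)                 # tight binary mask
    verts, faces, _, _ = marching_cubes(mask.astype(float), level=0.5)
    return mask, verts, faces
```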
arXiv Detail & Related papers (2020-01-30T17:36:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.