Deep Quantized Representation for Enhanced Reconstruction
- URL: http://arxiv.org/abs/2107.14368v1
- Date: Thu, 29 Jul 2021 23:22:27 GMT
- Title: Deep Quantized Representation for Enhanced Reconstruction
- Authors: Akash Gupta, Abhishek Aich, Kevin Rodriguez, G. Venugopala Reddy, Amit K. Roy-Chowdhury
- Abstract summary: We propose a data-driven Deep Quantized Latent Representation (DQLR) methodology for high-quality image reconstruction in the Shoot Apical Meristem (SAM) of Arabidopsis thaliana.
Our proposed framework utilizes multiple consecutive slices in the z-stack to learn a low dimensional latent space, quantize it and subsequently perform reconstruction using the quantized representation to obtain sharper images.
- Score: 33.337794852677035
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While machine learning approaches have shown remarkable performance in
biomedical image analysis, most of these methods rely on high-quality and
accurate imaging data. However, collecting such data requires intensive and
careful manual effort. One of the major challenges in imaging the Shoot Apical
Meristem (SAM) of Arabidopsis thaliana, is that the deeper slices in the
z-stack suffer from different perceptual quality-related problems like poor
contrast and blurring. These quality-related issues often lead to the disposal
of the painstakingly collected data with little to no control on quality while
collecting the data. Therefore, it becomes necessary to employ and design
techniques that can enhance the images to make them more suitable for further
analysis. In this paper, we propose a data-driven Deep Quantized Latent
Representation (DQLR) methodology for high-quality image reconstruction in the
Shoot Apical Meristem (SAM) of Arabidopsis thaliana. Our proposed framework
utilizes multiple consecutive slices in the z-stack to learn a low dimensional
latent space, quantize it and subsequently perform reconstruction using the
quantized representation to obtain sharper images. Experiments on a publicly
available dataset validate our methodology showing promising results.
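The quantize-then-reconstruct step described in the abstract can be illustrated with a VQ-VAE-style nearest-neighbor codebook lookup. This is a minimal sketch, not the authors' implementation; the codebook size, latent dimension, and function name are all hypothetical:

```python
import numpy as np

def quantize_latent(z, codebook):
    """Map each latent vector to its nearest codebook entry (Euclidean distance)."""
    # z: (n, d) latent vectors; codebook: (k, d) learned code vectors
    dists = np.linalg.norm(z[:, None, :] - codebook[None, :, :], axis=-1)  # (n, k)
    indices = np.argmin(dists, axis=1)
    return codebook[indices], indices

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # 8 hypothetical codes of dimension 4
z = rng.normal(size=(5, 4))          # 5 latent vectors, e.g. one per z-stack slice
z_q, idx = quantize_latent(z, codebook)
print(z_q.shape, idx.shape)          # (5, 4) (5,)
```

In the full framework, a decoder would reconstruct sharper slices from `z_q` rather than from the raw latent `z`; quantization constrains the latent space to a discrete set of learned codes.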
Related papers
- FoundIR: Unleashing Million-scale Training Data to Advance Foundation Models for Image Restoration [66.61201445650323]
Existing methods suffer from a generalization bottleneck in real-world scenarios.
We contribute a million-scale dataset with two notable advantages over existing training data.
We propose a robust model, FoundIR, to better address a broader range of restoration tasks in real-world scenarios.
arXiv Detail & Related papers (2024-12-02T12:08:40Z)
- Self-Supervised Denoiser Framework [3.2953695839572528]
We introduce the Self-supervised Denoiser Framework (SDF) to enhance the quality of images reconstructed from undersampled sinogram data.
SDF is a self-supervised training method that leverages pre-training on highly sampled sinogram data.
We demonstrate that SDF produces better image quality, in terms of peak signal-to-noise ratio, than other analytical and self-supervised frameworks.
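Peak signal-to-noise ratio, the metric cited above, is computed from the mean squared error between a reference and a reconstruction. A short sketch (images assumed normalized to [0, 1]):

```python
import numpy as np

def psnr(reference, reconstruction, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference - reconstruction) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.full((16, 16), 0.5)
noisy = ref + 0.1            # uniform 0.1 error -> MSE = 0.01
print(round(psnr(ref, noisy), 2))  # 20.0
```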
arXiv Detail & Related papers (2024-11-29T10:21:37Z)
- Rethinking Image Super-Resolution from Training Data Perspectives [54.28824316574355]
We investigate the understudied effect of the training data used for image super-resolution (SR).
With this, we propose an automated image evaluation pipeline.
We find that datasets with (i) low compression artifacts, (ii) high within-image diversity as judged by the number of different objects, and (iii) a large number of images from ImageNet or PASS all positively affect SR performance.
arXiv Detail & Related papers (2024-09-01T16:25:04Z)
- Exposure Bracketing Is All You Need For A High-Quality Image [50.822601495422916]
Multi-exposure images are complementary in denoising, deblurring, high dynamic range imaging, and super-resolution.
In this work, we propose to utilize exposure bracketing photography to obtain a high-quality image by combining these tasks.
In particular, a temporally modulated recurrent network (TMRNet) and self-supervised adaptation method are proposed.
arXiv Detail & Related papers (2024-01-01T14:14:35Z)
- Generalizable Denoising of Microscopy Images using Generative Adversarial Networks and Contrastive Learning [0.0]
We propose a novel framework for few-shot microscopy image denoising.
Our approach combines a generative adversarial network (GAN) trained via contrastive learning (CL) with two structure preserving loss terms.
We demonstrate the effectiveness of our method on three well-known microscopy imaging datasets.
arXiv Detail & Related papers (2023-03-27T13:55:07Z)
- MSTRIQ: No Reference Image Quality Assessment Based on Swin Transformer with Multi-Stage Fusion [8.338999282303755]
We propose a novel algorithm based on the Swin Transformer.
It aggregates information from both local and global features to better predict the quality.
It ranks 2nd in the no-reference track of NTIRE 2022 Perceptual Image Quality Assessment Challenge.
arXiv Detail & Related papers (2022-05-20T11:34:35Z)
- Compressive Ptychography using Deep Image and Generative Priors [9.658250977094562]
Ptychography is a well-established coherent diffraction imaging technique that enables non-invasive imaging of samples at a nanometer scale.
One major limitation of ptychography is the long data acquisition time due to mechanical scanning of the sample.
We propose a generative model combining deep image priors with deep generative priors.
arXiv Detail & Related papers (2022-05-05T02:18:26Z)
- Unsupervised PET Reconstruction from a Bayesian Perspective [12.512270202705404]
DeepRED is a typical representation that combines DIP and regularization by denoising (RED).
In this article, we leverage DeepRED from a Bayesian perspective to reconstruct PET images from a single corrupted sinogram without any supervised or auxiliary information.
arXiv Detail & Related papers (2021-10-29T06:32:21Z)
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
- Adaptive Gradient Balancing for Undersampled MRI Reconstruction and Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
arXiv Detail & Related papers (2021-04-05T13:05:22Z)
- Multi-label Thoracic Disease Image Classification with Cross-Attention Networks [65.37531731899837]
We propose a novel scheme of Cross-Attention Networks (CAN) for automated thoracic disease classification from chest x-ray images.
We also design a new loss function that goes beyond cross-entropy loss to aid the cross-attention process and to overcome the imbalance between classes and easy-dominated samples within each class.
arXiv Detail & Related papers (2020-07-21T14:37:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.