A Comparative Study of Image Denoising Algorithms
- URL: http://arxiv.org/abs/2412.05490v1
- Date: Sat, 07 Dec 2024 01:23:10 GMT
- Title: A Comparative Study of Image Denoising Algorithms
- Authors: Muhammad Umair Danish,
- Abstract summary: Digital images play a significant, backbone role in many areas such as image processing, vision computing, robotics, and biomedicine.
Images are likely to get corrupted or degraded by various degradation factors.
Several image denoising algorithms have been proposed in the literature focusing on robust, low-cost and fast techniques to improve output performance.
- Score: 0.0
- Abstract: With recent advancements in the information industry, critical data in the form of digital images is best understood by the human brain. Digital images therefore play a significant, backbone role in many areas such as image processing, vision computing, robotics, and biomedicine. Such use of digital images is practically implementable in various real-time scenarios such as the biological sciences, medicine, gaming technology, computer and communication technology, data and statistical science, radiological sciences and medical imaging technology, and medical lab technology. However, when any digital image is transmitted electronically or captured via camera, it is likely to be corrupted or degraded by various degradation factors. To address this problem, several image denoising algorithms have been proposed in the literature, focusing on robust, low-cost, and fast techniques to improve output performance. Consequently, in this research project, an earnest effort has been made to study various image denoising algorithms, with a specific focus on the state-of-the-art techniques NL-means, K-SVD, and BM3D. Standard images, natural images, texture images, synthetic images, and images from other datasets have been tested with these algorithms, and a detailed set of convincing results has been provided for efficient comparison.
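For context, two of the highlighted techniques can be compared in a few lines of Python. The sketch below is a minimal illustration, assuming scikit-image for NL-means and the third-party bm3d package for BM3D (the paper does not specify implementations); K-SVD is omitted here because it lacks an equally standard off-the-shelf implementation.

```python
# Minimal sketch (not the paper's code): denoise a standard test image with
# NL-means and BM3D, then compare PSNR against the clean reference.
from skimage import data, img_as_float
from skimage.util import random_noise
from skimage.restoration import denoise_nl_means, estimate_sigma
from skimage.metrics import peak_signal_noise_ratio
import bm3d  # third-party package (pip install bm3d); an assumption here

clean = img_as_float(data.camera())                     # standard test image
noisy = random_noise(clean, mode="gaussian", var=0.01)  # additive Gaussian noise

sigma = estimate_sigma(noisy)                           # estimated noise std
nlm = denoise_nl_means(noisy, h=1.15 * sigma, sigma=sigma,
                       fast_mode=True, patch_size=7, patch_distance=11)
bm = bm3d.bm3d(noisy, sigma_psd=sigma)

for name, img in [("noisy", noisy), ("NL-means", nlm), ("BM3D", bm)]:
    psnr = peak_signal_noise_ratio(clean, img, data_range=1.0)
    print(f"{name:9s} PSNR = {psnr:.2f} dB")
```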
Related papers
- Is JPEG AI going to change image forensics? [50.92778618091496]
We investigate the counter-forensic effects of the forthcoming JPEG AI standard based on neural image compression.
We show that an increase in false alarms impairs the performance of leading forensic detectors when analyzing genuine content processed through JPEG AI.
arXiv Detail & Related papers (2024-12-04T12:07:20Z) - Private, Efficient and Scalable Kernel Learning for Medical Image Analysis [1.7999333451993955]
OKRA (Orthonormal K-fRAmes) is a novel randomized encoding-based approach for kernel-based machine learning.
It significantly enhances scalability and speed compared to current state-of-the-art solutions.
arXiv Detail & Related papers (2024-10-21T10:03:03Z) - QUBIQ: Uncertainty Quantification for Biomedical Image Segmentation Challenge [93.61262892578067]
Uncertainty in medical image segmentation tasks, especially inter-rater variability, presents a significant challenge.
This variability directly impacts the development and evaluation of automated segmentation algorithms.
We report the set-up and summarize the benchmark results of the Quantification of Uncertainties in Biomedical Image Quantification Challenge (QUBIQ).
arXiv Detail & Related papers (2024-03-19T17:57:24Z) - Harnessing Machine Learning for Discerning AI-Generated Synthetic Images [2.6227376966885476]
We employ machine learning techniques to discern between AI-generated and genuine images.
We refine and adapt advanced deep learning architectures like ResNet, VGGNet, and DenseNet.
The experimental results were significant, demonstrating that our optimized deep learning models outperform traditional methods.
arXiv Detail & Related papers (2024-01-14T20:00:37Z) - Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
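As a rough illustration of the disruption idea described above (local masking combined with a low-level perturbation), the 2D sketch below is a hypothetical rendering, not the authors' implementation; the paper itself targets 3D radiology volumes.

```python
# Hypothetical 2D sketch of the disruption step: random patch masking plus a
# low-level perturbation (additive noise). All names and values are illustrative.
import torch

def disrupt(x: torch.Tensor, mask_ratio: float = 0.3,
            patch: int = 8, noise_std: float = 0.1) -> torch.Tensor:
    """Corrupt a (C, H, W) image; H and W are assumed divisible by `patch`."""
    _, h, w = x.shape
    out = x + noise_std * torch.randn_like(x)           # low-level perturbation
    keep = torch.rand(h // patch, w // patch) > mask_ratio
    mask = keep.repeat_interleave(patch, 0).repeat_interleave(patch, 1)
    return out * mask                                   # zero out masked patches

# A pre-training step would then reconstruct x from disrupt(x), e.g.:
#   loss = torch.nn.functional.mse_loss(model(disrupt(x)), x)
```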
arXiv Detail & Related papers (2023-07-31T17:59:42Z) - Deepfake Image Generation for Improved Brain Tumor Segmentation [0.0]
This work investigates the feasibility of employing deep-fake image generation for effective brain tumor segmentation.
A Generative Adversarial Network was used for image-to-image translation, and image segmentation was performed with a U-Net-based convolutional neural network trained on deepfake images.
Results show improved image segmentation quality metrics and suggest the approach could assist when training data is limited.
arXiv Detail & Related papers (2023-07-26T16:11:51Z) - On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts, as these cause shifts in the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to inject robustness into model performance against varying image artifacts.
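One common way to realize such a swap in PyTorch is sketched below; this is a generic illustration of replacing batch normalization with group normalization, not the paper's exact procedure.

```python
# Generic sketch: recursively replace BatchNorm2d layers with GroupNorm.
import torch.nn as nn

def batchnorm_to_groupnorm(module: nn.Module, groups: int = 8) -> nn.Module:
    """Swap normalization layers in place; channel counts are assumed to be
    divisible by `groups`."""
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            setattr(module, name, nn.GroupNorm(groups, child.num_features))
        else:
            batchnorm_to_groupnorm(child, groups)
    return module
```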
arXiv Detail & Related papers (2023-06-23T03:09:03Z) - Domain-Aware Few-Shot Learning for Optical Coherence Tomography Noise Reduction [0.0]
We propose a few-shot supervised learning framework for optical coherence tomography (OCT) noise reduction.
This framework offers a dramatic increase in training speed and requires only a single image, or part of an image, and a corresponding speckle-suppressed ground truth.
Our results demonstrate significant potential for improving sample complexity, generalization, and time efficiency.
arXiv Detail & Related papers (2023-06-13T19:46:40Z) - Exploiting Raw Images for Real-Scene Super-Resolution [105.18021110372133]
We study the problem of real-scene single image super-resolution to bridge the gap between synthetic data and real captured images.
We propose a method to generate more realistic training data by mimicking the imaging process of digital cameras.
We also develop a two-branch convolutional neural network to exploit the radiance information originally recorded in raw images.
arXiv Detail & Related papers (2021-02-02T16:10:15Z) - Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z) - Deep Denoising For Scientific Discovery: A Case Study In Electron Microscopy [22.566600256820646]
We propose a simulation-based denoising (SBD) framework, in which CNNs are trained on simulated images.
SBD outperforms existing techniques by a wide margin on a simulated benchmark dataset, as well as on real data.
We release the first publicly available benchmark dataset of TEM images, containing 18,000 examples.
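At a high level, simulation-based denoising amounts to supervised training on synthetic clean/noisy pairs. The sketch below is a toy stand-in; the model, noise model, and hyperparameters are assumptions, not those of the paper.

```python
# Toy sketch of simulation-based denoising: fit a small CNN on synthetic
# clean/noisy pairs (here, Poisson shot noise as a simple corruption model).
import torch
import torch.nn as nn
import torch.nn.functional as F

denoiser = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

for step in range(100):
    clean = torch.rand(8, 1, 64, 64)            # stand-in for simulated images
    noisy = torch.poisson(clean * 60.0) / 60.0  # simulated shot noise
    loss = F.mse_loss(denoiser(noisy), clean)   # reconstruct the clean image
    opt.zero_grad(); loss.backward(); opt.step()
```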
arXiv Detail & Related papers (2020-10-24T19:59:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.