ResWCAE: Biometric Pattern Image Denoising Using Residual
Wavelet-Conditioned Autoencoder
- URL: http://arxiv.org/abs/2307.12255v1
- Date: Sun, 23 Jul 2023 08:02:27 GMT
- Title: ResWCAE: Biometric Pattern Image Denoising Using Residual
Wavelet-Conditioned Autoencoder
- Authors: Youzhi Liang, Wen Liang
- Abstract summary: Biometric authentication with pattern images is increasingly popular in compact Internet of Things (IoT) devices.
The reliability of such systems can be compromised by image quality issues, particularly in the presence of high levels of noise.
This paper proposes a lightweight and robust deep learning architecture, the Residual Wavelet-Conditioned Convolutional Autoencoder (Res-WCAE)
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The utilization of biometric authentication with pattern images is
increasingly popular in compact Internet of Things (IoT) devices. However, the
reliability of such systems can be compromised by image quality issues,
particularly in the presence of high levels of noise. While state-of-the-art
deep learning algorithms designed for generic image denoising have shown
promise, their large number of parameters and lack of optimization for unique
biometric pattern retrieval make them unsuitable for these devices and
scenarios. In response to these challenges, this paper proposes a lightweight
and robust deep learning architecture, the Residual Wavelet-Conditioned
Convolutional Autoencoder (Res-WCAE) with a Kullback-Leibler divergence (KLD)
regularization, designed specifically for fingerprint image denoising. Res-WCAE
comprises two encoders - an image encoder and a wavelet encoder - and one
decoder. Residual connections between the image encoder and decoder are
leveraged to preserve fine-grained spatial features, and the bottleneck layer is
conditioned on the compressed representation of features obtained from the
wavelet encoder, which uses the approximation and detail subimages in the
wavelet-transform domain. The effectiveness of Res-WCAE is evaluated against
several state-of-the-art denoising methods, and the experimental results
demonstrate that Res-WCAE outperforms these methods, particularly for heavily
degraded fingerprint images in the presence of high levels of noise. Overall,
Res-WCAE shows promise as a solution to the challenges faced by biometric
authentication systems in compact IoT devices.
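The wavelet encoder operates on the approximation and detail subimages of the input in the wavelet-transform domain. As a minimal sketch of that preprocessing step (not the authors' code), a single-level 2D Haar transform can be computed in NumPy:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar wavelet transform.

    Returns the approximation subimage (LL) and the three detail
    subimages (LH, HL, HH) that a wavelet encoder such as the one in
    Res-WCAE could consume. `img` must have even height and width.
    """
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # approximation
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh
```

The four half-resolution subimages are what the wavelet branch would compress; the Haar transform is orthonormal, so this step preserves energy and loses no information.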
Related papers
- Learning Multi-scale Spatial-frequency Features for Image Denoising [58.883244886588336]
We propose a novel multi-scale adaptive dual-domain network (MADNet) for image denoising. We use image pyramid inputs to restore noise-free results from low-resolution images. In order to realize the interaction of high-frequency and low-frequency information, we design an adaptive spatial-frequency learning unit.
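As an illustration of the pyramid-input idea (a simplification, not MADNet's actual implementation), a multi-scale pyramid can be built by repeated 2x2 average pooling:

```python
import numpy as np

def image_pyramid(img, levels):
    """Build a simple multi-scale pyramid by repeated 2x2 average
    pooling, the kind of multi-resolution input a network like MADNet
    could consume. Assumes height/width divisible by 2**(levels - 1).
    """
    pyr = [img]
    for _ in range(levels - 1):
        x = pyr[-1]
        pyr.append((x[0::2, 0::2] + x[0::2, 1::2]
                    + x[1::2, 0::2] + x[1::2, 1::2]) / 4.0)
    return pyr
```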
arXiv Detail & Related papers (2025-06-19T13:28:09Z)
- Efficient and Robust Remote Sensing Image Denoising Using Randomized Approximation of Geodesics' Gramian on the Manifold Underlying the Patch Space [2.56711111236449]
We present a robust remote sensing image denoising method that doesn't require additional training samples.
The method asserts a unique emphasis on each color channel during denoising so the three denoised channels are merged to produce the final image.
arXiv Detail & Related papers (2025-04-15T02:46:05Z)
- Picking watermarks from noise (PWFN): an improved robust watermarking model against intensive distortions [8.015939257511018]
This paper introduces a denoise module between the noise layer and the decoder.
The module aims to reduce noise and recover some of the information lost to distortion.
Experimental results show that our proposed method is comparable to existing models and outperforms the state of the art under different noise intensities.
arXiv Detail & Related papers (2024-05-08T16:06:57Z)
- Transfer CLIP for Generalizable Image Denoising [11.144858989063522]
We devise an asymmetrical encoder-decoder denoising network, which incorporates dense features including the noisy image.
Experiments and comparisons conducted across diverse OOD noises, including synthetic noise, real-world sRGB noise, and low-dose CT image noise, demonstrate the superior generalization ability of our method.
arXiv Detail & Related papers (2024-03-22T11:33:04Z)
- Arbitrary-Scale Image Generation and Upsampling using Latent Diffusion Model and Implicit Neural Decoder [29.924160271522354]
Super-resolution (SR) and image generation are important tasks in computer vision and are widely adopted in real-world applications.
Most existing methods, however, generate images only at fixed-scale magnification and suffer from over-smoothing and artifacts.
Most relevant work applied Implicit Neural Representation (INR) to the denoising diffusion model to obtain continuous-resolution yet diverse and high-quality SR results.
We propose a novel pipeline that can super-resolve an input image or generate a novel image from random noise at arbitrary scales.
arXiv Detail & Related papers (2024-03-15T12:45:40Z)
- Multi-stage image denoising with the wavelet transform [125.2251438120701]
Deep convolutional neural networks (CNNs) are used for image denoising via automatically mining accurate structure information.
We propose a multi-stage image denoising CNN with the wavelet transform (MWDCNN) comprising three stages: a dynamic convolutional block (DCB), two cascaded wavelet transform and enhancement blocks (WEBs), and a residual block (RB).
arXiv Detail & Related papers (2022-09-26T03:28:23Z)
- Reducing Redundancy in the Bottleneck Representation of the Autoencoders [98.78384185493624]
Autoencoders are a type of unsupervised neural network that can be used to solve various tasks.
We propose a scheme to explicitly penalize feature redundancies in the bottleneck representation.
We tested our approach across different tasks: dimensionality reduction using three different datasets, image compression using the MNIST dataset, and image denoising using Fashion-MNIST.
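One simple way to discourage bottleneck redundancy (an illustrative stand-in, not the paper's exact loss) is to penalize the off-diagonal entries of the feature correlation matrix over a batch of codes:

```python
import numpy as np

def redundancy_penalty(z):
    """Off-diagonal correlation penalty on a batch of bottleneck codes.

    z: (batch, dim) array of bottleneck activations. Returns the sum of
    squared off-diagonal entries of the feature correlation matrix, so
    that correlated (redundant) bottleneck features are penalized.
    """
    zc = z - z.mean(axis=0)
    cov = zc.T @ zc / (len(z) - 1)
    std = np.sqrt(np.diag(cov)) + 1e-8   # avoid division by zero
    corr = cov / np.outer(std, std)
    off = corr - np.diag(np.diag(corr))  # zero out the diagonal
    return float((off ** 2).sum())
```

Two perfectly duplicated features yield the maximal penalty for that pair, while decorrelated features contribute nothing.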
arXiv Detail & Related papers (2022-02-09T18:48:02Z)
- Designing a Practical Degradation Model for Deep Blind Image Super-Resolution [134.9023380383406]
Single image super-resolution (SISR) methods would not perform well if the assumed degradation model deviates from those in real images.
This paper proposes to design a more complex but practical degradation model that consists of randomly shuffled blur, downsampling and noise degradations.
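The shuffled-degradation idea can be sketched as follows (a toy version: a fixed 3x3 box blur and a single noise level stand in for the paper's randomized blur kernels and noise models):

```python
import numpy as np

def random_degrade(img, rng):
    """Apply blur, 2x downsampling, and Gaussian noise in a random
    order, mimicking a randomly shuffled degradation pipeline."""
    def blur(x):
        # 3x3 box blur with edge padding
        p = np.pad(x, 1, mode="edge")
        h, w = x.shape
        return sum(p[i:i + h, j:j + w]
                   for i in range(3) for j in range(3)) / 9.0

    def down(x):
        return x[::2, ::2]

    def noise(x):
        return x + rng.normal(0.0, 0.05, size=x.shape)

    ops = [blur, down, noise]
    rng.shuffle(ops)  # key idea: the degradation order is randomized
    x = img
    for op in ops:
        x = op(x)
    return x
```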
arXiv Detail & Related papers (2021-03-25T17:40:53Z)
- Asymmetric CNN for image super-resolution [102.96131810686231]
Deep convolutional neural networks (CNNs) have been widely applied for low-level vision over the past five years.
We propose an asymmetric CNN (ACNet) comprising an asymmetric block (AB), a memory enhancement block (MEB) and a high-frequency feature enhancement block (HFFEB) for image super-resolution.
Our ACNet can effectively address single image super-resolution (SISR), blind SISR, and blind SISR with unknown noise.
arXiv Detail & Related papers (2021-03-25T07:10:46Z)
- Convolutional Autoencoder for Blind Hyperspectral Image Unmixing [0.0]
Spectral unmixing is a technique to decompose a mixed pixel into two fundamental representatives: endmembers and abundances.
In this paper, a novel architecture is proposed to perform blind unmixing on hyperspectral images.
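Under the linear mixing model, unmixing one pixel amounts to solving pixel = E @ a for the abundances a given the endmember matrix E. The sketch below uses plain least squares, whereas the paper learns both factors blindly with an autoencoder:

```python
import numpy as np

def unmix(pixel, endmembers):
    """Estimate abundances for one mixed pixel under the linear mixing
    model pixel = endmembers @ abundances. A minimal illustration of
    the unmixing objective, not the paper's blind method.
    pixel: (bands,) spectrum; endmembers: (bands, materials) matrix.
    """
    abundances, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    return abundances
```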
arXiv Detail & Related papers (2020-11-18T17:41:31Z)
- Modeling Lost Information in Lossy Image Compression [72.69327382643549]
Lossy image compression is one of the most commonly used operators for digital images.
We propose a novel invertible framework called Invertible Lossy Compression (ILC) to largely mitigate the information loss problem.
arXiv Detail & Related papers (2020-06-22T04:04:56Z)
- Blur, Noise, and Compression Robust Generative Adversarial Networks [85.68632778835253]
We propose blur, noise, and compression robust GAN (BNCR-GAN) to learn a clean image generator directly from degraded images.
Inspired by NR-GAN, BNCR-GAN uses a multiple-generator model composed of image, blur-kernel, noise, and quality-factor generators.
We demonstrate the effectiveness of BNCR-GAN through large-scale comparative studies on CIFAR-10 and a generality analysis on FFHQ.
arXiv Detail & Related papers (2020-03-17T17:56:22Z)
- Generalized Octave Convolutions for Learned Multi-Frequency Image Compression [20.504561050200365]
We propose the first learned multi-frequency image compression and entropy coding approach.
It is based on the recently developed octave convolutions to factorize the latents into high and low frequency (resolution) components.
We show that the proposed generalized octave convolution can improve the performance of other auto-encoder-based computer vision tasks.
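The core factorization behind octave convolutions can be sketched as splitting a feature map into a half-resolution low-frequency part and a high-frequency residual (a simplified, single-channel illustration, not the paper's learned operator):

```python
import numpy as np

def octave_split(x):
    """Factorize a feature map into a low-frequency (half-resolution)
    component and a high-frequency residual. Assumes even height and
    width. The original map is exactly recoverable from the two parts.
    """
    low = (x[0::2, 0::2] + x[0::2, 1::2]
           + x[1::2, 0::2] + x[1::2, 1::2]) / 4.0
    up = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)  # nearest upsample
    high = x - up
    return low, high
```

Because the low branch lives at half resolution, subsequent convolutions on it cost roughly a quarter of the full-resolution compute.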
arXiv Detail & Related papers (2020-02-24T01:35:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.