Compressed Smooth Sparse Decomposition
- URL: http://arxiv.org/abs/2201.07404v1
- Date: Wed, 19 Jan 2022 03:50:41 GMT
- Title: Compressed Smooth Sparse Decomposition
- Authors: Shancong Mou and Jianjun Shi
- Abstract summary: We propose a fast and data-efficient method with a theoretical performance guarantee for sparse anomaly detection in images.
The proposed method, named Compressed Smooth Sparse Decomposition (CSSD), is a one-step method that unifies the compressive image acquisition and decomposition-based image processing techniques.
Compared to traditional smooth and sparse decomposition algorithms, significant transmission cost reduction and computational speed boost can be achieved with negligible performance loss.
- Score: 3.8644240125444
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image-based anomaly detection systems are of vital importance in various
manufacturing applications. The resolution and acquisition rate of such systems
have increased significantly in recent years with the rapid development of image
sensing technology. This enables the detection of tiny defects in real time.
However, such high resolution and acquisition rates not only slow down image
processing algorithms but also increase data storage and transmission costs. To
tackle this problem, we propose a fast and
data-efficient method with a theoretical performance guarantee that is suitable
for sparse anomaly detection in images with a smooth background (smooth plus
sparse signal). The proposed method, named Compressed Smooth Sparse
Decomposition (CSSD), is a one-step method that unifies the compressive image
acquisition and decomposition-based image processing techniques. To further
enhance its performance in a high-dimensional scenario, a Kronecker Compressed
Smooth Sparse Decomposition (KronCSSD) method is proposed. Compared to
traditional smooth and sparse decomposition algorithms, significant
transmission cost reduction and computational speed boost can be achieved with
negligible performance loss. Simulation examples and several case studies in
various applications illustrate the effectiveness of the proposed framework.
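As a rough illustration of the smooth-plus-sparse idea (a minimal sketch, not the authors' implementation), the example below recovers a smooth background B@theta and a sparse anomaly s from randomly projected measurements y ≈ Phi @ (B@theta + s), alternating a ridge-regularized solve for the background with an ISTA-style soft-thresholding step for the anomaly. The polynomial basis B, the Gaussian projection Phi, the ridge penalty standing in for SSD's roughness penalty, and all parameter values are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def compressed_smooth_sparse(y, Phi, B, lam=1e-3, gamma=0.05, n_iter=300):
    """Recover (smooth background, sparse anomaly) from y ~= Phi @ (B @ theta + s).

    Alternates a ridge-regularized least-squares update for the smooth
    coefficients theta with one ISTA (proximal-gradient) step for s.
    """
    n = Phi.shape[1]
    k = B.shape[1]
    theta = np.zeros(k)
    s = np.zeros(n)
    PhiB = Phi @ B
    G = PhiB.T @ PhiB + lam * np.eye(k)          # regularized normal equations for theta
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2     # safe ISTA step size (1 / ||Phi||_2^2)
    for _ in range(n_iter):
        # Smooth background given the current sparse estimate (ridge solve).
        theta = np.linalg.solve(G, PhiB.T @ (y - Phi @ s))
        # Sparse anomaly given the background (one soft-thresholded gradient step).
        r = y - PhiB @ theta - Phi @ s
        s = soft_threshold(s + step * (Phi.T @ r), step * gamma)
    return B @ theta, s

# Toy 1-D example: smooth sinusoid plus a few spikes, observed at 4x compression.
rng = np.random.default_rng(0)
n, m, k = 256, 64, 10
grid = np.linspace(0.0, 1.0, n)
B = np.vstack([grid ** j for j in range(k)]).T   # simple polynomial "smooth" basis
s_true = np.zeros(n)
s_true[rng.choice(n, 5, replace=False)] = 3.0    # sparse anomalies
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian projection
y = Phi @ (np.sin(2 * np.pi * grid) + s_true)    # compressed measurements
background, anomaly = compressed_smooth_sparse(y, Phi, B)
```

In the Kronecker variant (KronCSSD), one would presumably replace the single projection Phi with separate projections applied along each image dimension, which keeps the measurement operators small enough for high-resolution images.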
Related papers
- bit2bit: 1-bit quanta video reconstruction via self-supervised photon prediction [57.199618102578576]
We propose bit2bit, a new method for reconstructing high-quality image stacks at original resolution from sparse binary quanta image data.
Inspired by recent work on Poisson denoising, we developed an algorithm that creates a dense image sequence from sparse binary photon data.
We present a novel dataset containing a wide range of real SPAD high-speed videos under various challenging imaging conditions.
arXiv Detail & Related papers (2024-10-30T17:30:35Z)
- Batch-FPM: Random batch-update multi-parameter physical Fourier ptychography neural network [0.933064392528114]
Fourier Ptychographic Microscopy (FPM) is a computational imaging technique that enables high-resolution imaging over a large field of view.
We propose a fast and robust FPM reconstruction method based on physical neural networks with a batch-update stochastic gradient descent (SGD) optimization strategy.
Our method has better convergence performance even for low signal-to-noise-ratio data sets, such as low-exposure-time dark-field images.
arXiv Detail & Related papers (2024-08-25T09:24:18Z)
- Mitigating Data Consistency Induced Discrepancy in Cascaded Diffusion Models for Sparse-view CT Reconstruction [4.227116189483428]
This study introduces a novel Cascaded Diffusion with Discrepancy Mitigation framework.
It combines low-quality image generation in latent space with high-quality image generation in pixel space.
It minimizes computational costs by moving some inference steps from pixel space to latent space.
arXiv Detail & Related papers (2024-03-14T12:58:28Z)
- Efficient Diffusion Model for Image Restoration by Residual Shifting [63.02725947015132]
This study proposes a novel and efficient diffusion model for image restoration.
Our method removes the need for post-acceleration during inference, thereby avoiding the associated performance deterioration.
Our method achieves superior or comparable performance to current state-of-the-art methods on three classical IR tasks.
arXiv Detail & Related papers (2024-03-12T05:06:07Z)
- Semantic Ensemble Loss and Latent Refinement for High-Fidelity Neural Image Compression [58.618625678054826]
This study presents an enhanced neural compression method designed for optimal visual fidelity.
We have trained our model with a sophisticated semantic ensemble loss, integrating Charbonnier loss, perceptual loss, style loss, and a non-binary adversarial loss.
Our empirical findings demonstrate that this approach significantly improves the statistical fidelity of neural image compression.
arXiv Detail & Related papers (2024-01-25T08:11:27Z)
- ResShift: Efficient Diffusion Model for Image Super-resolution by Residual Shifting [70.83632337581034]
Diffusion-based image super-resolution (SR) methods are mainly limited by the low inference speed.
We propose a novel and efficient diffusion model for SR that significantly reduces the number of diffusion steps.
Our method constructs a Markov chain that transfers between the high-resolution image and the low-resolution image by shifting the residual.
arXiv Detail & Related papers (2023-07-23T15:10:02Z)
- Improving Multi-generation Robustness of Learned Image Compression [16.86614420872084]
We show that LIC can achieve performance comparable to the first compression of BPG even after 50 rounds of re-encoding, without any change to the network structure.
arXiv Detail & Related papers (2022-10-31T03:26:11Z)
- A Fast Alternating Minimization Algorithm for Coded Aperture Snapshot Spectral Imaging Based on Sparsity and Deep Image Priors [8.890754092562918]
Coded aperture snapshot spectral imaging (CASSI) is a technique used to reconstruct three-dimensional hyperspectral images (HSIs).
This paper proposes a fast alternating minimization algorithm based on the sparsity and deep image priors (Fama-P) of natural images.
arXiv Detail & Related papers (2022-06-12T03:29:14Z)
- Lossless Image Compression Using a Multi-Scale Progressive Statistical Model [16.58692559039154]
Methods based on pixel-wise autoregressive statistical models have shown good performance.
We propose a multi-scale progressive statistical model that takes advantage of the pixel-wise approach and the multi-scale approach.
arXiv Detail & Related papers (2021-08-24T07:33:13Z)
- Designing a Practical Degradation Model for Deep Blind Image Super-Resolution [134.9023380383406]
Single image super-resolution (SISR) methods do not perform well when the assumed degradation model deviates from the degradations in real images.
This paper proposes to design a more complex but practical degradation model that consists of randomly shuffled blur, downsampling and noise degradations.
arXiv Detail & Related papers (2021-03-25T17:40:53Z)
- Deep Unfolded Recovery of Sub-Nyquist Sampled Ultrasound Image [94.42139459221784]
We propose a reconstruction method from sub-Nyquist samples in the time and spatial domains, based on unfolding the ISTA algorithm.
Our method allows reducing the number of array elements, sampling rate, and computational time while ensuring high quality imaging performance.
arXiv Detail & Related papers (2021-03-01T19:19:38Z)
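For the ISTA-unfolding idea in the last entry above, here is a generic minimal sketch (not the paper's ultrasound-specific network, which learns these operators from beamformed channel data): each "layer" is one ISTA iteration, with the two linear maps and the threshold fixed at their classical values; a learned (LISTA-style) version would make them trainable per layer. The matrix A, the sparsity level, and all parameters are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unfolded_ista(y, A, lam=0.05, n_layers=50):
    """Apply n_layers unfolded ISTA iterations x <- soft(W2 @ x + W1 @ y, thr).

    W1, W2, and thr are fixed at their classical ISTA values here; in a learned
    unfolded network they would be trained per layer.
    """
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the data term
    W1 = A.T / L                                  # maps measurements to signal space
    W2 = np.eye(A.shape[1]) - (A.T @ A) / L       # residual feedback on the estimate
    thr = lam / L                                 # per-layer soft-threshold level
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(W2 @ x + W1 @ y, thr)
    return x

# Toy usage: recover a 5-sparse vector from 4x fewer random measurements.
rng = np.random.default_rng(1)
n, m = 200, 50
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 1.0
x_hat = unfolded_ista(A @ x_true, A)
```

ISTA-style iterations of this form are also the workhorse behind the sparse-component update in the CSSD sketch given after the abstract above.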
This list is automatically generated from the titles and abstracts of the papers on this site.