Stochastic Resonance Improves the Detection of Low Contrast Images in Deep Learning Models
- URL: http://arxiv.org/abs/2502.14442v1
- Date: Thu, 20 Feb 2025 10:48:49 GMT
- Title: Stochastic Resonance Improves the Detection of Low Contrast Images in Deep Learning Models
- Authors: Siegfried Ludwig
- Abstract summary: Stochastic resonance describes the utility of noise in improving the detectability of weak signals in certain types of systems. It has been observed widely in natural and engineered settings, but its utility in image classification with rate-based neural networks has not been studied extensively. Results indicate the presence of stochastic resonance in rate-based recurrent neural networks.
- Score: 0.19778256093887275
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Stochastic resonance describes the utility of noise in improving the detectability of weak signals in certain types of systems. It has been observed widely in natural and engineered settings, but its utility in image classification with rate-based neural networks has not been studied extensively. In this analysis a simple LSTM recurrent neural network is trained for digit recognition and classification. During the test phase, image contrast is reduced to a point where the model fails to recognize the presence of a stimulus. Controlled noise is added to partially recover classification performance. The results indicate the presence of stochastic resonance in rate-based recurrent neural networks.
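The protocol above is simple enough to sketch directly. Below is a minimal PyTorch version, assuming MNIST-style 28x28 digits fed to the LSTM row by row; the model size, contrast factor, and noise amplitudes are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class LSTMDigitClassifier(nn.Module):
    """Rate-based recurrent classifier: each 28x28 digit is read as a
    sequence of 28 rows of 28 pixels."""
    def __init__(self, hidden=128, classes=10):
        super().__init__()
        self.lstm = nn.LSTM(input_size=28, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, classes)

    def forward(self, x):                        # x: (batch, 28, 28)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])

def noise_sweep(model, images, labels, contrast=0.05,
                sigmas=(0.0, 0.1, 0.2, 0.4, 0.8)):
    """Reduce contrast until the stimulus is near-undetectable, then add
    zero-mean Gaussian noise of increasing amplitude and record accuracy."""
    model.eval()
    accuracy = {}
    with torch.no_grad():
        weak = images * contrast                 # weak stimulus: low-contrast digits
        for sigma in sigmas:
            noisy = weak + sigma * torch.randn_like(weak)
            logits = model(noisy.view(-1, 28, 28))
            accuracy[sigma] = (logits.argmax(1) == labels).float().mean().item()
    return accuracy
```

If the resonance effect is present, accuracy at some intermediate sigma exceeds both the zero-noise and the high-noise extremes.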
Related papers
- SING: Semantic Image Communications using Null-Space and INN-Guided Diffusion Models [52.40011613324083]
Deep joint source-channel coding (DeepJSCC) systems have recently demonstrated remarkable performance in wireless image transmission.
Existing methods focus on minimizing distortion between the transmitted image and the reconstructed version at the receiver, often overlooking perceptual quality.
We propose SING, a novel framework that formulates the recovery of high-quality images from corrupted reconstructions as an inverse problem.
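For context, the range/null-space decomposition the title alludes to fits in a few lines: the measurement-consistent component is pinned down by the observation, while a learned prior is free to fill in the null-space component. The degradation `A`, the signal, and the `refine` placeholder below are stand-ins for illustration, not SING's actual components.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))        # known linear degradation (stand-in)
x_true = rng.standard_normal(256)
y = A @ x_true                            # corrupted observation

A_pinv = np.linalg.pinv(A)
x_range = A_pinv @ y                      # data-consistent component, fixed by y

def refine(x):                            # placeholder for the learned prior
    return x                              # (a diffusion/INN model would go here)

v = refine(x_range)
x_hat = x_range + (np.eye(256) - A_pinv @ A) @ v   # null-space part adds detail

print("measurement error:", np.linalg.norm(A @ x_hat - y))  # ~0: consistency holds
```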
arXiv Detail & Related papers (2025-03-16T12:32:11Z) - A Tunable Despeckling Neural Network Stabilized via Diffusion Equation [15.996302571895045]
Adversarial attacks can be used as a criterion for judging the adaptability of neural networks to real data.
We propose a tunable, regularized neural network framework that unrolls a shallow denoising neural network block and a diffusion regularization block into a single network for end-to-end training.
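A hedged sketch of what such an unrolled scheme can look like: a shallow learned denoising block alternating with an explicit diffusion (heat-equation) step. The block sizes, step count, and step size are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnrolledDespeckler(nn.Module):
    """Unrolled scheme: K iterations of (learned residual denoising step,
    explicit diffusion step), trained end to end."""
    def __init__(self, steps=4, tau=0.1):
        super().__init__()
        self.steps, self.tau = steps, tau
        self.denoise = nn.Sequential(            # shallow block, shared across steps
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))
        lap = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
        self.register_buffer("lap", lap.view(1, 1, 3, 3))

    def forward(self, x):                        # x: (batch, 1, H, W)
        for _ in range(self.steps):
            x = x - self.denoise(x)                              # learned denoising
            x = x + self.tau * F.conv2d(x, self.lap, padding=1)  # diffusion step
        return x
```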
arXiv Detail & Related papers (2024-11-24T17:08:43Z) - A Unified Model for Compressed Sensing MRI Across Undersampling Patterns [69.19631302047569]
Deep neural networks have shown great potential for reconstructing high-fidelity images from undersampled measurements.
Our model is based on neural operators, a discretization-agnostic architecture.
Our inference speed is also 1,400x faster than diffusion methods.
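A minimal spectral-convolution layer illustrates why neural operators are discretization-agnostic: the learned weights live on a fixed set of Fourier modes rather than on a pixel grid, so the same parameters apply at any input resolution. This simplified sketch keeps only the lowest positive-frequency modes and is not the paper's exact model.

```python
import torch
import torch.nn as nn

class SpectralConv2d(nn.Module):
    """Fourier layer: FFT, learned mixing of the lowest modes, inverse FFT."""
    def __init__(self, channels=8, modes=12):
        super().__init__()
        self.modes = modes
        self.w = nn.Parameter(torch.randn(channels, channels, modes, modes,
                                          dtype=torch.cfloat) / channels)

    def forward(self, x):                        # x: (batch, ch, H, W), H, W >= 2*modes
        xf = torch.fft.rfft2(x)
        out = torch.zeros_like(xf)
        out[:, :, :self.modes, :self.modes] = torch.einsum(
            "bixy,ioxy->boxy", xf[:, :, :self.modes, :self.modes], self.w)
        return torch.fft.irfft2(out, s=x.shape[-2:])
```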
arXiv Detail & Related papers (2024-10-05T20:03:57Z) - Tuning the Frequencies: Robust Training for Sinusoidal Neural Networks [1.5124439914522694]
We introduce a theoretical framework that explains the capacity property of sinusoidal networks.
We show how its layer compositions produce a large number of new frequencies expressed as integer combinations of the input frequencies.
Our method, referred to as TUNER, greatly improves the stability and convergence of sinusoidal INR training, leading to detailed reconstructions.
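The frequency-generation claim is easy to verify numerically: compose two sinusoidal layers and inspect the output spectrum. The toy network below is hand-built for illustration and is unrelated to the TUNER method itself.

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
hidden = np.sin(3 * x) + np.sin(5 * x)        # first layer: input frequencies 3 and 5
output = np.sin(2.0 * hidden)                 # second sinusoidal layer

spectrum = np.abs(np.fft.rfft(output)) / len(x)
peaks = [k for k, mag in enumerate(spectrum) if mag > 1e-3]
print(peaks)  # e.g. 1, 2, 3, 5, 7, 8, ... -- integer combinations m*3 + n*5
```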
arXiv Detail & Related papers (2024-07-30T18:24:46Z) - Learning Low-Rank Feature for Thorax Disease Classification [7.447448767095787]
We study thorax disease classification in this paper.
Effective extraction of features for the disease areas is crucial for disease classification on radiographic images.
We propose a novel Low-Rank Feature Learning (LRFL) method in this paper.
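One plausible instantiation of low-rank feature learning is a nuclear-norm penalty on the mini-batch feature matrix, sketched below; this is an assumption for illustration, not necessarily the paper's exact loss.

```python
import torch

def nuclear_norm_penalty(features: torch.Tensor) -> torch.Tensor:
    """Sum of singular values of the (batch x dim) feature matrix; penalizing
    it pushes the learned features toward low rank."""
    return torch.linalg.svdvals(features).sum()

# usage inside a training step (model, loss_fn, x, y assumed to exist):
#   feats = model.backbone(x)
#   loss = loss_fn(model.head(feats), y) + 1e-3 * nuclear_norm_penalty(feats)
```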
arXiv Detail & Related papers (2024-02-14T15:35:56Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - A Scalable Walsh-Hadamard Regularizer to Overcome the Low-degree Spectral Bias of Neural Networks [79.28094304325116]
Despite the capacity of neural nets to learn arbitrary functions, models trained through gradient descent often exhibit a bias towards "simpler" functions.
We show how this spectral bias towards low-degree frequencies can in fact hurt the neural network's generalization on real-world datasets.
We propose a new scalable functional regularization scheme that aids the neural network to learn higher degree frequencies.
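To make "low-degree spectral bias" concrete: on Boolean inputs, a function's Walsh-Hadamard spectrum can be grouped by degree, and a biased model concentrates its energy at small degrees. A minimal sketch of that measurement (not the paper's regularizer) follows.

```python
import numpy as np
from itertools import product

def walsh_hadamard_spectrum(f_vals):
    """Fast Walsh-Hadamard transform of a function on {0,1}^n, given its
    2^n outputs in lexicographic input order."""
    h = f_vals.astype(float)
    step = 1
    while step < len(h):
        for i in range(0, len(h), 2 * step):
            a = h[i:i + step].copy()
            b = h[i + step:i + 2 * step].copy()
            h[i:i + step], h[i + step:i + 2 * step] = a + b, a - b
        step *= 2
    return h / len(h)

def energy_by_degree(spectrum):
    """Group squared Fourier mass by degree (popcount of the frequency index)."""
    n = int(np.log2(len(spectrum)))
    energy = np.zeros(n + 1)
    for idx, coeff in enumerate(spectrum):
        energy[bin(idx).count("1")] += coeff ** 2
    return energy

# toy check: 3-bit parity is a pure degree-3 function
inputs = np.array(list(product([0, 1], repeat=3)))
f = (inputs.sum(axis=1) % 2) * 2.0 - 1.0          # +/-1-valued parity
print(energy_by_degree(walsh_hadamard_spectrum(f)))  # all mass at degree 3
```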
arXiv Detail & Related papers (2023-05-16T20:06:01Z) - Application of attention-based Siamese composite neural network in medical image recognition [6.370635116365471]
This study establishes a recognition model based on attention and a Siamese neural network.
An attention-based neural network is used as the main network to improve classification performance.
The results show that the fewer the image samples, the more pronounced the model's advantage.
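A minimal sketch of an attention-equipped Siamese pair: two weight-sharing branches whose embedding distance scores similarity. Squeeze-and-excitation is assumed here as the attention module purely for illustration; it is not stated to be the paper's choice.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention (illustrative choice)."""
    def __init__(self, ch, r=4):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(ch, ch // r), nn.ReLU(),
                                nn.Linear(ch // r, ch), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))          # squeeze: global average pool
        return x * w[:, :, None, None]           # excite: reweight channels

class SiameseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), SEBlock(16),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(16 * 16, 64))

    def forward(self, a, b):                     # both branches share weights
        za, zb = self.encoder(a), self.encoder(b)
        return F.pairwise_distance(za, zb)       # small distance => same class
```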
arXiv Detail & Related papers (2023-04-19T16:09:59Z) - WIRE: Wavelet Implicit Neural Representations [42.147899723673596]
Implicit neural representations (INRs) have recently advanced numerous vision-related areas.
Current INRs designed to have high accuracy also suffer from poor robustness.
We develop a new, highly accurate and robust INR that does not exhibit this tradeoff.
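WIRE's key ingredient is a complex Gabor wavelet nonlinearity: oscillatory like a sinusoid, but with a Gaussian envelope that localizes it in space. A minimal sketch with illustrative default parameters:

```python
import torch
import torch.nn as nn

class GaborActivation(nn.Module):
    """Complex Gabor wavelet nonlinearity: exp(i*omega0*x) * exp(-(s0*x)^2).
    Downstream layers must handle the complex dtype."""
    def __init__(self, omega0=10.0, s0=10.0):
        super().__init__()
        self.omega0, self.s0 = omega0, s0

    def forward(self, x):
        return torch.exp(1j * self.omega0 * x - (self.s0 * x) ** 2)
```

In a full INR built this way, linear layers operate on complex values between such activations, and the real part is typically taken at the output.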
arXiv Detail & Related papers (2023-01-05T20:24:56Z) - Neural Clamping: Joint Input Perturbation and Temperature Scaling for Neural Network Calibration [62.4971588282174]
We propose a new post-processing calibration method called Neural Clamping.
Our empirical results show that Neural Clamping significantly outperforms state-of-the-art post-processing calibration methods.
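The name suggests the two learnable pieces: a perturbation added at the input and a temperature applied to the logits, fit jointly on held-out data with the classifier frozen. The sketch below is one such instantiation under those assumptions, not the authors' exact method.

```python
import torch
import torch.nn as nn

class ClampingCalibrator(nn.Module):
    """Learnable input perturbation delta plus logit temperature T = exp(log_t),
    optimized on a validation split while the classifier stays frozen."""
    def __init__(self, input_shape):
        super().__init__()
        self.delta = nn.Parameter(torch.zeros(input_shape))
        self.log_t = nn.Parameter(torch.zeros(()))   # parameterize T > 0

    def forward(self, model, x):
        return model(x + self.delta) / self.log_t.exp()

# fitting sketch (model, x_val, y_val assumed to exist):
#   calib = ClampingCalibrator((1, 28, 28))
#   opt = torch.optim.Adam(calib.parameters(), lr=1e-2)
#   loss = nn.functional.cross_entropy(calib(model, x_val), y_val)
```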
arXiv Detail & Related papers (2022-09-23T14:18:39Z) - Salvage Reusable Samples from Noisy Data for Robust Learning [70.48919625304]
We propose a reusable sample selection and correction approach, termed CRSSC, for coping with label noise when training deep fine-grained (FG) models with web images.
Our key idea is to additionally identify and correct reusable samples, and then leverage them together with clean examples to update the networks.
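A hedged sketch of the selection-plus-correction idea: keep the small-loss (likely clean) fraction, and relabel confidently misclassified samples instead of discarding them. The thresholds below are illustrative, not the paper's.

```python
import torch

def select_and_relabel(logits, labels, keep_ratio=0.7, conf_thresh=0.95):
    """Return (mask of samples to train on, possibly corrected labels)."""
    losses = torch.nn.functional.cross_entropy(logits, labels, reduction="none")
    keep = losses.argsort()[: int(keep_ratio * len(labels))]   # likely-clean set
    conf, pred = logits.softmax(dim=1).max(dim=1)
    reusable = (conf > conf_thresh) & (pred != labels)         # confidently mislabeled
    new_labels = labels.clone()
    new_labels[reusable] = pred[reusable]                      # correct, don't discard
    mask = torch.zeros_like(labels, dtype=torch.bool)
    mask[keep] = True
    return mask | reusable, new_labels
```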
arXiv Detail & Related papers (2020-08-06T02:07:21Z) - Compressive sensing with un-trained neural networks: Gradient descent finds the smoothest approximation [60.80172153614544]
Un-trained convolutional neural networks have emerged as highly successful tools for image recovery and restoration.
We show that an un-trained convolutional neural network can approximately reconstruct signals and images that are sufficiently structured, from a near minimal number of random measurements.
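The setup compresses into a short script: fix a random input, and run gradient descent on the weights of a small un-trained generator so that its output matches the random measurements; no training data is used, as the architecture itself supplies the prior. A minimal sketch with a synthetic smooth signal (architecture and step counts are illustrative assumptions):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n, m = 32 * 32, 200
A = torch.randn(m, n) / m ** 0.5                      # random Gaussian measurements

# a smooth synthetic "image" stands in for the structured signal
g = torch.linspace(-1, 1, 32)
x_true = torch.exp(-(g[:, None] ** 2 + g[None, :] ** 2) / 0.2).reshape(-1)
y = A @ x_true                                        # undersampled observations

gen = nn.Sequential(                                  # small un-trained generator
    nn.Upsample(scale_factor=2), nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1))
z = torch.randn(1, 8, 16, 16)                         # fixed random input

opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
for step in range(2000):                              # descent on weights only
    loss = ((A @ gen(z).reshape(-1) - y) ** 2).mean() # fit the measurements alone
    opt.zero_grad(); loss.backward(); opt.step()

print("relative recovery error:",
      ((gen(z).reshape(-1) - x_true).norm() / x_true.norm()).item())
```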
arXiv Detail & Related papers (2020-05-07T15:57:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.