What do neural networks learn in image classification? A frequency
shortcut perspective
- URL: http://arxiv.org/abs/2307.09829v2
- Date: Wed, 30 Aug 2023 10:19:02 GMT
- Title: What do neural networks learn in image classification? A frequency
shortcut perspective
- Authors: Shunxin Wang, Raymond Veldhuis, Christoph Brune, Nicola Strisciuglio
- Abstract summary: This study empirically investigates the learning dynamics of frequency shortcuts in neural networks (NNs).
We show that NNs tend to find simple solutions for classification, and what they learn first during training depends on the most distinctive frequency characteristics.
We propose a metric to measure class-wise frequency characteristics and a method to identify frequency shortcuts.
- Score: 3.9858496473361402
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Frequency analysis is useful for understanding the mechanisms of
representation learning in neural networks (NNs). Most research in this area
focuses on the learning dynamics of NNs for regression tasks, while little
addresses classification. This study empirically investigates the latter and expands the
understanding of frequency shortcuts. First, we perform experiments on
synthetic datasets, designed to have a bias in different frequency bands. Our
results demonstrate that NNs tend to find simple solutions for classification,
and what they learn first during training depends on the most distinctive
frequency characteristics, which can be either low- or high-frequencies.
Second, we confirm this phenomenon on natural images. We propose a metric to
measure class-wise frequency characteristics and a method to identify frequency
shortcuts. The results show that frequency shortcuts can be texture-based or
shape-based, depending on what best simplifies the objective. Third, we
validate the transferability of frequency shortcuts on out-of-distribution
(OOD) test sets. Our results suggest that frequency shortcuts can be
transferred across datasets and cannot be fully avoided by larger model
capacity and data augmentation. We recommend that future research focus on
effective training schemes that mitigate frequency shortcut learning.
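
The abstract describes a class-wise frequency metric and a procedure for identifying frequency shortcuts only at a high level. The snippet below is a minimal, illustrative sketch of that general idea, not the authors' actual metric or identification method: it averages Fourier amplitude spectra per class and re-evaluates a trained classifier on band-limited test images. The names `model`, `loader`, `num_classes`, and `image_size` are placeholders assumed for illustration.

```python
import numpy as np
import torch


def classwise_spectra(loader, num_classes, image_size):
    """Average centered Fourier amplitude spectrum per class.

    A rough stand-in for a 'class-wise frequency characteristics' metric:
    classes whose average spectra stand out in a particular band are
    candidates for band-specific shortcuts.
    """
    sums = np.zeros((num_classes, image_size, image_size))
    counts = np.zeros(num_classes)
    for images, labels in loader:                      # images: (B, C, H, W), CPU tensors
        gray = images.mean(dim=1).numpy()              # collapse colour channels
        spec = np.abs(np.fft.fftshift(np.fft.fft2(gray), axes=(-2, -1)))
        for s, y in zip(spec, labels.numpy()):
            sums[y] += s
            counts[y] += 1
    return sums / np.maximum(counts, 1)[:, None, None]


def radial_band_mask(size, r_low, r_high):
    """Boolean mask keeping frequencies whose radius lies in [r_low, r_high)."""
    freqs = np.fft.fftshift(np.fft.fftfreq(size))
    fy, fx = np.meshgrid(freqs, freqs, indexing="ij")
    radius = np.sqrt(fx ** 2 + fy ** 2)                # 0 (DC) up to ~0.71 (corners)
    return (radius >= r_low) & (radius < r_high)


def band_limited_accuracy(model, loader, mask, device="cpu"):
    """Accuracy when test images are reduced to a single frequency band.

    If a narrow band alone preserves most of the clean accuracy, the model is
    likely relying on that band, i.e. on a frequency shortcut.
    """
    mask_t = torch.from_numpy(mask.astype(np.complex64))
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in loader:
            spec = torch.fft.fftshift(torch.fft.fft2(images), dim=(-2, -1))
            filtered = torch.fft.ifft2(
                torch.fft.ifftshift(spec * mask_t, dim=(-2, -1))
            ).real
            preds = model(filtered.to(device)).argmax(dim=1).cpu()
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total
```

Under these assumptions, comparing `band_limited_accuracy` for several `radial_band_mask` ranges against clean accuracy gives a rough picture of which frequency bands a classifier actually relies on; a narrow band that alone preserves most of the accuracy is a candidate frequency shortcut.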
Related papers
- Towards Combating Frequency Simplicity-biased Learning for Domain Generalization [36.777767173275336]
Domain generalization methods aim to learn transferable knowledge from source domains that can generalize well to unseen target domains.
Recent studies show that neural networks frequently suffer from a simplicity-biased learning behavior which leads to over-reliance on specific frequency sets.
We propose two effective data augmentation modules designed to collaboratively and adaptively adjust the frequency characteristics of the dataset.
arXiv Detail & Related papers (2024-10-21T16:17:01Z)
- Frequency-Aware Deepfake Detection: Improving Generalizability through Frequency Space Learning [81.98675881423131]
This research addresses the challenge of developing a universal deepfake detector that can effectively identify unseen deepfake images.
Existing frequency-based paradigms have relied on frequency-level artifacts introduced during the up-sampling in GAN pipelines to detect forgeries.
We introduce a novel frequency-aware approach called FreqNet, centered around frequency domain learning, specifically designed to enhance the generalizability of deepfake detectors.
arXiv Detail & Related papers (2024-03-12T01:28:00Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- A Scalable Walsh-Hadamard Regularizer to Overcome the Low-degree Spectral Bias of Neural Networks [79.28094304325116]
Despite the capacity of neural nets to learn arbitrary functions, models trained through gradient descent often exhibit a bias towards "simpler" functions.
We show how this spectral bias towards low-degree frequencies can in fact hurt the neural network's generalization on real-world datasets.
We propose a new scalable functional regularization scheme that aids the neural network to learn higher degree frequencies.
arXiv Detail & Related papers (2023-05-16T20:06:01Z)
- Properties and Potential Applications of Random Functional-Linked Types of Neural Networks [81.56822938033119]
Random functional-linked neural networks (RFLNNs) offer an alternative way of learning within a deep structure.
This paper gives some insights into the properties of RFLNNs from the viewpoint of the frequency domain.
We propose a method to generate a BLS network with better performance, and design an efficient algorithm for solving Poisson's equation.
arXiv Detail & Related papers (2023-04-03T13:25:22Z)
- Frequency Dropout: Feature-Level Regularization via Randomized Filtering [24.53978165468098]
Deep convolutional neural networks are susceptible to picking up spurious correlations from the training signal.
We propose a training strategy, Frequency Dropout, to prevent convolutional neural networks from learning frequency-specific imaging features (a rough sketch of this idea appears after this list).
Our results suggest that the proposed approach not only improves predictive accuracy but also improves robustness against domain shift.
arXiv Detail & Related papers (2022-09-20T16:42:21Z)
- Understanding robustness and generalization of artificial neural networks through Fourier masks [8.94889125739046]
Recent literature suggests that robust networks with good generalization properties tend to be biased towards processing low frequencies in images.
We develop an algorithm that allows us to learn modulatory masks highlighting the essential input frequencies needed for preserving a trained network's performance (a rough sketch of this idea also appears after this list).
arXiv Detail & Related papers (2022-03-16T17:32:00Z)
- The Spectral Bias of Polynomial Neural Networks [63.27903166253743]
Polynomial neural networks (PNNs) have been shown to be particularly effective at image generation and face recognition, where high-frequency information is critical.
Previous studies have revealed that neural networks demonstrate a spectral bias towards low-frequency functions, which yields faster learning of low-frequency components during training.
Inspired by such studies, we conduct a spectral analysis of the Neural Tangent Kernel (NTK) of PNNs.
We find that the Π-Net family, i.e., a recently proposed parametrization of PNNs, speeds up the learning of higher frequencies.
arXiv Detail & Related papers (2022-02-27T23:12:43Z)
- Rethinking Nearest Neighbors for Visual Classification [56.00783095670361]
k-NN is a lazy learning method that aggregates the distances between the test image and its top-k neighbors in a training set.
We adopt k-NN with pre-trained visual representations produced by either supervised or self-supervised methods in two steps.
Via extensive experiments on a wide range of classification tasks, our study reveals the generality and flexibility of k-NN integration.
arXiv Detail & Related papers (2021-12-15T20:15:01Z)
- Robust Learning with Frequency Domain Regularization [1.370633147306388]
We introduce a new regularization method that constrains the frequency spectra of the model's filters.
We demonstrate the effectiveness of our regularization by (1) defending against adversarial perturbations; (2) reducing the generalization gap across different architectures; and (3) improving generalization in transfer learning scenarios without fine-tuning.
arXiv Detail & Related papers (2020-07-07T07:29:20Z)
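
As referenced in the Frequency Dropout entry above, that paper describes feature-level regularization via randomized filtering. The module below is a rough, generic sketch of such an idea, under the assumption that "randomized filtering" means randomly blurring feature maps during training; it is not the paper's implementation, and the probability and kernel widths are invented for illustration.

```python
import random

import torch
import torch.nn as nn
import torch.nn.functional as F


class RandomizedFiltering(nn.Module):
    """Feature-level regularization in the spirit of 'Frequency Dropout'.

    With probability `p`, feature maps are smoothed with a Gaussian kernel of
    randomly chosen width, discouraging reliance on narrow, frequency-specific
    features. Acts as the identity at evaluation time.
    """

    def __init__(self, p=0.5, sigmas=(0.5, 1.0, 2.0), kernel_size=5):
        super().__init__()
        self.p = p
        self.sigmas = sigmas
        self.kernel_size = kernel_size

    def _gaussian_kernel(self, sigma, device, dtype):
        half = self.kernel_size // 2
        coords = torch.arange(-half, half + 1, device=device, dtype=dtype)
        g = torch.exp(-(coords ** 2) / (2.0 * sigma ** 2))
        g = g / g.sum()
        return torch.outer(g, g)                        # separable (k, k) Gaussian

    def forward(self, x):                               # x: (B, C, H, W)
        if not self.training or random.random() > self.p:
            return x
        sigma = random.choice(self.sigmas)
        kernel = self._gaussian_kernel(sigma, x.device, x.dtype)
        channels = x.shape[1]
        weight = kernel.repeat(channels, 1, 1, 1)       # one depthwise kernel per channel
        return F.conv2d(x, weight, padding=self.kernel_size // 2, groups=channels)
```

A module like this would typically sit after selected convolutional blocks and be active only during training.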
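
Similarly, the Fourier-masks entry above describes learning modulatory masks that highlight the input frequencies a trained network needs. The sketch below shows one plausible parameterization, a sigmoid-gated mask on the shifted Fourier spectrum trained with a sparsity penalty while the classifier stays frozen; it is an assumption-laden illustration, not the algorithm from that paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FourierMask(nn.Module):
    """Learnable modulatory mask over input frequencies.

    The sigmoid-gated mask scales the (shifted) 2D Fourier spectrum of the
    input before it is transformed back and fed to a frozen classifier.
    """

    def __init__(self, height, width):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(height, width))  # sigmoid(0) = 0.5

    def forward(self, x):                                # x: (B, C, H, W)
        mask = torch.sigmoid(self.logits)
        spec = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
        masked = torch.fft.ifft2(
            torch.fft.ifftshift(spec * mask, dim=(-2, -1))
        ).real
        return masked, mask


def mask_objective(class_logits, labels, mask, sparsity_weight=1e-3):
    """Keep the frozen classifier accurate while pushing the mask towards zero,
    so that the frequencies which survive are the 'essential' ones."""
    return F.cross_entropy(class_logits, labels) + sparsity_weight * mask.mean()
```

Training would update only `FourierMask.logits` while the classifier's weights stay fixed; frequencies whose mask values remain high afterwards are the ones the network appears to need.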
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.