Fourier Sensitivity and Regularization of Computer Vision Models
- URL: http://arxiv.org/abs/2301.13514v1
- Date: Tue, 31 Jan 2023 10:05:35 GMT
- Title: Fourier Sensitivity and Regularization of Computer Vision Models
- Authors: Kiran Krishnamachari, See-Kiong Ng, Chuan-Sheng Foo
- Abstract summary: We study the frequency sensitivity characteristics of deep neural networks using a principled approach.
We find that computer vision models are consistently sensitive to particular frequencies dependent on the dataset, training method and architecture.
- Score: 11.79852671537969
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Recent work has empirically shown that deep neural networks latch on to the
Fourier statistics of training data and show increased sensitivity to
Fourier-basis directions in the input. Understanding and modifying this
Fourier-sensitivity of computer vision models may help improve their
robustness. Hence, in this paper we study the frequency sensitivity
characteristics of deep neural networks using a principled approach. We first
propose a basis trick, proving that unitary transformations of the
input-gradient of a function can be used to compute its gradient in the basis
induced by the transformation. Using this result, we propose a general measure
of any differentiable model's Fourier-sensitivity using the unitary
Fourier-transform of its input-gradient. When applied to deep neural networks,
we find that computer vision models are consistently sensitive to particular
frequencies dependent on the dataset, training method and architecture. Based
on this measure, we further propose a Fourier-regularization framework to
modify the Fourier-sensitivities and frequency bias of models. Using our
proposed regularizer-family, we demonstrate that deep neural networks obtain
improved classification accuracy on robustness evaluations.
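To make the proposed measure concrete, here is a minimal PyTorch sketch of the idea, assuming a standard classifier and cross-entropy loss: by the basis trick, the gradient in the Fourier basis is the unitary DFT of the input-gradient, so its magnitude spectrum gives per-frequency sensitivity, and a penalty on selected bands illustrates the regularization. Function names and the band mask are illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F

def fourier_sensitivity(model, x, y):
    """Per-frequency sensitivity of `model` at inputs `x` of shape (B, C, H, W).

    Sketch of the paper's idea: the gradient of the loss in the Fourier basis
    is the unitary DFT of the input-gradient (the "basis trick"), so its
    magnitude spectrum measures frequency sensitivity."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    # create_graph=True keeps the input-gradient differentiable, so a
    # penalty built from it can itself be backpropagated during training.
    (grad,) = torch.autograd.grad(loss, x, create_graph=True)
    # norm="ortho" makes the DFT unitary, as the measure requires.
    return torch.fft.fft2(grad, norm="ortho").abs()

def fourier_penalty(spectrum, band_mask, eps=1e-12):
    """Illustrative regularizer: fraction of input-gradient energy falling
    in the frequencies selected by `band_mask` (an assumed 0/1 tensor)."""
    energy = spectrum.pow(2)
    return (energy * band_mask).sum() / (energy.sum() + eps)
```

Training would add `lam * fourier_penalty(...)` to the task loss to push sensitivity out of (or into) chosen bands; the exact form of the paper's regularizer family differs, and this only illustrates the mechanism.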
Related papers
- From Fourier to Neural ODEs: Flow Matching for Modeling Complex Systems [20.006163951844357]
We propose a simulation-free framework for training neural ordinary differential equations (NODEs).
We employ Fourier analysis to estimate temporal and potentially high-order spatial gradients from noisy observational data; a spectral-differentiation sketch follows this entry.
Our approach outperforms state-of-the-art methods in terms of training time, dynamics prediction, and robustness.
arXiv Detail & Related papers (2024-05-19T13:15:23Z)
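The Fourier-based gradient estimation mentioned above can be sketched in a few lines of NumPy. This is a generic spectral-differentiation example for a periodic 1-D signal, with an assumed crude low-pass cutoff for noise, not the authors' implementation.

```python
import numpy as np

def spectral_derivative(u, dx, order=1):
    """Estimate d^order u / dx^order of a periodic, uniformly sampled signal
    via the FFT: differentiation is multiplication by (i*k)**order in
    frequency space. Zeroing high modes crudely suppresses noise."""
    n = u.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)   # angular wavenumbers
    U = np.fft.fft(u)
    U[np.abs(k) > 0.5 * np.abs(k).max()] = 0  # assumed low-pass cutoff
    return np.real(np.fft.ifft((1j * k) ** order * U))

# Example: the derivative of noisy sin(x) on [0, 2*pi) approximates cos(x).
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
du = spectral_derivative(np.sin(x) + 0.01 * np.random.randn(256), x[1] - x[0])
```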
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Neural Network-Based Score Estimation in Diffusion Models: Optimization and Generalization [12.812942188697326]
Diffusion models have emerged as a powerful tool rivaling GANs in generating high-quality samples with improved fidelity, flexibility, and robustness.
A key component of these models is to learn the score function through score matching.
Despite empirical success on various tasks, it remains unclear whether gradient-based algorithms can learn the score function with provable accuracy; a minimal score matching sketch follows this entry.
arXiv Detail & Related papers (2024-01-28T08:13:56Z)
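Score matching as referenced here is commonly instantiated as denoising score matching; the sketch below shows that generic objective (the network name `score_net`, the flat (batch, dim) inputs, and the single noise level are assumptions, and the paper's theoretical setting differs in detail).

```python
import torch

def denoising_score_matching_loss(score_net, x, sigma=0.1):
    """Generic denoising score matching: for x_noisy = x + sigma * z with
    z ~ N(0, I), the score of the Gaussian perturbation kernel is
    -(x_noisy - x) / sigma**2 = -z / sigma, so the network is regressed
    onto that target."""
    z = torch.randn_like(x)
    x_noisy = x + sigma * z
    target = -z / sigma
    return ((score_net(x_noisy) - target) ** 2).sum(dim=1).mean()
```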
- A Scalable Walsh-Hadamard Regularizer to Overcome the Low-degree Spectral Bias of Neural Networks [79.28094304325116]
Despite the capacity of neural nets to learn arbitrary functions, models trained through gradient descent often exhibit a bias towards "simpler" functions.
We show how this spectral bias towards low-degree frequencies can in fact hurt the neural network's generalization on real-world datasets.
We propose a new scalable functional regularization scheme that helps the neural network learn higher-degree frequencies; a Walsh-Hadamard sketch follows this entry.
arXiv Detail & Related papers (2023-05-16T20:06:01Z)
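"Degree" here refers to the Walsh-Hadamard (Boolean Fourier) expansion of functions on the hypercube; below is a minimal fast Walsh-Hadamard transform for intuition, not the paper's scalable regularizer.

```python
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform of a length-2**n vector. Coefficient k
    has Boolean 'degree' equal to the number of set bits in k, so
    low-degree coefficients sit at indices with few 1-bits."""
    a = np.asarray(a, dtype=float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a / np.sqrt(len(a))  # orthonormal normalization

degree = lambda k: bin(k).count("1")  # degree of coefficient index k
```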
- Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth [92.25666446274188]
Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions.
We first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis.
We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth; a sketch of such a layer follows this entry.
arXiv Detail & Related papers (2022-11-26T07:41:48Z)
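For intuition, here is a sinusoidal layer with an explicit frequency scale omega0, in the spirit of the bandwidth parametrization discussed above; the initialization shown is the common SIREN-style scheme for hidden layers, and the paper's exact parametrization may differ.

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """y = sin(omega0 * (W x + b)). Larger omega0 widens the bandwidth of
    the induced kernel; smaller omega0 yields a lower-pass network."""
    def __init__(self, in_dim, out_dim, omega0=30.0):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.omega0 = omega0
        # SIREN-style uniform init keeps activations well-scaled.
        with torch.no_grad():
            bound = (6 / in_dim) ** 0.5 / omega0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega0 * self.linear(x))
```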
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Functional Regularization for Reinforcement Learning via Learned Fourier Features [98.90474131452588]
We propose a simple architecture for deep reinforcement learning by embedding inputs into a learned Fourier basis.
We show that it improves the sample efficiency of both state-based and image-based RL; a Fourier-feature sketch follows this entry.
arXiv Detail & Related papers (2021-12-06T18:59:52Z)
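A learned Fourier-feature embedding of this kind can be sketched as a trainable sine/cosine projection; the class below is a generic version (the scale parameter and shapes are assumptions, not the paper's exact architecture).

```python
import torch
import torch.nn as nn

class LearnedFourierFeatures(nn.Module):
    """Embed inputs as [cos(2*pi*Bx), sin(2*pi*Bx)] with a trainable
    frequency matrix B, smoothing the function the downstream MLP fits."""
    def __init__(self, in_dim, n_features, scale=1.0):
        super().__init__()
        self.B = nn.Parameter(scale * torch.randn(in_dim, n_features))

    def forward(self, x):                     # x: (batch, in_dim)
        proj = 2 * torch.pi * (x @ self.B)    # (batch, n_features)
        return torch.cat([proj.cos(), proj.sin()], dim=-1)
```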
- Learning Frequency Domain Approximation for Binary Neural Networks [68.79904499480025]
We propose to estimate the gradient of the sign function in the Fourier frequency domain using a combination of sine functions for training BNNs; a truncated-series sketch follows this entry.
Experiments on several benchmark datasets and neural architectures show that binary networks trained with our method achieve state-of-the-art accuracy.
arXiv Detail & Related papers (2021-03-01T08:25:26Z)
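The combination of sine functions mentioned above has the flavor of the square-wave Fourier series, which equals sign(x) on (-1, 1); a truncated version is sketched below (the paper's frequency-domain estimator differs in detail).

```python
import torch

def sign_fourier(x, n_terms=10):
    """Truncated Fourier series of the period-2 square wave, which equals
    sign(x) on (-1, 1): (4/pi) * sum_k sin((2k+1)*pi*x) / (2k+1). Its
    analytic derivative can stand in for the almost-everywhere-zero
    gradient of sign() when training binary networks."""
    out = torch.zeros_like(x)
    for k in range(n_terms):
        n = 2 * k + 1
        out = out + torch.sin(n * torch.pi * x) / n
    return 4 / torch.pi * out
```

In a straight-through-style setup, one would use sign(x) in the forward pass and the derivative of this series in the backward pass.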
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.