The Fourier Loss Function
- URL: http://arxiv.org/abs/2102.02979v1
- Date: Fri, 5 Feb 2021 03:19:44 GMT
- Title: The Fourier Loss Function
- Authors: Gennaro Auricchio, Andrea Codegoni, Stefano Gualandi, Lorenzo Zambon
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper introduces a new loss function induced by the Fourier-based
Metric. This metric is equivalent to the Wasserstein distance but is computed
very efficiently using the Fast Fourier Transform algorithm. We prove that the
Fourier loss function is twice differentiable, and we provide the explicit
formula for both its gradient and its Hessian matrix. More importantly, we show
that minimising the Fourier loss function is equivalent to maximising the
likelihood of the data under Gaussian noise in the space of frequencies. We
apply our loss function to a multi-class classification task using MNIST,
Fashion-MNIST, and CIFAR10 datasets. The computational results show that, while
its accuracy is competitive with other state-of-the-art loss functions, the
Fourier loss function is significantly more robust to noisy data.
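A minimal sketch of such a frequency-domain loss is given below. The 1/|xi|^2 weighting (an H^{-1}-type norm comparable to the Wasserstein distance for distributions on a common grid) and the handling of the zero frequency are assumptions for illustration; the paper's exact truncation and weights are not reproduced here.

```python
import numpy as np

def fourier_loss(p, q, eps=1e-12):
    """Sketch of a Fourier-based loss between two discrete probability
    vectors p and q: a weighted squared distance between their DFTs,
    computed in O(n log n) via the FFT. The 1/|xi|^2 weighting is an
    assumption; the paper's exact weights may differ.
    """
    p_hat = np.fft.fft(p)
    q_hat = np.fft.fft(q)
    xi = np.fft.fftfreq(len(p), d=1.0 / len(p))      # integer frequencies
    # Drop the DC term: both transforms equal 1 there for probability vectors.
    w = np.where(xi == 0, 0.0, 1.0 / (xi**2 + eps))
    return float(np.sum(w * np.abs(p_hat - q_hat) ** 2))

# Usage: compare a softmax output against a one-hot target.
pred = np.array([0.1, 0.6, 0.2, 0.1])
target = np.array([0.0, 1.0, 0.0, 0.0])
print(fourier_loss(pred, target))
```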
Related papers
- Trigonometric Quadrature Fourier Features for Scalable Gaussian Process Regression
Quadrature Fourier Features (QFF) have gained popularity in recent years due to their improved approximation accuracy and better calibrated uncertainty estimates.
A key limitation of QFF is that its performance can suffer from well-known pathologies related to highly oscillatory quadrature.
We address this critical issue via a new Trigonometric Quadrature Fourier Feature (TQFF) method, which uses a novel non-Gaussian quadrature rule.
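For context, the sketch below shows the plain Gauss-Hermite quadrature Fourier feature construction that TQFF builds on, for a one-dimensional RBF kernel; the paper's trigonometric quadrature rule itself is not reproduced, and all parameter values are illustrative.

```python
import numpy as np

def qff_features(x, num_nodes=16, lengthscale=1.0):
    """Deterministic quadrature Fourier features for the 1D RBF kernel
    k(x, y) = exp(-(x - y)^2 / (2 * lengthscale^2)). Uses Gauss-Hermite
    nodes and weights for the Gaussian spectral density (plain QFF,
    not the paper's trigonometric rule).
    """
    t, w = np.polynomial.hermite.hermgauss(num_nodes)  # nodes, weights
    s = np.sqrt(2.0) * t / lengthscale                 # quadrature frequencies
    scale = np.sqrt(w / np.sqrt(np.pi))
    x = np.asarray(x).reshape(-1, 1)
    # Pairing cos and sin features makes phi(x) . phi(y) a quadrature
    # approximation of E[cos(s (x - y))] = k(x, y).
    return np.hstack([scale * np.cos(x * s), scale * np.sin(x * s)])

x, y = np.array([0.3]), np.array([0.8])
approx = (qff_features(x) @ qff_features(y).T).item()
exact = np.exp(-((0.3 - 0.8) ** 2) / 2.0)
print(approx, exact)  # the two values should agree closely
```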
arXiv Detail & Related papers (2023-10-23T03:53:09Z)
- Fourier Continuation for Exact Derivative Computation in Physics-Informed Neural Operators
The Physics-Informed Neural Operator (PINO) is a machine learning architecture that has shown promising empirical results for learning partial differential equations.
We present an architecture that leverages Fourier continuation (FC) to apply the exact gradient method to PINO for nonperiodic problems.
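As a toy illustration of the continuation idea (not the FC(Gram) construction used in the paper), the sketch below mirror-extends a nonperiodic signal into a periodic one before applying an FFT-based derivative; the even extension still has kinks at the boundaries, so accuracy there remains limited.

```python
import numpy as np

def fft_derivative_mirrored(f, dx):
    """FFT derivative of nonperiodic samples f after an even mirror
    extension. A crude stand-in for Fourier continuation: the extension
    removes the wrap-around jump, but its boundary kinks still limit
    accuracy there (the paper's construction is much smoother).
    """
    g = np.concatenate([f, f[-2:0:-1]])   # even extension, length 2n - 2
    k = 2j * np.pi * np.fft.fftfreq(len(g), d=dx)
    return np.real(np.fft.ifft(k * np.fft.fft(g)))[:len(f)]

x = np.linspace(0.0, 1.0, 129)
f = np.exp(x)                             # nonperiodic on [0, 1]
err = np.abs(fft_derivative_mirrored(f, x[1] - x[0]) - f)  # d/dx e^x = e^x
print(err[20:-20].max(), err.max())       # boundary error dominates
```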
arXiv Detail & Related papers (2022-11-29T06:37:54Z)
- Fourier Disentangled Space-Time Attention for Aerial Video Recognition
We present an algorithm, Fourier Activity Recognition (FAR), for UAV video activity recognition.
Our formulation uses a novel Fourier object disentanglement method to innately separate out the human agent from the background.
We have evaluated our approach on multiple UAV datasets including UAV Human RGB, UAV Human Night, Drone Action, and NEC Drone.
arXiv Detail & Related papers (2022-03-21T01:24:53Z)
- Factorized Fourier Neural Operators
The Factorized Fourier Neural Operator (F-FNO) is a learning-based method for simulating partial differential equations.
We show that our model maintains an error rate of 2% while still running an order of magnitude faster than a numerical solver.
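The heart of any FNO variant is a convolution performed in the frequency domain. The minimal one-dimensional, single-channel sketch below keeps only the lowest modes and multiplies them by learned complex weights; the factorization and weight sharing that distinguish the F-FNO are omitted.

```python
import numpy as np

def spectral_conv_1d(x, weights):
    """One FNO-style spectral convolution: transform, act on the lowest
    modes with learned complex weights, transform back. The F-FNO
    factorizes these weights over dimensions; this sketch keeps a
    single dense set of per-mode weights.
    """
    modes = weights.shape[0]
    x_hat = np.fft.rfft(x)                      # (n // 2 + 1,) spectrum
    out_hat = np.zeros_like(x_hat)
    out_hat[:modes] = weights * x_hat[:modes]   # per-mode multiply
    return np.fft.irfft(out_hat, n=len(x))

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
w = rng.standard_normal(16) + 1j * rng.standard_normal(16)  # 16 modes
print(spectral_conv_1d(x, w).shape)  # (64,)
```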
arXiv Detail & Related papers (2021-11-27T03:34:13Z)
- Large-Scale Learning with Fourier Features and Tensor Decompositions
We exploit the tensor product structure of deterministic Fourier features, which enables us to represent the model parameters as a low-rank tensor decomposition.
We demonstrate by means of numerical experiments that our low-rank tensor approach achieves the same performance as the corresponding nonparametric model.
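A sketch of the prediction step under this structure is shown below: deterministic sin/cos features per input dimension, with the coefficient tensor held in rank-R CP form so it is never materialized. Frequencies, rank, and sizes are illustrative assumptions.

```python
import numpy as np

def fourier_features_1d(x, freqs):
    """Deterministic sin/cos features for one input dimension."""
    x = np.asarray(x).reshape(-1, 1)
    return np.hstack([np.cos(x * freqs), np.sin(x * freqs)])

def cp_predict(X, factors, freqs):
    """Prediction with the coefficient tensor in CP (rank-R) form:
    f(x) = sum_r prod_d <factors[d][:, r], phi_d(x_d)>, which equals the
    inner product of the CP tensor with the tensor-product feature map.
    """
    out = 1.0
    for d, U in enumerate(factors):            # U: (2 * len(freqs), R)
        out = out * (fourier_features_1d(X[:, d], freqs) @ U)  # (n, R)
    return out.sum(axis=1)                     # contract over the rank

rng = np.random.default_rng(0)
freqs = np.arange(1, 6, dtype=float)           # 5 frequencies per dim
X = rng.uniform(size=(8, 3))                   # 8 points, 3 dimensions
factors = [rng.standard_normal((10, 4)) * 0.1 for _ in range(3)]  # rank 4
print(cp_predict(X, factors, freqs).shape)     # (8,)
```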
arXiv Detail & Related papers (2021-09-03T14:12:53Z)
- Asymmetric Loss Functions for Learning with Noisy Labels
We propose a new class of loss functions, namely asymmetric loss functions, which are robust to learning with noisy labels for various types of noise.
Experimental results on benchmark datasets demonstrate that asymmetric loss functions can outperform state-of-the-art methods.
arXiv Detail & Related papers (2021-06-06T12:52:48Z)
- Sigma-Delta and Distributed Noise-Shaping Quantization Methods for Random Fourier Features
We prove that our quantized RFFs allow a high accuracy approximation of the underlying kernels.
We show that the quantized RFFs can be further compressed, yielding an excellent trade-off between memory use and accuracy.
Testing our methods on several machine learning tasks, we empirically show that they compare favorably to other state-of-the-art quantization methods in this context.
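The sketch below shows a first-order greedy Sigma-Delta quantizer, the basic recursion such schemes build on, applied to a generic feature vector; the paper's RFF-specific and distributed noise-shaping constructions add further structure not reproduced here.

```python
import numpy as np

def sigma_delta_quantize(y):
    """First-order greedy Sigma-Delta quantization of y to {-1, +1}:
    each step quantizes the running error, pushing the quantization
    noise into a bounded state sequence (|u| <= 1 when |y_i| <= 1).
    A minimal sketch of the recursion, not the paper's full scheme.
    """
    q = np.empty_like(y)
    u = 0.0                       # state / accumulated error
    for i, yi in enumerate(y):
        q[i] = 1.0 if u + yi >= 0 else -1.0
        u = u + yi - q[i]         # noise-shaping recursion
    return q

rng = np.random.default_rng(0)
y = rng.uniform(-1.0, 1.0, size=32)   # e.g. a feature vector to compress
q = sigma_delta_quantize(y)
# Running sums track each other: cumsum(y) - cumsum(q) is the state u.
print(np.max(np.abs(np.cumsum(y) - np.cumsum(q))))  # stays <= 1
```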
arXiv Detail & Related papers (2021-06-04T17:24:47Z)
- Learning Set Functions that are Sparse in Non-Orthogonal Fourier Bases
We present a new family of algorithms for learning Fourier-sparse set functions.
In contrast to other work that focused on the Walsh-Hadamard transform, our novel algorithms operate with recently introduced non-orthogonal Fourier transforms.
We demonstrate effectiveness on several real-world applications.
arXiv Detail & Related papers (2020-10-01T14:31:59Z)
- Fast Partial Fourier Transform
The fast Fourier transform (FFT) is a widely used algorithm for computing the discrete Fourier transform in many machine learning applications.
Despite its pervasive use, no known FFT algorithm lets the user restrict the computation to only the coefficients that are actually needed.
In this paper, we propose a fast Partial Fourier Transform (PFT), a careful modification of the Cooley-Tukey algorithm that enables one to specify an arbitrary consecutive range where the coefficients should be computed.
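To make that interface concrete, the sketch below evaluates a consecutive range of DFT coefficients directly, at O(Nm) cost for m coefficients; the paper's PFT reorganizes Cooley-Tukey to produce the same range faster, which is not reproduced here.

```python
import numpy as np

def partial_dft(x, k_start, k_end):
    """Directly evaluate DFT coefficients X[k] for k in [k_start, k_end).

    A naive O(N * m) baseline for the interface the PFT paper targets:
    X[k] = sum_n x[n] * exp(-2j * pi * k * n / N).
    """
    n = len(x)
    k = np.arange(k_start, k_end).reshape(-1, 1)
    twiddle = np.exp(-2j * np.pi * k * np.arange(n) / n)
    return twiddle @ x

x = np.random.default_rng(0).standard_normal(64)
np.testing.assert_allclose(partial_dft(x, 10, 20), np.fft.fft(x)[10:20],
                           atol=1e-9)  # matches the full FFT on that range
```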
arXiv Detail & Related papers (2020-08-28T10:01:49Z)
- Fourier Neural Networks as Function Approximators and Differential Equation Solvers
The choice of activation and loss function yields a network that closely replicates a Fourier series expansion.
We validate this FNN on naturally periodic smooth functions and on piecewise continuous periodic functions.
The main advantages of the current approach are the validity of the solution outside the training region, interpretability of the trained model, and simplicity of use.
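The sketch below is not the paper's FNN architecture but the expansion it is reported to replicate: a least-squares fit of a truncated Fourier series, here to a piecewise continuous periodic target; the harmonic count and grid are illustrative.

```python
import numpy as np

def fourier_design(x, n_harmonics, period=2 * np.pi):
    """Design matrix [1, cos(k w x), sin(k w x)] for k = 1..n_harmonics."""
    w = 2 * np.pi / period
    k = np.arange(1, n_harmonics + 1)
    return np.hstack([np.ones((len(x), 1)),
                      np.cos(np.outer(x, k * w)),
                      np.sin(np.outer(x, k * w))])

# Fit a truncated Fourier series by least squares to a square wave,
# a piecewise continuous periodic function like those in the paper.
x = np.linspace(0.0, 2 * np.pi, 200)
y = np.sign(np.sin(x))
phi = fourier_design(x, n_harmonics=15)
coef, *_ = np.linalg.lstsq(phi, y, rcond=None)
print(np.mean((phi @ coef - y) ** 2))  # small residual away from the jumps
```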
arXiv Detail & Related papers (2020-05-27T00:30:58Z)