Functional Regularization for Reinforcement Learning via Learned Fourier Features
- URL: http://arxiv.org/abs/2112.03257v1
- Date: Mon, 6 Dec 2021 18:59:52 GMT
- Title: Functional Regularization for Reinforcement Learning via Learned Fourier Features
- Authors: Alexander C. Li, Deepak Pathak
- Abstract summary: We propose a simple architecture for deep reinforcement learning by embedding inputs into a learned Fourier basis.
We show that it improves the sample efficiency of both state-based and image-based RL.
- Score: 98.90474131452588
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a simple architecture for deep reinforcement learning by embedding
inputs into a learned Fourier basis and show that it improves the sample
efficiency of both state-based and image-based RL. We perform infinite-width
analysis of our architecture using the Neural Tangent Kernel and theoretically
show that tuning the initial variance of the Fourier basis is equivalent to
functional regularization of the learned deep network. That is, these learned
Fourier features allow for adjusting the degree to which networks underfit or
overfit different frequencies in the training data, and hence provide a
controlled mechanism to improve the stability and performance of RL
optimization. Empirically, this allows us to prioritize learning low-frequency
functions and speed up learning by reducing networks' susceptibility to noise
in the optimization process, such as during Bellman updates. Experiments on
standard state-based and image-based RL benchmarks show clear benefits of our
architecture over the baselines. Website at
https://alexanderli.com/learned-fourier-features
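To make the idea concrete, here is a minimal sketch of a learned Fourier feature embedding, assuming a PyTorch-style implementation. The class and function names, the sin/cos concatenation, and the `init_scale` hyperparameter (the initial standard deviation of the frequency matrix, i.e. the "initial variance of the Fourier basis" referred to in the abstract) are illustrative assumptions, not the authors' released code.
```python
import torch
import torch.nn as nn

class LearnedFourierFeatures(nn.Module):
    """Embed inputs into a learned Fourier basis: phi(x) = [sin(Bx), cos(Bx)].

    B is trainable; its initialization scale controls which frequencies the
    downstream network fits first (smaller scale -> lower-frequency bias).
    """
    def __init__(self, in_dim: int, n_features: int, init_scale: float = 0.1):
        super().__init__()
        self.B = nn.Parameter(init_scale * torch.randn(n_features, in_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        proj = x @ self.B.t()  # (batch, n_features)
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

def make_trunk(state_dim: int, n_features: int = 128, hidden: int = 256) -> nn.Module:
    """A small network trunk (e.g. for a Q-function) on top of the embedding."""
    return nn.Sequential(
        LearnedFourierFeatures(state_dim, n_features),
        nn.Linear(2 * n_features, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
    )

if __name__ == "__main__":
    trunk = make_trunk(state_dim=17)          # e.g. a MuJoCo-style state vector
    print(trunk(torch.randn(32, 17)).shape)   # torch.Size([32, 256])
```
In this reading, sweeping `init_scale` is the functional-regularization knob: small values bias the network toward smooth, low-frequency functions, while larger values let it fit high-frequency detail (and noise) more quickly.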
Related papers
- Robust Fourier Neural Networks [1.0589208420411014]
We show that introducing a simple diagonal layer after the Fourier embedding layer makes the network more robust to measurement noise.
Under certain conditions, our proposed approach can also learn functions that are noisy mixtures of nonlinear functions of Fourier features.
arXiv Detail & Related papers (2024-09-03T16:56:41Z)
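One plausible reading of the "diagonal layer" mentioned above is a learned per-feature scaling applied right after the Fourier embedding; the sketch below is a generic illustration under that assumption, not the paper's actual code.
```python
import torch
import torch.nn as nn

class DiagonalLayer(nn.Module):
    """Learned diagonal (per-feature) scaling, placed after a Fourier embedding.

    Down-weighting individual Fourier features lets the network suppress
    frequencies that mostly carry measurement noise.
    """
    def __init__(self, n_features: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(n_features))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return z * self.scale  # elementwise, i.e. multiplication by a diagonal matrix
```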
- LeRF: Learning Resampling Function for Adaptive and Efficient Image Interpolation [64.34935748707673]
Recent deep neural networks (DNNs) have made impressive progress in performance by introducing learned data priors.
We propose a novel method of Learning Resampling (termed LeRF) which takes advantage of both the structural priors learned by DNNs and the locally continuous assumption.
LeRF assigns spatially varying resampling functions to input image pixels and learns to predict the shapes of these resampling functions with a neural network.
arXiv Detail & Related papers (2024-07-13T16:09:45Z)
- Towards Explainable Machine Learning: The Effectiveness of Reservoir Computing in Wireless Receive Processing [21.843365090029987]
We investigate the specific task of channel equalization by applying a popular learning-based technique known as Reservoir Computing (RC).
RC has shown superior performance compared to conventional methods and other learning-based approaches.
We also show, through simulations, the resulting improvement in receive processing/symbol detection performance.
arXiv Detail & Related papers (2023-10-08T00:44:35Z)
- Fourier-DeepONet: Fourier-enhanced deep operator networks for full waveform inversion with improved accuracy, generalizability, and robustness [4.186792090302649]
Full waveform inversion (FWI) infers the subsurface structure from waveform data by solving a non-convex optimization problem.
Here, we develop a neural network (Fourier-DeepONet) for FWI with the generalization of sources, including the frequencies and locations of sources.
Our experiments demonstrate that Fourier-DeepONet obtains more accurate predictions of subsurface structures in a wide range of source parameters.
arXiv Detail & Related papers (2023-05-26T22:17:28Z)
- Properties and Potential Applications of Random Functional-Linked Types of Neural Networks [81.56822938033119]
Random functional-linked neural networks (RFLNNs) offer an alternative way of learning in deep structure.
This paper gives some insights into the properties of RFLNNs from the viewpoints of frequency domain.
We propose a method to generate a BLS network with better performance, and design an efficient algorithm for solving Poisson's equation.
arXiv Detail & Related papers (2023-04-03T13:25:22Z)
- Fourier Sensitivity and Regularization of Computer Vision Models [11.79852671537969]
We study the frequency sensitivity characteristics of deep neural networks using a principled approach.
We find that computer vision models are consistently sensitive to particular frequencies dependent on the dataset, training method and architecture.
arXiv Detail & Related papers (2023-01-31T10:05:35Z)
- Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397]
We study deep neural networks designed to harness the structure in frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency domain learning through a single transform: transform once (T1).
arXiv Detail & Related papers (2022-11-26T01:56:05Z)
- Fourier Space Losses for Efficient Perceptual Image Super-Resolution [131.50099891772598]
We show that it is possible to improve the performance of a recently introduced efficient generator architecture solely with the application of our proposed loss functions.
We show that our losses' direct emphasis on the frequencies in Fourier-space significantly boosts the perceptual image quality.
The trained generator achieves results comparable to the state-of-the-art perceptual SR methods RankSRGAN and SRFlow while being 2.4x and 48x faster, respectively.
arXiv Detail & Related papers (2021-06-01T20:34:52Z)
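As a generic illustration of a Fourier-space loss (not the specific losses proposed in that paper), one can penalize the L1 distance between the amplitude spectra of the super-resolved and ground-truth images:
```python
import torch

def fourier_amplitude_l1(sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
    """L1 loss between amplitude spectra of (B, C, H, W) image batches."""
    sr_f = torch.fft.rfft2(sr, norm="ortho")   # complex spectrum of the prediction
    hr_f = torch.fft.rfft2(hr, norm="ortho")   # complex spectrum of the target
    return (sr_f.abs() - hr_f.abs()).abs().mean()
```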
- Learning to Learn Kernels with Variational Random Features [118.09565227041844]
We introduce kernels with random Fourier features in the meta-learning framework to leverage their strong few-shot learning ability.
We formulate the optimization of MetaVRF as a variational inference problem.
We show that MetaVRF delivers much better, or at least competitive, performance compared to existing meta-learning alternatives.
arXiv Detail & Related papers (2020-06-11T18:05:29Z)
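For context, the random Fourier feature map that MetaVRF builds on can be sketched as below (the classic Rahimi-Recht construction for an RBF kernel); the meta-learning and variational components of MetaVRF itself are not shown.
```python
import numpy as np

def random_fourier_features(X: np.ndarray, n_features: int = 256,
                            gamma: float = 1.0, seed: int = 0) -> np.ndarray:
    """Map X of shape (n, d) to z(X) with z(x) @ z(y) ~= exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))  # random frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)                # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```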
- Fourier Neural Networks as Function Approximators and Differential Equation Solvers [0.456877715768796]
The choice of activation and loss function yields results that replicate a Fourier series expansion closely.
We validate this FNN on naturally periodic smooth functions and on piecewise continuous periodic functions.
The main advantages of the current approach are the validity of the solution outside the training region, interpretability of the trained model, and simplicity of use.
arXiv Detail & Related papers (2020-05-27T00:30:58Z)