Neural Architectures Learning Fourier Transforms, Signal Processing and
Much More....
- URL: http://arxiv.org/abs/2308.10388v1
- Date: Sun, 20 Aug 2023 23:30:27 GMT
- Title: Neural Architectures Learning Fourier Transforms, Signal Processing and
Much More....
- Authors: Prateek Verma
- Abstract summary: We show how one can learn kernels from scratch for audio signal processing applications.
We find that the neural architecture not only learns sinusoidal kernel shapes but discovers all kinds of incredible signal-processing properties.
- Score: 1.2328446298523066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This report will explore and answer fundamental questions about
taking Fourier Transforms and tying them to recent advances in AI and neural
architecture. One interpretation of the Fourier Transform is that it decomposes
a signal into its constituent components by projecting it onto complex
exponentials. Variants exist, such as the discrete cosine transform, which does
not operate in the complex domain and projects an input signal onto only cosine
functions oscillating at different frequencies. However, this is a fundamental
limitation and can be suboptimal: all the kernels are fixed and sinusoidal.
What if we could have some kernels adapted or learned according to the problem?
What if we could use neural architectures for this? We
show how one can learn these kernels from scratch for audio signal processing
applications. We find that the neural architecture not only learns sinusoidal
kernel shapes but discovers all kinds of incredible signal-processing
properties. E.g., windowing functions, onset detectors, high pass filters, low
pass filters, modulations, etc. Further, upon analysis of the filters, we find
that the neural architecture has a comb filter-like structure on top of the
learned kernels. Comb filters, which allow harmonic frequencies to pass
through, are one of the core building blocks of traditional signal processing,
alongside high-pass, low-pass, and band-pass filters. Further, the kernels used
in the convolution operation with a signal can also be learned from scratch,
and we will explore papers in the literature that combine this with robust
Transformer architectures. Finally, we also explore making the learned kernels
content adaptive, i.e., learning different kernels for different inputs.
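
To make the core idea concrete, here is a minimal sketch (not code from the paper) of such a learnable front end: a strided 1-D convolution, in PyTorch, whose kernels are initialized to DCT-II cosines so that the model starts out as a fixed cosine transform but is free to adapt its kernels during training. The window length, hop size, and number of kernels are illustrative assumptions.

```python
# Minimal sketch of a learnable Fourier-like front end (assumptions: PyTorch,
# DCT-II initialization, 64 kernels of 400 samples, hop of 160 samples).
import math
import torch
import torch.nn as nn


def dct_basis(num_kernels: int, window: int) -> torch.Tensor:
    """DCT-II basis: each row is a cosine oscillating at a different frequency."""
    n = torch.arange(window, dtype=torch.float32)
    k = torch.arange(num_kernels, dtype=torch.float32).unsqueeze(1)
    return torch.cos(math.pi / window * (n + 0.5) * k)   # (num_kernels, window)


class LearnableFrontEnd(nn.Module):
    """Strided Conv1d whose kernels start as cosines but may adapt freely."""

    def __init__(self, num_kernels: int = 64, window: int = 400, hop: int = 160):
        super().__init__()
        self.conv = nn.Conv1d(1, num_kernels, kernel_size=window,
                              stride=hop, bias=False)
        with torch.no_grad():
            self.conv.weight.copy_(dct_basis(num_kernels, window).unsqueeze(1))

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        # wav: (batch, samples) -> (batch, num_kernels, frames)
        return self.conv(wav.unsqueeze(1))


frontend = LearnableFrontEnd()
feats = frontend(torch.randn(2, 16000))   # two 1-second clips at 16 kHz
print(feats.shape)                        # torch.Size([2, 64, 98])
```

Trained end to end on a downstream task, nothing constrains these kernels to stay sinusoidal, which is how the windowing functions, onset detectors, and high-/low-pass shapes described above can emerge.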
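The content-adaptive variant mentioned at the end of the abstract can be sketched in the same way; here a small, hypothetical hypernetwork predicts a different kernel bank for each input signal. The mean-pooled frame summary and all sizes are assumptions for illustration, not details from the paper.

```python
# Minimal sketch of content-adaptive kernels: a hypothetical hypernetwork maps
# a summary of each waveform to its own bank of convolution kernels.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContentAdaptiveFrontEnd(nn.Module):
    def __init__(self, num_kernels: int = 64, window: int = 400, hop: int = 160):
        super().__init__()
        self.num_kernels, self.window, self.hop = num_kernels, window, hop
        # Hypernetwork: waveform summary -> flattened per-example kernel bank.
        self.kernel_net = nn.Sequential(
            nn.Linear(window, 256), nn.ReLU(),
            nn.Linear(256, num_kernels * window),
        )

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        # wav: (batch, samples)
        batch = wav.shape[0]
        frames = wav.unfold(-1, self.window, self.hop)    # (batch, frames, window)
        summary = frames.mean(dim=1)                      # (batch, window)
        kernels = self.kernel_net(summary).view(batch * self.num_kernels,
                                                1, self.window)
        # A grouped convolution applies each example's own kernels to it alone.
        out = F.conv1d(wav.view(1, batch, -1), kernels,
                       stride=self.hop, groups=batch)
        return out.view(batch, self.num_kernels, -1)      # (batch, kernels, frames)


fe = ContentAdaptiveFrontEnd()
print(fe(torch.randn(2, 16000)).shape)                    # torch.Size([2, 64, 98])
```

The grouped-convolution trick is just one way to apply per-example kernels in a single batched call; looping over the batch would work equally well.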
Related papers
- Towards Signal Processing In Large Language Models [46.76681147411957]
This paper introduces the idea of applying signal processing inside a Large Language Model (LLM)
We draw parallels between classical Fourier-Transforms and Fourier Transform-like learnable time-frequency representations.
We show that for GPT-like architectures, our work achieves faster convergence and significantly increases performance.
arXiv Detail & Related papers (2024-06-10T13:51:52Z)
- As large as it gets: Learning infinitely large Filters via Neural Implicit Functions in the Fourier Domain [22.512062422338914]
Recent work in neural networks for image classification has seen a strong tendency towards increasing the spatial context.
We propose a module for studying the effective filter size of convolutional neural networks.
Our analysis shows that, although the proposed networks could learn very large convolution kernels, the learned filters are well localized and relatively small in practice.
arXiv Detail & Related papers (2023-07-19T14:21:11Z)
- Content Adaptive Front End For Audio Signal Processing [2.8935588665357077]
We propose a learnable content adaptive front end for audio signal processing.
We pass each audio signal through a bank of convolutional filters, each giving a fixed-dimensional vector.
arXiv Detail & Related papers (2023-03-18T16:09:10Z)
- Neural Fourier Filter Bank [18.52741992605852]
We present a novel method to provide efficient and highly detailed reconstructions.
Inspired by wavelets, we learn a neural field that decomposes the signal both spatially and frequency-wise.
arXiv Detail & Related papers (2022-12-04T03:45:08Z)
- Functional Regularization for Reinforcement Learning via Learned Fourier Features [98.90474131452588]
We propose a simple architecture for deep reinforcement learning by embedding inputs into a learned Fourier basis.
We show that it improves the sample efficiency of both state-based and image-based RL.
arXiv Detail & Related papers (2021-12-06T18:59:52Z)
- DeepA: A Deep Neural Analyzer For Speech And Singing Vocoding [71.73405116189531]
We propose a neural vocoder that extracts F0 and timbre/aperiodicity encodings from the input speech, emulating those defined in conventional vocoders.
As the deep neural analyzer is learnable, it is expected to be more accurate for signal reconstruction and manipulation, and generalizable from speech to singing.
arXiv Detail & Related papers (2021-10-13T01:39:57Z)
- Large Scale Audio Understanding without Transformers/ Convolutions/ BERTs/ Mixers/ Attention/ RNNs or .... [4.594159253008448]
This paper presents a way of doing large scale audio understanding without traditional state of the art neural architectures.
Our approach does not have any convolutions, recurrence, attention, transformers or other approaches such as BERT.
A classification head (a feed-forward layer), similar to the approach in SimCLR, is trained on a learned representation.
arXiv Detail & Related papers (2021-10-07T05:00:26Z)
- Graph Neural Networks with Adaptive Frequency Response Filter [55.626174910206046]
We develop a graph neural network framework AdaGNN with a well-smooth adaptive frequency response filter.
We empirically validate the effectiveness of the proposed framework on various benchmark datasets.
arXiv Detail & Related papers (2021-04-26T19:31:21Z)
- Neural Granular Sound Synthesis [53.828476137089325]
Granular sound synthesis is a popular audio generation technique based on rearranging sequences of small waveform windows.
We show that generative neural networks can implement granular synthesis while alleviating most of its shortcomings.
arXiv Detail & Related papers (2020-08-04T08:08:00Z)
- Spectral Learning on Matrices and Tensors [74.88243719463053]
We show that tensor decomposition can pick up latent effects that are missed by matrix methods.
We also outline computational techniques to design efficient tensor decomposition methods.
arXiv Detail & Related papers (2020-04-16T22:53:00Z)