Convolutional Spectral Kernel Learning
- URL: http://arxiv.org/abs/2002.12744v1
- Date: Fri, 28 Feb 2020 14:35:54 GMT
- Title: Convolutional Spectral Kernel Learning
- Authors: Jian Li, Yong Liu, Weiping Wang
- Abstract summary: We build an interpretable convolutional spectral kernel network (CSKN) based on the inverse Fourier transform.
We derive generalization error bounds and introduce two regularizers to improve performance.
Experimental results on real-world datasets validate the effectiveness of the learning framework.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, non-stationary spectral kernels have drawn much attention owing to
their powerful feature representation ability in revealing long-range
correlations and input-dependent characteristics. However, non-stationary
spectral kernels are still shallow models, and so they cannot learn
both hierarchical features and local interdependence. In this paper, to obtain
hierarchical and local knowledge, we build an interpretable convolutional
spectral kernel network (CSKN) based on the inverse Fourier transform,
where we introduce deep architectures and convolutional filters into
non-stationary spectral kernel representations. Moreover, based on Rademacher
complexity, we derive the generalization error bounds and introduce two
regularizers to improve the performance. Combining the regularizers and recent
advancements on random initialization, we finally complete the learning
framework of CSKN. Extensive experimental results on real-world
datasets validate the effectiveness of the learning framework and coincide with
our theoretical findings.
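To make the recipe concrete, below is a minimal numpy sketch of the general idea the abstract describes: a convolutional filter supplies local structure, and a random-Fourier-style spectral feature map (whose frequency matrix would be learned) supplies the kernel representation. This is an illustration only, not the paper's CSKN; the actual model uses non-stationary (input-dependent) frequencies, deeper architectures, the two regularizers, and a specific initialization scheme, none of which are reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_valid(x, w):
    """Valid-mode 1-D convolution: the 'local interdependence' part."""
    return np.convolve(x, w, mode="valid")

def spectral_features(z, W, b):
    """Random-Fourier-style feature map sqrt(2/D) * cos(Wz + b).

    With W drawn from a spectral density this approximates a stationary
    kernel; learning W (and letting it depend on the input) is the step
    toward a non-stationary spectral kernel.
    """
    return np.sqrt(2.0 / W.shape[0]) * np.cos(W @ z + b)

# Toy pipeline: convolution, then spectral feature map. Illustrative only.
x1 = rng.standard_normal(64)           # first input signal
x2 = rng.standard_normal(64)           # second input signal
w_conv = rng.standard_normal(5) / 5.0  # convolutional filter (learnable)

D = 128                                # number of spectral features
W = rng.standard_normal((D, 60))       # frequencies (learnable); 60 = 64 - 5 + 1
b = rng.uniform(0.0, 2.0 * np.pi, D)   # phases

phi1 = spectral_features(conv1d_valid(x1, w_conv), W, b)
phi2 = spectral_features(conv1d_valid(x2, w_conv), W, b)
print("approximate kernel value:", float(phi1 @ phi2))
```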
Related papers
- Point-Calibrated Spectral Neural Operators [54.13671100638092]
We introduce the Point-Calibrated Spectral Transform, which learns operator mappings by approximating functions with a point-level adaptive spectral basis.
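As a rough illustration (my reading of the one-line summary, not the paper's actual operator), the sketch below combines a standard FFT-based spectral layer with a learned pointwise gate standing in for "point-level calibration"; the gate function and the blending rule are assumptions made for the sake of a runnable example.

```python
import numpy as np

def spectral_layer(x, R, gate):
    """FFT-based spectral transform with a pointwise 'calibration' gate.

    x: real signal on a uniform grid; R: learnable complex multipliers
    on the lowest len(R) Fourier modes; gate: maps each grid value to
    (0, 1). The gating/blending rule here is an assumption.
    """
    X = np.fft.rfft(x)
    X[:R.shape[0]] *= R              # learned spectral part
    y = np.fft.irfft(X, n=x.size)    # back to point space
    g = gate(x)                      # point-level adaptation (assumed)
    return g * y + (1.0 - g) * x     # calibrated blend of the two paths

rng = np.random.default_rng(1)
x = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))
R = rng.standard_normal(8) + 1j * rng.standard_normal(8)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
print(spectral_layer(x, R, sigmoid)[:4])
```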
arXiv Detail & Related papers (2024-10-15T08:19:39Z)
- HoloNets: Spectral Convolutions do extend to Directed Graphs [59.851175771106625]
Conventional wisdom dictates that spectral convolutional networks may only be deployed on undirected graphs.
Here we show this traditional reliance on the graph Fourier transform to be superfluous.
We provide a frequency-response interpretation of newly developed filters, investigate the influence of the basis used to express filters and discuss the interplay with characteristic operators on which networks are based.
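The core point, that filtering need not go through a graph Fourier transform, can be pictured with a matrix polynomial in a directed (non-symmetric) operator. The toy below is a generic polynomial filter, not HoloNets' filters, which are more general.

```python
import numpy as np

def directed_poly_filter(A, x, theta):
    """Apply the polynomial filter y = sum_k theta[k] * A^k x.

    A may be a non-symmetric (directed) operator; no graph Fourier
    transform, and hence no undirectedness assumption, is needed.
    """
    y = np.zeros_like(x)
    power = np.eye(A.shape[0])
    for t in theta:
        y = y + t * (power @ x)
        power = power @ A
    return y

# Directed 4-cycle (A is not symmetric).
A = np.roll(np.eye(4), 1, axis=1)
x = np.array([1.0, 0.0, 0.0, 0.0])
print(directed_poly_filter(A, x, theta=[0.5, 0.3, 0.2]))
```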
arXiv Detail & Related papers (2023-10-03T17:42:09Z)
- Mechanism of feature learning in convolutional neural networks [14.612673151889615]
We identify the mechanism of how convolutional neural networks learn from image data.
We present empirical evidence for our ansatz, including identifying high correlation between covariances of filters and patch-based AGOPs.
We then demonstrate the generality of our result by using the patch-based AGOP to enable deep feature learning in convolutional kernel machines.
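For reference, a patch-based Average Gradient Outer Product (AGOP) reduces to averaging outer products of per-patch gradients. A minimal numpy version is sketched below, with the gradients passed in as an array; in practice they would come from automatic differentiation, and all shapes here are illustrative.

```python
import numpy as np

def patch_agop(grads):
    """Average Gradient Outer Product over patches.

    grads: (num_samples, num_patches, patch_dim) gradients of the model
    output w.r.t. each flattened input patch. Returns the
    (patch_dim, patch_dim) AGOP matrix.
    """
    g = grads.reshape(-1, grads.shape[-1])
    return g.T @ g / g.shape[0]

rng = np.random.default_rng(2)
grads = rng.standard_normal((10, 36, 9))  # e.g. 3x3 patches, flattened
M = patch_agop(grads)
print(M.shape)  # compare against filter covariances W.T @ W of a CNN
```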
arXiv Detail & Related papers (2023-09-01T16:30:02Z)
- Learning Neural Eigenfunctions for Unsupervised Semantic Segmentation [12.91586050451152]
Spectral clustering is a theoretically grounded solution to unsupervised semantic segmentation, where spectral embeddings for pixels are computed to construct distinct clusters.
Current approaches still suffer from inefficiencies in spectral decomposition and inflexibility in applying them to the test data.
This work addresses these issues by casting spectral clustering as a parametric approach that employs neural network-based eigenfunctions to produce spectral embeddings.
In practice, the neural eigenfunctions are lightweight and take the features from pre-trained models as inputs, improving training efficiency and unleashing the potential of pre-trained models for dense prediction.
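For context, the classical pipeline being parameterized looks roughly like the sketch below: build an affinity from (pre-trained) features, normalize it, and take the top eigenvectors as embeddings. The paper replaces the per-image eigendecomposition with trained neural eigenfunctions so the embedding transfers to new data; that training objective is not reproduced here.

```python
import numpy as np

def spectral_embedding(F, k, sigma=1.0):
    """Top-k spectral embedding from features F with shape (n, d)."""
    sq = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2.0 * sigma ** 2))      # pixel affinity matrix
    d = W.sum(1)
    L = W / np.sqrt(np.outer(d, d))           # symmetrically normalized
    _, vecs = np.linalg.eigh(L)               # the costly, per-image step
    return vecs[:, -k:]                       # embeddings; cluster these

rng = np.random.default_rng(3)
F = rng.standard_normal((50, 8))              # stand-in pixel features
print(spectral_embedding(F, k=3).shape)
```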
arXiv Detail & Related papers (2023-04-06T03:14:15Z)
- Spectral Regularization Allows Data-frugal Learning over Combinatorial Spaces [13.36217184117654]
We show that regularizing the spectral representation of machine learning models improves their generalization power when labeled data is scarce.
Running gradient descent on the regularized loss results in a better generalization performance compared to baseline algorithms in several data-scarce real-world problems.
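Over Boolean input spaces the "spectral representation" is the Walsh-Hadamard (Fourier) spectrum, so one hedged way to picture the regularizer is an L1 penalty on that spectrum, as sketched below for a toy function given by its full truth table. The brute-force enumeration is only feasible for tiny input dimension, and lam is an illustrative hyperparameter; the paper's method is more scalable than this.

```python
import numpy as np

def walsh_hadamard(v):
    """Fast Walsh-Hadamard transform of a length-2^n vector."""
    out = v.astype(float).copy()
    h = 1
    while h < out.size:
        for i in range(0, out.size, 2 * h):
            a = out[i:i + h].copy()
            b = out[i + h:i + 2 * h].copy()
            out[i:i + h], out[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return out / np.sqrt(out.size)

def spectral_l1(f_values, lam=0.1):
    """L1 penalty on the Boolean-Fourier spectrum of a model's outputs
    over all 2^n inputs; added to the loss to favor sparse spectra."""
    return lam * np.abs(walsh_hadamard(f_values)).sum()

f = np.array([0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0])  # toy truth table
print(spectral_l1(f))
```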
arXiv Detail & Related papers (2022-10-05T23:31:54Z)
- Spectral Decomposition Representation for Reinforcement Learning [100.0424588013549]
We propose an alternative spectral method, Spectral Decomposition Representation (SPEDER), that extracts a state-action abstraction from the dynamics without inducing spurious dependence on the data collection policy.
A theoretical analysis establishes the sample efficiency of the proposed algorithm in both the online and offline settings.
An experimental investigation demonstrates superior performance over current state-of-the-art algorithms across several benchmarks.
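The underlying object is a low-rank factorization of the transition dynamics, P(s'|s,a) ≈ φ(s,a)ᵀμ(s'). The toy below exhibits such a factorization by SVD on a known tabular MDP; SPEDER itself learns φ and μ from sampled transitions without ever forming P, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy tabular MDP: P has one row per (s, a) pair, rows sum to 1.
nS, nA, d = 6, 2, 3
P = rng.random((nS * nA, nS))
P /= P.sum(axis=1, keepdims=True)

# Rank-d spectral factorization P ~ Phi @ Mu.T:
# row (s, a) of Phi is the state-action abstraction phi(s, a).
U, S, Vt = np.linalg.svd(P, full_matrices=False)
Phi = U[:, :d] * S[:d]
Mu = Vt[:d].T
print("rank-%d reconstruction error: %.4f"
      % (d, np.linalg.norm(P - Phi @ Mu.T)))
```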
arXiv Detail & Related papers (2022-08-19T19:01:30Z)
- Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
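A minimal sparse-GP sketch with inducing points placed in a feature space is below; it uses the standard subset-of-regressors predictive mean, with a fixed random feature map and fixed Z where IGN would learn both jointly. The tanh feature map and all sizes are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(5)

def rbf(A, B, ls=1.0):
    """RBF kernel matrix between the rows of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * ls ** 2))

W1 = rng.standard_normal((2, 4))        # toy feature map weights
feat = lambda X: np.tanh(X @ W1)        # stand-in for a learned network
Z = rng.standard_normal((5, 4))         # inducing points in feature space

X = rng.standard_normal((20, 2))
y = np.sin(X[:, 0])
Kzz = rbf(Z, Z) + 1e-6 * np.eye(len(Z))
Kxz = rbf(feat(X), Z)

# Subset-of-regressors predictive mean through the inducing points.
noise = 1e-2
w = np.linalg.solve(Kxz.T @ Kxz + noise * Kzz, Kxz.T @ y)
x_test = rng.standard_normal((3, 2))
print(rbf(feat(x_test), Z) @ w)         # predictions at test inputs
```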
arXiv Detail & Related papers (2022-04-21T05:27:09Z)
- Spectral Complexity-scaled Generalization Bound of Complex-valued Neural Networks [78.64167379726163]
This paper is the first to prove a generalization bound for complex-valued neural networks.
We conduct experiments by training complex-valued convolutional neural networks on different datasets.
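While the bound itself is not restated here, "spectral complexity" in bounds of this family typically scales with the product of the layers' spectral norms. The snippet below computes that product for complex weight matrices, as a simplified proxy that omits the bound's other terms.

```python
import numpy as np

def spectral_norm_product(weights):
    """Product of the layers' spectral norms (largest singular values);
    generalization bounds of this family scale with such a product."""
    return float(np.prod([np.linalg.svd(W, compute_uv=False)[0]
                          for W in weights]))

rng = np.random.default_rng(6)
Ws = [rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
      for _ in range(3)]                # complex-valued weight matrices
print(spectral_norm_product(Ws))
```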
arXiv Detail & Related papers (2021-12-07T03:25:25Z)
- Functional Regularization for Reinforcement Learning via Learned Fourier Features [98.90474131452588]
We propose a simple architecture for deep reinforcement learning by embedding inputs into a learned Fourier basis.
We show that it improves the sample efficiency of both state-based and image-based RL.
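The embedding itself is simple to sketch: project the input through a learnable frequency matrix and take sines and cosines. Below is a minimal numpy version; the initialization scale and sizes are illustrative, and in the paper the matrix is trained end-to-end with the RL objective rather than held fixed.

```python
import numpy as np

class LearnedFourierFeatures:
    """Embed x as [cos(Bx + c), sin(Bx + c)]; B, c are the learned
    parameters (here fixed at random initialization for illustration)."""

    def __init__(self, in_dim, num_feats, scale=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.B = scale * rng.standard_normal((num_feats, in_dim))
        self.c = rng.uniform(0.0, 2.0 * np.pi, num_feats)

    def __call__(self, x):
        z = self.B @ x + self.c
        return np.concatenate([np.cos(z), np.sin(z)])

lff = LearnedFourierFeatures(in_dim=4, num_feats=32)
state = np.array([0.1, -0.2, 0.05, 0.3])
print(lff(state).shape)   # (64,) embedding fed to the policy/value MLP
```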
arXiv Detail & Related papers (2021-12-06T18:59:52Z)
- Learning with convolution and pooling operations in kernel methods [8.528384027684192]
Recent empirical work has shown that hierarchical convolutional kernels improve the performance of kernel methods in image classification tasks.
We study the precise interplay between approximation and generalization in convolutional architectures.
Our results quantify how choosing an architecture adapted to the target function leads to a large improvement in the sample complexity.
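A convolutional kernel in this sense averages a base kernel over patches, and pooling corresponds to also averaging over patch positions. The 1-D sketch below shows both variants with a Gaussian base kernel; it is a generic construction, not the paper's exact kernel.

```python
import numpy as np

def conv_kernel(x, xp, patch=3, pool=False):
    """Convolutional kernel on 1-D signals.

    Without pooling: average a Gaussian base kernel over aligned
    patches. With pooling: average over all pairs of patch positions,
    trading locality for translation invariance.
    """
    px = np.lib.stride_tricks.sliding_window_view(x, patch)
    pxp = np.lib.stride_tricks.sliding_window_view(xp, patch)
    base = lambda a, b: np.exp(-((a - b) ** 2).sum(-1))
    if pool:
        return base(px[:, None, :], pxp[None, :, :]).mean()
    return base(px, pxp).mean()

rng = np.random.default_rng(7)
x, xp = rng.standard_normal(16), rng.standard_normal(16)
print(conv_kernel(x, xp), conv_kernel(x, xp, pool=True))
```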
arXiv Detail & Related papers (2021-11-16T09:00:44Z)
- Deep Fourier Kernel for Self-Attentive Point Processes [16.63706478353667]
We present a novel attention-based model for discrete event data to capture complex non-linear temporal dependence structures.
We introduce a novel score function using Fourier kernel embedding, whose spectrum is represented using neural networks.
We establish the approach's theoretical properties and demonstrate its competitive performance against the state of the art on synthetic and real data.
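One hedged way to picture the construction: draw frequencies by pushing base noise through a small network, so the network represents the spectrum, and score event pairs with the resulting Monte-Carlo Fourier kernel embedding. The sketch below is deliberately simplified and stationary; the paper's kernel captures richer, non-stationary temporal dependence.

```python
import numpy as np

rng = np.random.default_rng(8)

# The spectrum is 'represented by a network': base noise is pushed
# through a small transform to produce frequency samples.
W1 = rng.standard_normal((16, 1))
W2 = rng.standard_normal((1, 16))

def sample_freqs(n):
    eps = rng.standard_normal((n, 1))
    return (np.tanh(eps @ W1.T) @ W2.T)[:, 0]   # learned frequencies

def fourier_score(ti, tj, n_freq=256):
    """Monte-Carlo Fourier kernel score E_w[cos(w * (ti - tj))] between
    two event times, with w drawn from the network-defined spectrum."""
    w = sample_freqs(n_freq)
    return float(np.cos(w * (ti - tj)).mean())

print(fourier_score(1.0, 1.3))   # plug into attention over past events
```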
arXiv Detail & Related papers (2020-02-17T22:25:40Z)