Domain Adaptive Learning Based on Sample-Dependent and Learnable Kernels
- URL: http://arxiv.org/abs/2102.09340v1
- Date: Thu, 18 Feb 2021 13:55:06 GMT
- Title: Domain Adaptive Learning Based on Sample-Dependent and Learnable Kernels
- Authors: Xinlong Lu, Zhengming Ma, Yuanping Lin
- Abstract summary: This paper proposes a Domain Adaptive Learning method based on Sample-Dependent and Learnable Kernels (SDLK-DAL)
The first contribution of our work is to propose a sample-dependent and learnable Positive Definite Quadratic Kernel function (PDQK) framework.
In a series of experiments, the RKHS determined by PDQK replaces the RKHS used in several state-of-the-art DAL algorithms, and our approach achieves better performance.
- Score: 2.1485350418225244
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Reproducing Kernel Hilbert Space (RKHS) is the common mathematical platform
for various kernel methods in machine learning. The purpose of kernel learning
is to learn an appropriate RKHS according to different machine learning
scenarios and training samples. Because RKHS is uniquely generated by the
kernel function, kernel learning can be regarded as kernel function learning.
This paper proposes a Domain Adaptive Learning method based on Sample-Dependent
and Learnable Kernels (SDLK-DAL). The first contribution of our work is to
propose a sample-dependent and learnable Positive Definite Quadratic Kernel
function (PDQK) framework. Unlike learning the exponential parameter of the
Gaussian kernel function or the coefficients of a kernel combination, the
proposed PDQK is a positive definite quadratic function in which a symmetric
positive semi-definite matrix is the learnable part in machine learning
applications. The second contribution is the application of PDQK to Domain
Adaptive Learning (DAL). Our approach learns the PDQK by minimizing the mean
discrepancy between the source-domain and target-domain data and then
transforms the data into the optimized RKHS generated by PDQK. In a series of
experiments, the RKHS determined by PDQK replaces the RKHS used in several
state-of-the-art DAL algorithms, and our approach achieves better performance.
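Since the abstract only sketches the method, the following is a minimal, hypothetical illustration of the two ingredients it describes: a quadratic kernel whose symmetric positive semi-definite matrix is learnable, and learning that matrix by minimizing the mean discrepancy (MMD) between source-domain and target-domain samples. The kernel form k(x, y) = (x^T L L^T y + c)^2, the regularizer, the optimizer, and all names below are illustrative assumptions, not the paper's actual PDQK or objective; in particular, the sample-dependent part of PDQK is omitted.
```python
# Minimal sketch (assumptions, not the paper's formulation): a learnable quadratic
# kernel k(x, y) = (x^T M y + c)^2 with M = L L^T symmetric positive semi-definite,
# trained by minimizing the biased empirical MMD between source and target samples
# plus an ad-hoc regularizer that discourages the trivial solution L -> 0.
import jax
import jax.numpy as jnp

def quad_kernel(X, Y, L, c=1.0):
    """Gram matrix of the assumed quadratic kernel k(x, y) = (x^T L L^T y + c)^2."""
    XL = X @ L            # (n, r) transformed first-argument samples
    YL = Y @ L            # (m, r) transformed second-argument samples
    return (XL @ YL.T + c) ** 2

def mmd2(Xs, Xt, L, c=1.0):
    """Biased empirical MMD^2 between source Xs and target Xt in the kernel's RKHS."""
    k_ss = quad_kernel(Xs, Xs, L, c).mean()
    k_tt = quad_kernel(Xt, Xt, L, c).mean()
    k_st = quad_kernel(Xs, Xt, L, c).mean()
    return k_ss + k_tt - 2.0 * k_st

def objective(L, Xs, Xt, lam=1e-2):
    """MMD^2 plus an illustrative regularizer keeping L^T L close to the identity."""
    r = L.shape[1]
    reg = jnp.sum((L.T @ L - jnp.eye(r)) ** 2)
    return mmd2(Xs, Xt, L) + lam * reg

# Toy usage: two small Gaussian "domains" in 5 dimensions.
key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
Xs = jax.random.normal(k1, (64, 5)) + 1.0      # source domain (shifted)
Xt = jax.random.normal(k2, (64, 5))            # target domain
L = jax.random.normal(k3, (5, 3)) * 0.1        # learnable factor of M = L L^T

grad_fn = jax.jit(jax.grad(objective))
for step in range(200):                        # plain gradient descent, for illustration only
    L = L - 0.05 * grad_fn(L, Xs, Xt)

# Data "transformed into the learned RKHS" can be represented by Gram matrices
# under the learned kernel, which a downstream DAL method would then consume.
K = quad_kernel(jnp.vstack([Xs, Xt]), jnp.vstack([Xs, Xt]), L)
print(float(mmd2(Xs, Xt, L)), K.shape)
```
A downstream DAL algorithm would operate on such Gram matrices (or the induced feature maps) computed with the learned kernel in place of a fixed Gaussian kernel, which is how the paper describes plugging the PDQK-generated RKHS into existing methods.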
Related papers
- Optimal Kernel Choice for Score Function-based Causal Discovery [92.65034439889872]
We propose a kernel selection method within the generalized score function that automatically selects the optimal kernel that best fits the data.
We conduct experiments on both synthetic data and real-world benchmarks, and the results demonstrate that our proposed method outperforms existing kernel selection methods.
arXiv Detail & Related papers (2024-07-14T09:32:20Z)
- Learning "best" kernels from data in Gaussian process regression. With application to aerodynamics [0.4588028371034406]
We introduce algorithms to select/design kernels in Gaussian process regression/kriging surrogate modeling techniques.
A first class of algorithms is kernel flow, which was introduced in the context of classification in machine learning.
A second class of algorithms is called spectral kernel ridge regression, and aims at selecting a "best" kernel such that the norm of the function to be approximated is minimal.
arXiv Detail & Related papers (2022-06-03T07:50:54Z)
- Meta-Learning Hypothesis Spaces for Sequential Decision-making [79.73213540203389]
We propose to meta-learn a kernel from offline data (Meta-KeL)
Under mild conditions, we guarantee that our estimated RKHS yields valid confidence sets.
We also empirically evaluate the effectiveness of our approach on a Bayesian optimization task.
arXiv Detail & Related papers (2022-02-01T17:46:51Z)
- Learning with convolution and pooling operations in kernel methods [8.528384027684192]
Recent empirical work has shown that hierarchical convolutional kernels improve the performance of kernel methods in image classification tasks.
We study the precise interplay between approximation and generalization in convolutional architectures.
Our results quantify how choosing an architecture adapted to the target function leads to a large improvement in the sample complexity.
arXiv Detail & Related papers (2021-11-16T09:00:44Z)
- Scaling Neural Tangent Kernels via Sketching and Random Features [53.57615759435126]
Recent works report that NTK regression can outperform finitely-wide neural networks trained on small-scale datasets.
We design a near input-sparsity time approximation algorithm for NTK, by sketching the expansions of arc-cosine kernels.
We show that a linear regressor trained on our CNTK features matches the accuracy of exact CNTK on CIFAR-10 dataset while achieving 150x speedup.
arXiv Detail & Related papers (2021-06-15T04:44:52Z)
- Random Features for the Neural Tangent Kernel [57.132634274795066]
We propose an efficient feature map construction for the Neural Tangent Kernel (NTK) of a fully-connected ReLU network.
We show that the dimension of the resulting features is much smaller than that of other baseline feature map constructions achieving comparable error bounds, both in theory and in practice.
arXiv Detail & Related papers (2021-04-03T09:08:12Z)
- Advanced Stationary and Non-Stationary Kernel Designs for Domain-Aware Gaussian Processes [0.0]
We propose advanced kernel designs that allow only functions with certain desirable characteristics to be elements of the reproducing kernel Hilbert space (RKHS)
We will show the impact of advanced kernel designs on Gaussian processes using several synthetic and two scientific data sets.
arXiv Detail & Related papers (2021-02-05T22:07:56Z)
- End-to-end Kernel Learning via Generative Random Fourier Features [31.57596752889935]
Random Fourier features (RFFs) provide a promising way for kernel learning in the spectral domain (a generic RFF sketch appears at the end of this page).
In this paper, we consider a one-stage process that incorporates the kernel learning and linear learner into a unifying framework.
arXiv Detail & Related papers (2020-09-10T00:27:39Z)
- Learning Deep Kernels for Non-Parametric Two-Sample Tests [50.92621794426821]
We propose a class of kernel-based two-sample tests, which aim to determine whether two sets of samples are drawn from the same distribution.
Our tests are constructed from kernels parameterized by deep neural nets, trained to maximize test power.
arXiv Detail & Related papers (2020-02-21T03:54:23Z)
- PolyScientist: Automatic Loop Transformations Combined with Microkernels for Optimization of Deep Learning Primitives [55.79741270235602]
We develop a hybrid solution to the development of deep learning kernels.
We use the advanced polyhedral technology to automatically tune the outer loops for performance.
arXiv Detail & Related papers (2020-02-06T08:02:34Z)
- Face Verification via learning the kernel matrix [9.414572104591027]
The kernel function is introduced to solve nonlinear pattern recognition problems.
A promising approach is to learn the kernel from data automatically.
In this paper, nonlinear face verification via learning the kernel matrix is proposed.
arXiv Detail & Related papers (2020-01-21T03:39:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
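For reference, here is a generic sketch of classical random Fourier features (Rahimi and Recht) for a Gaussian kernel, mentioned in the "End-to-end Kernel Learning via Generative Random Fourier Features" entry above. It illustrates only the standard RFF approximation, not the generative, end-to-end variant proposed in that paper; the bandwidth, feature count, and helper names are illustrative choices.
```python
# Generic random Fourier features for the Gaussian/RBF kernel (not the generative
# variant from the paper above): z(x) . z(y) approximates exp(-gamma * ||x - y||^2).
import numpy as np

def rff_map(X, n_features=256, gamma=0.5, seed=0):
    """Map X (n, d) to random Fourier features of an RBF kernel with parameter gamma."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))  # spectral samples
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)                # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Quick check: feature inner products should approximate the exact RBF Gram matrix.
X = np.random.default_rng(1).normal(size=(100, 5))
Z = rff_map(X, n_features=2048)
K_approx = Z @ Z.T
K_exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(np.abs(K_approx - K_exact).max())  # should be small, roughly 1e-1 or less
```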