$\ell_p$-Norm Multiple Kernel One-Class Fisher Null-Space
- URL: http://arxiv.org/abs/2008.08642v2
- Date: Sat, 25 Sep 2021 14:08:10 GMT
- Title: $\ell_p$-Norm Multiple Kernel One-Class Fisher Null-Space
- Authors: Shervin Rahimzadeh Arashloo
- Abstract summary: The paper addresses the multiple kernel learning (MKL) problem for one-class classification (OCC).
We present a multiple kernel learning algorithm where a general $\ell_p$-norm constraint ($p\geq1$) on kernel weights is considered.
An extension of the proposed one-class MKL approach is also considered where several related one-class MKL tasks are learned jointly.
- Score: 15.000818334408802
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The paper addresses the multiple kernel learning (MKL) problem for one-class
classification (OCC). For this purpose, based on the Fisher null-space
one-class classification principle, we present a multiple kernel learning
algorithm where a general $\ell_p$-norm constraint ($p\geq1$) on kernel weights
is considered. We cast the proposed one-class MKL task as a min-max saddle
point Lagrangian optimisation problem and propose an efficient method to solve
it. An extension of the proposed one-class MKL approach is also considered
where several related one-class MKL tasks are learned jointly by constraining
them to share common kernel weights.
An extensive assessment of the proposed method on a range of data sets from
different application domains confirms its merits against the baseline and
several other algorithms.
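The paper's min-max saddle-point solver is not reproduced here, but its core constraint, combining base kernels with non-negative weights restricted to the unit $\ell_p$-sphere ($p\geq1$), can be sketched in a few lines; all function names and the toy kernels below are illustrative, not taken from the paper:

```python
import numpy as np

def lp_normalise(scores, p=2.0):
    # project non-negative kernel scores onto the unit l_p sphere (p >= 1),
    # so the combined weights satisfy ||beta||_p = 1
    scores = np.maximum(np.asarray(scores, dtype=float), 0.0)
    norm = np.linalg.norm(scores, ord=p)
    if norm == 0.0:
        return np.full_like(scores, 1.0 / len(scores))
    return scores / norm

def combine_kernels(kernels, beta):
    # weighted sum of base kernel matrices: K = sum_m beta_m * K_m
    return sum(b * K for b, K in zip(beta, kernels))

# toy example: three base kernels on four points
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 2))
linear = X @ X.T
rbf = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1))
const = np.ones((4, 4))
beta = lp_normalise([3.0, 2.0, 1.0], p=1.5)
K = combine_kernels([linear, rbf, const], beta)
```

Setting $p=1$ recovers the sparse simplex-style constraint, while larger $p$ spreads weight more uniformly across kernels.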
Related papers
- Continual Learning Using a Kernel-Based Method Over Foundation Models [13.315292874389735]
Class-incremental learning (CIL) learns a sequence of tasks incrementally.
CIL has two key challenges: catastrophic forgetting (CF) and inter-task class separation (ICS).
This paper proposes Kernel Linear Discriminant Analysis (KLDA) that can effectively avoid CF and ICS problems.
arXiv Detail & Related papers (2024-12-20T05:09:18Z) - A column generation algorithm with dynamic constraint aggregation for minimum sum-of-squares clustering [0.30693357740321775]
The minimum sum-of-squares clustering (MSSC) problem asks for a partition of $n$ data points into $k$ clusters that minimises the total within-cluster sum of squared distances.
We propose an efficient algorithm for solving large-scale MSSC instances, which combines column generation (CG) with dynamic constraint aggregation (DCA).
arXiv Detail & Related papers (2024-10-08T16:51:28Z) - Fast Asymmetric Factorization for Large Scale Multiple Kernel Clustering [5.21777096853979]
Multiple Kernel Clustering (MKC) has emerged as a solution, allowing the fusion of information from multiple base kernels for clustering.
We propose Efficient Multiple Kernel Concept Factorization (EMKCF), which constructs a new sparse kernel matrix inspired by local regression to achieve memory efficiency.
arXiv Detail & Related papers (2024-05-26T06:29:12Z) - UCB-driven Utility Function Search for Multi-objective Reinforcement Learning [75.11267478778295]
In Multi-objective Reinforcement Learning (MORL), agents are tasked with optimising decision-making behaviours over multiple objectives.
We focus on the case of linear utility functions parameterised by weight vectors w.
We introduce a method based on Upper Confidence Bound to efficiently search for the most promising weight vectors during different stages of the learning process.
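The search criterion builds on the classic Upper Confidence Bound index; a minimal sketch of that index (standard UCB1, not necessarily the paper's exact criterion, with illustrative numbers) follows:

```python
import math

def ucb_score(mean_reward, n_pulls, total_pulls, c=2.0):
    # UCB1-style index: exploit the empirical mean reward, but add an
    # exploration bonus for candidates (here: weight vectors) tried less often
    if n_pulls == 0:
        return float("inf")
    return mean_reward + math.sqrt(c * math.log(total_pulls) / n_pulls)

# two candidate weight vectors: one well-tried, one barely explored
means = [0.8, 0.5]
pulls = [10, 2]
total = sum(pulls)
scores = [ucb_score(m, n, total) for m, n in zip(means, pulls)]
best = max(range(len(scores)), key=scores.__getitem__)  # under-explored arm wins here
```

Early in learning the exploration bonus dominates and cheap, rarely-tried weight vectors get picked; as pull counts grow the index converges to the empirical mean.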
arXiv Detail & Related papers (2024-05-01T09:34:42Z) - Multiple Locally Linear Kernel Machines [14.282867638200699]
We propose a new non-linear classifier based on a combination of locally linear classifiers.
We provide a scalable generic MKL training algorithm handling streaming kernels.
arXiv Detail & Related papers (2024-01-17T22:43:00Z) - Multiple Kernel Clustering with Dual Noise Minimization [56.009011016367744]
Multiple kernel clustering (MKC) aims to group data by integrating complementary information from base kernels.
We observe that dual noise pollutes the block-diagonal structures of the kernels and degrades clustering performance, with C-noise proving more destructive than N-noise.
In this paper, we rigorously define dual noise and propose a novel parameter-free MKC algorithm that minimizes it.
arXiv Detail & Related papers (2022-07-13T08:37:42Z) - Local Sample-weighted Multiple Kernel Clustering with Consensus Discriminative Graph [73.68184322526338]
Multiple kernel clustering (MKC) is committed to achieving optimal information fusion from a set of base kernels.
This paper proposes a novel local sample-weighted multiple kernel clustering model.
Experimental results demonstrate that our LSWMKC possesses better local manifold representation and outperforms existing kernel or graph-based clustering algorithms.
arXiv Detail & Related papers (2022-07-05T05:00:38Z) - Gradient Based Clustering [72.15857783681658]
We propose a general approach for distance based clustering, using the gradient of the cost function that measures clustering quality.
The approach is an iterative two step procedure (alternating between cluster assignment and cluster center updates) and is applicable to a wide range of functions.
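A minimal sketch of such a two-step iteration for the squared-distance cost, assuming nearest-center assignment and a plain gradient step on the centers (learning rate and helper names are illustrative, not from the paper):

```python
import numpy as np

def gradient_clustering_step(X, centers, lr=0.5):
    # step 1: assign each point to its nearest center
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = d.argmin(axis=1)
    # step 2: move each center down the gradient of the squared-distance cost;
    # for this cost the gradient direction is (center - mean of its members)
    new_centers = centers.copy()
    for k in range(len(centers)):
        members = X[labels == k]
        if len(members):
            new_centers[k] -= lr * (centers[k] - members.mean(axis=0))
    return labels, new_centers

# toy run: two well-separated groups; lr=1.0 recovers the k-means mean update
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
centers = np.array([[0.0, 0.5], [10.0, 10.5]])
labels, new_centers = gradient_clustering_step(X, centers, lr=1.0)
```

Choosing a smaller learning rate gives the damped center updates that make the scheme applicable beyond the plain k-means cost.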
arXiv Detail & Related papers (2022-02-01T19:31:15Z) - SimpleMKKM: Simple Multiple Kernel K-means [49.500663154085586]
We propose a simple yet effective multiple kernel clustering algorithm, termed simple multiple kernel k-means (SimpleMKKM).
Our criterion is given by an intractable minimization-maximization problem in the kernel coefficient and clustering partition matrix.
We theoretically analyze the performance of SimpleMKKM in terms of its clustering generalization error.
arXiv Detail & Related papers (2020-05-11T10:06:40Z) - Kernel-Based Reinforcement Learning: A Finite-Time Analysis [53.47210316424326]
We introduce Kernel-UCBVI, a model-based optimistic algorithm that leverages the smoothness of the MDP and a non-parametric kernel estimator of the rewards.
We empirically validate our approach in continuous MDPs with sparse rewards.
arXiv Detail & Related papers (2020-04-12T12:23:46Z) - Nonlinear classifiers for ranking problems based on kernelized SVM [0.0]
Many classification problems focus on maximizing the performance only on the samples with the highest relevance instead of all samples.
In this paper, we derive a general framework including several classes of these linear classification problems.
We dualize the problems, add kernels and propose a componentwise dual ascent method.
arXiv Detail & Related papers (2020-02-26T12:37:11Z)
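After dualization, predictions in such kernelized formulations take the standard dual form $f(x)=\sum_i \alpha_i y_i k(x_i,x)+b$; a minimal sketch with an RBF kernel (the componentwise dual ascent solver itself is not shown, and all names here are illustrative):

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian kernel k(x, z) = exp(-gamma * ||x - z||^2)
    d = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def dual_decision_function(alpha, y, X_train, X_test, b=0.0, gamma=1.0):
    # kernelized dual-form prediction f(x) = sum_i alpha_i y_i k(x_i, x) + b
    return (alpha * y) @ rbf_kernel(X_train, X_test, gamma) + b

# tiny example: two training points, one query at x = 0
X_train = np.array([[0.0], [1.0]])
alpha = np.array([1.0, 1.0])
y = np.array([1.0, -1.0])
f = dual_decision_function(alpha, y, X_train, np.array([[0.0]]))
```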
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.