Adaptive Matching of Kernel Means
- URL: http://arxiv.org/abs/2011.07798v1
- Date: Mon, 16 Nov 2020 09:00:14 GMT
- Title: Adaptive Matching of Kernel Means
- Authors: Miao Cheng, Xinge You
- Abstract summary: Kernel mean matching (KMM) has become an important method for knowledge discovery and novelty detection in kernel machines.
In this work, a novel approach to adaptive matching of kernel means is proposed, in which data selected for high importance are used to make the optimization computationally efficient.
Experimental results on a wide variety of real-world data sets demonstrate that the proposed method outperforms several state-of-the-art methods.
- Score: 10.377914682543903
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of data analysis and feature learning can be
improved when a suitable pattern-matching mechanism is available. One feasible
solution is to estimate the importance of individual instances; consequently,
kernel mean matching (KMM) has become an important method for knowledge
discovery and novelty detection in kernel machines. Existing KMM methods,
however, have focused on specific learning frameworks. In this work, a novel
approach to adaptive matching of kernel means is proposed, in which data
selected for high importance are used to make the optimization computationally
efficient. In addition, the proposed method supports scalable learning as a
generalized solution to the matching of newly appended data. Experimental
results on a wide variety of real-world data sets demonstrate that the
proposed method outperforms several state-of-the-art methods while preserving
computational efficiency.
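The abstract does not spell out the adaptive matching procedure itself, but it builds on kernel mean matching, which reweights source instances so that their weighted kernel mean embedding matches the mean embedding of a target sample. Below is a minimal sketch of the classic KMM quadratic program (Huang et al.), not the paper's adaptive variant; the RBF bandwidth gamma, the weight bound B, and the SLSQP solver are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(A, B_, gamma=1.0):
    # Pairwise RBF kernel matrix between rows of A and rows of B_.
    sq = np.sum(A**2, 1)[:, None] + np.sum(B_**2, 1)[None, :] - 2 * A @ B_.T
    return np.exp(-gamma * sq)

def kmm_weights(X_src, X_tgt, gamma=1.0, B=10.0, eps=None):
    """Classic KMM: find weights beta minimizing the RKHS distance between
    the beta-weighted source mean embedding and the target mean embedding,
    i.e. min 0.5 * beta' K beta - kappa' beta  s.t.  0 <= beta <= B and
    |mean(beta) - 1| <= eps. A sketch, not the paper's adaptive method."""
    n_s, n_t = len(X_src), len(X_tgt)
    eps = B / np.sqrt(n_s) if eps is None else eps
    K = rbf_kernel(X_src, X_src, gamma)                           # (n_s, n_s)
    kappa = (n_s / n_t) * rbf_kernel(X_src, X_tgt, gamma).sum(1)  # (n_s,)
    cons = [  # encode |mean(beta) - 1| <= eps as two inequalities g(beta) >= 0
        {"type": "ineq", "fun": lambda b: eps - (b.sum() / n_s - 1.0)},
        {"type": "ineq", "fun": lambda b: eps + (b.sum() / n_s - 1.0)},
    ]
    res = minimize(lambda b: 0.5 * b @ K @ b - kappa @ b, np.ones(n_s),
                   jac=lambda b: K @ b - kappa, method="SLSQP",
                   bounds=[(0.0, B)] * n_s, constraints=cons)
    return res.x  # importance weights for the source instances
```

The resulting weights can be plugged into any weighted empirical-risk objective; keeping only the high-weight instances, as the abstract describes, is one way to make the subsequent optimization cheap.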
Related papers
- Optimal Kernel Choice for Score Function-based Causal Discovery [92.65034439889872]
We propose a kernel selection method within the generalized score function that automatically selects the optimal kernel that best fits the data.
We conduct experiments on both synthetic data and real-world benchmarks, and the results demonstrate that our proposed method outperforms existing kernel selection methods.
arXiv Detail & Related papers (2024-07-14T09:32:20Z)
- A Weighted K-Center Algorithm for Data Subset Selection [70.49696246526199]
Subset selection is a fundamental problem that can play a key role in identifying smaller portions of the training data.
We develop a novel factor-3 approximation algorithm to compute subsets based on a weighted sum of the k-center and uncertainty-sampling objective functions (see the greedy k-center sketch after this list).
arXiv Detail & Related papers (2023-12-17T04:41:07Z)
- Efficient Model-Free Exploration in Low-Rank MDPs [76.87340323826945]
Low-Rank Markov Decision Processes offer a simple, yet expressive framework for RL with function approximation.
Existing algorithms are either (1) computationally intractable, or (2) reliant upon restrictive statistical assumptions.
We propose the first provably sample-efficient algorithm for exploration in Low-Rank MDPs.
arXiv Detail & Related papers (2023-07-08T15:41:48Z)
- Greedy Modality Selection via Approximate Submodular Maximization [19.22947539760366]
Multimodal learning aims to fuse heterogeneous sources of information from multi-modality data.
It is not always feasible to leverage all available modalities due to memory constraints.
We study modality selection, intending to efficiently select the most informative and complementary modalities under certain computational constraints (a sketch of the underlying greedy routine follows this list).
arXiv Detail & Related papers (2022-10-22T22:07:27Z)
- MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel framework for Model-Agnostic Counterfactual Explanation (MACE).
Our MACE approach combines a novel RL-based method for finding good counterfactual examples with a gradient-less descent method for improving proximity.
Experiments on public datasets validate its effectiveness, yielding better validity, sparsity, and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z)
- Compactness Score: A Fast Filter Method for Unsupervised Feature Selection [66.84571085643928]
We propose a fast unsupervised feature selection method, named Compactness Score (CSUFS), to select desired features.
In experiments, the proposed algorithm is more accurate and efficient than existing algorithms.
arXiv Detail & Related papers (2022-01-31T13:01:37Z)
- Unsupervised feature selection via self-paced learning and low-redundant regularization [6.083524716031565]
An unsupervised feature selection method is proposed by integrating the frameworks of self-paced learning and subspace learning.
The convergence of the method is proven theoretically and verified experimentally.
The experimental results show that the proposed method can improve the performance of clustering methods and outperform other compared algorithms.
arXiv Detail & Related papers (2021-12-14T08:28:19Z)
- A concise method for feature selection via normalized frequencies [0.0]
In this paper, a concise method is proposed for universal feature selection.
The proposed method uses a fusion of the filter method and the wrapper method, rather than a combination of them.
The evaluation results show that the proposed method outperformed several state-of-the-art related works in terms of accuracy, precision, recall, F-score and AUC.
arXiv Detail & Related papers (2021-06-10T15:29:54Z)
- Scalable Approximate Inference and Some Applications [2.6541211006790983]
In this thesis, we propose a new framework for approximate inference.
The four proposed algorithms are motivated by recent computational progress on Stein's method.
Results on simulated and real datasets indicate the statistical efficiency and wide applicability of our algorithms.
arXiv Detail & Related papers (2020-03-07T04:33:27Z)
- Learning with Differentiable Perturbed Optimizers [54.351317101356614]
We propose a systematic method to transform optimizers into operations that are differentiable and never locally constant.
Our approach relies on stochastic perturbations and can be used readily together with existing solvers (a minimal perturbed-argmax sketch appears after this list).
We show how this framework can be connected to a family of losses developed in structured prediction, and give theoretical guarantees for their use in learning tasks.
arXiv Detail & Related papers (2020-02-20T11:11:32Z)
- Learning High Order Feature Interactions with Fine Control Kernels [12.5433010409486]
We provide a methodology for learning sparse statistical models that use as features all possible multiplicative interactions.
We also introduce an algorithmic paradigm, the Fine Control Kernel framework, so named because it is based on Fenchel Duality.
arXiv Detail & Related papers (2020-02-09T06:29:15Z)
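For the weighted k-center entry above: the paper's factor-3 algorithm is not detailed in its summary, so the sketch below shows only the classic farthest-first greedy for the plain k-center objective (a well-known 2-approximation), which such methods typically build on; the uncertainty-sampling term would enter as an extra weighted component in the selection rule. Function and parameter names are illustrative.

```python
import numpy as np

def greedy_k_center(X, k, seed=0):
    """Farthest-first traversal: a classic 2-approximation for k-center.
    Illustrative baseline only; the paper adds a weighted uncertainty term."""
    rng = np.random.default_rng(seed)
    centers = [int(rng.integers(len(X)))]
    # Distance from every point to its nearest selected center so far.
    d = np.linalg.norm(X - X[centers[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(d))  # the farthest point joins the centers
        centers.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    return centers
```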
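For the greedy modality selection entry: the summary does not give the exact informativeness objective, but methods of this type build on the textbook greedy routine for monotone submodular maximization under a cardinality budget, which attains a (1 - 1/e) approximation when the objective is exactly submodular. In this sketch, `score` is a placeholder for the paper's (approximately) submodular measure:

```python
def greedy_modality_selection(score, modalities, budget):
    """Textbook greedy for monotone submodular maximization under a
    cardinality constraint. `score` maps a list of modalities to a value;
    it stands in for the paper's informativeness objective (assumption)."""
    selected, remaining = [], set(modalities)
    for _ in range(min(budget, len(remaining))):
        base = score(selected)
        # Pick the modality with the largest marginal gain.
        best = max(remaining, key=lambda m: score(selected + [m]) - base)
        selected.append(best)
        remaining.remove(best)
    return selected
```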
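For the differentiable perturbed optimizers entry: the core idea is that averaging the solutions of a solver applied to stochastically perturbed inputs yields a smooth, never locally constant relaxation of the original piecewise-constant mapping. A minimal Monte Carlo sketch for the argmax case (the sample count and noise scale are illustrative choices):

```python
import numpy as np

def perturbed_argmax(theta, n_samples=1000, sigma=1.0, seed=0):
    """Monte Carlo estimate of E[one_hot(argmax(theta + sigma * Z))] with
    Gaussian noise Z: a smooth relaxation of the piecewise-constant argmax."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((n_samples, theta.shape[0]))
    idx = np.argmax(theta[None, :] + sigma * Z, axis=1)
    return np.eye(theta.shape[0])[idx].mean(axis=0)

# Example: the output concentrates on the largest entry as sigma shrinks.
print(perturbed_argmax(np.array([1.0, 2.0, 0.5]), sigma=0.5))
```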
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.