One Class Restricted Kernel Machines
- URL: http://arxiv.org/abs/2502.10443v1
- Date: Tue, 11 Feb 2025 07:11:20 GMT
- Title: One Class Restricted Kernel Machines
- Authors: A. Quadir, M. Sajid, M. Tanveer
- Abstract summary: Restricted kernel machines (RKMs) have demonstrated a significant impact in enhancing generalization ability in the field of machine learning.
RKMs' efficacy can be compromised by the presence of outliers and other forms of contamination within the dataset.
To address this critical issue and to ensure the robustness of the model, we propose the novel one-class RKM (OCRKM).
In the framework of OCRKM, we employ an energy function akin to that of the RBM, which integrates both visible and hidden variables in a nonprobabilistic setting.
- Score: 0.0
- Abstract: Restricted kernel machines (RKMs) have demonstrated a significant impact in enhancing generalization ability in the field of machine learning. Recent studies have introduced various methods within the RKM framework, combining kernel functions with the least squares support vector machine (LSSVM) in a manner similar to the energy function of restricted Boltzmann machines (RBMs), such that a better performance can be achieved. However, RKM's efficacy can be compromised by the presence of outliers and other forms of contamination within the dataset. These anomalies can skew the learning process, leading to less accurate and reliable outcomes. To address this critical issue and to ensure the robustness of the model, we propose the novel one-class RKM (OCRKM). In the framework of OCRKM, we employ an energy function akin to that of the RBM, which integrates both visible and hidden variables in a nonprobabilistic setting. The formulation of the proposed OCRKM facilitates the seamless integration of a one-class classification method with the RKM, enhancing its capability to detect outliers and anomalies effectively. The proposed OCRKM model is evaluated over UCI benchmark datasets. Experimental findings and statistical analyses consistently emphasize the superior generalization capabilities of the proposed OCRKM model over baseline models across all scenarios.
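The paper's exact formulation is not reproduced here, but the one-class, LSSVM-style idea the abstract refers to can be illustrated with a minimal one-class least-squares kernel machine. Everything below (function names, the RBF kernel choice, the regularization parameter) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and the rows of B.
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def fit_one_class_lssvm(X, gamma=1.0, sigma=1.0):
    # One-class least-squares dual: solve (K + I/gamma) alpha = 1,
    # i.e. every training point is treated as an inlier with target 1.
    n = X.shape[0]
    K = rbf_kernel(X, X, sigma)
    return np.linalg.solve(K + np.eye(n) / gamma, np.ones(n))

def score(X_train, alpha, X_test, sigma=1.0):
    # Higher score means "more normal"; low scores flag outliers,
    # e.g. against a quantile of the training scores.
    return rbf_kernel(X_test, X_train, sigma) @ alpha
```

A point far from the training cloud gets near-zero kernel similarity to every training sample, so its score collapses toward zero and it is flagged as an anomaly.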
Related papers
- Hybrid machine learning based scale bridging framework for permeability prediction of fibrous structures [0.0]
This study introduces a hybrid machine learning-based scale-bridging framework for predicting the permeability of fibrous textile structures.
Four methodologies were evaluated: Single Scale Method (SSM), Simple Upscaling Method (SUM), Scale-Bridging Method (SBM), and Fully Resolved Model (FRM).
arXiv Detail & Related papers (2025-02-07T16:09:25Z)
- Provable Risk-Sensitive Distributional Reinforcement Learning with General Function Approximation [54.61816424792866]
We introduce a general framework on Risk-Sensitive Distributional Reinforcement Learning (RS-DisRL), with static Lipschitz Risk Measures (LRM) and general function approximation.
We design two innovative meta-algorithms: RS-DisRL-M, a model-based strategy for model-based function approximation, and RS-DisRL-V, a model-free approach for general value function approximation.
arXiv Detail & Related papers (2024-02-28T08:43:18Z)
- ClusterDDPM: An EM clustering framework with Denoising Diffusion Probabilistic Models [9.91610928326645]
Denoising diffusion probabilistic models (DDPMs) represent a new and promising class of generative models.
In this study, we introduce an innovative expectation-maximization (EM) framework for clustering using DDPMs.
In the M-step, our focus lies in learning clustering-friendly latent representations for the data by employing the conditional DDPM and matching the distribution of latent representations to the mixture of Gaussian priors.
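The diffusion components are beyond a short sketch, but the "match latent representations to a mixture of Gaussian priors" part reduces, for a fixed set of latent vectors, to ordinary EM for a Gaussian mixture. A minimal spherical-covariance version (all names and the initialization scheme are illustrative, not the paper's code):

```python
import numpy as np

def gmm_em(Z, k, iters=50):
    # Fit a k-component spherical Gaussian mixture to latent vectors Z via EM.
    n, d = Z.shape
    mu = Z[np.linspace(0, n - 1, k).astype(int)].copy()  # spread-out init
    var = np.ones(k)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities r[i, j] = P(component j | z_i).
        sq = ((Z[:, None, :] - mu[None]) ** 2).sum(-1)            # (n, k)
        logp = -0.5 * sq / var - 0.5 * d * np.log(2 * np.pi * var) + np.log(pi)
        logp -= logp.max(1, keepdims=True)                        # stabilize
        r = np.exp(logp)
        r /= r.sum(1, keepdims=True)
        # M-step: re-estimate mixture weights, means, and variances.
        nk = r.sum(0)
        pi = nk / n
        mu = (r.T @ Z) / nk[:, None]
        sq = ((Z[:, None, :] - mu[None]) ** 2).sum(-1)
        var = (r * sq).sum(0) / (nk * d) + 1e-6
    return r.argmax(1)
```

In the paper's setting the latents would come from the conditional DDPM encoder and be refreshed each M-step; here they are just fixed inputs.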
arXiv Detail & Related papers (2023-12-13T10:04:06Z)
- f-FERM: A Scalable Framework for Robust Fair Empirical Risk Minimization [9.591164070876689]
This paper presents a unified optimization framework for fair empirical risk based on f-divergence measures (f-FERM).
Our extension is based on a distributionally robust optimization reformulation of the f-FERM objective under $L_p$ norms as uncertainty sets.
In addition, our experiments demonstrate the superiority of fairness-accuracy tradeoffs offered by f-FERM for almost all batch sizes.
arXiv Detail & Related papers (2023-12-06T03:14:16Z)
- Bayesian learning of feature spaces for multitasks problems [0.11538034264098687]
This paper introduces a novel approach for multi-task regression that connects Kernel Machines (KMs) and Extreme Learning Machines (ELMs).
The proposed models, termed RFF-BLR, stand on a Bayesian framework that simultaneously addresses two main design goals.
The experimental results show that this framework can lead to significant performance improvements compared to the state-of-the-art methods in nonlinear regression.
arXiv Detail & Related papers (2022-09-07T09:53:53Z)
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
- CTDS: Centralized Teacher with Decentralized Student for Multi-Agent Reinforcement Learning [114.69155066932046]
This work proposes a novel Centralized Teacher with Decentralized Student (CTDS) framework, which consists of a teacher model and a student model.
Specifically, the teacher model allocates the team reward by learning individual Q-values conditioned on global observation.
The student model utilizes the partial observations to approximate the Q-values estimated by the teacher model.
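The teacher-student step described above amounts to regressing student Q-values, computed from partial observations, onto the teacher's Q-values from the global observation. A linear least-squares stand-in (the shapes, the linear "networks", and all names are hypothetical, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, obs_dim = 4, 8
W_teacher = rng.normal(size=(obs_dim, n_actions))  # stand-in linear teacher Q-net

def teacher_q(global_obs):
    # Teacher Q-values conditioned on the full (global) observation.
    return global_obs @ W_teacher

def distill_student(global_obs, partial_idx):
    # Fit a linear student on partial observations to mimic the teacher's
    # Q-values (least-squares stand-in for a gradient-based distillation loss).
    targets = teacher_q(global_obs)
    partial = global_obs[:, partial_idx]
    W_student, *_ = np.linalg.lstsq(partial, targets, rcond=None)
    return W_student
```

With fewer observed dimensions the student can only approximate the teacher's Q-values; at execution time only the student (and hence only partial observations) is needed, which is the point of the framework.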
arXiv Detail & Related papers (2022-03-16T06:03:14Z)
- CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks [58.29502185344086]
In safety-critical machine learning applications, it is crucial to defend models against adversarial attacks.
It is important to provide provable guarantees for deep learning models against semantically meaningful input transformations.
We propose a new universal probabilistic certification approach based on Chernoff-Cramer bounds.
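The Chernoff-Cramér machinery itself is more refined, but the general shape of sampling-based probabilistic certification can be sketched with a plain Hoeffding upper bound on the failure probability under random input transformations (the function names and the choice of bound here are illustrative, not the paper's method):

```python
import math
import random

def certify_failure_bound(model, x, transform, n=1000, delta=1e-3, rng=None):
    # Estimate how often `model`'s prediction changes under random
    # `transform`s of x, then return a (1 - delta)-confidence Hoeffding
    # upper bound on the true failure probability.
    rng = rng or random.Random(0)
    base = model(x)
    fails = sum(model(transform(x, rng)) != base for _ in range(n))
    p_hat = fails / n
    return min(1.0, p_hat + math.sqrt(math.log(1 / delta) / (2 * n)))
```

If the returned bound is below a tolerance, the prediction is certified (with confidence 1 - delta) as robust to the sampled transformation class.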
arXiv Detail & Related papers (2021-09-22T12:46:04Z)
- Continual Learning with Fully Probabilistic Models [70.3497683558609]
We present an approach for continual learning based on fully probabilistic (or generative) models of machine learning.
We propose a pseudo-rehearsal approach using a Gaussian Mixture Model (GMM) instance for both generator and classifier functionalities.
We show that the resulting GMR model achieves state-of-the-art performance on common class-incremental learning problems at very competitive time and memory complexity.
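The pseudo-rehearsal idea is that a stored generative model of past classes supplies replayed samples when training moves on to a new task. A minimal stand-in using one diagonal Gaussian per class in place of the paper's GMM (all names are illustrative):

```python
import numpy as np

def fit_class_gaussians(X, y):
    # Per-class diagonal Gaussian "generator" (a 1-component-per-class
    # stand-in for a GMM) stored for later pseudo-rehearsal.
    stats = {}
    for c in np.unique(y):
        Xc = X[y == c]
        stats[c] = (Xc.mean(0), Xc.std(0) + 1e-6)
    return stats

def pseudo_rehearse(stats, n_per_class, rng):
    # Sample replayed (x, y) pairs from the stored generators so old
    # classes stay represented while training on new-task data.
    Xs, ys = [], []
    for c, (mu, sd) in stats.items():
        Xs.append(rng.normal(mu, sd, size=(n_per_class, mu.size)))
        ys.append(np.full(n_per_class, c))
    return np.vstack(Xs), np.concatenate(ys)
```

Mixing these samples into each new task's training batches is what lets a single model act as both generator and classifier without storing raw past data.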
arXiv Detail & Related papers (2021-04-19T12:26:26Z)
- Estimating Structural Target Functions using Machine Learning and Influence Functions [103.47897241856603]
We propose a new framework for statistical machine learning of target functions arising as identifiable functionals from statistical models.
This framework is problem- and model-agnostic and can be used to estimate a broad variety of target parameters of interest in applied statistics.
We put particular focus on so-called coarsening at random/doubly robust problems with partially unobserved information.
arXiv Detail & Related papers (2020-08-14T16:48:29Z)
- Robust Generative Restricted Kernel Machines using Weighted Conjugate Feature Duality [11.68800227521015]
We introduce weighted conjugate feature duality in the framework of Restricted Kernel Machines (RKMs).
The RKM formulation allows for an easy integration of methods from classical robust statistics.
Experiments show that the weighted RKM is capable of generating clean images when contamination is present in the training data.
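The weighted duality is the paper's contribution; the classical-robust-statistics ingredient it imports can be illustrated with a simple Huber-style reweighting that downweights points with large residuals (the cutoff value is a conventional choice from robust statistics, not taken from the paper):

```python
import numpy as np

def huber_weights(residuals, c=1.345):
    # Classical Huber weights: 1 for small residuals, c/|r| beyond the
    # cutoff, so contaminated points contribute less to the next (re)fit.
    a = np.abs(residuals)
    w = np.ones_like(a)
    big = a > c
    w[big] = c / a[big]
    return w
```

In an iteratively reweighted scheme, these weights would scale each point's error term before re-solving the kernel least-squares system, which is how outliers in the training data lose their influence on the learned model.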
arXiv Detail & Related papers (2020-02-04T09:23:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.