Learning a powerful SVM using piece-wise linear loss functions
- URL: http://arxiv.org/abs/2102.04849v1
- Date: Tue, 9 Feb 2021 14:45:08 GMT
- Title: Learning a powerful SVM using piece-wise linear loss functions
- Authors: Pritam Anand
- Abstract summary: The k-Piece-wise Linear loss Support Vector Machine (k-PL-SVM) model is an adaptive SVM model.
We performed extensive numerical experiments with the k-PL-SVM models for k = 2 and 3.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we consider general k-piece-wise linear convex loss
functions in the SVM model for measuring the empirical risk. The resulting
k-Piece-wise Linear loss Support Vector Machine (k-PL-SVM) model is an adaptive
SVM model which can learn a suitable piece-wise linear loss function according
to the nature of the given training set. The k-PL-SVM models are general SVM
models, and existing popular SVM models, such as the C-SVM, LS-SVM and Pin-SVM
models, are their particular cases. We performed extensive numerical
experiments with the k-PL-SVM models for k = 2 and 3 and show that they are an
improvement over existing SVM models.
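As a concrete illustration (the paper's exact parameterization is not given in this summary, so the function and piece choices below are assumptions), any convex piece-wise linear loss of the margin u = y f(x) can be written as a pointwise maximum of k affine pieces; the hinge loss of C-SVM and the pinball loss of Pin-SVM then arise as particular choices of the pieces:

```python
import numpy as np

def k_pl_loss(margins, slopes, intercepts):
    """Generic convex k-piece-wise linear loss: the pointwise maximum of
    k affine pieces a_j * u + b_j evaluated at the margin u = y * f(x).
    Convexity is automatic for a maximum of affine functions."""
    margins = np.asarray(margins, dtype=float)
    pieces = np.outer(margins, slopes) + np.asarray(intercepts)  # shape (n, k)
    return pieces.max(axis=1)

# Hinge loss (C-SVM): max(0, 1 - u), i.e. pieces (0, 0) and (-1, 1).
hinge = lambda u: k_pl_loss(u, slopes=[0.0, -1.0], intercepts=[0.0, 1.0])

# Pinball loss (Pin-SVM, parameter tau): max(1 - u, -tau * (1 - u)),
# i.e. pieces (-1, 1) and (tau, -tau).
tau = 0.5
pinball = lambda u: k_pl_loss(u, slopes=[-1.0, tau], intercepts=[1.0, -tau])

u = np.array([-1.0, 0.0, 0.5, 1.0, 2.0])
print(hinge(u))    # [2.  1.  0.5 0.  0. ]
print(pinball(u))  # [2.  1.  0.5 0.  0.5]
```

Since the abstract describes the loss as learned from the training set, the adaptivity presumably comes from fitting the slope/intercept pairs of the pieces rather than fixing them in advance.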
Related papers
- Scalable Language Models with Posterior Inference of Latent Thought Vectors [52.63299874322121]
Latent-Thought Language Models (LTMs) incorporate explicit latent thought vectors that follow a prior model in latent space.
LTMs possess additional scaling dimensions beyond traditional LLMs, yielding a structured design space.
LTMs significantly outperform conventional autoregressive models and discrete diffusion models in validation perplexity and zero-shot language modeling.
arXiv Detail & Related papers (2025-02-03T17:50:34Z)
- VLsI: Verbalized Layers-to-Interactions from Large to Small Vision Language Models [63.27511432647797]
We propose VLsI: Verbalized Layers-to-Interactions, a new VLM family in 2B and 7B model sizes.
We validate VLsI across ten challenging vision-language benchmarks, achieving notable performance gains (11.0% for 2B and 17.4% for 7B) over GPT-4V.
arXiv Detail & Related papers (2024-12-02T18:58:25Z)
- Recursive Learning of Asymptotic Variational Objectives [49.69399307452126]
General state-space models (SSMs) are widely used in statistical machine learning and are among the most classical generative models for sequential time-series data.
Online sequential IWAE (OSIWAE) allows for online learning of both model parameters and a Markovian recognition model for inferring latent states.
This approach is theoretically better founded than recently proposed online variational SMC methods.
arXiv Detail & Related papers (2024-11-04T16:12:37Z)
- Multiview learning with twin parametric margin SVM [0.0]
Multiview learning (MVL) seeks to leverage complementary information from diverse perspectives.
We propose the multiview twin parametric margin support vector machine (MvTPMSVM).
MvTPMSVM constructs parametric margin hyperplanes corresponding to both classes, aiming to regulate and manage the impact of the heteroscedastic noise structure.
arXiv Detail & Related papers (2024-08-04T10:16:11Z)
- Local Binary and Multiclass SVMs Trained on a Quantum Annealer [0.8399688944263844]
In recent years, with the advent of working quantum annealers, hybrid SVM models characterised by quantum training and classical execution have been introduced.
These models have demonstrated comparable performance to their classical counterparts.
However, they are limited in the training set size due to the restricted connectivity of the current quantum annealers.
arXiv Detail & Related papers (2024-03-13T14:37:00Z)
- Sample Complexity Characterization for Linear Contextual MDPs [67.79455646673762]
Contextual Markov decision processes (CMDPs) describe a class of reinforcement learning problems in which the transition kernels and reward functions can change over time, with different MDPs indexed by a context variable.
CMDPs serve as an important framework to model many real-world applications with time-varying environments.
We study CMDPs under two linear function approximation models: Model I with context-varying representations and common linear weights for all contexts; and Model II with common representations for all contexts and context-varying linear weights.
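A minimal sketch of the two approximation schemes (variable names and shapes are illustrative assumptions, not taken from the paper):

```python
import numpy as np

d, num_contexts = 4, 3
rng = np.random.default_rng(0)

# Model I: context-varying representations phi_c, one common weight vector.
phi_by_context = rng.normal(size=(num_contexts, d))   # phi_c(s, a) at a fixed (s, a)
theta_common = rng.normal(size=d)
q_model1 = phi_by_context @ theta_common              # Q_c = <phi_c(s, a), theta>

# Model II: one common representation phi, context-varying linear weights.
phi_common = rng.normal(size=d)                       # phi(s, a)
theta_by_context = rng.normal(size=(num_contexts, d))
q_model2 = theta_by_context @ phi_common              # Q_c = <phi(s, a), theta_c>
```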
arXiv Detail & Related papers (2024-02-05T03:25:04Z)
- Soft-SVM Regression For Binary Classification [0.0]
We introduce a new exponential family based on a convex relaxation of the hinge loss function using softness and class-separation parameters.
This new family, denoted Soft-SVM, allows us to prescribe a generalized linear model that effectively bridges between logistic regression and SVM classification.
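As an illustration of how a softness parameter can bridge the hinge loss and a logistic-style loss, here is a scaled-softplus relaxation (a standard construction, assumed here for illustration; it is not necessarily the exact family introduced in the paper):

```python
import numpy as np

def soft_hinge(u, s=1.0):
    """Scaled-softplus relaxation of the hinge loss max(0, 1 - u).
    As the softness parameter s -> infinity the loss tends to the hinge
    loss; for small s it behaves like a smooth logistic-style loss.
    (Illustrative only; the paper's Soft-SVM family may differ.)"""
    z = s * (1.0 - np.asarray(u, dtype=float))
    return np.logaddexp(0.0, z) / s   # log(1 + exp(z)) / s, numerically stable

u = np.linspace(-2.0, 3.0, 6)
for s in (1.0, 10.0, 100.0):
    print(s, np.round(soft_hinge(u, s), 3))
# As s grows, the printed values approach max(0, 1 - u).
```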
arXiv Detail & Related papers (2022-05-24T03:01:35Z)
- Chance constrained conic-segmentation support vector machine with uncertain data [0.0]
Support vector machines (SVMs) form one of the well-known classes of supervised learning algorithms.
This paper studies the conic-segmentation SVM (CS-SVM) when the data points are uncertain or mislabelled.
arXiv Detail & Related papers (2021-07-28T12:29:47Z)
- Estimating Average Treatment Effects with Support Vector Machines [77.34726150561087]
The support vector machine (SVM) is one of the most popular classification algorithms in the machine learning literature.
We adapt SVM as a kernel-based weighting procedure that minimizes the maximum mean discrepancy between the treatment and control groups.
We characterize the bias of causal effect estimation arising from this trade-off, connecting the proposed SVM procedure to the existing kernel balancing methods.
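For reference, a minimal sketch of the quantity such a weighting procedure targets, the squared maximum mean discrepancy (MMD) between the treatment and control samples under an RBF kernel (the weighting step itself is not reproduced here, and all names are illustrative):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def mmd2(X_treat, X_ctrl, gamma=1.0):
    """Biased (V-statistic) estimator of the squared MMD between two samples."""
    kxx = rbf_kernel(X_treat, X_treat, gamma).mean()
    kyy = rbf_kernel(X_ctrl, X_ctrl, gamma).mean()
    kxy = rbf_kernel(X_treat, X_ctrl, gamma).mean()
    return kxx + kyy - 2.0 * kxy

rng = np.random.default_rng(0)
print(mmd2(rng.normal(0.0, 1.0, (50, 3)), rng.normal(0.5, 1.0, (40, 3))))
```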
arXiv Detail & Related papers (2021-02-23T20:22:56Z)
- Modelling the Distribution of 3D Brain MRI using a 2D Slice VAE [66.63629641650572]
We propose a method to model the distribution of 3D MR brain volumes by combining a 2D slice VAE with a Gaussian model that captures the relationships between slices.
We also introduce a novel evaluation method for generated volumes that quantifies how well their segmentations match those of true brain anatomy.
arXiv Detail & Related papers (2020-07-09T13:23:15Z)
- Unified SVM Algorithm Based on LS-DC Loss [0.0]
We propose UniSVM, a single algorithm that can train different SVM models.
UniSVM has a dominant advantage over existing algorithms because it admits a closed-form solution (a classical example of how a closed-form solve arises in SVM training, the LS-SVM linear system, is sketched below).
Experiments show that UniSVM can achieve comparable performance in less training time.
arXiv Detail & Related papers (2020-06-16T12:40:06Z)
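For context on the closed-form claim above, here is the classical LS-SVM training system (standard textbook material, not the UniSVM algorithm itself): replacing the hinge loss with a squared loss reduces training to a single linear solve.

```python
import numpy as np

def lssvm_train(X, y, C=1.0, gamma=1.0):
    """Classical LS-SVM: the squared loss turns training into the linear
    system  [[0, y^T], [y, Omega + I/C]] [b; alpha] = [0; 1],
    with Omega_ij = y_i y_j k(x_i, x_j) and an RBF kernel.
    X: (n, d) features; y: (n,) labels in {-1, +1}."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq_dists)
    n = len(y)
    Omega = (y[:, None] * y[None, :]) * K
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / C
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]            # bias b, dual coefficients alpha

def lssvm_predict(X_train, y, alpha, b, X_new, gamma=1.0):
    """Decision rule sign(sum_i alpha_i y_i k(x, x_i) + b)."""
    sq_dists = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq_dists)
    return np.sign(K @ (alpha * y) + b)
```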