LSTSVR-PI: Least square twin support vector regression with privileged information
- URL: http://arxiv.org/abs/2312.02596v2
- Date: Wed, 21 Feb 2024 15:22:07 GMT
- Title: LSTSVR-PI: Least square twin support vector regression with privileged information
- Authors: Anuradha Kumari, M. Tanveer
- Abstract summary: We propose a new least square twin support vector regression using privileged information (LSTSVR-PI).
It integrates the LUPI paradigm to incorporate additional sources of information into the least square twin support vector regression.
The proposed model fills the gap between the contemporary paradigm of LUPI and classical LSTSVR.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In an educational setting, a teacher plays a crucial role in various
classroom teaching patterns. Similarly, mirroring this aspect of human
learning, the learning using privileged information (LUPI) paradigm introduces
additional information to instruct learning models during the training stage. The
new least square twin support vector regression using privileged information
(LSTSVR-PI) provides a different approach to training the twin variant of the
regression model: it integrates the LUPI paradigm to incorporate additional
sources of information into the least square twin support vector regression.
The proposed LSTSVR-PI solves systems of linear equations, which adds to the
efficiency of the model. Further, we establish a
generalization error bound based on the Rademacher complexity of the proposed
model and incorporate the structural risk minimization principle. The proposed
LSTSVR-PI fills the gap between the contemporary paradigm of LUPI and classical
LSTSVR. Further, to assess the performance of the proposed model, we conduct
numerical experiments along with the baseline models across various
artificially generated and real-world datasets. The experiments and statistical
analyses indicate the superiority of the proposed model. Moreover, as an
application, we conduct experiments on time series datasets, which further
demonstrate the superiority of the proposed LSTSVR-PI.
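As in classical LSTSVR, the computational core is a pair of regularized least squares problems solved in closed form. The sketch below illustrates the kind of linear system involved for one of the two bound functions; it assumes standard LSTSVR-style notation (augmented matrix G = [A e]) and is not the exact LSTSVR-PI formulation, whose systems additionally carry privileged-feature terms.

```python
import numpy as np

def fit_bound(A, Y, eps, C):
    """Closed-form solve for one LSTSVR-style bound function f(x) = w^T x + b.

    A: (n, d) inputs, Y: (n, 1) targets, eps: epsilon offset of the bound,
    C: trade-off parameter (its placement here is illustrative).
    """
    n, d = A.shape
    e = np.ones((n, 1))
    G = np.hstack([A, e])                      # augmented data matrix [A e]
    t = Y - eps * e                            # epsilon-shifted targets
    # Regularized normal equations: (G^T G + I / C) u = G^T t
    u = np.linalg.solve(G.T @ G + np.eye(d + 1) / C, G.T @ t)
    return u[:-1], u[-1, 0]                    # w, b

# Toy usage: the final regressor averages the two bound functions.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))
Y = A @ np.array([[1.0], [-2.0], [0.5]]) + 0.1 * rng.normal(size=(50, 1))
w1, b1 = fit_bound(A, Y, eps=0.1, C=10.0)
w2, b2 = fit_bound(A, Y, eps=-0.1, C=10.0)
f = A @ (w1 + w2) / 2 + (b1 + b2) / 2
```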
Related papers
- Getting More Juice Out of the SFT Data: Reward Learning from Human Demonstration Improves SFT for LLM Alignment [65.15914284008973]
We propose to leverage an Inverse Reinforcement Learning (IRL) technique to simultaneously build a reward model and a policy model.
We show that the proposed algorithms converge to the stationary solutions of the IRL problem.
Our results indicate that it is beneficial to leverage reward learning throughout the entire alignment process.
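The summary above gives no implementation details; as a hedged illustration of the reward-learning step, the sketch below trains a linear reward model to rank human demonstrations above policy samples with a Bradley-Terry-style logistic loss. The features and names are placeholders, not the paper's algorithm.

```python
import numpy as np

def reward_grad(theta, demo_feats, policy_feats):
    """One gradient step for a linear reward r(x) = theta^T phi(x), trained to
    rank demonstrations above policy samples (logistic / Bradley-Terry loss)."""
    margin = demo_feats @ theta - policy_feats @ theta   # r(demo) - r(policy)
    p = 1.0 / (1.0 + np.exp(-margin))                    # P(demo preferred)
    # Gradient of -log p, averaged over demonstration/policy pairs
    return -((1 - p)[:, None] * (demo_feats - policy_feats)).mean(axis=0)

rng = np.random.default_rng(1)
theta = np.zeros(8)
for _ in range(200):
    demos = rng.normal(loc=0.5, size=(32, 8))      # stand-in demonstration features
    rollouts = rng.normal(loc=0.0, size=(32, 8))   # stand-in policy-sample features
    theta -= 0.5 * reward_grad(theta, demos, rollouts)
```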
arXiv Detail & Related papers (2024-05-28T07:11:05Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling this data heterogeneity issue.
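The precise projected trajectory regularizer is defined in the paper; the sketch below only shows the generic shape of such a scheme, with a simple proximal penalty toward a reference trajectory standing in for FedPTR's projected update (an assumption, not the actual method).

```python
import numpy as np

def local_update(w_global, X, y, w_ref, lam=0.1, lr=0.01, steps=20):
    """Client-side least squares steps with a trajectory-style proximal term.

    w_ref stands in for a projected reference trajectory; here it is just a
    fixed vector, not FedPTR's actual projected update.
    """
    w = w_global.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y) + lam * (w - w_ref)
        w -= lr * grad
    return w

rng = np.random.default_rng(2)
w_global = np.zeros(5)
for _round in range(10):
    client_models = []
    for c in range(4):                          # 4 clients with non-identical data
        X = rng.normal(loc=c * 0.2, size=(40, 5))
        y = X @ np.arange(1.0, 6.0) + 0.1 * rng.normal(size=40)
        client_models.append(local_update(w_global, X, y, w_ref=w_global))
    w_global = np.mean(client_models, axis=0)   # FedAvg-style aggregation
```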
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Nonparametric Linear Feature Learning in Regression Through Regularisation [0.0]
We propose a novel method for joint linear feature learning and non-parametric function estimation.
By using alternating minimisation, we iteratively rotate the data to improve alignment with leading directions.
We establish that the expected risk of our method converges to the minimal risk under minimal assumptions and with explicit rates.
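As a toy instance of this alternating scheme, the sketch below fits a single-index model y ≈ g(w^T x): it alternates between fitting a polynomial g on the current projection and updating the direction w by gradient descent. The concrete updates are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))
w_true = np.array([1.0, -1.0, 0.5, 0.0]) / 1.5
y = np.sin(X @ w_true) + 0.05 * rng.normal(size=200)

w = rng.normal(size=4)
w /= np.linalg.norm(w)
for _ in range(50):
    z = X @ w
    coefs = np.polyfit(z, y, deg=5)            # fit g on the current projection
    g, dg = np.poly1d(coefs), np.poly1d(np.polyder(coefs))
    resid = g(z) - y
    grad_w = (resid * dg(z))[:, None] * X      # chain rule through z = X w
    w -= 0.1 * grad_w.mean(axis=0)             # update the linear feature
    w /= np.linalg.norm(w)                     # keep the direction normalized
```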
arXiv Detail & Related papers (2023-07-24T12:52:55Z)
- Representation Transfer Learning via Multiple Pre-trained models for Linear Regression [3.5788754401889014]
We consider the problem of learning a linear regression model on a data domain of interest (target) given few samples.
To aid learning, we are provided with a set of pre-trained regression models that are trained on potentially different data domains.
We propose a representation transfer based learning method for constructing the target model.
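One simple way to picture this transfer pattern (a hypothetical sketch, not the paper's construction): treat the pre-trained models' predictions as a low-dimensional representation of each target point and fit the few target samples on top of it.

```python
import numpy as np

rng = np.random.default_rng(4)
d, n_target = 10, 15                                 # only a few target samples
sources = [rng.normal(size=d) for _ in range(3)]     # pre-trained linear models

X = rng.normal(size=(n_target, d))
y = X @ (0.6 * sources[0] + 0.4 * sources[2]) + 0.05 * rng.normal(size=n_target)

# Represent each target point by the pre-trained models' predictions ...
Phi = np.stack([X @ w for w in sources], axis=1)     # (n_target, n_sources)
# ... and solve a tiny regularized least squares in that representation.
alpha = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(len(sources)), Phi.T @ y)
w_target = sum(a * w for a, w in zip(alpha, sources))  # induced target model
```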
arXiv Detail & Related papers (2023-05-25T19:35:24Z)
- Reinforcement Learning for Topic Models [3.42658286826597]
We apply reinforcement learning techniques to topic modeling by replacing the variational autoencoder in ProdLDA with a continuous action space reinforcement learning policy.
We introduce several modifications: modernizing the neural network architecture, weighting the ELBO loss, using contextual embeddings, and monitoring the learning process by computing topic diversity and coherence.
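As a hedged illustration of the key substitution, a continuous-action policy trained with a REINFORCE-style gradient is sketched below; the Gaussian policy, the placeholder reward, and the dimensions are assumptions, not the paper's system.

```python
import numpy as np

rng = np.random.default_rng(5)
K = 8                                    # latent topic dimension (placeholder)
mu, log_std = np.zeros(K), np.zeros(K)   # Gaussian policy parameters

def reward(action):
    """Placeholder reward; in the paper's setting this would score how well
    the decoded topics reconstruct the document (ELBO-like terms)."""
    return -np.sum((action - 1.0) ** 2)

for _ in range(500):
    std = np.exp(log_std)
    a = mu + std * rng.normal(size=K)            # sample a continuous action
    r = reward(a)
    # REINFORCE: grad of log N(a; mu, std) scaled by the reward
    g_mu = (a - mu) / std**2
    g_log_std = ((a - mu) ** 2 / std**2) - 1.0
    mu += 1e-3 * r * g_mu
    log_std += 1e-3 * r * g_log_std
```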
arXiv Detail & Related papers (2023-05-08T16:41:08Z)
- EmbedDistill: A Geometric Knowledge Distillation for Information Retrieval [83.79667141681418]
Large neural models (such as Transformers) achieve state-of-the-art performance for information retrieval (IR).
We propose a novel distillation approach that leverages the relative geometry among queries and documents learned by the large teacher model.
We show that our approach successfully distills from both dual-encoder (DE) and cross-encoder (CE) teacher models to 1/10th size asymmetric students that can retain 95-97% of the teacher performance.
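A hedged sketch of geometry-aware distillation: in addition to matching the teacher's relevance scores, the student's query-document geometry is pushed toward the teacher's. The particular losses below (score MSE plus distance matching, with equal embedding dimensions assumed) are illustrative stand-ins for the paper's objective.

```python
import numpy as np

def geometry_distill_loss(q_s, D_s, q_t, D_t):
    """q_*: (k,) query embeddings; D_*: (m, k) document embeddings
    (s = student, t = teacher, assumed here to share a dimension)."""
    score_s, score_t = D_s @ q_s, D_t @ q_t          # dual-encoder relevance scores
    score_loss = np.mean((score_s - score_t) ** 2)
    # Relative geometry: match query-document distances across models
    dist_s = np.linalg.norm(D_s - q_s, axis=1)
    dist_t = np.linalg.norm(D_t - q_t, axis=1)
    geom_loss = np.mean((dist_s - dist_t) ** 2)
    return score_loss + geom_loss

rng = np.random.default_rng(6)
q_t, D_t = rng.normal(size=16), rng.normal(size=(5, 16))
q_s, D_s = q_t + 0.1 * rng.normal(size=16), D_t + 0.1 * rng.normal(size=(5, 16))
print(geometry_distill_loss(q_s, D_s, q_t, D_t))
```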
arXiv Detail & Related papers (2023-01-27T22:04:37Z)
- Fitting a Directional Microstructure Model to Diffusion-Relaxation MRI Data with Self-Supervised Machine Learning [2.8167227950959206]
Self-supervised machine learning is emerging as an attractive alternative to supervised learning.
In this paper, we demonstrate self-supervised machine learning model fitting for a directional microstructural model.
Our approach shows clear improvements in parameter estimation and computational time, compared to standard non-linear least squares fitting.
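The self-supervised trick is that the loss compares the signal re-synthesized by the forward model from the predicted parameters against the measured signal itself, so no ground-truth parameters are needed. In the sketch below a toy mono-exponential decay stands in for the directional diffusion-relaxation model; the architecture and names are assumptions.

```python
import torch

b = torch.linspace(0.0, 3.0, 20)                 # acquisition settings (e.g., b-values)

def forward_model(params):
    """Toy signal model S = S0 * exp(-b * D); the paper fits a richer
    directional model in the same self-supervised way."""
    S0, D = params[:, :1], params[:, 1:2]
    return S0 * torch.exp(-b * D)

net = torch.nn.Sequential(
    torch.nn.Linear(20, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 2), torch.nn.Softplus(),  # positive S0 and D
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

true = torch.rand(256, 2) * torch.tensor([2.0, 1.0]) + 0.1
signals = forward_model(true) + 0.01 * torch.randn(256, 20)

for _ in range(500):
    pred_params = net(signals)
    recon = forward_model(pred_params)
    loss = ((recon - signals) ** 2).mean()        # self-supervised: no labels used
    opt.zero_grad(); loss.backward(); opt.step()
```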
arXiv Detail & Related papers (2022-10-05T15:51:39Z)
- Virtual embeddings and self-consistency for self-supervised learning [43.086696088061416]
TriMix is a novel concept for self-supervised learning that generates virtual embeddings through linear data interpolation.
We validate TriMix on eight benchmark datasets, with improvements of 2.71% and 0.41% over the second-best models for the two data types.
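The summary is terse, so the sketch below is only a generic reading of the idea: mix embeddings with random convex weights to create a virtual embedding, and ask a predictor to be consistent under that mixing. The three-way mix is suggested by the name TriMix and is an assumption here.

```python
import numpy as np

def trimix_virtual(z1, z2, z3, rng):
    """Mix three embeddings with random convex weights; return the virtual
    embedding and the weights (an illustrative reading of the three-way mix)."""
    lam = rng.dirichlet(np.ones(3))
    return lam[0] * z1 + lam[1] * z2 + lam[2] * z3, lam

rng = np.random.default_rng(7)
z1, z2, z3 = (rng.normal(size=32) for _ in range(3))
z_virtual, lam = trimix_virtual(z1, z2, z3, rng)

# Self-consistency: a predictor applied to the virtual embedding should match
# the same convex mix of its outputs on the real embeddings.
W = rng.normal(size=(10, 32)) / np.sqrt(32)
pred_of_mix = W @ z_virtual
mix_of_preds = lam[0] * (W @ z1) + lam[1] * (W @ z2) + lam[2] * (W @ z3)
consistency_loss = np.mean((pred_of_mix - mix_of_preds) ** 2)
# Exactly 0 for this linear head; a nonlinear predictor makes it a real target.
```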
arXiv Detail & Related papers (2022-06-13T10:20:28Z)
- Self-Supervised Class Incremental Learning [51.62542103481908]
Existing Class Incremental Learning (CIL) methods are based on a supervised classification framework sensitive to data labels.
When updating them on new class data, they suffer from catastrophic forgetting: the model cannot clearly discern old class data from the new.
In this paper, we explore the performance of Self-Supervised representation learning in Class Incremental Learning (SSCIL) for the first time.
arXiv Detail & Related papers (2021-11-18T06:58:19Z)
- MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
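The mutual-information view supports the classic ratio trick: train a classifier to distinguish jointly drawn (parameter, data) pairs from independently shuffled ones, and read the log likelihood-to-evidence ratio off its logit. A minimal logistic-regression version on a toy simulator (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)

def simulate(theta):
    return theta + rng.normal(size=theta.shape)       # toy simulator x ~ N(theta, 1)

n = 4000
theta = rng.uniform(-3, 3, size=(n, 1))               # prior draws
x = simulate(theta)
joint = np.hstack([theta, x])                         # label 1: dependent pairs
marginal = np.hstack([theta, x[rng.permutation(n)]])  # label 0: shuffled pairs

def feats(tx):
    t, xx = tx[:, :1], tx[:, 1:]
    return np.hstack([t, xx, t * xx, t**2, xx**2, np.ones_like(t)])

Z = np.vstack([feats(joint), feats(marginal)])
lab = np.hstack([np.ones(n), np.zeros(n)])
w = np.zeros(Z.shape[1])
for _ in range(300):                                  # plain logistic regression
    p = 1 / (1 + np.exp(-Z @ w))
    w -= 0.1 * Z.T @ (p - lab) / len(lab)
# The classifier logit approximates log p(x | theta) - log p(x):
log_ratio = feats(np.array([[0.0, 0.1]])) @ w
```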
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
- Few-Shot Named Entity Recognition: A Comprehensive Study [92.40991050806544]
We investigate three schemes to improve the model generalization ability for few-shot settings.
We perform empirical comparisons on 10 public NER datasets with various proportions of labeled data.
We achieve new state-of-the-art results in both few-shot and training-free settings.
arXiv Detail & Related papers (2020-12-29T23:43:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.