Weighted Sparse Partial Least Squares for Joint Sample and Feature
Selection
- URL: http://arxiv.org/abs/2308.06740v1
- Date: Sun, 13 Aug 2023 10:09:25 GMT
- Title: Weighted Sparse Partial Least Squares for Joint Sample and Feature
Selection
- Authors: Wenwen Min, Taosheng Xu and Chris Ding
- Abstract summary: We propose an $\ell_\infty/\ell_0$-norm constrained weighted sparse PLS ($\ell_\infty/\ell_0$-wsPLS) method for joint sample and feature selection.
We develop an efficient iterative algorithm for each multi-view wsPLS model and show its convergence property.
- Score: 7.219077740523681
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sparse Partial Least Squares (sPLS) is a common dimensionality reduction
technique for data fusion, which projects data samples from two views by
seeking linear combinations of a small number of variables with maximum
variance. However, sPLS extracts these combinations using all data samples,
so it cannot detect latent subsets of samples. To extend the application of
sPLS by identifying a specific subset of samples and removing
outliers, we propose an $\ell_\infty/\ell_0$-norm constrained weighted sparse
PLS ($\ell_\infty/\ell_0$-wsPLS) method for joint sample and feature selection,
where the $\ell_\infty/\ell_0$-norm constraints are used to select a subset of
samples. We prove that the $\ell_\infty/\ell_0$-norm constraints have the
Kurdyka-Łojasiewicz property, so a globally convergent algorithm can be
developed to solve the model. Moreover, multi-view data sharing the same set
of samples arise in various real problems. To this end, we extend the
$\ell_\infty/\ell_0$-wsPLS model and propose two multi-view wsPLS models for
multi-view data fusion. We develop an efficient iterative algorithm for each
multi-view wsPLS model and show its convergence property. Numerical and
biomedical data experiments demonstrate the efficiency of the proposed
methods.
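
To make the joint selection mechanism concrete, below is a minimal NumPy sketch of the kind of alternating scheme the abstract describes: hard-thresholding updates for the feature loadings u and v, and an $\ell_\infty/\ell_0$-feasible update for the sample weights w. The update rules, names, and parameters here are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def wspls_sketch(X, Y, ku, kv, kw, n_iter=100, seed=0):
    """Illustrative l_inf/l_0-constrained weighted sparse PLS.

    Alternately maximizes  u^T X^T diag(w) Y v  subject to
      ||u||_0 <= ku, ||v||_0 <= kv   (feature selection in each view)
      ||w||_0 <= kw, 0 <= w_i <= 1   (sample selection).
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    q = Y.shape[1]
    u = rng.normal(size=p); u /= np.linalg.norm(u)
    v = rng.normal(size=q); v /= np.linalg.norm(v)
    w = np.ones(n)

    def hard_threshold(z, k):
        # keep the k largest-magnitude entries, zero the rest, renormalize
        out = np.zeros_like(z)
        idx = np.argsort(np.abs(z))[-k:]
        out[idx] = z[idx]
        nrm = np.linalg.norm(out)
        return out / nrm if nrm > 0 else out

    for _ in range(n_iter):
        u = hard_threshold(X.T @ (w * (Y @ v)), ku)  # view-1 loadings
        v = hard_threshold(Y.T @ (w * (X @ u)), kv)  # view-2 loadings
        # each sample's contribution to the weighted covariance objective
        s = (X @ u) * (Y @ v)
        w = np.zeros(n)
        top = np.argsort(s)[-kw:]
        w[top[s[top] > 0]] = 1.0  # keep only samples that help the objective
    return u, v, w
```

Samples with w_i = 1 form the selected subset. Because each block update exactly maximizes the bilinear objective over its own constraint set, the objective is non-decreasing across iterations, which is consistent in spirit with the global convergence the paper establishes via the Kurdyka-Łojasiewicz property.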
Related papers
- Computational-Statistical Gaps in Gaussian Single-Index Models [77.1473134227844]
Single-Index Models are high-dimensional regression problems with planted structure.
We show that computationally efficient algorithms, both within the Statistical Query (SQ) and the Low-Degree Polynomial (LDP) frameworks, necessarily require $\Omega(d^{k^\star/2})$ samples.
arXiv Detail & Related papers (2024-03-08T18:50:19Z) - EM for Mixture of Linear Regression with Clustered Data [6.948976192408852]
We discuss how the underlying clustered structures in distributed data can be exploited to improve learning schemes.
We employ the well-known Expectation-Maximization (EM) method to estimate the maximum likelihood parameters from $m$ batches of dependent samples.
We show that, if initialized properly, EM on the structured data requires only $O(1)$ iterations to reach the same statistical accuracy, as long as $m$ grows as $e^{o(n)}$.
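As a reference point for this entry, here is a minimal single-machine sketch of EM for a two-component mixture of linear regressions; it illustrates only the E- and M-steps, not the paper's clustered, multi-batch setting, and all names and defaults are assumptions.

```python
import numpy as np

def em_mixture_linreg(X, y, n_iter=50, seed=0):
    """EM for a two-component mixture of linear regressions.

    E-step: posterior responsibility of each component for each sample.
    M-step: responsibility-weighted least squares per component.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    betas = rng.normal(size=(2, d))   # one regression vector per component
    pi, sigma2 = 0.5, 1.0             # mixing weight and shared noise level
    for _ in range(n_iter):
        # E-step: Gaussian log-likelihood of the residual under each component
        resid = y[:, None] - X @ betas.T                 # shape (n, 2)
        logp = -0.5 * resid**2 / sigma2 + np.log([pi, 1.0 - pi])
        r = np.exp(logp - logp.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)                # responsibilities
        # M-step: weighted least squares, then update pi and sigma2
        for k in range(2):
            Wk = r[:, k]
            betas[k] = np.linalg.solve(X.T @ (Wk[:, None] * X), X.T @ (Wk * y))
        pi = float(r[:, 0].mean())
        sigma2 = float(np.sum(r * resid**2) / n)
    return betas, pi, sigma2
```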
arXiv Detail & Related papers (2023-08-22T15:47:58Z) - A model-free feature selection technique of feature screening and random
forest based recursive feature elimination [0.0]
We propose a model-free feature selection method for ultra-high dimensional data with mass features.
We show that the proposed method is selection consistent and $L_2$ consistent under weak regularity conditions.
arXiv Detail & Related papers (2023-02-15T03:39:16Z) - Random Manifold Sampling and Joint Sparse Regularization for Multi-label
Feature Selection [0.0]
The model proposed in this paper can obtain the most relevant few features by solving joint constrained optimization problems with $\ell_{2,1}$ and $\ell_F$ regularization.
Comparative experiments on real-world data sets show that the proposed method outperforms other methods.
arXiv Detail & Related papers (2022-04-13T15:06:12Z) - Multi-Sample $\zeta$-mixup: Richer, More Realistic Synthetic Samples
from a $p$-Series Interpolant [16.65329510916639]
We propose $\zeta$-mixup, a generalization of mixup with provably and demonstrably desirable properties.
We show that our implementation of $\zeta$-mixup is faster than mixup, and extensive evaluation on controlled synthetic and 24 real-world natural and medical image classification datasets shows that $\zeta$-mixup outperforms mixup and traditional data augmentation techniques.
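As a rough illustration of the idea, the sketch below mixes an entire batch using normalized p-series weights assigned through a random permutation, so each synthetic sample stays close to one real sample. The exact weighting scheme and the default $\gamma$ are our reading of the abstract, not a verified reimplementation.

```python
import numpy as np

def zeta_mixup(X, Y, gamma=2.4, seed=0):
    """Mix a whole batch with normalized p-series weights.

    Each synthetic sample is a convex combination of all n batch samples,
    with weights w_k proportional to k^(-gamma) assigned through a random
    permutation, so one real sample dominates each output.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    base = np.arange(1, n + 1, dtype=float) ** -gamma
    base /= base.sum()                       # normalized p-series weights
    W = np.empty((n, n))
    for i in range(n):
        W[i, rng.permutation(n)] = base      # randomly permute the weights
    X_mix = (W @ X.reshape(n, -1)).reshape(X.shape)
    Y_mix = W @ Y                            # soft labels from one-hot Y
    return X_mix, Y_mix
```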
arXiv Detail & Related papers (2022-04-07T09:41:09Z) - Learning Mixtures of Linear Dynamical Systems [94.49754087817931]
We develop a two-stage meta-algorithm to efficiently recover each ground-truth LDS model up to error $\tilde{O}(\sqrt{d/T})$.
We validate our theoretical studies with numerical experiments, confirming the efficacy of the proposed algorithm.
arXiv Detail & Related papers (2022-01-26T22:26:01Z) - Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature
Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method, by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
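To illustrate how row-sparse regularization performs feature selection, here is a plain gradient-descent sketch of the objective $\min_W \|X - XWW^\top\|_F^2 + \lambda\|W\|_{2,p}$ with a smoothed row norm; the optimizer and all hyperparameters are assumptions, as the paper derives its own dedicated algorithm with convergence guarantees.

```python
import numpy as np

def l2p_feature_ranking(X, k, lam=0.1, p=0.5, lr=1e-3, n_iter=500,
                        eps=1e-8, seed=0):
    """Rank features via min_W ||X - X W W^T||_F^2 + lam * ||W||_{2,p}.

    Plain gradient descent on a smoothed objective; the row l2-norms of W
    serve as feature-importance scores (rows driven to zero are dropped).
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    A = X.T @ X
    W = rng.normal(scale=0.01, size=(d, k))
    for _ in range(n_iter):
        AW = A @ W
        # gradient of the reconstruction term ||X - X W W^T||_F^2
        grad = -4.0 * AW + 2.0 * (AW @ (W.T @ W) + W @ (W.T @ AW))
        # gradient of the smoothed l_{2,p} row-sparsity regularizer
        row = np.sum(W**2, axis=1) + eps
        grad += lam * p * (row ** (p / 2.0 - 1.0))[:, None] * W
        W -= lr * grad                       # step size may need tuning
    scores = np.linalg.norm(W, axis=1)       # row norm = feature relevance
    return np.argsort(scores)[::-1]          # features, most relevant first
```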
arXiv Detail & Related papers (2020-12-29T04:08:38Z) - Linear-Sample Learning of Low-Rank Distributions [56.59844655107251]
We show that learning $k\times k$, rank-$r$ matrices to normalized $L_1$ distance requires $\Omega(\frac{kr}{\epsilon^2})$ samples.
We propose an algorithm that uses ${\cal O}(\frac{kr}{\epsilon^2}\log^2\frac{r}{\epsilon})$ samples, a number linear in the high dimension and nearly linear in the matrices' typically low rank.
arXiv Detail & Related papers (2020-09-30T19:10:32Z) - A Provably Efficient Sample Collection Strategy for Reinforcement
Learning [123.69175280309226]
One of the challenges in online reinforcement learning (RL) is that the agent needs to trade off the exploration of the environment and the exploitation of the samples to optimize its behavior.
We propose to tackle the exploration-exploitation problem following a decoupled approach composed of: 1) An "objective-specific" algorithm that prescribes how many samples to collect at which states, as if it has access to a generative model (i.e., sparse simulator of the environment); 2) An "objective-agnostic" sample collection responsible for generating the prescribed samples as fast as possible.
arXiv Detail & Related papers (2020-07-13T15:17:35Z) - Non-Adaptive Adaptive Sampling on Turnstile Streams [57.619901304728366]
We give the first relative-error algorithms for column subset selection, subspace approximation, projective clustering, and volume maximization on turnstile streams that use space sublinear in $n$.
Our adaptive sampling procedure has a number of applications to various data summarization problems that either improve state-of-the-art or have only been previously studied in the more relaxed row-arrival model.
arXiv Detail & Related papers (2020-04-23T05:00:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.