High-Dimensional Feature Selection for Genomic Datasets
- URL: http://arxiv.org/abs/2002.12104v2
- Date: Tue, 18 May 2021 01:00:02 GMT
- Title: High-Dimensional Feature Selection for Genomic Datasets
- Authors: Majid Afshar, Hamid Usefi
- Abstract summary: A central problem in machine learning and pattern recognition is the process of recognizing the most important features.
In this paper, we provide a new feature selection method (DRPT) that consists of first removing the irrelevant features and then detecting correlations between the remaining features.
The effectiveness of DRPT has been verified by performing a series of comparisons with seven state-of-the-art feature selection methods over ten genetic datasets.
- Score: 3.9499155245102275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A central problem in machine learning and pattern recognition is the process
of recognizing the most important features. In this paper, we provide a new
feature selection method (DRPT) that consists of first removing the irrelevant
features and then detecting correlations between the remaining features. Let
$D=[A\mid \mathbf{b}]$ be a dataset, where $\mathbf{b}$ is the class label and
$A$ is a matrix whose columns are the features. We solve $A\mathbf{x} =
\mathbf{b}$ using the least squares method and the pseudo-inverse of $A$. Each
component of $\mathbf{x}$ can be viewed as an assigned weight to the
corresponding column (feature). We define a threshold based on the local maxima
of $\mathbf{x}$ and remove those features whose weights are smaller than the
threshold.
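As a concrete illustration, here is a minimal sketch of this first stage in Python, assuming a NumPy/SciPy environment; the function name `relevance_filter` and the exact threshold rule (averaging the local maxima of the weight profile) are illustrative assumptions, not the paper's precise definitions.
```python
import numpy as np
from scipy.signal import argrelextrema

def relevance_filter(A, b):
    """Sketch of a DRPT-style first stage: weight features by a least-squares
    solution of A x = b and drop features whose weights fall below a
    threshold derived from the local maxima of the weight profile.

    A : (n_samples, n_features) matrix whose columns are features.
    b : (n_samples,) vector of class labels.
    Returns the indices of the retained columns.
    """
    # Minimum-norm least-squares solution via the pseudo-inverse of A.
    x = np.linalg.pinv(A) @ b
    w = np.abs(x)  # weight assigned to each feature (column)

    # Illustrative threshold rule (an assumption, not the paper's exact
    # definition): average the local maxima of the weight profile.
    maxima = argrelextrema(w, np.greater)[0]
    threshold = w[maxima].mean() if maxima.size else w.mean()

    return np.where(w >= threshold)[0]
```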
To detect the correlations in the reduced matrix, which we still call $A$, we
consider a perturbation $\tilde A$ of $A$. We prove that correlations are
encoded in $\Delta\mathbf{x}=\mid \mathbf{x} -\tilde{\mathbf{x}}\mid $, where
$\tilde{\mathbf{x}}$ is the least squares solution of
$\tilde A\tilde{\mathbf{x}}=\mathbf{b}$. We cluster features first based on
$\Delta\mathbf{x}$ and then using the entropy of features. Finally, a feature
is selected from each sub-cluster based on its weight and entropy. The
effectiveness of DRPT has been verified by performing a series of comparisons
with seven state-of-the-art feature selection methods over ten genetic datasets
ranging from 9,117 to 267,604 features. The results show that, overall, the
performance of DRPT compares favorably in several respects with each of these
feature selection algorithms.
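To see informally why correlations show up in $\Delta\mathbf{x}$, consider a toy example (ours, not the paper's): take $A=[\mathbf{a}\mid \mathbf{a}]$ with a duplicated column $\mathbf{a}$ and $\mathbf{b}=\mathbf{a}$. The minimum-norm least squares solution splits the weight evenly, $\mathbf{x}=(\tfrac{1}{2},\tfrac{1}{2})^T$, because the pseudo-inverse picks the smallest-norm solution among the infinitely many exact ones. Perturbing only the second column breaks this symmetry, so $\tilde{\mathbf{x}}$ redistributes the weight and both components of $\Delta\mathbf{x}=\mid \mathbf{x}-\tilde{\mathbf{x}}\mid$ become noticeably nonzero, whereas the weight of a feature uncorrelated with all others is stable under a small perturbation.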
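A minimal sketch of the second stage under the same assumptions follows; the noise scale, the number of clusters, and the histogram-based entropy estimator are placeholders, and the paper's two-level scheme (cluster on $\Delta\mathbf{x}$, then refine sub-clusters by entropy) is compressed here into a single pass.
```python
import numpy as np
from scipy.stats import entropy
from sklearn.cluster import KMeans

def correlation_cluster_select(A, b, noise=1e-3, n_clusters=10, seed=0):
    """Sketch of a DRPT-style second stage: perturb A, re-solve the
    least-squares problem, cluster features by how much their weights move,
    and keep one representative per cluster based on weight and entropy."""
    rng = np.random.default_rng(seed)
    x = np.linalg.pinv(A) @ b                            # weights on A
    A_tilde = A + noise * rng.standard_normal(A.shape)   # perturbation of A
    x_tilde = np.linalg.pinv(A_tilde) @ b                # weights on A~
    delta = np.abs(x - x_tilde)                          # encodes correlations

    # Group features whose weights react similarly to the perturbation.
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(delta.reshape(-1, 1))

    selected = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        # Placeholder score: |weight| times a histogram-based entropy.
        ent = np.array([entropy(np.histogram(A[:, j], bins=10)[0] + 1e-12)
                        for j in idx])
        selected.append(idx[np.argmax(np.abs(x[idx]) * ent)])
    return np.sort(np.array(selected))
```
In practice the noise scale would need tuning relative to the spectrum of $A$; the point of the sketch is only the $\Delta\mathbf{x}$ mechanism.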
Related papers
- The Communication Complexity of Approximating Matrix Rank [50.6867896228563]
We show that this problem has randomized communication complexity $\Omega(\frac{1}{k}\cdot n^2\log|\mathbb{F}|)$.
As an application, we obtain an $\Omega(\frac{1}{k}\cdot n^2\log|\mathbb{F}|)$ space lower bound for any streaming algorithm with $k$ passes.
arXiv Detail & Related papers (2024-10-26T06:21:42Z) - Provably learning a multi-head attention layer [55.2904547651831]
The multi-head attention layer is one of the key components of the transformer architecture, setting it apart from traditional feed-forward models.
In this work, we initiate the study of provably learning a multi-head attention layer from random examples.
We prove computational lower bounds showing that in the worst case, exponential dependence on $m$ is unavoidable.
arXiv Detail & Related papers (2024-02-06T15:39:09Z) - Solving Dense Linear Systems Faster Than via Preconditioning [1.8854491183340518]
We show that our algorithm has an $\tilde{O}(n^2)$ runtime when $k=O(n^{0.729})$.
Our main algorithm can be viewed as a randomized block coordinate descent method.
arXiv Detail & Related papers (2023-12-14T12:53:34Z) - SQ Lower Bounds for Learning Mixtures of Linear Classifiers [43.63696593768504]
We show that known algorithms for this problem are essentially best possible, even for the special case of uniform mixtures.
The key technical ingredient is a new construction of spherical designs that may be of independent interest.
arXiv Detail & Related papers (2023-10-18T10:56:57Z) - Randomized and Deterministic Attention Sparsification Algorithms for
Over-parameterized Feature Dimension [18.57735939471469]
We consider the sparsification of the attention problem.
For any super-large feature dimension, we can reduce it to a size nearly linear in the length of the sentence.
arXiv Detail & Related papers (2023-04-10T05:52:38Z) - Learning a Single Neuron with Adversarial Label Noise via Gradient
Descent [50.659479930171585]
We study a function of the form $\mathbf{x}\mapsto\sigma(\mathbf{w}\cdot\mathbf{x})$ for monotone activations.
The goal of the learner is to output a hypothesis vector $\mathbf{w}$ such that $F(\mathbf{w})=C\,\epsilon$ with high probability.
arXiv Detail & Related papers (2022-06-17T17:55:43Z) - Random matrices in service of ML footprint: ternary random features with
no performance loss [55.30329197651178]
We show that the eigenspectrum of $\mathbf{K}$ is independent of the distribution of the i.i.d. entries of $\mathbf{w}$.
We propose a novel random features technique, called Ternary Random Feature (TRF).
The computation of the proposed random features requires no multiplication and a factor of $b$ fewer bits for storage compared to classical random features.
arXiv Detail & Related papers (2021-10-05T09:33:49Z) - Spectral properties of sample covariance matrices arising from random
matrices with independent non identically distributed columns [50.053491972003656]
It was previously shown that the functionals $\operatorname{tr}(AR(z))$, for $R(z) = (\frac{1}{n}XX^T - zI_p)^{-1}$ and $A\in \mathcal{M}_p$ deterministic, have a standard deviation of order $O(\|A\|_*/\sqrt{n})$.
Here, we bound $\|\mathbb{E}[R(z)] - \tilde{R}(z)\|_F$ for a deterministic equivalent $\tilde{R}(z)$ of the resolvent $R(z)$.
arXiv Detail & Related papers (2021-09-06T14:21:43Z) - Learning a Latent Simplex in Input-Sparsity Time [58.30321592603066]
We consider the problem of learning a latent $k$-vertex simplex $K\subset\mathbb{R}^{d\times n}$, given access to $A\in\mathbb{R}^{d\times n}$.
We show that the dependence on $k$ in the running time is unnecessary given a natural assumption about the mass of the top $k$ singular values of $A$.
arXiv Detail & Related papers (2021-05-17T16:40:48Z) - Block Model Guided Unsupervised Feature Selection [32.21728295212875]
We present a novel approach for graph-driven unsupervised feature selection for linked data.
We take the novel approach of first building a block model on the graph and then using the block model for feature selection.
Experimental results show that our method outperforms the state of the art on several real-world public datasets.
arXiv Detail & Related papers (2020-07-05T16:19:47Z)