MiniAnDE: a reduced AnDE ensemble to deal with microarray data
- URL: http://arxiv.org/abs/2311.12879v1
- Date: Mon, 20 Nov 2023 18:12:55 GMT
- Title: MiniAnDE: a reduced AnDE ensemble to deal with microarray data
- Authors: Pablo Torrijos, José A. Gámez, José M. Puerta
- Abstract summary: MiniAnDE is an algorithm that includes only a small number of heterogeneous base classifiers in the ensemble.
This article focuses on the supervised classification of datasets with a large number of variables and a small number of instances.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This article focuses on the supervised classification of datasets with a
large number of variables and a small number of instances. This is the case,
for example, for microarray datasets commonly used in bioinformatics. Complex
classifiers that require estimating statistics over many variables are not
suitable for this type of data. Probabilistic classifiers with low-order
probability tables, e.g., naive Bayes (NB) and averaged one-dependence
estimators (AODE), are good alternatives for dealing with this type of data.
AODE usually improves on NB in accuracy, but suffers from high space
complexity, since $k$ models, each with $k+1$ variables, are included in the
AODE ensemble. In this paper, we propose MiniAnDE, an algorithm that
includes only a small number of heterogeneous base classifiers in the ensemble,
i.e., each model only includes a different subset of the $k$ predictive
variables. Experimental evaluation shows that using MiniAnDE classifiers on
microarray data is feasible and outperforms NB and other ensembles such as
bagging and random forest.
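For intuition, here is a minimal, illustrative Python sketch of the idea: a handful of SPODE-style models, each restricted to its own subset of variables, averaged at prediction time. The subset and superparent choices below are random placeholders, not the paper's actual selection procedure, and discrete integer-coded features are assumed.

```python
import numpy as np

class SPODE:
    """One-dependence estimator: each feature depends on (class, superparent)."""
    def __init__(self, sp, feats):
        self.sp, self.feats = sp, feats      # superparent column, child columns

    def fit(self, X, y, alpha=1.0):
        self.classes = np.unique(y)
        self.sp_vals = np.unique(X[:, self.sp])
        # Smoothed joint table P(y, x_sp)
        counts = np.array([[np.sum((y == c) & (X[:, self.sp] == v))
                            for v in self.sp_vals] for c in self.classes])
        self.joint = (counts + alpha) / (counts + alpha).sum()
        # Conditional tables P(x_i | y, x_sp) for each feature in the subset
        self.cond = {}
        for f in self.feats:
            vals = np.unique(X[:, f])
            t = np.empty((len(self.classes), len(self.sp_vals), len(vals)))
            for a, c in enumerate(self.classes):
                for b, v in enumerate(self.sp_vals):
                    m = (y == c) & (X[:, self.sp] == v)
                    cnt = np.array([np.sum(X[m, f] == w) for w in vals])
                    t[a, b] = (cnt + alpha) / (cnt.sum() + alpha * len(vals))
            self.cond[f] = (vals, t)
        return self

    def log_proba(self, x):
        # log P(y, x_sp) + sum_i log P(x_i | y, x_sp); assumes values seen in training
        b = int(np.searchsorted(self.sp_vals, x[self.sp]))
        lp = np.log(self.joint[:, b])
        for f in self.feats:
            vals, t = self.cond[f]
            lp += np.log(t[:, b, int(np.searchsorted(vals, x[f]))])
        return lp

class MiniAnDESketch:
    """Reduced AnDE-style ensemble: a few SPODEs over different variable subsets."""
    def __init__(self, n_models=3, subset_size=5, seed=0):
        self.n_models, self.subset_size = n_models, subset_size
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_vars = X.shape[1]
        self.models = []
        for _ in range(self.n_models):
            cols = self.rng.choice(n_vars, size=min(self.subset_size + 1, n_vars),
                                   replace=False)
            self.models.append(SPODE(cols[0], cols[1:]).fit(X, y))
        return self

    def predict(self, X):
        classes = self.models[0].classes
        # Uniform average of the per-model class posteriors
        return np.array([classes[np.argmax(
            np.mean([np.exp(m.log_proba(x)) for m in self.models], axis=0))]
            for x in X])
```

Space usage scales with the chosen subsets rather than with all $k$ variables, which is the point of the reduction.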
Related papers
- Variance Alignment Score: A Simple But Tough-to-Beat Data Selection
Method for Multimodal Contrastive Learning [17.40655778450583]
We propose a principled metric named Variance Alignment Score (VAS), which has the form $\langle \Sigma_{\text{test}}, \Sigma_i \rangle$.
We show that applying VAS and CLIP scores together can outperform baselines by a margin of $1.3\%$ on 38 evaluation sets for the noisy dataset DataComp and $2.5\%$ on VTAB for the high-quality dataset CC12M.
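As a concrete reading of that formula, a short sketch (assumed details for illustration: $\Sigma_i$ taken as the outer product of sample $i$'s embedding, and an uncentered second-moment matrix for the test side):

```python
import numpy as np

def variance_alignment_scores(train_emb, test_emb):
    # Sigma_test: (d, d) second-moment matrix of the target/test embeddings
    sigma_test = test_emb.T @ test_emb / len(test_emb)
    # <Sigma_test, f_i f_i^T> = f_i^T Sigma_test f_i, for every train sample i
    return np.einsum('id,de,ie->i', train_emb, sigma_test, train_emb)

rng = np.random.default_rng(0)
train, test = rng.normal(size=(1000, 64)), rng.normal(size=(200, 64))
keep = np.argsort(variance_alignment_scores(train, test))[::-1][:100]  # top-100
```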
arXiv Detail & Related papers (2024-02-03T06:29:04Z)
- EM for Mixture of Linear Regression with Clustered Data [6.948976192408852]
We discuss how the underlying clustered structures in distributed data can be exploited to improve learning schemes.
We employ the well-known Expectation-Maximization (EM) method to estimate the maximum likelihood parameters from $m$ batches of dependent samples.
We show that, if properly initialized, EM on the structured data requires only $O(1)$ iterations to reach the same statistical accuracy, provided that $m$ grows as $e^{o(n)}$.
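For reference, the base E/M updates for a two-component mixture of linear regressions look like the following sketch (plain, single-machine EM with a known noise scale; the paper's clustered, distributed-batch analysis is not reproduced here):

```python
import numpy as np

def em_mix_linreg(X, y, iters=50, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    betas = rng.normal(size=(2, d))              # initial regression vectors
    pi = np.array([0.5, 0.5])                    # mixing weights
    for _ in range(iters):
        # E-step: responsibilities under Gaussian noise of scale sigma
        resid = y[:, None] - X @ betas.T         # shape (n, 2)
        logp = -0.5 * (resid / sigma) ** 2 + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted least squares per component, then update weights
        for k in range(2):
            Xw = X * r[:, k:k + 1]
            betas[k] = np.linalg.solve(Xw.T @ X + 1e-8 * np.eye(d), Xw.T @ y)
        pi = r.mean(axis=0)
    return betas, pi
```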
arXiv Detail & Related papers (2023-08-22T15:47:58Z)
- Mean Estimation with User-level Privacy under Data Heterogeneity [54.07947274508013]
Different users may possess vastly different numbers of data points.
It cannot be assumed that all users sample from the same underlying distribution.
We propose a simple model of heterogeneous user data that allows user data to differ in both distribution and quantity of data.
arXiv Detail & Related papers (2023-07-28T23:02:39Z)
- Conformalization of Sparse Generalized Linear Models [2.1485350418225244]
The conformal prediction method estimates a confidence set for $y_{n+1}$ that is valid for any finite sample size.
Although attractive, computing such a set is infeasible in most regression problems.
We show how our path-following algorithm accurately approximates conformal prediction sets.
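The paper's contribution is a path-following approximation to full conformal sets for sparse GLMs; as background, a plain split-conformal construction for regression (with any sklearn-style estimator) already gives the finite-sample validity the summary mentions:

```python
import numpy as np

def split_conformal_interval(model, X_tr, y_tr, X_cal, y_cal, x_new, alpha=0.1):
    model.fit(X_tr, y_tr)
    scores = np.sort(np.abs(y_cal - model.predict(X_cal)))   # residual scores
    k = int(np.ceil((1 - alpha) * (len(scores) + 1)))        # conformal rank
    q = scores[k - 1] if k <= len(scores) else np.inf
    y_hat = model.predict(x_new.reshape(1, -1))[0]
    return y_hat - q, y_hat + q   # covers y_{n+1} with prob >= 1 - alpha

# Usage with a sparse linear model (sklearn assumed available):
# from sklearn.linear_model import Lasso
# lo, hi = split_conformal_interval(Lasso(0.1), X_tr, y_tr, X_cal, y_cal, x_new)
```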
arXiv Detail & Related papers (2023-07-11T08:36:12Z)
- Uncertainty Quantification of MLE for Entity Ranking with Covariates [3.2839905453386162]
This paper concerns statistical estimation and inference for ranking problems based on pairwise comparisons.
We propose a novel Covariate-Assisted Ranking Estimation (CARE) model that extends the well-known Bradley-Terry-Luce (BTL) model.
We derive the maximum likelihood estimator of $\{\alpha_i^*\}_{i=1}^{n}$ and $\beta^*$ under a sparse comparison graph.
We validate our theoretical results through large-scale numerical studies and an application to the mutual fund stock holding dataset.
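In notation matching the summary (an assumed-standard form; the exact display is in the paper), the BTL comparison probability and its covariate-assisted score are:

```latex
% BTL: item i beats item j with probability
P(i \succ j) = \frac{e^{\theta_i}}{e^{\theta_i} + e^{\theta_j}},
% and CARE replaces the raw score with an intrinsic effect plus a covariate term:
\qquad \theta_i = \alpha_i^* + x_i^\top \beta^*.
```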
arXiv Detail & Related papers (2022-12-20T02:28:27Z)
- Learning from aggregated data with a maximum entropy model [73.63512438583375]
We show how a new model, similar to a logistic regression, may be learned from aggregated data alone by approximating the unobserved feature distribution with a maximum entropy hypothesis.
We present empirical evidence on several public datasets that the model learned this way can achieve performances comparable to those of a logistic model trained with the full unaggregated data.
arXiv Detail & Related papers (2022-10-05T09:17:27Z)
- Learning Shared Kernel Models: the Shared Kernel EM algorithm [0.0]
Expectation maximisation (EM) is an unsupervised learning method for estimating the parameters of a finite mixture distribution.
We first present a rederivation of the standard EM algorithm using data association ideas from the field of multiple target tracking.
The same method is then applied to a little-known but much more general type of supervised EM algorithm for shared kernel models.
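The unsupervised baseline referred to in the first sentence is standard mixture EM; a compact one-dimensional Gaussian-mixture version (not the paper's shared-kernel variant) is sketched below:

```python
import numpy as np

def gmm_em(x, iters=100):
    mu = np.array([x.min(), x.max()])            # crude initialization
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        dens = pi / np.sqrt(2 * np.pi * var) * \
               np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted moment updates
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return mu, var, pi
```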
arXiv Detail & Related papers (2022-05-15T10:10:08Z)
- MURAL: An Unsupervised Random Forest-Based Embedding for Electronic
Health Record Data [59.26381272149325]
We present an unsupervised random forest for representing data with disparate variable types.
MURAL forests consist of a set of decision trees where node-splitting variables are chosen at random.
We show that using our approach, we can visualize and classify data more accurately than competing approaches.
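A toy rendering of the random node-splitting idea (real MURAL additionally handles disparate variable types and missing values, which this sketch ignores):

```python
import numpy as np

def build_random_tree(X, depth, rng, min_leaf=5):
    """Each internal node picks a random variable and a random threshold."""
    if depth == 0 or len(X) <= min_leaf:
        return {'leaf': True, 'size': len(X)}
    j = int(rng.integers(X.shape[1]))          # split variable chosen at random
    lo, hi = X[:, j].min(), X[:, j].max()
    if lo == hi:
        return {'leaf': True, 'size': len(X)}
    t = rng.uniform(lo, hi)                    # random threshold in the range
    mask = X[:, j] <= t
    return {'leaf': False, 'var': j, 'thr': t,
            'left': build_random_tree(X[mask], depth - 1, rng, min_leaf),
            'right': build_random_tree(X[~mask], depth - 1, rng, min_leaf)}

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
forest = [build_random_tree(X, depth=6, rng=rng) for _ in range(10)]
```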
arXiv Detail & Related papers (2021-11-19T22:02:21Z)
- Understanding Dataset Difficulty with $\mathcal{V}$-Usable Information [67.25713071340518]
Estimating the difficulty of a dataset typically involves comparing state-of-the-art models to humans.
We frame dataset difficulty as the lack of $\mathcal{V}$-usable information.
We also introduce pointwise $\mathcal{V}$-information (PVI) for measuring the difficulty of individual instances.
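For concreteness, the two quantities have the following assumed-standard forms (notation reconstructed from the summary; $g$ is a model from family $\mathcal{V}$ fit on real inputs and $g'$ one fit on a null input $\varnothing$):

```latex
I_{\mathcal{V}}(X \to Y) = H_{\mathcal{V}}(Y) - H_{\mathcal{V}}(Y \mid X),
\qquad
\mathrm{PVI}(x, y) = -\log_2 g'[\varnothing](y) + \log_2 g[x](y).
```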
arXiv Detail & Related papers (2021-10-16T00:21:42Z)
- Examining and Combating Spurious Features under Distribution Shift [94.31956965507085]
We define and analyze robust and spurious representations using the information-theoretic concept of minimal sufficient statistics.
We prove that even when only the input distribution is biased, models can still pick up spurious features from their training data.
Inspired by our analysis, we demonstrate that group distributionally robust optimization (group DRO) can fail when groups do not directly account for various spurious correlations.
arXiv Detail & Related papers (2021-06-14T05:39:09Z)
- Fuzzy Clustering with Similarity Queries [56.96625809888241]
The fuzzy or soft objective is a popular generalization of the well-known $k$-means problem.
We show that by making a small number of similarity queries, the problem becomes easier to solve.
arXiv Detail & Related papers (2021-06-04T02:32:26Z)
- Sampling from a $k$-DPP without looking at all items [58.30573872035083]
Given a kernel function and a subset size $k$, our goal is to sample $k$ out of $n$ items with probability proportional to the determinant of the kernel matrix induced by the subset (a.k.a. $k$-DPP).
Existing $k$-DPP sampling algorithms require an expensive preprocessing step which involves multiple passes over all $n$ items, making it infeasible for large datasets.
We develop an algorithm which adaptively builds a sufficiently large uniform sample of data that is then used to efficiently generate a smaller set of $k$ items.
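The adaptive construction itself is involved; to make the target distribution concrete, here is a plain exchange-walk MCMC sampler for a $k$-DPP (every step swaps one item and accepts with the determinant ratio; this is not the paper's algorithm):

```python
import numpy as np

def kdpp_mcmc(L, k, steps=2000, seed=0):
    """Sample a k-subset S with probability proportional to det(L_S)."""
    rng = np.random.default_rng(seed)
    n = L.shape[0]
    S = list(rng.choice(n, size=k, replace=False))
    det_S = np.linalg.det(L[np.ix_(S, S)])
    for _ in range(steps):
        i = int(rng.integers(k))                     # position in S to swap out
        j = int(rng.choice([t for t in range(n) if t not in S]))
        T = S.copy(); T[i] = j
        det_T = np.linalg.det(L[np.ix_(T, T)])
        # Metropolis accept with ratio det(L_T) / det(L_S)
        if det_S <= 0 or rng.random() < det_T / det_S:
            S, det_S = T, det_T
    return sorted(S)

# Example: RBF kernel over random points
pts = np.random.default_rng(1).normal(size=(200, 2))
L = np.exp(-((pts[:, None] - pts[None]) ** 2).sum(-1))
sample = kdpp_mcmc(L, k=10)
```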
arXiv Detail & Related papers (2020-06-30T16:40:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.