Conditional Variable Selection for Intelligent Test
- URL: http://arxiv.org/abs/2207.00335v1
- Date: Fri, 1 Jul 2022 11:01:53 GMT
- Title: Conditional Variable Selection for Intelligent Test
- Authors: Yiwen Liao, Tianjie Ge, Raphaël Latty, Bin Yang
- Abstract summary: We discuss a novel conditional variable selection framework that can select the most important candidate variables given a set of preselected variables.
- Score: 5.904240881373805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intelligent test requires efficient and effective analysis of high-dimensional data at a large scale. Traditionally, this analysis is conducted by human experts, but that does not scale in the era of big data. To tackle this challenge, variable selection has recently been introduced to intelligent test. In practice, however, we encounter scenarios where certain variables (e.g., some specific processing conditions for a device under test) must be maintained after variable selection. We call this conditional variable selection, which has not been well investigated for embedded or deep-learning-based variable selection methods. In this paper, we discuss a novel conditional variable selection framework that can select the most important candidate variables given a set of preselected variables.
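The abstract does not spell out the framework's architecture, so the following is only a minimal sketch of one way such a setup can be realized (the gate-vector formulation, class names, and hyperparameters are illustrative assumptions, not the authors' design): a learnable gate over the candidate variables is trained jointly with a small predictor, while the preselected variables bypass the gates so they are always kept.

```python
import torch
import torch.nn as nn

class ConditionalGateSelector(nn.Module):
    """Candidate variables pass through learnable gates; preselected
    variables bypass the gates and are therefore always retained."""

    def __init__(self, n_candidates, n_preselected, n_hidden=32):
        super().__init__()
        # One learnable logit per candidate variable; sigmoid maps it to (0, 1).
        self.gate_logits = nn.Parameter(torch.zeros(n_candidates))
        self.net = nn.Sequential(
            nn.Linear(n_candidates + n_preselected, n_hidden),
            nn.ReLU(),
            nn.Linear(n_hidden, 1),
        )

    def forward(self, x_cand, x_pre):
        gates = torch.sigmoid(self.gate_logits)         # soft selection mask
        x = torch.cat([x_cand * gates, x_pre], dim=-1)  # preselected skip the mask
        return self.net(x).squeeze(-1)

def select_top_k(model, x_cand, x_pre, y, k=5, l1=1e-2, epochs=500):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x_cand, x_pre), y)
        # Sparsity pressure: gates of uninformative candidates shrink to zero.
        loss = loss + l1 * torch.sigmoid(model.gate_logits).sum()
        loss.backward()
        opt.step()
    # Rank candidates by their learned gate values.
    return torch.sigmoid(model.gate_logits).topk(k).indices.tolist()
```

Candidates whose gates survive the sparsity penalty are the ones that remain most informative alongside the variables that must be kept.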
Related papers
- Model-independent variable selection via the rule-based variable priority [1.2771542695459488]
We introduce a new model-independent approach, Variable Priority (VarPro).
VarPro works by utilizing rules without the need to generate artificial data or evaluate prediction error.
We show that VarPro has a consistent filtering property for noise variables.
arXiv Detail & Related papers (2024-09-13T17:32:05Z)
- Contextual Feature Selection with Conditional Stochastic Gates [9.784482648233048]
Conditional Stochastic Gates (c-STG) model the importance of features using conditional variables whose parameters are predicted from contextual variables.
We show that c-STG can lead to improved feature selection capabilities while enhancing prediction accuracy and interpretability.
arXiv Detail & Related papers (2023-12-21T19:12:59Z)
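As a rough illustration of the gating idea in the entry above (the architecture and constants here are assumptions; see the paper for the exact c-STG formulation), a hypernetwork can predict per-feature gate parameters from the contextual variables, with hard-clipped Gaussian gates as in the stochastic-gates literature:

```python
import torch
import torch.nn as nn

class ContextualStochasticGates(nn.Module):
    def __init__(self, n_features, n_context, sigma=0.5):
        super().__init__()
        self.sigma = sigma
        # Hypernetwork: maps the contextual variables to one gate mean per feature.
        self.hyper = nn.Sequential(
            nn.Linear(n_context, 16), nn.ReLU(), nn.Linear(16, n_features)
        )

    def forward(self, x, context):
        mu = self.hyper(context)
        # Gaussian noise at train time makes the gates stochastic; hard
        # clipping to [0, 1] yields exact zeros and ones.
        noise = self.sigma * torch.randn_like(mu) if self.training else 0.0
        z = torch.clamp(mu + 0.5 + noise, 0.0, 1.0)
        return x * z, z  # gated features and the per-sample importance mask
```

Because the gate means depend on the context, the selected feature subset can change from one context to another, which is what distinguishes this from a global feature selector.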
- Large Language Models Are Not Robust Multiple Choice Selectors [117.72712117510953]
Multiple choice questions (MCQs) serve as a common yet important task format in the evaluation of large language models (LLMs).
This work shows that modern LLMs are vulnerable to option position changes due to their inherent "selection bias".
We propose a label-free, inference-time debiasing method, called PriDe, which separates the model's prior bias for option IDs from the overall prediction distribution.
arXiv Detail & Related papers (2023-09-07T17:44:56Z)
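The summary above gives only the high-level idea; here is a heavily simplified sketch of permutation-based prior estimation in the spirit of PriDe (the paper's actual estimation and aggregation details may differ): option-ID probabilities observed under cyclic permutations of the option contents are averaged in log space to estimate the ID prior, which is then divided out at prediction time.

```python
import numpy as np

def estimate_prior(obs_probs):
    """obs_probs[k, i]: observed probability of option ID i when the option
    contents have been cyclically shifted by k positions. Averaging the
    log-probabilities over permutations cancels the content-dependent terms,
    leaving (up to normalization) the model's prior over option IDs."""
    prior = np.exp(np.log(obs_probs).mean(axis=0))
    return prior / prior.sum()

def debias(obs_probs_default, prior):
    """Divide the estimated ID prior out of the default-order prediction."""
    p = obs_probs_default / prior
    return p / p.sum()

# Toy numbers for a model that systematically inflates the first option ID.
obs = np.array([
    [0.50, 0.20, 0.20, 0.10],  # original option order
    [0.55, 0.15, 0.20, 0.10],  # contents shifted by one position
    [0.60, 0.15, 0.15, 0.10],  # shifted by two
    [0.45, 0.25, 0.20, 0.10],  # shifted by three
])
prior = estimate_prior(obs)
print(debias(obs[0], prior))
```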
- AdaNPC: Exploring Non-Parametric Classifier for Test-Time Adaptation [64.9230895853942]
Domain generalization can be arbitrarily hard without exploiting target domain information.
Test-time adaptive (TTA) methods are proposed to address this issue.
In this work, we adopt a Non-Parametric Classifier to perform test-time Adaptation (AdaNPC).
arXiv Detail & Related papers (2023-04-25T04:23:13Z)
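A bare-bones sketch of the non-parametric test-time adaptation idea summarized above (the voting rule and memory update policy here are simplifying assumptions, not AdaNPC's exact procedure): a memory bank of source features and labels classifies each test point by a k-nearest-neighbor vote, then grows with the pseudo-labeled sample so it gradually adapts to the target domain.

```python
import numpy as np

def knn_vote(memory_x, memory_y, x, k=5):
    """Majority vote over the k nearest stored features.
    memory_y must be an integer label array for np.bincount."""
    dist = np.linalg.norm(memory_x - x, axis=1)
    nearest = np.argsort(dist)[:k]
    return np.bincount(memory_y[nearest]).argmax()

def adapt_stream(memory_x, memory_y, test_stream, k=5):
    preds = []
    for x in test_stream:
        y_hat = knn_vote(memory_x, memory_y, x, k)
        preds.append(y_hat)
        # Append the pseudo-labeled test sample so later predictions
        # draw on target-domain evidence as well.
        memory_x = np.vstack([memory_x, x])
        memory_y = np.append(memory_y, y_hat)
    return preds
```

Because the classifier is just a lookup over the memory, no gradient updates are needed at test time.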
- Inferring independent sets of Gaussian variables after thresholding correlations [1.3535770763481905]
We consider testing whether a set of Gaussian variables, selected from the data, is independent of the remaining variables.
We develop a new characterization of the conditioning event in terms of the canonical correlation between the groups of random variables.
In simulation studies and in the analysis of gene co-expression networks, we show that our approach has much higher power than a "naive" approach that ignores the effect of selection.
arXiv Detail & Related papers (2022-11-02T23:47:32Z)
- Few-shot Learning for Unsupervised Feature Selection [59.75321498170363]
We propose a few-shot learning method for unsupervised feature selection.
The proposed method can select a subset of relevant features in a target task given a few unlabeled target instances.
We experimentally demonstrate that the proposed method outperforms existing feature selection methods.
arXiv Detail & Related papers (2021-07-02T03:52:51Z)
- Fast Bayesian Variable Selection in Binomial and Negative Binomial Regression [9.774282306558465]
We introduce an efficient MCMC scheme for variable selection in binomial and negative binomial regression, which includes logistic regression as a special case.
In experiments we demonstrate the effectiveness of our approach, including on data with seventeen thousand covariates.
arXiv Detail & Related papers (2021-06-28T20:54:41Z)
- Safe Tests and Always-Valid Confidence Intervals for contingency tables and beyond [69.25055322530058]
We develop E-variables for testing whether two data streams come from the same source or not.
These E variables lead to tests that remain safe, under flexible sampling scenarios such as optional stopping and continuation.
arXiv Detail & Related papers (2021-06-04T20:12:13Z)
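The paper's contingency-table construction is involved; the toy below only illustrates the underlying "safe testing" principle for a simple null (a textbook e-process, not the paper's E-variables): the running product of likelihood ratios can be monitored continuously and stopped at any data-dependent time without inflating the type-I error.

```python
import random

def e_process(stream, q=0.7, alpha=0.05):
    """Running likelihood-ratio product for H0: X ~ Bernoulli(0.5)
    against the point alternative Bernoulli(q)."""
    e = 1.0
    for n, x in enumerate(stream, start=1):
        e *= (q if x == 1 else 1.0 - q) / 0.5
        # Ville's inequality: under H0 this process exceeds 1/alpha with
        # probability at most alpha, at ANY stopping time.
        if e >= 1.0 / alpha:
            return n, e  # safe to stop and reject H0 here
    return None, e

random.seed(0)
data = [1 if random.random() < 0.7 else 0 for _ in range(200)]
print(e_process(data))
```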
- True Few-Shot Learning with Language Models [78.42578316883271]
We evaluate the few-shot ability of LMs when held-out examples are unavailable.
Our findings suggest that prior work significantly overestimated the true few-shot ability of LMs.
arXiv Detail & Related papers (2021-05-24T17:55:51Z)
- A Two-Stage Variable Selection Approach for Correlated High Dimensional Predictors [4.8128078741263725]
We propose a two-stage approach that combines a variable clustering stage with a group variable selection stage.
The variable clustering stage uses information from the data to find a group structure, which improves the performance of the existing group variable selection methods.
The two-stage method shows better performance in terms of prediction accuracy, as well as accuracy in selecting active predictors.
arXiv Detail & Related papers (2021-03-24T17:28:34Z)
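To make the two-stage idea above concrete, here is a loose sketch (the clustering method, threshold, and the lasso-over-representatives stand-in for proper group variable selection are all assumptions, not the paper's choices): stage 1 groups correlated predictors by hierarchical clustering on the correlation structure, and stage 2 runs a sparse selection over the groups.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.linear_model import LassoCV

def two_stage_select(X, y, corr_threshold=0.7):
    # Stage 1: hierarchical clustering on 1 - |correlation|, so highly
    # correlated predictors land in the same group.
    dist = 1.0 - np.abs(np.corrcoef(X, rowvar=False))
    Z = linkage(dist[np.triu_indices_from(dist, k=1)], method="average")
    groups = fcluster(Z, t=1.0 - corr_threshold, criterion="distance")
    # Stage 2: sparse selection over groups (lasso on one mean
    # representative per group, standing in for a group selection method).
    labels = np.unique(groups)
    reps = np.column_stack([X[:, groups == g].mean(axis=1) for g in labels])
    coef = LassoCV(cv=5).fit(reps, y).coef_
    selected = labels[coef != 0]
    return [np.flatnonzero(groups == g) for g in selected]
```

Clustering first means the selection step operates on a few near-orthogonal group representatives instead of many collinear predictors, which is what improves both prediction and support recovery.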
- Stable Prediction via Leveraging Seed Variable [73.9770220107874]
Previous machine learning methods might exploit subtly spurious correlations in training data induced by non-causal variables for prediction.
We propose a conditional independence test based algorithm to separate causal variables, using a seed variable as prior knowledge, and adopt them for stable prediction.
Our algorithm outperforms state-of-the-art methods for stable prediction.
arXiv Detail & Related papers (2020-06-09T06:56:31Z)
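The summary does not specify the conditional independence test; as a generic stand-in (a linear partial-correlation test, not the paper's algorithm), one could check whether a candidate variable stays associated with the target after conditioning on the seed variable:

```python
import numpy as np
from scipy import stats

def partial_corr_test(candidate, target, seed_var):
    """Test whether `candidate` and `target` remain correlated after
    linearly regressing out the seed variable from both."""
    def residual(v, z):
        slope, intercept = np.polyfit(z, v, 1)
        return v - (slope * z + intercept)
    r, p = stats.pearsonr(residual(candidate, seed_var),
                          residual(target, seed_var))
    return r, p  # small p-value: dependence survives conditioning on the seed
```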
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.