Causality and Robust Optimization
- URL: http://arxiv.org/abs/2002.12626v1
- Date: Fri, 28 Feb 2020 10:02:59 GMT
- Title: Causality and Robust Optimization
- Authors: Akihiro Yabe
- Abstract summary: Confounding bias is a problem when applying machine learning prediction.
We propose a meta-algorithm that can remedy existing feature selection algorithms in terms of confounding bias.
- Score: 2.690502103971798
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A decision-maker must consider confounding bias when attempting to
apply machine learning prediction, and, while feature selection is widely
recognized as an important process in data analysis, it can itself cause
confounding bias. A causal Bayesian network is a standard tool for describing
causal relationships, and if the relationships are known, then adjustment
criteria can determine the features with which confounding bias disappears. A
standard modification would thus be to utilize causal discovery algorithms to
prevent confounding bias in feature selection. Causal discovery algorithms,
however, essentially rely on the faithfulness assumption, which turns out to be
easily violated in practical feature selection settings. In this paper, we
propose a meta-algorithm that can remedy existing feature selection algorithms
in terms of confounding bias. Our algorithm is derived from a novel adjustment
criterion that requires, rather than faithfulness, an assumption which can be
derived from the well-known assumption of causal sufficiency. We further prove
that the features added through our modification convert confounding bias into
prediction variance. With the aid of existing robust optimization techniques
that regularize risky, high-variance strategies, we are then able to improve
the throughput performance of decision-making optimization, as shown in our
experimental results.
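The abstract describes the decision-making step only at a high level. As a rough, hypothetical illustration of the kind of robust optimization it refers to (regularizing candidate decisions whose predictions have high variance), the sketch below scores candidates by predicted reward minus a variance penalty estimated from an ensemble. The names `choose_action`, `predictors`, and `risk_weight` are illustrative assumptions, not the paper's formulation.

```python
# Hypothetical sketch only: score candidate decisions by predicted reward
# minus a penalty on prediction variance, the generic "regularize risky,
# high-variance strategies" idea the abstract alludes to.
import numpy as np

def choose_action(candidates, predictors, risk_weight=1.0):
    """Return the best candidate index under a mean-minus-variance score.

    candidates  : array of shape (n_candidates, n_features)
    predictors  : iterable of fitted models exposing .predict(X), e.g. a
                  bootstrap ensemble used here as a crude variance estimate
    risk_weight : trade-off between expected reward and prediction risk
    """
    # Stack per-model predictions: shape (n_models, n_candidates)
    preds = np.stack([m.predict(candidates) for m in predictors])
    mean = preds.mean(axis=0)            # expected reward per candidate
    std = preds.std(axis=0)              # proxy for prediction variance
    scores = mean - risk_weight * std    # penalize risky, high-variance choices
    return int(np.argmax(scores)), scores
```

Any ensemble whose members disagree where the prediction is unreliable (for example, the fitted members of a bagging regressor) could stand in for `predictors`; the paper's actual adjustment criterion and optimization model are not reproduced here.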
Related papers
- Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
arXiv Detail & Related papers (2023-11-08T00:10:21Z) - When Does Confidence-Based Cascade Deferral Suffice? [69.28314307469381]
Cascades are a classical strategy to enable inference cost to vary adaptively across samples.
A deferral rule determines whether to invoke the next classifier in the sequence, or to terminate prediction.
Despite being oblivious to the structure of the cascade, confidence-based deferral often works remarkably well in practice; a minimal sketch of such a deferral rule appears after this list.
arXiv Detail & Related papers (2023-07-06T04:13:57Z) - Bayesian Optimization with Conformal Prediction Sets [44.565812181545645]
Conformal prediction is an uncertainty quantification method with coverage guarantees even for misspecified models.
We propose conformal Bayesian optimization, which directs queries towards regions of search space where the model predictions have guaranteed validity.
In many cases we find that query coverage can be significantly improved without harming sample-efficiency.
arXiv Detail & Related papers (2022-10-22T17:01:05Z) - Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z) - Calibrated Selective Classification [34.08454890436067]
We develop a new approach to selective classification in which we propose a method for rejecting examples with "uncertain" uncertainties.
We present a framework for learning selectively calibrated models, where a separate selector network is trained to improve the selective calibration error of a given base model.
We demonstrate the empirical effectiveness of our approach on multiple image classification and lung cancer risk assessment tasks.
arXiv Detail & Related papers (2022-08-25T13:31:09Z) - Bounding Counterfactuals under Selection Bias [60.55840896782637]
We propose a first algorithm to address both identifiable and unidentifiable queries.
We prove that, in spite of the missingness induced by the selection bias, the likelihood of the available data is unimodal.
arXiv Detail & Related papers (2022-07-26T10:33:10Z) - Efficient and Differentiable Conformal Prediction with General Function Classes [96.74055810115456]
We propose a generalization of conformal prediction to multiple learnable parameters.
We show that it achieves approximately valid population coverage and near-optimal efficiency within its class.
Experiments show that our algorithm is able to learn valid prediction sets and improve the efficiency significantly.
arXiv Detail & Related papers (2022-02-22T18:37:23Z) - Post-hoc loss-calibration for Bayesian neural networks [25.05373000435213]
We develop methods for correcting approximate posterior predictive distributions, encouraging them to prefer high-utility decisions.
In contrast to previous work, our approach is agnostic to the choice of the approximate inference algorithm.
arXiv Detail & Related papers (2021-06-13T13:53:27Z) - Learning Probabilistic Ordinal Embeddings for Uncertainty-Aware Regression [91.3373131262391]
Uncertainty is the only certainty there is.
Traditionally, the direct regression formulation is considered and the uncertainty is modeled by modifying the output space to a certain family of probabilistic distributions.
How to model the uncertainty within the present-day technologies for regression remains an open issue.
arXiv Detail & Related papers (2021-03-25T06:56:09Z)
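Several entries above revolve around deciding when a model's prediction is reliable enough to act on. As a minimal sketch of the confidence-based deferral rule discussed in the cascade paper in this list, the snippet below runs classifiers in sequence and defers to the next one whenever the current model's maximum softmax probability falls below a threshold; the names `cascade_predict` and `thresholds` are illustrative assumptions, not that paper's API.

```python
# Hypothetical sketch of confidence-based cascade deferral: cheap models run
# first and defer to the next (more expensive) model whenever their confidence,
# here the maximum softmax probability, falls below a per-stage threshold.
import numpy as np

def cascade_predict(x, models, thresholds):
    """models: callables mapping x -> class-probability vector, ordered from
    cheapest to most expensive; thresholds: one per non-final model."""
    for model, tau in zip(models[:-1], thresholds):
        probs = np.asarray(model(x))
        if probs.max() >= tau:             # confident enough: terminate here
            return int(np.argmax(probs))
        # otherwise defer to the next classifier in the sequence
    probs = np.asarray(models[-1](x))      # the final model always answers
    return int(np.argmax(probs))
```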
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.