Causal bandits with backdoor adjustment on unknown Gaussian DAGs
- URL: http://arxiv.org/abs/2502.02020v1
- Date: Tue, 04 Feb 2025 05:18:35 GMT
- Title: Causal bandits with backdoor adjustment on unknown Gaussian DAGs
- Authors: Yijia Zhao, Qing Zhou
- Abstract summary: We study the causal bandit problem when the graph structure is unknown.
We identify backdoor adjustment sets for each arm using sequentially generated experimental and observational data.
We develop a novel bandit algorithm, based on modified upper confidence bounds, to sequentially determine the optimal intervention.
- Score: 5.807183284468881
- Abstract: The causal bandit problem aims to sequentially learn the intervention that maximizes the expectation of a reward variable within a system governed by a causal graph. Most existing approaches assume prior knowledge of the graph structure, or impose unrealistically restrictive conditions on the graph. In this paper, we assume a Gaussian linear directed acyclic graph (DAG) over arms and the reward variable, and study the causal bandit problem when the graph structure is unknown. We identify backdoor adjustment sets for each arm using sequentially generated experimental and observational data during the decision process, which allows us to estimate causal effects and construct upper confidence bounds. By integrating estimates from both data sources, we develop a novel bandit algorithm, based on modified upper confidence bounds, to sequentially determine the optimal intervention. We establish both case-dependent and case-independent upper bounds on the cumulative regret for our algorithm, which improve upon the bounds of the standard multi-armed bandit algorithms. Our empirical study demonstrates its advantage over existing methods with respect to cumulative regret and computation time.
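To make the mechanics concrete, below is a minimal Python sketch (our illustration, not the authors' implementation) of the two steps the abstract combines: estimating an arm's causal effect on the reward by ordinary least squares with a backdoor adjustment set, and picking the next intervention with an upper-confidence index. The data layout, function names, and exploration constant `c` are assumptions; the paper's modified confidence bounds that pool experimental and observational samples are not reproduced here.

```python
import numpy as np

def backdoor_effect(data, arm, adj_set, reward):
    """Illustrative sketch, not the paper's estimator.

    In a Gaussian linear DAG, regressing the reward on an arm together
    with a valid backdoor adjustment set yields the arm's total causal
    effect as the regression coefficient on the arm. `data` is an
    (n_samples, n_nodes) array of observations.
    """
    cols = [arm] + list(adj_set)
    X = np.column_stack([np.ones(data.shape[0]), data[:, cols]])
    y = data[:, reward]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]  # coefficient on the intervened arm

def ucb_select(effects, pull_counts, t, c=2.0):
    """Plain UCB index over the estimated causal effects (illustrative)."""
    bonus = np.sqrt(c * np.log(max(t, 2)) / np.maximum(pull_counts, 1))
    return int(np.argmax(effects + bonus))
```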
Related papers
- The Minimal Search Space for Conditional Causal Bandits [0.18124328823188351]
Causal knowledge can be used to support decision-making.
This paper presents a graphical characterization of the minimal set of nodes guaranteed to contain the optimal conditional intervention.
We then propose an efficient algorithm with a time complexity of $O(|V| + |E|)$ to identify this minimal set of nodes; a generic linear-time traversal is sketched after this entry.
arXiv Detail & Related papers (2025-02-10T15:45:18Z)
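The summary does not reproduce the paper's graphical characterization; as a purely illustrative stand-in for the kind of $O(|V| + |E|)$ computation involved, the sketch below collects the ancestors of the reward node with one reverse traversal (every candidate intervention node must be such an ancestor). The adjacency-list encoding is an assumption.

```python
from collections import deque

def ancestors_of(reward, parents):
    """Illustrative O(|V| + |E|) traversal, not the paper's algorithm.

    `parents` maps each node of a DAG to the list of its parents; each
    node and each edge is processed at most once.
    """
    seen, queue = set(), deque(parents.get(reward, []))
    while queue:
        node = queue.popleft()
        if node not in seen:
            seen.add(node)
            queue.extend(parents.get(node, []))
    return seen
```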
- Partial Structure Discovery is Sufficient for No-regret Learning in Causal Bandits [7.064432289838905]
Current works often assume the causal graph is known, which may not always be available a priori.
We focus on the causal bandit problem in scenarios where the underlying causal graph is unknown and may include latent confounders.
We formally characterize the set of necessary and sufficient latent confounders one needs to detect or learn to ensure that all possibly optimal arms are identified correctly.
arXiv Detail & Related papers (2024-11-06T16:59:11Z)
- Bridging Rested and Restless Bandits with Graph-Triggering: Rising and Rotting [67.1631453378926]
Graph-Triggered Bandits is a framework that generalizes rested and restless bandits.
In this work, we focus on two specific types of monotonic bandits: rising, where the expected reward of an arm grows as the number of triggers increases, and rotting, where the opposite behavior occurs; both are illustrated in the sketch after this entry.
arXiv Detail & Related papers (2024-09-09T18:23:07Z)
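As a toy illustration of the two behaviors (our curves, not the paper's model), a rising arm's expected reward increases with its trigger count while a rotting arm's decreases:

```python
import numpy as np

# Hypothetical expected-reward curves; `n` is the arm's trigger count.
def rising_mu(n, cap=1.0, rate=0.3):
    # rising arm: grows monotonically toward `cap` as triggers accumulate
    return cap * (1.0 - np.exp(-rate * n))

def rotting_mu(n, start=1.0, decay=0.2):
    # rotting arm: decays monotonically from `start` as triggers accumulate
    return start * np.exp(-decay * n)
```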
- Asymmetric Graph Error Control with Low Complexity in Causal Bandits [21.812120023339876]
The objective is to select an optimal sequence of interventions on nodes in a causal graph.
Interventions are optimized by exploiting the causal relationships between the nodes whose signals contribute to the reward.
arXiv Detail & Related papers (2024-08-20T23:37:08Z)
- Neural Dueling Bandits [58.90189511247936]
We use a neural network to estimate the reward function using preference feedback for the previously selected arms; a toy sketch of preference-based reward learning follows this entry.
We then extend our theoretical results to contextual bandit problems with binary feedback, which is in itself a non-trivial contribution.
arXiv Detail & Related papers (2024-07-24T09:23:22Z)
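As a toy stand-in for learning a reward model from preference feedback (a linear scorer trained with a Bradley-Terry logistic loss, where the paper uses a neural network and adds an exploration rule):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_preference_model(winners, losers, lr=0.1, epochs=200):
    """Illustrative sketch, not the paper's method.

    `winners` and `losers` are (n, d) feature arrays for the preferred
    and rejected arm in each duel; gradient descent minimizes the
    logistic loss -log sigmoid(f(x_win) - f(x_lose)) for f(x) = w @ x.
    """
    w = np.zeros(winners.shape[1])
    for _ in range(epochs):
        diff = (winners - losers) @ w
        grad = (sigmoid(diff) - 1.0)[:, None] * (winners - losers)
        w -= lr * grad.mean(axis=0)
    return w
```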
- Additive Causal Bandits with Unknown Graph [10.575089475850465]
We explore algorithms to select actions in the causal bandit setting where the learner can choose to intervene on a set of random variables related by a causal graph.
The learner's goal is to quickly find the intervention, among all interventions on observable variables, that maximizes the expectation of an outcome variable.
arXiv Detail & Related papers (2023-06-13T15:43:04Z)
- Bias-Robust Bayesian Optimization via Dueling Bandit [57.82422045437126]
We consider Bayesian optimization in settings where observations can be adversarially biased.
We propose a novel approach for dueling bandits based on information-directed sampling (IDS).
Thereby, we obtain the first efficient kernelized algorithm for dueling bandits that comes with cumulative regret guarantees.
arXiv Detail & Related papers (2021-05-25T10:08:41Z)
- Online Model Selection: a Rested Bandit Formulation [49.69377391589057]
We introduce and analyze a best arm identification problem in the rested bandit setting.
We define a novel notion of regret for this problem, where we compare to the policy that always plays the arm having the smallest expected loss at the end of the game; a formalization under assumed notation follows this entry.
Unlike known model selection efforts in the recent bandit literature, our algorithm exploits the specific structure of the problem to learn the unknown parameters of the expected loss function.
arXiv Detail & Related papers (2020-12-07T08:23:08Z)
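A hedged formalization of this regret notion, under assumed notation ($T$ rounds, $I_t$ the arm played at round $t$, $N_i(t)$ the number of plays of arm $i$ up to round $t$, and rested expected losses $\ell_i(n)$ that depend on an arm's own play count):

$$
R_T = \mathbb{E}\!\left[\sum_{t=1}^{T} \ell_{I_t}\bigl(N_{I_t}(t)\bigr)\right] - \sum_{t=1}^{T} \ell_{i^\star}(t),
\qquad i^\star = \arg\min_{i} \ell_i(T),
$$

where the comparator always plays the arm with the smallest expected loss at the end of the game.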
- An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z)
- Causal Bandits without prior knowledge using separating sets [3.1000291317725]
The Causal Bandit is a variant of the classic Bandit problem where an agent must identify the best action in a sequential decision-making process.
Methods proposed for this problem thus far in the literature rely on exact prior knowledge of the full causal graph.
We formulate new causal bandit algorithms that no longer necessarily rely on prior causal knowledge.
arXiv Detail & Related papers (2020-09-16T20:08:03Z)
- Optimal Best-arm Identification in Linear Bandits [79.3239137440876]
We devise a simple algorithm whose sampling complexity matches known instance-specific lower bounds.
Unlike existing best-arm identification strategies, our algorithm uses a stopping rule that does not depend on the number of arms.
arXiv Detail & Related papers (2020-06-29T14:25:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.