Combinatorial Causal Bandits without Graph Skeleton
- URL: http://arxiv.org/abs/2301.13392v3
- Date: Sat, 16 Sep 2023 13:55:24 GMT
- Title: Combinatorial Causal Bandits without Graph Skeleton
- Authors: Shi Feng, Nuoya Xiong, Wei Chen
- Abstract summary: In combinatorial causal bandits (CCB), the learning agent chooses a subset of variables in each round to intervene on and collects feedback from the observed variables to minimize expected regret or sample complexity.
Previous works study this problem in both general causal models and binary generalized linear models (BGLMs).
This paper studies the CCB problem without the graph structure on binary general causal models and BGLMs.
- Score: 14.178659419111897
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In combinatorial causal bandits (CCB), the learning agent chooses a subset of
variables in each round to intervene and collects feedback from the observed
variables to minimize expected regret or sample complexity. Previous works
study this problem in both general causal models and binary generalized linear
models (BGLMs). However, all of them require prior knowledge of causal graph
structure. This paper studies the CCB problem without the graph structure on
binary general causal models and BGLMs. We first provide an exponential lower
bound of cumulative regrets for the CCB problem on general causal models. To
overcome the exponentially large space of parameters, we then consider the CCB
problem on BGLMs. We design a regret minimization algorithm for BGLMs even
without the graph skeleton and show that it still achieves $O(\sqrt{T}\ln T)$
expected regret. This asymptotic regret matches that of the state-of-the-art
algorithms that rely on the graph structure. Moreover, we relax the regret to
$O(T^{\frac{2}{3}}\ln T)$ to remove the weight gap hidden by the asymptotic
notation. Finally, we give some discussions and algorithms for pure exploration
of the CCB problem without the graph structure.
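To make the interaction protocol concrete, here is a minimal toy sketch of a combinatorial causal bandit loop: in each round the agent intervenes on at most $K$ binary variables and observes a binary target. The environment, the hidden weights, and the epsilon-greedy selection rule are all illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

n_vars = 5   # observed binary variables X_1..X_n (hypothetical toy model)
K = 2        # at most K variables intervened on per round
T = 1000     # time horizon

# Hidden influence of each variable on the target Y (unknown to the agent).
true_weights = rng.uniform(0.0, 0.2, size=n_vars)

def pull(intervened):
    """One round: intervened variables are forced to 1, the rest are
    random; Y is 1 with a probability that grows with the weighted sum."""
    x = rng.integers(0, 2, size=n_vars)
    x[list(intervened)] = 1
    p_y = min(1.0, 0.1 + true_weights @ x)
    return rng.random() < p_y

# Epsilon-greedy stand-in for the (more involved) OFU-style strategy.
counts = np.zeros(n_vars)
successes = np.zeros(n_vars)
for t in range(T):
    if rng.random() < 0.1:
        # Explore: intervene on a random size-K subset.
        action = rng.choice(n_vars, size=K, replace=False)
    else:
        # Exploit: intervene on the K variables with the best empirical means.
        means = successes / np.maximum(counts, 1)
        action = np.argsort(means)[-K:]
    y = pull(action)
    counts[action] += 1
    successes[action] += y

print(int(counts.sum()))
```

Each intervention set here is atomic and of fixed size $K$; the CCB setting additionally exploits the causal structure among the $X_i$, which this toy loop ignores.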
Related papers
- Improved Bound for Robust Causal Bandits with Linear Models [16.60875994745622]
This paper investigates the robustness of causal bandits in the face of temporal model fluctuations.
The proposed algorithm achieves nearly optimal $\tilde{\mathcal{O}}(\sqrt{T})$ regret when $C$ is $o(\sqrt{T})$.
arXiv Detail & Related papers (2024-05-13T14:41:28Z) - Online Clustering of Bandits with Misspecified User Models [42.56440072468658]
Contextual linear bandit is an online learning problem where given arm features, a learning agent selects an arm at each round to maximize the cumulative rewards in the long run.
A line of works, called the clustering of bandits (CB), utilize the collaborative effect over user preferences and have shown significant improvements over classic linear bandit algorithms.
In this paper, we are the first to present the important problem of clustering of bandits with misspecified user models (CBMUM).
We devise two robust CB algorithms, RCLUMB and RSCLUMB, that can accommodate the inaccurate user preference estimations and erroneous clustering caused by model misspecifications.
arXiv Detail & Related papers (2023-10-04T10:40:50Z) - Detection-Recovery Gap for Planted Dense Cycles [72.4451045270967]
We consider a model where a dense cycle with expected bandwidth $n\tau$ and edge density $p$ is planted in an Erdős–Rényi graph $G(n,q)$.
We characterize the computational thresholds for the associated detection and recovery problems for the class of low-degree algorithms.
arXiv Detail & Related papers (2023-02-13T22:51:07Z) - Graph Signal Sampling for Inductive One-Bit Matrix Completion: a
Closed-form Solution [112.3443939502313]
We propose a unified graph signal sampling framework which enjoys the benefits of graph signal analysis and processing.
The key idea is to transform each user's ratings on the items to a function (signal) on the vertices of an item-item graph.
For the online setting, we develop a Bayesian extension, BGS-IMC, which accounts for continuous random Gaussian noise in the graph Fourier domain.
arXiv Detail & Related papers (2023-02-08T08:17:43Z) - Causal Bandits for Linear Structural Equation Models [58.2875460517691]
This paper studies the problem of designing an optimal sequence of interventions in a causal graphical model.
It is assumed that the graph's structure is known and has $N$ nodes.
Two algorithms are proposed for the frequentist (UCB-based) and Bayesian settings.
arXiv Detail & Related papers (2022-08-26T16:21:31Z) - Combinatorial Causal Bandits [25.012065471684025]
In combinatorial causal bandits (CCB), the learning agent chooses at most $K$ variables in each round to intervene on, with the goal of minimizing expected regret on the target variable $Y$.
We study the problem in the context of binary generalized linear models (BGLMs), which give a succinct parametric representation of the causal models.
We present the algorithm BGLM-OFU for Markovian BGLMs based on the maximum likelihood estimation method, and show that it achieves $O(\sqrt{T}\log T)$ regret, where $T$ is the time horizon.
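As a rough illustration of the maximum likelihood ingredient mentioned above (a sketch on assumed toy data, not BGLM-OFU itself), the parent weights of a single binary GLM node with a logistic link can be estimated by gradient ascent on the log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: node Y depends on binary parents X through a logistic link
# (the link function and true weights below are illustrative assumptions).
n, d = 5000, 3
X = rng.integers(0, 2, size=(n, d)).astype(float)
theta_true = np.array([1.0, -0.5, 0.8])
p = 1 / (1 + np.exp(-(X @ theta_true)))
y = (rng.random(n) < p).astype(float)

# Maximum likelihood estimate via plain gradient ascent on the
# average log-likelihood; grad = X^T (y - sigmoid(X theta)) / n.
theta = np.zeros(d)
lr = 0.5
for _ in range(2000):
    grad = X.T @ (y - 1 / (1 + np.exp(-(X @ theta)))) / n
    theta += lr * grad

print(np.round(theta, 2))  # close to theta_true for large n
```

In the bandit setting this estimate would be recomputed from the growing interaction history each round and paired with an optimism-based confidence bonus; the fitting step itself is standard logistic regression.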
arXiv Detail & Related papers (2022-06-04T14:14:58Z) - The Performance of the MLE in the Bradley-Terry-Luce Model in
$\ell_{\infty}$-Loss and under General Graph Topologies [76.61051540383494]
We derive novel, general upper bounds on the $\ell_\infty$ estimation error of the Bradley-Terry-Luce model.
We demonstrate that the derived bounds perform well and in some cases are sharper compared to known results.
arXiv Detail & Related papers (2021-10-20T23:46:35Z) - Causal Bandits on General Graphs [1.4502611532302039]
We study the problem of determining the best intervention in a Causal Bayesian Network (CBN) specified only by its causal graph.
We propose a simple regret minimization algorithm that takes as input a semi-Markovian causal graph with atomic interventions and possibly unobservable variables.
Our results indicate that the simple regret guarantee of our proposed algorithm can only be improved by considering more nuanced structural restrictions on the causal graph.
arXiv Detail & Related papers (2021-07-06T17:29:45Z) - Adversarial Linear Contextual Bandits with Graph-Structured Side
Observations [80.95090605985042]
A learning agent repeatedly chooses from a set of $K$ actions after being presented with a $d$-dimensional context vector.
The agent incurs and observes the loss of the chosen action, but also observes the losses of its neighboring actions in the observation structures.
Two efficient algorithms are developed based on EXP3.
arXiv Detail & Related papers (2020-12-10T15:40:07Z) - Hybrid Stochastic-Deterministic Minibatch Proximal Gradient:
Less-Than-Single-Pass Optimization with Nearly Optimal Generalization [83.80460802169999]
We show that HSDMPG can attain an $\mathcal{O}\big(1/\sqrt{tn}\big)$ bound, which is on the order of the excess error of a learning model.
arXiv Detail & Related papers (2020-09-18T02:18:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.