Linear Combinatorial Semi-Bandit with Causally Related Rewards
- URL: http://arxiv.org/abs/2212.12923v1
- Date: Sun, 25 Dec 2022 16:05:21 GMT
- Title: Linear Combinatorial Semi-Bandit with Causally Related Rewards
- Authors: Behzad Nourani-Koliji, Saeed Ghoorchian, and Setareh Maghsudi
- Abstract summary: We propose a policy that determines the causal relations by learning the network's topology.
We establish a sublinear regret bound for the proposed algorithm.
- Score: 5.347237827669861
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In a sequential decision-making problem, having a structural dependency
amongst the reward distributions associated with the arms makes it challenging
to identify a subset of alternatives that guarantees the optimal collective
outcome. Thus, besides individual actions' reward, learning the causal
relations is essential to improve the decision-making strategy. To solve the
two-fold learning problem described above, we develop the 'combinatorial
semi-bandit framework with causally related rewards', where we model the causal
relations by a directed graph in a stationary structural equation model. The
nodal observation in the graph signal comprises the corresponding base arm's
instantaneous reward and an additional term resulting from the causal
influences of other base arms' rewards. The objective is to maximize the
long-term average payoff, which is a linear function of the base arms' rewards
and depends strongly on the network topology. To achieve this objective, we
propose a policy that determines the causal relations by learning the network's
topology and simultaneously exploits this knowledge to optimize the
decision-making process. We establish a sublinear regret bound for the proposed
algorithm. Numerical experiments using synthetic and real-world datasets
demonstrate the superior performance of our proposed method compared to several
benchmarks.
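The reward model described in the abstract can be illustrated with a minimal sketch. The following assumes a linear stationary structural equation model in which the graph signal z satisfies z = A z + f, so that z = (I - A)^(-1) f; the adjacency matrix, arm count, and payoff function here are hypothetical stand-ins, not the paper's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

n_arms = 5
# Hypothetical weighted adjacency of a DAG: A[i, j] is the weight of the
# causal edge from arm j to arm i. Strictly upper-triangular => acyclic.
A = np.triu(rng.uniform(0.1, 0.4, size=(n_arms, n_arms)), k=1)

def observe(selected, rng):
    """One round of the causal semi-bandit (sketch).

    Each base arm in the selected bundle draws an instantaneous reward
    f_i; unselected arms contribute 0. The nodal observation z solves
    the stationary structural equation z = A @ z + f, i.e.
    z = (I - A)^{-1} f, so each node's observation is its own
    instantaneous reward plus the causal influence of other arms.
    """
    f = np.zeros(n_arms)
    f[selected] = rng.uniform(0.0, 1.0, size=len(selected))
    z = np.linalg.solve(np.eye(n_arms) - A, f)
    return z

selected = [0, 2]
z = observe(selected, rng)
payoff = z.sum()  # a linear function of the base arms' observations
```

Because A is strictly triangular, (I - A) is always invertible, and the causal influences propagate only along the DAG's edges; the decision-maker's two-fold problem is then to estimate A while choosing bundles that maximize the long-term average of such payoffs.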
Related papers
- ExDBN: Exact learning of Dynamic Bayesian Networks [2.2499166814992435]
We propose a score-based learning approach for causal learning from data.
We show that the proposed approach produces excellent results when applied to small and medium-sized synthetic instances with up to 25 time series.
Two interesting applications in bio-science and finance, to which the method is directly applied, further stress the opportunities in developing highly accurate, globally convergent solvers.
arXiv Detail & Related papers (2024-10-21T15:27:18Z) - Piecewise-Stationary Combinatorial Semi-Bandit with Causally Related Rewards [5.347237827669861]
We study the piecewise stationary semi-bandit problem with causally related rewards.
In our nonstationary environment, variations in the base arms' distributions, causal relationships between rewards, or both, change the reward generation process.
The problem becomes aggravated in the semi-bandit setting, where the decision-maker only observes the outcome of the selected bundle of arms.
arXiv Detail & Related papers (2023-07-26T12:06:13Z) - Non-stationary Delayed Combinatorial Semi-Bandit with Causally Related Rewards [7.0997346625024]
We formalize a non-stationary and delayed semi-bandit problem with causally related rewards.
We develop a policy that learns the structural dependencies from delayed feedback and utilizes that to optimize the decision-making.
We evaluate our method via numerical analysis using synthetic and real-world datasets to detect the regions that contribute the most to the spread of COVID-19 in Italy.
arXiv Detail & Related papers (2023-07-18T09:22:33Z) - Provable Offline Preference-Based Reinforcement Learning [95.00042541409901]
We investigate the problem of offline Preference-based Reinforcement Learning (PbRL) with human feedback.
We consider the general reward setting where the reward can be defined over the whole trajectory.
We introduce a new single-policy concentrability coefficient, which can be upper bounded by the per-trajectory concentrability.
arXiv Detail & Related papers (2023-05-24T07:11:26Z) - Federated Learning for Heterogeneous Bandits with Unobserved Contexts [0.0]
We study the problem of federated multi-arm contextual bandits with unknown contexts.
We propose an elimination-based algorithm and prove the regret bound for linearly parametrized reward functions.
arXiv Detail & Related papers (2023-03-29T22:06:24Z) - On the Complexity of Adversarial Decision Making [101.14158787665252]
We show that the Decision-Estimation Coefficient is necessary and sufficient to obtain low regret for adversarial decision making.
We provide new structural results that connect the Decision-Estimation Coefficient to variants of other well-known complexity measures.
arXiv Detail & Related papers (2022-06-27T06:20:37Z) - HiURE: Hierarchical Exemplar Contrastive Learning for Unsupervised Relation Extraction [60.80849503639896]
Unsupervised relation extraction aims to extract the relationship between entities from natural language sentences without prior information on relational scope or distribution.
We propose a novel contrastive learning framework named HiURE, which has the capability to derive hierarchical signals from relational feature space using cross hierarchy attention.
Experimental results on two public datasets demonstrate the advanced effectiveness and robustness of HiURE on unsupervised relation extraction when compared with state-of-the-art models.
arXiv Detail & Related papers (2022-05-04T17:56:48Z) - It Takes Two Flints to Make a Fire: Multitask Learning of Neural Relation and Explanation Classifiers [40.666590079580544]
We propose an explainable approach for relation extraction that mitigates the tension between generalization and explainability.
Our approach uses a multi-task learning architecture, which jointly trains a classifier for relation extraction.
We convert the model outputs to rules to bring global explanations to this approach.
arXiv Detail & Related papers (2022-04-25T03:53:12Z) - Multi-task Learning of Order-Consistent Causal Graphs [59.9575145128345]
We consider the problem of discovering $K$ related Gaussian directed acyclic graphs (DAGs).
Under the multi-task learning setting, we propose an $l_1/l_2$-regularized maximum likelihood estimator (MLE) for learning $K$ linear structural equation models.
We theoretically show that the joint estimator, by leveraging data across related tasks, can achieve a better sample complexity for recovering the causal order.
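The coupling across tasks in such an estimator comes from the $l_1/l_2$ penalty, which groups the same edge across the $K$ models. The sketch below shows only that penalty term; the shapes, the helper name, and the example matrices are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

def group_lasso_penalty(Bs):
    """l1/l2 penalty over K adjacency matrices (sketch).

    Bs: array of shape (K, p, p); each B_k holds the edge weights of
    one task's linear structural equation model. The penalty takes the
    l2 norm of each edge across tasks, then sums (l1) over edges,
    encouraging a sparsity pattern shared by all K graphs.
    """
    Bs = np.asarray(Bs)
    return np.sqrt((Bs ** 2).sum(axis=0)).sum()

K, p = 3, 4
Bs = np.zeros((K, p, p))
Bs[:, 0, 1] = 1.0                   # edge 0 -> 1 shared by all 3 tasks
penalty = group_lasso_penalty(Bs)   # sqrt(1 + 1 + 1) = sqrt(3)
```

A shared edge of weight 1 costs only sqrt(K) rather than K, which is what biases the joint MLE toward graphs with a common causal order.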
arXiv Detail & Related papers (2021-11-03T22:10:18Z) - False Correlation Reduction for Offline Reinforcement Learning [115.11954432080749]
We propose falSe COrrelation REduction (SCORE) for offline RL, a practically effective and theoretically provable algorithm.
We empirically show that SCORE achieves state-of-the-art performance with 3.1x acceleration on various tasks in a standard benchmark (D4RL).
arXiv Detail & Related papers (2021-10-24T15:34:03Z) - Loss Bounds for Approximate Influence-Based Abstraction [81.13024471616417]
Influence-based abstraction aims to gain leverage by modeling local subproblems together with the 'influence' that the rest of the system exerts on them.
This paper investigates the performance of such approaches from a theoretical perspective.
We show that neural networks trained with cross entropy are well suited to learn approximate influence representations.
arXiv Detail & Related papers (2020-11-03T15:33:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.