Loss Bounds for Approximate Influence-Based Abstraction
- URL: http://arxiv.org/abs/2011.01788v3
- Date: Tue, 23 Feb 2021 15:31:22 GMT
- Title: Loss Bounds for Approximate Influence-Based Abstraction
- Authors: Elena Congeduti, Alexander Mey, Frans A. Oliehoek
- Abstract summary: Influence-based abstraction aims to gain leverage by modeling local subproblems together with the 'influence' that the rest of the system exerts on them.
This paper investigates the performance of such approaches from a theoretical perspective.
We show that neural networks trained with cross entropy are well suited to learn approximate influence representations.
- Score: 81.13024471616417
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sequential decision making techniques hold great promise to improve the
performance of many real-world systems, but computational complexity hampers
their principled application. Influence-based abstraction aims to gain leverage
by modeling local subproblems together with the 'influence' that the rest of
the system exerts on them. While computing exact representations of such
influence might be intractable, learning approximate representations offers a
promising approach to enable scalable solutions. This paper investigates the
performance of such approaches from a theoretical perspective. The primary
contribution is the derivation of sufficient conditions on approximate
influence representations that can guarantee solutions with small value loss.
In particular we show that neural networks trained with cross entropy are well
suited to learn approximate influence representations. Moreover, we provide a
sample-based formulation of the bounds, which reduces the gap to applications.
Finally, driven by our theoretical insights, we propose approximation error
estimators, which we show empirically to correlate well with the value loss.
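The cross-entropy claim can be made concrete with a minimal sketch (an illustration of the general principle, not the paper's implementation; the synthetic data, logistic model, and variable names below are assumptions): minimizing the expected cross-entropy of a probabilistic classifier drives its output toward the true conditional distribution of the influence variable given the local history, since the expected cross-entropy equals the true conditional entropy plus the KL divergence from the truth to the model.

```python
import numpy as np

# Synthetic stand-in: a binary "local history feature" x and a binary
# "influence source" y drawn from a known conditional P(y=1|x).
rng = np.random.default_rng(0)
n = 20_000
x = rng.integers(0, 2, size=n).astype(float)
p_true = np.where(x == 1.0, 0.8, 0.2)  # true conditional P(y=1|x)
y = (rng.random(n) < p_true).astype(float)

# Logistic model p(y=1|x) = sigmoid(w*x + b), trained by gradient
# descent on the mean cross-entropy (the negative log-likelihood).
w, b = 0.0, 0.0
lr = 2.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    g = p - y  # gradient of the cross-entropy w.r.t. the logit
    w -= lr * np.mean(g * x)
    b -= lr * np.mean(g)

# The learned conditionals approach the true ones up to sampling noise.
p_given_x0 = 1.0 / (1.0 + np.exp(-b))          # estimate of P(y=1|x=0)
p_given_x1 = 1.0 / (1.0 + np.exp(-(w + b)))    # estimate of P(y=1|x=1)
```

Minimizing cross-entropy here recovers the conditional frequencies; the paper's bounds concern when such an approximate conditional (an approximate influence representation) suffices to guarantee small value loss.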
Related papers
- Inverting estimating equations for causal inference on quantiles [9.216100284591636]
We generalize a class of causal inference solutions from estimating the mean of the potential outcome to its quantiles.
A broad implication of our results is that one can rework existing results for mean causal estimands to facilitate causal inference on quantiles.
arXiv Detail & Related papers (2024-01-02T01:52:28Z)
- Disentangled Representation Learning with Transmitted Information Bottleneck [73.0553263960709]
We present DisTIB (Transmitted Information Bottleneck for Disentangled representation learning), a novel objective that navigates the balance between information compression and preservation.
arXiv Detail & Related papers (2023-11-03T03:18:40Z)
- Provably Efficient Learning in Partially Observable Contextual Bandit [4.910658441596583]
We show how causal bounds can be applied to improving classical bandit algorithms.
This research has the potential to enhance the performance of contextual bandit agents in real-world applications.
arXiv Detail & Related papers (2023-08-07T13:24:50Z)
- Benchmarking Bayesian Causal Discovery Methods for Downstream Treatment Effect Estimation [137.3520153445413]
A notable gap exists in the evaluation of causal discovery methods, where insufficient emphasis is placed on downstream inference.
We evaluate seven established baseline causal discovery methods including a newly proposed method based on GFlowNets.
The results of our study demonstrate that some of the algorithms studied are able to effectively capture a wide range of useful and diverse ATE modes.
arXiv Detail & Related papers (2023-07-11T02:58:10Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z)
- Algorithmic Recourse in Partially and Fully Confounded Settings Through Bounding Counterfactual Effects [0.6299766708197883]
Algorithmic recourse aims to provide actionable recommendations to individuals to obtain a more favourable outcome from an automated decision-making system.
Existing methods compute the effect of recourse actions using a causal model learnt from data under the assumption of no hidden confounding and modelling assumptions such as additive noise.
We propose an alternative approach for discrete random variables which relaxes these assumptions and allows for unobserved confounding and arbitrary structural equations.
arXiv Detail & Related papers (2021-06-22T15:07:49Z)
- Provable Guarantees on the Robustness of Decision Rules to Causal Interventions [20.27500901133189]
Robustness of decision rules to shifts in the data-generating process is crucial to the successful deployment of decision-making systems.
We consider causal Bayesian networks and formally define the interventional robustness problem.
We provide efficient algorithms for computing guaranteed upper and lower bounds on the interventional probabilities.
arXiv Detail & Related papers (2021-05-19T13:09:47Z)
- Leveraging Unlabeled Data for Entity-Relation Extraction through Probabilistic Constraint Satisfaction [54.06292969184476]
We study the problem of entity-relation extraction in the presence of symbolic domain knowledge.
Our approach employs semantic loss which captures the precise meaning of a logical sentence.
With a focus on low-data regimes, we show that semantic loss outperforms the baselines by a wide margin.
arXiv Detail & Related papers (2021-03-20T00:16:29Z)
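The semantic-loss idea above can be sketched briefly (the "exactly one of k labels" constraint and the function below are illustrative assumptions, not that paper's code): the semantic loss is the negative log-probability that a labelling sampled from the network's output probabilities satisfies the logical constraint, so outputs that already respect the logic incur little penalty.

```python
import math

def semantic_loss_exactly_one(probs):
    """Semantic loss for the constraint 'exactly one variable is true'.

    probs[i] is the network's probability that variable i is true; the
    loss is -log P(a sampled assignment satisfies the constraint).
    """
    satisfied = 0.0
    for i, p_i in enumerate(probs):
        term = p_i
        for j, p_j in enumerate(probs):
            if j != i:
                term *= (1.0 - p_j)  # all other variables false
        satisfied += term
    return -math.log(satisfied)

# A near-one-hot output almost satisfies the constraint (loss near 0),
# while a uniform output is penalized more heavily.
confident = semantic_loss_exactly_one([0.98, 0.01, 0.01])
uniform = semantic_loss_exactly_one([1 / 3, 1 / 3, 1 / 3])
```

Because the loss is differentiable in the output probabilities, it can be added to a standard supervised loss and used on unlabeled examples, which is what makes the low-data regime tractable.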
This list is automatically generated from the titles and abstracts of the papers in this site.