Loss Bounds for Approximate Influence-Based Abstraction
- URL: http://arxiv.org/abs/2011.01788v3
- Date: Tue, 23 Feb 2021 15:31:22 GMT
- Title: Loss Bounds for Approximate Influence-Based Abstraction
- Authors: Elena Congeduti, Alexander Mey, Frans A. Oliehoek
- Abstract summary: Influence-based abstraction aims to gain leverage by modeling local subproblems together with the 'influence' that the rest of the system exerts on them.
This paper investigates the performance of such approaches from a theoretical perspective.
We show that neural networks trained with cross entropy are well suited to learn approximate influence representations.
- Score: 81.13024471616417
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sequential decision making techniques hold great promise to improve the
performance of many real-world systems, but computational complexity hampers
their principled application. Influence-based abstraction aims to gain leverage
by modeling local subproblems together with the 'influence' that the rest of
the system exerts on them. While computing exact representations of such
influence might be intractable, learning approximate representations offers a
promising approach to enable scalable solutions. This paper investigates the
performance of such approaches from a theoretical perspective. The primary
contribution is the derivation of sufficient conditions on approximate
influence representations that can guarantee solutions with small value loss.
In particular we show that neural networks trained with cross entropy are well
suited to learn approximate influence representations. Moreover, we provide a
sample based formulation of the bounds, which reduces the gap to applications.
Finally, driven by our theoretical insights, we propose approximation error
estimators, which are empirically shown to correlate well with the value loss.
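To illustrate the cross-entropy claim above, the following is a minimal sketch (not the authors' implementation) of how an approximate influence representation could be learned as a supervised classification problem: a small network maps a local history to a distribution over a discrete influence source variable and is trained with cross-entropy against sampled influence values. All class names, dimensions, and the stand-in data below are assumptions made for illustration.

```
# Minimal sketch (hypothetical, not the paper's code): learning an approximate
# influence representation with a cross-entropy objective.
import torch
import torch.nn as nn

class InfluencePredictor(nn.Module):
    """Maps a local history (here: a fixed-length feature vector) to logits
    over a discrete influence source variable."""
    def __init__(self, history_dim: int, n_influence_values: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(history_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_influence_values),  # logits over influence values
        )

    def forward(self, local_history: torch.Tensor) -> torch.Tensor:
        return self.net(local_history)

def train_step(model, optimizer, local_history, influence_value):
    """One gradient step on the cross-entropy between the predicted
    distribution and the sampled influence value."""
    optimizer.zero_grad()
    logits = model(local_history)
    loss = nn.functional.cross_entropy(logits, influence_value)
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical usage with random stand-in data:
model = InfluencePredictor(history_dim=16, n_influence_values=4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
histories = torch.randn(32, 16)          # batch of local histories
influences = torch.randint(0, 4, (32,))  # sampled influence source values
print(train_step(model, optimizer, histories, influences))
```

In this reading, the paper's bounds relate the approximation error of such a predictor to the value loss incurred when planning with the abstracted local model.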
Related papers
- Exogenous Matching: Learning Good Proposals for Tractable Counterfactual Estimation [1.9662978733004601]
We propose an importance sampling method for tractable and efficient estimation of counterfactual expressions.
By minimizing a common upper bound of counterfactual estimators, we transform the variance minimization problem into a conditional distribution learning problem.
We validate the theoretical results through experiments under various types and settings of Structural Causal Models (SCMs) and demonstrate superior performance on counterfactual estimation tasks.
arXiv Detail & Related papers (2024-10-17T03:08:28Z)
- Efficient Fairness-Performance Pareto Front Computation [51.558848491038916]
We show that optimal fair representations possess several useful structural properties.
We then show that these approximation problems can be solved efficiently via concave programming methods.
arXiv Detail & Related papers (2024-09-26T08:46:48Z)
- Towards Representation Learning for Weighting Problems in Design-Based Causal Inference [1.1060425537315088]
We propose an end-to-end estimation procedure that learns a flexible representation, while retaining promising theoretical properties.
We show that this approach is competitive in a range of common causal inference tasks.
arXiv Detail & Related papers (2024-09-24T19:16:37Z)
- Inverting estimating equations for causal inference on quantiles [7.801213477601286]
We generalize a class of causal inference solutions from estimating the mean of the potential outcome to its quantiles.
A broad implication of our results is that one can rework the existing result for mean causal estimands to facilitate causal inference on quantiles.
arXiv Detail & Related papers (2024-01-02T01:52:28Z)
- Disentangled Representation Learning with Transmitted Information Bottleneck [57.22757813140418]
We present DisTIB (Transmitted Information Bottleneck for Disentangled representation learning), a novel objective that navigates the balance between information compression and preservation.
arXiv Detail & Related papers (2023-11-03T03:18:40Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z)
- Leveraging Unlabeled Data for Entity-Relation Extraction through Probabilistic Constraint Satisfaction [54.06292969184476]
We study the problem of entity-relation extraction in the presence of symbolic domain knowledge.
Our approach employs semantic loss which captures the precise meaning of a logical sentence.
With a focus on low-data regimes, we show that semantic loss outperforms the baselines by a wide margin.
arXiv Detail & Related papers (2021-03-20T00:16:29Z)