Chance-Constrained Active Inference
- URL: http://arxiv.org/abs/2102.08792v1
- Date: Wed, 17 Feb 2021 14:36:40 GMT
- Title: Chance-Constrained Active Inference
- Authors: Thijs van de Laar, Ismail Senoz, Ayça Özçelikkale, Henk Wymeersch
- Abstract summary: Active Inference (ActInf) is an emerging theory that explains perception and action in biological agents.
We propose an alternative approach through chance constraints, which allow for a (typically small) probability of constraint violation.
We show how chance-constrained ActInf weights all imposed (prior) constraints on the generative model, allowing a trade-off between robust control and empirical chance constraint violation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active Inference (ActInf) is an emerging theory that explains perception and
action in biological agents, in terms of minimizing a free energy bound on
Bayesian surprise. Goal-directed behavior is elicited by introducing prior
beliefs on the underlying generative model. In contrast to prior beliefs, which
constrain all realizations of a random variable, we propose an alternative
approach through chance constraints, which allow for a (typically small)
probability of constraint violation, and demonstrate how such constraints can
be used as intrinsic drivers for goal-directed behavior in ActInf. We
illustrate how chance-constrained ActInf weights all imposed (prior)
constraints on the generative model, allowing, e.g., for a trade-off between
robust control and empirical chance constraint violation. Secondly, we
interpret the proposed solution within a message passing framework.
Interestingly, the message passing interpretation is not only relevant to the
context of ActInf, but also provides a general-purpose approach that can
account for chance constraints on graphical models. The chance constraint
message updates can then be readily combined with other pre-derived message
update rules, without the need for custom derivations. The proposed
chance-constrained message passing framework thus accelerates the search for
workable models in general, and can be used to complement message-passing
formulations on generative neural models.
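To make the contrast concrete, here is a minimal formalization in generic chance-constraint notation; the symbols below (q, F[q], G, epsilon) are illustrative assumptions rather than the paper's own notation. Instead of scoring every realization of an outcome y_t against a goal prior, the agent minimizes free energy subject to a bound on the probability of leaving a goal region:

```latex
% Illustrative sketch, not the paper's exact formulation:
%   q    -- variational posterior
%   F[q] -- variational free energy (bound on Bayesian surprise)
%   G    -- goal region for the outcome y_t
%   eps  -- tolerated probability of constraint violation
\min_{q} \; F[q]
\quad \text{subject to} \quad
\Pr_{q}\!\left( y_t \notin \mathcal{G} \right) \le \epsilon ,
\qquad 0 < \epsilon \ll 1 .
```

Taking epsilon to 0 approaches the behavior of a hard goal prior, while a larger epsilon tolerates more empirical violations in exchange for robustness, which is the trade-off the abstract describes.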
Related papers
- CountARFactuals -- Generating plausible model-agnostic counterfactual explanations with adversarial random forests [9.598670034160763]
ARFs can serve as a plausibility measure or directly generate counterfactual explanations.
They are easy to train and computationally highly efficient, handle continuous and categorical data naturally, and allow integrating additional desiderata such as sparsity in a straightforward manner.
arXiv Detail & Related papers (2024-04-04T15:10:13Z)
- A Pseudo-Semantic Loss for Autoregressive Models with Logical Constraints [87.08677547257733]
Neuro-symbolic AI bridges the gap between purely symbolic and neural approaches to learning.
We show how to maximize the likelihood of a symbolic constraint w.r.t. the neural network's output distribution.
We also evaluate our approach on Sudoku and shortest-path prediction cast as autoregressive generation (a toy illustration of this constraint likelihood is sketched after this list).
arXiv Detail & Related papers (2023-12-06T20:58:07Z)
- Coverage-Validity-Aware Algorithmic Recourse [23.643366441803796]
We propose a novel framework to generate a model-agnostic recourse that exhibits robustness to model shifts.
Our framework first builds a coverage-validity-aware linear surrogate of the nonlinear (black-box) model.
We show that our surrogate pushes the approximate hyperplane intuitively, facilitating not only robust but also interpretable recourses.
arXiv Detail & Related papers (2023-11-19T15:21:49Z)
- Code Models are Zero-shot Precondition Reasoners [83.8561159080672]
We use code representations to reason about action preconditions for sequential decision making tasks.
We propose a precondition-aware action sampling strategy that ensures actions predicted by a policy are consistent with preconditions.
arXiv Detail & Related papers (2023-11-16T06:19:27Z)
- On Regularization and Inference with Label Constraints [62.60903248392479]
We compare two strategies for encoding label constraints in a machine learning pipeline: regularization with constraints and constrained inference (both strategies are sketched on a toy example after this list).
For regularization, we show that it narrows the generalization gap by precluding models that are inconsistent with the constraints.
For constrained inference, we show that it reduces the population risk by correcting a model's violation, and hence turns the violation into an advantage.
arXiv Detail & Related papers (2023-07-08T03:39:22Z)
- Fundamental Limitations of Alignment in Large Language Models [16.393916864600193]
An important aspect of developing language models that interact with humans is aligning their behavior to be useful and unharmful.
This is usually achieved by tuning the model in a way that enhances desired behaviors and inhibits undesired ones.
We propose a theoretical approach called Behavior Expectation Bounds (BEB) which allows us to formally investigate several inherent characteristics and limitations of alignment in large language models.
arXiv Detail & Related papers (2023-04-19T17:50:09Z)
- Differentially Private Counterfactuals via Functional Mechanism [47.606474009932825]
We propose a novel framework to generate differentially private counterfactual (DPC) without touching the deployed model or explanation set.
In particular, we train an autoencoder with the functional mechanism to construct noisy class prototypes, and then derive the DPC from the latent prototypes.
arXiv Detail & Related papers (2022-08-04T20:31:22Z)
- Toward Certified Robustness Against Real-World Distribution Shifts [65.66374339500025]
We train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model.
A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations.
We propose a general meta-algorithm for handling sigmoid activations which leverages classical notions of counter-example-guided abstraction refinement.
arXiv Detail & Related papers (2022-06-08T04:09:13Z)
- CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks [58.29502185344086]
In safety-critical machine learning applications, it is crucial to defend models against adversarial attacks.
It is important to provide provable guarantees for deep learning models against semantically meaningful input transformations.
We propose a new universal probabilistic certification approach based on Chernoff-Cramer bounds (a simplified Monte Carlo version is sketched after this list).
arXiv Detail & Related papers (2021-09-22T12:46:04Z)
- On Constraint Definability in Tractable Probabilistic Models [12.47276164048813]
A wide variety of problems require predictions to be integrated with reasoning about constraints.
We consider a mathematical inquiry into how the learning of tractable probabilistic models, such as sum-product networks, is possible while incorporating constraints.
arXiv Detail & Related papers (2020-01-29T16:05:56Z)
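For the pseudo-semantic loss entry above, the following toy sketch computes the quantity that entry describes: the likelihood of a symbolic constraint under a network's output distribution, here assumed (for illustration only) to be a factorized Bernoulli over three bits, evaluated exactly by enumeration. The constraint and all names are hypothetical, not the paper's method:

```python
from itertools import product

import numpy as np

def constraint_likelihood(probs, constraint):
    """Exact Pr[constraint(x)] when each bit x_i is an independent
    Bernoulli(probs[i]) under the model's output distribution."""
    total = 0.0
    for bits in product([0, 1], repeat=len(probs)):
        p = np.prod([q if b else 1.0 - q for b, q in zip(bits, probs)])
        if constraint(bits):
            total += p
    return total

# Toy constraint: exactly one of the three bits is on (a one-hot rule).
probs = np.array([0.7, 0.2, 0.1])          # hypothetical network outputs
lik = constraint_likelihood(probs, lambda x: sum(x) == 1)
loss = -np.log(lik)  # maximizing the constraint likelihood minimizes this
print(f"Pr[constraint] = {lik:.4f}, pseudo-loss = {loss:.4f}")
```

For an autoregressive model this enumeration quickly becomes intractable, which is the setting that entry addresses; the sketch only shows the objective itself.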
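For the label-constraints entry above, a toy contrast of its two strategies under an assumed two-label setting with the hypothetical constraint y1 -> y2 (if label 1 is on, label 2 must be on); the probabilities and the independence assumption are illustrative, not from the paper:

```python
import numpy as np

# Hypothetical model marginals Pr[y_i = 1] for two binary labels,
# with the toy constraint: y1 = 1 implies y2 = 1.
p = np.array([0.9, 0.3])

# Strategy 1 -- regularization with constraints: during training, add a
# penalty proportional to the expected constraint violation to the loss.
lam = 2.0
violation_prob = p[0] * (1.0 - p[1])  # Pr[y1=1, y2=0], assuming independence
penalty = lam * violation_prob
print(f"regularization penalty added to the training loss: {penalty:.3f}")

# Strategy 2 -- constrained inference: at test time, keep only label
# assignments that satisfy the constraint and return the most probable one.
feasible = [(a, b) for a in (0, 1) for b in (0, 1) if not (a == 1 and b == 0)]
score = {y: (p[0] if y[0] else 1 - p[0]) * (p[1] if y[1] else 1 - p[1])
         for y in feasible}
y_hat = max(score, key=score.get)
print(f"constrained prediction: {y_hat} (probability {score[y_hat]:.3f})")
```

Here the unconstrained argmax would be (1, 0), which violates the constraint; constrained inference corrects it to (1, 1), illustrating how a violation is turned into an advantage at prediction time.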
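For the CC-Cert entry above, a simplified Monte Carlo certificate: estimate a model's failure probability under random semantically meaningful transformations and attach a one-sided Hoeffding-style upper confidence bound. This is a stand-in for the paper's Chernoff-Cramer machinery, and the failure predicate is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def prediction_fails(sample: float) -> bool:
    """Hypothetical stand-in for: 'a randomly transformed input
    changes the model's prediction'."""
    return sample < 0.05

# Monte Carlo estimate of the failure probability ...
n = 10_000
fails = np.fromiter((prediction_fails(rng.random()) for _ in range(n)),
                    dtype=bool, count=n)
p_hat = fails.mean()

# ... plus a one-sided Hoeffding bound: with probability >= 1 - delta,
# the true failure probability is at most p_hat + sqrt(ln(1/delta) / (2n)).
delta = 1e-3
upper = p_hat + np.sqrt(np.log(1.0 / delta) / (2.0 * n))
print(f"empirical failure rate {p_hat:.4f}, certified upper bound {upper:.4f}")
```

If the bound stays below a chosen tolerance, the model is probabilistically certified against that transformation family at confidence 1 - delta.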
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.