On the Relationship Between Active Inference and Control as Inference
- URL: http://arxiv.org/abs/2006.12964v3
- Date: Mon, 29 Jun 2020 14:52:52 GMT
- Title: On the Relationship Between Active Inference and Control as Inference
- Authors: Beren Millidge, Alexander Tschantz, Anil K Seth, Christopher L Buckley
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active Inference (AIF) is an emerging framework in the brain sciences which suggests that biological agents act to minimise a variational bound on model evidence. Control-as-Inference (CAI) is a framework within reinforcement learning which casts decision making as a variational inference problem. While these frameworks both consider action selection through the lens of variational inference, their relationship remains unclear. Here, we provide a formal comparison between them and demonstrate that the primary difference arises from how value is incorporated into their respective generative models. In the context of this comparison, we highlight several ways in which these frameworks can inform one another.
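The contrast the abstract describes can be made concrete with the standard textbook forms of the two objectives. The following is a hedged sketch (not quoted from the paper): in CAI, value enters through binary "optimality" variables in the likelihood; in AIF, value enters as a biased prior over observations inside the generative model itself. The symbols r (reward), C (prior preference), tau (trajectory), and pi (policy) are the usual conventions, not notation taken from this abstract.

```latex
% CAI: introduce optimality variables O_t with likelihood
%   p(O_t = 1 \mid s_t, a_t) \propto \exp\{ r(s_t, a_t) \},
% and lower-bound the evidence of optimality with a variational
% trajectory distribution q(\tau):
\log p(O_{1:T} = 1)
  \;\ge\;
  \mathbb{E}_{q(\tau)}\!\Big[ \textstyle\sum_{t=1}^{T} r(s_t, a_t) \Big]
  \;-\; \mathrm{KL}\!\big[\, q(\tau) \,\|\, p(\tau) \,\big]

% AIF: encode value as a prior preference over observations,
%   \tilde{p}(o_t) \propto \exp\{ C(o_t) \},
% inside a biased generative model \tilde{p}, and select policies
% that minimise the expected free energy:
G(\pi)
  \;=\;
  \sum_{t} \mathbb{E}_{q(o_t, s_t \mid \pi)}
  \big[ \ln q(s_t \mid \pi) \;-\; \ln \tilde{p}(o_t, s_t \mid \pi) \big]
```

On this reading, the "primary difference" the abstract refers to is where the exponentiated value term sits: in the likelihood of auxiliary optimality variables (CAI) versus in the prior over observations of the generative model (AIF).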
Related papers
- A Unifying Framework for Action-Conditional Self-Predictive Reinforcement Learning [48.59516337905877]
Learning a good representation is a crucial challenge for Reinforcement Learning (RL) agents.
Recent work has developed theoretical insights into these algorithms.
We take a step towards bridging the gap between theory and practice by analyzing an action-conditional self-predictive objective.
arXiv Detail & Related papers (2024-06-04T07:22:12Z)
- Causal Influence in Federated Edge Inference [34.487472866247586]
In this paper, we consider a setting where heterogeneous agents with connectivity are performing inference using unlabeled streaming data.
In order to overcome the uncertainty, agents cooperate with each other by exchanging their local inferences with and through a fusion center.
Various scenarios reflecting different agent participation patterns and fusion center policies are investigated.
arXiv Detail & Related papers (2024-05-02T13:06:50Z)
- SAIE Framework: Support Alone Isn't Enough -- Advancing LLM Training with Adversarial Remarks [47.609417223514605]
This work introduces the SAIE framework, which facilitates supportive and adversarial discussions between learner and partner models.
Our empirical evaluation shows that models fine-tuned with the SAIE framework outperform those trained with conventional fine-tuning approaches.
arXiv Detail & Related papers (2023-11-14T12:12:25Z)
- Discourse Relations Classification and Cross-Framework Discourse Relation Classification Through the Lens of Cognitive Dimensions: An Empirical Investigation [5.439020425819001]
We show that discourse relations can be effectively captured by some simple cognitively inspired dimensions proposed by Sanders et al. (2018).
Our experiments on cross-framework discourse relation classification (PDTB & RST) demonstrate that it is possible to transfer knowledge of discourse relations for one framework to another framework by means of these dimensions.
arXiv Detail & Related papers (2023-11-01T11:38:19Z)
- Inverse Decision Modeling: Learning Interpretable Representations of Behavior [72.80902932543474]
We develop an expressive, unifying perspective on inverse decision modeling.
We use this to formalize the inverse problem (as a descriptive model).
We illustrate how this structure enables learning (interpretable) representations of (bounded) rationality.
arXiv Detail & Related papers (2023-10-28T05:05:01Z)
- Fairness Increases Adversarial Vulnerability [50.90773979394264]
This paper shows the existence of a dichotomy between fairness and robustness, and analyzes when achieving fairness decreases the model robustness to adversarial samples.
Experiments on non-linear models and different architectures validate the theoretical findings in multiple vision domains.
The paper proposes a simple, yet effective, solution to construct models achieving good tradeoffs between fairness and robustness.
arXiv Detail & Related papers (2022-11-21T19:55:35Z)
- On the duality between contrastive and non-contrastive self-supervised learning [0.0]
Self-supervised learning can be divided into contrastive and non-contrastive approaches.
We show how close the contrastive and non-contrastive families can be.
We also show the influence (or lack thereof) of design choices on downstream performance.
arXiv Detail & Related papers (2022-06-03T08:04:12Z)
- Learning to Decouple Relations: Few-Shot Relation Classification with Entity-Guided Attention and Confusion-Aware Training [49.9995628166064]
We propose CTEG, a model equipped with two mechanisms to learn to decouple easily-confused relations.
On the one hand, an EGA mechanism is introduced to guide the attention to filter out information causing confusion.
On the other hand, a Confusion-Aware Training (CAT) method is proposed to explicitly learn to distinguish relations.
arXiv Detail & Related papers (2020-10-21T11:07:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.