Ravestate: Distributed Composition of a Causal-Specificity-Guided Interaction Policy
- URL: http://arxiv.org/abs/2310.01943v1
- Date: Tue, 3 Oct 2023 10:38:53 GMT
- Title: Ravestate: Distributed Composition of a Causal-Specificity-Guided Interaction Policy
- Authors: Joseph Birkner, Andreas Dolp, Negin Karimi, Nikita Basargin, Alona Kharchenko and Rafael Hostettler
- Abstract summary: In human-robot interaction policy design, a rule-based method is efficient, explainable, expressive and intuitive.
We present the Signal-Rule-Slot framework, which refines prior work on rule-based symbol system design.
We introduce a new, Bayesian notion of interaction rule utility called Causal Pathway Self-information.
- Score: 0.8039067099377079
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In human-robot interaction policy design, a rule-based method is efficient,
explainable, expressive and intuitive. In this paper, we present the
Signal-Rule-Slot framework, which refines prior work on rule-based symbol
system design and introduces a new, Bayesian notion of interaction rule utility
called Causal Pathway Self-information. We offer a rigorous theoretical
foundation as well as a rich open-source reference implementation, Ravestate,
with which we conduct user studies in text-, speech-, and vision-based
scenarios. The experiments show robust contextual behaviour of our
probabilistically informed rule-based system, paving the way for more effective
human-machine interaction.
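
To make the Causal Pathway Self-information idea concrete, here is a minimal, hypothetical Python sketch (not Ravestate's actual API): a rule's utility is taken as the self-information, -log2 p, of its trigger-signal pathway, so rules conditioned on rarer signal combinations win arbitration. All signal names, probabilities, and helper classes below are illustrative assumptions.

```python
import math
from dataclasses import dataclass

# Illustrative empirical firing probabilities for signals, e.g. estimated
# from interaction logs. These numbers are assumptions, not measurements.
SIGNAL_PROB = {"input:speech": 0.5, "nlp:greeting": 0.1, "vision:face": 0.2}

@dataclass
class Rule:
    name: str
    signals: list    # signals on the rule's causal pathway
    action: object   # zero-argument callable to run when the rule wins

def pathway_self_information(rule):
    # Self-information of the rule's causal pathway: -log2 of the joint
    # probability of its trigger signals (independence assumed for brevity).
    return sum(-math.log2(SIGNAL_PROB[s]) for s in rule.signals)

def arbitrate(active_signals, rules):
    # Among rules whose trigger signals are all currently active, prefer
    # the one whose pathway is least probable, i.e. most context-specific.
    applicable = [r for r in rules if set(r.signals) <= active_signals]
    return max(applicable, key=pathway_self_information, default=None)

rules = [
    Rule("fallback", ["input:speech"], lambda: print("Sorry?")),
    Rule("greet", ["input:speech", "nlp:greeting"], lambda: print("Hello!")),
]
winner = arbitrate({"input:speech", "nlp:greeting"}, rules)
if winner:
    winner.action()  # prints "Hello!"
```

Under this scoring, the generic fallback (1 bit of pathway self-information) loses to the greeting rule (about 4.3 bits), which is the intended causal-specificity behaviour.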
Related papers
- Advancing Interactive Explainable AI via Belief Change Theory [5.842480645870251]
We argue that this type of formalisation provides a framework and a methodology to develop interactive explanations.
We first define a novel, logic-based formalism to represent explanatory information shared between humans and machines.
We then consider real world scenarios for interactive XAI, with different prioritisations of new and existing knowledge, where our formalism may be instantiated.
arXiv Detail & Related papers (2024-08-13T13:11:56Z)
- Multi-Agent Dynamic Relational Reasoning for Social Robot Navigation [55.65482030032804]
Social robot navigation can be helpful in various contexts of daily life but requires safe human-robot interactions and efficient trajectory planning.
We propose a systematic relational reasoning approach with explicit inference of the underlying dynamically evolving relational structures.
Our approach infers dynamically evolving relation graphs and hypergraphs to capture the evolution of relations, which the trajectory predictor employs to generate future states.
arXiv Detail & Related papers (2024-01-22T18:58:22Z)
- Conformal Policy Learning for Sensorimotor Control Under Distribution Shifts [61.929388479847525]
This paper focuses on the problem of detecting and reacting to changes in the distribution of a sensorimotor controller's observables.
The key idea is the design of switching policies that can take conformal quantiles as input.
We show how to design such policies by using conformal quantiles to switch between base policies with different characteristics.
arXiv Detail & Related papers (2023-11-02T17:59:30Z)
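
For the Conformal Policy Learning entry above, a minimal sketch of the switching idea, assuming standard split-conformal calibration; the paper's exact policy design may differ, and `nominal`, `fallback`, and the nonconformity score are placeholders.

```python
import numpy as np

def conformal_threshold(cal_scores, alpha):
    # Split-conformal quantile over n calibration nonconformity scores:
    # the ceil((n + 1) * (1 - alpha)) / n empirical quantile.
    n = len(cal_scores)
    q = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return float(np.quantile(cal_scores, q, method="higher"))

def switching_policy(score, threshold, nominal, fallback):
    # Run the nominal controller while the current observation's
    # nonconformity stays below the calibrated threshold; otherwise
    # switch to the conservative base policy.
    return nominal if score <= threshold else fallback
```

The conformal quantile is what carries the guarantee: under exchangeability, in-distribution observations exceed the threshold with probability at most alpha, so the fallback fires mostly under distribution shift.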
- Dialectical Reconciliation via Structured Argumentative Dialogues [14.584998154271512]
Our framework enables dialectical reconciliation to address knowledge discrepancies between an explainer (AI agent) and an explainee (human user).
Our findings suggest that our framework offers a promising direction for fostering effective human-AI interactions in domains where explainability is important.
arXiv Detail & Related papers (2023-06-26T13:39:36Z)
- A Regularized Implicit Policy for Offline Reinforcement Learning [54.7427227775581]
Offline reinforcement learning enables learning from a fixed dataset, without further interactions with the environment.
We propose a framework that supports learning a flexible yet well-regularized fully-implicit policy.
Experiments and an ablation study on the D4RL dataset validate our framework and the effectiveness of our algorithmic designs.
arXiv Detail & Related papers (2022-02-19T20:22:04Z)
- Verified Probabilistic Policies for Deep Reinforcement Learning [6.85316573653194]
We tackle the problem of verifying probabilistic policies for deep reinforcement learning.
We propose an abstraction approach, based on interval Markov decision processes, that yields guarantees on a policy's execution.
We present techniques to build and solve these models using abstract interpretation, mixed-integer linear programming, entropy-based refinement and probabilistic model checking.
arXiv Detail & Related papers (2022-01-10T23:55:04Z)
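
For the interval-MDP abstraction in the entry above, one core operation is a pessimistic Bellman backup in which an adversary picks transition probabilities within the intervals. The sketch below shows only that step, assuming well-formed intervals (lower bounds summing to at most 1, upper bounds to at least 1); the surrounding abstract interpretation, MILP solving, and refinement machinery are omitted.

```python
def worst_case_expected_value(values, lo, up):
    # Lower bound on the expected successor value for one state-action
    # pair of an interval MDP: start every successor at its lower-bound
    # probability, then give the remaining mass to the lowest-value
    # successors first, respecting each upper bound.
    assert sum(lo) <= 1.0 <= sum(up), "intervals must admit a distribution"
    p = list(lo)
    slack = 1.0 - sum(lo)
    for s in sorted(range(len(values)), key=lambda i: values[i]):
        give = min(up[s] - lo[s], slack)
        p[s] += give
        slack -= give
    return sum(p[s] * values[s] for s in range(len(values)))

# e.g. values [0.0, 10.0], lo [0.2, 0.3], up [0.7, 0.8]
# -> adversary picks (0.7, 0.3), giving 0.7*0 + 0.3*10 = 3.0
print(worst_case_expected_value([0.0, 10.0], [0.2, 0.3], [0.7, 0.8]))
```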
- Active Inference in Robotics and Artificial Agents: Survey and Challenges [51.29077770446286]
We review the state-of-the-art theory and implementations of active inference for state-estimation, control, planning and learning.
We showcase relevant experiments that illustrate its potential in terms of adaptation, generalization and robustness.
arXiv Detail & Related papers (2021-12-03T12:10:26Z)
- A GAN-Like Approach for Physics-Based Imitation Learning and Interactive Character Control [2.2082422928825136]
We present a simple and intuitive approach for interactive control of physically simulated characters.
Our work builds upon generative adversarial networks (GAN) and reinforcement learning.
We highlight the applicability of our approach in a range of imitation and interactive control tasks.
arXiv Detail & Related papers (2021-05-21T00:03:29Z)
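
For the GAN-like imitation entry above, a minimal PyTorch sketch of the usual adversarial ingredient: a discriminator scores state-action pairs, and its confusion is recycled as the imitation reward for the reinforcement learner. The architecture and the -log(1 - D) reward form are common GAIL-style assumptions, not necessarily this paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    # Classifies (state, action) pairs as expert (1) vs. policy (0).
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))  # raw logits

def imitation_reward(disc, obs, act):
    # GAN-style reward: large when the discriminator mistakes the
    # policy's motion for an expert (e.g. motion-capture) sample.
    # -log(1 - D) with D = sigmoid(logit) equals -logsigmoid(-logit).
    with torch.no_grad():
        return -F.logsigmoid(-disc(obs, act))
```

The discriminator is trained with a binary cross-entropy loss on expert versus policy batches, while the RL algorithm maximises this reward.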
- Developing Constrained Neural Units Over Time [81.19349325749037]
This paper focuses on an alternative way of defining neural networks that differs from the majority of existing approaches.
The structure of the neural architecture is defined by means of a special class of constraints, which also extend to the network's interaction with data.
The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner.
arXiv Detail & Related papers (2020-09-01T09:07:25Z)
- Discourse Coherence, Reference Grounding and Goal Oriented Dialogue [15.766916122461922]
We argue for a new approach to realizing mixed-initiative human-computer referential communication.
We describe a simple dialogue system in a referential communication domain that accumulates constraints across discourse, interprets them using a learned probabilistic model, and plans clarification using reinforcement learning.
arXiv Detail & Related papers (2020-07-08T20:53:14Z)
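
For the dialogue entry above, a small hypothetical sketch of the accumulate-interpret-clarify loop: constraints gathered across turns filter candidate referents, a stand-in probability model scores the survivors, and a clarification question is planned when the posterior stays too uncertain. The paper learns the clarification decision with reinforcement learning; a fixed entropy threshold stands in here.

```python
import math

def interpret(candidates, constraints, prob):
    # Keep referents consistent with every constraint accumulated so far,
    # then normalise scores from a (stand-in) learned probabilistic model.
    viable = [c for c in candidates if all(ok(c) for ok in constraints)]
    scores = {c: prob(c) for c in viable}
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()} if total > 0 else {}

def should_clarify(posterior, max_entropy_bits=1.0):
    # Ask a clarification question when the referent distribution is too
    # flat; an RL-trained policy could replace this fixed threshold.
    entropy = -sum(p * math.log2(p) for p in posterior.values() if p > 0)
    return not posterior or entropy > max_entropy_bits
```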
- Guided Uncertainty-Aware Policy Optimization: Combining Learning and Model-Based Strategies for Sample-Efficient Policy Learning [75.56839075060819]
Traditional robotic approaches rely on an accurate model of the environment, a detailed description of how to perform the task, and a robust perception system to keep track of the current state.
Reinforcement learning approaches can operate directly from raw sensory inputs with only a reward signal to describe the task, but are extremely sample-inefficient and brittle.
In this work, we combine the strengths of model-based methods with the flexibility of learning-based methods to obtain a general method that is able to overcome inaccuracies in the robotics perception/actuation pipeline.
arXiv Detail & Related papers (2020-05-21T19:47:05Z)
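
The combination described in the entry above can be read as an uncertainty-gated switch between a model-based controller and a learned policy. A hypothetical sketch; the uncertainty estimate, threshold, and the two controllers are stand-ins for whatever a concrete system provides.

```python
def guided_action(obs, pose_uncertainty, model_based_ctrl, learned_policy, tol=0.05):
    # Trust the model-based plan while the perception estimate is precise;
    # hand control to the learned policy inside the uncertain region, where
    # the environment model can no longer be relied upon.
    if pose_uncertainty < tol:
        return model_based_ctrl(obs)
    return learned_policy(obs)
```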