Using Human-Guided Causal Knowledge for More Generalized Robot Task
Planning
- URL: http://arxiv.org/abs/2110.04664v1
- Date: Sat, 9 Oct 2021 23:46:44 GMT
- Title: Using Human-Guided Causal Knowledge for More Generalized Robot Task
Planning
- Authors: Semir Tatlidil (1), Yanqi Liu (1), Emily Sheetz (2), R. Iris Bahar
(1), Steven Sloman (1) ((1) Brown University, (2) University of Michigan)
- Abstract summary: Unlike AI, humans are adept at finding solutions that can transfer.
We propose to use human-guided causal knowledge to help robots find solutions that can generalize to a new environment.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A major challenge in research involving artificial intelligence (AI) is the
development of algorithms that can find solutions which generalize to
different environments and tasks. Unlike AI, humans are adept at
finding solutions that can transfer. We hypothesize this is because their
solutions are informed by causal models. We propose to use human-guided causal
knowledge to help robots find solutions that can generalize to a new
environment. We develop and test the feasibility of a language interface that
naïve participants can use to communicate these causal models to a planner.
We find preliminary evidence that participants are able to use our interface
and generate causal models that achieve near-generalization. We outline an
experiment aimed at testing far-generalization using our interface and describe
our longer-term goals for these causal models.
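The core idea in the abstract, causal rules stated by naïve participants being handed to a task planner, can be illustrated with a short hypothetical sketch. The CausalRule format, the operator translation, and the toy forward-chaining planner below are illustrative assumptions, not the system described in the paper.

```python
# A minimal, hypothetical sketch (not the paper's implementation) of turning
# human-stated causal rules into operators for a simple symbolic planner.
from dataclasses import dataclass


@dataclass(frozen=True)
class CausalRule:
    """A cause-effect relation elicited from a participant in natural language."""
    cause: str                      # action, e.g. "press(button)"
    effect: str                     # resulting state, e.g. "open(door)"
    preconditions: tuple = ()       # states that must hold before the cause applies


def rule_to_operator(rule: CausalRule) -> dict:
    """Translate one causal rule into a STRIPS-like operator."""
    return {
        "name": rule.cause,
        "preconditions": set(rule.preconditions),
        "add_effects": {rule.effect},
    }


def plan(initial: set, goal: set, operators: list) -> list:
    """Naive forward chaining: apply applicable operators until the goal holds."""
    state, steps = set(initial), []
    for _ in range(len(operators) + 1):        # bound the search
        if goal <= state:
            return steps
        for op in operators:
            if op["preconditions"] <= state and not op["add_effects"] <= state:
                state |= op["add_effects"]
                steps.append(op["name"])
                break
    return steps if goal <= state else []


if __name__ == "__main__":
    # Rules a naive participant might state through the language interface.
    rules = [
        CausalRule(cause="press(button)", effect="open(door)"),
        CausalRule(cause="walk_through(door)", effect="at(goal_room)",
                   preconditions=("open(door)",)),
    ]
    ops = [rule_to_operator(r) for r in rules]
    print(plan(initial=set(), goal={"at(goal_room)"}, operators=ops))
    # -> ['press(button)', 'walk_through(door)']
```

Under these assumptions, two participant-stated rules ("pressing the button opens the door", "walking through the open door reaches the goal room") are enough for the planner to produce a two-step plan, and the same rules carry over unchanged to any other environment containing a button-operated door, which is the kind of generalization the abstract targets.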
Related papers
- Utilizing Human Behavior Modeling to Manipulate Explanations in AI-Assisted Decision Making: The Good, the Bad, and the Scary [19.884253335528317]
Recent advances in AI models have increased the integration of AI-based decision aids into the human decision making process.
To fully unlock the potential of AI-assisted decision making, researchers have computationally modeled how humans incorporate AI recommendations into their final decisions.
Providing AI explanations to human decision makers to help them rely on AI recommendations more appropriately has become a common practice.
arXiv Detail & Related papers (2024-11-02T18:33:28Z)
- Survey of User Interface Design and Interaction Techniques in Generative AI Applications [79.55963742878684]
We aim to create a compendium of different user-interaction patterns that can be used as a reference for designers and developers alike.
We also strive to lower the entry barrier for those attempting to learn more about the design of generative AI applications.
arXiv Detail & Related papers (2024-10-28T23:10:06Z)
- Explainable Human-AI Interaction: A Planning Perspective [32.477369282996385]
AI systems need to be explainable to the humans in the loop.
We will discuss how the AI agent can use mental models to either conform to human expectations, or change those expectations through explanatory communication.
While the main focus of the book is on cooperative scenarios, we will point out how the same mental models can be used for obfuscation and deception.
arXiv Detail & Related papers (2024-05-19T22:22:21Z)
- On the Challenges and Opportunities in Generative AI [135.2754367149689]
We argue that current large-scale generative AI models do not sufficiently address several fundamental issues that hinder their widespread adoption across domains.
In this work, we aim to identify key unresolved challenges in modern generative AI paradigms that should be tackled to further enhance their capabilities, versatility, and reliability.
arXiv Detail & Related papers (2024-02-28T15:19:33Z)
- Generative AI in Writing Research Papers: A New Type of Algorithmic Bias and Uncertainty in Scholarly Work [0.38850145898707145]
Large language models (LLMs) and generative AI tools present challenges in identifying and addressing biases.
Generative AI tools are susceptible to goal misgeneralization, hallucinations, and adversarial attacks such as red teaming prompts.
We find that incorporating generative AI in the process of writing research manuscripts introduces a new type of context-induced algorithmic bias.
arXiv Detail & Related papers (2023-12-04T04:05:04Z)
- User Behavior Simulation with Large Language Model based Agents [116.74368915420065]
We propose an LLM-based agent framework and design a sandbox environment to simulate real user behaviors.
Based on extensive experiments, we find that the simulated behaviors of our method are very close to those of real humans.
arXiv Detail & Related papers (2023-06-05T02:58:35Z)
- HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face [85.25054021362232]
Large language models (LLMs) have exhibited exceptional abilities in language understanding, generation, interaction, and reasoning.
LLMs could act as a controller to manage existing AI models to solve complicated AI tasks.
We present HuggingGPT, an LLM-powered agent that connects various AI models in machine learning communities.
arXiv Detail & Related papers (2023-03-30T17:48:28Z)
- Causal Discovery of Dynamic Models for Predicting Human Spatial Interactions [5.742409080817885]
We propose an application of causal discovery methods to model human-robot spatial interactions.
New methods and practical solutions are discussed to exploit, for the first time, a state-of-the-art causal discovery algorithm.
arXiv Detail & Related papers (2022-10-29T08:56:48Z)
- What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z)
- Towards Involving End-users in Interactive Human-in-the-loop AI Fairness [1.889930012459365]
Ensuring fairness in artificial intelligence (AI) is important to counteract bias and discrimination in far-reaching applications.
Recent work has started to investigate how humans judge fairness and how to support machine learning (ML) experts in making their AI models fairer.
Our work explores designing interpretable and interactive human-in-the-loop interfaces that allow ordinary end-users to identify potential fairness issues.
arXiv Detail & Related papers (2022-04-22T02:24:11Z)
- DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.