Proceedings of the AI-HRI Symposium at AAAI-FSS 2020
- URL: http://arxiv.org/abs/2010.13830v4
- Date: Mon, 14 Dec 2020 19:15:24 GMT
- Title: Proceedings of the AI-HRI Symposium at AAAI-FSS 2020
- Authors: Shelly Bagchi, Jason R. Wilson, Muneeb I. Ahmad, Christian Dondrup,
Zhao Han, Justin W. Hart, Matteo Leonetti, Katrin Lohan, Ross Mead, Emmanuel
Senft, Jivko Sinapov, Megan L. Zimmerman
- Abstract summary: The Artificial Intelligence (AI) for Human-Robot Interaction (HRI) Symposium has been a successful venue of discussion and collaboration since 2014.
Many of the past participants in AI-HRI have been or are now involved with research into trust in HRI.
How does trust apply to the specific situations we encounter in the AI-HRI sphere?
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Artificial Intelligence (AI) for Human-Robot Interaction (HRI) Symposium
has been a successful venue of discussion and collaboration since 2014. In that
time, the related topic of trust in robotics has been rapidly growing, with
major research efforts at universities and laboratories across the world.
Indeed, many of the past participants in AI-HRI have been or are now involved
with research into trust in HRI. While trust has no consensus definition, it is
regularly associated with predictability, reliability, inciting confidence, and
meeting expectations. Furthermore, it is generally believed that trust is
crucial for adoption of both AI and robotics, particularly when transitioning
technologies from the lab to industrial, social, and consumer applications.
However, how does trust apply to the specific situations we encounter in the
AI-HRI sphere? Is the notion of trust in AI the same as that in HRI? We see a
growing need for research that lives directly at the intersection of AI and HRI
that is serviced by this symposium. Over the course of the two-day meeting, we
propose to create a collaborative forum for discussion of current efforts in
trust for AI-HRI, with a sub-session focused on the related topic of
explainable AI (XAI) for HRI.
Related papers
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- Common (good) practices measuring trust in HRI [55.2480439325792]
Trust in robots is widely believed to be imperative for the adoption of robots into people's daily lives.
Researchers have been exploring how people trust robots in different ways.
Most roboticists agree that insufficient levels of trust lead to a risk of disengagement.
arXiv Detail & Related papers (2023-11-20T20:52:10Z)
- AI-HRI Brings New Dimensions to Human-Aware Design for Human-Aware AI [2.512827436728378]
We will explore how AI-HRI can change the way researchers think about human-aware AI.
There is no greater opportunity for sharing perspectives at the moment than human-aware AI.
arXiv Detail & Related papers (2022-10-21T09:25:06Z)
- Proceedings of the AI-HRI Symposium at AAAI-FSS 2022 [10.710184843122311]
The Artificial Intelligence for Human-Robot Interaction (HRI) Symposium has been a successful venue of discussion and collaboration since 2014.
This year, after a 2021 review of the AI-HRI community's achievements over the last decade, we are focusing on a visionary theme: exploring the future of AI-HRI.
Building on the success of past symposia, AI-HRI impacts a variety of communities and problems, and has pioneered discussions of recent trends and interests.
arXiv Detail & Related papers (2022-09-28T17:55:46Z)
- The Road to a Successful HRI: AI, Trust and ethicS-TRAITS [64.77385130665128]
The aim of this workshop is to foster the exchange of insights on past and ongoing research towards effective and long-lasting collaborations between humans and robots.
We particularly focus on AI techniques required to implement autonomous and proactive interactions.
arXiv Detail & Related papers (2022-06-07T11:12:45Z)
- AI-HRI 2021 Proceedings [11.93031750070303]
We aim to review the achievements of the AI-HRI community over the last decade and identify the challenges ahead.
This year there is no single theme to lead the symposium, and we encourage AI-HRI submissions from across disciplines and research interests.
In addition, acknowledging that ethics is an inherent part of human-robot interaction, we encourage submissions of work on ethics for HRI.
arXiv Detail & Related papers (2021-09-22T16:54:39Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- The Road to a Successful HRI: AI, Trust and ethicS-TRAITS [65.60507052509406]
The aim of this workshop is to give researchers from academia and industry the opportunity to discuss the inter- and multi-disciplinary nature of the relationships between people and robots.
arXiv Detail & Related papers (2021-03-23T16:52:12Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.