Improving Competence for Reliable Autonomy
- URL: http://arxiv.org/abs/2007.11740v1
- Date: Thu, 23 Jul 2020 01:31:28 GMT
- Title: Improving Competence for Reliable Autonomy
- Authors: Connor Basich (University of Massachusetts Amherst), Justin Svegliato
(University of Massachusetts Amherst), Kyle Hollins Wray (Alliance Innovation
Lab Silicon Valley), Stefan J. Witwicki (Alliance Innovation Lab Silicon
Valley), Shlomo Zilberstein (University of Massachusetts Amherst)
- Abstract summary: We propose a method for improving the competence of a system over the course of its deployment.
We specifically focus on a class of semi-autonomous systems known as competence-aware systems.
Our method exploits such feedback to identify important state features missing from the system's initial model.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given the complexity of real-world, unstructured domains, it is often
impossible or impractical to design models that include every feature needed to
handle all possible scenarios that an autonomous system may encounter. For an
autonomous system to be reliable in such domains, it should have the ability to
improve its competence online. In this paper, we propose a method for improving
the competence of a system over the course of its deployment. We specifically
focus on a class of semi-autonomous systems known as competence-aware systems
that model their own competence -- the optimal extent of autonomy to use in any
given situation -- and learn this competence over time from feedback received
through interactions with a human authority. Our method exploits such feedback
to identify important state features missing from the system's initial model,
and incorporates them into its state representation. The result is an agent
that better predicts human involvement, leading to improvements in its
competence and reliability, and as a result, its overall performance.
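The feature-discovery step the abstract describes can be sketched in code. The sketch below is a hypothetical illustration, not the paper's actual algorithm: it scores each candidate feature by the information it carries about whether the human intervened, and returns the best-scoring candidate for inclusion in the state representation. The episode format, feature names, and threshold are all assumptions.

```python
from collections import defaultdict
from math import log2

def entropy(labels):
    """Shannon entropy of a list of binary labels (human intervened or not)."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def information_gain(episodes, feature):
    """Reduction in entropy of the intervention label when splitting on `feature`."""
    labels = [e["intervened"] for e in episodes]
    base = entropy(labels)
    groups = defaultdict(list)
    for e in episodes:
        groups[e["context"][feature]].append(e["intervened"])
    split = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
    return base - split

def select_missing_feature(episodes, candidate_features, threshold=0.1):
    """Return the candidate feature that best explains human interventions,
    or None if no candidate clears the threshold."""
    best, best_gain = None, threshold
    for f in candidate_features:
        gain = information_gain(episodes, f)
        if gain > best_gain:
            best, best_gain = f, gain
    return best
```

For example, if the human always takes over when it is raining but the system's model lacks a weather feature, a candidate feature `raining` logged alongside each episode would score highly and be promoted into the state representation, while an irrelevant candidate such as `weekday` would not.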
Related papers
- Latent-Predictive Empowerment: Measuring Empowerment without a Simulator [56.53777237504011]
We present Latent-Predictive Empowerment (LPE), an algorithm that can compute empowerment in a more practical manner.
LPE learns large skillsets by maximizing an objective that is a principled replacement for the mutual information between skills and states.
arXiv Detail & Related papers (2024-10-15T00:41:18Z) - Self-consistent Validation for Machine Learning Electronic Structure [81.54661501506185]
The method integrates machine learning with self-consistent field methods to achieve both low validation cost and interpretability.
This, in turn, enables exploration of the model's ability with active learning and instills confidence in its integration into real-world studies.
arXiv Detail & Related papers (2024-02-15T18:41:35Z) - A Case for Competent AI Systems $-$ A Concept Note [0.3626013617212666]
This note explores the concept of capability within AI systems, representing what the system is expected to deliver.
The achievement of this capability may be hindered by deficiencies in implementation and testing.
A central challenge arises in elucidating the competency of an AI system to execute tasks effectively.
arXiv Detail & Related papers (2023-11-28T09:21:03Z) - Collective Reasoning for Safe Autonomous Systems [0.0]
We introduce the idea of increasing the reliability of autonomous systems by relying on collective intelligence.
We define and formalize design rules for collective reasoning to achieve collaboratively increased safety, trustworthiness, and good decision making.
arXiv Detail & Related papers (2023-05-18T20:37:32Z) - A Domain-Agnostic Approach for Characterization of Lifelong Learning
Systems [128.63953314853327]
"Lifelong Learning" systems are capable of 1) Continuous Learning, 2) Transfer and Adaptation, and 3) Scalability.
We show that this suite of metrics can inform the development of varied and complex Lifelong Learning systems.
arXiv Detail & Related papers (2023-01-18T21:58:54Z) - AAAI 2022 Fall Symposium: Lessons Learned for Autonomous Assessment of
Machine Abilities (LLAAMA) [1.157139586810131]
Modern civilian and military systems have created a demand for sophisticated intelligent autonomous machines.
These newer forms of intelligent autonomy raise questions about when and how communicating the operational intent and assessments of the actual capabilities of autonomous agents impacts overall performance.
This symposium examines the possibilities for enabling intelligent autonomous systems to self-assess and communicate their ability to effectively execute assigned tasks.
arXiv Detail & Related papers (2023-01-13T03:47:38Z) - Systems Challenges for Trustworthy Embodied Systems [0.0]
A new generation of increasingly autonomous and self-learning systems, which we call embodied systems, is about to be developed.
It is crucial to coordinate the behavior of embodied systems in a beneficial manner, ensure their compatibility with our human-centered social values, and design verifiably safe and reliable human-machine interaction.
We argue that traditional systems engineering is coming to a climacteric in the shift from embedded to embodied systems, and must contend with assuring the trustworthiness of dynamic federations of situationally aware, intent-driven, explorative, ever-evolving, largely non-predictable, and increasingly autonomous embodied systems.
arXiv Detail & Related papers (2022-01-10T15:52:17Z) - Multi Agent System for Machine Learning Under Uncertainty in Cyber
Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various manufacturing use cases.
Most research has focused on maximising predictive accuracy without addressing the uncertainty associated with it.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria of a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z) - Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, limited ability to explain decisions, and bias in the training data are some of the most prominent limitations.
We propose the tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z) - Distributed and Democratized Learning: Philosophy and Research
Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z) - Learning to Optimize Autonomy in Competence-Aware Systems [32.3596917475882]
We propose an introspective model of autonomy that is learned and updated online through experience.
We define a competence-aware system (CAS) that explicitly models its own proficiency at different levels of autonomy and the available human feedback.
We analyze the convergence properties of CAS and provide experimental results for robot delivery and autonomous driving domains.
arXiv Detail & Related papers (2020-03-17T14:31:45Z)
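The competence-aware system (CAS) in the last entry above, which learns its proficiency at different levels of autonomy from human feedback, can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's formulation: the level names, the Beta(1, 1) prior, and the approval threshold are all assumptions.

```python
from collections import defaultdict

# Autonomy levels from least to most autonomous (hypothetical labels).
LEVELS = ["no_autonomy", "supervised", "verified", "unsupervised"]

class CompetenceModel:
    """Tracks human feedback per (state, autonomy level) and picks the
    highest level whose estimated approval rate clears a threshold."""

    def __init__(self, threshold=0.9, prior=(1, 1)):
        self.threshold = threshold
        self.prior = prior  # Beta prior: (pseudo-approvals, pseudo-disapprovals)
        self.counts = defaultdict(lambda: [0, 0])  # (state, level) -> [ok, not_ok]

    def record(self, state, level, approved):
        """Log one piece of human feedback for acting at `level` in `state`."""
        ok, bad = self.counts[(state, level)]
        self.counts[(state, level)] = [ok + approved, bad + (not approved)]

    def approval_rate(self, state, level):
        """Posterior mean approval probability under the Beta prior."""
        ok, bad = self.counts[(state, level)]
        a, b = self.prior
        return (ok + a) / (ok + bad + a + b)

    def choose_level(self, state):
        """Scan from most to least autonomous; fall back to full supervision."""
        for level in reversed(LEVELS):
            if self.approval_rate(state, level) >= self.threshold:
                return level
        return LEVELS[0]
```

Starting from an uninformative prior, such a model keeps the human in the loop everywhere, and only raises the autonomy level in a state once enough approving feedback has accumulated there.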
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.