A Brief Guide to Designing and Evaluating Human-Centered Interactive
Machine Learning
- URL: http://arxiv.org/abs/2204.09622v1
- Date: Wed, 20 Apr 2022 17:05:09 GMT
- Title: A Brief Guide to Designing and Evaluating Human-Centered Interactive
Machine Learning
- Authors: Kory W. Mathewson, Patrick M. Pilarski
- Abstract summary: Interactive machine learning (IML) is a field of research that explores how to leverage both human and computational abilities in decision making systems.
This guide is intended to be used by machine learning practitioners who are responsible for the health, safety, and well-being of interacting humans.
- Score: 3.685480240534955
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Interactive machine learning (IML) is a field of research that explores how
to leverage both human and computational abilities in decision making systems.
IML represents a collaboration between multiple complementary human and machine
intelligent systems working as a team, each with their own unique abilities and
limitations. This teamwork might mean that both systems take actions at the
same time, or in sequence. Two major open research questions in the field of
IML are: "How should we design systems that can learn to make better decisions
over time with human interaction?" and "How should we evaluate the design and
deployment of such systems?" A lack of appropriate consideration for the humans
involved can lead to problematic system behaviour, and issues of fairness,
accountability, and transparency. Thus, our goal with this work is to present a
human-centred guide to designing and evaluating IML systems while mitigating
risks. This guide is intended to be used by machine learning practitioners who
are responsible for the health, safety, and well-being of interacting humans.
An obligation of responsibility for public interaction means acting with
integrity, honesty, and fairness, and abiding by applicable legal statutes. With
these values and principles in mind, we as a machine learning research
community can better achieve goals of augmenting human skills and abilities.
This practical guide therefore aims to support many of the responsible
decisions necessary throughout the iterative design, development, and
dissemination of IML systems.
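
To ground the first open question ("How should we design systems that can learn to make better decisions over time with human interaction?"), here is a minimal sketch of an interactive learning loop in which a simple agent improves its action choices from scalar human feedback. This illustrates the general IML idea only, not a method from the paper; the action set, the epsilon-greedy rule, and the simulated human are all assumptions made for the example.

```python
# Minimal IML loop sketch (illustrative, not from the paper): an
# epsilon-greedy agent refines its action-value estimates from
# scalar human feedback over repeated interactions.
import random

N_ACTIONS = 3
EPSILON = 0.1                 # exploration rate
value = [0.0] * N_ACTIONS     # running estimate of each action's quality
counts = [0] * N_ACTIONS      # how often each action has been tried

def choose_action():
    """Usually exploit the best-known action; sometimes explore."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: value[a])

def update(action, feedback):
    """Incremental mean update from a human-provided reward in [-1, 1]."""
    counts[action] += 1
    value[action] += (feedback - value[action]) / counts[action]

def simulated_human(action):
    """Stand-in for a real human rating each decision (prefers action 2)."""
    return 1.0 if action == 2 else -0.5

for step in range(200):
    a = choose_action()
    update(a, simulated_human(a))

print("learned action values:", [round(v, 2) for v in value])
```

In a real deployment, simulated_human would be replaced by an actual interacting person, and the guide's considerations for health, safety, and well-being would govern how, and how often, that feedback is solicited.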
Related papers
- CogErgLLM: Exploring Large Language Model Systems Design Perspective Using Cognitive Ergonomics [0.0]
Integrating cognitive ergonomics with LLMs is crucial for improving safety, reliability, and user satisfaction in human-AI interactions.
Current LLM designs often lack this integration, resulting in systems that may not fully align with human cognitive capabilities and limitations.
arXiv Detail & Related papers (2024-07-03T07:59:52Z)
- The Impossibility of Fair LLMs [59.424918263776284]
The need for fair AI is increasingly clear in the era of large language models (LLMs).
We review the technical frameworks that machine learning researchers have used to evaluate fairness.
We develop guidelines for the more realistic goal of achieving fairness in particular use cases.
arXiv Detail & Related papers (2024-05-28T04:36:15Z)
- Human-Centred Learning Analytics and AI in Education: a Systematic Literature Review [0.0]
Excluding stakeholders from the design process can lead to mistrust and inadequately aligned tools.
Despite a shift towards human-centred design, there remain gaps in our understanding of the importance of human control, safety, reliability, and trustworthiness.
arXiv Detail & Related papers (2023-12-20T04:15:01Z)
- Adaptive User-centered Neuro-symbolic Learning for Multimodal Interaction with Autonomous Systems [0.0]
Recent advances in machine learning have enabled autonomous systems to perceive and comprehend objects.
It is essential to consider both the explicit teaching provided by humans and the implicit teaching obtained by observing human behavior.
We argue for considering both types of inputs, as well as human-in-the-loop and incremental learning techniques.
arXiv Detail & Related papers (2023-09-11T19:35:12Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- Interpreting Neural Policies with Disentangled Tree Representations [58.769048492254555]
We study interpretability of compact neural policies through the lens of disentangled representation.
We leverage decision trees to obtain factors of variation for disentanglement in robot learning; a simplified distillation sketch appears after this list.
We introduce interpretability metrics that measure disentanglement of learned neural dynamics.
arXiv Detail & Related papers (2022-10-13T01:10:41Z)
- One-way Explainability Isn't The Message [2.618757282404254]
We argue that requirements on both human and machine in this context are significantly different.
The design of such human-machine systems should be driven by repeated, two-way intelligibility of information.
We propose operational principles -- we call them Intelligibility Axioms -- to guide the design of a collaborative decision-support system.
arXiv Detail & Related papers (2022-05-05T09:15:53Z)
- Self-directed Machine Learning [86.3709575146414]
In education science, self-directed learning has been shown to be more effective than passive teacher-guided learning.
We introduce the principal concept of Self-directed Machine Learning (SDML) and propose a framework for SDML.
Our proposed SDML process benefits from self task selection, self data selection, self model selection, self optimization strategy selection, and self evaluation metric selection; a toy model-selection sketch appears after this list.
arXiv Detail & Related papers (2022-01-04T18:32:06Z)
- Meaningful human control over AI systems: beyond talking the talk [8.351027101823705]
We identify four properties which AI-based systems must have to be under meaningful human control.
First, a system in which humans and AI algorithms interact should have an explicitly defined domain of morally loaded situations.
Second, humans and AI agents within the system should have appropriate and mutually compatible representations.
Third, responsibility attributed to a human should be commensurate with that human's ability and authority to control the system.
arXiv Detail & Related papers (2021-11-25T11:05:37Z)
- Learning to Complement Humans [67.38348247794949]
A rising vision for AI in the open world centers on the development of systems that can complement humans for perceptual, diagnostic, and reasoning tasks.
We demonstrate how an end-to-end learning strategy can be harnessed to optimize the combined performance of human-machine teams; a simplified deferral sketch appears after this list.
arXiv Detail & Related papers (2020-05-01T20:00:23Z)
- Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z)
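
For "Interpreting Neural Policies with Disentangled Tree Representations", the sketch below shows a much-simplified version of the general idea of inspecting a neural policy with a decision tree: distilling the policy's state-to-action mapping into a shallow tree whose split rules are directly readable. The stand-in policy, the sampled states, and the tree depth are assumptions for the example; the paper's actual method, which targets disentangled representations, is more involved.

```python
# Simplified sketch: distill a (stand-in) neural policy into a shallow
# decision tree and inspect its split rules. Not the paper's exact method.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

def neural_policy(states):
    # Hypothetical stand-in for a trained network: the action depends on
    # the signs of the first two state features.
    return (states[:, 0] > 0).astype(int) + (states[:, 1] > 0).astype(int)

states = rng.normal(size=(5000, 4))   # sampled observations
actions = neural_policy(states)       # the policy's decisions on them

tree = DecisionTreeClassifier(max_depth=3).fit(states, actions)
print("fidelity to the policy:", tree.score(states, actions))
print(export_text(tree, feature_names=[f"s{i}" for i in range(4)]))
```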
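
For "Self-directed Machine Learning", here is a toy sketch of one SDML ingredient, self model selection: the system itself scores several candidate models by cross-validation and keeps the winner, with no human specifying the model. The dataset, candidates, and metric are illustrative assumptions; the full SDML framework additionally covers self task, data, optimization-strategy, and evaluation-metric selection.

```python
# Toy self model selection (one ingredient of SDML; every choice here is
# an illustrative assumption, not the paper's framework).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=10, random_state=0)

candidates = {
    "logistic": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(max_depth=5),
    "knn": KNeighborsClassifier(),
}

# The system scores each candidate itself and keeps the best one.
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(f"self-selected model: {best} (cv accuracy {scores[best]:.3f})")
```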
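
For "Learning to Complement Humans", the sketch below substitutes a simple confidence-thresholded deferral rule for the paper's end-to-end strategy: the machine answers when it is confident and defers to a (simulated) human otherwise, with the threshold tuned to maximize team accuracy on held-out data. The dataset, the 90%-accurate simulated human, and the threshold sweep are assumptions for the example.

```python
# Simplified human-machine teaming sketch: tune a deferral threshold to
# maximize combined accuracy. A stand-in for end-to-end team optimization.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=12, flip_y=0.15,
                           random_state=1)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=1)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
conf = model.predict_proba(X_val).max(axis=1)   # machine confidence
machine_pred = model.predict(X_val)

# Simulated human: correct 90% of the time on this binary task.
rng = np.random.default_rng(1)
human_pred = np.where(rng.random(len(y_val)) < 0.9, y_val, 1 - y_val)

best_t, best_acc = 0.5, 0.0
for t in np.linspace(0.5, 1.0, 26):             # sweep deferral thresholds
    team = np.where(conf >= t, machine_pred, human_pred)
    acc = float((team == y_val).mean())
    if acc > best_acc:
        best_t, best_acc = t, acc
print(f"defer when confidence < {best_t:.2f}; team accuracy {best_acc:.3f}")
```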
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.