Video Surveillance System Incorporating Expert Decision-making Process: A Case Study on Detecting Calving Signs in Cattle
- URL: http://arxiv.org/abs/2301.03926v1
- Date: Tue, 10 Jan 2023 12:06:49 GMT
- Title: Video Surveillance System Incorporating Expert Decision-making Process: A Case Study on Detecting Calving Signs in Cattle
- Authors: Ryosuke Hyodo, Susumu Saito, Teppei Nakano, Makoto Akabane, Ryoichi Kasuga, Tetsuji Ogawa
- Abstract summary: In this study, we examine the framework of a video surveillance AI system that presents the reasoning behind predictions by incorporating experts' decision-making processes with rich domain knowledge of the notification target.
In our case study, we designed a system for detecting signs of calving in cattle based on the proposed framework and evaluated the system through a user study with people involved in livestock farming.
- Score: 5.80793470875286
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Through a user study in the field of livestock farming, we verify the
effectiveness of an XAI framework for video surveillance systems. The systems
can be made interpretable by incorporating experts' decision-making processes.
AI systems are becoming increasingly common in real-world applications,
especially in fields related to human decision-making, and their interpretability
is necessary. However, there are still relatively few standard methods for
assessing and addressing the interpretability of machine learning-based systems
in real-world applications. In this study, we examine the framework of a video
surveillance AI system that presents the reasoning behind predictions by
incorporating experts' decision-making processes with rich domain knowledge of
the notification target. While general black-box AI systems can only present
final probability values, the proposed framework can present information
relevant to experts' decisions, which is expected to be more helpful for their
decision-making. In our case study, we designed a system for detecting signs of
calving in cattle based on the proposed framework and evaluated the system
through a user study (N=6) with people involved in livestock farming. A
comparison with the black-box AI system revealed that many participants
referred to the presented reasons for the prediction results, and five out of
six participants selected the proposed system as the system they would like to
use in the future. It also became clear that the user interface must be designed
with the presentation of the reasons for predictions in mind.
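To make the contrast with a black-box system concrete, here is a minimal Python sketch of the framework's core idea: detectors first score expert-interpretable cues, and a second stage aggregates those scores into the final calving probability, so a notification can carry both. The cue names, weights, and stub detector below are illustrative assumptions, not the authors' implementation.

```python
"""A minimal sketch (not the authors' implementation) of the two-stage idea:
predict expert-interpretable cues first, then aggregate them into a calving
probability so both can be shown to the user."""
from dataclasses import dataclass


@dataclass
class CueScores:
    # Hypothetical cue names for illustration; the real cue set would come
    # from interviews with livestock experts.
    tail_raising: float
    restlessness: float
    posture_changes: float


def detect_cues(video_clip) -> CueScores:
    # Stage 1: per-cue detectors over the video. Stubbed with fixed values
    # here; in practice each score would come from a learned classifier.
    return CueScores(tail_raising=0.8, restlessness=0.6, posture_changes=0.7)


def aggregate(cues: CueScores) -> float:
    # Stage 2: combine cue scores into one probability. A fixed weighted
    # average is an assumption; any calibrated combiner would do.
    return 0.5 * cues.tail_raising + 0.3 * cues.restlessness + 0.2 * cues.posture_changes


def notify(video_clip) -> dict:
    cues = detect_cues(video_clip)
    # Unlike a black-box system, the alert carries the per-cue scores
    # (the "reasons") alongside the final probability.
    return {"calving_probability": aggregate(cues), "reasons": cues}


print(notify(video_clip=None))
```

Presenting the per-cue scores is what lets an expert check an alert against their own reasoning rather than trusting a bare probability.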
Related papers
- Predictable Artificial Intelligence [77.1127726638209]
This paper introduces the ideas and challenges of Predictable AI.
It explores the ways in which we can anticipate key validity indicators of present and future AI ecosystems.
We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems.
arXiv Detail & Related papers (2023-10-09T21:36:21Z)
- Painting the black box white: experimental findings from applying XAI to an ECG reading setting [0.13124513975412253]
The shift from symbolic AI systems to black-box, sub-symbolic, and statistical ones has motivated a rapid increase in interest in explainable AI (XAI).
We focus on the cognitive dimension of users' perception of explanations and XAI systems.
arXiv Detail & Related papers (2022-10-27T07:47:50Z)
- Explainable Intrusion Detection Systems (X-IDS): A Survey of Current Methods, Challenges, and Opportunities [0.0]
Intrusion Detection Systems (IDS) have received widespread adoption due to their ability to handle vast amounts of data with a high prediction accuracy.
IDSs designed using Deep Learning (DL) techniques are often treated as black-box models and do not provide a justification for their predictions.
This survey reviews the state-of-the-art in explainable AI (XAI) for IDS, its current challenges, and discusses how these challenges span to the design of an X-IDS.
arXiv Detail & Related papers (2022-07-13T14:31:46Z)
- Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various use cases in manufacturing.
Most research has focused on maximising predictive accuracy without addressing the uncertainty associated with those predictions.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria for a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z)
- Decision Rule Elicitation for Domain Adaptation [93.02675868486932]
Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves domain adaptation of the algorithm and helps to propagate experts' knowledge to the AI model.
arXiv Detail & Related papers (2021-02-23T08:07:22Z)
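One way to read the rule-elicitation idea above (a hypothetical sketch, not the paper's method) is that each elicited rule becomes a predicate whose firing is appended to the feature vector, giving the model a direct channel for expert knowledge in a new domain. The rule and feature names below are invented for illustration.

```python
# Toy sketch of decision rule elicitation: an expert labels an instance and
# also states a rule; the rule's firing becomes an extra feature a model can
# learn from when adapting to a new domain.
from typing import Callable, Dict, List

Rule = Callable[[Dict[str, float]], bool]

# Hypothetical elicited rule: "if systolic blood pressure is high and the
# patient is over 60, flag the case".
elicited_rules: List[Rule] = [
    lambda x: x["systolic_bp"] > 140 and x["age"] > 60,
]


def featurize(x: Dict[str, float]) -> List[float]:
    # Raw features plus one binary feature per elicited rule.
    return [x["systolic_bp"], x["age"]] + [float(r(x)) for r in elicited_rules]


print(featurize({"systolic_bp": 150.0, "age": 67.0}))  # [150.0, 67.0, 1.0]
```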
- AHMoSe: A Knowledge-Based Visual Support System for Selecting Regression Machine Learning Models [2.9998889086656577]
AHMoSe is a visual support system that allows domain experts to better understand, diagnose and compare different regression models.
We describe a use case scenario in the viticulture domain, grape quality prediction, where the system enables users to diagnose and select prediction models that perform better.
arXiv Detail & Related papers (2021-01-28T12:55:06Z)
- Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z)
- An Uncertainty-based Human-in-the-loop System for Industrial Tool Wear Analysis [68.8204255655161]
We show that uncertainty measures based on Monte-Carlo dropout in the context of a human-in-the-loop system increase the system's transparency and performance.
A simulation study demonstrates that the uncertainty-based human-in-the-loop system increases performance for different levels of human involvement.
arXiv Detail & Related papers (2020-07-14T15:47:37Z)
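Monte-Carlo dropout, the uncertainty measure named above, is a standard technique: dropout is kept active at inference time and the network is sampled repeatedly, with the spread of the samples serving as the uncertainty signal that decides when to involve the human. A minimal PyTorch sketch follows; the architecture, dropout rate, and sample count are placeholder choices, not the paper's tool-wear setup.

```python
# Minimal Monte-Carlo dropout sketch: keep dropout active at inference and
# sample the network repeatedly to get a predictive mean and a spread.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(64, 2)
)


def mc_dropout_predict(x: torch.Tensor, n_samples: int = 30):
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    # Mean over samples is the prediction; std is the uncertainty estimate.
    return probs.mean(dim=0), probs.std(dim=0)


mean, std = mc_dropout_predict(torch.randn(1, 16))
print(mean, std)
```

Cases with a large standard deviation would be the ones routed to the human expert.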
- A systematic review and taxonomy of explanations in decision support and recommender systems [13.224071661974596]
We systematically review the literature on explanations in advice-giving systems.
We derive a novel comprehensive taxonomy of aspects to be considered when designing explanation facilities.
arXiv Detail & Related papers (2020-06-15T18:19:20Z)
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.