Learning from AI: An Interactive Learning Method Using a DNN Model
Incorporating Expert Knowledge as a Teacher
- URL: http://arxiv.org/abs/2306.02257v1
- Date: Sun, 4 Jun 2023 04:22:55 GMT
- Title: Learning from AI: An Interactive Learning Method Using a DNN Model
Incorporating Expert Knowledge as a Teacher
- Authors: Kohei Hattori, Tsubasa Hirakawa, Takayoshi Yamashita, Hironobu
Fujiyoshi
- Abstract summary: Visual explanation is an approach for visualizing the grounds of judgments made by deep-learning models.
A method that incorporates expert human knowledge in the model via an attention map is proposed.
The results of an evaluation experiment with subjects show that learning using the proposed method is more efficient than the conventional method.
- Score: 7.964052580720558
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual explanation is an approach for visualizing the grounds on
which a deep-learning model bases its judgments; by visualizing an attention
map, the grounds for the judgment on a given input can be interpreted
visually. For deep-learning models that output erroneous decision-making
grounds, a method has been proposed that incorporates expert human knowledge
into the model via an attention map in a manner that improves both
explanatory power and recognition accuracy. In this study, building on a
deep-learning model that incorporates the knowledge of experts, we propose a
method by which a learner "learns from AI" the grounds for its decisions. An
"attention branch network" (ABN), fine-tuned with attention maps modified by
experts, is prepared as a teacher. Using an interactive editing tool for the
fine-tuned ABN and its attention maps, the learner learns by editing the
attention maps and observing how the inference results change. By repeatedly
editing the attention maps and running inference until the correct
recognition result is output, the learner acquires the grounds for the
expert's judgments embedded in the ABN. An evaluation experiment with human
subjects shows that learning with the proposed method is more efficient than
learning with the conventional method.
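The edit-and-infer loop described in the abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the authors' implementation: `abn_infer` is a toy stand-in for the fine-tuned ABN, the "image" and attention maps are 4x4 grids, and the learner's interactive edits are simulated by shifting attention one column at a time toward the left half, which here plays the role of the expert's correct grounds.

```python
# Toy 4x4 input with signal everywhere; in this sketch, the correct
# grounds for class 0 (per the expert) lie in the left half (cols 0-1).
IMAGE = [[2.0, 2.0, 1.0, 1.0] for _ in range(4)]

def abn_infer(image, attention):
    # Hypothetical stand-in for the fine-tuned ABN: the attention map
    # gates the input, and a toy head votes class 0 if the gated mass
    # concentrates on the left half, class 1 otherwise.
    left = sum(image[r][c] * attention[r][c] for r in range(4) for c in (0, 1))
    right = sum(image[r][c] * attention[r][c] for r in range(4) for c in (2, 3))
    return 0 if left > right else 1

def learn_from_ai(image, true_label, max_steps=8):
    # The learner starts by attending to the right half (the wrong
    # grounds) and repeatedly edits the map, shifting the rightmost
    # attended column one step left, until the inference flips.
    attention = [[0.0, 0.0, 1.0, 1.0] for _ in range(4)]
    history = []
    for _ in range(max_steps):
        pred = abn_infer(image, attention)
        history.append(([row[:] for row in attention], pred))
        if pred == true_label:
            break
        rightmost = max(c for c in range(4) if any(row[c] for row in attention))
        for row in attention:
            row[rightmost] = 0.0
            row[rightmost - 1] = 1.0
    return history

history = learn_from_ai(IMAGE, true_label=0)
final_attention, final_pred = history[-1]
print(final_pred)  # prints 0: the inference now matches the correct class
```

The returned history mirrors what the learner sees in the interactive tool: each edit of the attention map, paired with the inference it produces, until attention lands on the grounds embedded in the teacher model.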
Related papers
- Advancing Personalized Learning Analysis via an Innovative Domain Knowledge Informed Attention-based Knowledge Tracing Method [0.0]
We propose an innovative attention-based method that effectively incorporates domain knowledge of the knowledge-concept routes in a given curriculum.
We leverage the XES3G5M dataset to evaluate our proposed method and compare its performance against seven state-of-the-art deep-learning models.
arXiv Detail & Related papers (2025-01-09T22:41:50Z)
- Learner Attentiveness and Engagement Analysis in Online Education Using Computer Vision [3.449808359602251]
This research presents a computer vision-based approach to analyze and quantify learners' attentiveness, engagement, and other affective states within online learning scenarios.
A machine learning-based algorithm is developed on top of the classification model that outputs a comprehensive attentiveness index of the learners.
An end-to-end pipeline is proposed through which learners' live video feed is processed, providing detailed attentiveness analytics of the learners to the instructors.
arXiv Detail & Related papers (2024-11-30T10:54:08Z)
- Interaction as Explanation: A User Interaction-based Method for Explaining Image Classification Models [1.3597551064547502]
In computer vision, explainable AI (xAI) methods seek to mitigate the 'black-box' problem.
Traditional xAI methods concentrate on visualizing input features that influence model predictions.
We present an interaction-based xAI method that enhances user comprehension of image classification models through their interaction.
arXiv Detail & Related papers (2024-04-15T14:26:00Z)
- A Survey of Explainable Knowledge Tracing [14.472784840283099]
This paper thoroughly analyzes the interpretability of KT algorithms.
Current evaluation methods for explainable knowledge tracing are lacking.
This paper offers some insights into evaluation methods from the perspective of educational stakeholders.
arXiv Detail & Related papers (2024-03-12T03:17:59Z)
- Exploring Federated Unlearning: Analysis, Comparison, and Insights [101.64910079905566]
Federated unlearning enables the selective removal of data from models trained in federated systems.
This paper surveys existing federated unlearning approaches, examining their algorithmic efficiency, impact on model accuracy, and effectiveness in preserving privacy.
We propose the OpenFederatedUnlearning framework, a unified benchmark for evaluating federated unlearning methods.
arXiv Detail & Related papers (2023-10-30T01:34:33Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- A Unified End-to-End Retriever-Reader Framework for Knowledge-based VQA [67.75989848202343]
This paper presents a unified end-to-end retriever-reader framework towards knowledge-based VQA.
We shed light on the multi-modal implicit knowledge from vision-language pre-training models to mine its potential in knowledge reasoning.
Our scheme not only provides guidance for knowledge retrieval but also drops instances that are potentially error-prone for question answering.
arXiv Detail & Related papers (2022-06-30T02:35:04Z)
- Decision Rule Elicitation for Domain Adaptation [93.02675868486932]
Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves domain adaptation of the algorithm and helps propagate the experts' knowledge to the AI model.
arXiv Detail & Related papers (2021-02-23T08:07:22Z)
- Introspective Learning by Distilling Knowledge from Online Self-explanation [36.91213895208838]
We propose an implementation of introspective learning by distilling knowledge from online self-explanations.
The models trained with the introspective learning procedure outperform the ones trained with the standard learning procedure.
arXiv Detail & Related papers (2020-09-19T02:05:32Z)
- Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows benefits of AI explanation as interfaces for machine teaching--supporting trust calibration and enabling rich forms of teaching feedback, and potential drawbacks--anchoring effect with the model judgment and cognitive workload.
arXiv Detail & Related papers (2020-01-24T22:52:18Z)
- A Review on Intelligent Object Perception Methods Combining Knowledge-based Reasoning and Machine Learning [60.335974351919816]
Object perception is a fundamental sub-field of Computer Vision.
Recent works seek ways to integrate knowledge engineering in order to expand the level of intelligence of the visual interpretation of objects.
arXiv Detail & Related papers (2019-12-26T13:26:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.