Aligning Human and Machine Attention for Enhanced Supervised Learning
- URL: http://arxiv.org/abs/2502.06811v2
- Date: Wed, 19 Feb 2025 20:57:37 GMT
- Title: Aligning Human and Machine Attention for Enhanced Supervised Learning
- Authors: Avihay Chriqui, Inbal Yahav, Dov Teeni, Ahmed Abbasi
- Abstract summary: Humans continue to outperform machines in certain learning tasks.
It seems plausible that machine performance could be enriched by aligning machine attention with human attention mechanisms.
This paper proposes a new approach called Human-Machine Attention Learning (HuMAL).
- Score: 2.3311636727756055
- Abstract: Attention, or prioritization of certain information items over others, is a critical element of any learning process, for both humans and machines. Given that humans continue to outperform machines in certain learning tasks, it seems plausible that machine performance could be enriched by aligning machine attention with human attention mechanisms -- yet research on this topic is sparse and has achieved only limited success. This paper proposes a new approach to address this gap, called Human-Machine Attention Learning (HuMAL). This approach involves reliance on data annotated by humans to reflect their self-perceived attention during specific tasks. We evaluate several alternative strategies for integrating such human attention data into machine learning (ML) algorithms, using a sentiment analysis task (review data from Yelp) and a personality-type classification task (data from myPersonality). The best-performing HuMAL strategy significantly enhances the task performance of fine-tuned transformer models (BERT, as well as GPT-2 and XLNET), and the benefit is particularly pronounced under challenging conditions of imbalanced or sparse labeled data. This research contributes to a deeper understanding of strategies for integrating human attention into ML models and highlights the potential of leveraging human cognition to augment ML in real-world applications.
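The abstract does not spell out the integration strategies it evaluates. Purely as an illustration of the general idea, the sketch below fine-tunes BERT with an auxiliary loss that pulls the model's [CLS] attention toward human-annotated token importance. The KL formulation, the `alpha` weight, and the `human_attn` tensor are assumptions, not the paper's actual method.
```python
import torch.nn.functional as F
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased")

def humal_loss(input_ids, attention_mask, labels, human_attn, alpha=0.5):
    """human_attn: (batch, seq) human importance scores summing to 1 per row.
    alpha is a hypothetical weight on the attention-alignment term."""
    out = model(input_ids=input_ids, attention_mask=attention_mask,
                labels=labels, output_attentions=True)
    # Machine attention: last layer's attention from the [CLS] query,
    # averaged over heads -> (batch, seq)
    machine_attn = out.attentions[-1][:, :, 0, :].mean(dim=1)
    machine_attn = machine_attn * attention_mask  # zero out padding
    machine_attn = machine_attn / machine_attn.sum(-1, keepdim=True).clamp(min=1e-8)
    # KL divergence pulls machine attention toward the human distribution
    align = F.kl_div(machine_attn.clamp(min=1e-8).log(), human_attn,
                     reduction="batchmean")
    return out.loss + alpha * align
```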
Related papers
- Human-In-The-Loop Machine Learning for Safe and Ethical Autonomous Vehicles: Principles, Challenges, and Opportunities [33.853994070508485]
We focus on Curriculum Learning (CL), Human-In-The-Loop Reinforcement Learning (HITL-RL), Active Learning (AL), and ethical principles.
In CL, human experts systematically train ML models by starting with simple tasks and gradually progressing to more difficult ones.
HITL-RL significantly enhances the RL process by incorporating human input through techniques like reward shaping, action injection, and interactive learning.
AL streamlines the annotation process by targeting specific instances that need to be labeled with human oversight.
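Of the techniques surveyed, active learning is the easiest to illustrate compactly. The following is a generic least-confidence sampler, not the survey's specific formulation:
```python
import numpy as np

def select_for_labeling(probs: np.ndarray, budget: int) -> np.ndarray:
    """probs: (n_unlabeled, n_classes) predicted class probabilities.
    Returns indices of the `budget` instances the model is least confident on."""
    confidence = probs.max(axis=1)          # top predicted probability per instance
    return np.argsort(confidence)[:budget]  # lowest-confidence instances first
```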
arXiv Detail & Related papers (2024-08-22T17:02:29Z)
- Enhancing Generative Class Incremental Learning Performance with Model Forgetting Approach [50.36650300087987]
This study presents a novel approach to Generative Class Incremental Learning (GCIL) by introducing a forgetting mechanism.
We find that integrating the forgetting mechanism significantly enhances the model's performance in acquiring new knowledge.
arXiv Detail & Related papers (2024-03-27T05:10:38Z)
- The Frontier of Data Erasure: Machine Unlearning for Large Language Models [56.26002631481726]
Large Language Models (LLMs) are foundational to AI advancements.
LLMs pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information.
Machine unlearning emerges as a cutting-edge solution to mitigate these concerns.
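As a rough illustration of what unlearning can look like in practice, the sketch below performs gradient ascent on the data to be erased, a simple baseline from the unlearning literature; it is a generic example, not this paper's method.
```python
def unlearn_step(model, optimizer, input_ids, labels):
    """One ascent step on a batch slated for erasure (HF-style model assumed:
    forward returns an object with a .loss when labels are supplied)."""
    out = model(input_ids=input_ids, labels=labels)
    (-out.loss).backward()      # ascend, rather than descend, the loss
    optimizer.step()
    optimizer.zero_grad()
```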
arXiv Detail & Related papers (2024-03-23T09:26:15Z)
- EyeTrans: Merging Human and Machine Attention for Neural Code Summarization [16.694601606682046]
We develop a method for incorporating human attention into machine attention to enhance neural code summarization.
We conduct comprehensive experiments on two code summarization tasks to demonstrate the effectiveness of incorporating human attention into Transformers.
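One simple way to merge the two attention sources (a hypothetical sketch; EyeTrans's actual mechanism may differ) is a convex combination of the machine's attention distribution with normalized human attention weights:
```python
import torch.nn.functional as F

def blended_attention(q, k, v, human_attn, beta=0.3):
    """q, k, v: (batch, seq, dim); human_attn: (batch, seq) token importance.
    beta is a hypothetical mixing weight."""
    scores = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5   # (batch, seq, seq)
    machine = F.softmax(scores, dim=-1)
    human = human_attn.unsqueeze(1).expand_as(machine)      # same weights per query
    human = human / human.sum(-1, keepdim=True).clamp(min=1e-8)
    attn = (1 - beta) * machine + beta * human              # rows still sum to 1
    return attn @ v
```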
arXiv Detail & Related papers (2024-02-21T19:45:06Z)
- Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z)
- A Survey of Machine Unlearning [56.017968863854186]
Recent regulations now require that, on request, private information about a user must be removed from computer systems.
ML models often 'remember' the old data.
Recent works on machine unlearning have not been able to completely solve the problem.
arXiv Detail & Related papers (2022-09-06T08:51:53Z)
- Classifying Human Activities using Machine Learning and Deep Learning Techniques [0.0]
Human Activity Recognition (HAR) refers to a machine's ability to recognize human actions.
The central challenge in HAR is distinguishing between human activities from the given data.
Deep learning techniques such as Long Short-Term Memory (LSTM), the Bi-Directional LSTM classifier, Recurrent Neural Network (RNN), and Gated Recurrent Unit (GRU) are trained.
Experimental results showed that the Linear Support Vector classifier in machine learning and the Gated Recurrent Unit in deep learning provided better accuracy for human activity recognition.
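As a concrete anchor for the deep-learning side, here is a minimal GRU classifier over windows of sensor readings; the feature count, window length, and class count are placeholders, not the paper's settings.
```python
import torch
import torch.nn as nn

class HARGRU(nn.Module):
    def __init__(self, n_features=9, hidden=64, n_classes=6):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):               # x: (batch, window_len, n_features)
        _, h = self.gru(x)              # h: (1, batch, hidden), final hidden state
        return self.head(h.squeeze(0))  # class logits

logits = HARGRU()(torch.randn(8, 128, 9))  # e.g., 8 windows of 128 readings
```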
arXiv Detail & Related papers (2022-05-19T05:20:04Z)
- Parrot: Data-Driven Behavioral Priors for Reinforcement Learning [79.32403825036792]
We propose a method for pre-training behavioral priors that can capture complex input-output relationships observed in successful trials.
We show how this learned prior can be used for rapidly learning new tasks without impeding the RL agent's ability to try out novel behaviors.
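A toy sketch of the idea: a frozen prior maps an observation plus a latent z to a plausible action, and the RL policy explores by choosing z rather than raw actions. Parrot itself learns an invertible flow-based prior from successful trials; the plain MLP below is a simplified stand-in, and all dimensions are assumptions.
```python
import torch
import torch.nn as nn

class BehavioralPrior(nn.Module):
    """Maps an observation and a latent z to a plausible action (simplified
    stand-in for Parrot's learned flow-based prior)."""
    def __init__(self, obs_dim=16, act_dim=4, z_dim=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim + z_dim, 64), nn.ReLU(),
                                 nn.Linear(64, act_dim))

    def forward(self, obs, z):
        return torch.tanh(self.net(torch.cat([obs, z], dim=-1)))

# After pre-training on successful trials, the prior is frozen and the RL
# policy explores by choosing z; the prior turns z into a sensible action.
prior = BehavioralPrior()
action = prior(torch.randn(1, 16), torch.randn(1, 4))
```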
arXiv Detail & Related papers (2020-11-19T18:47:40Z)
- Cost-effective Interactive Attention Learning with Neural Attention Processes [79.8115563067513]
We propose a novel interactive learning framework which we refer to as Interactive Attention Learning (IAL).
IAL is prone to overfitting due to the scarcity of human annotations and requires costly retraining.
We tackle these challenges by proposing a sample-efficient attention mechanism and a cost-effective reranking algorithm for instances and features.
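To make the reranking idea concrete, here is a hypothetical cost-aware scoring rule for choosing which instances to route to the human annotator; the scoring function and `lam` trade-off are illustrative assumptions, not IAL's actual algorithm.
```python
import numpy as np

def rerank_for_feedback(uncertainty, annotation_cost, lam=0.1):
    """Rank candidate instances for human attention feedback.
    Score trades off model uncertainty against annotation cost;
    lam is a hypothetical trade-off weight."""
    score = uncertainty - lam * annotation_cost
    return np.argsort(-score)   # most worthwhile instances first
```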
arXiv Detail & Related papers (2020-06-09T17:36:41Z)
- Leveraging Rationales to Improve Human Task Performance [15.785125079811902]
Given that a computational system's performance exceeds that of its human user, can explainable AI capabilities be leveraged to improve human performance?
We introduce the Rationale-Generating Algorithm, an automated technique for generating rationales for utility-based computational methods.
Results show that our approach produces rationales that lead to statistically significant improvement in human task performance.
arXiv Detail & Related papers (2020-02-11T04:51:35Z)