Aligning Eyes between Humans and Deep Neural Network through Interactive Attention Alignment
- URL: http://arxiv.org/abs/2202.02838v1
- Date: Sun, 6 Feb 2022 19:22:06 GMT
- Title: Aligning Eyes between Humans and Deep Neural Network through Interactive Attention Alignment
- Authors: Yuyang Gao, Tong Sun, Liang Zhao, Sungsoo Hong
- Abstract summary: We propose Interactive Attention Alignment (IAA), a novel framework that aims at realizing human-steerable Deep Neural Networks (DNNs).
IAA leverages a DNN explanation method as an interactive medium that humans can use to uncover cases of biased model attention and directly adjust the attention.
To improve the DNN using the human-adjusted attention, we introduce GRADIA, a novel computational pipeline that jointly maximizes attention quality and prediction accuracy.
- Score: 17.653966477405024
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While Deep Neural Networks (DNNs) are driving major innovations in nearly every field through their powerful automation, we are also witnessing the perils of that automation in the form of bias, such as automated racism, gender bias, and adversarial bias. As the societal impact of DNNs grows, finding an effective way to steer DNNs so that their behavior aligns with the human mental model has become indispensable for realizing fair and accountable models. We propose Interactive Attention Alignment (IAA), a novel framework that aims at realizing human-steerable DNNs. IAA leverages a DNN explanation method as an interactive medium that humans can use to uncover cases of biased model attention and directly adjust the attention. To improve the DNN using the human-adjusted attention, we introduce GRADIA, a novel computational pipeline that jointly maximizes attention quality and prediction accuracy. We evaluated the IAA framework in Study 1 and GRADIA in Study 2 on a gender classification problem. Study 1 found that applying IAA significantly improves the perceived quality of model attention in human eyes. In Study 2, we found that GRADIA can (1) significantly improve the perceived quality of model attention and (2) significantly improve model performance in scenarios where training samples are limited. We present implications for the design of future interactive user interfaces for human-alignable AI.
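The abstract does not spell out GRADIA's exact formulation, but its stated goal of jointly maximizing attention quality and prediction accuracy suggests a two-term objective. Below is a minimal, hypothetical PyTorch sketch under that assumption: a Grad-CAM-style map stands in for the model explanation method, and `gradia_style_loss`, `human_attn`, and `lambda_attn` are illustrative names, not the paper's API.

```python
import torch
import torch.nn.functional as F

def gradcam_map(features, class_score):
    """Grad-CAM-style attention map: gradient-weighted sum of feature maps."""
    # features: (B, C, H, W) conv activations kept in the autograd graph.
    grads = torch.autograd.grad(class_score.sum(), features, create_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)   # pool gradients per channel
    cam = F.relu((weights * features).sum(dim=1))    # (B, H, W)
    flat = cam.flatten(1)
    mn = flat.min(dim=1).values.view(-1, 1, 1)
    mx = flat.max(dim=1).values.view(-1, 1, 1)
    return (cam - mn) / (mx - mn + 1e-8)             # normalize to [0, 1]

def gradia_style_loss(logits, labels, features, human_attn, lambda_attn=1.0):
    """Joint objective: prediction accuracy plus attention quality."""
    pred_loss = F.cross_entropy(logits, labels)
    true_class_score = logits.gather(1, labels.unsqueeze(1))
    cam = gradcam_map(features, true_class_score)
    # human_attn: (B, H, W) human-adjusted attention maps in [0, 1].
    attn_loss = F.mse_loss(cam, human_attn)
    return pred_loss + lambda_attn * attn_loss
```

In such a formulation, `lambda_attn` would trade off fit to the labels against agreement with the human-adjusted attention maps.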
Related papers
- RTify: Aligning Deep Neural Networks with Human Behavioral Decisions [10.510746720313303]
Current neural network models of primate vision focus on replicating overall levels of behavioral accuracy.
We introduce a novel computational framework that models the dynamics of human behavioral choices by learning to align the temporal dynamics of a recurrent neural network to human reaction times (RTs).
We show that this differentiable approximation can be used to optimize an "ideal-observer" RNN model to achieve an optimal tradeoff between speed and accuracy without human data (a minimal sketch of such an RT readout follows the link below).
arXiv Detail & Related papers (2024-11-06T03:04:05Z)
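The entry above does not specify RTify's differentiable approximation; one plausible reading is a soft "first threshold crossing" readout over accumulated RNN evidence. The sketch below is a hypothetical illustration under that assumption; `soft_reaction_time`, `threshold`, and `temp` are invented names, and human RTs are assumed to be pre-converted into model time steps.

```python
import torch
import torch.nn.functional as F

def soft_reaction_time(evidence, threshold=1.0, temp=10.0):
    """Differentiable 'first threshold crossing' time for accumulated evidence.

    evidence: (B, T) cumulative decision evidence from an RNN readout.
    """
    p_cross = torch.sigmoid(temp * (evidence - threshold))   # soft crossing at t
    survive = torch.cumprod(1.0 - p_cross, dim=1)            # not yet crossed by t
    prior = torch.cat([torch.ones_like(survive[:, :1]), survive[:, :-1]], dim=1)
    first = p_cross * prior                                  # soft first passage
    first = first / (first.sum(dim=1, keepdim=True) + 1e-8)  # normalize over time
    steps = torch.arange(1, evidence.size(1) + 1, device=evidence.device).float()
    return (first * steps).sum(dim=1)                        # expected RT in steps

def rt_alignment_loss(evidence, human_rt_steps):
    """Penalize mismatch between model RTs and human RTs (both in model steps)."""
    return F.mse_loss(soft_reaction_time(evidence), human_rt_steps)
```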
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate learning bias via a novel, highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- Adversarial alignment: Breaking the trade-off between the strength of an attack and its relevance to human perception [10.883174135300418]
Adversarial attacks have long been considered the "Achilles' heel" of deep learning.
Here, we investigate how the robustness of DNNs to adversarial attacks has evolved as their accuracy on ImageNet has continued to improve.
arXiv Detail & Related papers (2023-06-05T20:26:17Z)
- Are Deep Neural Networks Adequate Behavioural Models of Human Visual Perception? [8.370048099732573]
Deep neural networks (DNNs) are machine learning algorithms that have revolutionised computer vision.
We argue that it is important to distinguish between statistical tools and computational models.
We dispel a number of myths surrounding DNNs in vision science.
arXiv Detail & Related papers (2023-05-26T15:31:06Z)
- Harmonizing the object recognition strategies of deep neural networks with humans [10.495114898741205]
We show that state-of-the-art deep neural networks (DNNs) are becoming less aligned with humans as their accuracy improves.
Our work represents the first demonstration that the scaling laws that are guiding the design of DNNs today have also produced worse models of human vision.
arXiv Detail & Related papers (2022-11-08T20:03:49Z)
- Neurosymbolic hybrid approach to driver collision warning [64.02492460600905]
There are two main algorithmic approaches to autonomous driving systems.
Deep learning alone has achieved state-of-the-art results in many areas.
However, deep learning models can be very difficult to debug when they do not work.
arXiv Detail & Related papers (2022-03-28T20:29:50Z)
- CAP: Co-Adversarial Perturbation on Weights and Features for Improving Generalization of Graph Neural Networks [59.692017490560275]
Adversarial training has been widely shown to improve a model's robustness against adversarial attacks.
It remains unclear how adversarial training could improve the generalization abilities of GNNs in graph analytics problems.
We construct the co-adversarial perturbation (CAP) optimization problem over weights and features, and design an alternating adversarial perturbation algorithm that flattens the weight and feature loss landscapes in turn (a minimal sketch of such an alternating scheme follows the link below).
arXiv Detail & Related papers (2021-10-28T02:28:13Z)
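The CAP entry above names an alternating scheme but not its exact update rules; a common way to realize "flattening the weight and feature loss landscapes alternately" is to alternate a SAM-style weight perturbation with an FGSM-style feature perturbation. The sketch below is a hypothetical stand-in under that assumption (with a generic `model` in place of a GNN); `cap_style_step`, `rho`, and `eps` are illustrative names.

```python
import torch

def cap_style_step(model, x, y, loss_fn, opt, step, rho=0.05, eps=0.01):
    """Alternate weight-space and feature-space perturbations, one per step."""
    params = [p for p in model.parameters() if p.requires_grad]
    if step % 2 == 0:
        # Weight perturbation (SAM-style): ascend to worst-case nearby weights,
        # take the gradient there, then update the original weights.
        loss = loss_fn(model(x), y)
        grads = torch.autograd.grad(loss, params)
        scale = rho / (torch.cat([g.flatten() for g in grads]).norm() + 1e-12)
        with torch.no_grad():
            for p, g in zip(params, grads):
                p.add_(g, alpha=scale.item())
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        with torch.no_grad():
            for p, g in zip(params, grads):
                p.sub_(g, alpha=scale.item())   # undo the ascent before updating
        opt.step()
    else:
        # Feature perturbation (FGSM-style) on the input/node features.
        x_adv = x.detach().requires_grad_(True)
        g = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)[0]
        opt.zero_grad()
        loss_fn(model(x + eps * g.sign()), y).backward()
        opt.step()
```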
- On the benefits of robust models in modulation recognition [53.391095789289736]
Deep Neural Networks (DNNs) using convolutional layers are state-of-the-art in many tasks in communications.
In other domains, like image classification, DNNs have been shown to be vulnerable to adversarial perturbations.
We propose a novel framework to test the robustness of current state-of-the-art models.
arXiv Detail & Related papers (2021-03-27T19:58:06Z)
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills real-valued networks into binary networks over the final prediction distribution.
Our proposed method boosts a simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
It achieves a substantial improvement over the simple contrastive learning baseline and is even comparable to many mainstream supervised BNN methods (a minimal sketch of such a distribution-level distillation follows the link below).
arXiv Detail & Related papers (2021-02-17T18:59:28Z)
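The S2-BNN entry above describes distillation on the final prediction distribution; the standard form of such a distribution-matching loss is a temperature-scaled KL divergence from the real-valued teacher to the binary student. The sketch below uses that standard form as a hypothetical stand-in; the paper's actual calibration objective may differ.

```python
import torch.nn.functional as F

def distribution_calibration_loss(student_logits, teacher_logits, tau=1.0):
    """KL distillation from a real-valued teacher to a 1-bit student,
    matched on the final prediction distribution."""
    teacher = F.softmax(teacher_logits.detach() / tau, dim=1)  # frozen target
    student = F.log_softmax(student_logits / tau, dim=1)
    return F.kl_div(student, teacher, reduction="batchmean") * (tau ** 2)
```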
- Cost-effective Interactive Attention Learning with Neural Attention Processes [79.8115563067513]
We propose a novel interactive learning framework, which we refer to as Interactive Attention Learning (IAL).
Such interactive learning is prone to overfitting due to the scarcity of human annotations and requires costly retraining.
We tackle these challenges with a sample-efficient attention mechanism and a cost-effective reranking algorithm for instances and features.
arXiv Detail & Related papers (2020-06-09T17:36:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.