Towards Modeling Human Attention from Eye Movements for Neural Source Code Summarization
- URL: http://arxiv.org/abs/2305.09773v1
- Date: Tue, 16 May 2023 19:56:45 GMT
- Title: Towards Modeling Human Attention from Eye Movements for Neural Source Code Summarization
- Authors: Aakash Bansal, Bonita Sharif, Collin McMillan
- Abstract summary: We use eye-tracking data to create a model of human attention.
The model predicts which words in source code are the most important for code summarization.
We observe an improvement in prediction performance of the augmented approach in line with other bio-inspired neural models.
- Score: 6.435578628605734
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Neural source code summarization is the task of generating natural language
descriptions of source code behavior using neural networks. A fundamental
component of most neural models is an attention mechanism. The attention
mechanism learns to connect features in source code to specific words to use
when generating natural language descriptions. Humans also pay attention to
some features in code more than others. This human attention reflects
experience and high-level cognition well beyond the capability of any current
neural model. In this paper, we use data from published eye-tracking
experiments to create a model of this human attention. The model predicts which
words in source code are the most important for code summarization. Next, we
augment a baseline neural code summarization approach using our model of human
attention. We observe an improvement in prediction performance of the augmented
approach in line with other bio-inspired neural models.
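As a rough illustration of the augmentation idea, the sketch below adds a predicted per-token human-importance score to standard attention logits. The function, the additive bias, and the weight alpha are assumptions for illustration, not the paper's implementation.

    import numpy as np

    def human_biased_attention(query, keys, human_scores, alpha=1.0):
        """Scaled dot-product attention with an additive human-attention bias.

        A sketch, not the authors' code: human_scores stands in for per-token
        importance predicted from eye-tracking data, and alpha (hypothetical)
        controls how strongly it biases the logits.
        """
        d = query.shape[-1]
        logits = keys @ query / np.sqrt(d)      # standard attention logits
        logits = logits + alpha * human_scores  # bias toward human-attended tokens
        weights = np.exp(logits - logits.max())
        return weights / weights.sum()          # attention over code tokens

    # Toy usage: 4 code tokens with 8-dim embeddings
    rng = np.random.default_rng(0)
    q, K = rng.normal(size=8), rng.normal(size=(4, 8))
    human = np.array([0.1, 0.7, 0.1, 0.1])      # e.g., identifiers fixated most
    print(human_biased_attention(q, K, human))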
Related papers
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently enables spiking neural networks to learn continually with nearly zero forgetting.
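A minimal sketch of the underlying mechanics, assuming Oja-style Hebbian learning extracts a principal direction of past activity that new updates are then projected away from; this is illustrative Python, not the paper's algorithm.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 10)) @ np.diag([5] + [1] * 9)  # activity with one dominant axis

    w = rng.normal(size=10); w /= np.linalg.norm(w)
    for x in X:                          # Oja's rule: Hebbian term minus decay
        y = w @ x
        w += 0.001 * y * (x - y * w)
    w /= np.linalg.norm(w)               # w converges to the principal direction

    g = rng.normal(size=10)              # hypothetical new-task weight update
    g_proj = g - (g @ w) * w             # project out the protected direction
    print(abs(g_proj @ w))               # ~0: update is orthogonal to old subspace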
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
- Investigating the Encoding of Words in BERT's Neurons using Feature Textualization [11.943486282441143]
We propose a technique to produce representations of neurons in embedding word space.
We find that the produced representations can provide insights about the encoded knowledge in individual neurons.
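A simplified sketch of the general idea, assuming a neuron can be summarized by the vocabulary embeddings that best align with its weight vector; the paper's actual feature textualization procedure is more involved, and all names below are toy examples.

    import numpy as np

    rng = np.random.default_rng(2)
    vocab = ["code", "loop", "tree", "bank", "river", "money"]  # toy vocabulary
    E = rng.normal(size=(len(vocab), 16))                       # toy word embeddings
    neuron_w = E[3] + 0.1 * rng.normal(size=16)                 # neuron tuned near "bank"

    # Cosine similarity between the neuron's weights and each word embedding
    sims = E @ neuron_w / (np.linalg.norm(E, axis=1) * np.linalg.norm(neuron_w))
    top = np.argsort(-sims)[:3]
    print([vocab[i] for i in top])                              # words "textualizing" the neuron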
arXiv Detail & Related papers (2023-11-14T15:21:49Z)
- Deep Learning for real-time neural decoding of grasp [0.0]
We present a Deep Learning-based approach to the decoding of neural signals for grasp type classification.
The main goal of the presented approach is to improve over state-of-the-art decoding accuracy without relying on any prior neuroscience knowledge.
arXiv Detail & Related papers (2023-11-02T08:26:29Z)
- Modeling rapid language learning by distilling Bayesian priors into artificial neural networks [18.752638142258668]
We show that learning from limited naturalistic data is possible with an approach that combines the strong inductive biases of a Bayesian model with the flexible representations of a neural network.
The resulting system can learn formal linguistic patterns from a small number of examples.
It can also learn aspects of English syntax from a corpus of natural language.
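A toy sketch of prior distillation, assuming a Beta-Bernoulli Bayesian model as the teacher and a two-parameter logistic model as the student; the task, model, and hyperparameters are illustrative, not the paper's setup.

    import numpy as np

    rng = np.random.default_rng(3)
    a, b = 2.0, 2.0                   # the Bayesian prior: coin bias ~ Beta(2, 2)
    W = np.zeros(2)                   # tiny student: sigmoid(W . [heads, tails])

    for _ in range(20000):
        theta = rng.beta(a, b)        # sample a task (a coin) from the prior
        flips = rng.random(10) < theta
        h = flips[:9].sum(); t = 9 - h
        x, y = np.array([h, t]), float(flips[9])
        p = 1 / (1 + np.exp(-W @ x))
        W += 0.01 * (y - p) * x       # logistic-regression gradient step

    h, t = 3, 6                       # new task: 3 heads in 9 flips
    print(1 / (1 + np.exp(-W @ np.array([h, t]))))  # student's prediction
    print((h + a) / (h + t + a + b))                # Bayesian posterior predictive

After training across many sampled tasks, the student's predictions approximate the Bayesian posterior predictive, i.e., the prior has been distilled into its weights.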
arXiv Detail & Related papers (2023-05-24T04:11:59Z)
- Deep Learning Models to Study Sentence Comprehension in the Human Brain [0.1503974529275767]
Recent artificial neural networks that process natural language achieve unprecedented performance in tasks requiring sentence-level understanding.
We review works that compare these artificial language models with human brain activity and we assess the extent to which this approach has improved our understanding of the neural processes involved in natural language comprehension.
arXiv Detail & Related papers (2023-01-16T10:31:25Z)
- NCTV: Neural Clamping Toolkit and Visualization for Neural Network Calibration [66.22668336495175]
Neural networks that lack proper calibration will not gain trust from humans.
We introduce the Neural Clamping Toolkit, the first open-source framework designed to help developers employ state-of-the-art model-agnostic calibrated models.
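For context, a sketch of the temperature-scaling component that calibration methods of this kind build on; Neural Clamping itself additionally learns an input perturbation, and this is not the toolkit's API.

    import numpy as np

    def softmax(z, T):
        z = z / T
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def nll(logits, labels, T):
        p = softmax(logits, T)
        return -np.log(p[np.arange(len(labels)), labels]).mean()

    rng = np.random.default_rng(4)
    labels = rng.integers(0, 5, size=200)
    logits = rng.normal(size=(200, 5)) * 3
    logits[np.arange(200), labels] += 2       # overconfident toy classifier

    Ts = np.linspace(0.5, 5.0, 50)            # 1-D search on held-out logits
    best_T = Ts[np.argmin([nll(logits, labels, T) for T in Ts])]
    print("calibrated temperature:", best_T)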
arXiv Detail & Related papers (2022-11-29T15:03:05Z)
- Dependency-based Mixture Language Models [53.152011258252315]
We introduce the Dependency-based Mixture Language Models.
In detail, we first train neural language models with a novel dependency modeling objective.
We then formulate the next-token probability by mixing the previous dependency modeling probability distributions with self-attention.
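A minimal sketch of the mixing step, assuming a fixed mixture weight lam for illustration; the paper's exact weighting scheme may differ.

    import numpy as np

    def mix_next_token(p_dep, p_lm, lam=0.3):
        """Convex mixture of a dependency-based distribution and the
        self-attention LM distribution; lam is a hypothetical fixed weight."""
        p = lam * p_dep + (1 - lam) * p_lm
        return p / p.sum()                    # renormalize against rounding

    p_dep = np.array([0.60, 0.20, 0.10, 0.10])  # favors syntactically linked tokens
    p_lm  = np.array([0.25, 0.40, 0.20, 0.15])  # standard LM prediction
    print(mix_next_token(p_dep, p_lm))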
arXiv Detail & Related papers (2022-03-19T06:28:30Z)
- Deep Reinforcement Learning Models Predict Visual Responses in the Brain: A Preliminary Result [1.0323063834827415]
We use reinforcement learning to train neural network models to play a 3D computer game.
We find that these reinforcement learning models achieve strong neural response prediction accuracy in the early visual areas.
In contrast, the supervised neural network models yield better neural response predictions in the higher visual areas.
arXiv Detail & Related papers (2021-06-18T13:10:06Z) - The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
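A minimal predictive-coding sketch of the described update, with a single latent layer predicting an observed layer; the dimensions and learning rates are arbitrary, and this is not the paper's full framework.

    import numpy as np

    rng = np.random.default_rng(5)
    x = rng.normal(size=8)             # "observed" neurons
    z = np.zeros(3)                    # latent neurons that try to predict x
    W = rng.normal(size=(8, 3)) * 0.1

    for _ in range(200):
        err = x - W @ z                # prediction error at the observed layer
        z += 0.1 * (W.T @ err - z)     # inference: move latent state to cut error
        W += 0.01 * np.outer(err, z)   # learning: local, Hebbian-like update
    print(np.linalg.norm(x - W @ z))   # residual error shrinks over iterations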
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
- Compositional Explanations of Neurons [52.71742655312625]
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts.
We use this procedure to answer several questions on interpretability in models for vision and natural language processing.
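A compact sketch of the core search, assuming binary concept masks and intersection-over-union as the match score; the paper's procedure searches a richer formula space, and the concepts below are toy data.

    import numpy as np

    def iou(a, b):
        return (a & b).sum() / max((a | b).sum(), 1)

    rng = np.random.default_rng(6)
    concepts = {"water": rng.random(500) < 0.3,
                "blue":  rng.random(500) < 0.4,
                "sky":   rng.random(500) < 0.2}
    neuron_mask = concepts["water"] | concepts["blue"]   # toy binarized neuron

    # Enumerate small logical compositions of concept masks
    candidates = {}
    for n1, m1 in concepts.items():
        candidates[n1] = m1
        for n2, m2 in concepts.items():
            if n1 < n2:
                candidates[f"{n1} OR {n2}"] = m1 | m2
                candidates[f"{n1} AND {n2}"] = m1 & m2
            candidates[f"{n1} AND NOT {n2}"] = m1 & ~m2

    best = max(candidates, key=lambda f: iou(candidates[f], neuron_mask))
    print(best, round(iou(candidates[best], neuron_mask), 3))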
arXiv Detail & Related papers (2020-06-24T20:37:05Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
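To see why a non-monotonic activation lets a single neuron compute XOR, consider the sketch below: with weights (1, 1) the pre-activation is 0, 1, 1, 2 for the four inputs, and a bump peaked at 1 fires only on the two XOR-positive cases. The Gaussian bump is an illustrative stand-in, not the paper's exact ADA formula.

    import numpy as np

    def bump(s):
        return np.exp(-(s - 1.0) ** 2 / 0.1)  # peaks at pre-activation 1

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    w = np.array([1.0, 1.0])
    out = bump(X @ w)
    print(np.round(out).astype(int))          # [0, 1, 1, 0]: XOR from one neuron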
arXiv Detail & Related papers (2020-02-02T21:09:39Z)