Text Classification for Task-based Source Code Related Questions
- URL: http://arxiv.org/abs/2111.00580v1
- Date: Sun, 31 Oct 2021 20:10:21 GMT
- Title: Text Classification for Task-based Source Code Related Questions
- Authors: Sairamvinay Vijayaraghavan, Jinxiao Song, David Tomassi, Siddhartha
Punj, Jailan Sabet
- Abstract summary: StackOverflow provides solutions as small code snippets that fully answer the task a developer wants to implement.
We develop a two-fold deep learning model: a Seq2Seq model and a binary classifier that take in the intent (expressed in natural language) and code snippets in Python.
We find that the hidden-state embeddings perform slightly better than standard embeddings built from a constructed vocabulary.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There is a strong demand for automatically generating code for small
developer tasks. Websites such as StackOverflow address this by offering
solutions as small code snippets that fully answer the task the developer wants
to implement. Natural Language Processing, and Question-Answering systems in
particular, are very helpful for resolving such tasks. In this paper, we develop
a two-fold deep learning model: a Seq2Seq model and a binary classifier that
take in the intent (expressed in natural language) and code snippets in Python.
We train the Seq2Seq model on both the intent and the code utterances, comparing
the encoder's hidden-layer embedding for representing the intent with the
decoder's hidden-layer embedding for the code sequence. We then combine these
two embeddings and train a simple binary neural-network classifier to predict
whether the intent is correctly answered by the code sequence predicted by the
Seq2Seq model. We find that the hidden-state embeddings perform slightly better
than standard embeddings built from a constructed vocabulary. We run our
experiments on the CoNaLa dataset as well as the StaQC database, both of which
consist of simple task-to-code-snippet pairs. We empirically establish that
additional pre-trained embeddings for Python code snippets capture less context
than the hidden-state context vectors obtained from Seq2Seq models.
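To make the two-fold setup above concrete, the sketch below pairs a GRU-based Seq2Seq encoder/decoder, whose final hidden states embed the intent and the code sequence, with a small binary classifier over the concatenated embeddings. This is a minimal illustration under assumptions of our own: the layer sizes, the GRU choice, and the `IntentCodeClassifier` name are hypothetical and not taken from the paper.

```python
# Minimal sketch (not the authors' code): a GRU-based Seq2Seq model whose final
# hidden states embed the intent and the code sequence, plus a binary classifier
# over the concatenated embeddings. All sizes and names are assumptions.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, intent_vocab, code_vocab, emb_dim=128, hid_dim=256):
        super().__init__()
        self.enc_emb = nn.Embedding(intent_vocab, emb_dim)
        self.dec_emb = nn.Embedding(code_vocab, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, code_vocab)

    def forward(self, intent_ids, code_ids):
        # Encode the natural-language intent; h_enc is its hidden-state embedding.
        _, h_enc = self.encoder(self.enc_emb(intent_ids))
        # Decode the code sequence conditioned on the intent; h_dec embeds the code.
        dec_out, h_dec = self.decoder(self.dec_emb(code_ids), h_enc)
        logits = self.out(dec_out)               # token-level code predictions
        return logits, h_enc.squeeze(0), h_dec.squeeze(0)

class IntentCodeClassifier(nn.Module):
    """Binary classifier: does the (predicted) code answer the intent?"""
    def __init__(self, hid_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * hid_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, intent_emb, code_emb):
        return torch.sigmoid(self.mlp(torch.cat([intent_emb, code_emb], dim=-1)))

# Usage sketch: hidden-state embeddings from the Seq2Seq model feed the classifier.
seq2seq = Seq2Seq(intent_vocab=5000, code_vocab=8000)
clf = IntentCodeClassifier()
intent = torch.randint(0, 5000, (4, 20))    # batch of tokenized intents
code = torch.randint(0, 8000, (4, 40))      # batch of tokenized code snippets
_, intent_emb, code_emb = seq2seq(intent, code)
match_prob = clf(intent_emb, code_emb)       # shape (4, 1), values in [0, 1]
```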
Related papers
- SparseCoder: Identifier-Aware Sparse Transformer for File-Level Code
Summarization [51.67317895094664]
This paper studies file-level code summarization, which can assist programmers in understanding and maintaining large source code projects.
We propose SparseCoder, an identifier-aware sparse transformer for effectively handling long code sequences.
(arXiv, 2024-01-26)
- CodeExp: Explanatory Code Document Generation [94.43677536210465]
Existing code-to-text generation models produce only high-level summaries of code.
We conduct a human study to identify the criteria for high-quality explanatory docstrings for code.
We present a multi-stage fine-tuning strategy and baseline models for the task.
(arXiv, 2022-11-25)
- InCoder: A Generative Model for Code Infilling and Synthesis [88.46061996766348]
We introduce InCoder, a unified generative model that can perform program synthesis (via left-to-right generation) and editing (via infilling).
InCoder is trained to generate code files from a large corpus of permissively licensed code.
Our model is the first generative model that is able to directly perform zero-shot code infilling.
(arXiv, 2022-04-12)
- Learning Deep Semantic Model for Code Search using CodeSearchNet Corpus [17.6095840480926]
We propose a novel deep semantic model which makes use of the utilities of multi-modal sources.
We apply the proposed model to tackle the CodeSearchNet challenge about semantic code search.
Our model is trained on the CodeSearchNet corpus and evaluated on held-out data; the final model achieves 0.384 NDCG and won first place in this benchmark.
(arXiv, 2022-01-27)
- CodeRetriever: Unimodal and Bimodal Contrastive Learning [128.06072658302165]
We propose the CodeRetriever model, which combines the unimodal and bimodal contrastive learning to train function-level code semantic representations.
For unimodal contrastive learning, we design a semantic-guided method to build positive code pairs based on the documentation and function name.
For bimodal contrastive learning, we leverage the documentation and in-line comments of code to build text-code pairs; a generic text-code contrastive-learning sketch follows this list.
(arXiv, 2022-01-26)
- What do pre-trained code models know about code? [9.60966128833701]
We use diagnostic tasks called probes to investigate pre-trained code models.
BERT (pre-trained on English), CodeBERT and CodeBERTa (pre-trained on source code and natural language documentation), and GraphCodeBERT (pre-trained on source code with data flow) are investigated.
(arXiv, 2021-08-25)
- BERT2Code: Can Pretrained Language Models be Leveraged for Code Search? [0.7953229555481884]
We show that our model learns the inherent relationship between the embedding spaces, and we further probe the scope for improvement.
In this analysis, we show that the quality of the code embedding model is the bottleneck for our model's performance.
(arXiv, 2021-04-16)
- COSEA: Convolutional Code Search with Layer-wise Attention [90.35777733464354]
We propose a new deep learning architecture, COSEA, which leverages convolutional neural networks with layer-wise attention to capture the code's intrinsic structural logic.
COSEA can achieve significant improvements over state-of-the-art methods on code search tasks.
(arXiv, 2020-10-19)
- GraphCodeBERT: Pre-training Code Representations with Data Flow [97.00641522327699]
We present GraphCodeBERT, a pre-trained model for programming language that considers the inherent structure of code.
We use data flow in the pre-training stage, a semantic-level structure of code that encodes the "where-the-value-comes-from" relation between variables; a toy data-flow extraction example follows this list.
We evaluate our model on four tasks, including code search, clone detection, code translation, and code refinement.
(arXiv, 2020-09-17)
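For the CodeRetriever entry above, the following is a generic, hedged sketch of in-batch text-code contrastive learning (an InfoNCE-style objective). The embedding size and temperature are placeholder assumptions of ours, not details taken from the CodeRetriever paper.

```python
# Generic in-batch text-code contrastive loss (InfoNCE-style); a simplification,
# not CodeRetriever's actual training code. text_emb[i] and code_emb[i] form a
# positive pair (e.g. a docstring and its function); other rows act as negatives.
import torch
import torch.nn.functional as F

def text_code_contrastive_loss(text_emb, code_emb, temperature=0.05):
    text_emb = F.normalize(text_emb, dim=-1)
    code_emb = F.normalize(code_emb, dim=-1)
    logits = text_emb @ code_emb.t() / temperature   # (batch, batch) similarities
    labels = torch.arange(text_emb.size(0))          # diagonal entries are positives
    return F.cross_entropy(logits, labels)

# Usage: embeddings would come from text/code encoders; random here for shape only.
loss = text_code_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```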
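For the GraphCodeBERT entry above, the sketch below is a toy illustration of extracting "where-the-value-comes-from" edges between variables from a Python snippet with the standard `ast` module. It is our own simplification for intuition only, not GraphCodeBERT's actual data-flow construction.

```python
# Toy sketch (not GraphCodeBERT's implementation): derive simple
# "where-the-value-comes-from" edges between variables in a Python snippet.
import ast

def data_flow_edges(source: str):
    """Return (target, source_var) pairs: target's value comes from source_var."""
    tree = ast.parse(source)
    edges = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Assign):
            targets = [t.id for t in node.targets if isinstance(t, ast.Name)]
            used = [n.id for n in ast.walk(node.value) if isinstance(n, ast.Name)]
            edges += [(t, u) for t in targets for u in used]
    return edges

print(data_flow_edges("x = a + b\ny = x * 2"))
# [('x', 'a'), ('x', 'b'), ('y', 'x')]
```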