Universal Representation for Code
- URL: http://arxiv.org/abs/2103.03116v1
- Date: Thu, 4 Mar 2021 15:39:25 GMT
- Title: Universal Representation for Code
- Authors: Linfeng Liu, Hoan Nguyen, George Karypis, Srinivasan Sengamedu
- Abstract summary: We present effective pre-training strategies on top of a novel graph-based code representation.
We pre-train graph neural networks on the representation to extract universal code properties.
We evaluate our model on two real-world datasets -- spanning over 30M Java methods and 770K Python methods.
- Score: 8.978516631649276
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning from source code usually requires a large amount of labeled
data. Moreover, even when labeled data is available, the trained model is highly
task-specific and lacks transferability to different tasks. In this work, we
present effective pre-training strategies on top of a novel graph-based code
representation to produce universal representations for code. Specifically,
our graph-based representation captures important semantics between code
elements (e.g., control flow and data flow). We pre-train graph neural networks
on the representation to extract universal code properties. The pre-trained
model can then be fine-tuned to support various downstream applications. We
evaluate our model on two real-world datasets -- spanning over 30M Java methods
and 770K Python methods. Through visualization, we reveal discriminative
properties in our universal code representation. Across multiple benchmarks, we
demonstrate that the proposed framework achieves state-of-the-art results on
method name prediction and code graph link prediction.
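As a concrete illustration of the kind of structure such a graph captures, here is a minimal sketch (a hypothetical toy, not the paper's actual representation, which the authors build for Java and Python at scale): it extracts syntactic AST edges plus crude def-to-use data-flow edges from a single Python function, using only the standard library.

```python
import ast

# Toy input: one small function to turn into a code graph.
SOURCE = """
def add_abs(a, b):
    total = a + b
    if total < 0:
        total = -total
    return total
"""

def build_code_graph(src):
    """Return (nodes, edges); each edge is a (src_id, dst_id, kind) triple."""
    tree = ast.parse(src)
    edges = []
    # Syntactic edges: parent -> child links from the AST.
    for parent in ast.walk(tree):
        for child in ast.iter_child_nodes(parent):
            edges.append((id(parent), id(child), "ast"))
    # Crude def -> use data-flow edges: link each variable read to the most
    # recent assignment of that name. Reads on a line are resolved before
    # writes, so `total = -total` links the read to the previous definition.
    names = sorted(
        (n for n in ast.walk(tree) if isinstance(n, ast.Name)),
        key=lambda n: (n.lineno, isinstance(n.ctx, ast.Store), n.col_offset),
    )
    last_def = {}
    for n in names:
        if isinstance(n.ctx, ast.Store):
            last_def[n.id] = id(n)
        elif isinstance(n.ctx, ast.Load) and n.id in last_def:
            edges.append((last_def[n.id], id(n), "dataflow"))
    nodes = {id(n) for n in ast.walk(tree)}
    return nodes, edges

nodes, edges = build_code_graph(SOURCE)
```

A real representation in this vein would add control-flow edges and richer node features; the point of the sketch is only that edge types like "ast" and "dataflow" encode semantics a plain token sequence loses.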
Related papers
- The Trade-off between Universality and Label Efficiency of Representations from Contrastive Learning [32.15608637930748]
We show that there exists a trade-off between the two desiderata so that one may not be able to achieve both simultaneously.
We provide analysis using a theoretical data model and show that, while more diverse pre-training data yield more diverse features for different tasks, they put less emphasis on task-specific features.
arXiv Detail & Related papers (2023-02-28T22:14:33Z)
- CodeExp: Explanatory Code Document Generation [94.43677536210465]
Existing code-to-text generation models produce only high-level summaries of code.
We conduct a human study to identify the criteria for high-quality explanatory docstrings for code.
We present a multi-stage fine-tuning strategy and baseline models for the task.
arXiv Detail & Related papers (2022-11-25T18:05:44Z)
- Improving Model Training via Self-learned Label Representations [5.969349640156469]
We show that more sophisticated label representations are better for classification than the usual one-hot encoding.
We propose Learning with Adaptive Labels (LwAL) algorithm, which simultaneously learns the label representation while training for the classification task.
Our algorithm introduces negligible additional parameters and has a minimal computational overhead.
arXiv Detail & Related papers (2022-09-09T21:10:43Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structured data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Cross-Domain Few-Shot Graph Classification [7.23389716633927]
We study the problem of few-shot graph classification across domains with nonequivalent feature spaces.
We propose an attention-based graph encoder that uses three congruent views of graphs: one contextual and two topological.
We show that when coupled with metric-based meta-learning frameworks, the proposed encoder achieves the best average meta-test classification accuracy.
arXiv Detail & Related papers (2022-01-20T16:16:30Z)
- Graph Convolution for Re-ranking in Person Re-identification [40.9727538382413]
We propose a graph-based re-ranking method to improve learned features while still keeping Euclidean distance as the similarity metric.
A simple yet effective method is proposed to generate a profile vector for each tracklet in videos, which helps extend our method to video re-ID.
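The re-ranking idea above can be sketched in a few lines (a hypothetical toy, not the paper's method): smooth each gallery feature with the mean of its k nearest neighbors, one graph-convolution-like pass over a k-NN feature graph, and then rank by plain Euclidean distance as before.

```python
import numpy as np

def rerank_features(feats, k=1, alpha=0.5):
    """One smoothing pass over a k-NN graph of the (n, dim) feature matrix.

    Each feature is blended with the mean of its k nearest neighbors;
    Euclidean distance on the result still serves as the similarity metric.
    """
    # Pairwise squared Euclidean distances, with self-distances excluded.
    d = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    nbrs = np.argsort(d, axis=1)[:, :k]  # indices of the k nearest neighbors
    return (1 - alpha) * feats + alpha * feats[nbrs].mean(axis=1)
```

On clustered features this pulls same-identity points together, so a downstream Euclidean ranking improves without changing the metric itself, which is the appeal of keeping re-ranking inside the feature space.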
arXiv Detail & Related papers (2021-07-05T18:40:43Z)
- A Robust and Generalized Framework for Adversarial Graph Embedding [73.37228022428663]
We propose a robust framework for adversarial graph embedding, named AGE.
AGE generates the fake neighbor nodes as the enhanced negative samples from the implicit distribution.
Based on this framework, we propose three models to handle three types of graph data.
arXiv Detail & Related papers (2021-05-22T07:05:48Z)
- deGraphCS: Embedding Variable-based Flow Graph for Neural Code Search [15.19181807445119]
We propose a learnable deep Graph for Code Search (called deGraphCS) to transform source code into variable-based flow graphs.
We collect a large-scale dataset from GitHub containing 41,152 code snippets written in C language.
arXiv Detail & Related papers (2021-03-24T06:57:44Z)
- Learning to map source code to software vulnerability using code-as-a-graph [67.62847721118142]
We explore the applicability of Graph Neural Networks in learning the nuances of source code from a security perspective.
We show that a code-as-graph encoding is more meaningful for vulnerability detection than existing code-as-photo and linear sequence encoding approaches.
arXiv Detail & Related papers (2020-06-15T16:05:27Z)
- Auto-Encoding Twin-Bottleneck Hashing [141.5378966676885]
This paper proposes an efficient and adaptive code-driven graph.
It is updated by decoding in the context of an auto-encoder.
Experiments on benchmarked datasets clearly show the superiority of our framework over the state-of-the-art hashing methods.
arXiv Detail & Related papers (2020-02-27T05:58:12Z)
- Evolving Losses for Unsupervised Video Representation Learning [91.2683362199263]
We present a new method to learn video representations from large-scale unlabeled video data.
The proposed unsupervised representation learning results in a single RGB network and outperforms previous methods.
arXiv Detail & Related papers (2020-02-26T16:56:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.