Incorporating Prompt Tuning for Commit Classification with Prior Knowledge
- URL: http://arxiv.org/abs/2308.10576v2
- Date: Thu, 26 Oct 2023 08:23:08 GMT
- Title: Incorporating Prompt Tuning for Commit Classification with Prior Knowledge
- Authors: Jiajun Tong, Xiaobin Rui
- Abstract summary: Commit Classification (CC) is an important task in software maintenance.
We propose a generative framework that incorporates prompt-tuning for commit classification with prior knowledge.
Our framework can solve the CC problem simply but effectively in few-shot and zero-shot scenarios.
- Score: 0.76146285961466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Commit Classification (CC) is an important task in software maintenance,
since it helps software developers classify code changes into different types
according to their nature and purpose. This allows them to better understand how
their development efforts are progressing and to identify areas where they need
improvement. However, existing methods are all discriminative models, usually with
complex architectures that require additional output layers to produce class label
probabilities. Moreover, they require a large amount of labeled data for
fine-tuning, and it is difficult to learn effective classification boundaries when
labeled data is limited. To solve the above problems, we propose a generative
framework that incorporates prompt-tuning for commit classification with prior
knowledge (IPCK, https://github.com/AppleMax1992/IPCK), which simplifies the model
structure and learns features across different tasks. It can still reach SOTA
performance with only limited samples. First, we propose a generative framework
based on T5. This encoder-decoder construction unifies different CC tasks into a
text2text problem, which simplifies the structure of the model by not requiring an
extra output layer. Second, instead of fine-tuning, we design a prompt-tuning
solution that can be adopted in few-shot scenarios with only limited samples.
Furthermore, we incorporate prior knowledge via an external knowledge graph to map
the probabilities of words onto the final labels in the verbalizer step, improving
performance in few-shot scenarios. Extensive experiments on two openly available
datasets show that our framework can solve the CC problem simply but effectively in
few-shot and zero-shot scenarios, while improving the adaptability of the model
without requiring a large amount of training samples for fine-tuning.
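
To make the text2text formulation and the verbalizer-style label mapping concrete, the sketch below shows one way it could look with the Hugging Face transformers T5 API. The prompt template, the label words, and their mapping to commit classes are illustrative assumptions, not the authors' exact IPCK implementation (see the linked repository); the trainable prompt parameters and the knowledge-graph expansion of the label-word sets are omitted.

```python
# Minimal sketch (assumptions noted above): cast commit classification as a
# text2text cloze task and map word probabilities onto class labels.
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")
model.eval()

# Hypothetical verbalizer: each commit class is represented by a set of label
# words; in IPCK these sets would be enriched via an external knowledge graph.
verbalizer = {
    "corrective": ["fix", "bug", "repair"],
    "perfective": ["improve", "refactor", "cleanup"],
    "adaptive":   ["add", "feature", "support"],
}

def classify_commit(message: str) -> str:
    # Cloze-style prompt; <extra_id_0> is T5's sentinel token for the blank.
    prompt = f"commit message: {message} This change is a <extra_id_0> change."
    inputs = tokenizer(prompt, return_tensors="pt")

    # Score the token predicted right after the sentinel in the decoder.
    sentinel_id = tokenizer.convert_tokens_to_ids("<extra_id_0>")
    decoder_input_ids = torch.tensor(
        [[model.config.decoder_start_token_id, sentinel_id]]
    )
    with torch.no_grad():
        logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[0, -1]
    probs = logits.softmax(dim=-1)

    # Verbalizer step: sum the probabilities of each label's words
    # (first subword piece only, for simplicity) and pick the best class.
    scores = {}
    for label, words in verbalizer.items():
        ids = [tokenizer.encode(w, add_special_tokens=False)[0] for w in words]
        scores[label] = probs[ids].sum().item()
    return max(scores, key=scores.get)

print(classify_commit("fix null pointer dereference in parser"))
```

As written this runs zero-shot; the prompt-tuning step described in the paper would train soft prompt (or template) parameters on a few labeled commits while keeping the same generation-plus-verbalizer scoring path.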
Related papers
- Improve Meta-learning for Few-Shot Text Classification with All You Can Acquire from the Tasks [10.556477506959888]
Existing methods often encounter difficulties in drawing accurate class prototypes from support set samples.
Recent approaches attempt to incorporate external knowledge or pre-trained language models to augment data, but this requires additional resources.
We propose a novel solution by adequately leveraging the information within the task itself.
arXiv Detail & Related papers (2024-10-14T12:47:11Z) - Generative Multi-modal Models are Good Class-Incremental Learners [51.5648732517187]
We propose a novel generative multi-modal model (GMM) framework for class-incremental learning.
Our approach directly generates labels for images using an adapted generative model.
Under the Few-shot CIL setting, we have improved by at least 14% accuracy over all the current state-of-the-art methods with significantly less forgetting.
arXiv Detail & Related papers (2024-03-27T09:21:07Z) - Transfer Learning for Structured Pruning under Limited Task Data [15.946734013984184]
We propose a framework which combines structured pruning with transfer learning to reduce the need for task-specific data.
We demonstrate that our framework results in pruned models with improved generalization over strong baselines.
arXiv Detail & Related papers (2023-11-10T20:23:35Z) - TransformCode: A Contrastive Learning Framework for Code Embedding via Subtree Transformation [9.477734501499274]
We present TransformCode, a novel framework that learns code embeddings in a contrastive learning manner.
Our framework is encoder-agnostic and language-agnostic, which means that it can leverage any encoder model and handle any programming language.
arXiv Detail & Related papers (2023-11-10T09:05:23Z) - Boosting Commit Classification with Contrastive Learning [0.8655526882770742]
Commit Classification (CC) is an important task in software maintenance.
We propose a contrastive learning-based commit classification framework.
Our framework can solve the CC problem simply but effectively in few-shot scenarios.
arXiv Detail & Related papers (2023-08-16T10:02:36Z) - Improving Cross-task Generalization of Unified Table-to-text Models with
Compositional Task Configurations [63.04466647849211]
Methods typically encode task information with a simple dataset name as a prefix to the encoder.
We propose compositional task configurations, a set of prompts prepended to the encoder to improve cross-task generalization.
We show this not only allows the model to better learn shared knowledge across different tasks at training, but also allows us to control the model by composing new configurations.
arXiv Detail & Related papers (2022-12-17T02:20:14Z) - Discrete Key-Value Bottleneck [95.61236311369821]
Deep neural networks perform well on classification tasks where data streams are i.i.d. and labeled data is abundant.
One powerful approach that has addressed this challenge involves pre-training of large encoders on volumes of readily available data, followed by task-specific tuning.
Given a new task, however, updating the weights of these encoders is challenging as a large number of weights needs to be fine-tuned, and as a result, they forget information about the previous tasks.
We propose a model architecture to address this issue, building upon a discrete bottleneck containing pairs of separate and learnable key-value codes.
arXiv Detail & Related papers (2022-07-22T17:52:30Z) - Few-Shot Class-Incremental Learning by Sampling Multi-Phase Tasks [59.12108527904171]
A model should recognize new classes and maintain discriminability over old classes.
The task of recognizing few-shot new classes without forgetting old classes is called few-shot class-incremental learning (FSCIL).
We propose a new paradigm for FSCIL based on meta-learning by LearnIng Multi-phase Incremental Tasks (LIMIT).
arXiv Detail & Related papers (2022-03-31T13:46:41Z) - Few-Shot Learning with Siamese Networks and Label Tuning [5.006086647446482]
We show that with proper pre-training, Siamese Networks that embed texts and labels offer a competitive alternative.
We introduce label tuning, a simple and computationally efficient approach that adapts the models in a few-shot setup by changing only the label embeddings.
arXiv Detail & Related papers (2022-03-28T11:16:46Z) - Partial Is Better Than All: Revisiting Fine-tuning Strategy for Few-shot
Learning [76.98364915566292]
A common practice is to train a model on the base set first and then transfer to novel classes through fine-tuning.
We propose to transfer partial knowledge by freezing or fine-tuning particular layer(s) in the base model (a minimal layer-freezing sketch appears after this list).
We conduct extensive experiments on CUB and mini-ImageNet to demonstrate the effectiveness of our proposed method.
arXiv Detail & Related papers (2021-02-08T03:27:05Z) - Prior Guided Feature Enrichment Network for Few-Shot Segmentation [64.91560451900125]
State-of-the-art semantic segmentation methods require sufficient labeled data to achieve good results.
Few-shot segmentation is proposed to tackle this problem by learning a model that quickly adapts to new classes with a few labeled support samples.
These frameworks still face the challenge of reduced generalization ability on unseen classes due to inappropriate use of high-level semantic information.
arXiv Detail & Related papers (2020-08-04T10:41:32Z)
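
The "Partial Is Better Than All" entry above describes transferring partial knowledge by freezing or fine-tuning particular layers of a pre-trained base model. Below is a minimal PyTorch sketch of that idea; the ResNet-18 backbone, the choice of which stage to leave trainable, and the 5-way head are illustrative assumptions rather than that paper's exact setup.

```python
# Minimal sketch of partial knowledge transfer: freeze most of a pre-trained
# backbone and fine-tune only the last stage plus a fresh classification head.
import torch
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for name, param in backbone.named_parameters():
    # Keep only the last residual stage trainable; which layer(s) to unfreeze
    # is exactly the design choice such partial-transfer methods study.
    param.requires_grad = name.startswith("layer4")

# Replace the classifier head for a hypothetical 5-way novel-class episode.
backbone.fc = torch.nn.Linear(backbone.fc.in_features, 5)

# Optimize only the parameters that remain trainable.
optimizer = torch.optim.SGD(
    (p for p in backbone.parameters() if p.requires_grad),
    lr=1e-2,
    momentum=0.9,
)
```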