A Transformer-based Approach for Augmenting Software Engineering Chatbots Datasets
- URL: http://arxiv.org/abs/2407.11955v1
- Date: Tue, 16 Jul 2024 17:48:44 GMT
- Title: A Transformer-based Approach for Augmenting Software Engineering Chatbots Datasets
- Authors: Ahmad Abdellatif, Khaled Badran, Diego Elias Costa, Emad Shihab
- Abstract summary: We present an automated transformer-based approach to augment software engineering datasets.
We evaluate the impact of using the augmentation approach on the Rasa NLU's performance using three software engineering datasets.
- Score: 4.311626046942916
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Background: The adoption of chatbots into software development tasks has become increasingly popular among practitioners, driven by the advantages of cost reduction and acceleration of the software development process. Chatbots understand users' queries through the Natural Language Understanding component (NLU). To yield reasonable performance, NLUs have to be trained with extensive, high-quality datasets that express the multitude of ways users may interact with chatbots. However, previous studies show that creating a high-quality training dataset for software engineering chatbots is expensive in terms of both resources and time. Aims: Therefore, in this paper, we present an automated transformer-based approach to augment software engineering chatbot datasets. Method: Our approach combines traditional natural language processing techniques with the BART transformer to augment a dataset by generating queries through synonym replacement and paraphrasing. We evaluate the impact of using the augmentation approach on the Rasa NLU's performance using three software engineering datasets. Results: Overall, the augmentation approach shows promising results in improving Rasa's performance, augmenting queries with varied sentence structures while preserving their original semantics. Furthermore, it increases Rasa's confidence in its intent classification for the correctly classified intents. Conclusions: We believe that our study helps practitioners improve the performance of their chatbots and guides future research to propose augmentation techniques for SE chatbots.
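The synonym-replacement step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the hand-written `SYNONYMS` table stands in for the NLP tooling the authors use to find synonyms, and the BART paraphrasing step is not shown.

```python
# Illustrative synonym table; the paper's approach derives synonyms with
# standard NLP tooling rather than a hand-written dictionary.
SYNONYMS = {
    "fix": ["resolve", "repair"],
    "bug": ["defect", "fault"],
    "show": ["display", "list"],
}

def augment_by_synonym_replacement(query):
    """Generate variants of a training query by swapping one word at a
    time for a known synonym, preserving the rest of the sentence."""
    variants = set()
    words = query.split()
    for i, word in enumerate(words):
        for synonym in SYNONYMS.get(word.lower(), []):
            variant = words[:i] + [synonym] + words[i + 1:]
            variants.add(" ".join(variant))
    return sorted(variants)

print(augment_by_synonym_replacement("show the bug report"))
```

Each variant keeps the original intent label, so the augmented queries can be appended directly to the NLU training set.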
Related papers
- An Approach for Auto Generation of Labeling Functions for Software Engineering Chatbots [3.1911318265930944]
We propose an approach to automatically generate labeling functions (LFs) by extracting patterns from labeled user queries.
We evaluate the effectiveness of our approach by applying it to the queries of four diverse SE datasets.
arXiv Detail & Related papers (2024-10-09T17:34:14Z) - Body Transformer: Leveraging Robot Embodiment for Policy Learning [51.531793239586165]
Body Transformer (BoT) is an architecture that leverages the robot embodiment by providing an inductive bias that guides the learning process.
We represent the robot body as a graph of sensors and actuators, and rely on masked attention to pool information throughout the architecture.
The resulting architecture outperforms the vanilla transformer, as well as the classical multilayer perceptron, in terms of task completion, scaling properties, and computational efficiency.
arXiv Detail & Related papers (2024-08-12T17:31:28Z) - Distinguishing Chatbot from Human [1.1249583407496218]
We develop a new dataset consisting of more than 750,000 human-written paragraphs.
Based on this dataset, we apply Machine Learning (ML) techniques to determine the origin of text.
Our proposed solutions offer high classification accuracy and serve as useful tools for textual analysis.
arXiv Detail & Related papers (2024-08-03T13:18:04Z) - Learning Manipulation by Predicting Interaction [85.57297574510507]
We propose a general pre-training pipeline that learns Manipulation by Predicting the Interaction (MPI).
Experimental results demonstrate that MPI achieves remarkable improvements of 10% to 64% over the previous state of the art on real-world robot platforms.
arXiv Detail & Related papers (2024-06-01T13:28:31Z) - Large language model-powered chatbots for internationalizing student support in higher education [0.0]
This research explores the integration of GPT-3.5 and GPT-4 Turbo into higher education to enhance internationalization and leverage digital transformation.
It delves into the design, implementation, and application of Large Language Models (LLMs) for improving student engagement, information access, and support.
arXiv Detail & Related papers (2024-03-16T23:50:19Z) - Multi-Purpose NLP Chatbot : Design, Methodology & Conclusion [0.0]
This research paper provides a thorough analysis of the current chatbot technology landscape.
It provides a very flexible system that makes use of reinforcement learning strategies to improve user interactions and conversational experiences.
The complexity of chatbot technology development is also explored in this study, along with the causes that have propelled these developments and their far-reaching effects on a range of sectors.
arXiv Detail & Related papers (2023-10-13T09:47:24Z) - XDBERT: Distilling Visual Information to BERT from Cross-Modal Systems to Improve Language Understanding [73.24847320536813]
This study explores distilling visual information from pretrained multimodal transformers to pretrained language encoders.
Our framework is inspired by cross-modal encoders' success in visual-language tasks while we alter the learning objective to cater to the language-heavy characteristics of NLU.
arXiv Detail & Related papers (2022-04-15T03:44:00Z) - On the validity of pre-trained transformers for natural language processing in the software engineering domain [78.32146765053318]
We compare BERT transformer models trained with software engineering data with transformers based on general domain data.
Our results show that for tasks that require understanding of the software engineering context, pre-training with software engineering data is valuable.
arXiv Detail & Related papers (2021-09-10T08:46:31Z) - Put Chatbot into Its Interlocutor's Shoes: New Framework to Learn Chatbot Responding with Intention [55.77218465471519]
This paper proposes an innovative framework to train chatbots to possess human-like intentions.
Our framework includes a guiding robot and an interlocutor model that plays the role of a human.
We examined our framework using three experimental setups and evaluated the guiding robot with four different metrics to demonstrate its flexibility and performance advantages.
arXiv Detail & Related papers (2021-03-30T15:24:37Z) - Pretrained Transformers for Text Ranking: BERT and Beyond [53.83210899683987]
This survey provides an overview of text ranking with neural network architectures known as transformers.
The combination of transformers and self-supervised pretraining has been responsible for a paradigm shift in natural language processing.
arXiv Detail & Related papers (2020-10-13T15:20:32Z) - Chatbot Interaction with Artificial Intelligence: Human Data Augmentation with T5 and Language Transformer Ensemble for Text Classification [2.492300648514128]
We present the Chatbot Interaction with Artificial Intelligence (CI-AI) framework as an approach to the training of deep learning chatbots for task classification.
The intelligent system augments human-sourced data via artificial paraphrasing in order to generate a large set of training data.
We find that all models are improved when training data is augmented by the T5 model.
arXiv Detail & Related papers (2020-10-12T19:37:18Z)
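The CI-AI-style paraphrase augmentation described above can be sketched as a simple label-preserving expansion loop. This is an illustrative sketch only: the `paraphrase` function below is a hypothetical stand-in using fixed templates, whereas CI-AI generates paraphrases with a T5 model.

```python
def paraphrase(text):
    # Placeholder paraphraser: a real system would call a seq2seq model
    # (e.g. T5) here instead of applying fixed templates.
    templates = ["could you {}", "please {}"]
    return [t.format(text) for t in templates]

def augment_dataset(examples):
    """Expand (query, intent) pairs with paraphrases that keep the
    original intent label, growing the training set."""
    augmented = list(examples)
    for query, intent in examples:
        augmented.extend((p, intent) for p in paraphrase(query))
    return augmented

data = [("open a new issue", "CreateIssue")]
print(augment_dataset(data))
```

Because every generated paraphrase inherits its source query's label, the expanded set can be fed to the classifier's training pipeline unchanged.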
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.