SoTaNa: The Open-Source Software Development Assistant
- URL: http://arxiv.org/abs/2308.13416v1
- Date: Fri, 25 Aug 2023 14:56:21 GMT
- Title: SoTaNa: The Open-Source Software Development Assistant
- Authors: Ensheng Shi, Fengji Zhang, Yanlin Wang, Bei Chen, Lun Du, Hongyu
Zhang, Shi Han, Dongmei Zhang, Hongbin Sun
- Abstract summary: SoTaNa is an open-source software development assistant.
It generates high-quality instruction-based data for the domain of software engineering.
It employs a parameter-efficient fine-tuning approach to enhance the open-source foundation model, LLaMA.
- Score: 81.86136560157266
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Software development plays a crucial role in driving innovation and
efficiency across modern societies. To meet the demands of this dynamic field,
there is a growing need for an effective software development assistant.
However, existing large language models, exemplified by ChatGPT, offer limited
accessibility: their training data and model weights are not public. Although
other large open-source models like LLaMA have shown promise, they still
struggle with understanding human intent. In this paper, we present SoTaNa, an
open-source software development assistant. SoTaNa utilizes ChatGPT to generate
high-quality instruction-based data for the domain of software engineering and
employs a parameter-efficient fine-tuning approach to enhance the open-source
foundation model, LLaMA. We evaluate the effectiveness of SoTaNa in answering
Stack Overflow questions and demonstrate its capabilities. Additionally, we
discuss its capabilities in code summarization and generation, as well as the
impact of varying the volume of generated data on model performance. Notably,
SoTaNa can run on a single GPU, making it accessible to a broader range of
researchers. Our code, model weights, and data are public at
https://github.com/DeepSoftwareAnalytics/SoTaNa.
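The abstract's claim that parameter-efficient fine-tuning makes a LLaMA-scale model trainable on a single GPU can be illustrated with a small arithmetic sketch. This assumes a LoRA-style low-rank adapter; the exact PEFT method, layer dimensions, and rank used by SoTaNa are not stated in this summary, so the numbers below are illustrative assumptions only.

```python
# Hypothetical sketch of LoRA-style parameter savings (not SoTaNa's actual
# configuration). Instead of updating a full d_out x d_in weight matrix W,
# LoRA trains two low-rank factors B (d_out x r) and A (r x d_in) and uses
# W + B @ A, so the trainable-parameter count scales with the rank r.

def full_params(d_out: int, d_in: int) -> int:
    """Parameters in a dense weight matrix."""
    return d_out * d_in

def lora_params(d_out: int, d_in: int, r: int) -> int:
    """Trainable parameters in the low-rank update B @ A."""
    return d_out * r + r * d_in

d = 4096  # illustrative hidden size for a 7B-scale model's projection matrices
r = 8     # an assumed, commonly used LoRA rank

print(full_params(d, d))                          # 16777216 per matrix
print(lora_params(d, d, r))                       # 65536 per matrix
print(lora_params(d, d, r) / full_params(d, d))   # 0.00390625, under 0.4%
```

Because only the small adapter matrices receive gradients and optimizer state, the memory footprint of fine-tuning drops by orders of magnitude relative to full fine-tuning, which is what makes single-GPU training plausible.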
Related papers
- LAMBDA: A Large Model Based Data Agent [7.240586338370509]
LAMBDA is a novel open-source, code-free multi-agent data analysis system.
It is designed to address data analysis challenges in complex data-driven applications.
LAMBDA has demonstrated strong performance on various machine learning datasets.
arXiv Detail & Related papers (2024-07-24T06:26:36Z)
- TechGPT-2.0: A large language model project to solve the task of knowledge graph construction [31.638140593358433]
TechGPT-2.0 is a project designed to enhance the capabilities of large language models in knowledge graph construction tasks.
It exhibits robust text processing capabilities, particularly in the domains of medicine and law.
TechGPT-2.0 is trained on Huawei's Ascend server.
arXiv Detail & Related papers (2024-01-09T11:52:58Z)
- Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations [63.19448893196642]
We present a novel framework that harnesses the rich contextual information and semantic representations provided by large language models to analyze behavior graphs.
By leveraging this capability, our framework enables personalized and accurate job recommendations for individual users.
arXiv Detail & Related papers (2023-07-10T11:29:41Z)
- Source Code Data Augmentation for Deep Learning: A Survey [32.035973285175075]
We conduct a comprehensive survey of data augmentation for source code.
We highlight the general strategies and techniques to optimize the DA quality.
We outline the prevailing challenges and potential opportunities for future research.
arXiv Detail & Related papers (2023-05-31T14:47:44Z)
- Diffusion Model is an Effective Planner and Data Synthesizer for Multi-Task Reinforcement Learning [101.66860222415512]
Multi-Task Diffusion Model (MTDiff) is a diffusion-based method that incorporates Transformer backbones and prompt learning for generative planning and data synthesis.
For generative planning, we find MTDiff outperforms state-of-the-art algorithms across 50 tasks on Meta-World and 8 maps on Maze2D.
arXiv Detail & Related papers (2023-05-29T05:20:38Z)
- Enhancing Chat Language Models by Scaling High-quality Instructional Conversations [91.98516412612739]
We first provide a systematically designed, diverse, informative, large-scale dataset of instructional conversations, UltraChat.
Our objective is to capture the breadth of interactions that a human might have with an AI assistant.
We fine-tune a LLaMA model to create a powerful conversational model, UltraLLaMA.
arXiv Detail & Related papers (2023-05-23T16:49:14Z)
- Nemo: Guiding and Contextualizing Weak Supervision for Interactive Data Programming [77.38174112525168]
We present Nemo, an end-to-end interactive weak supervision (WS) system that improves the overall productivity of the WS learning pipeline by 20% on average (and up to 47% on one task) compared to the prevailing WS approach.
arXiv Detail & Related papers (2022-03-02T19:57:32Z)
- KILT: a Benchmark for Knowledge Intensive Language Tasks [102.33046195554886]
We present a benchmark for knowledge-intensive language tasks (KILT).
All tasks in KILT are grounded in the same snapshot of Wikipedia.
We find that a shared dense vector index coupled with a seq2seq model is a strong baseline.
arXiv Detail & Related papers (2020-09-04T15:32:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.