Domain-Specific Code Language Models: Unraveling the Potential for HPC
Codes and Tasks
- URL: http://arxiv.org/abs/2312.13322v1
- Date: Wed, 20 Dec 2023 15:11:06 GMT
- Title: Domain-Specific Code Language Models: Unraveling the Potential for HPC
Codes and Tasks
- Authors: Tal Kadosh, Niranjan Hasabnis, Vy A. Vo, Nadav Schneider, Neva Krien,
Mihai Capota, Abdul Wasay, Nesreen Ahmed, Ted Willke, Guy Tamir, Yuval
Pinter, Timothy Mattson, Gal Oren
- Abstract summary: A growing trend in AI for software development is to develop larger language models (LLMs) to address a variety of programming tasks.
Even LLMs applied to tasks from the high-performance computing (HPC) domain are huge in size and demand expensive compute resources for training.
We build an HPC-specific LM, named MonoCoder, that is orders of magnitude smaller than existing LMs but delivers similar, if not better, performance.
- Score: 5.250454826260407
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With easier access to powerful compute resources, there is a growing trend in
AI for software development to develop larger language models (LLMs) to address
a variety of programming tasks. Even LLMs applied to tasks from the
high-performance computing (HPC) domain are huge in size and demand expensive
compute resources for training. This is partly because these LLMs for HPC tasks
are obtained by finetuning existing LLMs that support several natural and/or
programming languages. We found this design choice confusing - why do we need
large LMs trained on natural languages and programming languages unrelated to
HPC for HPC-specific tasks?
In this line of work, we aim to question choices made by existing LLMs by
developing smaller LMs for specific domains - we call them domain-specific LMs.
Specifically, we start off with HPC as a domain and build an HPC-specific LM,
named MonoCoder, that is orders of magnitude smaller than existing LMs but
delivers similar, if not better, performance on non-HPC and HPC tasks.
Specifically, we pre-trained MonoCoder on an HPC-specific dataset (named
HPCorpus) of C and C++ programs mined from GitHub. We evaluated the performance
of MonoCoder against conventional multi-lingual LLMs. Results demonstrate that
MonoCoder, although much smaller than existing LMs, achieves similar results on
normalized-perplexity tests and much better ones in CodeBLEU competence for
high-performance and parallel code generations. Furthermore, fine-tuning the
base model for the specific task of parallel code generation (OpenMP parallel
for pragmas) demonstrates outstanding results compared to GPT, especially when
local misleading semantics are removed by our novel pre-processor Tokompiler,
showcasing the ability of domain-specific models to assist in HPC-relevant
tasks.
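To make the fine-tuning task concrete, the sketch below illustrates the kind of input/output pair involved in OpenMP parallel-for pragma generation: the model sees a serial C loop and must produce the matching pragma. The dictionary schema and the `attach_pragma` helper are illustrative assumptions for this page, not MonoCoder's actual data format or pipeline.

```python
# Hypothetical illustration of the pragma-generation task (schema assumed,
# not the paper's actual fine-tuning format): given a serial C loop, the
# model's target output is the OpenMP pragma that parallelizes it.

example = {
    "input": (
        "for (int i = 0; i < n; i++) {\n"
        "    c[i] = a[i] + b[i];\n"
        "}"
    ),
    "target": "#pragma omp parallel for",
}

def attach_pragma(loop_src: str, pragma: str) -> str:
    """Prepend the generated pragma to the serial loop body."""
    return pragma + "\n" + loop_src

parallelized = attach_pragma(example["input"], example["target"])
print(parallelized.splitlines()[0])  # -> "#pragma omp parallel for"
```

In this framing, evaluation reduces to comparing the generated pragma (and any clauses) against the developer-written one, which is where token-level metrics such as CodeBLEU come in.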
Related papers
- BLADE: Enhancing Black-box Large Language Models with Small Domain-Specific Models [56.89958793648104]
Large Language Models (LLMs) are versatile and capable of addressing a diverse range of tasks.
Previous approaches either conduct continuous pre-training with domain-specific data or employ retrieval augmentation to support general LLMs.
We present a novel framework named BLADE, which enhances Black-box LArge language models with small Domain-spEcific models.
arXiv Detail & Related papers (2024-03-27T08:57:21Z)
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback [58.20547418182074]
We introduce StepCoder, a novel framework for code generation, consisting of two main components: CCCS and FGO.
CCCS addresses the exploration challenge by breaking the long-sequence code generation task into a Curriculum of Code Completion Subtasks.
FGO optimizes the model only on executed code, masking the unexecuted code segments to provide Fine-Grained Optimization.
Our method improves the ability to explore the output space and outperforms state-of-the-art approaches in corresponding benchmarks.
arXiv Detail & Related papers (2024-02-02T13:14:31Z)
- OMPGPT: A Generative Pre-trained Transformer Model for OpenMP [6.917568654215119]
OMPGPT is a novel domain-specific model meticulously designed to harness the inherent strengths of language models for OpenMP pragma generation.
We leverage prompt engineering techniques from the NLP domain to create Chain-of-OMP, an innovative strategy designed to enhance OMPGPT's effectiveness.
arXiv Detail & Related papers (2024-01-28T06:06:59Z)
- If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code Empowers Large Language Models to Serve as Intelligent Agents [81.60906807941188]
Large language models (LLMs) are trained on a combination of natural language and formal language (code).
Code translates high-level goals into executable steps, featuring standard syntax, logical consistency, abstraction, and modularity.
arXiv Detail & Related papers (2024-01-01T16:51:20Z)
- ML-Bench: Evaluating Large Language Models and Agents for Machine Learning Tasks on Repository-Level Code [76.84199699772903]
ML-Bench is a benchmark rooted in real-world programming applications that leverage existing code repositories to perform tasks.
To evaluate both Large Language Models (LLMs) and AI agents, two setups are employed: ML-LLM-Bench for assessing LLMs' text-to-code conversion within a predefined deployment environment, and ML-Agent-Bench for testing autonomous agents in an end-to-end task execution within a Linux sandbox environment.
arXiv Detail & Related papers (2023-11-16T12:03:21Z)
- HPC-GPT: Integrating Large Language Model for High-Performance Computing [3.8078849170829407]
We propose HPC-GPT, a novel LLaMA-based model fine-tuned, via supervised learning, on generated QA (Question-Answer) instances for the HPC domain.
To evaluate its effectiveness, we concentrate on two HPC tasks: managing AI models and datasets for HPC, and data race detection.
Our experiments on open-source benchmarks yield extensive results, underscoring HPC-GPT's potential to bridge the performance gap between LLMs and HPC-specific tasks.
arXiv Detail & Related papers (2023-10-03T01:34:55Z)
- CRAFT: Customizing LLMs by Creating and Retrieving from Specialized Toolsets [75.64181719386497]
We present CRAFT, a tool creation and retrieval framework for large language models (LLMs).
It creates toolsets specifically curated for the tasks and equips LLMs with a component that retrieves tools from these sets to enhance their capability to solve complex tasks.
Our method is designed to be flexible and offers a plug-and-play approach to adapt off-the-shelf LLMs to unseen domains and modalities, without any finetuning.
arXiv Detail & Related papers (2023-09-29T17:40:26Z)
- Scope is all you need: Transforming LLMs for HPC Code [5.0227775038998415]
We propose a novel tokenizer named Tokompiler, designed specifically for preprocessing code in HPC and compilation-centric tasks.
Tokompiler leverages knowledge of language primitives to generate language-oriented tokens, providing a context-aware understanding of code structure.
Results demonstrate that Tokompiler significantly enhances code completion accuracy and semantic understanding compared to traditional tokenizers.
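As a rough illustration of the idea (the regex approach and `var_N` token scheme below are assumptions for this sketch; the actual Tokompiler works from parsed language primitives, not regexes), replacing user-chosen identifiers with anonymized tokens can be sketched as:

```python
import re

# Simplified sketch of Tokompiler-style pre-processing: user-chosen
# identifiers are replaced with anonymized placeholder tokens so the model
# cannot rely on misleading local names, while keywords are preserved.

C_KEYWORDS = {"for", "if", "else", "while", "return",
              "int", "float", "double", "void", "char"}

def anonymize(code: str) -> str:
    mapping = {}  # original identifier -> anonymized token
    def repl(m):
        name = m.group(0)
        if name in C_KEYWORDS:
            return name
        if name not in mapping:
            mapping[name] = f"var_{len(mapping) + 1}"
        return mapping[name]
    return re.sub(r"[A-Za-z_][A-Za-z0-9_]*", repl, code)

print(anonymize("for (int i = 0; i < len; i++) sum += data[i];"))
# -> "for (int var_1 = 0; var_1 < var_2; var_1++) var_3 += var_4[var_1];"
```

The key property is that the same identifier always maps to the same token, so the code's structure survives even though its local naming does not.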
arXiv Detail & Related papers (2023-08-18T10:12:03Z)
- HPC-Coder: Modeling Parallel Programs using Large Language Models [2.3101915391170573]
We show how large language models can be applied to tasks specific to high performance and scientific codes.
We introduce a new dataset of HPC and scientific codes and use it to fine-tune several pre-trained models.
In our experiments, we show that this model can auto-complete HPC functions where generic models cannot.
arXiv Detail & Related papers (2023-06-29T19:44:55Z)
- LM4HPC: Towards Effective Language Model Application in High-Performance Computing [0.46180371154032884]
We design the LM4HPC framework to facilitate the research and development of HPC software analyses and optimizations using LMs.
Our framework is built on top of a range of components from different levels of the machine learning software stack, with Hugging Face-compatible APIs.
The results show that LM4HPC can help users quickly evaluate a set of state-of-the-art models and generate insightful leaderboards.
arXiv Detail & Related papers (2023-06-26T18:05:03Z)
- Augmented Language Models: a Survey [55.965967655575454]
This survey reviews works in which language models (LMs) are augmented with reasoning skills and the ability to use tools.
We refer to them as Augmented Language Models (ALMs).
The missing token objective allows ALMs to learn to reason, use tools, and even act, while still performing standard natural language tasks.
arXiv Detail & Related papers (2023-02-15T18:25:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.