DoPAMine: Domain-specific Pre-training Adaptation from seed-guided data Mining
- URL: http://arxiv.org/abs/2410.00260v2
- Date: Wed, 9 Oct 2024 17:39:59 GMT
- Title: DoPAMine: Domain-specific Pre-training Adaptation from seed-guided data Mining
- Authors: Vinayak Arannil, Neha Narwal, Sourav Sanjukta Bhabesh, Sai Nikhil Thirandas, Darren Yow-Bang Wang, Graham Horwood, Alex Anto Chirayath, Gouri Pandeshwar
- Abstract summary: Large Language Models (LLMs) have shown the ability to generalize effectively across numerous industry domains.
However, LLMs exhibit limitations when tasked with performing in specialized or low-resource industry domains.
In this work, we propose an automated and scalable framework, DoPAMine: Domain-specific Pre-training Adaptation from seed-guided data Mining.
- Score: 2.1534028009401713
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) have shown a remarkable ability to generalize effectively across numerous industry domains while executing a range of tasks. Many of these competencies are acquired from the data used during the pre-training phase of the Language Models (LMs). However, these models exhibit limitations when tasked with performing in specialized or low-resource industry domains. More recent approaches use LLMs to generate domain-specific synthetic data, but this data most often lacks truthfulness and complexity. Alternatively, in domains where data is available, such as healthcare and finance, most of the LMs are proprietary, necessitating a scalable method to curate real-world, industry-specific pre-training data. In this work, we propose an automated and scalable framework, DoPAMine: Domain-specific Pre-training Adaptation from seed-guided data Mining, to mine domain-specific training data from a large data corpus for domain adaptation of an LM. The framework leverages the parametric knowledge of an LLM to generate diverse and representative seed data tailored to a specific domain, which is then used to mine real-world data from a large data corpus like Common Crawl. We evaluated our framework's performance in the continual pre-training (CPT) setting by training two domain-specific 7B parameter LMs in healthcare and finance with data mined via DoPAMine. Our experiments show that DoPAMine boosts the performance of pre-trained LLMs on average by 4.9% and 5.1% in zero-shot and 5-shot settings respectively on healthcare tasks from the MMLU, MedQA, MedMCQA and PubMedQA datasets, and by 2.9% and 6.7% in zero-shot and 5-shot settings respectively on finance tasks from the FiQA-SA, FPB and Headlines datasets, compared to the baseline.
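The abstract describes the mining step only at a high level: LLM-generated seed documents guide the selection of real-world documents from a corpus such as Common Crawl. One common way to realize this kind of seed-guided selection is embedding-based similarity retrieval; the sketch below illustrates that idea under that assumption, with an illustrative embedding model, threshold, and function name rather than the paper's actual implementation.

```python
# Hypothetical sketch of seed-guided mining via embedding similarity.
# The function name, threshold, and model choice are illustrative
# assumptions, not DoPAMine's actual pipeline.
import numpy as np
from sentence_transformers import SentenceTransformer


def mine_domain_documents(seed_docs, corpus_docs, threshold=0.6,
                          model_name="all-MiniLM-L6-v2"):
    """Keep corpus documents whose maximum cosine similarity to any
    LLM-generated seed document exceeds `threshold`."""
    model = SentenceTransformer(model_name)
    seed_emb = model.encode(seed_docs, normalize_embeddings=True)
    corpus_emb = model.encode(corpus_docs, normalize_embeddings=True)
    # With normalized vectors, cosine similarity reduces to a dot product.
    sims = corpus_emb @ seed_emb.T          # shape: (n_corpus, n_seeds)
    keep = sims.max(axis=1) >= threshold
    return [doc for doc, selected in zip(corpus_docs, keep) if selected]
```

In this reading, the seed documents would come from prompting an LLM for diverse, representative passages in the target domain (e.g., healthcare or finance), and the candidate pool would be a shard of a web-scale corpus such as Common Crawl.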
Related papers
- Reinforcement Learning for Long-Horizon Interactive LLM Agents [56.9860859585028]
Interactive digital agents (IDAs) leverage APIs of stateful digital environments to perform tasks in response to user requests.
We present a reinforcement learning (RL) approach that trains IDAs directly in their target environments.
We derive LOOP, a data- and memory-efficient variant of proximal policy optimization.
arXiv Detail & Related papers (2025-02-03T18:35:42Z)
- On Domain-Specific Post-Training for Multimodal Large Language Models [72.67107077850939]
We develop a visual instruction synthesizer that generates diverse visual instruction tasks from domain-specific image-caption pairs.
We apply a single-stage training pipeline to enhance task diversity for domain-specific post-training.
We conduct experiments in two domains, biomedicine and food, by post-training MLLMs of different sources and scales.
arXiv Detail & Related papers (2024-11-29T18:42:28Z)
- Exploring Language Model Generalization in Low-Resource Extractive QA [57.14068405860034]
We investigate Extractive Question Answering (EQA) with Large Language Models (LLMs) under domain drift.
We devise a series of experiments to explain the performance gap empirically.
arXiv Detail & Related papers (2024-09-27T05:06:43Z)
- Data Proportion Detection for Optimized Data Management for Large Language Models [32.62631669919273]
We introduce a new topic, data proportion detection, which enables the automatic estimation of pre-training data proportions.
We provide rigorous theoretical proofs, practical algorithms, and preliminary experimental results for data proportion detection.
arXiv Detail & Related papers (2024-09-26T04:30:32Z)
- Task Oriented In-Domain Data Augmentation [38.525017729123114]
Large Language Models (LLMs) have shown superior performance in various applications and fields.
To achieve better performance on specialized domains such as law and advertisement, LLMs are often continually pre-trained on in-domain data.
We propose TRAIT, a task-oriented in-domain data augmentation framework.
arXiv Detail & Related papers (2024-06-24T14:58:11Z)
- General LLMs as Instructors for Domain-Specific LLMs: A Sequential Fusion Method to Integrate Extraction and Editing [12.017822691367705]
We introduce a Sequential Fusion method to integrate knowledge from complex contexts into Large Language Models (LLMs).
Using our method, domain-specific LLMs achieved a 71.7% accuracy (an average gain of 39.1%) in question-answering tasks.
These findings underscore the effectiveness and flexibility of our approach in FDoR-UL across various domains.
arXiv Detail & Related papers (2024-03-23T06:03:36Z)
- How to Train Data-Efficient LLMs [56.41105687693619]
We study data-efficient approaches for pre-training large language models (LLMs).
In our comparison of 19 samplers, involving hundreds of evaluation tasks and pre-training runs, we find that Ask-LLM and Density sampling are the best methods in their respective categories.
arXiv Detail & Related papers (2024-02-15T02:27:57Z)
- EcomGPT-CT: Continual Pre-training of E-commerce Large Language Models with Semi-structured Data [67.8302955948861]
Large Language Models (LLMs) pre-trained on massive corpora have exhibited remarkable performance on various NLP tasks.
Applying these models to specific domains still poses significant challenges, such as a lack of domain knowledge.
We focus on domain-specific continual pre-training of LLMs, using the E-commerce domain as an exemplar.
arXiv Detail & Related papers (2023-12-25T11:31:47Z)
- DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining [148.90031913522648]
We propose Domain Reweighting with Minimax Optimization (DoReMi).
DoReMi first trains a small proxy model using group distributionally robust optimization (Group DRO) over domains to produce domain weights.
We then resample a dataset with these domain weights and train a larger, full-sized model.
arXiv Detail & Related papers (2023-05-17T17:58:13Z)
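The DoReMi entry above outlines a two-stage recipe: a small proxy model trained with Group DRO yields per-domain weights, and the pre-training corpus is then resampled with those weights before training the full-sized model. The following is a hedged, highly simplified sketch of those two steps; the exponentiated-gradient update, the smoothing term, and the helper names are illustrative assumptions, not the paper's exact algorithm.

```python
# Highly simplified sketch of the two-stage DoReMi recipe summarized
# above; update rule, smoothing, and helper names are assumptions.
import random
import numpy as np


def update_domain_weights(weights, excess_losses, lr=1.0, smoothing=1e-3):
    """One exponentiated-gradient step: upweight domains where the small
    proxy model shows high excess loss (treated here as a given input)."""
    w = np.asarray(weights, dtype=float) * np.exp(lr * np.asarray(excess_losses))
    w /= w.sum()
    # Mix with the uniform distribution for stability.
    return (1 - smoothing) * w + smoothing / len(w)


def resample_corpus(docs_by_domain, weights, n_samples, seed=0):
    """Draw a training set whose domain proportions follow `weights`
    (ordered like docs_by_domain's keys); the larger, full-sized model
    would then be trained on the result."""
    rng = random.Random(seed)
    domains = list(docs_by_domain)
    picks = rng.choices(domains, weights=weights, k=n_samples)
    return [rng.choice(docs_by_domain[d]) for d in picks]
```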
This list is automatically generated from the titles and abstracts of the papers on this site.