EcomGPT-CT: Continual Pre-training of E-commerce Large Language Models
with Semi-structured Data
- URL: http://arxiv.org/abs/2312.15696v1
- Date: Mon, 25 Dec 2023 11:31:47 GMT
- Title: EcomGPT-CT: Continual Pre-training of E-commerce Large Language Models
with Semi-structured Data
- Authors: Shirong Ma, Shen Huang, Shulin Huang, Xiaobin Wang, Yangning Li,
Hai-Tao Zheng, Pengjun Xie, Fei Huang and Yong Jiang
- Abstract summary: Large Language Models (LLMs) pre-trained on massive corpora have exhibited remarkable performance on various NLP tasks.
Applying these models to specific domains still poses significant challenges, such as lack of domain knowledge.
We focus on domain-specific continual pre-training of LLMs, using the E-commerce domain as an exemplar.
- Score: 67.8302955948861
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) pre-trained on massive corpora have exhibited
remarkable performance on various NLP tasks. However, applying these models to
specific domains still poses significant challenges, such as lack of domain
knowledge, limited capacity to leverage domain knowledge and inadequate
adaptation to domain-specific data formats. Considering the exorbitant cost of
training LLMs from scratch and the scarcity of annotated data within particular
domains, in this work, we focus on domain-specific continual pre-training of
LLMs, using the E-commerce domain as an exemplar. Specifically, we explore the
impact of continual pre-training on LLMs employing unlabeled general and
E-commerce corpora. Furthermore, we design a mixing strategy across different
data sources to better leverage E-commerce semi-structured data. We construct
multiple tasks to assess LLMs' few-shot in-context learning ability and their
zero-shot performance after instruction tuning in the E-commerce domain.
Experimental results demonstrate the effectiveness of continual pre-training of
E-commerce LLMs and the efficacy of our devised data mixing strategy.
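The abstract does not detail the mixing strategy, but the mechanism it describes - continual pre-training on a stream that interleaves unlabeled general text with E-commerce semi-structured records - can be sketched roughly as below. The sampling weights, field layout, and function names are illustrative assumptions, not the paper's implementation.

```python
import random

# Hypothetical sampling weights between unlabeled general text and
# E-commerce semi-structured records; the ratios actually used by
# EcomGPT-CT are not stated in the abstract.
MIX_WEIGHTS = {"general": 0.5, "ecom_semistructured": 0.5}

def render_record(record: dict) -> str:
    """Linearize a semi-structured product record (attribute-value pairs)
    into plain text so a causal-LM pre-training loop can consume it."""
    return "\n".join(f"{key}: {value}" for key, value in record.items())

def mixed_corpus(general_docs, ecom_records, seed=0):
    """Yield pre-training documents, drawing from each source by weight."""
    rng = random.Random(seed)
    general_it, ecom_it = iter(general_docs), iter(ecom_records)
    names, weights = zip(*MIX_WEIGHTS.items())
    while True:
        source = rng.choices(names, weights=weights, k=1)[0]
        try:
            yield next(general_it) if source == "general" else render_record(next(ecom_it))
        except StopIteration:
            return  # stop when either source is exhausted

# Example: two tiny corpora mixed into one stream.
general = ["LLMs are pre-trained on massive corpora.", "Continual pre-training adapts them."]
ecom = [{"title": "Wireless Mouse", "brand": "Acme", "color": "black"}]
for doc in mixed_corpus(general, ecom):
    print(doc[:60])
```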
Related papers
- PISTOL: Dataset Compilation Pipeline for Structural Unlearning of LLMs (arXiv 2024-06-24)
Machine unlearning, which seeks to erase specific data stored in the pre-trained or fine-tuned models, has emerged as a crucial protective measure for LLMs.
To facilitate the development of structural unlearning methods, we propose PISTOL, a pipeline for compiling multi-scenario datasets.
We conduct benchmarks with four distinct unlearning methods on both Llama2-7B and Mistral-7B models.
- Federated Domain-Specific Knowledge Transfer on Large Language Models Using Synthetic Data (arXiv 2024-05-23)
We introduce a Federated Domain-specific Knowledge Transfer framework.
It enables domain-specific knowledge transfer from LLMs to SLMs while preserving clients' data privacy.
The proposed FDKT framework consistently and greatly improves SLMs' task performance by around 5% with a privacy budget of less than 10.
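As a rough illustration of the pattern summarized above - an LLM producing synthetic domain data that a small local model is then fine-tuned on - here is a minimal sketch. All names are placeholders, and the privacy sanitization and accounting that FDKT actually performs are abstracted away.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SyntheticExample:
    prompt: str
    response: str

def transfer_via_synthetic_data(
    generate_with_llm: Callable[[str, int], List[SyntheticExample]],
    finetune_slm: Callable[[List[SyntheticExample]], None],
    domain_seeds: List[str],
    samples_per_seed: int = 8,
) -> None:
    """Schematic knowledge-transfer loop: a large model produces synthetic
    domain examples from (already privacy-sanitized) seed descriptions, and a
    small local model is fine-tuned on them. Not the FDKT implementation."""
    synthetic: List[SyntheticExample] = []
    for seed in domain_seeds:
        synthetic.extend(generate_with_llm(seed, samples_per_seed))
    finetune_slm(synthetic)
```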
- BLADE: Enhancing Black-box Large Language Models with Small Domain-Specific Models (arXiv 2024-03-27)
Large Language Models (LLMs) are versatile and capable of addressing a diverse range of tasks.
Previous approaches either conduct continuous pre-training with domain-specific data or employ retrieval augmentation to support general LLMs.
We present a novel framework named BLADE, which enhances Black-box LArge language models with small Domain-spEcific models.
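A minimal sketch of the division of labor suggested by the summary above: a small, tunable domain model supplies domain knowledge, and the black-box general LLM is prompted with it. The prompt wording and function signatures here are hypothetical, not BLADE's actual interface.

```python
from typing import Callable

def answer_with_domain_support(
    query: str,
    small_domain_model: Callable[[str], str],  # returns domain-knowledge text
    blackbox_llm: Callable[[str], str],        # e.g. an API call: prompt in, text out
) -> str:
    """Let a small domain model supply knowledge that a black-box general LLM
    consumes through its prompt (a generic sketch of the BLADE-style setup)."""
    domain_knowledge = small_domain_model(query)
    prompt = (
        "Use the following domain knowledge to answer the question.\n"
        f"Domain knowledge: {domain_knowledge}\n"
        f"Question: {query}\nAnswer:"
    )
    return blackbox_llm(prompt)
```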
- Characterization of Large Language Model Development in the Datacenter (arXiv 2024-03-12)
Large Language Models (LLMs) have presented impressive performance across several transformative tasks.
However, it is non-trivial to efficiently utilize large-scale cluster resources to develop LLMs.
We present an in-depth characterization study of a six-month LLM development workload trace collected from our GPU datacenter Acme.
- Investigating Continual Pretraining in Large Language Models: Insights and Implications (arXiv 2024-02-27)
This paper studies the evolving domain of Continual Learning in large language models (LLMs).
Our primary emphasis is on continual domain-adaptive pretraining, a process designed to equip LLMs with the ability to integrate new information from various domains.
We examine the impact of model size on learning efficacy and forgetting, as well as how the progression and similarity of emerging domains affect the knowledge transfer within these models.
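The forgetting referred to above can be made concrete with a simple bookkeeping helper: record per-domain evaluation scores after each continual-pretraining stage and report how far each domain has fallen from its best score. The metric below is a generic illustration, not necessarily the one used in the paper.

```python
from typing import Dict, List

def forgetting_by_domain(eval_history: List[Dict[str, float]]) -> Dict[str, float]:
    """Given per-domain scores recorded after each continual-pretraining stage,
    report how much each domain degraded from its best score (a simple,
    generic forgetting measure)."""
    domains = eval_history[-1].keys()
    return {
        d: max(stage.get(d, float("-inf")) for stage in eval_history) - eval_history[-1][d]
        for d in domains
    }

# Example: scores on domains A and B after each of three training stages.
history = [{"A": 0.70}, {"A": 0.66, "B": 0.72}, {"A": 0.61, "B": 0.69}]
print(forgetting_by_domain(history))  # roughly {'A': 0.09, 'B': 0.03}
```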
- Knowledge Plugins: Enhancing Large Language Models for Domain-Specific Recommendations (arXiv 2023-11-16)
We propose a general paradigm that augments large language models with DOmain-specific KnowledgE to enhance their performance on practical applications, namely DOKE.
This paradigm relies on a domain knowledge extractor, working in three steps: 1) preparing effective knowledge for the task; 2) selecting the knowledge for each specific sample; and 3) expressing the knowledge in an LLM-understandable way.
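The three-step DOKE workflow described above maps naturally onto three pluggable functions; the sketch below shows the flow with toy components. All names are placeholders rather than the paper's API.

```python
from typing import Callable, List

def doke_style_prompt(
    sample: str,
    prepare_knowledge: Callable[[], List[str]],               # step 1: task-level knowledge pool
    select_knowledge: Callable[[str, List[str]], List[str]],  # step 2: per-sample selection
    express: Callable[[List[str]], str],                      # step 3: verbalize for the LLM
) -> str:
    """Prepare a knowledge pool for the task, select the pieces relevant to
    this sample, and express them in a form the (frozen) LLM can consume in
    its prompt - a generic sketch of the three-step paradigm."""
    pool = prepare_knowledge()
    chosen = select_knowledge(sample, pool)
    knowledge_text = express(chosen)
    return f"{knowledge_text}\n\nInput: {sample}\nOutput:"

# Minimal usage with toy components (keyword-overlap selection).
pool_fn = lambda: ["Item 42 is frequently bought with item 7.", "Brand X targets runners."]
select_fn = lambda s, pool: [k for k in pool if any(w in k.lower() for w in s.lower().split())]
express_fn = lambda ks: "Known facts:\n" + "\n".join(f"- {k}" for k in ks)
print(doke_style_prompt("recommend shoes for runners", pool_fn, select_fn, express_fn))
```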
- Efficient Continual Pre-training for Building Domain Specific Large Language Models (arXiv 2023-11-14)
Large language models (LLMs) have demonstrated remarkable open-domain capabilities.
Traditionally, LLMs tailored for a domain are trained from scratch to excel at handling domain-specific tasks.
We introduce FinPythia-6.9B, developed through domain-adaptive continual pre-training on the financial domain.
- Large Language Models Can Be Good Privacy Protection Learners (arXiv 2023-10-03)
We introduce Privacy Protection Language Models (PPLM), a novel paradigm for fine-tuning language models.
Our work offers a theoretical analysis for model design and delves into various techniques such as corpus curation, penalty-based unlikelihood in training loss, and instruction-based tuning.
In particular, instruction tuning with both positive and negative examples stands out as a promising method, effectively protecting private data while enhancing the model's knowledge.
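The penalty-based unlikelihood idea mentioned above can be illustrated with a toy token-level loss: regular target tokens contribute the usual negative log-likelihood, while tokens flagged as private contribute -log(1 - p), which pushes their probability down. This is a sketch of the general idea, not the paper's exact objective.

```python
import math
from typing import List

def penalized_nll(
    token_probs: List[float],   # model probability assigned to each target token
    is_private: List[bool],     # True where the target token should be discouraged
    penalty_weight: float = 1.0,
) -> float:
    """Toy penalty-based unlikelihood objective: ordinary tokens add the usual
    negative log-likelihood, private tokens add -log(1 - p) so the model is
    pushed away from reproducing them. Illustrative only."""
    loss = 0.0
    for p, private in zip(token_probs, is_private):
        if private:
            loss += -penalty_weight * math.log(max(1.0 - p, 1e-9))
        else:
            loss += -math.log(max(p, 1e-9))
    return loss / max(len(token_probs), 1)

# Example: the third token is a private value the model should not reproduce.
print(round(penalized_nll([0.9, 0.8, 0.6], [False, False, True]), 4))
```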
- Fine-tuning Large Enterprise Language Models via Ontological Reasoning (arXiv 2023-06-19)
Large Language Models (LLMs) adapt to diverse goals through fine-tuning on task-specific training data.
We propose a novel neurosymbolic architecture that leverages the power of ontological reasoning to build task- and domain-specific corpora for LLM fine-tuning.
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.