EcomGPT-CT: Continual Pre-training of E-commerce Large Language Models
with Semi-structured Data
- URL: http://arxiv.org/abs/2312.15696v1
- Date: Mon, 25 Dec 2023 11:31:47 GMT
- Title: EcomGPT-CT: Continual Pre-training of E-commerce Large Language Models
with Semi-structured Data
- Authors: Shirong Ma, Shen Huang, Shulin Huang, Xiaobin Wang, Yangning Li,
Hai-Tao Zheng, Pengjun Xie, Fei Huang and Yong Jiang
- Abstract summary: Large Language Models (LLMs) pre-trained on massive corpora have exhibited remarkable performance on various NLP tasks.
However, applying these models to specific domains still poses significant challenges, such as a lack of domain knowledge.
We focus on domain-specific continual pre-training of LLMs, using the E-commerce domain as an exemplar.
- Score: 67.8302955948861
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) pre-trained on massive corpora have exhibited
remarkable performance on various NLP tasks. However, applying these models to
specific domains still poses significant challenges, such as a lack of domain
knowledge, limited capacity to leverage domain knowledge, and inadequate
adaptation to domain-specific data formats. Considering the exorbitant cost of
training LLMs from scratch and the scarcity of annotated data within particular
domains, in this work, we focus on domain-specific continual pre-training of
LLMs, using the E-commerce domain as an exemplar. Specifically, we explore the
impact of continual pre-training on LLMs employing unlabeled general and
E-commerce corpora. Furthermore, we design a mixing strategy across different
data sources to better leverage E-commerce semi-structured data. We construct
multiple tasks to assess LLMs' few-shot in-context learning ability and their
zero-shot performance after instruction tuning in the E-commerce domain.
Experimental results demonstrate the effectiveness of continual pre-training of
E-commerce LLMs and the efficacy of our devised data mixing strategy.
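The listing does not spell out the mixing strategy, so here is a minimal sketch of how source mixing over general text and serialized semi-structured E-commerce records could work; the key-value serializer, the toy corpora, and the 0.3/0.7 weights are all assumptions for illustration, not the authors' published recipe.

```python
import random

# A minimal sketch of a source-mixing strategy for continual pre-training.
# The mixing weights and the key-value serializer below are illustrative
# assumptions, not the paper's published recipe.

def serialize_product(record: dict) -> str:
    """Flatten a semi-structured e-commerce record (e.g., product
    attributes) into plain text suitable for language-model pre-training."""
    return " ; ".join(f"{key}: {value}" for key, value in record.items())

# Two data sources: general web text and serialized e-commerce records.
general_corpus = [
    "A passage of general-domain web text.",
    "Another general-domain document.",
]
ecommerce_corpus = [
    serialize_product({"title": "USB-C cable", "length": "1 m", "color": "black"}),
    serialize_product({"title": "Espresso machine", "pressure": "15 bar"}),
]
SOURCES = {"general": general_corpus, "ecommerce": ecommerce_corpus}

# Retaining some general data during continual pre-training is a common
# way to mitigate catastrophic forgetting; 0.3/0.7 is an arbitrary choice.
SOURCE_WEIGHTS = {"general": 0.3, "ecommerce": 0.7}

def sample_batch(batch_size: int) -> list:
    """Draw a batch by first sampling a source, then a document from it."""
    names = list(SOURCE_WEIGHTS)
    weights = [SOURCE_WEIGHTS[name] for name in names]
    chosen = random.choices(names, weights=weights, k=batch_size)
    return [random.choice(SOURCES[name]) for name in chosen]

print(sample_batch(4))
```

The design intuition, common in continual pre-training, is to keep sampling some general-domain data so the model adapts to the new domain without catastrophically forgetting its general abilities.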
Related papers
- Learn from Downstream and Be Yourself in Multimodal Large Language Model Fine-Tuning [104.27224674122313]
Fine-tuning MLLMs has become a common practice for improving performance on specific downstream tasks.
To balance the trade-off between generalization and specialization, we propose measuring the parameter importance for both pre-trained and fine-tuning distributions.
arXiv Detail & Related papers (2024-11-17T01:16:37Z)
- A Practical Guide to Fine-tuning Language Models with Limited Data [9.413178499853156]
Employing pre-trained Large Language Models (LLMs) has become the de facto standard in Natural Language Processing (NLP) despite their extensive data requirements.
Motivated by the recent surge in research focused on training LLMs with limited data, this paper surveys recent transfer learning approaches to optimize model performance in downstream tasks where data is scarce.
arXiv Detail & Related papers (2024-11-14T15:55:37Z)
- Exploring Language Model Generalization in Low-Resource Extractive QA [57.14068405860034]
We investigate Extractive Question Answering (EQA) with Large Language Models (LLMs) under domain drift.
We devise a series of experiments to empirically explain the performance gap.
arXiv Detail & Related papers (2024-09-27T05:06:43Z)
- Investigating LLM Applications in E-Commerce [17.854070801235217]
Large Language Models (LLMs) have revolutionized natural language processing in various applications, especially in e-commerce.
This paper explores the efficacy of LLMs in the e-commerce domain, focusing on instruction-tuning an open-source LLM with public e-commerce datasets of varying sizes.
It also examines the effectiveness of the current niche industrial practice of applying very large LLMs, via in-context learning, to e-commerce-specific tasks.
arXiv Detail & Related papers (2024-08-23T00:57:37Z)
- PISTOL: Dataset Compilation Pipeline for Structural Unlearning of LLMs [31.16117964915814]
Machine unlearning, which seeks to erase specific data stored in the pre-trained or fine-tuned models, has emerged as a crucial protective measure for LLMs.
To facilitate the development of structural unlearning methods, we propose PISTOL, a pipeline for compiling multi-scenario datasets.
We conduct benchmarks with four distinct unlearning methods on both Llama2-7B and Mistral-7B models.
arXiv Detail & Related papers (2024-06-24T17:22:36Z)
- Self-MoE: Towards Compositional Large Language Models with Self-Specialized Experts [49.950419707905944]
We present Self-MoE, an approach that transforms a monolithic LLM into a compositional, modular system of self-specialized experts.
Our approach leverages self-specialization, which constructs expert modules using self-generated synthetic data.
Our findings highlight the critical role of modularity, the applicability of Self-MoE to multiple base LLMs, and the potential of self-improvement in achieving efficient, scalable, and adaptable systems. (A toy routing sketch follows this entry.)
arXiv Detail & Related papers (2024-06-17T19:06:54Z)
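To make the Self-MoE entry above concrete, the toy sketch below shows compositional routing of a query to specialized experts; the keyword router and the expert stubs are illustrative assumptions, whereas the paper builds its experts via self-specialization on self-generated synthetic data and presumably learns its routing.

```python
from typing import Callable

# Toy sketch in the spirit of Self-MoE's compositional routing. The keyword
# router and the expert stubs are illustrative stand-ins, not the paper's
# implementation.

EXPERTS: dict[str, Callable[[str], str]] = {
    "math": lambda q: f"[math expert] {q}",
    "code": lambda q: f"[code expert] {q}",
    "general": lambda q: f"[base model] {q}",
}

def route(query: str) -> str:
    """Pick an expert for the query; a real system would learn routing
    scores rather than match keywords."""
    lowered = query.lower()
    if any(word in lowered for word in ("equation", "integral", "prove")):
        return "math"
    if any(word in lowered for word in ("function", "bug", "compile")):
        return "code"
    return "general"

query = "Fix the bug in this function"
print(EXPERTS[route(query)](query))
```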
- Federated Domain-Specific Knowledge Transfer on Large Language Models Using Synthetic Data [53.70870879858533]
We introduce a Federated Domain-specific Knowledge Transfer (FDKT) framework.
It enables domain-specific knowledge transfer from LLMs to SLMs while preserving clients' data privacy.
The proposed FDKT framework consistently and greatly improves SLMs' task performance by around 5% with a privacy budget of less than 10.
arXiv Detail & Related papers (2024-05-23T06:14:35Z)
- BLADE: Enhancing Black-box Large Language Models with Small Domain-Specific Models [56.89958793648104]
Large Language Models (LLMs) are versatile and capable of addressing a diverse range of tasks.
Previous approaches either conduct continual pre-training with domain-specific data or employ retrieval augmentation to support general LLMs.
We present a novel framework named BLADE, which enhances Black-box LArge language models with small Domain-spEcific models. (A minimal integration sketch follows this entry.)
arXiv Detail & Related papers (2024-03-27T08:57:21Z)
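As a rough illustration of the BLADE entry above, the sketch below pairs a small domain-specific model with a black-box LLM by prepending generated domain knowledge to the prompt; both model functions are hypothetical stubs, not the paper's API.

```python
# Hypothetical sketch of a BLADE-style pipeline: a small domain-specific
# model produces knowledge that augments the prompt of a black-box LLM.
# Both model calls are stubs; they do not reflect the paper's actual API.

def small_domain_model(query: str) -> str:
    """Stand-in for a small model adapted to domain data; it returns
    background knowledge relevant to the query."""
    return ("Domain note: continual pre-training means further pre-training "
            "an existing LLM on in-domain corpora.")

def blackbox_llm(prompt: str) -> str:
    """Stand-in for a general-purpose LLM reachable only through an API."""
    return f"(answer conditioned on a {len(prompt)}-character prompt)"

def answer(query: str) -> str:
    # Generate domain knowledge with the small model, then condition the
    # black-box LLM on it.
    knowledge = small_domain_model(query)
    prompt = f"{knowledge}\n\nQuestion: {query}\nAnswer:"
    return blackbox_llm(prompt)

print(answer("What does continual pre-training mean for e-commerce LLMs?"))
```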
- Investigating Continual Pretraining in Large Language Models: Insights and Implications [9.591223887442704]
This paper studies the evolving domain of Continual Learning in large language models (LLMs).
Our primary emphasis is on continual domain-adaptive pretraining, a process designed to equip LLMs with the ability to integrate new information from various domains.
We examine the impact of model size on learning efficacy and forgetting, as well as how the progression and similarity of emerging domains affect the knowledge transfer within these models.
arXiv Detail & Related papers (2024-02-27T10:47:24Z)
- Fine-tuning Large Enterprise Language Models via Ontological Reasoning [5.12835891233968]
Large Language Models (LLMs) exploit fine-tuning as a technique to adapt to diverse goals, given task-specific training data.
We propose a novel neurosymbolic architecture that leverages the power of ontological reasoning to build task- and domain-specific corpora for LLM fine-tuning.
arXiv Detail & Related papers (2023-06-19T06:48:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.