Making Large Language Models Better Knowledge Miners for Online
Marketing with Progressive Prompting Augmentation
- URL: http://arxiv.org/abs/2312.05276v1
- Date: Fri, 8 Dec 2023 03:44:09 GMT
- Title: Making Large Language Models Better Knowledge Miners for Online
Marketing with Progressive Prompting Augmentation
- Authors: Chunjing Gan, Dan Yang, Binbin Hu, Ziqi Liu, Yue Shen, Zhiqiang Zhang,
Jinjie Gu, Jun Zhou, Guannan Zhang
- Abstract summary: We propose PAIR, a novel Progressive prompting Augmented mIning fRamework for harvesting a marketing-oriented knowledge graph with LLMs.
In particular, we reduce pure relation generation to an LLM-based adaptive relation filtering process through a knowledge-empowered prompting technique.
For online serving, we specialize a small, white-box PAIR (i.e., LightPAIR), which is fine-tuned on a high-quality corpus provided by a strong teacher LLM.
- Score: 34.37733369078883
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays, the rapid development of the mobile economy has promoted the
flourishing of online marketing campaigns, whose success greatly hinges on the
efficient matching between user preferences and desired marketing campaigns,
where a well-established Marketing-oriented Knowledge Graph (dubbed as MoKG)
could serve as the critical "bridge" for preference propagation. In this paper,
we seek to carefully prompt a Large Language Model (LLM) with domain-level
knowledge so that it becomes a better knowledge miner for marketing-oriented
knowledge graph construction. This is non-trivial, however, as it suffers from
several inevitable issues in real-world marketing scenarios: uncontrollable
relation generation by LLMs, the insufficient prompting ability of a single
prompt, and the unaffordable deployment cost of LLMs. To this end, we propose
PAIR, a novel Progressive prompting Augmented mIning fRamework for harvesting a
marketing-oriented knowledge graph with LLMs. In particular, we reduce pure
relation generation to an LLM-based adaptive relation filtering process through
a knowledge-empowered prompting technique. Next, we steer LLMs toward entity
expansion with progressive prompting augmentation, followed by a reliable
aggregation that comprehensively considers both self-consistency and semantic
relatedness. For online serving, we specialize a small, white-box PAIR (i.e.,
LightPAIR), which is fine-tuned on a high-quality corpus provided by a strong
teacher LLM. Extensive experiments and practical applications in audience
targeting verify the effectiveness of the proposed (Light)PAIR.
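To make the mining loop concrete, below is a minimal sketch of how the three steps described in the abstract (adaptive relation filtering, progressively prompted entity expansion, and aggregation by self-consistency plus semantic relatedness) could be wired together. All function names, prompts, and thresholds (call_llm, semantic_relatedness, min_votes, min_sim) are hypothetical placeholders for illustration, not the authors' implementation.

```python
# Illustrative sketch of a PAIR-style mining loop; names and prompts are hypothetical.
from dataclasses import dataclass
from collections import Counter


@dataclass
class Triple:
    head: str
    relation: str
    tail: str


def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API call (assumption, not specified in the paper)."""
    raise NotImplementedError


def filter_relations(entity: str, candidate_relations: list[str]) -> list[str]:
    # Adaptive relation filtering: rather than letting the LLM free-generate relations,
    # ask it to keep only the candidate relations that plausibly apply to this entity.
    prompt = (
        f"Entity: {entity}\n"
        f"Candidate marketing relations: {', '.join(candidate_relations)}\n"
        "Return the relations that plausibly hold for this entity, comma-separated."
    )
    answer = call_llm(prompt)
    kept = {r.strip() for r in answer.split(",")}
    return [r for r in candidate_relations if r in kept]


def expand_entities(entity: str, relation: str, hints: list[str], n_samples: int = 5) -> list[str]:
    # Progressive prompting augmentation: each round enriches the prompt with
    # previously mined "hint" entities before sampling tail entities several times.
    tails: list[str] = []
    for _ in range(n_samples):
        prompt = (
            f"Known related entities: {', '.join(hints) if hints else 'none'}\n"
            f"List entities connected to '{entity}' via relation '{relation}', comma-separated."
        )
        tails.extend(t.strip() for t in call_llm(prompt).split(",") if t.strip())
    return tails


def semantic_relatedness(a: str, b: str) -> float:
    """Placeholder similarity in [0, 1], e.g. cosine similarity of text embeddings."""
    raise NotImplementedError


def aggregate(entity: str, relation: str, sampled_tails: list[str],
              min_votes: int = 2, min_sim: float = 0.5) -> list[Triple]:
    # Reliable aggregation: keep tails that are self-consistent (appear across
    # several samples) and semantically related to the head entity.
    votes = Counter(sampled_tails)
    return [
        Triple(entity, relation, tail)
        for tail, count in votes.items()
        if count >= min_votes and semantic_relatedness(entity, tail) >= min_sim
    ]
```

In an online-serving setup like the one the abstract describes, triples harvested this way by a strong teacher LLM would then serve as the fine-tuning corpus for a smaller, white-box student such as LightPAIR.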
Related papers
- Explainable LLM-driven Multi-dimensional Distillation for E-Commerce Relevance Learning [20.569157915157817]
We propose an Explainable LLM-driven Multi-dimensional Distillation framework for e-commerce relevance learning.
Our proposed framework significantly enhances e-commerce relevance learning performance and user experience.
arXiv Detail & Related papers (2024-11-20T05:30:15Z)
- Best Practices for Distilling Large Language Models into BERT for Web Search Ranking [14.550458167328497]
Large Language Models (LLMs) can generate a ranked list of potential documents.
We transfer the ranking expertise of LLMs to a more compact model like BERT, using a ranking loss to enable the deployment of less resource-intensive models.
Our model has been successfully integrated into a commercial web search engine as of February 2024.
arXiv Detail & Related papers (2024-11-07T08:54:46Z)
- A Little Help Goes a Long Way: Efficient LLM Training by Leveraging Small LMs [74.35290684163718]
A primary challenge in large language model (LLM) development is their onerous pre-training cost.
This paper explores a promising paradigm to improve LLM pre-training efficiency and quality by leveraging a small language model (SLM).
arXiv Detail & Related papers (2024-10-24T14:31:52Z)
- Large Language Models for Base Station Siting: Intelligent Deployment based on Prompt or Agent [62.16747639440893]
Large language models (LLMs) and their associated technologies continue to advance, particularly in the realms of prompt engineering and agent engineering.
This approach entails the strategic use of well-crafted prompts to infuse human experience and knowledge into these sophisticated LLMs.
This integration represents the future paradigm of artificial intelligence (AI) as a service and AI for more ease.
arXiv Detail & Related papers (2024-08-07T08:43:32Z)
- Fine-tuning Multimodal Large Language Models for Product Bundling [53.01642741096356]
We introduce Bundle-MLLM, a novel framework that fine-tunes large language models (LLMs) through a hybrid item tokenization approach.
Specifically, we integrate textual, media, and relational data into a unified tokenization, introducing a soft separation token to distinguish between textual and non-textual tokens.
We propose a progressive optimization strategy that fine-tunes LLMs for disentangled objectives: 1) learning bundle patterns and 2) enhancing multimodal semantic understanding specific to product bundling.
arXiv Detail & Related papers (2024-07-16T13:30:14Z)
- Truthful Aggregation of LLMs with an Application to Online Advertising [11.552000005640203]
We introduce MOSAIC, an auction mechanism that ensures that truthful reporting is a dominant strategy for advertisers.
We show that MOSAIC leads to high advertiser value and platform revenue with low computational overhead.
arXiv Detail & Related papers (2024-05-09T17:01:31Z)
- CourseGPT-zh: an Educational Large Language Model Based on Knowledge Distillation Incorporating Prompt Optimization [22.080563239179618]
Large language models (LLMs) have demonstrated astonishing capabilities in natural language processing (NLP) tasks.
We propose CourseGPT-zh, a course-oriented education LLM that supports customization and low-cost deployment.
arXiv Detail & Related papers (2024-05-08T03:11:12Z)
- An Embarrassingly Simple Approach for LLM with Strong ASR Capacity [56.30595787061546]
We focus on solving one of the most important tasks in the field of speech processing with speech foundation encoders and large language models (LLMs).
Recent works have complex designs such as compressing the output temporally for the speech encoder, tackling modal alignment for the projector, and utilizing parameter-efficient fine-tuning for the LLM.
We found that delicate designs are not necessary, while an embarrassingly simple composition of off-the-shelf speech encoder, LLM, and the only trainable linear projector is competent for the ASR task.
arXiv Detail & Related papers (2024-02-13T23:25:04Z)
- Supervised Knowledge Makes Large Language Models Better In-context Learners [94.89301696512776]
Large Language Models (LLMs) exhibit emerging in-context learning abilities through prompt engineering.
The challenge of improving the generalizability and factuality of LLMs in natural language understanding and question answering remains under-explored.
We propose a framework that enhances the reliability of LLMs as it: 1) generalizes out-of-distribution data, 2) elucidates how LLMs benefit from discriminative models, and 3) minimizes hallucinations in generative tasks.
arXiv Detail & Related papers (2023-12-26T07:24:46Z)