Making Large Language Models Better Knowledge Miners for Online
Marketing with Progressive Prompting Augmentation
- URL: http://arxiv.org/abs/2312.05276v1
- Date: Fri, 8 Dec 2023 03:44:09 GMT
- Title: Making Large Language Models Better Knowledge Miners for Online
Marketing with Progressive Prompting Augmentation
- Authors: Chunjing Gan, Dan Yang, Binbin Hu, Ziqi Liu, Yue Shen, Zhiqiang Zhang,
Jinjie Gu, Jun Zhou, Guannan Zhang
- Abstract summary: We propose PAIR, a novel Progressive prompting Augmented mIning fRamework for harvesting a marketing-oriented knowledge graph with LLMs.
In particular, we reduce pure relation generation to an LLM-based adaptive relation filtering process through a knowledge-empowered prompting technique.
For online serving, we specialize a small, white-box PAIR (i.e., LightPAIR), which is fine-tuned on a high-quality corpus provided by a strong teacher LLM.
- Score: 34.37733369078883
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays, the rapid development of the mobile economy has promoted the
flourishing of online marketing campaigns, whose success greatly hinges on the
efficient matching between user preferences and desired marketing campaigns,
where a well-established Marketing-oriented Knowledge Graph (dubbed MoKG) could
serve as the critical "bridge" for preference propagation. In this paper, we
seek to carefully prompt a Large Language Model (LLM) with domain-level
knowledge as a better knowledge miner for marketing-oriented knowledge graph
construction. This is non-trivial, however, as it suffers from several
inevitable issues in real-world marketing scenarios, i.e., uncontrollable
relation generation by LLMs, the insufficient prompting ability of a single
prompt, and the unaffordable deployment cost of LLMs. To this end, we propose
PAIR, a novel Progressive prompting Augmented mIning fRamework for harvesting a
marketing-oriented knowledge graph with LLMs. In particular, we reduce pure
relation generation to an LLM-based adaptive relation filtering process through
a knowledge-empowered prompting technique. Next, we steer LLMs toward entity
expansion with progressive prompting augmentation, followed by reliable
aggregation that comprehensively considers both self-consistency and semantic
relatedness. For online serving, we specialize a small, white-box PAIR (i.e.,
LightPAIR), which is fine-tuned on a high-quality corpus provided by a strong
teacher LLM. Extensive experiments and practical applications in audience
targeting verify the effectiveness of the proposed (Light)PAIR.
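The abstract describes a three-stage mining loop: adaptive relation filtering, progressively prompted entity expansion, and aggregation by self-consistency plus semantic relatedness. The paper itself gives no code here, so the Python sketch below is only an illustration of that flow under stated assumptions: call_llm and embed are hypothetical stand-ins for an LLM client and a sentence-embedding model, and the candidate-relation list, sample count, and thresholds are invented for the example rather than taken from the paper.
```python
# Illustrative sketch of a PAIR-style mining loop (not the authors' implementation).
from collections import Counter
import numpy as np

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # hypothetical: plug in an LLM client here

def embed(text: str) -> np.ndarray:
    raise NotImplementedError  # hypothetical: plug in an embedding model here

def filter_relations(entity: str, candidate_relations: list[str]) -> list[str]:
    """Step 1: knowledge-empowered prompting reduces free-form relation
    generation to selecting applicable relations from a controlled list."""
    prompt = (f"Entity: {entity}\n"
              f"Candidate relations: {', '.join(candidate_relations)}\n"
              "Return only the relations that apply, comma-separated.")
    answer = call_llm(prompt)
    return [r for r in candidate_relations if r in answer]

def expand_entities(entity: str, relation: str, known_tails: list[str],
                    n_samples: int = 5) -> list[str]:
    """Step 2: progressive prompting augmentation -- previously mined tail
    entities are fed back into the prompt, and the LLM is sampled repeatedly."""
    candidates: list[str] = []
    for _ in range(n_samples):
        prompt = (f"Head entity: {entity}\nRelation: {relation}\n"
                  f"Known tail entities: {', '.join(known_tails) or 'none'}\n"
                  "List new tail entities, comma-separated.")
        candidates.extend(t.strip() for t in call_llm(prompt).split(",") if t.strip())
    return candidates

def aggregate(entity: str, candidates: list[str],
              min_votes: int = 2, min_sim: float = 0.4) -> list[str]:
    """Step 3: keep tails that are self-consistent (repeated across samples)
    and semantically related to the head entity."""
    head_vec = embed(entity)
    kept = []
    for tail, votes in Counter(candidates).items():
        tail_vec = embed(tail)
        sim = float(np.dot(head_vec, tail_vec) /
                    (np.linalg.norm(head_vec) * np.linalg.norm(tail_vec) + 1e-8))
        if votes >= min_votes and sim >= min_sim:
            kept.append(tail)
    return kept
```
In this reading, LightPAIR would correspond to replacing the large model behind call_llm with a small, white-box student model fine-tuned on the high-quality triples that a strong teacher LLM produced through such a loop.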
Related papers
- Best Practices for Distilling Large Language Models into BERT for Web Search Ranking [14.550458167328497]
Large Language Models (LLMs) can generate a ranked list of potential documents.
We transfer the ranking expertise of LLMs to a more compact model like BERT, using a ranking loss to enable the deployment of less resource-intensive models.
Our model has been successfully integrated into a commercial web search engine as of February 2024.
arXiv Detail & Related papers (2024-11-07T08:54:46Z)
- A Little Help Goes a Long Way: Efficient LLM Training by Leveraging Small LMs [74.35290684163718]
A primary challenge in large language model (LLM) development is their onerous pre-training cost.
This paper explores a promising paradigm for improving LLM pre-training efficiency and quality by leveraging a small language model (SLM).
arXiv Detail & Related papers (2024-10-24T14:31:52Z)
- Self-Instructed Derived Prompt Generation Meets In-Context Learning: Unlocking New Potential of Black-Box LLMs [30.333277284839053]
Large language models (LLMs) have shown success in generating high-quality responses.
Existing methods to enhance response quality often involve a prompt refinement model.
We introduce a self-instructed in-context learning framework that empowers LLMs to deliver more effective responses.
arXiv Detail & Related papers (2024-09-03T02:42:39Z)
- Large Language Models for Base Station Siting: Intelligent Deployment based on Prompt or Agent [62.16747639440893]
Large language models (LLMs) and their associated technologies continue to advance, particularly in the realms of prompt engineering and agent engineering.
This approach entails the strategic use of well-crafted prompts to infuse human experience and knowledge into these sophisticated LLMs.
This integration represents a future paradigm of artificial intelligence (AI) as a service and of easier-to-use AI.
arXiv Detail & Related papers (2024-08-07T08:43:32Z)
- Towards Efficient LLM Grounding for Embodied Multi-Agent Collaboration [70.09561665520043]
We propose a novel framework for multi-agent collaboration that introduces Reinforced Advantage feedback (ReAd) for efficient self-refinement of plans.
We provide theoretical analysis by extending advantage-weighted regression in reinforcement learning to multi-agent systems.
Experiments on Overcooked-AI and a difficult variant of RoCoBench show that ReAd surpasses baselines in success rate and also significantly decreases the interaction steps of agents.
arXiv Detail & Related papers (2024-05-23T08:33:19Z)
- Truthful Aggregation of LLMs with an Application to Online Advertising [11.552000005640203]
We introduce MOSAIC, an auction mechanism that ensures that truthful reporting is a dominant strategy for advertisers.
We show that MOSAIC leads to high advertiser value and platform revenue with low computational overhead.
arXiv Detail & Related papers (2024-05-09T17:01:31Z)
- CourseGPT-zh: an Educational Large Language Model Based on Knowledge Distillation Incorporating Prompt Optimization [22.080563239179618]
Large language models (LLMs) have demonstrated astonishing capabilities in natural language processing (NLP) tasks.
We propose CourseGPT-zh, a course-oriented education LLM that supports customization and low-cost deployment.
arXiv Detail & Related papers (2024-05-08T03:11:12Z)
- An Embarrassingly Simple Approach for LLM with Strong ASR Capacity [56.30595787061546]
We focus on solving one of the most important tasks in the field of speech processing, automatic speech recognition (ASR), with speech foundation encoders and large language models (LLMs).
Recent works have complex designs such as compressing the output temporally for the speech encoder, tackling modal alignment for the projector, and utilizing parameter-efficient fine-tuning for the LLM.
We found that delicate designs are not necessary, while an embarrassingly simple composition of off-the-shelf speech encoder, LLM, and the only trainable linear projector is competent for the ASR task.
arXiv Detail & Related papers (2024-02-13T23:25:04Z)
- Improving Contextual Congruence Across Modalities for Effective Multimodal Marketing using Knowledge-infused Learning [3.3281180957341117]
Large Language Models (LLMs) and Large Vision Models (LVMs) are still limited in capturing holistic meaning with cross-modal semantic relationships.
We design a framework to couple explicit commonsense knowledge in the form of knowledge graphs with large VLMs to improve the performance of a downstream task.
Our approach enables the early detection of likely persuasive multi-modal campaigns and the assessment and augmentation of marketing theory.
arXiv Detail & Related papers (2024-02-06T00:51:27Z)
- Supervised Knowledge Makes Large Language Models Better In-context Learners [94.89301696512776]
Large Language Models (LLMs) exhibit emerging in-context learning abilities through prompt engineering.
The challenge of improving the generalizability and factuality of LLMs in natural language understanding and question answering remains under-explored.
We propose a framework that enhances the reliability of LLMs as it: 1) generalizes out-of-distribution data, 2) elucidates how LLMs benefit from discriminative models, and 3) minimizes hallucinations in generative tasks.
arXiv Detail & Related papers (2023-12-26T07:24:46Z)