Making Large Language Models Better Knowledge Miners for Online
Marketing with Progressive Prompting Augmentation
- URL: http://arxiv.org/abs/2312.05276v1
- Date: Fri, 8 Dec 2023 03:44:09 GMT
- Title: Making Large Language Models Better Knowledge Miners for Online
Marketing with Progressive Prompting Augmentation
- Authors: Chunjing Gan, Dan Yang, Binbin Hu, Ziqi Liu, Yue Shen, Zhiqiang Zhang,
Jinjie Gu, Jun Zhou, Guannan Zhang
- Abstract summary: We propose PAIR, a novel Progressive prompting Augmented
mIning fRamework for harvesting a marketing-oriented knowledge graph with LLMs.
In particular, we reduce pure relation generation to an LLM-based adaptive
relation filtering process through a knowledge-empowered prompting technique.
For online serving, we specialize a small, white-box PAIR (i.e., LightPAIR),
which is fine-tuned on a high-quality corpus provided by a strong teacher LLM.
- Score: 34.37733369078883
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays, the rapid development of the mobile economy has promoted
the flourishing of online marketing campaigns, whose success hinges greatly on
the efficient matching between user preferences and desired marketing
campaigns, where a well-established Marketing-oriented Knowledge Graph (dubbed
MoKG) can serve as the critical "bridge" for preference propagation. In this
paper, we seek to carefully prompt a Large Language Model (LLM) with
domain-level knowledge as a better knowledge miner for marketing-oriented
knowledge graph construction. This is non-trivial, however, as it suffers from
several inevitable issues in real-world marketing scenarios: the uncontrollable
relation generation of LLMs, the insufficient prompting ability of a single
prompt, and the unaffordable deployment cost of LLMs. To this end, we propose
PAIR, a novel Progressive prompting Augmented mIning fRamework for harvesting a
marketing-oriented knowledge graph with LLMs. In particular, we reduce pure
relation generation to an LLM-based adaptive relation filtering process through
a knowledge-empowered prompting technique. Next, we steer LLMs toward entity
expansion with progressive prompting augmentation, followed by reliable
aggregation that comprehensively considers both self-consistency and semantic
relatedness. For online serving, we specialize a small, white-box PAIR (i.e.,
LightPAIR), which is fine-tuned on a high-quality corpus provided by a strong
teacher LLM. Extensive experiments and practical applications in audience
targeting verify the effectiveness of the proposed (Light)PAIR.
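To make the mining pipeline concrete, the following is a minimal Python sketch
of the three stages the abstract describes: adaptive relation filtering,
progressively augmented entity expansion, and reliable aggregation. The
LLMClient interface, the function names, the relatedness scorer, and the
threshold tau are illustrative assumptions for this sketch, not the paper's
actual implementation.

```python
from collections import Counter

class LLMClient:
    """Hypothetical stand-in for any chat-completion API exposing a
    complete(prompt) -> str method; not the paper's interface."""
    def complete(self, prompt: str) -> str:
        raise NotImplementedError

def filter_relations(llm, entity, candidate_relations):
    """Adaptive relation filtering: rather than letting the LLM
    free-generate relations, ask it to select applicable ones from a
    fixed, domain-curated candidate set."""
    prompt = (
        f"Given the marketing entity '{entity}', select the relations "
        f"from this list that apply to it: {', '.join(candidate_relations)}. "
        "Answer with a comma-separated list."
    )
    chosen = {r.strip() for r in llm.complete(prompt).split(",")}
    # Keeping only candidates from the curated set bounds the otherwise
    # uncontrollable relation generation space.
    return [r for r in candidate_relations if r in chosen]

def expand_entity(llm, entity, relation, hints, n_samples=5):
    """Progressive prompting augmentation: each round enriches the prompt
    with knowledge gathered so far (`hints`), and the LLM is sampled
    several times so its answers can be aggregated."""
    context = f" Known facts: {'; '.join(hints)}." if hints else ""
    prompt = (
        f"For the marketing entity '{entity}' and relation '{relation}',"
        f"{context} name one related entity. Answer with the entity only."
    )
    return [llm.complete(prompt).strip() for _ in range(n_samples)]

def aggregate(samples, relatedness, tau=0.5):
    """Reliable aggregation: combine self-consistency (vote share across
    samples) with a semantic relatedness score in [0, 1]."""
    votes = Counter(samples)
    scored = {c: (n / len(samples)) * relatedness(c) for c, n in votes.items()}
    # Keep candidates whose combined score clears the threshold.
    return [c for c, s in scored.items() if s >= tau]
```

Sampling the same expansion prompt several times and voting is one common way
to realize self-consistency; weighting the vote by an embedding-based
relatedness score then guards against answers that are consistent across
samples but off-topic for the seed entity.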
Related papers
- Towards Efficient LLM Grounding for Embodied Multi-Agent Collaboration [70.09561665520043]
We propose a novel framework for multi-agent collaboration that introduces Reinforced Advantage feedback (ReAd) for efficient self-refinement of plans.
We provide a theoretical analysis by extending advantage-weighted regression in reinforcement learning to multi-agent systems.
Experiments on Overcooked-AI and a difficult variant of RoCoBench show that ReAd surpasses baselines in success rate and also significantly decreases the interaction steps of agents.
arXiv Detail & Related papers (2024-05-23T08:33:19Z)
- Truthful Aggregation of LLMs with an Application to Online Advertising [11.552000005640203]
Large Language Models (LLMs) are being integrated into online platforms' services.
This makes revenue generation from LLM-generated content the next major challenge in online advertising.
We introduce an auction mechanism for this problem that operates without LLM fine-tuning.
We show that our mechanism significantly boosts advertiser value and platform revenue, with low computational overhead.
arXiv Detail & Related papers (2024-05-09T17:01:31Z)
- CourseGPT-zh: an Educational Large Language Model Based on Knowledge Distillation Incorporating Prompt Optimization [22.080563239179618]
Large language models (LLMs) have demonstrated astonishing capabilities in natural language processing (NLP) tasks.
We propose CourseGPT-zh, a course-oriented education LLM that supports customization and low-cost deployment.
arXiv Detail & Related papers (2024-05-08T03:11:12Z)
- Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing [56.75702900542643]
We introduce AlphaLLM for the self-improvement of Large Language Models.
It integrates Monte Carlo Tree Search (MCTS) with LLMs to establish a self-improving loop.
Our experimental results show that AlphaLLM significantly enhances the performance of LLMs without additional annotations.
arXiv Detail & Related papers (2024-04-18T15:21:34Z)
- Knowledgeable Agents by Offline Reinforcement Learning from Large Language Model Rollouts [10.929547354171723]
This paper introduces Knowledgeable Agents from Language Model Rollouts (KALM).
It extracts knowledge from large language models (LLMs) in the form of imaginary rollouts that can be easily learned by the agent through offline reinforcement learning methods.
It achieves a success rate of 46% in executing tasks with unseen goals, substantially surpassing the 26% success rate achieved by baseline methods.
arXiv Detail & Related papers (2024-04-14T13:19:40Z)
- An Embarrassingly Simple Approach for LLM with Strong ASR Capacity [56.30595787061546]
We focus on solving one of the most important tasks in the field of speech processing, i.e., automatic speech recognition (ASR), with speech foundation encoders and large language models (LLMs).
Recent works have complex designs, such as temporally compressing the speech encoder's output, tackling modal alignment for the projector, and applying parameter-efficient fine-tuning to the LLM.
We find that such delicate designs are not necessary; an embarrassingly simple composition of an off-the-shelf speech encoder, an LLM, and a single trainable linear projector is competent for the ASR task.
arXiv Detail & Related papers (2024-02-13T23:25:04Z)
- Improving Contextual Congruence Across Modalities for Effective Multimodal Marketing using Knowledge-infused Learning [3.3281180957341117]
Large Language Models (LLMs) and Large Vision Models (LVMs) are still limited in capturing holistic meaning with cross-modal semantic relationships.
We design a framework to couple explicit commonsense knowledge in the form of knowledge graphs with large VLMs to improve the performance of a downstream task.
Our approach enables the early detection of likely persuasive multi-modal campaigns and the assessment and augmentation of marketing theory.
arXiv Detail & Related papers (2024-02-06T00:51:27Z)
- Supervised Knowledge Makes Large Language Models Better In-context Learners [94.89301696512776]
Large Language Models (LLMs) exhibit emergent in-context learning abilities through prompt engineering.
The challenge of improving the generalizability and factuality of LLMs in natural language understanding and question answering remains under-explored.
We propose a framework that enhances the reliability of LLMs as it: 1) generalizes to out-of-distribution data, 2) elucidates how LLMs benefit from discriminative models, and 3) minimizes hallucinations in generative tasks.
arXiv Detail & Related papers (2023-12-26T07:24:46Z)
- Online Advertisements with LLMs: Opportunities and Challenges [51.96140910798771]
This paper explores the potential of leveraging Large Language Models (LLMs) in online advertising systems.
We delve into essential requirements that such a system must fulfill, including privacy, latency, and reliability, as well as the satisfaction of users and advertisers.
arXiv Detail & Related papers (2023-11-11T02:13:32Z)