HLLM-Creator: Hierarchical LLM-based Personalized Creative Generation
- URL: http://arxiv.org/abs/2508.18118v1
- Date: Mon, 25 Aug 2025 15:23:21 GMT
- Title: HLLM-Creator: Hierarchical LLM-based Personalized Creative Generation
- Authors: Junyi Chen, Lu Chi, Siliang Xu, Shiwei Ran, Bingyue Peng, Zehuan Yuan
- Abstract summary: Current AIGC systems rely heavily on creators' inspiration, rarely generating truly user-personalized content. We propose HLLM-Creator, a hierarchical LLM framework for efficient user interest modeling and personalized content generation. Experiments on personalized title generation for Douyin Search Ads show the effectiveness of our model.
- Score: 24.94016591437963
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: AI-generated content technologies are widely used in content creation. However, current AIGC systems rely heavily on creators' inspiration, rarely generating truly user-personalized content. In real-world applications such as online advertising, a single product may have multiple selling points, with different users focusing on different features. This underscores the significant value of personalized, user-centric creative generation. Effective personalized content generation faces two main challenges: (1) accurately modeling user interests and integrating them into the content generation process while adhering to factual constraints, and (2) ensuring high efficiency and scalability to handle the massive user base in industrial scenarios. Additionally, the scarcity of personalized creative data in practice complicates model training, making data construction another key hurdle. We propose HLLM-Creator, a hierarchical LLM framework for efficient user interest modeling and personalized content generation. During inference, a combination of user clustering and a user-ad-matching-prediction based pruning strategy is employed to significantly enhance generation efficiency and reduce computational overhead, making the approach suitable for large-scale deployment. Moreover, we design a data construction pipeline based on chain-of-thought reasoning, which generates high-quality, user-specific creative titles and ensures factual consistency despite limited personalized data. This pipeline serves as a critical foundation for the effectiveness of our model. Extensive experiments on personalized title generation for Douyin Search Ads show the effectiveness of HLLM-Creator. An online A/B test shows a 0.476% increase on Adss, paving the way for more effective and efficient personalized generation in industrial scenarios. Code for the academic dataset is available at https://github.com/bytedance/HLLM.
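The inference-time efficiency idea in the abstract (cluster users by interest, score each cluster against the ad, prune low-scoring clusters, and generate one title per surviving cluster rather than one per user) can be illustrated with a minimal sketch. All function names, the trivial clustering, the toy match score, and the string-formatting stand-in for the LLM call are hypothetical, not the paper's actual implementation.

```python
# Hypothetical sketch of cluster-then-prune personalized title generation.
# Clustering, scoring, and "generation" are toy stand-ins for illustration.
from collections import defaultdict

def cluster_users(user_interests: dict[str, str]) -> dict[str, list[str]]:
    """Group users by their dominant interest (stand-in for real clustering)."""
    clusters = defaultdict(list)
    for user_id, interest in user_interests.items():
        clusters[interest].append(user_id)
    return dict(clusters)

def match_score(interest: str, ad_keywords: set[str]) -> float:
    """Toy user-ad matching predictor: 1.0 if the interest matches the ad."""
    return 1.0 if interest in ad_keywords else 0.0

def generate_personalized_titles(user_interests: dict[str, str],
                                 ad_keywords: set[str],
                                 base_title: str,
                                 threshold: float = 0.5) -> dict[str, str]:
    """One generation call per *cluster*, skipping pruned clusters, so cost
    scales with the number of clusters instead of the number of users."""
    titles = {}
    for interest, users in cluster_users(user_interests).items():
        if match_score(interest, ad_keywords) < threshold:
            continue  # pruned: no generation cost for this cluster
        title = f"{base_title} - great for {interest} fans"  # LLM stand-in
        for user_id in users:
            titles[user_id] = title
    return titles

users = {"u1": "running", "u2": "running", "u3": "cooking"}
titles = generate_personalized_titles(users, {"running"}, "UltraShoe X")
```

Here users u1 and u2 share one generated title (a single call for their cluster), while u3's cluster is pruned by the match predictor and incurs no generation cost.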
Related papers
- CURP: Codebook-based Continuous User Representation for Personalized Generation with LLMs [60.867541073274715]
We propose a novel framework, CURP, which employs a bidirectional user encoder and a discrete prototype codebook to extract multi-dimensional user traits. This design enables plug-and-play personalization with a small number of trainable parameters. We show that CURP achieves superior performance and generalization compared to strong baselines.
arXiv Detail & Related papers (2026-01-31T14:13:06Z)
- Generative Modeling with Multi-Instance Reward Learning for E-commerce Creative Optimization [15.51942931334223]
In e-commerce advertising, selecting the most compelling combination of creative elements is critical for capturing user attention and driving conversions. We propose a novel framework named GenCO that integrates generative modeling with multi-instance reward learning. Our approach has significantly increased advertising revenue, demonstrating its practical value.
arXiv Detail & Related papers (2025-08-13T11:53:41Z)
- FSPO: Few-Shot Preference Optimization of Synthetic Preference Data in LLMs Elicits Effective Personalization to Real Users [111.56469697145519]
We propose Few-Shot Preference Optimization, which reframes reward modeling as a meta-learning problem. Under this framework, an LLM learns to quickly adapt to a user via a few labeled preferences from that user, constructing a personalized reward function for them. We generate over 1M synthetic personalized preferences using publicly available LLMs. We evaluate FSPO on personalized open-ended generation for up to 1,500 synthetic users across three domains: movie reviews, pedagogical adaptation based on educational background, and general question answering, along with a controlled human study.
arXiv Detail & Related papers (2025-02-26T17:08:46Z)
- CTR-Driven Advertising Image Generation with Multimodal Large Language Models [53.40005544344148]
We explore the use of Multimodal Large Language Models (MLLMs) for generating advertising images by optimizing for Click-Through Rate (CTR) as the primary objective. To further improve the CTR of generated images, we propose a novel reward model to fine-tune pre-trained MLLMs through Reinforcement Learning (RL). Our method achieves state-of-the-art performance in both online and offline metrics.
arXiv Detail & Related papers (2025-02-05T09:06:02Z)
- Lifelong Personalized Low-Rank Adaptation of Large Language Models for Recommendation [50.837277466987345]
We focus on the field of large language models (LLMs) for recommendation.
We propose RecLoRA, which incorporates a Personalized LoRA module that maintains independent LoRAs for different users.
We also design a Few2Many Learning Strategy, using a conventional recommendation model as a lens to magnify small training spaces to full spaces.
arXiv Detail & Related papers (2024-08-07T04:20:28Z)
- ProgGen: Generating Named Entity Recognition Datasets Step-by-step with Self-Reflexive Large Language Models [25.68491572293656]
Large Language Models fall short in structured knowledge extraction tasks such as named entity recognition.
This paper explores an innovative, cost-efficient strategy to harness LLMs with modest NER capabilities for producing superior NER datasets.
arXiv Detail & Related papers (2024-03-17T06:12:43Z)
- Towards Unified Multi-Modal Personalization: Large Vision-Language Models for Generative Recommendation and Beyond [87.1712108247199]
Our goal is to establish a Unified paradigm for Multi-modal Personalization systems (UniMP).
We develop a generic, personalized generative framework that can handle a wide range of personalized needs.
Our methodology enhances the capabilities of foundational language models for personalized tasks.
arXiv Detail & Related papers (2024-03-15T20:21:31Z)
- A New Creative Generation Pipeline for Click-Through Rate with Stable Diffusion Model [8.945197427679924]
Traditional AI-based approaches likewise fail to consider user information and capture only limited aesthetic knowledge from designers.
To optimize the results, traditional methods then rank the generated creatives with a separate module called the creative ranking model.
This paper proposes a new automated Creative Generation pipeline for Click-Through Rate (CG4CTR) with the goal of improving CTR during the creative generation stage.
arXiv Detail & Related papers (2024-01-17T03:27:39Z)
- AdBooster: Personalized Ad Creative Generation using Stable Diffusion Outpainting [7.515971669919419]
In digital advertising, the selection of the optimal item (recommendation) and its best creative presentation (creative optimization) have traditionally been considered separate disciplines.
We introduce the task of generative models for creative generation that incorporate user interests, and AdBooster, a model for personalized ad creatives.
arXiv Detail & Related papers (2023-09-08T12:57:05Z)
- Entity-Graph Enhanced Cross-Modal Pretraining for Instance-level Product Retrieval [152.3504607706575]
This research aims to conduct weakly-supervised multi-modal instance-level product retrieval for fine-grained product categories.
We first contribute the Product1M dataset and define two real, practical instance-level retrieval tasks.
We train a more effective cross-modal model that adaptively incorporates key concept information from the multi-modal data.
arXiv Detail & Related papers (2022-06-17T15:40:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.