TeamCMU at Touché: Adversarial Co-Evolution for Advertisement Integration and Detection in Conversational Search
- URL: http://arxiv.org/abs/2507.00509v1
- Date: Tue, 01 Jul 2025 07:24:29 GMT
- Title: TeamCMU at Touché: Adversarial Co-Evolution for Advertisement Integration and Detection in Conversational Search
- Authors: To Eun Kim, João Coelho, Gbemileke Onilude, Jai Singh,
- Abstract summary: The integration of advertisements into generated responses presents both commercial opportunities and challenges for user experience. We propose a modular pipeline for advertisement management in RAG-based conversational systems, consisting of an ad-rewriter for seamless ad integration and a robust ad-classifier for detection.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As conversational search engines increasingly adopt generation-based paradigms powered by Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG), the integration of advertisements into generated responses presents both commercial opportunities and challenges for user experience. Unlike traditional search, where advertisements are clearly delineated, generative systems blur the boundary between informational content and promotional material, raising concerns around transparency and trust. In this work, we propose a modular pipeline for advertisement management in RAG-based conversational systems, consisting of an ad-rewriter for seamless ad integration and a robust ad-classifier for detection. We leverage synthetic data to train high-performing classifiers, which are then used to guide two complementary ad-integration strategies: supervised fine-tuning of the ad-rewriter and a best-of-N sampling approach that selects the least detectable ad-integrated response among multiple candidates. Our evaluation focuses on two core questions: the effectiveness of ad classifiers in detecting diverse ad integration strategies, and the training methods that best support coherent, minimally intrusive ad insertion. Experimental results show that our ad-classifier, trained on synthetic advertisement data inspired by marketing strategies and enhanced through curriculum learning, achieves robust detection performance. Additionally, we demonstrate that classifier-guided optimization, through both fine-tuning and best-of-N sampling, significantly improves ad stealth, enabling more seamless integration. These findings contribute an adversarial co-evolution framework for developing more sophisticated ad-aware generative search systems and robust ad classifiers.
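The classifier-guided best-of-N strategy described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `ad_detect_prob` stands in for the trained ad-classifier (here a toy keyword heuristic), and `rewrite_with_ad` stands in for the LLM-based ad-rewriter (here it just varies the insertion point to produce distinct candidates). The selection logic itself — generate N candidates, keep the one the classifier finds least detectable — matches the approach described.

```python
import random

def ad_detect_prob(text: str) -> float:
    # Hypothetical stand-in for the trained ad-classifier: returns the
    # probability that `text` contains an advertisement. A toy keyword
    # heuristic is used here purely for illustration.
    markers = ("buy", "try", "discount", "our product")
    hits = sum(m in text.lower() for m in markers)
    return min(1.0, 0.2 * hits)

def rewrite_with_ad(response: str, ad: str, seed: int) -> str:
    # Hypothetical stand-in for the LLM ad-rewriter: varies the
    # insertion point of the ad to produce distinct candidates.
    rng = random.Random(seed)
    sentences = response.split(". ")
    i = rng.randrange(len(sentences) + 1)
    return ". ".join(sentences[:i] + [ad] + sentences[i:])

def best_of_n(response: str, ad: str, n: int = 8) -> str:
    # Best-of-N sampling: generate n ad-integrated candidates and keep
    # the one the classifier scores as least detectable.
    candidates = [rewrite_with_ad(response, ad, seed=s) for s in range(n)]
    return min(candidates, key=ad_detect_prob)
```

In the paper's adversarial co-evolution framing, a stronger classifier makes this selection step harder, which in turn pressures the rewriter toward stealthier integrations.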
Related papers
- Bidding-Aware Retrieval for Multi-Stage Consistency in Online Advertising [30.108437268612438]
Bidding-Aware Retrieval (BAR) is a model-based retrieval framework that addresses multi-stage inconsistency by incorporating ad bid value into the retrieval scoring function. BAR's core innovation is Bidding-Aware Modeling, incorporating bid signals through monotonicity-constrained learning and multi-task distillation to ensure economically coherent representations. Extensive offline experiments and full-scale deployment across Alibaba's display advertising platform validated BAR's efficacy.
arXiv Detail & Related papers (2025-08-07T09:43:34Z)
- Multi-objective Aligned Bidword Generation Model for E-commerce Search Advertising [16.8420671443003]
Retrieval systems primarily address the challenge of matching user queries with the most relevant advertisements. We propose a Multi-objective aligned Bidword Generation Model (MoBGM), which is composed of a discriminator, generator, and preference alignment module. Our proposed algorithm significantly outperforms the state of the art in offline and online experiments.
arXiv Detail & Related papers (2025-06-04T10:57:18Z)
- Review, Refine, Repeat: Understanding Iterative Decoding of AI Agents with Dynamic Evaluation and Selection [71.92083784393418]
Inference-time methods such as Best-of-N (BON) sampling offer a simple yet effective alternative to improve performance. We propose Iterative Agent Decoding (IAD), which combines iterative refinement with dynamic candidate evaluation and selection guided by a verifier.
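The verifier-guided iterative loop summarized above can be sketched generically. This is an assumed interface, not the IAD authors' implementation: `propose` stands in for the agent/LLM sampler and `verify` for the candidate scorer; each round samples candidates, keeps the verifier's best, and feeds it back as context for the next round.

```python
def iterative_agent_decode(task, propose, verify, rounds=3, n=4):
    # Sketch of verifier-guided iterative decoding (assumed interface):
    # each round samples n candidates, scores them with a verifier, and
    # conditions the next round on the best candidate so far.
    best, best_score = None, float("-inf")
    context = task
    for _ in range(rounds):
        candidates = [propose(context, k) for k in range(n)]
        scored = [(verify(c), c) for c in candidates]
        score, cand = max(scored, key=lambda sc: sc[0])
        if score > best_score:
            best, best_score = cand, score
        context = f"{task}\nPrevious attempt:\n{cand}"
    return best
```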
arXiv Detail & Related papers (2025-04-02T17:40:47Z)
- CTR-Driven Advertising Image Generation with Multimodal Large Language Models [53.40005544344148]
We explore the use of Multimodal Large Language Models (MLLMs) for generating advertising images by optimizing for Click-Through Rate (CTR) as the primary objective. To further improve the CTR of generated images, we propose a novel reward model to fine-tune pre-trained MLLMs through Reinforcement Learning (RL). Our method achieves state-of-the-art performance in both online and offline metrics.
arXiv Detail & Related papers (2025-02-05T09:06:02Z)
- DistinctAD: Distinctive Audio Description Generation in Contexts [62.58375366359421]
We propose DistinctAD, a framework for generating Audio Descriptions that emphasize distinctiveness to produce better narratives. To address the domain gap, we introduce a CLIP-AD adaptation strategy that does not require additional AD corpora. In Stage-II, DistinctAD incorporates two key innovations: (i) a Contextual Expectation-Maximization Attention (EMA) module that reduces redundancy by extracting common bases from consecutive video clips, and (ii) an explicit distinctive word prediction loss that filters out repeated words in the context.
arXiv Detail & Related papers (2024-11-27T09:54:59Z)
- Optimizing Search Advertising Strategies: Integrating Reinforcement Learning with Generalized Second-Price Auctions for Enhanced Ad Ranking and Bidding [36.74368014856906]
We propose a model that adapts to varying user interactions and optimizes the balance between advertiser cost, user relevance, and platform revenue.
Our results suggest significant improvements in ad placement accuracy and cost efficiency, demonstrating the model's applicability in real-world scenarios.
arXiv Detail & Related papers (2024-05-22T06:30:55Z)
- DialCLIP: Empowering CLIP as Multi-Modal Dialog Retriever [83.33209603041013]
We propose a parameter-efficient prompt-tuning method named DialCLIP for multi-modal dialog retrieval.
Our approach introduces a multi-modal context generator to learn context features which are distilled into prompts within the pre-trained vision-language model CLIP.
To facilitate various types of retrieval, we also design multiple experts to learn mappings from CLIP outputs to multi-modal representation space.
arXiv Detail & Related papers (2024-01-02T07:40:12Z)
- Exploiting Modality-Specific Features For Multi-Modal Manipulation Detection And Grounding [54.49214267905562]
We construct a transformer-based framework for multi-modal manipulation detection and grounding tasks.
Our framework simultaneously explores modality-specific features while preserving the capability for multi-modal alignment.
We propose an implicit manipulation query (IMQ) that adaptively aggregates global contextual cues within each modality.
arXiv Detail & Related papers (2023-09-22T06:55:41Z)
- A Unified Framework for Integrating Semantic Communication and AI-Generated Content in Metaverse [57.317580645602895]
Integrated Semantic Communication and AI-Generated Content (ISGC) has attracted significant attention recently.
ISGC transfers semantic information from user inputs, generates digital content, and renders graphics for Metaverse.
We introduce a unified framework that captures ISGC's two primary benefits, including integration gain for optimized resource allocation.
arXiv Detail & Related papers (2023-05-18T02:02:36Z)
- Aggregate effects of advertising decisions: a complex systems look at search engine advertising via an experimental study [26.218512292529635]
We develop and validate a simulation framework that supports assessments of various advertising strategies and estimations of the impact of mechanisms on the search market.
We conduct three experiments on the aggregate impact of electronic word-of-mouth, the competition level, and strategic bidding behaviors.
arXiv Detail & Related papers (2022-03-04T09:16:15Z)
- A Cooperative-Competitive Multi-Agent Framework for Auto-bidding in Online Advertising [53.636153252400945]
We propose a general Multi-Agent reinforcement learning framework for Auto-Bidding, namely MAAB, to learn the auto-bidding strategies.
Our approach outperforms several baseline methods in terms of social welfare and guarantees the ad platform's revenue.
arXiv Detail & Related papers (2021-06-11T08:07:14Z)
- Learning to Create Better Ads: Generation and Ranking Approaches for Ad Creative Refinement [26.70647666598025]
We study approaches to refine the given ad text and image by: (i) generating new ad text, (ii) recommending keyphrases for new ad text, and (iii) recommending image tags (objects in the image).
Based on A/B tests conducted by multiple advertisers, we form pairwise examples of inferior and superior ad creatives.
We also share broadly applicable insights from our experiments using data from the Yahoo Gemini ad platform.
arXiv Detail & Related papers (2020-08-17T16:46:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.