Proactive Detection and Calibration of Seasonal Advertisements with Multimodal Large Language Models
- URL: http://arxiv.org/abs/2411.00780v1
- Date: Wed, 16 Oct 2024 18:14:05 GMT
- Title: Proactive Detection and Calibration of Seasonal Advertisements with Multimodal Large Language Models
- Authors: Hamid Eghbalzadeh, Shuai Shao, Saurabh Verma, Venugopal Mani, Hongnan Wang, Jigar Madia, Vitali Karpinchyk, Andrey Malevich
- Abstract summary: We present a research problem of interest to the ads ranking and recommendation community.
Our paper provides detailed guidelines on this problem from multiple angles, tested in and motivated by a large-scale industrial ads ranking system.
We present the solution we converged on during this exploration: to detect seasonality, we leveraged Multimodal LLMs.
- Score: 5.425511887990726
- Abstract: A myriad of factors affect large-scale ads delivery systems and influence both user experience and revenue. One such factor is proactive detection and calibration of seasonal advertisements, which helps increase conversion and user satisfaction. In this paper, we present Proactive Detection and Calibration of Seasonal Advertisements (PDCaSA), a research problem of interest to the ads ranking and recommendation community, in industrial settings as well as in research. Our paper provides detailed guidelines on this problem from multiple angles, tested in and motivated by a large-scale industrial ads ranking system. We share our findings, including a clear statement of the problem and its motivation rooted in real-world systems, evaluation metrics, and shed light on the existing challenges, lessons learned, and best practices of data annotation and machine learning modeling for tackling this problem. Lastly, we present the solution we converged on during this research exploration: to detect seasonality, we leveraged Multimodal LLMs (MLMs), which achieved a top F1 score of 0.97 on our in-house benchmark. Based on our findings, we envision MLMs as a teacher for knowledge distillation, a machine labeler, and a part of an ensembled and tiered seasonality detection system, which can empower ads ranking systems with enriched seasonal information.
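The evaluation setup described in the abstract (an MLM labels each ad as seasonal or not, and predictions are scored with F1) can be sketched as follows. This is a toy illustration only: the keyword-based `mlm_is_seasonal` stand-in, the sample ads, and their labels are all assumptions for demonstration; the paper's actual model, prompts, and in-house benchmark are not public.

```python
def f1_score(y_true, y_pred):
    """Binary F1 from true and predicted 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def mlm_is_seasonal(ad_text):
    # Hypothetical stand-in for a multimodal LLM call: in the paper the
    # model sees ad text and imagery; here we use a simple keyword check.
    keywords = ("christmas", "halloween", "black friday", "valentine")
    return int(any(k in ad_text.lower() for k in keywords))

# Made-up ads with made-up seasonality labels (1 = seasonal).
ads = [
    ("50% off Christmas trees", 1),
    ("Halloween costume sale", 1),
    ("Everyday running shoes", 0),
    ("Black Friday doorbusters", 1),
    ("Monthly phone plan", 0),
]
preds = [mlm_is_seasonal(text) for text, _ in ads]
labels = [label for _, label in ads]
score = f1_score(labels, preds)
```

In practice the classifier output on a held-out labeled benchmark is what yields the reported F1; the same loop also supports the machine-labeler role, where MLM predictions become training labels for a cheaper distilled model.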
Related papers
- Star-Agents: Automatic Data Optimization with LLM Agents for Instruction Tuning [71.2981957820888]
We propose a novel Star-Agents framework, which automates the enhancement of data quality across datasets.
The framework initially generates diverse instruction data with multiple LLM agents through a bespoke sampling method.
The generated data undergo a rigorous evaluation using a dual-model method that assesses both difficulty and quality.
arXiv Detail & Related papers (2024-11-21T02:30:53Z)
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM Systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z)
- Large Language Models for Anomaly and Out-of-Distribution Detection: A Survey [18.570066068280212]
Large Language Models (LLMs) have demonstrated their effectiveness not only in natural language processing but also in broader applications.
This survey focuses on the problem of anomaly and OOD detection under the context of LLMs.
We propose a new taxonomy to categorize existing approaches into two classes based on the role played by LLMs.
arXiv Detail & Related papers (2024-09-03T15:22:41Z)
- LLM Inference Serving: Survey of Recent Advances and Opportunities [8.567865555551911]
This survey offers a comprehensive overview of recent advancements in Large Language Model (LLM) serving systems.
We specifically examine system-level enhancements that improve performance and efficiency without altering the core LLM decoding mechanisms.
This survey serves as a valuable resource for LLM practitioners seeking to stay abreast of the latest developments in this rapidly evolving field.
arXiv Detail & Related papers (2024-07-17T08:11:47Z)
- A Comprehensive Library for Benchmarking Multi-class Visual Anomaly Detection [52.228708947607636]
This paper introduces a comprehensive visual anomaly detection benchmark, ADer, which is a modular framework for new methods.
The benchmark includes multiple datasets from industrial and medical domains, implementing fifteen state-of-the-art methods and nine comprehensive metrics.
We objectively reveal the strengths and weaknesses of different methods and provide insights into the challenges and future directions of multi-class visual anomaly detection.
arXiv Detail & Related papers (2024-06-05T13:40:07Z)
- CLAMBER: A Benchmark of Identifying and Clarifying Ambiguous Information Needs in Large Language Models [60.59638232596912]
We introduce CLAMBER, a benchmark for evaluating large language models (LLMs).
Building upon the taxonomy, we construct 12K high-quality data to assess the strengths, weaknesses, and potential risks of various off-the-shelf LLMs.
Our findings indicate the limited practical utility of current LLMs in identifying and clarifying ambiguous user queries.
arXiv Detail & Related papers (2024-05-20T14:34:01Z)
- A Survey of Confidence Estimation and Calibration in Large Language Models [86.692994151323]
Large language models (LLMs) have demonstrated remarkable capabilities across a wide range of tasks in various domains.
Despite their impressive performance, they can be unreliable due to factual errors in their generations.
Assessing their confidence and calibrating them across different tasks can help mitigate risks and enable LLMs to produce better generations.
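Temperature scaling is one common calibration technique in the literature this survey covers; a minimal sketch is below. The logits are made-up illustration values, and the choice of temperature 2.0 is an assumption, not a recommendation from the survey.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw scores to probabilities; temperature > 1 softens them."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 1.0, 0.5]                   # overconfident raw scores (assumed)
p_raw = softmax(logits)                    # sharp, overconfident distribution
p_cal = softmax(logits, temperature=2.0)   # softer, better-calibrated distribution
```

The temperature is normally fit on a held-out validation set (e.g. by minimizing negative log-likelihood) rather than chosen by hand; dividing logits by a temperature above 1 lowers the top-class confidence without changing the predicted class.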
arXiv Detail & Related papers (2023-11-14T16:43:29Z)
- A Survey on Large Language Models for Recommendation [77.91673633328148]
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP).
This survey presents a taxonomy that categorizes these models into two major paradigms: Discriminative LLM for Recommendation (DLLM4Rec) and Generative LLM for Recommendation (GLLM4Rec).
arXiv Detail & Related papers (2023-05-31T13:51:26Z)
- On the Factory Floor: ML Engineering for Industrial-Scale Ads Recommendation Models [9.102290972714652]
For industrial-scale advertising systems, prediction of ad click-through rate (CTR) is a central problem.
We present a case study of practical techniques deployed in Google's search ads CTR model.
arXiv Detail & Related papers (2022-09-12T15:15:23Z)
- Applying Multi-armed Bandit Algorithms to Computational Advertising [0.0]
We study the performance of various online learning algorithms to identify and display the best ads/offers with the highest conversion rates to web users.
We formulate our ad-selection problem as a Multi-Armed Bandit problem which is a classical paradigm in Machine Learning.
This article highlights some of our findings in the area of computational advertising from 2011 to 2015.
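The ad-selection formulation above can be sketched as an epsilon-greedy multi-armed bandit, where each arm is an ad and the reward is a conversion. The conversion rates, horizon, and epsilon below are made-up illustration values, not figures from the article.

```python
import random

# Assumed true conversion rates per ad; unknown to the learner.
TRUE_CONVERSION = [0.02, 0.05, 0.03]

def epsilon_greedy(n_rounds=5000, epsilon=0.1, seed=0):
    """Select ads for n_rounds, exploring with probability epsilon."""
    rng = random.Random(seed)
    n_arms = len(TRUE_CONVERSION)
    counts = [0] * n_arms            # times each ad was shown
    values = [0.0] * n_arms          # running mean conversion per ad
    for _ in range(n_rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                          # explore
        else:
            arm = max(range(n_arms), key=values.__getitem__)     # exploit
        reward = 1 if rng.random() < TRUE_CONVERSION[arm] else 0 # conversion?
        counts[arm] += 1
        # Incremental mean update for the chosen ad.
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts, values

counts, values = epsilon_greedy()
best = max(range(len(values)), key=values.__getitem__)
```

With enough rounds the estimated values concentrate around the true rates and the highest-converting ad dominates the traffic; more sophisticated variants (UCB, Thompson sampling) trade off exploration more adaptively.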
arXiv Detail & Related papers (2020-11-22T03:23:13Z)
- Hidden Incentives for Auto-Induced Distributional Shift [11.295927026302573]
We introduce the term auto-induced distributional shift (ADS) to describe the phenomenon of an algorithm causing a change in the distribution of its own inputs.
Our goal is to ensure that machine learning systems do not leverage ADS to increase performance when doing so could be undesirable.
We demonstrate that changes to the learning algorithm, such as the introduction of meta-learning, can cause hidden incentives for auto-induced distributional shift (HI-ADS) to be revealed.
arXiv Detail & Related papers (2020-09-19T03:31:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.