Proactive Detection and Calibration of Seasonal Advertisements with Multimodal Large Language Models
- URL: http://arxiv.org/abs/2411.00780v1
- Date: Wed, 16 Oct 2024 18:14:05 GMT
- Title: Proactive Detection and Calibration of Seasonal Advertisements with Multimodal Large Language Models
- Authors: Hamid Eghbalzadeh, Shuai Shao, Saurabh Verma, Venugopal Mani, Hongnan Wang, Jigar Madia, Vitali Karpinchyk, Andrey Malevich,
- Abstract summary: We present a research problem of interest to the ads ranking and recommendation community.
Our paper provides detailed guidelines on various angles of this problem, tested in, and motivated by, a large-scale industrial ads ranking system.
We present the solution we converged on during this exploration: to detect seasonality, we leveraged Multimodal LLMs.
- Score: 5.425511887990726
- License:
- Abstract: A myriad of factors affect large-scale ads delivery systems and influence both user experience and revenue. One such factor is the proactive detection and calibration of seasonal advertisements, which helps increase conversions and user satisfaction. In this paper, we present Proactive Detection and Calibration of Seasonal Advertisements (PDCaSA), a research problem of interest to the ads ranking and recommendation community, both in industrial settings and in research. Our paper provides detailed guidelines on various angles of this problem, tested in, and motivated by, a large-scale industrial ads ranking system. We share our findings, including a clear statement of the problem and its motivation rooted in real-world systems, evaluation metrics, and lessons learned, and shed light on the existing challenges and best practices of data annotation and machine learning modeling for tackling this problem. Lastly, we present the solution we converged on during this exploration: to detect seasonality, we leveraged Multimodal LLMs (MLMs), which achieved a top F1 score of 0.97 on our in-house benchmark. Based on our findings, we envision MLMs as a teacher for knowledge distillation, a machine labeler, and part of an ensembled and tiered seasonality detection system, which can empower ads ranking systems with enriched seasonal information.
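As a rough illustration of the tiered detection idea in the abstract, the sketch below routes obvious cases through a cheap keyword heuristic and falls back to a multimodal LLM, which is stubbed out since the paper's in-house model and prompts are not public; the keyword list, stand-in logic, and example ads are assumptions for illustration, and predictions are scored with F1 via scikit-learn.

```python
# Minimal sketch of a tiered seasonality detector; not the authors' system.
from typing import Optional
from sklearn.metrics import f1_score

SEASONAL_TERMS = {"christmas", "halloween", "black friday", "valentine"}

def heuristic_tier(ad_text: str) -> Optional[bool]:
    """Tier 1: cheap keyword check; returns None when it cannot decide."""
    return True if any(term in ad_text.lower() for term in SEASONAL_TERMS) else None

def mlm_tier(ad_text: str, ad_image_path: str) -> bool:
    """Tier 2 placeholder: in practice this would prompt a multimodal LLM with
    the ad creative and copy and parse a yes/no seasonality answer."""
    return "gift" in ad_text.lower()  # stand-in logic for illustration only

def predict_seasonal(ad_text: str, ad_image_path: str) -> bool:
    label = heuristic_tier(ad_text)
    return label if label is not None else mlm_tier(ad_text, ad_image_path)

# Tiny hand-labeled evaluation set: (ad copy, creative path, is_seasonal).
ads = [
    ("Christmas tree lights, 50% off", "ad1.jpg", True),
    ("Ergonomic office chair", "ad2.jpg", False),
    ("Spooky Halloween costume sale", "ad3.jpg", True),
    ("Everyday running shoes", "ad4.jpg", False),
]
y_true = [label for _, _, label in ads]
y_pred = [predict_seasonal(text, img) for text, img, _ in ads]
print("F1:", f1_score(y_true, y_pred))
```

In a production setting the second tier's outputs could also be logged and reused as distillation targets or machine labels for a lightweight student model, which is the teacher/labeler role the abstract envisions for MLMs.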
Related papers
- A Unified Knowledge-Distillation and Semi-Supervised Learning Framework to Improve Industrial Ads Delivery Systems [19.0143243243314]
Industrial ads ranking systems conventionally rely on labeled impression data, which leads to challenges such as overfitting, slower incremental gain from model scaling, and biases due to discrepancies between training and serving data.
We propose a Unified framework for Knowledge-Distillation and Semi-supervised Learning (UK) for ads ranking, enabling models to be trained on significantly larger and more diverse datasets.
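The abstract does not spell out the framework's losses, but the distillation half of such a setup is commonly expressed as a mix of a soft teacher-matching term and the usual hard-label term; the PyTorch sketch below shows that generic objective, with the temperature and mixing weight as illustrative defaults rather than values from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      hard_labels: torch.Tensor,
                      alpha: float = 0.5,
                      temperature: float = 2.0) -> torch.Tensor:
    """Generic knowledge-distillation objective: KL divergence to the teacher's
    softened distribution plus cross-entropy on the labeled impressions."""
    t = temperature
    soft = F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)
    hard = F.cross_entropy(student_logits, hard_labels)
    return alpha * soft + (1.0 - alpha) * hard

# Example: a batch of 4 impressions with 2 classes (e.g., click vs. no click).
student = torch.randn(4, 2)
teacher = torch.randn(4, 2)
labels = torch.tensor([0, 1, 1, 0])
print(distillation_loss(student, teacher, labels))
```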
arXiv Detail & Related papers (2025-02-05T23:14:07Z)
- Fake Advertisements Detection Using Automated Multimodal Learning: A Case Study for Vietnamese Real Estate Data [4.506099292980221]
FADAML is a novel end-to-end machine learning system to detect and filter out fake online advertisements.
Our system combines techniques in multimodal machine learning and automated machine learning to achieve a high detection rate.
arXiv Detail & Related papers (2025-01-18T18:48:06Z)
- AD-LLM: Benchmarking Large Language Models for Anomaly Detection [50.57641458208208]
This paper introduces AD-LLM, the first benchmark that evaluates how large language models can help with anomaly detection.
We examine three key tasks: zero-shot detection, using LLMs' pre-trained knowledge to perform AD without task-specific training; data augmentation, generating synthetic data and category descriptions to improve AD models; and model selection, using LLMs to suggest unsupervised AD models.
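A minimal sketch of the zero-shot task described above: describe what counts as normal in a prompt, show the item to judge, and parse a yes/no answer. The LLM call is a hypothetical stub (AD-LLM's actual prompts and models are not reproduced here); swap in a real chat/completions client to use it.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM client, so the sketch runs end-to-end."""
    return "yes" if "panic" in prompt.splitlines()[-1] else "no"

def zero_shot_anomaly(normal_examples: list[str], item: str) -> bool:
    prompt = (
        "The following log lines are considered normal:\n"
        + "\n".join(f"- {ex}" for ex in normal_examples)
        + "\n\nAnswer yes or no: is the next line anomalous?\n"
        + item
    )
    return call_llm(prompt).strip().lower().startswith("yes")

normal = ["request served in 12ms", "user login ok"]
print(zero_shot_anomaly(normal, "kernel panic at boot"))   # True
print(zero_shot_anomaly(normal, "request served in 9ms"))  # False
```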
arXiv Detail & Related papers (2024-12-15T10:22:14Z)
- Star-Agents: Automatic Data Optimization with LLM Agents for Instruction Tuning [71.2981957820888]
We propose a novel Star-Agents framework, which automates the enhancement of data quality across datasets.
The framework initially generates diverse instruction data with multiple LLM agents through a bespoke sampling method.
The generated data undergo a rigorous evaluation using a dual-model method that assesses both difficulty and quality.
arXiv Detail & Related papers (2024-11-21T02:30:53Z)
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z)
- Large Language Models for Anomaly and Out-of-Distribution Detection: A Survey [18.570066068280212]
Large Language Models (LLMs) have demonstrated their effectiveness not only in natural language processing but also in broader applications.
This survey focuses on the problem of anomaly and out-of-distribution (OOD) detection in the context of LLMs.
We propose a new taxonomy to categorize existing approaches into two classes based on the role played by LLMs.
arXiv Detail & Related papers (2024-09-03T15:22:41Z)
- A Comprehensive Library for Benchmarking Multi-class Visual Anomaly Detection [52.228708947607636]
This paper introduces a comprehensive visual anomaly detection benchmark, ADer, which is a modular framework for new methods.
The benchmark includes multiple datasets from industrial and medical domains, implementing fifteen state-of-the-art methods and nine comprehensive metrics.
We objectively reveal the strengths and weaknesses of different methods and provide insights into the challenges and future directions of multi-class visual anomaly detection.
arXiv Detail & Related papers (2024-06-05T13:40:07Z)
- CLAMBER: A Benchmark of Identifying and Clarifying Ambiguous Information Needs in Large Language Models [60.59638232596912]
We introduce CLAMBER, a benchmark for evaluating large language models (LLMs), built upon a well-organized taxonomy of ambiguous user queries.
Building upon this taxonomy, we construct 12K high-quality samples to assess the strengths, weaknesses, and potential risks of various off-the-shelf LLMs.
Our findings indicate the limited practical utility of current LLMs in identifying and clarifying ambiguous user queries.
arXiv Detail & Related papers (2024-05-20T14:34:01Z)
- A Survey on Large Language Models for Recommendation [77.91673633328148]
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP).
This survey presents a taxonomy that categorizes these models into two major paradigms: Discriminative LLMs for Recommendation (DLLM4Rec) and Generative LLMs for Recommendation (GLLM4Rec).
arXiv Detail & Related papers (2023-05-31T13:51:26Z)
- On the Factory Floor: ML Engineering for Industrial-Scale Ads Recommendation Models [9.102290972714652]
For industrial-scale advertising systems, prediction of ad click-through rate (CTR) is a central problem.
We present a case study of practical techniques deployed in Google's search ads CTR model.
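The abstract above does not describe model internals, but the underlying task, predicting click probability from high-cardinality categorical features, is often prototyped with hashed features and logistic regression; the scikit-learn sketch below uses invented impressions and feature names purely for illustration, not Google's production setup.

```python
# Toy CTR predictor: hash sparse categorical ad/user features into a fixed-size
# vector and fit a logistic regression on click labels.
from sklearn.feature_extraction import FeatureHasher
from sklearn.linear_model import LogisticRegression

impressions = [  # (features, clicked)
    ({"query": "running shoes", "ad_id": "ad_17", "country": "US"}, 1),
    ({"query": "running shoes", "ad_id": "ad_42", "country": "US"}, 0),
    ({"query": "office chair", "ad_id": "ad_17", "country": "DE"}, 0),
    ({"query": "office chair", "ad_id": "ad_99", "country": "DE"}, 1),
]

hasher = FeatureHasher(n_features=2**12, input_type="dict")
X = hasher.transform(feats for feats, _ in impressions)
y = [click for _, click in impressions]

model = LogisticRegression().fit(X, y)
new_impression = {"query": "running shoes", "ad_id": "ad_17", "country": "DE"}
print("p(click):", model.predict_proba(hasher.transform([new_impression]))[0, 1])
```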
arXiv Detail & Related papers (2022-09-12T15:15:23Z)
- Applying Multi-armed Bandit Algorithms to Computational Advertising [0.0]
We study the performance of various online learning algorithms to identify and display the best ads/offers with the highest conversion rates to web users.
We formulate our ad-selection problem as a Multi-Armed Bandit problem, a classical paradigm in machine learning.
This article highlights some of our findings in the area of computational advertising from 2011 to 2015.
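The formulation above lends itself to a compact example; the sketch below runs an epsilon-greedy bandit, one of the simplest members of the algorithm family the article studies, over simulated conversion rates. The ad names, rates, and epsilon are arbitrary illustrative choices.

```python
import random

# Simulated true conversion rates for three candidate ads (unknown to the learner).
TRUE_RATES = {"ad_a": 0.02, "ad_b": 0.05, "ad_c": 0.03}
EPSILON = 0.1  # exploration probability

counts = {ad: 0 for ad in TRUE_RATES}
rewards = {ad: 0.0 for ad in TRUE_RATES}

def choose_ad() -> str:
    """Epsilon-greedy: explore with probability EPSILON, otherwise exploit the
    best empirical conversion rate (unseen ads are tried first)."""
    if random.random() < EPSILON:
        return random.choice(list(TRUE_RATES))
    return max(counts, key=lambda ad: rewards[ad] / counts[ad] if counts[ad] else float("inf"))

random.seed(0)
for _ in range(50_000):
    ad = choose_ad()
    reward = 1.0 if random.random() < TRUE_RATES[ad] else 0.0
    counts[ad] += 1
    rewards[ad] += reward

for ad, n in counts.items():
    print(ad, "shown", n, "times, empirical rate", round(rewards[ad] / max(n, 1), 4))
```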
arXiv Detail & Related papers (2020-11-22T03:23:13Z)