MindMem: Multimodal for Predicting Advertisement Memorability Using LLMs and Deep Learning
- URL: http://arxiv.org/abs/2502.18371v1
- Date: Tue, 25 Feb 2025 17:09:12 GMT
- Title: MindMem: Multimodal for Predicting Advertisement Memorability Using LLMs and Deep Learning
- Authors: Sepehr Asgarian, Qayam Jetha, Jouhyun Jeon
- Abstract summary: We present MindMem, a multimodal predictive model for advertisement memorability. By integrating textual, visual, and auditory data, MindMem achieves state-of-the-art performance. We introduce MindMem-ReAd, which employs Large Language Model-based simulations to optimize advertisement content and placement.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In the competitive landscape of advertising, success hinges on effectively navigating and leveraging complex interactions among consumers, advertisers, and advertisement platforms. These multifaceted interactions compel advertisers to optimize strategies for modeling consumer behavior, enhancing brand recall, and tailoring advertisement content. To address these challenges, we present MindMem, a multimodal predictive model for advertisement memorability. By integrating textual, visual, and auditory data, MindMem achieves state-of-the-art performance, with a Spearman's correlation coefficient of 0.631 on the LAMBDA dataset and 0.731 on the Memento10K dataset, consistently surpassing existing methods. Furthermore, our analysis identifies key factors influencing advertisement memorability, such as video pacing, scene complexity, and emotional resonance. Expanding on this, we introduce MindMem-ReAd (MindMem-Driven Re-generated Advertisement), which employs Large Language Model-based simulations to optimize advertisement content and placement, resulting in up to a 74.12% improvement in advertisement memorability. Our results highlight the transformative potential of Artificial Intelligence in advertising, offering advertisers a robust tool to drive engagement, enhance competitiveness, and maximize impact in a rapidly evolving market.
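As a rough illustration of the evaluation described above, the snippet below sketches a late-fusion regressor over pre-extracted textual, visual, and auditory embeddings, scored with Spearman's rank correlation against human memorability ratings. This is a minimal sketch under assumed embedding sizes, a toy fusion MLP, and random inputs; it is not the released MindMem architecture.

```python
# Minimal late-fusion sketch (not the authors' code): concatenate text, visual,
# and audio embeddings, regress a memorability score, and evaluate with
# Spearman's rank correlation. Embedding sizes, the fusion MLP, and the random
# data below are illustrative assumptions.
import torch
import torch.nn as nn
from scipy.stats import spearmanr

class LateFusionMemorability(nn.Module):
    def __init__(self, text_dim=768, visual_dim=512, audio_dim=128, hidden=256):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(text_dim + visual_dim + audio_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, text_emb, visual_emb, audio_emb):
        fused = torch.cat([text_emb, visual_emb, audio_emb], dim=-1)
        return self.head(fused).squeeze(-1)  # one memorability score per ad

# Dummy batch of 32 ads with pre-extracted per-modality embeddings.
model = LateFusionMemorability()
text = torch.randn(32, 768)
visual = torch.randn(32, 512)
audio = torch.randn(32, 128)
ground_truth = torch.rand(32)  # human memorability ratings in [0, 1]

with torch.no_grad():
    predictions = model(text, visual, audio)

# Rank-based evaluation, in the spirit of the Spearman's rho reported on LAMBDA / Memento10K.
rho, _ = spearmanr(predictions.numpy(), ground_truth.numpy())
print(f"Spearman's rho on the dummy batch: {rho:.3f}")
```

In practice, the per-modality embeddings would come from pretrained text, vision, and audio encoders rather than random tensors.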
Related papers
- CTR-Driven Advertising Image Generation with Multimodal Large Language Models [53.40005544344148]
We explore the use of Multimodal Large Language Models (MLLMs) for generating advertising images by optimizing for Click-Through Rate (CTR) as the primary objective.
To further improve the CTR of generated images, we propose a novel reward model to fine-tune pre-trained MLLMs through Reinforcement Learning (RL) (see the sketch after this entry).
Our method achieves state-of-the-art performance in both online and offline metrics.
arXiv Detail & Related papers (2025-02-05T09:06:02Z)
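The reward-model idea in the entry above can be pictured as a reward-weighted (REINFORCE-style) update: a frozen CTR predictor scores sampled creatives, and the generator is nudged toward higher-scoring ones. The sketch below is a hedged toy illustration, not the paper's training code; TinyCTRRewardModel, the linear "policy head", and all shapes are invented stand-ins.

```python
# Hedged REINFORCE-style sketch of reward-model-guided fine-tuning.
# A frozen CTR reward model scores sampled ad creatives, and a toy generator
# head is updated with a baseline-subtracted, reward-weighted loss.
import torch
import torch.nn as nn

class TinyCTRRewardModel(nn.Module):
    """Stand-in for a learned CTR predictor over creative features."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, creative_feats):
        return torch.sigmoid(self.score(creative_feats)).squeeze(-1)  # predicted CTR in (0, 1)

reward_model = TinyCTRRewardModel().eval()      # frozen after pre-training
policy_head = nn.Linear(64, 1)                  # toy stand-in for the tunable generator
optimizer = torch.optim.Adam(policy_head.parameters(), lr=1e-4)

creative_feats = torch.randn(16, 64)            # features of 16 sampled ad creatives
# Toy surrogate for the per-sample log-likelihoods of the sampled creatives.
log_probs = torch.log_softmax(policy_head(creative_feats).squeeze(-1), dim=0)

with torch.no_grad():
    rewards = reward_model(creative_feats)
advantage = rewards - rewards.mean()            # simple baseline to reduce variance

loss = -(advantage * log_probs).mean()          # reward-weighted policy-gradient loss
optimizer.zero_grad()
loss.backward()
optimizer.step()
```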
- Parallel Ranking of Ads and Creatives in Real-Time Advertising Systems [20.78133992969317]
We propose, for the first time, a novel architecture for the online parallel estimation of ad and creative rankings.
The online architecture enables sophisticated personalized creative modeling while reducing overall latency.
The offline joint model for CTR estimation allows mutual awareness and collaborative optimization between ads and creatives.
arXiv Detail & Related papers (2023-12-20T04:05:21Z)
- A Multimodal Analysis of Influencer Content on Twitter [40.41635575764701]
The line between personal opinions and commercial content promotion is frequently blurred.
This makes automatic detection of regulatory compliance breaches related to influencer advertising difficult.
We introduce a new Twitter (now X) dataset consisting of 15,998 influencer posts mapped into commercial and non-commercial categories.
arXiv Detail & Related papers (2023-09-06T15:07:23Z)
- Long-Term Ad Memorability: Understanding & Generating Memorable Ads [54.23854539909078]
Despite the importance of long-term memory in marketing and brand building, until now, there has been no large-scale study on the memorability of ads. We release the first memorability dataset, LAMBDA, consisting of 1749 participants and 2205 ads covering 276 brands. Running statistical tests over different participant subpopulations and ad types, we find many interesting insights into what makes an ad memorable, e.g., fast-moving ads are more memorable than those with slower scenes. We present a scalable method to build a high-quality memorable ad generation model by leveraging automatically annotated data.
arXiv Detail & Related papers (2023-09-01T10:27:04Z)
- A Profit-Maximizing Strategy for Advertising on the e-Commerce Platforms [1.565361244756411]
The proposed model aims to find the optimal set of features to maximize the probability of converting targeted audiences into actual buyers.
We conduct an empirical study featuring real-world data from Tmall to show that our proposed method can effectively optimize the advertising strategy with budgetary constraints.
arXiv Detail & Related papers (2022-10-31T01:45:42Z)
- Persuasion Strategies in Advertisements [68.70313043201882]
We introduce an extensive vocabulary of persuasion strategies and build the first ad image corpus annotated with persuasion strategies.
We then formulate the task of persuasion strategy prediction with multi-modal learning.
We conduct a real-world case study on 1600 advertising campaigns of 30 Fortune-500 companies.
arXiv Detail & Related papers (2022-08-20T07:33:13Z)
- Personality-Driven Social Multimedia Content Recommendation [68.46899477180837]
We investigate the impact of human personality traits on the content recommendation model by applying a novel personality-driven multi-view content recommender system.
Our experimental results and a real-world case study demonstrate not only PersiC's ability to perform efficient, personality-driven multi-view content recommendation, but also its capacity to produce actionable digital ad strategy recommendations.
arXiv Detail & Related papers (2022-07-25T14:37:18Z)
- A Multimodal Framework for Video Ads Understanding [64.70769354696019]
We develop a multimodal system to improve the ability of structured analysis of advertising video content.
Our solution achieved a score of 0.2470, which jointly reflects localization and prediction accuracy, ranking fourth on the 2021 TAAC final leaderboard.
arXiv Detail & Related papers (2021-08-29T16:06:00Z)
- Ranking Micro-Influencers: a Novel Multi-Task Learning and Interpretable Framework [69.5850969606885]
We propose a novel multi-task learning framework to improve the state of the art in micro-influencer ranking based on multimedia content.
We show significant improvement both in terms of accuracy and model complexity.
The techniques for ranking and interpretation presented in this work can be generalised to arbitrary multimedia ranking tasks (see the pairwise-ranking sketch after this entry).
arXiv Detail & Related papers (2021-07-29T13:04:25Z)
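To make the ranking objective in the entry above concrete, the sketch below trains a shared scorer over fused multimodal features with a pairwise margin ranking loss, so the more engaging influencer in each labeled pair receives the higher score. It is a hedged, generic illustration rather than the paper's multi-task framework; the feature size, scorer, and random pairs are assumptions.

```python
# Hedged sketch of a pairwise-ranking objective for multimedia ranking.
# A shared scorer over fused multimodal profile features is trained so that the
# higher-engagement influencer in each pair scores higher than the other.
import torch
import torch.nn as nn

scorer = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(scorer.parameters(), lr=1e-3)
ranking_loss = nn.MarginRankingLoss(margin=0.5)

# Each row is an assumed fused multimodal feature vector (e.g., text + image embeddings).
better = torch.randn(32, 256)    # influencer judged more engaging in the pair
worse = torch.randn(32, 256)     # influencer judged less engaging
target = torch.ones(32)          # +1 means the first input should rank higher

loss = ranking_loss(scorer(better).squeeze(-1), scorer(worse).squeeze(-1), target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```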
- Predicting Online Video Advertising Effects with Multimodal Deep Learning [33.20913249848369]
We propose a method for predicting the click-through rate (CTR) of video advertisements and analyzing the factors that determine the CTR.
In this paper, we demonstrate an optimized framework for accurately predicting the effects by taking advantage of the multimodal nature of online video advertisements.
arXiv Detail & Related papers (2020-12-22T06:24:01Z)