Interpretable Deep Learning Model for Online Multi-touch Attribution
- URL: http://arxiv.org/abs/2004.00384v1
- Date: Thu, 26 Mar 2020 23:21:40 GMT
- Title: Interpretable Deep Learning Model for Online Multi-touch Attribution
- Authors: Dongdong Yang, Kevin Dyer, Senzhang Wang
- Abstract summary: We propose a novel model called DeepMTA, which combines a deep learning model with an additive feature explanation model for interpretable online multi-touch attribution.
As the first interpretable deep learning model for MTA, DeepMTA considers three important features in the customer journey.
Evaluation on a real dataset shows the proposed conversion prediction model achieves 91% accuracy.
- Score: 14.62385029537631
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In online advertising, users may be exposed to a range of different
advertising campaigns, such as natural search, referral, or organic search,
before completing a final transaction. Estimating the contribution of each
advertising campaign to the user's journey is crucial: a marketer who can
observe each customer's interactions with different marketing channels can
adjust investment strategies accordingly. Existing methods for the multi-touch
attribution (MTA) problem, including both traditional last-click methods and
recent data-driven approaches, offer little interpretation of why they work. In
this paper, we propose a novel model called DeepMTA, which combines a deep
learning model with an additive feature explanation model for interpretable
online multi-touch attribution. DeepMTA mainly contains two parts: a
phased-LSTM-based conversion prediction model that captures the varying time
intervals between touchpoints, and an additive feature attribution model based
on Shapley values. Additive feature attribution is an explanation model
expressed as a linear function of binary variables. As the first interpretable
deep learning model for MTA, DeepMTA considers three important features of the
customer journey: event sequence order, event frequency, and the time-decay
effect of events. Evaluation on a real dataset shows the proposed conversion
prediction model achieves 91% accuracy.
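The "linear function of binary variables" referenced above is the standard additive explanation model popularized by SHAP; in the MTA setting the binary variables indicate which touchpoints appear in a journey:

```latex
g(z') = \phi_0 + \sum_{i=1}^{M} \phi_i z'_i, \qquad z'_i \in \{0, 1\},
```

where z'_i marks the presence of touchpoint i and phi_i is the credit attributed to it. As a minimal sketch of the idea, and assuming a toy stand-in for the trained conversion predictor (the paper's actual model is a phased-LSTM, not reproduced here), exact Shapley values over a small set of channels can be computed by averaging each channel's marginal contribution over all coalitions:

```python
# Minimal sketch of Shapley-value channel attribution. `conversion_prob` is a
# hypothetical stand-in for a trained conversion model, NOT DeepMTA itself.
from itertools import combinations
from math import factorial

CHANNELS = ["organic_search", "referral", "display", "email"]  # illustrative

def conversion_prob(active: frozenset) -> float:
    # Toy model: predicted conversion probability given the set of channels
    # present in the journey. Replace with a real predictor in practice.
    weights = {"organic_search": 0.20, "referral": 0.10,
               "display": 0.05, "email": 0.08}
    return min(1.0, 0.05 + sum(weights[c] for c in active))

def shapley_attribution(channels: list) -> dict:
    """Exact Shapley value per channel: its average marginal contribution to
    the predicted conversion probability across all coalitions."""
    n = len(channels)
    phi = {}
    for c in channels:
        others = [x for x in channels if x != c]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (conversion_prob(s | {c}) - conversion_prob(s))
        phi[c] = total
    return phi

print(shapley_attribution(CHANNELS))  # credits sum to f(all) - f(empty)
```

Exact enumeration is exponential in the number of channels; a DeepMTA-style pipeline would substitute the learned conversion model for `conversion_prob` and, for longer journeys, an approximation such as sampled permutations.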
Related papers
- PATFinger: Prompt-Adapted Transferable Fingerprinting against Unauthorized Multimodal Dataset Usage [19.031839603738057]
Multimodal datasets can be leveraged to pre-train vision-adapted models by providing cross-modal semantics.
We propose a novel prompt-adapted transferable fingerprinting scheme called PATFinger.
Our scheme utilizes inherent dataset attributes as fingerprints instead of compelling the model to learn triggers.
arXiv Detail & Related papers (2025-04-15T09:53:02Z) - MIM: Multi-modal Content Interest Modeling Paradigm for User Behavior Modeling [27.32474950026696]
We propose a novel Multi-modal Content Interest Modeling paradigm (MIM)
MIM consists of three key stages: Pre-training, Content-Interest-Aware Supervised Fine-Tuning, and Content-Interest-Aware UBM.
The method has been successfully deployed online, achieving significant gains of +14.14% in click-through rate (CTR) and +4.12% in revenue per mille (RPM).
arXiv Detail & Related papers (2025-02-01T05:06:21Z) - Robust Uplift Modeling with Large-Scale Contexts for Real-time Marketing [6.511772664252086]
Uplift modeling is proposed to solve this problem, applying different treatments (e.g., discounts, bonuses) to the corresponding users.
In real-world scenarios, there are rich contexts available in the online platform (e.g., short videos, news) and the uplift model needs to infer an incentive for each user.
We propose a novel model-agnostic Robust Uplift Modeling with Large-Scale Contexts (UMLC) framework for Real-time Marketing.
arXiv Detail & Related papers (2025-01-04T08:55:50Z) - SM3Det: A Unified Model for Multi-Modal Remote Sensing Object Detection [73.49799596304418]
This paper introduces a new task called Multi-Modal Datasets and Multi-Task Object Detection (M2Det) for remote sensing.
It is designed to accurately detect horizontal or oriented objects from any sensor modality.
This task poses challenges due to 1) the trade-offs involved in managing multi-modal modeling and 2) the complexities of multi-task optimization.
arXiv Detail & Related papers (2024-12-30T02:47:51Z) - Multimodal Difference Learning for Sequential Recommendation [5.243083216855681]
We argue that user interests and item relationships vary across different modalities.
We propose a novel Multimodal Difference Learning framework for Sequential Recommendation, MDSRec.
Results on five real-world datasets demonstrate the superiority of MDSRec over state-of-the-art baselines.
arXiv Detail & Related papers (2024-12-11T05:08:19Z) - EDA: Evolving and Distinct Anchors for Multimodal Motion Prediction [27.480524917596565]
We introduce a novel paradigm, named Evolving and Distinct Anchors (EDA), to define the positive and negative components for multimodal motion prediction based on mixture models.
EDA enables anchors to evolve and redistribute themselves under specific scenes for an enlarged regression capacity.
arXiv Detail & Related papers (2023-12-15T02:55:24Z) - Debiasing Multimodal Models via Causal Information Minimization [65.23982806840182]
We study bias arising from confounders in a causal graph for multimodal data.
Robust predictive features contain diverse information that helps a model generalize to out-of-distribution data.
We use these features as confounder representations and use them via methods motivated by causal theory to remove bias from models.
arXiv Detail & Related papers (2023-11-28T16:46:14Z) - JPAVE: A Generation and Classification-based Model for Joint Product Attribute Prediction and Value Extraction [59.94977231327573]
We propose a multi-task learning model with value generation/classification and attribute prediction called JPAVE.
Two variants of our model are designed for open-world and closed-world scenarios.
Experimental results on a public dataset demonstrate the superiority of our model compared with strong baselines.
arXiv Detail & Related papers (2023-11-07T18:36:16Z) - Mining Stable Preferences: Adaptive Modality Decorrelation for Multimedia Recommendation [23.667430143035787]
We propose a novel MOdality DEcorrelating STable learning framework, MODEST for brevity, to learn users' stable preference.
Inspired by sample re-weighting techniques, the proposed method aims to estimate a weight for each item, such that the features from different modalities in the weighted distribution are decorrelated.
Our method can serve as a plug-and-play module for existing multimedia recommendation backbones.
arXiv Detail & Related papers (2023-06-25T09:09:11Z) - Exploring the cloud of feature interaction scores in a Rashomon set [17.775145325515993]
We introduce the feature interaction score (FIS) in the context of a Rashomon set.
We demonstrate the properties of the FIS via synthetic data and draw connections to other areas of statistics.
Our results suggest that the proposed FIS can provide valuable insights into the nature of feature interactions in machine learning models.
arXiv Detail & Related papers (2023-05-17T13:05:26Z) - Contrastive Meta Learning with Behavior Multiplicity for Recommendation [42.15990960863924]
A well-informed recommendation framework could not only help users identify their interested items, but also benefit the revenue of various online platforms.
We propose Contrastive Meta Learning (CML) to maintain dedicated cross-type behavior dependency for different users.
Our method consistently outperforms various state-of-the-art recommendation methods.
arXiv Detail & Related papers (2022-02-17T08:51:24Z) - You Mostly Walk Alone: Analyzing Feature Attribution in Trajectory Prediction [52.442129609979794]
Recent deep learning approaches for trajectory prediction show promising performance.
It remains unclear which features such black-box models actually learn to use for making predictions.
This paper proposes a procedure that quantifies the contributions of different cues to model performance.
arXiv Detail & Related papers (2021-10-11T14:24:15Z) - Learning Multiple Stock Trading Patterns with Temporal Routing Adaptor and Optimal Transport [8.617532047238461]
We propose a novel architecture, Temporal Routing Adaptor (TRA), to empower existing stock prediction models with the ability to model multiple stock trading patterns.
TRA is a lightweight module that consists of a set of independent predictors for learning multiple patterns, as well as a router that dispatches samples to different predictors.
We show that the proposed method can improve the information coefficient (IC) from 0.053 to 0.059 and from 0.051 to 0.056, respectively.
arXiv Detail & Related papers (2021-06-24T12:19:45Z) - SMART: Simultaneous Multi-Agent Recurrent Trajectory Prediction [72.37440317774556]
We propose advances that address two key challenges in future trajectory prediction: multimodality in both training data and predictions, and constant-time inference regardless of the number of agents.
arXiv Detail & Related papers (2020-07-26T08:17:10Z) - Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z) - AvgOut: A Simple Output-Probability Measure to Eliminate Dull Responses [97.50616524350123]
We build dialogue models that are dynamically aware of which utterances or tokens are dull, without any feature engineering.
The first model, MinAvgOut, directly maximizes the diversity score through the output distributions of each batch.
The second model, Label Fine-Tuning (LFT), prepends to the source sequence a label continuously scaled by the diversity score to control the diversity level.
The third model, RL, adopts Reinforcement Learning and treats the diversity score as a reward signal.
arXiv Detail & Related papers (2020-01-15T18:32:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.