Robustness-enhanced Uplift Modeling with Adversarial Feature
Desensitization
- URL: http://arxiv.org/abs/2310.04693v3
- Date: Fri, 29 Dec 2023 09:54:36 GMT
- Title: Robustness-enhanced Uplift Modeling with Adversarial Feature
Desensitization
- Authors: Zexu Sun, Bowei He, Ming Ma, Jiakai Tang, Yuchen Wang, Chen Ma, Dugang
Liu
- Abstract summary: We propose a novel robustness-enhanced uplift modeling framework with adversarial feature desensitization (RUAD).
Our RUAD can more effectively alleviate the feature sensitivity of the uplift model through two customized modules.
We conduct extensive experiments on a public dataset and a real product dataset to verify the effectiveness of our RUAD in online marketing.
- Score: 11.404726761497798
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Uplift modeling has shown very promising results in online marketing.
However, most existing works are prone to the robustness challenge in some
practical applications. In this paper, we first present a possible explanation
for the above phenomenon. We verify that there is a feature sensitivity problem
in online marketing using different real-world datasets, where the perturbation
of some key features will seriously affect the performance of the uplift model
and even cause the opposite trend. To solve the above problem, we propose a
novel robustness-enhanced uplift modeling framework with adversarial feature
desensitization (RUAD). Specifically, our RUAD can more effectively alleviate
the feature sensitivity of the uplift model through two customized modules,
including a feature selection module with joint multi-label modeling to
identify a key subset from the input features and an adversarial feature
desensitization module using adversarial training and soft interpolation
operations to enhance the robustness of the model against this selected subset
of features. Finally, we conduct extensive experiments on a public dataset and
a real product dataset to verify the effectiveness of our RUAD in online
marketing. In addition, we also demonstrate the robustness of our RUAD to the
feature sensitivity, as well as the compatibility with different uplift models.
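The two operations named in the abstract, adversarial perturbation of a selected feature subset and soft interpolation, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the FGSM-style sign perturbation, the function names, and the fixed mixing coefficient are illustrative assumptions; RUAD's actual modules (joint multi-label feature selection and its training objective) are not reproduced here.

```python
import numpy as np

def perturb_selected_features(x, grad, subset_mask, eps=0.1):
    """FGSM-style additive perturbation, restricted to a selected
    feature subset via a 0/1 mask (an assumption; the paper's
    selection module learns this subset with multi-label modeling)."""
    return x + eps * np.sign(grad) * subset_mask

def soft_interpolate(x_clean, x_adv, lam=0.5):
    """Soft interpolation between the clean and the adversarially
    perturbed example, mixup-style."""
    return lam * x_clean + (1.0 - lam) * x_adv

# Toy example: only the masked features (indices 0 and 2) move.
x = np.array([1.0, 2.0, 3.0])
grad = np.array([0.5, -0.3, 0.2])       # stand-in loss gradient
mask = np.array([1.0, 0.0, 1.0])        # "key" feature subset
x_adv = perturb_selected_features(x, grad, mask)
x_mix = soft_interpolate(x, x_adv)
```

Training against `x_mix` rather than `x_adv` directly is one way to read "soft interpolation operations": the model sees examples between the clean and the adversarial point, which tempers the perturbation strength.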
Related papers
- PrivilegedDreamer: Explicit Imagination of Privileged Information for Rapid Adaptation of Learned Policies [7.376615925443845]
We introduce PrivilegedDreamer, a model-based reinforcement learning framework that extends the existing model-based approach by incorporating an explicit parameter estimation module.
Our empirical analysis on five diverse HIP-MDP tasks demonstrates that PrivilegedDreamer outperforms state-of-the-art model-based, model-free, and domain adaptation learning algorithms.
arXiv Detail & Related papers (2025-02-17T02:46:02Z) - AdaF^2M^2: Comprehensive Learning and Responsive Leveraging Features in Recommendation System [16.364341783911414]
We propose a model-agnostic framework AdaF2M2, short for Adaptive Feature Modeling with Feature Mask.
By arming base models with AdaF2M2, we conduct online A/B tests on multiple recommendation scenarios, obtaining +1.37% and +1.89% cumulative improvements on user active days and app duration respectively.
arXiv Detail & Related papers (2025-01-27T06:49:27Z) - Latent Feature Mining for Predictive Model Enhancement with Large Language Models [2.6334346517416876]
We introduce an effective approach to formulate latent feature mining as text-to-text propositional logical reasoning.
We propose FLAME, a framework that leverages large language models (LLMs) to augment observed features with latent features.
We validate our framework with two case studies: the criminal justice system and the healthcare domain.
arXiv Detail & Related papers (2024-10-06T03:51:32Z) - Explanatory Model Monitoring to Understand the Effects of Feature Shifts on Performance [61.06245197347139]
We propose a novel approach to explain the behavior of a black-box model under feature shifts.
We refer to our method that combines concepts from Optimal Transport and Shapley Values as Explanatory Performance Estimation.
arXiv Detail & Related papers (2024-08-24T18:28:19Z) - A Simple Background Augmentation Method for Object Detection with Diffusion Model [53.32935683257045]
In computer vision, it is well-known that a lack of data diversity will impair model performance.
We propose a simple yet effective data augmentation approach by leveraging advancements in generative models.
Background augmentation, in particular, significantly improves the models' robustness and generalization capabilities.
arXiv Detail & Related papers (2024-08-01T07:40:00Z) - The Risk of Federated Learning to Skew Fine-Tuning Features and
Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning has the risk of skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z) - On the Embedding Collapse when Scaling up Recommendation Models [53.66285358088788]
We identify the embedding collapse phenomenon as the inhibition of scalability, wherein the embedding matrix tends to occupy a low-dimensional subspace.
We propose a simple yet effective multi-embedding design incorporating embedding-set-specific interaction modules to learn embedding sets with large diversity.
arXiv Detail & Related papers (2023-10-06T17:50:38Z) - Exploiting Modality-Specific Features For Multi-Modal Manipulation
Detection And Grounding [54.49214267905562]
We construct a transformer-based framework for multi-modal manipulation detection and grounding tasks.
Our framework simultaneously explores modality-specific features while preserving the capability for multi-modal alignment.
We propose an implicit manipulation query (IMQ) that adaptively aggregates global contextual cues within each modality.
arXiv Detail & Related papers (2023-09-22T06:55:41Z) - Cross Feature Selection to Eliminate Spurious Interactions and Single
Feature Dominance Explainable Boosting Machines [0.0]
Interpretability is essential for legal, ethical, and practical reasons.
High-performance models can suffer from spurious interactions with redundant features and single-feature dominance.
In this paper, we explore novel approaches to address these issues by utilizing alternate Cross-feature selection, ensemble features and model configuration alteration techniques.
arXiv Detail & Related papers (2023-07-17T13:47:41Z) - CausalAgents: A Robustness Benchmark for Motion Forecasting using Causal
Relationships [8.679073301435265]
We construct a new benchmark for evaluating and improving model robustness by applying perturbations to existing data.
We use these labels to perturb the data by deleting non-causal agents from the scene.
Under non-causal perturbations, we observe a 25%-38% relative change in minADE as compared to the original.
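The minADE metric used above is standard in motion forecasting: the minimum, over K predicted trajectories, of the mean per-timestep L2 displacement from the ground truth. A minimal sketch (array shapes and the function name are illustrative, not from the benchmark's code):

```python
import numpy as np

def min_ade(pred, gt):
    """minADE over K hypotheses.
    pred: (K, T, 2) predicted trajectories, gt: (T, 2) ground truth."""
    disp = np.linalg.norm(pred - gt[None], axis=-1)  # (K, T) per-step L2
    ade = disp.mean(axis=1)                          # (K,) average per hypothesis
    return ade.min()                                 # best hypothesis wins

# Toy example: one hypothesis matches exactly, so minADE is 0.
gt = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
pred = np.stack([gt, gt + 1.0])
err = min_ade(pred, gt)
```

A relative change in this metric under agent deletion measures how much the model's best-case forecast degrades when non-causal context is removed.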
arXiv Detail & Related papers (2022-07-07T21:28:23Z) - SafeAMC: Adversarial training for robust modulation recognition models [53.391095789289736]
In communication systems, there are many tasks, like modulation recognition, which rely on Deep Neural Networks (DNNs) models.
These models have been shown to be susceptible to adversarial perturbations, namely imperceptible additive noise crafted to induce misclassification.
We propose to use adversarial training, which consists of fine-tuning the model with adversarial perturbations, to increase the robustness of automatic modulation recognition models.
arXiv Detail & Related papers (2021-05-28T11:29:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.