Linear Classifiers that Encourage Constructive Adaptation
- URL: http://arxiv.org/abs/2011.00355v3
- Date: Thu, 10 Jun 2021 04:13:23 GMT
- Title: Linear Classifiers that Encourage Constructive Adaptation
- Authors: Yatong Chen, Jialu Wang, Yang Liu
- Abstract summary: We study the dynamics of prediction and adaptation as a two-stage game, and characterize optimal strategies for the model designer and its decision subjects.
In benchmarks on simulated and real-world datasets, we find that classifiers trained using our method maintain the accuracy of existing approaches while inducing higher levels of improvement and less manipulation.
- Score: 6.324366770332667
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning systems are often used in settings where individuals adapt
their features to obtain a desired outcome. In such settings, strategic
behavior leads to a sharp loss in model performance in deployment. In this
work, we aim to address this problem by learning classifiers that encourage
decision subjects to change their features in a way that leads to improvement
in both predicted *and* true outcome. We frame the dynamics of prediction
and adaptation as a two-stage game, and characterize optimal strategies for the
model designer and its decision subjects. In benchmarks on simulated and
real-world datasets, we find that classifiers trained using our method maintain
the accuracy of existing approaches while inducing higher levels of improvement
and less manipulation.
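To make the adaptation dynamic concrete, here is a minimal sketch, not the paper's formulation: a decision subject facing a published linear classifier moves to the cheapest accepted feature vector under an assumed quadratic cost, and the change is split into improvement on causally relevant features versus manipulation of the rest. The function names, cost model, budget, and choice of improvable features are all illustrative assumptions.

```python
import numpy as np

def best_response(x, w, b, budget=2.0):
    """Cheapest adaptation that gets x accepted by sign(w @ x + b), if affordable."""
    score = w @ x + b
    if score >= 0:
        return x.copy()                       # already accepted: no adaptation needed
    delta = -score / (w @ w) * w              # minimum-norm move onto the decision boundary
    cost = delta @ delta                      # assumed quadratic adaptation cost
    return x + delta if cost <= budget else x.copy()

def split_adaptation(x_old, x_new, improvable):
    """Decompose a feature change into improvement vs. manipulation components."""
    delta = x_new - x_old
    mask = np.zeros(delta.shape, dtype=bool)
    mask[list(improvable)] = True
    return delta[mask], delta[~mask]          # (improvement part, manipulation part)

# Illustrative example: two features, only feature 0 assumed causally improvable.
w, b = np.array([1.0, 1.0]), -2.0
x = np.array([0.5, 0.5])
x_adapted = best_response(x, w, b)
improvement, manipulation = split_adaptation(x, x_adapted, improvable=[0])
print(x_adapted, improvement, manipulation)   # [1. 1.] [0.5] [0.5]
```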
Related papers
- Classification under strategic adversary manipulation using pessimistic bilevel optimisation [2.6505619784178047]
Adversarial machine learning concerns situations in which learners face attacks from active adversaries.
Such scenarios arise in applications such as spam email filtering, malware detection and fake-image generation.
We model these interactions between the learner and the adversary as a game and formulate the problem as a pessimistic bilevel optimisation problem.
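For reference, a schematic pessimistic bilevel formulation of such a learner-adversary game (notation is illustrative, not taken from the paper) looks like:

```latex
% Schematic pessimistic bilevel problem: the learner picks parameters w while
% hedging against the worst element of the adversary's best-response set S(w).
\min_{w} \; \max_{x^{\ast} \in S(w)} \; \sum_{i} \ell\!\left(f_{w}(x^{\ast}_{i}),\, y_{i}\right),
\qquad
S(w) \;=\; \operatorname*{arg\,max}_{x'} \; U_{\mathrm{adv}}(x', w).
```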
arXiv Detail & Related papers (2024-10-26T22:27:21Z)
- Adjusting Pretrained Backbones for Performativity [34.390793811659556]
We propose a novel technique to adjust pretrained backbones for performativity in a modular way.
We show how it leads to smaller loss along the retraining trajectory and enables us to effectively select among candidate models to anticipate performance degradations.
arXiv Detail & Related papers (2024-10-06T14:41:13Z)
- Decoupling Decision-Making in Fraud Prevention through Classifier Calibration for Business Logic Action [1.8289218434318257]
We use calibration strategies to decouple machine learning (ML) classifiers from score-based actions within business logic frameworks.
Our findings highlight the trade-offs and performance implications of the approach.
In particular, the Isotonic and Beta calibration methods stand out in scenarios where there is a shift between training and testing data.
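As a rough illustration of the decoupling idea (the dataset, model, and threshold below are assumptions, not the paper's setup), one can calibrate a fitted classifier and let the business rule act on probabilities rather than raw scores:

```python
# Sketch only: isotonic calibration via scikit-learn; "beta calibration" is not
# part of scikit-learn and would require a separate package such as `betacal`.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.95], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Cross-validated isotonic calibration wrapped around the base classifier.
calibrated = CalibratedClassifierCV(GradientBoostingClassifier(), method="isotonic", cv=3)
calibrated.fit(X_train, y_train)

# The business-logic layer now thresholds calibrated probabilities instead of
# raw model scores, e.g. block a transaction when the fraud probability > 0.9.
block = calibrated.predict_proba(X_test)[:, 1] > 0.9
```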
arXiv Detail & Related papers (2024-01-10T16:13:21Z)
- Gradient constrained sharpness-aware prompt learning for vision-language models [99.74832984957025]
This paper targets a novel trade-off problem in generalizable prompt learning for vision-language models (VLMs).
By analyzing the loss landscapes of the state-of-the-art method and a vanilla Sharpness-aware Minimization (SAM) based method, we conclude that the trade-off performance correlates with both loss value and loss sharpness.
We propose a novel SAM-based method for prompt learning, denoted Gradient Constrained Sharpness-aware Context Optimization (GCSCoOp).
arXiv Detail & Related papers (2023-09-14T17:13:54Z)
- Learning Objective-Specific Active Learning Strategies with Attentive Neural Processes [72.75421975804132]
Learning Active Learning (LAL) suggests learning the active learning strategy itself, allowing it to adapt to the given setting.
We propose a novel LAL method for classification that exploits symmetry and independence properties of the active learning problem.
Our approach is based on learning from a myopic oracle, which gives our model the ability to adapt to non-standard objectives.
arXiv Detail & Related papers (2023-09-11T14:16:37Z)
- Towards Compute-Optimal Transfer Learning [82.88829463290041]
We argue that zero-shot structured pruning of pretrained models improves their compute efficiency with minimal reduction in performance.
Our results show that pruning convolutional filters of pretrained models can lead to more than 20% performance improvement in low computational regimes.
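A rough sketch of that idea, assuming a torchvision ResNet-18 backbone and simple L2 magnitude pruning (the paper's zero-shot criterion may differ):

```python
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT)   # pretrained backbone

# Zero out 30% of the output filters (dim=0) of every conv layer by L2 norm,
# without any retraining ("zero-shot" structured pruning).
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.ln_structured(module, name="weight", amount=0.3, n=2, dim=0)
        prune.remove(module, "weight")               # bake the mask into the weights

# Note: this only masks filters; realizing the compute savings in practice
# requires physically slicing the pruned channels out of each layer.
```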
arXiv Detail & Related papers (2023-04-25T21:49:09Z)
- Causal Strategic Classification: A Tale of Two Shifts [11.929584800629675]
We show how strategic behavior and causal effects underlie two complementary forms of distribution shift.
We propose a learning algorithm that balances these two forces over time and permits end-to-end training.
arXiv Detail & Related papers (2023-02-13T11:35:59Z)
- Adaptive Fine-Grained Predicates Learning for Scene Graph Generation [122.4588401267544]
General Scene Graph Generation (SGG) models tend to predict head predicates, while re-balancing strategies favor tail categories.
We propose Adaptive Fine-Grained Predicates Learning (FGPL-A), which aims to differentiate hard-to-distinguish predicates for SGG.
Our proposed model-agnostic strategy significantly boosts performance of benchmark models on VG-SGG and GQA-SGG datasets by up to 175% and 76% on Mean Recall@100, achieving new state-of-the-art performance.
arXiv Detail & Related papers (2022-07-11T03:37:57Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- Generative Adversarial Reward Learning for Generalized Behavior Tendency Inference [71.11416263370823]
We propose a generative inverse reinforcement learning approach for user behavioral preference modelling.
Our model can automatically learn rewards from users' actions based on a discriminative actor-critic network and a Wasserstein GAN.
arXiv Detail & Related papers (2021-05-03T13:14:25Z)
- Adaptive Sampling for Minimax Fair Classification [40.936345085421955]
We propose an adaptive sampling algorithm based on the principle of optimism, and derive theoretical bounds on its performance.
By deriving algorithm-independent lower bounds for a specific class of problems, we show that the performance achieved by our adaptive scheme cannot be improved upon in general.
arXiv Detail & Related papers (2021-03-01T04:58:27Z)
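As a schematic illustration of optimism-driven sampling for minimax-fair classification (the synthetic groups, logistic model, and UCB-style bonus below are assumptions, not the paper's algorithm): at each round, the learner queries a label from the group whose estimated risk plus an exploration bonus is largest, then refits on everything seen.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic group: label is 1 when x0 + x1 exceeds the group's own mean."""
    X = rng.normal(shift, 1.0, size=(n, 2))
    return X, (X[:, 0] + X[:, 1] > 2 * shift).astype(int)

groups = [make_group(2000, 0.0), make_group(2000, 1.5)]

# Seed with a few labeled points per group so the model can be fit.
X_seen, y_seen, g_seen = [], [], []
for g, (X, y) in enumerate(groups):
    idx = rng.choice(len(X), size=10, replace=False)
    X_seen += list(X[idx]); y_seen += list(y[idx]); g_seen += [g] * 10

clf = LogisticRegression()
for t in range(200):
    clf.fit(np.array(X_seen), np.array(y_seen))
    scores = []
    for g in range(len(groups)):
        mask = np.array(g_seen) == g
        err = 1.0 - clf.score(np.array(X_seen)[mask], np.array(y_seen)[mask])
        scores.append(err + np.sqrt(np.log(t + 2) / mask.sum()))   # optimistic risk estimate
    g_next = int(np.argmax(scores))              # query the (optimistically) worst-off group
    i = rng.integers(len(groups[g_next][0]))
    X_seen.append(groups[g_next][0][i]); y_seen.append(groups[g_next][1][i]); g_seen.append(g_next)

print([round(1 - clf.score(*grp), 3) for grp in groups])   # per-group error after sampling
```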
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.