t-RAIN: Robust generalization under weather-aliasing label shift attacks
- URL: http://arxiv.org/abs/2305.08302v1
- Date: Mon, 15 May 2023 02:05:56 GMT
- Title: t-RAIN: Robust generalization under weather-aliasing label shift attacks
- Authors: Aboli Marathe, Sanjana Prabhu
- Abstract summary: We analyze the impact of label shift on the task of multi-weather classification for autonomous vehicles.
We propose t-RAIN, a similarity mapping technique for synthetic data augmentation using large scale generative models.
We present state-of-the-art pedestrian detection results on real and synthetic weather domains, with best results of 82.69 AP (snow) and 62.31 AP (fog), respectively.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In classical supervised learning settings, classifiers are fit under the assumption of balanced label distributions and produce remarkable results in that regime. In the real world, however, these assumptions often break down and in turn adversely impact model performance. Identifying bad learners under skewed target distributions is even more challenging, which makes achieving model robustness under these "label shift" settings an important task in autonomous perception. In this paper, we analyze the impact of label shift on the task of multi-weather classification for autonomous vehicles. We use this information as a prior to better assess pedestrian detection in adverse weather. We model classification performance as an indicator of robustness under 4 label shift scenarios and study the behavior of multiple classes of models. We propose t-RAIN, a similarity mapping technique for synthetic data augmentation using large scale generative models, and evaluate its performance on the DAWN dataset. This mapping boosts model test accuracy by 2.1%, 4.4%, 1.9%, and 2.7% under no-shift, fog, snow, and dust shifts, respectively. We present state-of-the-art pedestrian detection results on real and synthetic weather domains, with best results of 82.69 AP (snow) and 62.31 AP (fog), respectively.
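The label shift setting the abstract describes can be illustrated with a toy calculation: the class-conditional behavior of a fixed classifier stays the same while only the test-time label prior changes. The class names, per-class accuracies, and shift ratios below are assumptions for illustration, not the paper's DAWN protocol or results.

```python
# Minimal sketch of evaluating a fixed classifier under simulated label
# shift: p(x|y) is held fixed, only the label prior p(y) changes.
# All numbers below are illustrative assumptions, not the paper's results.
classes = ["clear", "fog", "snow", "dust"]  # hypothetical weather classes

# Per-class accuracy of some already-trained classifier (assumed numbers).
per_class_acc = {"clear": 0.95, "fog": 0.78, "snow": 0.84, "dust": 0.71}

def accuracy_under_prior(prior):
    """Overall accuracy when the test label distribution is `prior`."""
    return sum(prior[c] * per_class_acc[c] for c in classes)

balanced = {c: 0.25 for c in classes}                              # no shift
fog_shift = {"clear": 0.10, "fog": 0.70, "snow": 0.10, "dust": 0.10}

print(f"balanced prior : {accuracy_under_prior(balanced):.3f}")
print(f"fog-heavy prior: {accuracy_under_prior(fog_shift):.3f}")
```

Concentrating the prior on a hard class (fog here) lowers overall accuracy even though the classifier itself is unchanged, which is why the paper treats classification performance under shifted priors as a robustness indicator.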
Related papers
- Drift-Resilient TabPFN: In-Context Learning Temporal Distribution Shifts on Tabular Data [39.40116554523575]
We present Drift-Resilient TabPFN, a fresh approach based on In-Context Learning with a Prior-Data Fitted Network.
It learns to approximate Bayesian inference on synthetic datasets drawn from a prior.
It improves accuracy from 0.688 to 0.744 and ROC AUC from 0.786 to 0.832 while maintaining strong calibration.
arXiv Detail & Related papers (2024-11-15T23:49:23Z)
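As a usage note, the base TabPFN model is available as a pip package with an sklearn-style interface; a minimal sketch follows, assuming `tabpfn` is installed (`pip install tabpfn`). This shows plain TabPFN, not the Drift-Resilient variant above, and API details may differ between package versions.

```python
# Sketch of the sklearn-style interface of the `tabpfn` package (base
# TabPFN, not the Drift-Resilient variant); assumes `pip install tabpfn`.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from tabpfn import TabPFNClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = TabPFNClassifier()   # prior-data fitted network; no gradient training
clf.fit(X_tr, y_tr)        # stores the context set for in-context learning
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```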
- Prediction Accuracy & Reliability: Classification and Object Localization under Distribution Shift
This study investigates the effect of natural distribution shift and weather augmentations on both detection quality and confidence estimation.
A novel dataset has been curated from publicly available autonomous driving datasets.
A granular analysis of CNNs under distribution shift makes it possible to quantify the impact of different types of shift on both task performance and confidence estimation.
arXiv Detail & Related papers (2024-09-05T14:06:56Z)
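Weather augmentations of the kind studied above can be approximated very cheaply. Below is a toy fog-style augmentation, assuming float images in [0, 1]; it is illustrative only and not the augmentation pipeline used in the study.

```python
# Toy fog-style augmentation (not the study's weather model): blend the
# image toward a bright haze layer, which also reduces contrast.
import numpy as np

def add_fog(image: np.ndarray, strength: float = 0.5) -> np.ndarray:
    """image: float32 array in [0, 1] of shape (H, W, 3); strength in [0, 1]."""
    haze = np.full_like(image, 0.9)                  # near-white haze layer
    foggy = (1.0 - strength) * image + strength * haze
    return foggy.clip(0.0, 1.0)

frame = np.random.rand(64, 64, 3).astype(np.float32)  # stand-in for a frame
print(add_fog(frame, 0.6).mean() > frame.mean())       # fog brightens the frame
```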
- Self-Training with Pseudo-Label Scorer for Aspect Sentiment Quad Prediction [54.23208041792073]
Aspect Sentiment Quad Prediction (ASQP) aims to predict all quads (aspect term, aspect category, opinion term, sentiment polarity) for a given review.
A key challenge in the ASQP task is the scarcity of labeled data, which limits the performance of existing methods.
We propose a self-training framework with a pseudo-label scorer, wherein a scorer assesses the match between reviews and their pseudo-labels.
arXiv Detail & Related papers (2024-06-26T05:30:21Z)
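The scorer-filtered self-training loop above can be sketched generically. The `model` and `scorer` objects and the threshold below are hypothetical; the paper's actual ASQP pipeline is more involved.

```python
# Generic sketch of scorer-filtered self-training (not the paper's exact
# ASQP pipeline): a scorer rates each (input, pseudo-label) pair and only
# high-scoring pairs are kept as extra training data. `model`, `scorer`,
# and the 0.8 threshold are assumptions for illustration.
def select_pseudo_labels(model, scorer, unlabeled, threshold=0.8):
    selected = []
    for x in unlabeled:
        pseudo = model.predict(x)              # candidate pseudo-label
        score = scorer.match_score(x, pseudo)  # how well the label fits x
        if score >= threshold:
            selected.append((x, pseudo))       # keep only confident pairs
    return selected
```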
- FACTUAL: A Novel Framework for Contrastive Learning Based Robust SAR Image Classification [10.911464455072391]
FACTUAL is a Contrastive Learning framework for Adversarial Training and robust SAR classification.
Our model achieves 99.7% accuracy on clean samples, and 89.6% on perturbed samples, both outperforming previous state-of-the-art methods.
arXiv Detail & Related papers (2024-04-04T06:20:22Z)
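The adversarial-training ingredient above can be illustrated with a generic single-step FGSM attack in PyTorch. This is a standard baseline technique, not FACTUAL's contrastive framework itself.

```python
# Generic FGSM adversarial-example step in PyTorch; this illustrates the
# adversarial-training ingredient only, not FACTUAL's full framework.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=8 / 255):
    """Return x perturbed one signed-gradient step up the loss surface."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()   # move in the loss-increasing direction
    return x_adv.clamp(0.0, 1.0).detach() # keep pixels in a valid range
```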
- Robustness Benchmark of Road User Trajectory Prediction Models for Automated Driving [0.0]
We benchmark machine learning models against perturbations that simulate functional insufficiencies observed during model deployment in a vehicle.
These perturbations degrade model performance, with error increases of up to +87.5%; training the models with similar perturbations effectively reduces this degradation.
We argue that despite being an effective mitigation strategy, data augmentation through perturbations during training does not guarantee robustness towards unforeseen perturbations.
arXiv Detail & Related papers (2023-04-04T15:47:42Z)
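A sketch of what deployment-style input perturbations might look like for trajectory histories, with hypothetical noise and dropout magnitudes; the benchmark's actual perturbation suite is defined in the paper.

```python
# Hypothetical trajectory perturbations in the spirit of the benchmark:
# jitter observed positions and drop random history points to mimic sensor
# noise and missed detections. Magnitudes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def perturb_history(track: np.ndarray, noise_std=0.2, drop_prob=0.1):
    """track: (T, 2) array of past (x, y) positions in meters."""
    noisy = track + rng.normal(0.0, noise_std, size=track.shape)
    keep = rng.random(len(noisy)) > drop_prob
    keep[-1] = True              # always keep the most recent observation
    return noisy[keep]
```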
- CausalAgents: A Robustness Benchmark for Motion Forecasting using Causal Relationships [8.679073301435265]
We construct a new benchmark for evaluating and improving model robustness by applying perturbations to existing data.
We collect per-agent causality labels and use them to perturb the data by deleting non-causal agents from the scene.
Under non-causal perturbations, we observe a 25-38% relative change in minADE compared to the original data.
arXiv Detail & Related papers (2022-07-07T21:28:23Z)
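For reference, minADE (the metric quoted above) is the average displacement error of the best of K predicted trajectories against the ground truth; a minimal implementation on toy data:

```python
# minADE as used in motion-forecasting benchmarks: take the candidate
# trajectory with the lowest average displacement error to the ground truth.
import numpy as np

def min_ade(preds: np.ndarray, gt: np.ndarray) -> float:
    """preds: (K, T, 2) candidate trajectories; gt: (T, 2) ground truth."""
    ade_per_mode = np.linalg.norm(preds - gt, axis=-1).mean(axis=-1)  # (K,)
    return float(ade_per_mode.min())

preds = np.random.rand(6, 30, 2)   # K=6 modes, T=30 future steps (toy data)
gt = np.random.rand(30, 2)
print("minADE:", min_ade(preds, gt))
```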
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence from labeled source data and predicts target accuracy as the fraction of unlabeled target examples whose confidence exceeds that threshold.
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
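ATC is simple enough to sketch end to end: calibrate a confidence threshold on labeled source data so that the fraction of source points above it matches source accuracy, then report the fraction of unlabeled target points above that threshold. The toy confidence distributions below are assumptions for illustration.

```python
# Sketch of Average Thresholded Confidence (ATC). The threshold t is chosen
# on labeled source data so that mean(src_conf > t) equals source accuracy;
# target accuracy is then estimated as mean(tgt_conf > t).
import numpy as np

def atc_predict_accuracy(src_conf, src_correct, tgt_conf):
    """src_conf/tgt_conf: max-softmax confidences; src_correct: 0/1 array."""
    src_acc = src_correct.mean()
    t = np.quantile(src_conf, 1.0 - src_acc)  # fraction above t == src_acc
    return (tgt_conf > t).mean()

rng = np.random.default_rng(0)
src_conf = rng.beta(5, 2, size=5000)          # toy source confidences
src_correct = (rng.random(5000) < src_conf).astype(float)
tgt_conf = rng.beta(4, 3, size=5000)          # shifted target confidences
print("predicted target accuracy:",
      atc_predict_accuracy(src_conf, src_correct, tgt_conf))
```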
- Tracking the risk of a deployed model and detecting harmful distribution shifts [105.27463615756733]
In practice, it may make sense to ignore benign shifts, under which the performance of a deployed model does not degrade substantially.
We argue that a sensible method for firing off a warning has to both (a) detect harmful shifts while ignoring benign ones, and (b) allow continuous monitoring of model performance without increasing the false alarm rate.
arXiv Detail & Related papers (2021-10-12T17:21:41Z)
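A naive rolling-window monitor sketches requirement (a) above. Unlike the paper's sequential tests, it offers no control of the false alarm rate, and the window size, tolerance, and feedback source are all assumptions.

```python
# Naive rolling monitor: warn only when the windowed error rate degrades
# well past a baseline (requirement (a) above). This sketch does NOT give
# the false-alarm-rate guarantee of requirement (b); parameters are assumed.
from collections import deque

class DriftMonitor:
    def __init__(self, baseline_error: float, tolerance: float = 0.05,
                 window: int = 500):
        self.threshold = baseline_error + tolerance  # ignore benign wiggle
        self.errors = deque(maxlen=window)

    def update(self, was_error: bool) -> bool:
        """Record one labeled outcome; return True to fire a warning."""
        self.errors.append(float(was_error))
        full = len(self.errors) == self.errors.maxlen
        return full and sum(self.errors) / len(self.errors) > self.threshold
```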
- X-model: Improving Data Efficiency in Deep Learning with A Minimax Model [78.55482897452417]
We aim at improving data efficiency for both classification and regression setups in deep learning.
To combine the best of both worlds, we propose a novel X-model.
X-model plays a minimax game between the feature extractor and task-specific heads.
arXiv Detail & Related papers (2021-10-09T13:56:48Z)
- AutoAssign: Differentiable Label Assignment for Dense Object Detection [94.24431503373884]
AutoAssign is an anchor-free detector for object detection.
It achieves appearance-aware label assignment through a fully differentiable weighting mechanism.
Our best model achieves 52.1% AP, outperforming all existing one-stage detectors.
arXiv Detail & Related papers (2020-07-07T14:32:21Z)
- Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift [81.74795324629712]
We evaluate prediction-time batch normalization, which significantly improves model accuracy and calibration under covariate shift.
We show that prediction-time batch normalization provides complementary benefits to existing state-of-the-art approaches for improving robustness.
The method has mixed results when used alongside pre-training, and does not seem to perform as well under more natural types of dataset shift.
arXiv Detail & Related papers (2020-06-19T05:08:43Z)
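In PyTorch, prediction-time batch normalization can be sketched by switching only the BatchNorm layers back to training mode at test time, so they normalize with the statistics of the current (possibly shifted) test batch rather than the running averages from training. A minimal sketch, assuming reasonably large test batches:

```python
# Sketch of prediction-time batch normalization in PyTorch: keep the model
# in eval mode except for BatchNorm layers, which use the statistics of the
# incoming test batch instead of the running training-set averages.
import torch
import torch.nn as nn

def enable_prediction_time_bn(model: nn.Module) -> nn.Module:
    model.eval()                                   # everything else in eval
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.train()                              # normalize with batch stats
    return model

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
enable_prediction_time_bn(model)
with torch.no_grad():
    out = model(torch.randn(32, 3, 64, 64))  # stats come from this test batch
```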