Env-Aware Anomaly Detection: Ignore Style Changes, Stay True to Content!
- URL: http://arxiv.org/abs/2210.03103v1
- Date: Thu, 6 Oct 2022 17:52:33 GMT
- Title: Env-Aware Anomaly Detection: Ignore Style Changes, Stay True to Content!
- Authors: Stefan Smeu, Elena Burceanu, Andrei Liviu Nicolicioiu, Emanuela Haller
- Abstract summary: We introduce a formalization and benchmark for the unsupervised anomaly detection task in the distribution-shift scenario.
We empirically validate that environment-aware methods perform better in such cases when compared with the basic Empirical Risk Minimization (ERM).
We propose an extension for generating positive samples for contrastive methods that considers the environment labels when training, improving the baseline score by 8.7%.
- Score: 6.633524353120579
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a formalization and benchmark for the unsupervised anomaly
detection task in the distribution-shift scenario. Our work builds upon the
iWildCam dataset, and, to the best of our knowledge, we are the first to
propose such an approach for visual data. We empirically validate that
environment-aware methods perform better in such cases when compared with the
basic Empirical Risk Minimization (ERM). We next propose an extension for
generating positive samples for contrastive methods that considers the
environment labels when training, improving the ERM baseline score by 8.7%.
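The 8.7% gain comes from making positive-pair generation environment-aware. Below is a minimal sketch of one plausible reading: since training data in the one-class setting are all nominal, positives can be drawn from a *different* environment than the anchor, pushing the representation toward content rather than environment style. Every name in the snippet is illustrative, not the authors' code:

```python
import random
from collections import defaultdict

def build_env_index(env_labels):
    """Group sample indices by their environment label."""
    by_env = defaultdict(list)
    for idx, env in enumerate(env_labels):
        by_env[env].append(idx)
    return by_env

def sample_positive(anchor_idx, env_labels, by_env, rng=random):
    """Draw the contrastive positive for `anchor_idx` from a *different*
    environment, so the loss pulls together nominal views across
    environments (style changes) rather than within one environment."""
    anchor_env = env_labels[anchor_idx]
    other_envs = [e for e in by_env if e != anchor_env]
    if not other_envs:
        # Single-environment data: fall back to a standard in-env positive.
        candidates = [i for i in by_env[anchor_env] if i != anchor_idx]
        return rng.choice(candidates) if candidates else anchor_idx
    return rng.choice(by_env[rng.choice(other_envs)])

# Example: environments are camera-trap locations, as in iWildCam.
env_labels = [0, 0, 1, 1, 2]
by_env = build_env_index(env_labels)
pos = sample_positive(0, env_labels, by_env)  # index from env 1 or 2
```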
Related papers
- No Regrets: Investigating and Improving Regret Approximations for Curriculum Discovery [53.08822154199948]
Unsupervised Environment Design (UED) methods have gained recent attention as their adaptive curricula promise to enable agents to be robust to in- and out-of-distribution tasks.
This work investigates how existing UED methods select training environments, focusing on task prioritisation metrics.
We develop a method that directly trains on scenarios with high learnability.
arXiv Detail & Related papers (2024-08-27T14:31:54Z)
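A rough sketch of the learnability-driven selection in the entry above: score each level by p(1 - p), a success-rate form that vanishes for levels the agent always or never solves. Treating this as the paper's exact metric is an assumption, as are all names below:

```python
import numpy as np

def learnability(success_rates):
    """p * (1 - p): zero for always-solved (p=1) or never-solved (p=0)
    levels, maximal for levels solved about half the time."""
    p = np.asarray(success_rates, dtype=float)
    return p * (1.0 - p)

def sample_levels(success_rates, n, rng=None):
    """Sample n training levels with probability proportional to learnability."""
    rng = rng or np.random.default_rng()
    scores = learnability(success_rates)
    total = scores.sum()
    if total == 0:  # no signal yet: fall back to uniform sampling
        probs = np.full(len(scores), 1.0 / len(scores))
    else:
        probs = scores / total
    return rng.choice(len(scores), size=n, p=probs)
```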
- Is user feedback always informative? Retrieval Latent Defending for Semi-Supervised Domain Adaptation without Source Data [34.55109747972333]
This paper aims to adapt the source model to the target environment using user feedback readily available in real-world applications.
We analyze this phenomenon via a novel concept called Negatively Biased Feedback (NBF).
We propose a scalable adaptation approach, Retrieval Latent Defending.
arXiv Detail & Related papers (2024-07-22T05:15:41Z)
- Uncertainty Aware Learning for Language Model Alignment [97.36361196793929]
We propose uncertainty-aware learning (UAL) to improve model alignment across different task scenarios.
We implement UAL in a simple fashion -- adaptively setting the label-smoothing value during training according to the uncertainty of individual samples.
Experiments on widely used benchmarks demonstrate that our UAL significantly and consistently outperforms standard supervised fine-tuning.
arXiv Detail & Related papers (2024-06-07T11:37:45Z)
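Per-sample adaptive label smoothing, as described in the UAL entry above, is easy to picture. A hedged sketch (the `max_smooth` cap and the source of the uncertainty estimate are assumptions, not the paper's spec):

```python
import torch
import torch.nn.functional as F

def ual_loss(logits, targets, uncertainty, max_smooth=0.2):
    """Cross-entropy whose label-smoothing value grows per sample with the
    sample's estimated uncertainty in [0, 1]: more uncertain -> softer target."""
    eps = max_smooth * uncertainty                 # (batch,) per-sample smoothing
    num_classes = logits.size(-1)
    one_hot = F.one_hot(targets, num_classes).float()
    soft = one_hot * (1.0 - eps.unsqueeze(1)) + eps.unsqueeze(1) / num_classes
    return -(soft * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
```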
- Learning Feature Inversion for Multi-class Anomaly Detection under General-purpose COCO-AD Benchmark [101.23684938489413]
Anomaly detection (AD) has typically focused on industrial quality inspection and medical lesion examination.
This work first constructs a large-scale and general-purpose COCO-AD dataset by extending COCO to the AD field.
Inspired by the metrics in the segmentation field, we propose several more practical threshold-dependent AD-specific metrics.
arXiv Detail & Related papers (2024-04-16T17:38:26Z)
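The entry above does not spell out the proposed metrics. As a hedged illustration of what a threshold-dependent, segmentation-inspired AD metric looks like, here is a plain IoU-at-threshold score; the function name and API are mine, not the paper's:

```python
import numpy as np

def iou_at_threshold(anomaly_map, gt_mask, threshold):
    """Binarise a pixel-level anomaly map at a fixed operating threshold
    and compute IoU against the ground-truth mask, segmentation-style."""
    pred = np.asarray(anomaly_map) >= threshold
    gt = np.asarray(gt_mask).astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:          # no anomaly predicted and none present
        return 1.0
    inter = np.logical_and(pred, gt).sum()
    return inter / union
```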
- Data Pruning via Moving-one-Sample-out [61.45441981346064]
We propose a novel data-pruning approach called moving-one-sample-out (MoSo).
MoSo aims to identify and remove the least informative samples from the training set.
Experimental results demonstrate that MoSo effectively mitigates severe performance degradation at high pruning ratios.
arXiv Detail & Related papers (2023-10-23T08:00:03Z)
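A loose sketch of the intuition behind a MoSo-style score: a sample is informative when its gradient agrees with the dataset's average gradient across training checkpoints. The paper's exact estimator may differ; the shapes and names below are assumptions:

```python
import numpy as np

def moso_scores(sample_grads, lrs):
    """Score each sample by the learning-rate-weighted agreement between its
    gradient and the dataset-mean gradient, averaged over T checkpoints.
    sample_grads: (T, N, D) per-sample gradients; lrs: (T,) learning rates."""
    g = np.asarray(sample_grads, dtype=float)
    lrs = np.asarray(lrs, dtype=float)
    mean_g = g.mean(axis=1, keepdims=True)        # (T, 1, D)
    agree = (g * mean_g).sum(axis=-1)             # (T, N) inner products
    return (lrs[:, None] * agree).mean(axis=0)    # (N,) low score -> prune first
```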
- Continual Test-time Domain Adaptation via Dynamic Sample Selection [38.82346845855512]
This paper proposes a Dynamic Sample Selection (DSS) method for Continual Test-time Domain Adaptation (CTDA).
We apply joint positive and negative learning on both high- and low-quality samples to reduce the risk of using wrong information.
Our approach is also evaluated in the 3D point cloud domain, showcasing its versatility and potential for broader applicability.
arXiv Detail & Related papers (2023-10-05T06:35:21Z)
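The DSS entry above mentions joint positive and negative learning over high- and low-quality samples. One simplified, hedged reading: positive learning (standard cross-entropy on the pseudo-label) for confident predictions, and negative learning (pushing down a randomly drawn complementary class) for the rest. The confidence threshold and all names are illustrative:

```python
import torch
import torch.nn.functional as F

def joint_pn_loss(logits, conf_threshold=0.8):
    """Positive learning on confident samples, negative learning on the rest,
    limiting the damage done by wrong pseudo-labels."""
    probs = F.softmax(logits, dim=-1)
    conf, pseudo = probs.max(dim=-1)
    high = conf >= conf_threshold
    loss = logits.new_zeros(())
    if high.any():  # positive learning: CE toward the pseudo-label
        loss = loss + F.cross_entropy(logits[high], pseudo[high])
    low = ~high
    if low.any():   # negative learning: suppress a complementary class
        num_classes = logits.size(-1)
        offset = torch.randint(1, num_classes, (int(low.sum()),),
                               device=logits.device)
        comp = (pseudo[low] + offset) % num_classes  # any class but the pseudo-label
        p_comp = probs[low].gather(1, comp.unsqueeze(1)).squeeze(1)
        loss = loss - torch.log(1.0 - p_comp + 1e-8).mean()
    return loss
```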
- Okapi: Generalising Better by Making Statistical Matches Match [7.392460712829188]
Okapi is a simple, efficient, and general method for robust semi-supervised learning based on online statistical matching.
Our method uses a nearest-neighbours-based matching procedure to generate cross-domain views for a consistency loss.
We show that it is in fact possible to leverage additional unlabelled data to improve upon empirical risk minimisation.
arXiv Detail & Related papers (2022-11-07T12:41:17Z)
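Okapi's matching is an online statistical procedure; as a hedged stand-in, the sketch below substitutes plain cosine nearest neighbours over a cross-domain feature bank to show the shape of the consistency loss. All names are illustrative:

```python
import torch
import torch.nn.functional as F

def matching_consistency(feats, logits, bank_feats, bank_logits, k=1):
    """Retrieve each sample's nearest neighbours in a cross-domain feature
    bank and penalise disagreement between the paired predictions."""
    f = F.normalize(feats, dim=-1)
    b = F.normalize(bank_feats, dim=-1)
    nn_idx = (f @ b.t()).topk(k, dim=-1).indices          # (batch, k) matches
    p = F.softmax(logits, dim=-1).unsqueeze(1)            # (batch, 1, C)
    q_log = F.log_softmax(bank_logits[nn_idx], dim=-1)    # (batch, k, C)
    return F.kl_div(q_log, p.expand(-1, k, -1), reduction="batchmean")
```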
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
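Cluster-level pseudo-labelling, as in the entry above, can be pictured as: cluster the target features, then give every member of a cluster the class the source-pretrained classifier predicts most often inside it. A minimal sketch under that reading (one cluster per class is an assumption):

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_pseudo_labels(target_feats, target_probs, num_classes):
    """Assign pseudo-labels per *cluster*, not per sample: every member of a
    cluster gets the class the classifier predicts most often within that
    cluster, which denoises individual predictions."""
    km = KMeans(n_clusters=num_classes, n_init=10).fit(target_feats)
    pseudo = np.empty(len(target_feats), dtype=int)
    for c in range(num_classes):
        members = km.labels_ == c
        if members.any():
            votes = target_probs[members].sum(axis=0)  # soft majority vote
            pseudo[members] = int(votes.argmax())
    return pseudo
```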
- Repeated Environment Inference for Invariant Learning [8.372465442144046]
We focus on the notion of invariant representations, where the Bayes-optimal conditional label distribution is the same across different environments.
Previous work conducts Environment Inference (EI) by maximizing the penalty term from the Invariant Risk Minimization (IRM) framework.
We show that this method outperforms baselines on both synthetic and real-world datasets.
arXiv Detail & Related papers (2022-07-26T13:07:22Z)
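A compact sketch of one round of the Environment Inference the entry above builds on: learn a soft two-way split of the data that maximises the IRMv1 penalty of a fixed reference model; the "repeated" variant would rerun this with retrained models. Hyperparameters and names are assumptions:

```python
import torch
import torch.nn.functional as F

def infer_environments(logits, labels, steps=500, lr=0.01):
    """EIIL-style Environment Inference. `logits` are the fixed reference
    model's outputs (detach them first); we learn a soft two-way membership
    that *maximises* the IRMv1 penalty, then harden it into env labels."""
    q = torch.zeros(len(labels), requires_grad=True)  # logit of env-1 membership
    opt = torch.optim.Adam([q], lr=lr)
    for _ in range(steps):
        w = torch.ones(1, requires_grad=True)         # IRM dummy classifier scale
        losses = F.cross_entropy(logits * w, labels, reduction="none")
        p1 = torch.sigmoid(q)
        penalty = 0.0
        for m in (p1, 1.0 - p1):                      # the two soft environments
            risk = (m * losses).sum() / m.sum()
            (g,) = torch.autograd.grad(risk, w, create_graph=True)
            penalty = penalty + g.pow(2).sum()
        opt.zero_grad()
        (-penalty).backward()                         # gradient *ascent* on penalty
        opt.step()
    return (torch.sigmoid(q) > 0.5).long()            # hard environment assignment
```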
- DAGA: Data Augmentation with a Generation Approach for Low-resource Tagging Tasks [88.62288327934499]
We propose a novel augmentation method with language models trained on linearized labeled sentences.
Our method is applicable to both supervised and semi-supervised settings.
arXiv Detail & Related papers (2020-11-03T07:49:15Z)
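DAGA's linearization can be illustrated as below: tags are interleaved with the words they label, so a plain language model can be trained on, and sampled from, ordinary token streams. The exact scheme (tag-before-word) is a guess at the paper's format:

```python
def linearize(tokens, tags):
    """Interleave each non-O tag before the word it labels:
    ['John', 'lives'], ['B-PER', 'O'] -> ['B-PER', 'John', 'lives'].
    An LM trained on such streams can be sampled for new labeled data."""
    out = []
    for tok, tag in zip(tokens, tags):
        if tag != "O":
            out.append(tag)
        out.append(tok)
    return out

def delinearize(stream, tag_prefixes=("B-", "I-")):
    """Invert linearize: split a generated stream back into tokens and tags."""
    tokens, tags, pending = [], [], "O"
    for item in stream:
        if item.startswith(tag_prefixes):
            pending = item
        else:
            tokens.append(item)
            tags.append(pending)
            pending = "O"
    return tokens, tags
```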