Cross-Domain Video Anomaly Detection without Target Domain Adaptation
- URL: http://arxiv.org/abs/2212.07010v1
- Date: Wed, 14 Dec 2022 03:48:00 GMT
- Title: Cross-Domain Video Anomaly Detection without Target Domain Adaptation
- Authors: Abhishek Aich, Kuan-Chuan Peng, Amit K. Roy-Chowdhury
- Abstract summary: Most cross-domain unsupervised Video Anomaly Detection (VAD) works assume that at least a few task-relevant target domain training data are available for adaptation from the source to the target domain.
This requires laborious model-tuning by the end-user, who may prefer a system that works out-of-the-box.
- Score: 38.823721272155616
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most cross-domain unsupervised Video Anomaly Detection (VAD) works assume
that at least a few task-relevant target domain training data are available for
adaptation from the source to the target domain. However, this requires
laborious model-tuning by the end-user, who may prefer to have a system that
works ``out-of-the-box.'' To address such practical scenarios, we identify a
novel target domain (inference-time) VAD task where no target domain training
data are available. To this end, we propose a new `Zero-shot Cross-domain Video
Anomaly Detection (zxvad)' framework that includes a future-frame prediction
generative model setup. Different from prior future-frame prediction models,
our model uses a novel Normalcy Classifier module to learn the features of
normal event videos by learning how such features are different ``relatively''
to features in pseudo-abnormal examples. A novel Untrained Convolutional Neural
Network based Anomaly Synthesis module crafts these pseudo-abnormal examples by
adding foreign objects in normal video frames with no extra training cost. With
our novel relative normalcy feature learning strategy, zxvad generalizes and
learns to distinguish between normal and abnormal frames in a new target domain
without adaptation during inference. Through evaluations on common datasets, we
show that zxvad outperforms the state-of-the-art (SOTA), regardless of whether
task-relevant (i.e., VAD) source training data are available or not. Lastly,
zxvad also beats the SOTA methods in inference-time efficiency metrics
including the model size, total parameters, GPU energy consumption, and GMACs.
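The abstract's Untrained CNN based Anomaly Synthesis idea can be illustrated with a minimal NumPy sketch: a foreign patch is passed through a single random, never-trained convolution kernel and pasted into a normal frame. This is not the authors' implementation; the single-kernel form, the patch placement, and all sizes and names are assumptions made for illustration.

```python
import numpy as np

def untrained_conv(patch, rng, k=3):
    """Filter a patch with one random, never-trained convolution kernel.

    Stand-in for the paper's untrained CNN: the weights are drawn at
    random and used as-is, so synthesis adds no training cost."""
    kernel = rng.standard_normal((k, k))
    pad = np.pad(patch, k // 2, mode="edge")   # same-size output via edge padding
    out = np.empty_like(patch, dtype=float)
    for i in range(patch.shape[0]):
        for j in range(patch.shape[1]):
            out[i, j] = np.sum(pad[i:i + k, j:j + k] * kernel)
    return out

def make_pseudo_abnormal(frame, foreign_patch, top_left, rng):
    """Paste the filtered foreign patch into a normal frame to create
    a pseudo-abnormal training example."""
    y, x = top_left
    h, w = foreign_patch.shape
    abnormal = frame.astype(float)
    abnormal[y:y + h, x:x + w] = untrained_conv(foreign_patch, rng)
    return abnormal

rng = np.random.default_rng(0)
frame = np.zeros((8, 8))                       # toy "normal" frame
patch = rng.uniform(size=(3, 3))               # toy "foreign object"
pseudo = make_pseudo_abnormal(frame, patch, (2, 2), rng)
```

Because the kernel is never optimized, generating such pseudo-abnormal examples costs only a forward pass, which is what lets the framework avoid any extra training for the synthesis step.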
Related papers
- NuwaTS: a Foundation Model Mending Every Incomplete Time Series [24.768755438620666]
We present NuwaTS, a novel framework that repurposes Pre-trained Language Models for general time series imputation.
NuwaTS can be applied to impute missing data across any domain.
We show that NuwaTS generalizes to other time series tasks, such as forecasting.
arXiv Detail & Related papers (2024-05-24T07:59:02Z)
- Activate and Reject: Towards Safe Domain Generalization under Category Shift [71.95548187205736]
We study a practical problem of Domain Generalization under Category Shift (DGCS)
It aims to simultaneously detect unknown-class samples and classify known-class samples in the target domains.
Compared to prior DG works, we face two new challenges: 1) how to learn the concept of ``unknown'' during training with only source known-class samples, and 2) how to adapt the source-trained model to unseen environments.
arXiv Detail & Related papers (2023-10-07T07:53:12Z)
- GAN-based Domain Inference Attack [3.731168012111833]
We propose a generative adversarial network (GAN) based method to explore likely or similar domains of a target model.
We find that the target model may distract the training procedure less if the domain is more similar to the target domain.
Our experiments show that the auxiliary dataset from an MDI top-ranked domain can effectively boost the result of model-inversion attacks.
arXiv Detail & Related papers (2022-12-22T15:40:53Z)
- Towards Online Domain Adaptive Object Detection [79.89082006155135]
Existing object detection models assume both the training and test data are sampled from the same source domain.
We propose a novel unified adaptation framework that adapts and improves generalization on the target domain in online settings.
arXiv Detail & Related papers (2022-04-11T17:47:22Z)
- DANNTe: a case study of a turbo-machinery sensor virtualization under domain shift [0.0]
We propose an adversarial learning method to tackle a Domain Adaptation (DA) time series regression task (DANNTe)
The regression aims at building a virtual copy of a sensor installed on a gas turbine, to be used in place of the physical sensor which can be missing in certain situations.
We report a significant improvement in regression performance, compared to the baseline model trained on the source domain only.
arXiv Detail & Related papers (2022-01-11T09:24:33Z)
- On Universal Black-Box Domain Adaptation [53.7611757926922]
We study an arguably least restrictive setting of domain adaptation in a sense of practical deployment.
Only the interface of source model is available to the target domain, and where the label-space relations between the two domains are allowed to be different and unknown.
We propose to unify them into a self-training framework, regularized by consistency of predictions in local neighborhoods of target samples.
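The neighborhood-consistency regularizer described above can be sketched in a toy form: each target sample's predicted distribution is pulled toward the average prediction of its nearest neighbors in feature space. The k-NN formulation, the 0.5 mixing weight, and all names here are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def neighborhood_consistency(features, probs, k=3):
    """Smooth per-sample class probabilities toward those of the
    k nearest neighbors in feature space (toy consistency step)."""
    # pairwise squared distances between target-sample features
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)            # exclude each sample itself
    smoothed = np.empty_like(probs)
    for i in range(len(features)):
        nn = np.argsort(d2[i])[:k]          # indices of k nearest neighbors
        smoothed[i] = 0.5 * probs[i] + 0.5 * probs[nn].mean(axis=0)
    return smoothed

rng = np.random.default_rng(2)
# two well-separated clusters of target samples
feats = np.vstack([rng.normal(0.0, 0.1, (5, 2)),
                   rng.normal(5.0, 0.1, (5, 2))])
probs = np.tile(np.array([0.9, 0.1]), (10, 1))
probs[5:] = [0.1, 0.9]
smoothed = neighborhood_consistency(feats, probs, k=3)
```

Since each smoothed row is a convex combination of probability distributions, it remains a valid distribution, and samples whose neighbors agree with them are left essentially unchanged.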
arXiv Detail & Related papers (2021-04-10T02:21:09Z)
- Unsupervised and self-adaptative techniques for cross-domain person re-identification [82.54691433502335]
Person Re-Identification (ReID) across non-overlapping cameras is a challenging task.
Unsupervised Domain Adaptation (UDA) is a promising alternative, as it performs feature-learning adaptation from a model trained on a source to a target domain without identity-label annotation.
In this paper, we propose a novel UDA-based ReID method that takes advantage of triplets of samples created by a new offline strategy.
arXiv Detail & Related papers (2021-03-21T23:58:39Z)
- Unsupervised BatchNorm Adaptation (UBNA): A Domain Adaptation Method for Semantic Segmentation Without Using Source Domain Representations [35.586031601299034]
Unsupervised BatchNorm Adaptation (UBNA) adapts a given pre-trained model to an unseen target domain.
We partially adapt the normalization layer statistics to the target domain using an exponentially decaying momentum factor.
Compared to standard UDA approaches we report a trade-off between performance and usage of source domain representations.
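The partial statistic adaptation described above can be sketched with NumPy: BatchNorm running statistics are blended toward target-domain batch statistics under an exponentially decaying momentum, using no source data. The exact schedule, parameter values, and names are assumptions for illustration, not UBNA's actual settings.

```python
import numpy as np

def ubna_adapt(running_mean, running_var, target_batches, eta0=0.1, decay=0.5):
    """Blend per-channel BatchNorm running statistics toward
    target-domain batch statistics with decaying momentum."""
    mean, var = running_mean.copy(), running_var.copy()
    for step, batch in enumerate(target_batches):   # batch shape: (N, C)
        eta = eta0 * decay ** step                  # exponentially decaying momentum
        mean = (1 - eta) * mean + eta * batch.mean(axis=0)
        var = (1 - eta) * var + eta * batch.var(axis=0)
    return mean, var

rng = np.random.default_rng(1)
src_mean, src_var = np.zeros(4), np.ones(4)         # source-trained statistics
# toy target domain: features centered at 5 with larger variance
batches = [rng.normal(5.0, 2.0, size=(32, 4)) for _ in range(10)]
new_mean, new_var = ubna_adapt(src_mean, src_var, batches)
```

The decaying momentum is what makes the adaptation partial: the statistics move toward the target domain early on but never fully discard the source-trained values, which matches the reported trade-off between performance and reliance on source representations.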
arXiv Detail & Related papers (2020-11-17T08:37:40Z)
- Domain Adaptation Using Class Similarity for Robust Speech Recognition [24.951852740214413]
This paper proposes a novel adaptation method for deep neural network (DNN) acoustic model using class similarity.
Experiments showed that our approach outperforms fine-tuning using one-hot labels on both accent and noise adaptation task.
arXiv Detail & Related papers (2020-11-05T12:26:43Z) - Do We Really Need to Access the Source Data? Source Hypothesis Transfer
for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised Domain Adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require to access the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available and how we can effectively utilize such a model without source data to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.