Open-World Pose Transfer via Sequential Test-Time Adaption
- URL: http://arxiv.org/abs/2303.10945v1
- Date: Mon, 20 Mar 2023 09:01:23 GMT
- Title: Open-World Pose Transfer via Sequential Test-Time Adaption
- Authors: Junyang Chen, Xiaoyu Xian, Zhijing Yang, Tianshui Chen, Yongyi Lu,
Yukai Shi, Jinshan Pan, Liang Lin
- Abstract summary: A typical pose transfer framework usually employs representative datasets to train a discriminative model.
Test-time adaptation (TTA) offers a feasible solution for OOD data by using a pre-trained model that learns essential features with self-supervision.
In our experiments, we show for the first time that pose transfer can be applied to open-world applications, including TikTok reenactment and celebrity motion synthesis.
- Score: 92.67291699304992
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pose transfer, which aims to transfer a given person into a specified posture, has
recently attracted considerable attention. A typical pose transfer framework
usually employs representative datasets to train a discriminative model, an
assumption that is often violated by out-of-distribution (OOD) instances.
Recently, test-time adaptation (TTA) has offered a feasible solution for OOD
data by using a pre-trained model that learns essential features with
self-supervision. However, these methods implicitly assume that all test
distributions share a unified signal that can be learned directly. In
open-world conditions, the pose transfer task involves several independent
signals, namely OOD appearance and skeleton, which need to be extracted and
handled separately. To address this, we develop SEquential Test-time Adaption
(SETA). In the test-time phase, SETA extracts and distributes external
appearance texture by augmenting OOD data for self-supervised training. To
make the non-Euclidean similarity among different postures explicit, SETA uses
image representations derived from a person re-identification (Re-ID) model
for similarity computation. By sequentially addressing implicit posture
representations at test time, SETA greatly improves the generalization
performance of current pose transfer models. In our experiments, we show for
the first time that pose transfer can be applied to open-world applications,
including TikTok reenactment and celebrity motion synthesis.
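The two-stage test-time procedure described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch-style adaptation step, assuming a generator that takes an image and a pose map, a frozen Re-ID encoder used only as a feature extractor, and an `augment_appearance` helper; the interfaces and equal loss weighting are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def sequential_tta_step(generator, reid_encoder, optimizer,
                        src_img, src_pose, tgt_pose, augment_appearance):
    """One illustrative sequential test-time adaptation step (not official SETA code).

    Stage 1: adapt to the OOD appearance with a self-supervised reconstruction
             objective on an augmented copy of the test image.
    Stage 2: refine the transfer with a similarity loss computed in the feature
             space of a frozen person re-identification (Re-ID) encoder.
    """
    generator.train()
    reid_encoder.eval()  # the Re-ID model only supplies features

    # --- Stage 1: appearance adaptation on augmented OOD data ---
    aug_img = augment_appearance(src_img)   # hypothetical appearance augmentation
    recon = generator(aug_img, src_pose)    # reconstruct the source view
    loss_app = F.l1_loss(recon, src_img)

    # --- Stage 2: refinement via Re-ID feature similarity ---
    generated = generator(src_img, tgt_pose)
    with torch.no_grad():
        ref_feat = F.normalize(reid_encoder(src_img), dim=-1)
    gen_feat = F.normalize(reid_encoder(generated), dim=-1)
    loss_sim = 1.0 - (gen_feat * ref_feat).sum(dim=-1).mean()

    loss = loss_app + loss_sim              # equal weighting, assumed
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a setup like this, a few such steps would be run per test instance before generating the final result with the adapted generator.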
Related papers
- Exploring Stronger Transformer Representation Learning for Occluded Person Re-Identification [2.552131151698595]
We propose SSSC-TransReID, a novel transformer-based person re-identification framework that combines self-supervision and supervision.
We design a self-supervised contrastive learning branch that enhances the feature representation for person re-identification without negative samples or additional pre-training.
Our model consistently obtains superior Re-ID performance and outperforms state-of-the-art Re-ID methods by large margins in mean average precision (mAP) and Rank-1 accuracy.
arXiv Detail & Related papers (2024-10-21T03:17:25Z)
- Time-series Generation by Contrastive Imitation [87.51882102248395]
We study a generative framework that seeks to combine the strengths of both: Motivated by a moment-matching objective to mitigate compounding error, we optimize a local (but forward-looking) transition policy.
At inference, the learned policy serves as the generator for iterative sampling, and the learned energy serves as a trajectory-level measure for evaluating sample quality.
arXiv Detail & Related papers (2023-11-02T16:45:25Z)
- From Global to Local: Multi-scale Out-of-distribution Detection [129.37607313927458]
Out-of-distribution (OOD) detection aims to detect "unknown" data whose labels have not been seen during the in-distribution (ID) training process.
Recent progress in representation learning gives rise to distance-based OOD detection (a generic scoring example is sketched after this list).
We propose Multi-scale OOD DEtection (MODE), the first framework leveraging both global visual information and local region details.
arXiv Detail & Related papers (2023-08-20T11:56:25Z)
- DIVERSIFY: A General Framework for Time Series Out-of-distribution Detection and Generalization [58.704753031608625]
Time series is one of the most challenging modalities in machine learning research.
OOD detection and generalization on time series tend to suffer due to the non-stationary nature of such data.
We propose DIVERSIFY, a framework for OOD detection and generalization on dynamic distributions of time series.
arXiv Detail & Related papers (2023-08-04T12:27:11Z)
- Learning by Erasing: Conditional Entropy based Transferable Out-Of-Distribution Detection [17.31471594748061]
Out-of-distribution (OOD) detection is essential to handle the distribution shifts between training and test scenarios.
Existing methods require retraining to capture the dataset-specific feature representation or data distribution.
We propose a deep generative model (DGM) based transferable OOD detection method, which does not need to be retrained on a new ID dataset.
arXiv Detail & Related papers (2022-04-23T10:19:58Z)
- Listen, Adapt, Better WER: Source-free Single-utterance Test-time Adaptation for Automatic Speech Recognition [65.84978547406753]
Test-time Adaptation aims to adapt the model trained on source domains to yield better predictions for test samples.
Single-Utterance Test-time Adaptation (SUTA) is, to the best of our knowledge, the first TTA study in the speech area.
arXiv Detail & Related papers (2022-03-27T06:38:39Z)
- OODformer: Out-Of-Distribution Detection Transformer [15.17006322500865]
In real-world safety-critical applications, it is important to know whether a new data point is OOD.
This paper proposes a first-of-its-kind OOD detection architecture named OODformer.
arXiv Detail & Related papers (2021-07-19T15:46:38Z)
- MUTANT: A Training Paradigm for Out-of-Distribution Generalization in Visual Question Answering [58.30291671877342]
We present MUTANT, a training paradigm that exposes the model to perceptually similar, yet semantically distinct mutations of the input.
MUTANT establishes a new state-of-the-art accuracy on VQA-CP with a 10.57% improvement.
arXiv Detail & Related papers (2020-09-18T00:22:54Z)
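As a generic illustration of the distance-based OOD detection mentioned in the MODE entry above (not that paper's multi-scale method), the sketch below scores a test sample by its cosine distance to the nearest in-distribution class prototype; the prototype matrix and the threshold are assumed to come from the ID training set and a validation split.

```python
import torch

def ood_score(feat: torch.Tensor, prototypes: torch.Tensor) -> float:
    """Distance-based OOD score: distance to the nearest class prototype.

    feat:       (D,)   L2-normalized feature of a test sample
    prototypes: (C, D) L2-normalized mean features of the C in-distribution classes
    Returns a scalar; larger values indicate the sample is more likely OOD.
    """
    dists = 1.0 - prototypes @ feat   # cosine distance to every prototype
    return dists.min().item()

# Illustrative usage: flag a sample as OOD when its score exceeds a
# threshold tau chosen on a validation split.
# is_ood = ood_score(feat, prototypes) > tau
```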