Meta-Auxiliary Learning for Adaptive Human Pose Prediction
- URL: http://arxiv.org/abs/2304.06411v1
- Date: Thu, 13 Apr 2023 11:17:09 GMT
- Title: Meta-Auxiliary Learning for Adaptive Human Pose Prediction
- Authors: Qiongjie Cui, Huaijiang Sun, Jianfeng Lu, Bin Li, Weiqing Li
- Abstract summary: Predicting high-fidelity future human poses is decisive for intelligent robots to interact with humans.
Deep end-to-end learning approaches, which typically train a generic pre-trained model on external datasets and then directly apply it to all test samples, remain non-optimal.
We propose a novel test-time adaptation framework that leverages two self-supervised auxiliary tasks to help the primary forecasting network adapt to the test sequence.
- Score: 26.877194503491072
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predicting high-fidelity future human poses, from a historically observed
sequence, is decisive for intelligent robots to interact with humans. Deep
end-to-end learning approaches, which typically train a generic pre-trained
model on external datasets and then directly apply it to all test samples,
have emerged as the dominant solution to this problem. Despite encouraging
progress, they remain non-optimal because they cannot adapt to the unique
properties (e.g., motion style, rhythm) of a specific sequence. More generally,
at test time, once the model encounters unseen motion categories
(out-of-distribution), the predicted poses tend to be unreliable. Motivated by this observation, we
propose a novel test-time adaptation framework that leverages two
self-supervised auxiliary tasks to help the primary forecasting network adapt
to the test sequence. In the testing phase, our model adjusts its parameters
with several gradient updates to improve the generation quality. However, due
to catastrophic forgetting, the two auxiliary tasks on their own typically
struggle to provide the desired positive incentive for the final prediction
performance. For this reason, we also propose a meta-auxiliary learning scheme
for better adaptation. Under the general setup, our approach obtains higher
accuracy, and under two new experimental designs for out-of-distribution data
(unseen subjects and categories) it achieves significant improvements.
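To make the adaptation step concrete, below is a minimal PyTorch-style sketch of adapting a pre-trained forecaster to a single test sequence with a few gradient updates on self-supervised auxiliary losses. The two auxiliary tasks shown (masked-joint reconstruction and denoising), the `reconstruct`/`denoise` heads, and all hyperparameters are illustrative assumptions, not the paper's exact design.

```python
import copy
import torch

def masked_reconstruction_loss(model, obs, mask_ratio=0.2):
    # Auxiliary task 1 (assumed): reconstruct randomly masked joint coordinates
    # of the observed sequence. obs: (B, T, J*3) pose tensor.
    mask = (torch.rand_like(obs) > mask_ratio).float()
    recon = model.reconstruct(obs * mask)               # hypothetical auxiliary head
    return (((recon - obs) * (1.0 - mask)) ** 2).mean()

def denoising_loss(model, obs, sigma=0.01):
    # Auxiliary task 2 (assumed): recover the clean observation from a jittered copy.
    noisy = obs + sigma * torch.randn_like(obs)
    return ((model.denoise(noisy) - obs) ** 2).mean()   # hypothetical auxiliary head

def test_time_adapt(model, obs, steps=5, lr=1e-4, w=(1.0, 1.0)):
    # Clone the generic pre-trained model, take a few gradient steps on the
    # self-supervised losses computed from the test sequence alone, then
    # forecast future poses with the adapted copy.
    adapted = copy.deepcopy(model)
    opt = torch.optim.Adam(adapted.parameters(), lr=lr)
    for _ in range(steps):
        loss = (w[0] * masked_reconstruction_loss(adapted, obs)
                + w[1] * denoising_loss(adapted, obs))
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return adapted(obs)                              # primary task: future pose forecast
```

Cloning the model keeps the generic weights untouched, so every test sequence adapts from the same starting point and per-sequence updates never accumulate.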
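The abstract also notes that naive auxiliary updates may fail to benefit the primary forecast, which motivates the meta-auxiliary scheme. The sketch below shows one first-order, MAML-style way to meta-train the initialization so that adapting on the auxiliary losses improves the primary forecasting loss; it reuses the hypothetical losses above and is a simplification, not the paper's exact algorithm.

```python
def meta_auxiliary_train(model, loader, meta_lr=1e-4, inner_steps=2, inner_lr=1e-4):
    # loader yields (observed sequence, ground-truth future poses).
    meta_opt = torch.optim.Adam(model.parameters(), lr=meta_lr)
    for obs, future in loader:
        # Inner loop: adapt a copy of the current meta-parameters on the
        # auxiliary tasks only, mirroring what happens at test time.
        adapted = copy.deepcopy(model)
        inner_opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            aux = masked_reconstruction_loss(adapted, obs) + denoising_loss(adapted, obs)
            inner_opt.zero_grad()
            aux.backward()
            inner_opt.step()
        # Outer loop: measure the primary forecasting loss after adaptation and
        # apply its gradient to the meta-parameters (first-order approximation;
        # second-order terms through the inner loop are ignored).
        inner_opt.zero_grad()
        primary = ((adapted(obs) - future) ** 2).mean()
        primary.backward()
        meta_opt.zero_grad()
        for p, q in zip(model.parameters(), adapted.parameters()):
            if q.grad is not None:
                p.grad = q.grad.clone()
        meta_opt.step()
```

Trained this way, the initialization is chosen so that the few auxiliary-task gradient steps taken at test time actually push the forecaster toward better primary predictions rather than away from them.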
Related papers
- Adaptive Cascading Network for Continual Test-Time Adaptation [12.718826132518577]
We study the problem of continual test-time adaptation where the goal is to adapt a source pre-trained model to a sequence of unlabelled target domains at test time.
Existing methods on test-time training suffer from several limitations.
arXiv Detail & Related papers (2024-07-17T01:12:57Z)
- ASPEST: Bridging the Gap Between Active Learning and Selective Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
arXiv Detail & Related papers (2023-04-07T23:51:07Z)
- A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts [143.14128737978342]
Test-time adaptation, an emerging paradigm, has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions.
Recent progress in this paradigm highlights the significant benefits of utilizing unlabeled data for training self-adapted models prior to inference.
arXiv Detail & Related papers (2023-03-27T16:32:21Z)
- TWINS: A Fine-Tuning Framework for Improved Transferability of Adversarial Robustness and Generalization [89.54947228958494]
This paper focuses on the fine-tuning of an adversarially pre-trained model in various classification tasks.
We propose a novel statistics-based approach, the Two-WIng NormaliSation (TWINS) fine-tuning framework.
TWINS is shown to be effective on a wide range of image classification datasets in terms of both generalization and robustness.
arXiv Detail & Related papers (2023-03-20T14:12:55Z)
- Improving Adaptive Conformal Prediction Using Self-Supervised Learning [72.2614468437919]
We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores.
We empirically demonstrate the benefit of the additional information using both synthetic and real data on the efficiency (width), deficit, and excess of conformal prediction intervals. A rough split-conformal sketch of this idea appears after this list.
arXiv Detail & Related papers (2023-02-23T18:57:14Z)
- Self-Distillation for Further Pre-training of Transformers [83.84227016847096]
We propose self-distillation as a regularization for a further pre-training stage.
We empirically validate the efficacy of self-distillation on a variety of benchmark datasets for image and text classification tasks.
arXiv Detail & Related papers (2022-09-30T02:25:12Z)
- Parameter-free Online Test-time Adaptation [19.279048049267388]
We show how test-time adaptation methods fare for a number of pre-trained models on a variety of real-world scenarios.
We propose a particularly "conservative" approach, which addresses the problem with a Laplacian Adjusted Maximum-likelihood Estimation (LAME) objective.
Our approach exhibits a much higher average accuracy across scenarios than existing methods, while being notably faster and having a much lower memory footprint.
arXiv Detail & Related papers (2022-01-15T00:29:16Z)
- Online Adaptation of Neural Network Models by Modified Extended Kalman Filter for Customizable and Transferable Driving Behavior Prediction [3.878105750489657]
Behavior prediction of human drivers is crucial for efficient and safe deployment of autonomous vehicles.
In this paper, we apply a $\tau$-step modified Extended Kalman Filter parameter adaptation algorithm to the driving behavior prediction task.
With the feedback of the observed trajectory, the algorithm improves driving behavior predictions across different human subjects and scenarios; a generic sketch of EKF-based parameter adaptation appears after this list.
arXiv Detail & Related papers (2021-12-09T05:39:21Z)
- MEMO: Test Time Robustness via Adaptation and Augmentation [131.28104376280197]
We study the problem of test time robustification, i.e., using the test input to improve model robustness.
Recent prior works have proposed methods for test time adaptation, however, they each introduce additional assumptions.
We propose a simple approach that can be used in any test setting where the model is probabilistic and adaptable.
arXiv Detail & Related papers (2021-10-18T17:55:11Z)
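As flagged in the conformal-prediction entry above, one simple way to use a self-supervised pretext error when scoring nonconformity is to feed it into a difficulty estimate that normalizes the residuals. The split-conformal sketch below is only a rough illustration of that idea; `pred_fn`, `ss_err_fn`, and `difficulty_fn` are hypothetical user-supplied callables, and the cited paper's estimator differs in detail.

```python
import numpy as np

def conformal_intervals(pred_fn, ss_err_fn, difficulty_fn,
                        X_cal, y_cal, X_test, alpha=0.1):
    # Split conformal prediction with normalized residuals, where the difficulty
    # estimate takes the self-supervised pretext error as an extra feature.
    eps = 1e-8
    resid = np.abs(y_cal - pred_fn(X_cal))
    sigma_cal = difficulty_fn(X_cal, ss_err_fn(X_cal)) + eps
    scores = resid / sigma_cal
    # Finite-sample-corrected quantile of the calibration scores.
    n = len(scores)
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    # Intervals widen on test points where the pretext task also struggles.
    mu = pred_fn(X_test)
    sigma = difficulty_fn(X_test, ss_err_fn(X_test)) + eps
    return mu - q * sigma, mu + q * sigma
```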
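For the modified-EKF entry above, the general recipe of Kalman-filter-based parameter adaptation treats the predictor's parameters as the filter state and each newly observed trajectory point as a measurement. The following is a generic single-step sketch of that recipe (a plain EKF update with user-supplied model and Jacobian callables), not the paper's $\tau$-step modified algorithm.

```python
import numpy as np

def ekf_parameter_step(theta, P, x, y, h, jac, Q, R):
    # theta: (d,) parameter vector treated as the filter state.
    # P: (d, d) parameter covariance; Q, R: process / measurement noise covariances.
    # h(theta, x) -> (m,) model output; jac(theta, x) -> (m, d) Jacobian (assumed callables).
    P = P + Q                                   # predict: parameters assumed static
    H = jac(theta, x)
    S = H @ P @ H.T + R                         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    theta = theta + K @ (y - h(theta, x))       # correct parameters with the new observation
    P = (np.eye(len(theta)) - K @ H) @ P        # update parameter covariance
    return theta, P
```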
This list is automatically generated from the titles and abstracts of the papers on this site.