Uncertainty-aware Score Distribution Learning for Action Quality
Assessment
- URL: http://arxiv.org/abs/2006.07665v1
- Date: Sat, 13 Jun 2020 15:41:29 GMT
- Title: Uncertainty-aware Score Distribution Learning for Action Quality
Assessment
- Authors: Yansong Tang, Zanlin Ni, Jiahuan Zhou, Danyang Zhang, Jiwen Lu, Ying
Wu, Jie Zhou
- Abstract summary: We propose an uncertainty-aware score distribution learning (USDL) approach for action quality assessment (AQA).
Specifically, we regard an action as an instance associated with a score distribution, which describes the probability of different evaluated scores.
Under the circumstance where fine-grained score labels are available, we devise a multi-path uncertainty-aware score distributions learning (MUSDL) method to explore the disentangled components of a score.
- Score: 91.05846506274881
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Assessing action quality from videos has attracted growing attention in
recent years. Most existing approaches usually tackle this problem based on
regression algorithms, which ignore the intrinsic ambiguity in the score labels
caused by multiple judges or their subjective appraisals. To address this
issue, we propose an uncertainty-aware score distribution learning (USDL)
approach for action quality assessment (AQA). Specifically, we regard an action
as an instance associated with a score distribution, which describes the
probability of different evaluated scores. Moreover, under the circumstance
where fine-grained score labels are available (e.g., difficulty degree of an
action or multiple scores from different judges), we further devise a
multi-path uncertainty-aware score distributions learning (MUSDL) method to
explore the disentangled components of a score. We conduct experiments on three
AQA datasets containing various Olympic actions and surgical activities, where
our approaches set new state-of-the-art results under Spearman's rank correlation.
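For intuition, here is a minimal sketch (not the authors' code) of the score-distribution idea: the ground-truth score is softened into a discrete Gaussian over score bins, the model is trained to match that distribution with a KL-divergence loss, and evaluation uses Spearman's rank correlation between predicted and ground-truth scores. The bin range, the standard deviation `sigma`, and the random `pred_logits` standing in for a video model's output are illustrative assumptions.

```python
import numpy as np
from scipy.special import softmax
from scipy.stats import spearmanr

def soft_label(score, bins, sigma=1.0):
    """Turn a scalar judge score into a discrete Gaussian distribution over score bins."""
    p = np.exp(-0.5 * ((bins - score) / sigma) ** 2)
    return p / p.sum()

def kl_loss(pred_logits, target_dist, eps=1e-8):
    """KL(target || prediction) between the soft label and the predicted distribution."""
    pred = softmax(pred_logits)
    return float(np.sum(target_dist * (np.log(target_dist + eps) - np.log(pred + eps))))

# Illustrative setup: scores in [0, 100] discretized into 101 bins.
bins = np.linspace(0, 100, 101)
target = soft_label(score=83.5, bins=bins, sigma=1.0)  # uncertainty-aware soft label
pred_logits = np.random.randn(101)                     # stand-in for a video model's output
print("training loss:", kl_loss(pred_logits, target))

# Evaluation metric used in the paper: Spearman's rank correlation.
pred_scores = [70.2, 85.0, 91.3]
true_scores = [68.0, 88.5, 90.0]
print("Spearman's rho:", spearmanr(pred_scores, true_scores).correlation)
```

At test time the final score can be read out from the predicted distribution, for example as its expectation or mode over the bins.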
Related papers
- RICA2: Rubric-Informed, Calibrated Assessment of Actions [8.641411594566714]
We present RICA2 - a deep probabilistic model that integrates the scoring rubric and accounts for prediction uncertainty for action quality assessment (AQA).
We demonstrate that our method establishes new state of the art on public benchmarks, including FineDiving, MTL-AQA, and JIGSAWS, with superior performance in score prediction and uncertainty calibration.
arXiv Detail & Related papers (2024-08-04T20:35:33Z)
- Causal Interventions-based Few-Shot Named Entity Recognition [5.961427870758681]
Few-shot named entity recognition (NER) systems aim to recognize new classes of entities based on a few labeled samples.
The heavy overfitting in few-shot learning is mainly driven by spurious correlations caused by the selection bias of the few samples.
We propose a causal intervention-based few-shot NER method to alleviate this spurious correlation.
arXiv Detail & Related papers (2023-05-03T06:11:39Z)
- The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus [10.135749005469686]
One of the unsolved challenges in the field of Explainable AI (XAI) is determining how to most reliably estimate the quality of an explanation method.
We address this issue through a meta-evaluation of different quality estimators in XAI.
Our novel framework, MetaQuantus, analyses two complementary performance characteristics of a quality estimator.
arXiv Detail & Related papers (2023-02-14T18:59:02Z)
- Uncertainty-Driven Action Quality Assessment [67.20617610820857]
We propose a novel probabilistic model, named Uncertainty-Driven AQA (UD-AQA), to capture the diversity among multiple judge scores.
We estimate the uncertainty of each prediction and use it to re-weight the AQA regression loss.
Our method achieves competitive results on three benchmarks: the Olympic-event datasets MTL-AQA and FineDiving, and the surgical-skill dataset JIGSAWS.
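As a rough illustration of uncertainty re-weighting a regression loss (a generic heteroscedastic Gaussian formulation, not necessarily the exact UD-AQA objective), the model outputs a score and a log-variance per video; high-uncertainty predictions contribute less to the squared error but pay a log-variance penalty:

```python
import torch

def uncertainty_weighted_loss(pred_score, log_var, target_score):
    """Gaussian negative log-likelihood: the squared error is down-weighted by the
    predicted variance, and a log-variance term discourages inflating uncertainty."""
    precision = torch.exp(-log_var)
    return (0.5 * precision * (pred_score - target_score) ** 2 + 0.5 * log_var).mean()

# Illustrative tensors standing in for a batch of model outputs.
pred_score = torch.tensor([82.0, 64.5, 91.0])
log_var = torch.tensor([0.1, 1.5, 0.3])      # larger value => less confident prediction
target = torch.tensor([85.0, 60.0, 90.5])
print(uncertainty_weighted_loss(pred_score, log_var, target))
```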
arXiv Detail & Related papers (2022-07-29T07:21:15Z)
- What is Flagged in Uncertainty Quantification? Latent Density Models for Uncertainty Categorization [68.15353480798244]
Uncertainty Quantification (UQ) is essential for creating trustworthy machine learning models.
Recent years have seen a steep rise in UQ methods that can flag suspicious examples.
We propose a framework for categorizing uncertain examples flagged by UQ methods in classification tasks.
arXiv Detail & Related papers (2022-07-11T19:47:00Z)
- Group-aware Contrastive Regression for Action Quality Assessment [85.43203180953076]
We show that the relations among videos can provide important clues for more accurate action quality assessment.
Our approach outperforms previous methods by a large margin and establishes a new state of the art on all three benchmarks.
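A simplified reading of how video-to-video relations can be exploited (with a hypothetical module and feature dimensions, not the paper's exact architecture): regress the score difference between a query video and an exemplar video whose score is known, then add it back to the exemplar's score.

```python
import torch
import torch.nn as nn

class RelativeScorer(nn.Module):
    """Predicts the score difference between a query video and a reference (exemplar)
    video from their concatenated features, then adds it to the exemplar's known score."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(2 * feat_dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, query_feat, exemplar_feat, exemplar_score):
        delta = self.head(torch.cat([query_feat, exemplar_feat], dim=-1)).squeeze(-1)
        return exemplar_score + delta

# Illustrative usage with random features standing in for video-backbone outputs.
model = RelativeScorer()
query, exemplar = torch.randn(4, 512), torch.randn(4, 512)
exemplar_score = torch.tensor([78.0, 83.5, 90.0, 71.2])
print(model(query, exemplar, exemplar_score))
```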
arXiv Detail & Related papers (2021-08-17T17:59:39Z)
- Feedback Effects in Repeat-Use Criminal Risk Assessments [0.0]
We show that risk can propagate over sequential decisions in ways that are not captured by one-shot tests.
Risk assessment tools operate in a highly complex and path-dependent process, fraught with historical inequity.
arXiv Detail & Related papers (2020-11-28T06:40:05Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns about whether the learned scoring function causes systematic disparities across different protected groups.
We propose a model post-processing framework for balancing ranking fairness and utility in the bipartite ranking scenario.
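To make the bipartite-ranking objective concrete (a standard pairwise formulation with made-up scores, not the paper's post-processing method), a scoring function is trained so that positives are ranked above negatives, and its quality can be read off as the fraction of correctly ordered pairs (AUC):

```python
import numpy as np

def pairwise_logistic_loss(scores_pos, scores_neg):
    """Penalizes every (positive, negative) pair in which the positive example
    is not scored higher than the negative one."""
    margins = scores_pos[:, None] - scores_neg[None, :]   # all pairwise score differences
    return float(np.mean(np.log1p(np.exp(-margins))))

def auc(scores_pos, scores_neg):
    """Fraction of (positive, negative) pairs ranked correctly by the scoring function."""
    return float(np.mean(scores_pos[:, None] > scores_neg[None, :]))

scores_pos = np.array([0.9, 0.7, 0.8])   # scores assigned to positive individuals
scores_neg = np.array([0.4, 0.6])        # scores assigned to negative individuals
print(pairwise_logistic_loss(scores_pos, scores_neg), auc(scores_pos, scores_neg))
```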
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
- Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning [70.72363097550483]
In this study, we focus on in-domain uncertainty for image classification.
To provide more insight in this study, we introduce the deep ensemble equivalent score (DEE).
arXiv Detail & Related papers (2020-02-15T23:28:19Z)