SALAD: Self-Assessment Learning for Action Detection
- URL: http://arxiv.org/abs/2011.06958v1
- Date: Fri, 13 Nov 2020 15:10:40 GMT
- Title: SALAD: Self-Assessment Learning for Action Detection
- Authors: Guillaume Vaudaux-Ruth, Adrien Chan-Hon-Tong, Catherine Achard
- Abstract summary: We show that, used within a framework of action detection, the learning of a self-assessment score is able to improve the whole action localization process.
Our approach outperforms the state-of-the-art on two action detection benchmarks.
- Score: 4.189643331553922
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Literature on self-assessment in machine learning mainly focuses on the
production of well-calibrated algorithms through consensus frameworks, i.e.,
calibration is seen as a problem. Yet, we observe that learning to be properly
confident could behave like a powerful regularization and thus could be an
opportunity to improve performance. Precisely, we show that, used within a
framework of action detection, the learning of a self-assessment score is able
to improve the whole action localization process. Experimental results show that
our approach outperforms the state-of-the-art on two action detection
benchmarks. On the THUMOS14 dataset, the mAP at tIoU@0.5 is improved from 42.8% to
44.6%, and from 50.4% to 51.7% on the ActivityNet1.3 dataset. For lower tIoU
values, we achieve even more significant improvements on both datasets.
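The mAP numbers above are computed at a temporal IoU (tIoU) threshold: a detected segment only counts as a true positive if its overlap with a ground-truth segment reaches the threshold. A minimal sketch of the tIoU computation (the function name `temporal_iou` is illustrative, not from the paper):

```python
def temporal_iou(pred, gt):
    """Temporal IoU between two segments given as (start, end) in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

# At tIoU@0.5, this prediction would just barely count as a true positive:
# intersection = 4.0 s, union = 8.0 s.
print(temporal_iou((2.0, 8.0), (4.0, 10.0)))  # 0.5
```

Lower tIoU thresholds accept looser localizations, which is why the abstract reports larger gains there.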
Related papers
- STAT: Towards Generalizable Temporal Action Localization [56.634561073746056]
Weakly-supervised temporal action localization (WTAL) aims to recognize and localize action instances with only video-level labels.
Existing methods suffer from severe performance degradation when transferring to different distributions.
We propose GTAL, which focuses on improving the generalizability of action localization methods.
arXiv Detail & Related papers (2024-04-20T07:56:21Z)
- BAL: Balancing Diversity and Novelty for Active Learning [53.289700543331925]
We introduce a novel framework, Balancing Active Learning (BAL), which constructs adaptive sub-pools to balance diverse and uncertain data.
Our approach outperforms all established active learning methods on widely recognized benchmarks by 1.20%.
arXiv Detail & Related papers (2023-12-26T08:14:46Z)
- Augmenting Unsupervised Reinforcement Learning with Self-Reference [63.68018737038331]
Humans possess the ability to draw on past experiences explicitly when learning new tasks.
We propose the Self-Reference (SR) approach, an add-on module explicitly designed to leverage historical information.
Our approach achieves state-of-the-art results in terms of Interquartile Mean (IQM) performance and Optimality Gap reduction on the Unsupervised Reinforcement Learning Benchmark.
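The Interquartile Mean (IQM) cited above is a robust aggregate: the mean of the middle 50% of per-run scores, which discards the best and worst quartiles. A simple sketch for run counts divisible by four (the function name is illustrative; the exact IQM used in RL benchmarks interpolates for other counts):

```python
import numpy as np

def interquartile_mean(scores):
    """Mean of the middle 50% of scores (drops the top and bottom quartiles)."""
    s = np.sort(np.asarray(scores, dtype=float))
    n = len(s)
    lo, hi = n // 4, n - n // 4  # indices bounding the middle half
    return float(s[lo:hi].mean())

# The extreme values 1, 2, 7, 8 are dropped; mean of [3, 4, 5, 6] remains.
print(interquartile_mean([1, 2, 3, 4, 5, 6, 7, 8]))  # 4.5
```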
arXiv Detail & Related papers (2023-11-16T09:07:34Z)
- Patch-Level Contrasting without Patch Correspondence for Accurate and Dense Contrastive Representation Learning [79.43940012723539]
ADCLR is a self-supervised learning framework for learning accurate and dense vision representation.
Our approach achieves new state-of-the-art performance for contrastive methods.
arXiv Detail & Related papers (2023-06-23T07:38:09Z)
- Re-Benchmarking Pool-Based Active Learning for Binary Classification [27.034593234956713]
Active learning is a paradigm that significantly enhances the performance of machine learning models when acquiring labeled data.
While several benchmarks exist for evaluating active learning strategies, their findings exhibit some misalignment.
This discrepancy motivates us to develop a transparent and reproducible benchmark for the community.
arXiv Detail & Related papers (2023-06-15T08:47:50Z)
- Self-supervised Semi-supervised Learning for Data Labeling and Quality Evaluation [10.483508279350195]
We tackle the problems of efficient data labeling and annotation verification under the human-in-the-loop setting.
We propose a unifying framework by leveraging self-supervised semi-supervised learning and use it to construct workflows for data labeling and verification tasks.
arXiv Detail & Related papers (2021-11-22T00:59:00Z)
- To be Critical: Self-Calibrated Weakly Supervised Learning for Salient Object Detection [95.21700830273221]
Weakly-supervised salient object detection (WSOD) aims to develop saliency models using image-level annotations.
We propose a self-calibrated training strategy by explicitly establishing a mutual calibration loop between pseudo labels and network predictions.
We prove that even a much smaller dataset with well-matched annotations can facilitate models to achieve better performance as well as generalizability.
arXiv Detail & Related papers (2021-09-04T02:45:22Z)
- How Knowledge Graph and Attention Help? A Quantitative Analysis into Bag-level Relation Extraction [66.09605613944201]
We quantitatively evaluate the effect of attention and Knowledge Graph on bag-level relation extraction (RE).
We find that (1) higher attention accuracy may lead to worse performance as it may harm the model's ability to extract entity mention features; (2) the performance of attention is largely influenced by various noise distribution patterns; and (3) KG-enhanced attention indeed improves RE performance, while not through enhanced attention but by incorporating entity prior.
arXiv Detail & Related papers (2021-07-26T09:38:28Z)
- Uncertainty-sensitive Activity Recognition: a Reliability Benchmark and the CARING Models [37.60817779613977]
We present the first study of how well the confidence values of modern action recognition architectures reflect the probability of the correct outcome.
We introduce a new approach which learns to transform the model output into realistic confidence estimates through an additional calibration network.
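A common baseline for transforming raw model outputs into more realistic confidence estimates is temperature scaling, where logits are divided by a learned temperature T > 1 to soften over-confident predictions. The sketch below illustrates that baseline only, not the calibration network proposed in the paper; names and values are illustrative:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D array of logits."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def calibrate(logits, temperature):
    """Rescale logits before softmax; temperature > 1 softens the output."""
    return softmax(np.asarray(logits, dtype=float) / temperature)

raw = calibrate([4.0, 1.0, 0.0], 1.0)  # uncalibrated (T = 1)
cal = calibrate([4.0, 1.0, 0.0], 2.0)  # softened (T = 2)
# The calibrated top-class probability is lower, i.e. less over-confident.
print(raw[0] > cal[0])  # True
```

In practice the temperature (or, in richer approaches, a small calibration network) is fit on a held-out validation set by minimizing negative log-likelihood.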
arXiv Detail & Related papers (2021-01-02T15:41:21Z)
- On the Marginal Benefit of Active Learning: Does Self-Supervision Eat Its Cake? [31.563514432259897]
We present a novel framework integrating self-supervised pretraining, active learning, and consistency-regularized self-training.
Our experiments reveal two key insights: (i) Self-supervised pre-training significantly improves semi-supervised learning, especially in the few-label regime.
(ii) We fail to observe any additional benefit of state-of-the-art active learning algorithms when combined with state-of-the-art S4L techniques.
arXiv Detail & Related papers (2020-11-16T17:34:55Z)
- Generalized Reinforcement Meta Learning for Few-Shot Optimization [3.7675996866306845]
We present a generic and flexible Reinforcement Learning (RL) based meta-learning framework for the problem of few-shot learning.
Our framework could be easily extended to do network architecture search.
arXiv Detail & Related papers (2020-05-04T03:21:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.