Supervision Levels Scale (SLS)
- URL: http://arxiv.org/abs/2008.09890v1
- Date: Sat, 22 Aug 2020 18:03:20 GMT
- Title: Supervision Levels Scale (SLS)
- Authors: Dima Damen and Michael Wray
- Abstract summary: We capture three aspects of supervision that are known to give methods an advantage while incurring additional cost: pre-training, training labels and training data.
The proposed three-dimensional scale can be included in result tables or leaderboards to handily compare methods not only by their performance, but also by the level of data supervision utilised by each method.
- Score: 37.944946917484444
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We propose a three-dimensional, discrete and incremental scale to encode a method's level of supervision, i.e. the data and labels used when training a model to achieve a given performance. We capture three aspects of supervision that are known to give methods an advantage while incurring additional cost: pre-training, training labels and training data. The proposed three-dimensional scale can be included in result tables or leaderboards to handily compare methods not only by their performance, but also by the level of data supervision utilised by each method. The Supervision Levels Scale (SLS) is first presented generally for any task/dataset/challenge. It is then applied to the EPIC-KITCHENS-100 dataset, to be used for the various leaderboards and challenges associated with this dataset.
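Below is a minimal sketch, not taken from the paper, of how an SLS triple could be attached to leaderboard entries: each of the three axes from the abstract (pre-training, training labels, training data) is assumed to be recorded as a small integer level, and the tag format, level values and tie-breaking rule are illustrative placeholders.

```python
# Minimal sketch (not the authors' reference implementation): attaching an SLS
# triple to leaderboard entries. The three axes follow the abstract
# (pre-training PT, training labels TL, training data TD); each is assumed here
# to be a small integer level, with 0 denoting the least supervision.
from dataclasses import dataclass


@dataclass(frozen=True)
class SLS:
    pt: int  # pre-training level
    tl: int  # training-labels level
    td: int  # training-data level

    def __str__(self) -> str:
        # Compact tag that can sit next to a score in a results table.
        return f"SLS PT{self.pt}-TL{self.tl}-TD{self.td}"


@dataclass
class LeaderboardEntry:
    method: str
    score: float  # task metric, e.g. top-1 accuracy
    sls: SLS


def rank(entries: list[LeaderboardEntry]) -> list[LeaderboardEntry]:
    # Sort by performance first; on ties, prefer the method that used less
    # supervision, so entries are compared by score and by supervision level.
    return sorted(entries, key=lambda e: (-e.score, e.sls.pt + e.sls.tl + e.sls.td))


if __name__ == "__main__":
    board = [
        LeaderboardEntry("method_a", 0.71, SLS(pt=3, tl=2, td=1)),
        LeaderboardEntry("method_b", 0.71, SLS(pt=1, tl=1, td=1)),
    ]
    for entry in rank(board):
        print(f"{entry.method}: {entry.score:.2f} ({entry.sls})")
```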
Related papers
- Efficient Performance Tracking: Leveraging Large Language Models for Automated Construction of Scientific Leaderboards [67.65408769829524]
Scientific leaderboards are standardized ranking systems that facilitate evaluating and comparing competitive methods.
The exponential increase in publications has made it infeasible to construct and maintain these leaderboards manually.
Automatic leaderboard construction has emerged as a solution to reduce manual labor.
arXiv Detail & Related papers (2024-09-19T11:12:27Z)
- Unsupervised Pre-training with Language-Vision Prompts for Low-Data Instance Segmentation [105.23631749213729]
We propose a novel method for unsupervised pre-training in low-data regimes.
Inspired by recently successful prompting techniques, we introduce a new method, Unsupervised Pre-training with Language-Vision Prompts.
We show that our method can converge faster and perform better than CNN-based models in low-data regimes.
arXiv Detail & Related papers (2024-05-22T06:48:43Z)
- One-Shot Learning as Instruction Data Prospector for Large Language Models [108.81681547472138]
Nuggets uses one-shot learning to select high-quality instruction data from extensive datasets.
We show that instruction tuning with the top 1% of examples curated by Nuggets substantially outperforms conventional methods employing the entire dataset.
arXiv Detail & Related papers (2023-12-16T03:33:12Z)
- Towards Generic Semi-Supervised Framework for Volumetric Medical Image Segmentation [19.09640071505051]
We develop a generic semi-supervised learning (SSL) framework to handle settings such as unsupervised domain adaptation (UDA) and semi-supervised domain generalization (SemiDG).
We evaluate our proposed framework on four benchmark datasets for SSL, Class-imbalanced SSL, UDA and SemiDG.
The results showcase notable improvements compared to state-of-the-art methods across all four settings.
arXiv Detail & Related papers (2023-10-17T14:58:18Z)
- SPOT: Scalable 3D Pre-training via Occupancy Prediction for Learning Transferable 3D Representations [76.45009891152178]
The pretraining-finetuning approach can alleviate the labeling burden by fine-tuning a pre-trained backbone on various downstream datasets and tasks.
We show, for the first time, that general representation learning can be achieved through the task of occupancy prediction.
Our findings will facilitate the understanding of LiDAR points and pave the way for future advancements in LiDAR pre-training.
arXiv Detail & Related papers (2023-09-19T11:13:01Z)
- Multi-Class 3D Object Detection with Single-Class Supervision [34.216636233945856]
Training multi-class 3D detectors with fully labeled datasets can be expensive.
An alternative approach is to have targeted single-class labels on disjoint data samples.
In this paper, we are interested in training a multi-class 3D object detection model, while using single-class labeled data.
arXiv Detail & Related papers (2022-05-11T18:00:05Z)
- Open-Set Semi-Supervised Learning for 3D Point Cloud Understanding [62.17020485045456]
It is commonly assumed in semi-supervised learning (SSL) that the unlabeled data are drawn from the same distribution as that of the labeled ones.
We propose to selectively utilize unlabeled data through sample weighting, so that only conducive unlabeled data would be prioritized.
arXiv Detail & Related papers (2022-05-02T16:09:17Z)
- COLA: COarse LAbel pre-training for 3D semantic segmentation of sparse LiDAR datasets [3.8243923744440926]
Transfer learning is a proven technique in 2D computer vision to leverage the large amount of data available and achieve high performance.
In this work, we tackle the case of real-time 3D semantic segmentation of sparse autonomous driving LiDAR scans.
We introduce a new pre-training task: coarse label pre-training, also called COLA.
arXiv Detail & Related papers (2022-02-14T17:19:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.