Confidence-Aware Paced-Curriculum Learning by Label Smoothing for
Surgical Scene Understanding
- URL: http://arxiv.org/abs/2212.11511v1
- Date: Thu, 22 Dec 2022 07:19:15 GMT
- Title: Confidence-Aware Paced-Curriculum Learning by Label Smoothing for
Surgical Scene Understanding
- Authors: Mengya Xu, Mobarakol Islam, Ben Glocker and Hongliang Ren
- Abstract summary: We design a paced curriculum by label smoothing (P-CBLS) that uses paced learning with uniform label smoothing (ULS) for classification tasks and fuses uniform and spatially varying label smoothing (SVLS) for semantic segmentation tasks in a curriculum manner.
We set a larger smoothing value at the beginning of training and gradually decrease it to zero to control the model's learning utility from lower to higher.
The proposed techniques are validated on four robotic surgery datasets.
- Score: 33.62888947753327
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Curriculum learning and self-paced learning are training strategies that gradually feed samples from easy to more complex. They have attracted increasing attention due to their excellent performance in robotic vision. Most recent works focus on designing curricula based on the difficulty levels of input samples or on smoothing the feature maps. However, smoothing labels to control the learning utility in a curriculum manner remains unexplored. In this work, we design a paced curriculum by label smoothing (P-CBLS), using paced learning with uniform label smoothing (ULS) for classification tasks and fusing uniform and spatially varying label smoothing (SVLS) for semantic segmentation tasks in a curriculum manner. In ULS and SVLS, a larger smoothing factor imposes a heavier smoothing penalty on the true label, limiting how much information the model can learn from it. We therefore design the curriculum by label smoothing (CBLS): we set a larger smoothing value at the beginning of training and gradually decrease it to zero, controlling the model's learning utility from lower to higher. We also design a confidence-aware pacing function and combine it with CBLS to investigate the benefits of various curricula. The proposed techniques are validated on four robotic surgery datasets covering multi-class classification, multi-label classification, captioning, and segmentation tasks. We also investigate the robustness of our method by corrupting the validation data at different severity levels. Our extensive analysis shows that the proposed method improves both prediction accuracy and robustness.
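For concreteness, under uniform label smoothing with smoothing factor ε and K classes, the target becomes ỹ = (1 − ε)·y_onehot + ε/K, so a larger ε withholds more probability mass from the true class. Below is a minimal, hypothetical PyTorch sketch of the curriculum-by-label-smoothing idea for classification, assuming a simple linear decay of ε; the paper's actual pacing functions (including the confidence-aware variant) may differ.

```python
import torch
import torch.nn.functional as F

def smoothing_factor(epoch: int, total_epochs: int, eps_max: float = 0.3) -> float:
    """Linearly decay the smoothing factor from eps_max to 0 over training.
    (A hypothetical pacing function; the paper studies several curricula.)"""
    return eps_max * max(0.0, 1.0 - epoch / total_epochs)

def smoothed_cross_entropy(logits: torch.Tensor, targets: torch.Tensor, eps: float) -> torch.Tensor:
    """Cross-entropy against uniformly smoothed targets:
    y_tilde = (1 - eps) * one_hot(y) + eps / K."""
    num_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    one_hot = F.one_hot(targets, num_classes).float()
    y_tilde = (1.0 - eps) * one_hot + eps / num_classes
    return -(y_tilde * log_probs).sum(dim=-1).mean()

# Early epochs train against heavily smoothed (low-information) targets;
# later epochs approach standard hard-label cross-entropy:
# for epoch in range(total_epochs):
#     eps = smoothing_factor(epoch, total_epochs)
#     loss = smoothed_cross_entropy(model(x), y, eps)
```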
Related papers
- Semi-Supervised Class-Agnostic Motion Prediction with Pseudo Label
Regeneration and BEVMix [59.55173022987071]
We study the potential of semi-supervised learning for class-agnostic motion prediction.
Our framework adopts a consistency-based self-training paradigm, enabling the model to learn from unlabeled data.
Our method exhibits performance comparable to weakly supervised and some fully supervised methods.
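As a generic sketch of consistency-based self-training (the details here are assumptions, not the paper's exact losses): predictions on a weakly augmented view of an unlabeled input serve as pseudo-targets for a strongly augmented view of the same input.

```python
import torch
import torch.nn.functional as F

def consistency_loss(logits_weak: torch.Tensor, logits_strong: torch.Tensor) -> torch.Tensor:
    """Self-training consistency: the (detached) prediction on the weak view
    supervises the prediction on the strong view of the same unlabeled sample."""
    pseudo = F.softmax(logits_weak, dim=-1).detach()
    return -(pseudo * F.log_softmax(logits_strong, dim=-1)).sum(dim=-1).mean()
```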
arXiv Detail & Related papers (2023-12-13T09:32:50Z)
- SoftMatch: Addressing the Quantity-Quality Trade-off in Semi-supervised Learning [101.86916775218403]
This paper revisits the popular pseudo-labeling methods via a unified sample weighting formulation.
We propose SoftMatch to overcome the trade-off by maintaining both high quantity and high quality of pseudo-labels during training.
In experiments, SoftMatch shows substantial improvements across a wide variety of benchmarks, including image, text, and imbalanced classification.
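As a rough illustration of the unified weighting view (a sketch under assumptions, not SoftMatch's exact algorithm): each pseudo-label can receive a soft weight from a truncated-Gaussian function of its confidence, so low-confidence samples are down-weighted rather than discarded, preserving quantity while protecting quality. The statistics mu and sigma below are assumed to be tracked elsewhere (e.g., as EMAs over unlabeled batches).

```python
import torch

def soft_pseudo_label_weights(probs: torch.Tensor, mu: float, sigma: float) -> torch.Tensor:
    """Soft per-sample weights for pseudo-labels: a truncated Gaussian of the
    prediction confidence. Samples below the mean confidence mu get smoothly
    decaying (but non-zero) weight instead of being hard-thresholded away."""
    conf = probs.max(dim=-1).values                     # pseudo-label confidence
    w = torch.exp(-((conf - mu) ** 2) / (2 * sigma ** 2))
    return torch.where(conf >= mu, torch.ones_like(w), w)
```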
arXiv Detail & Related papers (2023-01-26T03:53:25Z)
- Learning with Partial Labels from Semi-supervised Perspective [28.735185883881172]
Partial Label (PL) learning refers to the task of learning from partially labeled data.
We propose a novel PL learning method, namely Partial Label learning with Semi-Supervised Perspective (PLSP).
PLSP significantly outperforms the existing PL baseline methods, especially on high ambiguity levels.
arXiv Detail & Related papers (2022-11-24T15:12:16Z)
- Is margin all you need? An extensive empirical study of active learning on tabular data [66.18464006872345]
We analyze the performance of a variety of active learning algorithms on 69 real-world datasets from the OpenML-CC18 benchmark.
Surprisingly, we find that the classical margin sampling technique matches or outperforms all others, including current state-of-art.
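For reference, the margin criterion itself is one line: score each unlabeled point by the gap between its two most probable classes and query the smallest gaps first. A minimal sketch (function and variable names are illustrative):

```python
import torch

def margin_scores(probs: torch.Tensor) -> torch.Tensor:
    """Classical margin sampling: p(top-1) - p(top-2) per sample.
    The smallest margins mark the most ambiguous points to label next."""
    top2 = probs.topk(2, dim=-1).values
    return top2[..., 0] - top2[..., 1]

# query_indices = margin_scores(unlabeled_probs).argsort()[:budget]
```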
arXiv Detail & Related papers (2022-10-07T21:18:24Z)
- Open-Set Semi-Supervised Learning for 3D Point Cloud Understanding [62.17020485045456]
It is commonly assumed in semi-supervised learning (SSL) that the unlabeled data are drawn from the same distribution as that of the labeled ones.
We propose to selectively utilize unlabeled data through sample weighting, so that only conducive unlabeled data would be prioritized.
arXiv Detail & Related papers (2022-05-02T16:09:17Z)
- L2B: Learning to Bootstrap Robust Models for Combating Label Noise [52.02335367411447]
This paper introduces a simple and effective method, named Learning to Bootstrap (L2B).
It enables models to bootstrap themselves using their own predictions without being adversely affected by erroneous pseudo-labels.
It achieves this by dynamically adjusting the importance weight between real observed and generated labels, as well as between different samples through meta-learning.
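A minimal sketch of the underlying bootstrapping loss may help; note that L2B learns the per-sample mixing weights via meta-learning, whereas here they are simply passed in as inputs (an assumption of this sketch):

```python
import torch
import torch.nn.functional as F

def bootstrap_loss(logits: torch.Tensor, y_observed: torch.Tensor,
                   w_real: torch.Tensor, w_pseudo: torch.Tensor) -> torch.Tensor:
    """Cross-entropy against a per-sample mix of the observed (possibly noisy)
    label and the model's own prediction. In L2B the weights (w_real, w_pseudo)
    are adjusted dynamically by meta-learning; here they are given."""
    num_classes = logits.size(-1)
    probs = F.softmax(logits, dim=-1).detach()            # model-generated labels
    one_hot = F.one_hot(y_observed, num_classes).float()
    target = w_real.unsqueeze(-1) * one_hot + w_pseudo.unsqueeze(-1) * probs
    return -(target * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
```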
arXiv Detail & Related papers (2022-02-09T05:57:08Z)
- STAR: Noisy Semi-Supervised Transfer Learning for Visual Classification [0.8662293148437356]
Semi-supervised learning (SSL) has proven to be effective at leveraging large-scale unlabeled data.
Recent SSL methods rely on unlabeled image data at a scale of billions to work well.
We propose noisy semi-supervised transfer learning, which integrates transfer learning and self-training with noisy student.
arXiv Detail & Related papers (2021-08-18T19:35:05Z)
- Improving Self-supervised Learning with Hardness-aware Dynamic Curriculum Learning: An Application to Digital Pathology [2.2742357407157847]
Self-supervised learning (SSL) has recently shown tremendous potential to learn generic visual representations useful for many image analysis tasks.
The existing SSL methods fail to generalize to downstream tasks when the number of labeled training instances is small or if the domain shift between the transfer domains is significant.
This paper attempts to improve self-supervised pretrained representations through the lens of curriculum learning.
arXiv Detail & Related papers (2021-08-16T15:44:48Z)
- SCARF: Self-Supervised Contrastive Learning using Random Feature Corruption [72.35532598131176]
We propose SCARF, a technique for contrastive learning, where views are formed by corrupting a random subset of features.
We show that SCARF complements existing strategies and outperforms alternatives like autoencoders.
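The corruption step is simple to sketch. Assuming the replacement values are drawn from each feature's empirical marginal (approximated here by resampling within the batch), a hypothetical view generator looks like:

```python
import torch

def scarf_corrupt(x: torch.Tensor, corruption_rate: float = 0.6) -> torch.Tensor:
    """Create a SCARF-style view: replace a random subset of each row's
    features with values drawn from that feature's empirical marginal
    (approximated by sampling other rows of the batch, column-wise)."""
    batch, num_feats = x.shape
    mask = torch.rand(batch, num_feats, device=x.device) < corruption_rate
    rows = torch.randint(0, batch, (batch, num_feats), device=x.device)
    marginal_draws = x[rows, torch.arange(num_feats, device=x.device)]
    return torch.where(mask, marginal_draws, x)

# Two independent corruptions of the same batch form the positive pair
# for the contrastive objective.
```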
arXiv Detail & Related papers (2021-06-29T08:08:33Z)
- Boosting the Performance of Semi-Supervised Learning with Unsupervised Clustering [10.033658645311188]
We show that intermittently ignoring labels altogether for whole epochs during training can significantly improve performance in the small-sample regime.
We demonstrate our method's efficacy in boosting several state-of-the-art SSL algorithms.
arXiv Detail & Related papers (2020-12-01T14:19:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.