SILT: Shadow-aware Iterative Label Tuning for Learning to Detect Shadows
from Noisy Labels
- URL: http://arxiv.org/abs/2308.12064v1
- Date: Wed, 23 Aug 2023 11:16:36 GMT
- Title: SILT: Shadow-aware Iterative Label Tuning for Learning to Detect Shadows
from Noisy Labels
- Authors: Han Yang, Tianyu Wang, Xiaowei Hu and Chi-Wing Fu
- Abstract summary: We propose SILT, the Shadow-aware Iterative Label Tuning framework, which explicitly considers noise in shadow labels and trains the deep model in a self-training manner.
We also devise a simple yet effective label tuning strategy with global-local fusion and shadow-aware filtering to encourage the network to make significant refinements on the noisy labels.
Our results show that even a simple U-Net trained with SILT can outperform all state-of-the-art methods by a large margin.
- Score: 53.30604926018168
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing shadow detection datasets often contain missing or mislabeled
shadows, which can hinder the performance of deep learning models trained
directly on such data. To address this issue, we propose SILT, the Shadow-aware
Iterative Label Tuning framework, which explicitly considers noise in shadow
labels and trains the deep model in a self-training manner. Specifically, we
incorporate strong data augmentations with shadow counterfeiting to help the
network better recognize non-shadow regions and alleviate overfitting. We also
devise a simple yet effective label tuning strategy with global-local fusion
and shadow-aware filtering to encourage the network to make significant
refinements on the noisy labels. We evaluate the performance of SILT by
relabeling the test set of the SBU dataset and conducting various experiments.
Our results show that even a simple U-Net trained with SILT can outperform all
state-of-the-art methods by a large margin. When trained on SBU / UCF / ISTD,
our network can successfully reduce the Balanced Error Rate by 25.2% / 36.9% /
21.3% over the best state-of-the-art method.
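The self-training scheme described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: `shadow_counterfeit` and the confidence-based refinement in `silt_round` are hypothetical stand-ins for the paper's shadow counterfeiting augmentation and its global-local fusion with shadow-aware filtering, and all thresholds are invented. Only `balanced_error_rate` follows a standard definition (the mean of the shadow and non-shadow pixel error rates).

```python
import numpy as np

def balanced_error_rate(pred, gt):
    """Standard BER for binary shadow masks: 0.5 * (FN rate + FP rate), in percent."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)
    tn = np.sum(~pred & ~gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    pos_err = fn / max(tp + fn, 1)  # shadow pixels missed
    neg_err = fp / max(tn + fp, 1)  # non-shadow pixels wrongly flagged
    return 100.0 * 0.5 * (pos_err + neg_err)

def shadow_counterfeit(image, rng):
    """Hypothetical augmentation: darken a random patch while keeping the
    label unchanged, so the network learns that dark does not imply shadow."""
    h, w = image.shape[:2]
    y, x = rng.integers(0, h // 2), rng.integers(0, w // 2)
    out = image.copy()
    out[y:y + h // 2, x:x + w // 2] *= 0.5  # counterfeit shadow
    return out

def silt_round(model_predict, images, noisy_labels, conf_thresh=0.8):
    """One label-tuning round (sketch): refine a noisy label only where the
    model is confident, as a stand-in for the paper's global-local fusion
    and shadow-aware filtering."""
    tuned = []
    for img, lbl in zip(images, noisy_labels):
        prob = model_predict(img)  # per-pixel shadow probability in [0, 1]
        confident = (prob > conf_thresh) | (prob < 1 - conf_thresh)
        new_lbl = np.where(confident, prob > 0.5, lbl)  # keep old label elsewhere
        tuned.append(new_lbl.astype(np.uint8))
    return tuned
```

In a full pipeline, training on the tuned labels and re-running `silt_round` would alternate for several iterations, with BER on a clean test set tracking progress.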
Related papers
- ERASE: Error-Resilient Representation Learning on Graphs for Label Noise
Tolerance [53.73316938815873]
We propose a method called ERASE (Error-Resilient representation learning on graphs for lAbel noiSe tolerancE) to learn representations with error tolerance.
ERASE combines prototype pseudo-labels with propagated denoised labels and updates representations with error resilience.
Our method outperforms multiple baselines by clear margins across a broad range of noise levels and scales well.
arXiv Detail & Related papers (2023-12-13T17:59:07Z)
- Progressive Recurrent Network for Shadow Removal [99.1928825224358]
Single-image shadow removal is a significant task that is still unresolved.
Most existing deep learning-based approaches attempt to remove the shadow directly, which often fails to handle shadows well.
We propose a simple but effective Progressive Recurrent Network (PRNet) to remove the shadow progressively.
arXiv Detail & Related papers (2023-11-01T11:42:45Z)
- Boosting Facial Expression Recognition by A Semi-Supervised Progressive Teacher [54.50747989860957]
We propose a semi-supervised learning algorithm named Progressive Teacher (PT) to utilize reliable FER datasets as well as large-scale unlabeled expression images for effective training.
Experiments on the widely used RAF-DB and FERPlus databases validate the effectiveness of our method, which achieves state-of-the-art accuracy of 89.57% on RAF-DB.
arXiv Detail & Related papers (2022-05-28T07:47:53Z)
- Can semi-supervised learning reduce the amount of manual labelling required for effective radio galaxy morphology classification? [0.0]
We test whether SSL can achieve performance comparable to the current supervised state of the art when using many fewer labelled data points.
We find that although SSL provides additional regularisation, its performance degrades rapidly when using very few labels.
arXiv Detail & Related papers (2021-11-08T09:36:48Z)
- R2D: Learning Shadow Removal to Enhance Fine-Context Shadow Detection [64.10636296274168]
Current shadow detection methods perform poorly when detecting shadow regions that are small, unclear or have blurry edges.
We propose a new method called Restore to Detect (R2D), where a deep neural network is trained for restoration (shadow removal) to aid shadow detection.
We show that our proposed method R2D improves the shadow detection performance while being able to detect fine context better compared to the other recent methods.
arXiv Detail & Related papers (2021-09-20T15:09:22Z)
- Co-Seg: An Image Segmentation Framework Against Label Corruption [8.219887855003648]
Supervised deep learning performance is heavily tied to the availability of high-quality labels for training.
We propose a novel framework, namely Co-Seg, to collaboratively train segmentation networks on datasets which include low-quality noisy labels.
Our framework can be easily implemented in any segmentation algorithm to increase its robustness to noisy labels.
arXiv Detail & Related papers (2021-01-31T20:01:40Z)
- Boosting the Performance of Semi-Supervised Learning with Unsupervised Clustering [10.033658645311188]
We show that ignoring labels altogether for whole epochs intermittently during training can significantly improve performance in the small sample regime.
We demonstrate our method's efficacy in boosting several state-of-the-art SSL algorithms.
arXiv Detail & Related papers (2020-12-01T14:19:14Z)
- Temporal Calibrated Regularization for Robust Noisy Label Learning [60.90967240168525]
Deep neural networks (DNNs) exhibit great success on many tasks with the help of large-scale well annotated datasets.
However, labeling large-scale data can be very costly and error-prone, making it difficult to guarantee annotation quality.
We propose a Temporal Calibrated Regularization (TCR) in which we utilize the original labels and the predictions in the previous epoch together.
arXiv Detail & Related papers (2020-07-01T04:48:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.