Learning from Pixel-Level Noisy Label : A New Perspective for Light
Field Saliency Detection
- URL: http://arxiv.org/abs/2204.13456v1
- Date: Thu, 28 Apr 2022 12:44:08 GMT
- Title: Learning from Pixel-Level Noisy Label : A New Perspective for Light
Field Saliency Detection
- Authors: Mingtao Feng, Kendong Liu, Liang Zhang, Hongshan Yu, Yaonan Wang,
Ajmal Mian
- Abstract summary: Saliency detection with light field images is becoming attractive given the abundant cues available.
We propose to learn light field saliency from pixel-level noisy labels obtained from unsupervised hand-crafted feature-based saliency methods.
- Score: 40.76268976076642
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Saliency detection with light field images is becoming attractive given the
abundant cues available; however, this comes at the expense of large-scale
pixel-level annotated data, which is expensive to generate. In this paper, we
propose to learn light field saliency from pixel-level noisy labels obtained
from unsupervised hand-crafted feature-based saliency methods. Given this
goal, a natural question is: can we efficiently incorporate the relationships
among light field cues while identifying clean labels in a unified framework?
We address this question by formulating the learning as a joint optimization of
an intra-light-field feature fusion stream and an inter-scene correlation stream to
generate the predictions. Specifically, we first introduce a pixel forgetting
guided fusion module to mutually enhance the light field features and exploit
pixel consistency across iterations to identify noisy pixels. Next, we
introduce a cross-scene noise penalty loss to better reflect the latent
structures of the training data and make the learning invariant to noise.
Extensive experiments on multiple benchmark datasets demonstrate the
superiority of our framework showing that it learns saliency prediction
comparable to state-of-the-art fully supervised light field saliency methods.
Our code is available at https://github.com/OLobbCode/NoiseLF.
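The pixel-forgetting idea in the abstract can be illustrated with a minimal sketch: track, for every pixel, how often its prediction flips from agreeing with the (noisy) label to disagreeing across training iterations, and flag frequently "forgotten" pixels as likely noisy. This is a hypothetical illustration of the general pixel-consistency principle, not the authors' implementation; the function names and the threshold are assumptions.

```python
import numpy as np

def update_forgetting_events(prev_correct, curr_preds, noisy_labels, forget_counts):
    """One tracking step: a pixel is 'forgotten' when its prediction matched
    the (possibly noisy) label at the previous iteration but disagrees now.
    All arrays share the same H x W shape; forget_counts is updated in place."""
    curr_correct = (curr_preds == noisy_labels)
    forgotten = prev_correct & ~curr_correct      # was right, now wrong
    forget_counts += forgotten.astype(np.int32)
    return curr_correct, forget_counts

def flag_noisy_pixels(forget_counts, threshold=2):
    """Pixels forgotten more than `threshold` times are treated as likely noisy
    (the threshold here is an arbitrary illustrative choice)."""
    return forget_counts > threshold
```

For example, running two iterations over a 2x2 saliency map where one pixel's prediction flips from correct to incorrect leaves that pixel with a forgetting count of 1 while stable pixels stay at 0, so a low threshold would single it out as unreliable supervision.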
Related papers
- You Only Look Around: Learning Illumination Invariant Feature for Low-light Object Detection [46.636878653865104]
We introduce YOLA, a novel framework for object detection in low-light scenarios.
We learn illumination-invariant features through the Lambertian image formation model.
Our empirical findings reveal significant improvements in low-light object detection tasks.
arXiv Detail & Related papers (2024-10-24T03:23:50Z)
- Advancing Unsupervised Low-light Image Enhancement: Noise Estimation, Illumination Interpolation, and Self-Regulation [55.07472635587852]
Low-Light Image Enhancement (LLIE) techniques have made notable advancements in preserving image details and enhancing contrast.
These approaches encounter persistent challenges in efficiently mitigating dynamic noise and accommodating diverse low-light scenarios.
We first propose a method for estimating the noise level in low light images in a quick and accurate way.
We then devise a Learnable Illumination Interpolator (LII) to satisfy general constraints between illumination and input.
arXiv Detail & Related papers (2023-05-17T13:56:48Z)
- Probabilistic Deep Metric Learning for Hyperspectral Image Classification [91.5747859691553]
This paper proposes a probabilistic deep metric learning framework for hyperspectral image classification.
It aims to predict the category of each pixel for an image captured by hyperspectral sensors.
Our framework can be readily applied to existing hyperspectral image classification methods.
arXiv Detail & Related papers (2022-11-15T17:57:12Z)
- A Pixel-Level Meta-Learner for Weakly Supervised Few-Shot Semantic Segmentation [40.27705176115985]
Few-shot semantic segmentation addresses the learning task in which only a few images with ground truth pixel-level labels are available for the novel classes of interest.
We propose a novel meta-learning framework, which predicts pseudo pixel-level segmentation masks from a limited amount of data and their semantic labels.
Our proposed learning model can be viewed as a pixel-level meta-learner.
arXiv Detail & Related papers (2021-11-02T08:28:11Z)
- Region-level Active Learning for Cluttered Scenes [60.93811392293329]
We introduce a new strategy that subsumes previous Image-level and Object-level approaches into a generalized, Region-level approach.
We show that this approach significantly decreases labeling effort and improves rare object search on realistic data with inherent class-imbalance and cluttered scenes.
arXiv Detail & Related papers (2021-08-20T14:02:38Z)
- Distilling effective supervision for robust medical image segmentation with noisy labels [21.68138582276142]
We propose a novel framework to address segmenting with noisy labels by distilling effective supervision information from both pixel and image levels.
In particular, we explicitly estimate the uncertainty of every pixel as pixel-wise noise estimation.
We present an image-level robust learning method to accommodate more information as the complements to pixel-level learning.
arXiv Detail & Related papers (2021-06-21T13:33:38Z)
- Data Augmentation for Object Detection via Differentiable Neural Rendering [71.00447761415388]
It is challenging to train a robust object detector when annotated data is scarce.
Existing approaches to tackle this problem include semi-supervised learning that interpolates labeled data from unlabeled data.
We introduce an offline data augmentation method for object detection, which semantically interpolates the training data with novel views.
arXiv Detail & Related papers (2021-03-04T06:31:06Z)
- Deep Active Learning for Joint Classification & Segmentation with Weak Annotator [22.271760669551817]
CNN visualization and interpretation methods, like class-activation maps (CAMs), are typically used to highlight the image regions linked to class predictions.
We propose an active learning framework, which progressively integrates pixel-level annotations during training.
Our results indicate that, by simply using random sample selection, the proposed approach can significantly outperform state-of-the-art CAMs and AL methods.
arXiv Detail & Related papers (2020-10-10T03:25:54Z)
- Data-driven Meta-set Based Fine-Grained Visual Classification [61.083706396575295]
We propose a data-driven meta-set based approach to deal with noisy web images for fine-grained recognition.
Specifically, guided by a small amount of clean meta-set, we train a selection net in a meta-learning manner to distinguish in- and out-of-distribution noisy images.
arXiv Detail & Related papers (2020-08-06T03:04:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.