Sill-Net: Feature Augmentation with Separated Illumination Representation
- URL: http://arxiv.org/abs/2102.03539v1
- Date: Sat, 6 Feb 2021 09:00:10 GMT
- Title: Sill-Net: Feature Augmentation with Separated Illumination Representation
- Authors: Haipeng Zhang, Zhong Cao, Ziang Yan, Changshui Zhang
- Abstract summary: We propose a novel neural network architecture called Separating-Illumination Network (Sill-Net).
Sill-Net learns to separate illumination features from images, and then during training we augment training samples with these separated illumination features in the feature space.
Experimental results demonstrate that our approach outperforms current state-of-the-art methods in several object classification benchmarks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For visual object recognition tasks, illumination variations can cause
distinct changes in object appearance and thus confuse deep neural network
based recognition models. For rare illumination conditions in particular,
collecting sufficient training samples can be time-consuming and expensive.
To solve this problem, in this paper we propose a novel neural network
architecture called Separating-Illumination Network (Sill-Net). Sill-Net learns
to separate illumination features from images, and then during training we
augment training samples with these separated illumination features in the
feature space. Experimental results demonstrate that our approach outperforms
current state-of-the-art methods in several object classification benchmarks.
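The augmentation step described in the abstract, combining a sample's semantic features with illumination features separated from other images, can be sketched roughly in feature space. The NumPy sketch below is a toy illustration, not the paper's architecture: the additive combination and the name `augment_with_illumination` are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_with_illumination(semantic_feats, illumination_bank, n_aug=2):
    """For each semantic feature vector, draw illumination features from the
    bank and combine them in feature space (additive combination assumed)."""
    augmented = []
    for s in semantic_feats:
        idx = rng.choice(len(illumination_bank), size=n_aug, replace=False)
        for i in idx:
            augmented.append(s + illumination_bank[i])
    return np.stack(augmented)

semantic = rng.normal(size=(4, 16))     # 4 training samples, 16-dim semantic features
illum_bank = rng.normal(size=(10, 16))  # illumination features separated from other images
aug = augment_with_illumination(semantic, illum_bank, n_aug=2)
print(aug.shape)  # (8, 16): each sample yields n_aug augmented features
```

Each training sample is thus repeated under several "borrowed" illumination conditions without collecting new images.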
Related papers
- Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond [61.18736646013446]
In pursuit of a deeper understanding of its surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network.
Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena.
arXiv Detail & Related papers (2024-10-31T22:54:34Z)
- Efficient Visualization of Neural Networks with Generative Models and Adversarial Perturbations [0.0]
This paper presents a novel approach for deep visualization via a generative network, offering an improvement over existing methods.
Our model simplifies the architecture by reducing the number of networks used, requiring only a generator and a discriminator.
Our model requires less prior training knowledge and uses a non-adversarial training process, where the discriminator acts as a guide.
arXiv Detail & Related papers (2024-09-20T14:59:25Z)
- Volume Feature Rendering for Fast Neural Radiance Field Reconstruction [11.05302598034426]
Neural radiance fields (NeRFs) are able to synthesize realistic novel views from multi-view images captured from distinct positions and perspectives.
In NeRF's rendering pipeline, neural networks are used to represent a scene independently or to transform the queried learnable feature vector of a point into the expected color or density.
We propose to render the queried feature vectors of a ray first and then transform the rendered feature vector to the final pixel color by a neural network.
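The "render the features first, decode once per ray" idea can be illustrated with standard volume-rendering weights. A minimal NumPy sketch, assuming the usual weighting w_i = T_i * (1 - exp(-sigma_i * delta_i)) and a single random linear head standing in for the decoding network:

```python
import numpy as np

rng = np.random.default_rng(1)

def volume_render_weights(densities, deltas):
    """Standard volume-rendering weights w_i = T_i * (1 - exp(-sigma_i * delta_i))."""
    alphas = 1.0 - np.exp(-densities * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    return trans * alphas

# 8 sample points along one ray, each with a 32-dim learnable feature vector
densities = rng.uniform(0.0, 2.0, size=8)
deltas = np.full(8, 0.1)
features = rng.normal(size=(8, 32))

w = volume_render_weights(densities, deltas)
ray_feature = w @ features                              # render the feature vector first ...
W_head = rng.normal(size=(32, 3)) * 0.1                 # toy stand-in for the decoding network
pixel_rgb = 1.0 / (1.0 + np.exp(-(ray_feature @ W_head)))  # ... then one network call per ray
print(pixel_rgb.shape)  # (3,)
```

The saving is that the network runs once per ray on the rendered feature instead of once per sample point.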
arXiv Detail & Related papers (2023-05-29T06:58:27Z)
- Untrained, physics-informed neural networks for structured illumination microscopy [0.456877715768796]
We show that we can combine a deep neural network with the forward model of the structured illumination process to reconstruct sub-diffraction images without training data.
The resulting physics-informed neural network (PINN) can be optimized on a single set of diffraction limited sub-images.
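The optimization behind such a PINN, fitting an estimate through a known forward model of structured illumination, can be sketched in 1-D. The forward model here (sinusoidal illumination followed by a small blur) and plain gradient descent on the pixel values, standing in for the untrained network, are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

n = 32
x_true = rng.random(n)  # unknown sample
patterns = [1 + np.sin(2 * np.pi * f * np.arange(n) / n) for f in (3, 5, 7)]
kernel = np.array([0.25, 0.5, 0.25])  # toy diffraction-limited blur

def forward(x, p):
    return np.convolve(p * x, kernel, mode="same")  # illuminate, then blur

meas = [forward(x_true, p) for p in patterns]  # the diffraction-limited sub-images

def loss(x):
    return sum(np.sum((forward(x, p) - y) ** 2) for p, y in zip(patterns, meas))

x = np.zeros(n)  # stands in for the network's output; optimized directly here
lr = 0.05
for _ in range(500):
    grad = np.zeros(n)
    for p, y in zip(patterns, meas):
        r = forward(x, p) - y
        grad += p * np.convolve(r, kernel[::-1], mode="same")  # adjoint of forward
    x -= lr * grad
print(loss(x) / loss(np.zeros(n)))  # residual shrinks with no training data at all
```

The point mirrored here is that a single set of sub-images plus the physics suffices as a loss; no labelled training pairs are needed.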
arXiv Detail & Related papers (2022-07-15T19:02:07Z)
- High-resolution Iterative Feedback Network for Camouflaged Object Detection [128.893782016078]
Spotting camouflaged objects that are visually assimilated into the background is tricky for object detection algorithms.
We aim to extract the high-resolution texture details to avoid the detail degradation that causes blurred vision in edges and boundaries.
We introduce a novel HitNet to refine the low-resolution representations by high-resolution features in an iterative feedback manner.
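The iterative feedback loop, repeatedly correcting a coarse representation with a high-resolution branch, can be caricatured in a few lines. The nearest-neighbour upsampling and damped residual correction below are stand-ins for HitNet's learned modules:

```python
import numpy as np

rng = np.random.default_rng(3)

def upsample(x):
    return np.repeat(x, 2)  # nearest-neighbour upsampling (stand-in for a learned module)

high_res = rng.normal(size=64)  # high-resolution feature (1-D toy example)
low_res = rng.normal(size=32)   # coarse representation to refine

feat = upsample(low_res)
for _ in range(4):               # iterative feedback loop
    feedback = high_res - feat   # residual from the high-resolution branch
    feat = feat + 0.5 * feedback # refine with a damped correction
print(float(np.mean((feat - high_res) ** 2)))  # shrinks every iteration
```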
arXiv Detail & Related papers (2022-03-22T11:20:21Z)
- Learning Deep Context-Sensitive Decomposition for Low-Light Image Enhancement [58.72667941107544]
A typical framework is to simultaneously estimate the illumination and reflectance, but such methods disregard the scene-level contextual information encapsulated in feature spaces.
We develop a new context-sensitive decomposition network architecture to exploit the scene-level contextual dependencies on spatial scales.
We develop a lightweight CSDNet (named LiteCSDNet) by reducing the number of channels.
arXiv Detail & Related papers (2021-12-09T06:25:30Z)
- Similarity and Matching of Neural Network Representations [0.0]
We employ a toolset -- dubbed Dr. Frankenstein -- to analyse the similarity of representations in deep neural networks.
We aim to match the activations on given layers of two trained neural networks by joining them with a stitching layer.
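Stitching can be illustrated with a linear layer fitted by least squares: if a cheap linear map carries one network's layer activations onto the other's, the two representations are similar. A toy NumPy sketch with synthetic activations (the near-linear relation between the two networks is assumed for the demo):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy activations at some layer of two independently trained networks
acts_a = rng.normal(size=(200, 16))                        # network A, 16 units, 200 inputs
mix = rng.normal(size=(16, 16))
acts_b = acts_a @ mix + 0.01 * rng.normal(size=(200, 16))  # network B ~ linear map of A

# Stitching layer: a linear map fitted to carry A's activations into B's space
W, *_ = np.linalg.lstsq(acts_a, acts_b, rcond=None)
stitched = acts_a @ W
err = np.mean((stitched - acts_b) ** 2)
print(err)  # low stitching error suggests similar representations
```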
arXiv Detail & Related papers (2021-10-27T17:59:46Z)
- Self-Denoising Neural Networks for Few Shot Learning [66.38505903102373]
We present a new training scheme that adds noise at multiple stages of an existing neural architecture while simultaneously learning to be robust to this added noise.
This architecture, which we call a Self-Denoising Neural Network (SDNN), can be applied easily to most modern convolutional neural architectures.
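The multi-stage noise injection can be sketched with a toy MLP: noise is added after every layer at training time and switched off at evaluation. The layer sizes and noise level here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def relu(x):
    return np.maximum(x, 0.0)

def forward(x, weights, noise_std=0.1, train=True):
    """Forward pass injecting noise after every layer during training,
    mimicking multi-stage noise injection (toy MLP)."""
    h = x
    for W in weights:
        h = relu(h @ W)
        if train:
            h = h + rng.normal(scale=noise_std, size=h.shape)  # per-stage noise
    return h

weights = [rng.normal(size=(8, 8)) * 0.3 for _ in range(3)]
x = rng.normal(size=(4, 8))
out_train = forward(x, weights, train=True)   # noisy activations at every stage
out_eval = forward(x, weights, train=False)   # clean pass at evaluation
print(out_train.shape, out_eval.shape)  # (4, 8) (4, 8)
```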
arXiv Detail & Related papers (2021-10-26T03:28:36Z)
- Multiplexed Illumination for Classifying Visually Similar Objects [2.715884199292287]
We propose the use of multiplexed illumination to extend the range of objects that can be successfully classified.
We construct a compact RGB-IR light stage that images samples under different combinations of illuminant position and colour.
We then develop a methodology for selecting illumination patterns and training a classifier using the resulting imagery.
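Pattern selection can be caricatured as picking the illumination channel that best separates the classes. The synthetic capture below, in which only one channel is discriminative, and the greedy nearest-centroid criterion are illustrative assumptions, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy capture: 2 classes, 30 samples each, imaged under 6 illumination channels
n_chan = 6
X0 = rng.normal(0.0, 1.0, size=(30, n_chan))
X1 = rng.normal(0.0, 1.0, size=(30, n_chan))
X1[:, 2] += 3.0  # channel 2 is the only discriminative illuminant (by construction)
X = np.vstack([X0, X1])
y = np.array([0] * 30 + [1] * 30)

def centroid_acc(cols):
    """Nearest-centroid accuracy using only the selected illumination channels."""
    Z = X[:, cols]
    c0, c1 = Z[y == 0].mean(0), Z[y == 1].mean(0)
    pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)).astype(int)
    return (pred == y).mean()

# Greedy selection of the single most informative channel
best = max(range(n_chan), key=lambda c: centroid_acc([c]))
print(best, centroid_acc([best]))
```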
arXiv Detail & Related papers (2020-09-23T12:10:06Z)
- Understanding the Role of Individual Units in a Deep Neural Network [85.23117441162772]
We present an analytic framework to systematically identify hidden units within image classification and image generation networks.
First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts.
Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes.
arXiv Detail & Related papers (2020-09-10T17:59:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.