Angular Luminance for Material Segmentation
- URL: http://arxiv.org/abs/2009.10825v1
- Date: Tue, 22 Sep 2020 21:15:27 GMT
- Title: Angular Luminance for Material Segmentation
- Authors: Jia Xue, Matthew Purri, Kristin Dana
- Abstract summary: Moving cameras provide multiple intensity measurements per pixel, yet often semantic segmentation, material recognition, and object recognition do not utilize this information.
We utilize per-pixel angular luminance distributions as a key feature in discriminating the material of the surface.
For real-world materials there is significant intra-class variation that can be managed by building an angular luminance network (AngLNet).
- Score: 6.374538197161135
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Moving cameras provide multiple intensity measurements per pixel, yet often
semantic segmentation, material recognition, and object recognition do not
utilize this information. With basic alignment over several frames of a moving
camera sequence, a distribution of intensities over multiple angles is
obtained. It is well known from prior work that luminance histograms and the
statistics of natural images provide a strong material recognition cue. We
utilize per-pixel {\it angular luminance distributions} as a key feature in
discriminating the material of the surface. The angle-space sampling in a
multiview satellite image sequence is an unstructured sampling of the
underlying reflectance function of the material. For real-world materials there
is significant intra-class variation that can be managed by building an angular
luminance network (AngLNet). This network combines angular reflectance cues
from multiple images with spatial cues as input to fully convolutional networks
for material segmentation. We demonstrate the increased performance of AngLNet
over prior state-of-the-art in material segmentation from satellite imagery.
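The core feature the abstract describes — a per-pixel distribution of intensities gathered across aligned views — can be sketched as a luminance histogram per pixel. The function name, bin count, and normalization below are illustrative assumptions, not AngLNet's actual feature-extraction pipeline:

```python
import numpy as np

def angular_luminance_histograms(frames, n_bins=16):
    """Per-pixel luminance histograms over an aligned multi-view stack.

    frames: array of shape (n_views, H, W) with aligned grayscale
    intensities in [0, 1]; each pixel's values across views form an
    unstructured sample of the surface's reflectance function.
    Returns an (H, W, n_bins) feature map (sketch only).
    """
    n_views, h, w = frames.shape
    # Quantize each intensity into a histogram bin index.
    bins = np.clip((frames * n_bins).astype(int), 0, n_bins - 1)
    hist = np.zeros((h, w, n_bins), dtype=np.float32)
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    for v in range(n_views):
        # Scatter-add one count per pixel into that pixel's bin.
        np.add.at(hist, (rows, cols, bins[v]), 1.0)
    return hist / n_views  # bins sum to 1 per pixel
```

In a full pipeline these histogram maps would be concatenated with the RGB channels and fed to a fully convolutional segmentation network, mirroring the paper's combination of angular and spatial cues.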
Related papers
- RMAFF-PSN: A Residual Multi-Scale Attention Feature Fusion Photometric Stereo Network [37.759675702107586]
Predicting accurate maps of objects from two-dimensional images in regions of complex structure and spatial material variations is challenging.
We propose a method that calibrates feature information from different resolution stages and scales of the image.
This approach preserves more physical information, such as texture and geometry of the object in complex regions.
arXiv Detail & Related papers (2024-04-11T14:05:37Z)
- MatSpectNet: Material Segmentation Network with Domain-Aware and Physically-Constrained Hyperspectral Reconstruction [13.451692195639696]
MatSpectNet is a new model to segment materials with recovered hyperspectral images from RGB images.
It exploits the principles of colour perception in modern cameras to constrain the reconstructed hyperspectral images.
It attains a 1.60% increase in average pixel accuracy and a 3.42% improvement in mean class accuracy compared with the most recent publication.
arXiv Detail & Related papers (2023-07-21T10:02:02Z)
- Learning-based Spatial and Angular Information Separation for Light Field Compression [29.827366575505557]
We propose a novel neural network that can separate angular and spatial information of a light field.
The network represents spatial information using spatial kernels shared among all Sub-Aperture Images (SAIs), and angular information using sets of angular kernels for each SAI.
arXiv Detail & Related papers (2023-04-13T08:02:38Z)
- Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper pursues the holistic goal of maintaining spatially-precise high-resolution representations through the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z)
- NeROIC: Neural Rendering of Objects from Online Image Collections [42.02832046768925]
We present a novel method to acquire object representations from online image collections, capturing high-quality geometry and material properties of arbitrary objects.
This enables various object-centric rendering applications such as novel-view synthesis, relighting, and harmonized background composition.
arXiv Detail & Related papers (2022-01-07T16:45:15Z)
- Multi-Content Complementation Network for Salient Object Detection in Optical Remote Sensing Images [108.79667788962425]
Salient object detection in optical remote sensing images (RSI-SOD) remains a challenging emerging topic.
We propose a novel Multi-Content Complementation Network (MCCNet) to explore the complementarity of multiple content for RSI-SOD.
In MCCM, we consider multiple types of features that are critical to RSI-SOD, including foreground features, edge features, background features, and global image-level features.
arXiv Detail & Related papers (2021-12-02T04:46:40Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
- Generative Modelling of BRDF Textures from Flash Images [50.660026124025265]
We learn a latent space that enables easy capture, semantic editing, and consistent, efficient reproduction of visual material appearance.
In a second step, conditioned on the material code, our method produces an infinite and diverse spatial field of BRDF model parameters.
arXiv Detail & Related papers (2021-02-23T18:45:18Z)
- Differential Viewpoints for Ground Terrain Material Recognition [32.91058153755717]
We build a large-scale material database to support ground terrain recognition for applications such as autonomous driving and robot navigation.
We develop a novel approach for material recognition called texture-encoded angular network (TEAN) that combines deep encoding of RGB information and differential angular images for angular-gradient features.
Our results show that TEAN achieves recognition performance that surpasses single view performance and standard (non-differential/large-angle sampling) multiview performance.
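The differential angular images TEAN combines with RGB encodings can be loosely approximated as finite differences between adjacent, aligned views. This is an illustrative stand-in (the function name and simple adjacent-view differencing scheme are assumptions, not the paper's exact formulation):

```python
import numpy as np

def differential_angular_images(frames):
    """Differential angular images from a small-baseline view stack.

    frames: (n_views, H, W) aligned grayscale views ordered by viewing
    angle. Differencing adjacent views approximates the angular
    gradient of reflectance at each pixel.
    Returns an (n_views - 1, H, W) stack of finite differences.
    """
    return np.diff(frames, axis=0)
```

In a TEAN-style setup, these angular-gradient maps would feed a separate encoding branch whose features are fused with a deep encoding of the RGB texture.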
arXiv Detail & Related papers (2020-09-22T02:57:28Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
- Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.