Multiplexed Illumination for Classifying Visually Similar Objects
- URL: http://arxiv.org/abs/2009.11084v1
- Date: Wed, 23 Sep 2020 12:10:06 GMT
- Title: Multiplexed Illumination for Classifying Visually Similar Objects
- Authors: Taihua Wang and Donald G. Dansereau
- Abstract summary: We propose the use of multiplexed illumination to extend the range of objects that can be successfully classified.
We construct a compact RGB-IR light stage that images samples under different combinations of illuminant position and colour.
We then develop a methodology for selecting illumination patterns and training a classifier using the resulting imagery.
- Score: 2.715884199292287
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Distinguishing visually similar objects like forged/authentic bills and
healthy/unhealthy plants is beyond the capabilities of even the most
sophisticated classifiers. We propose the use of multiplexed illumination to
extend the range of objects that can be successfully classified. We construct a
compact RGB-IR light stage that images samples under different combinations of
illuminant position and colour. We then develop a methodology for selecting
illumination patterns and training a classifier using the resulting imagery. We
use the light stage to model and synthetically relight training samples, and
propose a greedy pattern selection scheme that exploits this ability to train
in simulation. We then apply the trained patterns to carry out fast
classification of new objects. We demonstrate the approach on visually similar
artificial and real fruit samples, showing a marked improvement compared with
fixed-illuminant approaches as well as a more conventional code selection
scheme. This work allows fast classification of previously indistinguishable
objects, with potential applications in forgery detection, quality control in
agriculture and manufacturing, and skin lesion classification.
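The abstract's methodology (synthetically relight training samples from single-illuminant captures, then greedily pick illumination patterns that best separate the classes in simulation) can be sketched as follows. This is a minimal illustration, not the authors' code: the names `basis_images`, `relight`, and `greedy_select`, the linear-relighting assumption, and the scoring function are all assumptions made for the sketch.

```python
import numpy as np

def relight(basis_images, pattern):
    """Synthesize images under an illumination `pattern` (one weight per
    illuminant), assuming light transport is linear so any pattern is a
    weighted sum of single-illuminant basis images.

    basis_images: (n_samples, n_illuminants, H, W)
    pattern:      (n_illuminants,)
    returns:      (n_samples, H, W)
    """
    return np.tensordot(basis_images, pattern, axes=([1], [0]))

def greedy_select(basis_images, labels, candidates, k, score_fn):
    """Greedily pick k patterns that maximize a classifier score in simulation.

    At each step, every candidate pattern is tried in combination with the
    patterns chosen so far, and the one giving the best score is kept.
    """
    chosen = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for p in candidates:
            trial = chosen + [p]
            # One synthetic image per selected pattern, stacked as channels.
            feats = np.stack([relight(basis_images, q) for q in trial], axis=1)
            s = score_fn(feats.reshape(len(labels), -1), labels)
            if s > best_score:
                best, best_score = p, s
        chosen.append(best)
    return chosen
```

In practice `score_fn` would be cross-validated classifier accuracy on the synthetically relit training set; a simple class-separation measure works as a stand-in.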
Related papers
- ConDL: Detector-Free Dense Image Matching [2.7582789611575897]
We introduce a deep-learning framework designed for estimating dense image correspondences.
Our fully convolutional model generates dense feature maps for images, where each pixel is associated with a descriptor that can be matched across multiple images.
arXiv Detail & Related papers (2024-08-05T18:34:15Z)
- Accurate Explanation Model for Image Classifiers using Class Association Embedding [5.378105759529487]
We propose a generative explanation model that combines the advantages of global and local knowledge.
Class association embedding (CAE) encodes each sample into a pair of separated class-associated and individual codes.
Building-block coherency feature extraction algorithm is proposed that efficiently separates class-associated features from individual ones.
arXiv Detail & Related papers (2024-06-12T07:41:00Z)
- Diversified in-domain synthesis with efficient fine-tuning for few-shot classification [64.86872227580866]
Few-shot image classification aims to learn an image classifier using only a small set of labeled examples per class.
We propose DISEF, a novel approach which addresses the generalization challenge in few-shot learning using synthetic data.
We validate our method in ten different benchmarks, consistently outperforming baselines and establishing a new state-of-the-art for few-shot classification.
arXiv Detail & Related papers (2023-12-05T17:18:09Z)
- Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study of the detection of deepfakes generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z)
- Designing An Illumination-Aware Network for Deep Image Relighting [69.750906769976]
We present an Illumination-Aware Network (IAN) which follows the guidance from hierarchical sampling to progressively relight a scene from a single image.
In addition, an Illumination-Aware Residual Block (IARB) is designed to approximate the physical rendering process.
Experimental results show that our proposed method produces better quantitative and qualitative relighting results than previous state-of-the-art methods.
arXiv Detail & Related papers (2022-07-21T16:21:24Z)
- DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable rendering.
In this work, we propose DIBR++, a hybrid differentiable renderer which supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIBR++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z)
- Enhance Images as You Like with Unpaired Learning [8.104571453311442]
We propose a lightweight one-path conditional generative adversarial network (cGAN) to learn a one-to-many relation from low-light to normal-light image space.
Our network learns to generate a collection of enhanced images from a given input conditioned on various reference images.
Our model achieves competitive visual and quantitative results on par with fully supervised methods on both noisy and clean datasets.
arXiv Detail & Related papers (2021-10-04T03:00:44Z)
- Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z)
- Sill-Net: Feature Augmentation with Separated Illumination Representation [35.25230715669166]
We propose a novel neural network architecture called Separating-Illumination Network (Sill-Net).
Sill-Net learns to separate illumination features from images, and then during training we augment training samples with these separated illumination features in the feature space.
Experimental results demonstrate that our approach outperforms current state-of-the-art methods in several object classification benchmarks.
arXiv Detail & Related papers (2021-02-06T09:00:10Z)
- CSI: Novelty Detection via Contrastive Learning on Distributionally Shifted Instances [77.28192419848901]
We propose a simple, yet effective method named contrasting shifted instances (CSI).
In addition to contrasting a given sample with other instances as in conventional contrastive learning methods, our training scheme contrasts the sample with distributionally-shifted augmentations of itself.
Our experiments demonstrate the superiority of our method under various novelty detection scenarios.
arXiv Detail & Related papers (2020-07-16T08:32:56Z)
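The CSI training scheme above, where a sample is contrasted not only against other instances but also against distributionally-shifted augmentations of itself, can be sketched in terms of how the positive and negative sets are built. This is an illustrative assumption-laden sketch, not the paper's implementation: the 90-degree rotations as the shifting transformation and the placeholder mild augmentation are choices made here for brevity.

```python
import numpy as np

def shifted_views(x):
    """Hard, distribution-shifting transformations: rotations by 90/180/270
    degrees (an assumed choice of shift for this sketch)."""
    return [np.rot90(x, k) for k in (1, 2, 3)]

def csi_pairs(batch):
    """Build (anchor, positive, negatives) triples for a batch of images.

    Positives: a mild augmentation of the same image (identity copy here, as
    a placeholder). Negatives: the other images in the batch, plus -- the key
    CSI ingredient -- the anchor's own distributionally-shifted views, which
    conventional contrastive learning would instead treat as positives.
    """
    triples = []
    for i, x in enumerate(batch):
        positive = x.copy()                      # placeholder mild augmentation
        negatives = [b for j, b in enumerate(batch) if j != i]
        negatives += shifted_views(x)            # shifted self as negatives
        triples.append((x, positive, negatives))
    return triples
```

A contrastive loss (e.g. NT-Xent) applied over these triples then pushes the embedding of a sample away from its shifted versions, which is what makes the learned representation useful for novelty detection.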
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.