Nonparametric Data Analysis on the Space of Perceived Colors
- URL: http://arxiv.org/abs/2004.03402v1
- Date: Sun, 5 Apr 2020 17:43:33 GMT
- Title: Nonparametric Data Analysis on the Space of Perceived Colors
- Authors: Vic Patrangenaru and Yifang Deng
- Abstract summary: This article is concerned with perceived colors regarded as random objects on a Resnikoff 3D homogeneous space model.
Two applications to color differentiation in machine vision are illustrated for the proposed statistical methodology.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Moving around in a 3D world requires the visual system of a living
individual to rely on three channels of image recognition, carried out
through three types of retinal cones. Newton, Grassmann, Helmholtz and
Schrödinger laid down the basic assumptions needed to understand
color vision. Such concepts were furthered by Resnikoff, who imagined the
space of perceived colors as a 3D homogeneous space.
This article is concerned with perceived colors regarded as random objects on
a Resnikoff 3D homogeneous space model. Two applications to color
differentiation in machine vision are illustrated for the proposed statistical
methodology, applied to the Euclidean model for perceived colors.
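To make the flavor of such nonparametric color differentiation concrete, here is a minimal sketch (not the authors' actual procedure) that treats perceived colors as random 3D vectors in a Euclidean color model and uses a bootstrap to compare the mean colors of two samples. The sample values and sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical samples: two batches of measured colors, each a point in a
# Euclidean model of the space of perceived colors (e.g. linear RGB).
sample_a = rng.normal(loc=[0.55, 0.40, 0.30], scale=0.05, size=(60, 3))
sample_b = rng.normal(loc=[0.58, 0.40, 0.30], scale=0.05, size=(60, 3))

def bootstrap_mean_distance(a, b, n_boot=5000):
    """Bootstrap distribution of the Euclidean distance between sample means."""
    dists = np.empty(n_boot)
    for i in range(n_boot):
        ra = a[rng.integers(0, len(a), len(a))]  # resample with replacement
        rb = b[rng.integers(0, len(b), len(b))]
        dists[i] = np.linalg.norm(ra.mean(axis=0) - rb.mean(axis=0))
    return dists

observed = np.linalg.norm(sample_a.mean(axis=0) - sample_b.mean(axis=0))
boot = bootstrap_mean_distance(sample_a, sample_b)
# 95% bootstrap confidence interval for the distance between mean colors;
# an interval well away from 0 suggests the two colors are distinguishable.
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
print(f"observed distance: {observed:.4f}, 95% CI: [{ci_lo:.4f}, {ci_hi:.4f}]")
```

The paper's methodology works with the geometry of the Resnikoff homogeneous space; this Euclidean bootstrap is only the simplest analogue of that idea.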
Related papers
- Visibility-Uncertainty-guided 3D Gaussian Inpainting via Scene Conceptional Learning [63.94919846010485]
3D Gaussian inpainting (3DGI) is challenging in effectively leveraging complementary visual and semantic cues from multiple input views.
We propose a method that measures the visibility uncertainties of 3D points across different input views and uses them to guide 3DGI.
We build a novel 3DGI framework, VISTA, by integrating VISibility-uncerTainty-guided 3DGI with scene conceptuAl learning.
arXiv Detail & Related papers (2025-04-23T06:21:11Z) - A Computational Framework for Modeling Emergence of Color Vision in the Human Brain [9.10623460958915]
It is a mystery how the brain decodes color vision purely from the optic nerve signals it receives.
We introduce a computational framework for modeling this emergence of human color vision by simulating both the eye and the cortex.
arXiv Detail & Related papers (2024-08-29T21:27:06Z) - SpecGaussian with Latent Features: A High-quality Modeling of the View-dependent Appearance for 3D Gaussian Splatting [11.978842116007563]
Latent-SpecGS is an approach that utilizes a universal latent neural descriptor within each 3D Gaussian.
Two parallel CNNs are designed to decode the splatting feature maps into diffuse color and specular color separately.
A mask that depends on the viewpoint is learned to merge these two colors, resulting in the final rendered image.
arXiv Detail & Related papers (2024-08-23T15:25:08Z) - Computational Trichromacy Reconstruction: Empowering the Color-Vision Deficient to Recognize Colors Using Augmented Reality [12.77228283953913]
We propose an assistive technology that helps individuals with Color Vision Deficiencies (CVD) to recognize/name colors.
A dichromat's color perception is a reduced two-dimensional (2D) subset of a normal trichromat's three-dimensional (3D) color perception.
Using our proposed system, CVD individuals can interactively induce distinct changes to originally confusing colors via a computational color space transformation.
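The 2D-subset idea can be illustrated with a toy sketch (not the paper's method): in an LMS-like cone-response space, a dichromat missing one cone type effectively sees a 2D projection, so two colors that differ only along the missing axis become confusable. All values below are made up for illustration.

```python
import numpy as np

def dichromat_view(lms, missing=0):
    """Project a 3D cone response onto the 2D subspace a dichromat retains."""
    lms = np.asarray(lms, dtype=float)
    keep = [i for i in range(3) if i != missing]
    return lms[keep]

c1 = np.array([0.8, 0.5, 0.2])
c2 = np.array([0.3, 0.5, 0.2])  # differs from c1 only in the first cone type
# Both colors project to the same 2D point for a dichromat missing cone 0,
# i.e. they are perceptually confusable in this toy model.
```

A recoloring aid like the one proposed in the paper would remap such confusable pairs so that their projections differ.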
arXiv Detail & Related papers (2024-08-04T01:34:22Z) - THOR2: Leveraging Topological Soft Clustering of Color Space for Human-Inspired Object Recognition in Unseen Environments [1.9950682531209158]
This study presents a 3D shape and color-based descriptor, TOPS2, for point clouds generated from RGB-D images and an accompanying recognition framework, THOR2.
The TOPS2 descriptor embodies object unity, a human cognition mechanism, by retaining the slicing-based topological representation of 3D shape from the TOPS descriptor.
THOR2, trained using synthetic data, demonstrates markedly improved recognition accuracy compared to THOR, its 3D shape-based predecessor.
arXiv Detail & Related papers (2024-08-02T21:24:14Z) - Towards Human-Level 3D Relative Pose Estimation: Generalizable, Training-Free, with Single Reference [62.99706119370521]
Humans can easily deduce the relative pose of an unseen object, without label/training, given only a single query-reference image pair.
We propose a novel 3D generalizable relative pose estimation method by elaborating (i) with a 2.5D shape from an RGB-D reference, (ii) with an off-the-shelf differentiable renderer, and (iii) with semantic cues from a pretrained model like DINOv2.
arXiv Detail & Related papers (2024-06-26T16:01:10Z) - Colorizing Monochromatic Radiance Fields [55.695149357101755]
We consider reproducing color from monochromatic radiance fields as a representation-prediction task in the Lab color space.
By first constructing the luminance and density representation using monochromatic images, our prediction stage can recreate color representation on the basis of an image colorization module.
We then reproduce a colorful implicit model through the representation of luminance, density, and color.
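The Lab decomposition used above separates luminance from color: the L channel carries the "monochrome" part, while a and b carry the chromatic part to be predicted. A minimal sketch of the standard sRGB-to-Lab conversion (IEC 61966-2-1 matrix, D65 white) makes this split explicit; it is background math, not the paper's prediction module.

```python
import numpy as np

# Standard sRGB (D65) -> XYZ linear transform, IEC 61966-2-1.
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
WHITE = np.array([0.95047, 1.0, 1.08883])  # D65 reference white

def srgb_to_lab(rgb):
    """Convert one sRGB color (components in [0, 1]) to CIE Lab."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma to get linear light.
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = M @ lin / WHITE
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16          # luminance channel (the monochrome part)
    a = 500 * (f[0] - f[1])      # green-red opponent channel
    b = 200 * (f[1] - f[2])      # blue-yellow opponent channel
    return L, a, b

L, a, b = srgb_to_lab([1.0, 1.0, 1.0])  # white: L near 100, a and b near 0
```

A colorization module in this setting takes L as input and predicts the (a, b) pair.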
arXiv Detail & Related papers (2024-02-19T14:47:23Z) - ChromaDistill: Colorizing Monochrome Radiance Fields with Knowledge Distillation [23.968181738235266]
We present a method for colorized novel views from input grayscale multi-view images.
We propose a distillation-based method that transfers color from these networks trained on natural images to the target 3D representation.
Our method is agnostic to the underlying 3D representation and easily generalizable to NeRF and 3DGS methods.
arXiv Detail & Related papers (2023-09-14T12:30:48Z) - SSR-2D: Semantic 3D Scene Reconstruction from 2D Images [54.46126685716471]
In this work, we explore a central 3D scene modeling task, namely, semantic scene reconstruction without using any 3D annotations.
The key idea of our approach is to design a trainable model that employs both incomplete 3D reconstructions and their corresponding source RGB-D images.
Our method achieves the state-of-the-art performance of semantic scene completion on two large-scale benchmark datasets MatterPort3D and ScanNet.
arXiv Detail & Related papers (2023-02-07T17:47:52Z) - Uncertainty Guided Policy for Active Robotic 3D Reconstruction using
Neural Radiance Fields [82.21033337949757]
This paper introduces a ray-based volumetric uncertainty estimator, which computes the entropy of the weight distribution of the color samples along each ray of the object's implicit neural representation.
We show that it is possible to infer the uncertainty of the underlying 3D geometry given a novel view with the proposed estimator.
We present a next-best-view selection policy guided by the ray-based volumetric uncertainty in neural radiance fields-based representations.
arXiv Detail & Related papers (2022-09-17T21:28:57Z) - DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance
Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both the 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z) - Category-Level 3D Non-Rigid Registration from Single-View RGB Images [28.874008960264202]
We propose a novel approach to solve the 3D non-rigid registration problem from RGB images using CNNs.
Our objective is to find a deformation field that warps a given 3D canonical model into a novel instance observed by a single-view RGB image.
arXiv Detail & Related papers (2020-08-17T10:35:19Z) - Appearance Consensus Driven Self-Supervised Human Mesh Recovery [67.20942777949793]
We present a self-supervised human mesh recovery framework to infer human pose and shape from monocular images.
We achieve state-of-the-art results on the standard model-based 3D pose estimation benchmarks.
The resulting colored mesh prediction opens up the usage of our framework for a variety of appearance-related tasks beyond the pose and shape estimation.
arXiv Detail & Related papers (2020-08-04T05:40:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.