MultiBARF: Integrating Imagery of Different Wavelength Regions by Using Neural Radiance Fields
- URL: http://arxiv.org/abs/2503.15070v1
- Date: Wed, 19 Mar 2025 10:08:29 GMT
- Title: MultiBARF: Integrating Imagery of Different Wavelength Regions by Using Neural Radiance Fields
- Authors: Kana Kurata, Hitoshi Niigaki, Xiaojun Wu, Ryuichi Tanida
- Abstract summary: We develop MultiBARF to make data preparation easier for users unfamiliar with sensing and image processing. Our method superimposes two color channels of those sensor images on NeRF.
- Score: 5.9426090202741735
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Optical sensor applications have become widespread through digital transformation. Linking observed data to real-world locations and combining different image sensors are essential to making these applications practical and efficient. However, preparing data to try different sensor combinations requires substantial expertise in sensing and image processing. To make data preparation easier for users unfamiliar with sensing and image processing, we have developed MultiBARF. This method replaces co-registration and geometric calibration by synthesizing pairs of two different sensor images and depth images at assigned viewpoints. Our method extends Bundle Adjusting Neural Radiance Fields (BARF), a deep neural network-based novel view synthesis method, to the two imagers. Through experiments on visible-light and thermographic images, we demonstrate that our method superimposes the two color channels of those sensor images on NeRF.
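The core idea of the abstract, one radiance field supervised by two different imagers, can be sketched as a single NeRF-style network with a shared geometry branch and one color head per sensor. The sketch below is a hypothetical illustration only (layer sizes, head names, and the NumPy forward pass are assumptions, not the authors' implementation): a visible-light head and a thermal head read the same shared features, so both image modalities constrain the same 3-D scene.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(sizes, rng):
    """Random weights and biases for a small fully connected network."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    """Forward pass with ReLU on every layer except the last."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)
    return x

# Shared trunk: 3-D point -> feature vector (geometry is common to both sensors).
trunk = mlp_params([3, 32, 32], rng)
density_head = mlp_params([32, 1], rng)
rgb_head = mlp_params([32, 3], rng)      # visible-light radiance head
thermal_head = mlp_params([32, 1], rng)  # thermographic radiance head

def field(x):
    """Evaluate density plus two per-sensor radiances at points x (N, 3)."""
    h = mlp(trunk, x)
    sigma = np.log1p(np.exp(mlp(density_head, h)))       # softplus: density >= 0
    rgb = 1.0 / (1.0 + np.exp(-mlp(rgb_head, h)))        # sigmoid: colors in (0, 1)
    thermal = 1.0 / (1.0 + np.exp(-mlp(thermal_head, h)))
    return sigma, rgb, thermal

pts = rng.standard_normal((8, 3))
sigma, rgb, thermal = field(pts)
print(sigma.shape, rgb.shape, thermal.shape)  # (8, 1) (8, 3) (8, 1)
```

Because the density comes from the shared trunk, rays rendered for either sensor traverse identical geometry; only the emitted radiance differs per head, which is what allows the two color channels to be superimposed on one NeRF.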
Related papers
- Multispectral Texture Synthesis using RGB Convolutional Neural Networks [2.3213238782019316]
State-of-the-art RGB texture synthesis algorithms rely on style distances that are computed through statistics of deep features.
We propose two solutions to extend these methods to multispectral imaging.
arXiv Detail & Related papers (2024-10-21T13:49:54Z)
- Deep Learning Based Speckle Filtering for Polarimetric SAR Images. Application to Sentinel-1 [51.404644401997736]
We propose a complete framework to remove speckle in polarimetric SAR images using a convolutional neural network.
Experiments show that the proposed approach offers exceptional results in both speckle reduction and resolution preservation.
arXiv Detail & Related papers (2024-08-28T10:07:17Z)
- Bootstrapping Interactive Image-Text Alignment for Remote Sensing Image Captioning [49.48946808024608]
We propose a novel two-stage vision-language pre-training-based approach to bootstrap interactive image-text alignment for remote sensing image captioning, called BITA.
Specifically, the first stage involves preliminary alignment through image-text contrastive learning.
In the second stage, the interactive Fourier Transformer connects the frozen image encoder with a large language model.
arXiv Detail & Related papers (2023-12-02T17:32:17Z)
- Dual UNet: A Novel Siamese Network for Change Detection with Cascade Differential Fusion [4.651756476458979]
We propose a novel Siamese neural network for change detection task, namely Dual-UNet.
In contrast to previous methods that encode the bitemporal images individually, we design an encoder differential-attention module to focus on the spatial difference relationships of pixels.
Experiments demonstrate that the proposed approach consistently outperforms the most advanced methods on popular seasonal change detection datasets.
arXiv Detail & Related papers (2022-08-12T14:24:09Z)
- On Learning the Invisible in Photoacoustic Tomography with Flat Directionally Sensitive Detector [0.27074235008521236]
In this paper, we focus on the second type, caused by a varying sensitivity of the sensor to the incoming wavefront direction.
The visible ranges, in image and data domains, are related by the wavefront direction mapping.
We optimally combine fast approximate operators with tailored deep neural network architectures into efficient learned reconstruction methods.
arXiv Detail & Related papers (2022-04-21T09:57:01Z)
- Learning Online Multi-Sensor Depth Fusion [100.84519175539378]
SenFuNet is a depth fusion approach that learns sensor-specific noise and outlier statistics.
We conduct experiments with various sensor combinations on the real-world CoRBS and Scene3D datasets.
arXiv Detail & Related papers (2022-04-07T10:45:32Z)
- Learning Enriched Illuminants for Cross and Single Sensor Color Constancy [182.4997117953705]
We propose cross-sensor self-supervised training to train the network.
We train the network by randomly sampling the artificial illuminants in a sensor-independent manner.
Experiments show that our cross-sensor model and single-sensor model outperform other state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2022-03-21T15:45:35Z)
- Thermal Image Processing via Physics-Inspired Deep Networks [21.094006629684376]
DeepIR combines physically accurate sensor modeling with deep network-based image representation.
DeepIR requires neither training data nor periodic ground-truth calibration with a known black body target.
Simulated and real data experiments demonstrate that DeepIR can perform high-quality non-uniformity correction with as few as three images.
arXiv Detail & Related papers (2021-08-18T04:57:48Z)
- PlenoptiCam v1.0: A light-field imaging framework [8.467466998915018]
Light-field cameras play a vital role in rich 3-D information retrieval for narrow-range depth sensing applications.
A key obstacle in composing light-fields from exposures taken by a plenoptic camera is to computationally calibrate, align, and rearrange four-dimensional image data.
Several attempts have been proposed to enhance the overall image quality by tailoring pipelines dedicated to particular plenoptic cameras.
arXiv Detail & Related papers (2020-10-14T09:23:18Z)
- Single-shot Hyperspectral-Depth Imaging with Learned Diffractive Optics [72.9038524082252]
We propose a compact single-shot monocular hyperspectral-depth (HS-D) imaging method.
Our method uses a diffractive optical element (DOE), the point spread function of which changes with respect to both depth and spectrum.
To facilitate learning the DOE, we present the first HS-D dataset, built with a benchtop HS-D imager.
arXiv Detail & Related papers (2020-09-01T14:19:35Z)
- Two-shot Spatially-varying BRDF and Shape Estimation [89.29020624201708]
We propose a novel deep learning architecture with a stage-wise estimation of shape and SVBRDF.
We create a large-scale synthetic training dataset with domain-randomized geometry and realistic materials.
Experiments on both synthetic and real-world datasets show that our network trained on a synthetic dataset can generalize well to real-world images.
arXiv Detail & Related papers (2020-04-01T12:56:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.