SAR-NeRF: Neural Radiance Fields for Synthetic Aperture Radar Multi-View
Representation
- URL: http://arxiv.org/abs/2307.05087v1
- Date: Tue, 11 Jul 2023 07:37:56 GMT
- Title: SAR-NeRF: Neural Radiance Fields for Synthetic Aperture Radar Multi-View
Representation
- Authors: Zhengxin Lei, Feng Xu, Jiangtao Wei, Feng Cai, Feng Wang, and Ya-Qiu
Jin
- Abstract summary: This study combines SAR imaging mechanisms with neural networks to propose a novel NeRF model for SAR image generation.
SAR-NeRF is constructed to learn the distribution of attenuation coefficients and scattering intensities of voxels.
It is found that a SAR-NeRF-augmented dataset can significantly improve SAR target classification performance under a few-shot learning setup.
- Score: 7.907504142396784
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: SAR images are highly sensitive to observation configurations, and they
exhibit significant variations across different viewing angles, making it
challenging to represent and learn their anisotropic features. As a result,
deep learning methods often generalize poorly across different view angles.
Inspired by the concept of neural radiance fields (NeRF), this study combines
SAR imaging mechanisms with neural networks to propose a novel NeRF model for
SAR image generation. Following the mapping and projection principles, a set of
SAR images is modeled implicitly as a function of attenuation coefficients and
scattering intensities in the 3D imaging space through a differentiable
rendering equation. SAR-NeRF is then constructed to learn the distribution of
attenuation coefficients and scattering intensities of voxels, where the
vectorized form of 3D voxel SAR rendering equation and the sampling
relationship between the 3D space voxels and the 2D view ray grids are
analytically derived. Through quantitative experiments on various datasets, we
thoroughly assess the multi-view representation and generalization capabilities
of SAR-NeRF. Additionally, it is found that a SAR-NeRF-augmented dataset can
significantly improve SAR target classification performance under a few-shot
learning setup, where a 10-class classification accuracy of 91.6% can be
achieved using only 12 images per class.
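The rendering formulation described in the abstract lends itself to a short sketch. The snippet below is a hypothetical, simplified illustration rather than the authors' code: an MLP maps a 3D sample position to a non-negative attenuation coefficient and scattering intensity, and samples along a radar line of sight are composited with an exponential-attenuation rule, analogous to NeRF volume rendering. The paper's actual network, imaging geometry, and vectorized voxel form are not reproduced here.

```python
# Hypothetical sketch (not the authors' implementation) of a NeRF-style field
# predicting attenuation and scattering, composited along a ray.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SARField(nn.Module):
    """Maps a 3D position to (attenuation coefficient, scattering intensity)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, xyz):
        out = self.mlp(xyz)
        attenuation = F.softplus(out[..., 0])  # constrained >= 0
        scattering = F.softplus(out[..., 1])   # constrained >= 0
        return attenuation, scattering

def render_ray(field, points, deltas):
    """Accumulate scattering along one ray of samples.

    points: (N, 3) sample positions along the radar line of sight.
    deltas: (N,) spacing between consecutive samples.
    Returns a single rendered intensity for the ray.
    """
    attenuation, scattering = field(points)
    # Transmittance up to each sample via exponential attenuation.
    transmittance = torch.exp(-torch.cumsum(attenuation * deltas, dim=0))
    # Each sample contributes its scattering weighted by transmittance.
    return torch.sum(transmittance * scattering * deltas)
```

Training such a field would compare intensities rendered under a known imaging geometry against the corresponding SAR image pixels and backpropagate through the accumulation; rendered novel-view images could then be used to augment a few-shot classification training set, as the abstract reports.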
Related papers
- Sparse-DeRF: Deblurred Neural Radiance Fields from Sparse View [17.214047499850487]
This paper focuses on constructing deblurred neural radiance fields (DeRF) from sparse-view for more pragmatic real-world scenarios.
Sparse-DeRF successfully regularizes the complicated joint optimization, presenting alleviated overfitting artifacts and enhanced quality on radiance fields.
We demonstrate the effectiveness of the Sparse-DeRF with extensive quantitative and qualitative experimental results by training DeRF from 2-view, 4-view, and 6-view blurry images.
arXiv Detail & Related papers (2024-07-09T07:36:54Z)
- Learning Surface Scattering Parameters From SAR Images Using Differentiable Ray Tracing [8.19502673278742]
This paper proposes a surface microwave rendering model that comprehensively considers both specular and diffuse contributions.
A differentiable ray tracing (DRT) engine based on SAR images was constructed for CSVBSDF surface scattering parameter learning.
The effectiveness of this approach has been validated through simulations and comparisons with real SAR images.
arXiv Detail & Related papers (2024-01-02T12:09:06Z)
- Leveraging Neural Radiance Fields for Uncertainty-Aware Visual Localization [56.95046107046027]
We propose to leverage Neural Radiance Fields (NeRF) to generate training samples for scene coordinate regression.
Despite NeRF's efficiency in rendering, many of the rendered data are polluted by artifacts or only contain minimal information gain.
arXiv Detail & Related papers (2023-10-10T20:11:13Z)
- Multi-Space Neural Radiance Fields [74.46513422075438]
Existing Neural Radiance Fields (NeRF) methods struggle in scenes containing reflective objects.
We propose a multi-space neural radiance field (MS-NeRF) that represents the scene using a group of feature fields in parallel sub-spaces.
Our approach significantly outperforms the existing single-space NeRF methods for rendering high-quality scenes.
arXiv Detail & Related papers (2023-05-07T13:11:07Z)
- Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction [77.69363640021503]
3D-aware image synthesis encompasses a variety of tasks, such as scene generation and novel view synthesis from images.
We present SSDNeRF, a unified approach that employs an expressive diffusion model to learn a generalizable prior of neural radiance fields (NeRF) from multi-view images of diverse objects.
arXiv Detail & Related papers (2023-04-13T17:59:01Z)
- CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
arXiv Detail & Related papers (2022-09-02T17:44:50Z)
- Differentiable SAR Renderer and SAR Target Reconstruction [7.840247953745616]
A differentiable SAR renderer (DSR) is developed, which reformulates the mapping and projection of the SAR imaging mechanism.
A 3D inverse target reconstruction algorithm from SAR images is devised.
arXiv Detail & Related papers (2022-05-14T17:24:32Z)
- Enhancement of Novel View Synthesis Using Omnidirectional Image Completion [61.78187618370681]
We present a method for synthesizing novel views from a single 360-degree RGB-D image based on the neural radiance field (NeRF).
Experiments demonstrated that the proposed method can synthesize plausible novel views while preserving the features of the scene for both artificial and real-world data.
arXiv Detail & Related papers (2022-03-18T13:49:25Z)
- NeRF-SR: High-Quality Neural Radiance Fields using Super-Sampling [82.99453001445478]
We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis with mostly low-resolution (LR) inputs.
Our method is built upon Neural Radiance Fields (NeRF) that predicts per-point density and color with a multi-layer perceptron.
arXiv Detail & Related papers (2021-12-03T07:33:47Z)
- An Arbitrary Scale Super-Resolution Approach for 3-Dimensional Magnetic Resonance Image using Implicit Neural Representation [37.43985628701494]
High Resolution (HR) medical images provide rich anatomical structure details to facilitate early and accurate diagnosis.
Recent studies showed that, with deep convolutional neural networks, isotropic HR MR images could be recovered from low-resolution (LR) input.
We propose ArSSR, an Arbitrary Scale Super-Resolution approach for recovering 3D HR MR images.
arXiv Detail & Related papers (2021-10-27T14:48:54Z)
- Sparse Signal Models for Data Augmentation in Deep Learning ATR [0.8999056386710496]
We propose a data augmentation approach to incorporate domain knowledge and improve the generalization power of a data-intensive learning algorithm.
We exploit the sparsity of the scattering centers in the spatial domain and the smoothly-varying structure of the scattering coefficients in the azimuthal domain to solve the ill-posed problem of over-parametrized model fitting.
arXiv Detail & Related papers (2020-12-16T21:46:33Z)