Synthetic Sonar Image Simulation with Various Seabed Conditions for
Automatic Target Recognition
- URL: http://arxiv.org/abs/2210.10267v1
- Date: Wed, 19 Oct 2022 03:08:02 GMT
- Title: Synthetic Sonar Image Simulation with Various Seabed Conditions for
Automatic Target Recognition
- Authors: Jaejeong Shin, Shi Chang, Matthew Bays, Joshua Weaver, Tom Wettergren,
Silvia Ferrari
- Abstract summary: We propose a novel method to generate underwater object imagery that is acoustically compliant with that generated by side-scan sonar using the Unreal Engine.
We describe the process to develop, tune, and generate imagery to provide representative images for use in training automated target recognition (ATR) and machine learning algorithms.
- Score: 1.179296191012968
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose a novel method to generate underwater object imagery that is
acoustically compliant with that generated by side-scan sonar using the Unreal
Engine. We describe the process to develop, tune, and generate imagery to
provide representative images for use in training automated target recognition
(ATR) and machine learning algorithms. The methods provide visual
approximations of acoustic effects such as back-scatter noise and acoustic
shadow, while allowing fast rendering with C++ actors in Unreal Engine to
maximize the size of potential ATR training datasets. Additionally, we analyze
the imagery's utility as a replacement for actual sonar imagery or
physics-based sonar data.
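The abstract describes visually approximating two sonar phenomena on rendered imagery: multiplicative back-scatter (speckle) noise and acoustic shadow behind occluding geometry. The paper implements this inside Unreal Engine; as an illustration only, the same two effects can be sketched in NumPy. The function name, the Rayleigh speckle model, and the shadow attenuation factor below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def apply_sonar_effects(rendered, shadow_mask, noise_scale=0.3, seed=0):
    """Approximate side-scan sonar effects on a rendered grayscale image.

    rendered:    float array in [0, 1] (e.g. an engine render).
    shadow_mask: boolean array, True where geometry occludes the beam.
    """
    rng = np.random.default_rng(seed)
    # Multiplicative speckle: Rayleigh-distributed, a common model for
    # acoustic back-scatter noise (an assumed choice here).
    speckle = rng.rayleigh(scale=noise_scale, size=rendered.shape)
    noisy = rendered * (1.0 + speckle)
    # Acoustic shadow: occluded regions return almost no energy, so
    # attenuate them heavily (factor chosen arbitrarily for the sketch).
    noisy[shadow_mask] *= 0.05
    return np.clip(noisy, 0.0, 1.0)

# Example: a flat seabed with a box-shaped shadow cast behind an object.
img = np.full((64, 64), 0.5)
mask = np.zeros((64, 64), dtype=bool)
mask[20:30, 35:60] = True
out = apply_sonar_effects(img, mask)
```

Shadowed pixels come out far darker than the speckled seabed, which is the visual cue ATR models exploit.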
Related papers
- AV-GS: Learning Material and Geometry Aware Priors for Novel View Acoustic Synthesis [62.33446681243413]
Novel view acoustic synthesis aims to render audio at any target viewpoint, given mono audio emitted by a sound source in a 3D scene.
Existing methods have proposed NeRF-based implicit models to exploit visual cues as a condition for synthesizing audio.
We propose a novel Audio-Visual Gaussian Splatting (AV-GS) model to characterize the entire scene environment.
Experiments validate the superiority of our AV-GS over existing alternatives on the real-world RWAS and simulation-based SoundSpaces datasets.
arXiv Detail & Related papers (2024-06-13T08:34:12Z) - Visual Car Brand Classification by Implementing a Synthetic Image Dataset Creation Pipeline [3.524869467682149]
We propose an automatic pipeline for generating synthetic image datasets using Stable Diffusion.
We leverage YOLOv8 for automatic bounding box detection and quality assessment of synthesized images.
arXiv Detail & Related papers (2024-06-03T07:44:08Z) - Graphical Object-Centric Actor-Critic [55.2480439325792]
We propose a novel object-centric reinforcement learning algorithm combining actor-critic and model-based approaches.
We use a transformer encoder to extract object representations and graph neural networks to approximate the dynamics of an environment.
Our algorithm performs better in a visually complex 3D robotic environment and a 2D environment with compositional structure than the state-of-the-art model-free actor-critic algorithm.
arXiv Detail & Related papers (2023-10-26T06:05:12Z) - AV-NeRF: Learning Neural Fields for Real-World Audio-Visual Scene
Synthesis [61.07542274267568]
We study a new task -- real-world audio-visual scene synthesis -- and a first-of-its-kind NeRF-based approach for multimodal learning.
We propose an acoustic-aware audio generation module that integrates prior knowledge of audio propagation into NeRF.
We present a coordinate transformation module that expresses a view direction relative to the sound source, enabling the model to learn sound source-centric acoustic fields.
arXiv Detail & Related papers (2023-02-04T04:17:19Z) - Listen2Scene: Interactive material-aware binaural sound propagation for
reconstructed 3D scenes [69.03289331433874]
We present an end-to-end audio rendering approach (Listen2Scene) for virtual reality (VR) and augmented reality (AR) applications.
We propose a novel neural-network-based sound propagation method to generate acoustic effects for 3D models of real environments.
arXiv Detail & Related papers (2023-02-02T04:09:23Z) - FSID: Fully Synthetic Image Denoising via Procedural Scene Generation [12.277286575812441]
We develop a procedural synthetic data generation pipeline and dataset tailored to low-level vision tasks.
Our Unreal engine-based synthetic data pipeline populates large scenes algorithmically with a combination of random 3D objects, materials, and geometric transformations.
We then trained and validated a CNN-based denoising model, and demonstrated that the model trained on this synthetic data alone can achieve competitive denoising results.
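FSID trains a denoiser on purely procedural data: algorithmically generated clean scenes corrupted with a sensor noise model. As a hedged illustration of that idea (not the paper's pipeline, which renders 3D scenes in Unreal Engine), the sketch below generates one synthetic clean/noisy pair with NumPy; the scene content, the function name, and the Poisson-plus-Gaussian noise model are all assumptions.

```python
import numpy as np

def make_denoising_pair(size=64, sigma_read=0.02, gain=0.01, seed=0):
    """Generate one synthetic (clean, noisy) training pair.

    The clean image is a gradient background plus random rectangles, a
    crude stand-in for a procedurally populated scene render.
    """
    rng = np.random.default_rng(seed)
    _, x = np.mgrid[0:size, 0:size] / size
    clean = 0.3 + 0.4 * x  # horizontal gradient background
    for _ in range(5):     # random rectangles as "objects"
        r0, c0 = rng.integers(0, size - 8, size=2)
        h, w = rng.integers(4, 16, size=2)
        clean[r0:r0 + h, c0:c0 + w] = rng.uniform(0.0, 1.0)
    clean = np.clip(clean, 0.0, 1.0)
    # Poisson shot noise plus Gaussian read noise, a standard sensor model.
    shot = rng.poisson(clean / gain) * gain
    noisy = np.clip(shot + rng.normal(0.0, sigma_read, clean.shape), 0.0, 1.0)
    return clean, noisy

clean, noisy = make_denoising_pair()
```

A CNN denoiser would then be trained to map `noisy` back to `clean`, with no real photographs involved.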
arXiv Detail & Related papers (2022-12-07T21:21:55Z) - Learning Visual Representation of Underwater Acoustic Imagery Using
Transformer-Based Style Transfer Method [4.885034271315195]
This letter proposes a framework for learning the visual representation of underwater acoustic imagery.
It replaces the low-level texture features of optical images with the visual features of underwater acoustic imagery.
The proposed framework can fully exploit a rich optical image dataset to generate a pseudo-acoustic image dataset.
arXiv Detail & Related papers (2022-11-10T07:54:46Z) - DiVAE: Photorealistic Images Synthesis with Denoising Diffusion Decoder [73.1010640692609]
We propose a VQ-VAE architecture model with a diffusion decoder (DiVAE) to work as the reconstructing component in image synthesis.
Our model achieves state-of-the-art results and generates markedly more photorealistic images.
arXiv Detail & Related papers (2022-06-01T10:39:12Z) - Convolutional Deep Denoising Autoencoders for Radio Astronomical Images [0.0]
We apply a Machine Learning technique known as Convolutional Denoising Autoencoder to denoise synthetic images of state-of-the-art radio telescopes.
Our autoencoder can effectively denoise complex images, identifying and extracting faint objects at the limits of instrumental sensitivity.
arXiv Detail & Related papers (2021-10-16T17:08:30Z) - Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image
Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z) - Stillleben: Realistic Scene Synthesis for Deep Learning in Robotics [33.30312206728974]
We describe a synthesis pipeline capable of producing training data for cluttered scene perception tasks.
Our approach arranges object meshes in physically realistic, dense scenes using physics simulation.
Our pipeline can be run online during training of a deep neural network.
arXiv Detail & Related papers (2020-05-12T10:11:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.