Deep Domain Adaptation for Detecting Bomb Craters in Aerial Images
- URL: http://arxiv.org/abs/2209.11299v1
- Date: Thu, 22 Sep 2022 20:25:25 GMT
- Title: Deep Domain Adaptation for Detecting Bomb Craters in Aerial Images
- Authors: Marco Geiger, Dominik Martin, Niklas Kühl
- Abstract summary: Unexploded ordnance (UXO) is an immense danger to human life and the environment.
The current manual analysis process is expensive and time-consuming.
Deep learning is a promising way to improve the UXO disposal process.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The aftermath of air raids can still be seen for decades after the
devastating events. Unexploded ordnance (UXO) is an immense danger to human
life and the environment. Through the assessment of wartime images, experts can
infer the occurrence of a dud. The current manual analysis process is expensive
and time-consuming; automated detection of bomb craters using deep learning is
therefore a promising way to improve the UXO disposal process. However, these
methods require a large amount of manually labeled training data. This work
leverages domain adaptation with moon surface images to address the problem of
automated bomb crater detection with deep learning under the constraint of
limited training data. This paper contributes to both academia and practice (1)
by providing a solution approach for automated bomb crater detection with
limited training data and (2) by demonstrating the usability and associated
challenges of using synthetic images for domain adaptation.
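To make the transfer setting concrete, here is a minimal sketch assuming a simple two-stage transfer-learning recipe: an off-the-shelf object detector is first trained on plentiful moon-surface crater images and then fine-tuned on a small set of labeled wartime aerial images. The abstract does not name an architecture or framework; the Faster R-CNN model, the torchvision API, and the hypothetical data loaders below are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch (assumptions noted above): pre-train a crater detector on
# moon-surface images, then fine-tune it on limited bomb-crater aerial images.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 2  # background + crater


def build_detector() -> torch.nn.Module:
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # COCO-pretrained weights
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
    return model


def train(model, loader, epochs, lr):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device).train()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for images, targets in loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            loss = sum(model(images, targets).values())  # sum of detection losses
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model


# moon_loader / crater_loader are hypothetical DataLoaders yielding
# (image_tensor, {"boxes": FloatTensor[N, 4], "labels": Int64Tensor[N]}) pairs.
# detector = build_detector()
# detector = train(detector, moon_loader, epochs=20, lr=0.005)     # source domain
# detector = train(detector, crater_loader, epochs=5, lr=0.0005)   # limited target data
```

The lower learning rate in the second stage is one common way to avoid overfitting the few labeled target-domain images; the paper's actual adaptation strategy may differ.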
Related papers
- 2DSig-Detect: a semi-supervised framework for anomaly detection on image data using 2D-signatures [2.6642754249961103]
This paper introduces a novel technique for anomaly detection in images called 2DSig-Detect.
We show both superior performance and a reduction in the time to detect the presence of adversarial perturbations in images.
arXiv Detail & Related papers (2024-09-08T05:35:05Z) - Terrain characterisation for online adaptability of automated sonar processing: Lessons learnt from operationally applying ATR to sidescan sonar in MCM applications [0.0]
This paper presents two online seafloor characterisation techniques to improve explainability during Autonomous Underwater Vehicles (AUVs) missions.
Both techniques rely on an unsupervised machine learning approach to extract terrain features which relate to the human understanding of terrain complexity.
The first technique provides a quantitative, application-driven terrain characterisation metric based on the performance of an ATR algorithm.
The second method provides a way to incorporate subject matter expertise and enables contextualisation and explainability in support of scenario-dependent subjective terrain characterisation.
arXiv Detail & Related papers (2024-04-29T12:48:42Z) - Deep Learning Approaches in Pavement Distress Identification: A Review [0.39373541926236766]
This paper reviews recent advancements in image processing and deep learning techniques for pavement distress detection and classification.
The ability of these algorithms to discern patterns and make predictions based on extensive datasets has revolutionized the domain of pavement distress identification.
By capturing high-resolution images, UAVs provide valuable data that can be processed using deep learning algorithms to detect and classify various pavement distresses effectively.
arXiv Detail & Related papers (2023-08-01T20:30:11Z) - You Only Crash Once: Improved Object Detection for Real-Time,
Sim-to-Real Hazardous Terrain Detection and Classification for Autonomous
Planetary Landings [7.201292864036088]
A cheap and effective way of detecting hazardous terrain is through the use of visual cameras.
Traditional techniques for visual hazardous terrain detection focus on template matching and registration to pre-built hazard maps.
We introduce You Only Crash Once (YOCO), a deep learning-based visual hazardous terrain detection and classification technique.
arXiv Detail & Related papers (2023-03-08T21:11:51Z) - Unsupervised Restoration of Weather-affected Images using Deep Gaussian
Process-based CycleGAN [92.15895515035795]
We describe an approach for supervising deep networks that are based on CycleGAN.
We introduce new losses for training CycleGAN that lead to more effective training, resulting in high-quality reconstructions.
We demonstrate that the proposed method can be effectively applied to different restoration tasks like de-raining, de-hazing and de-snowing.
arXiv Detail & Related papers (2022-04-23T01:30:47Z) - Self-supervised Transformer for Deepfake Detection [112.81127845409002]
Deepfake techniques encountered in real-world scenarios demand stronger generalization abilities from face forgery detectors.
Inspired by transfer learning, neural networks pre-trained on other large-scale face-related tasks may provide useful features for deepfake detection.
In this paper, we propose a self-supervised transformer based audio-visual contrastive learning method.
arXiv Detail & Related papers (2022-03-02T17:44:40Z) - Rethinking Drone-Based Search and Rescue with Aerial Person Detection [79.76669658740902]
The visual inspection of aerial drone footage is an integral part of land search and rescue (SAR) operations today.
We propose a novel deep learning algorithm to automate this aerial person detection (APD) task.
We present the novel Aerial Inspection RetinaNet (AIR) algorithm as the combination of these contributions.
arXiv Detail & Related papers (2021-11-17T21:48:31Z) - Few-Cost Salient Object Detection with Adversarial-Paced Learning [95.0220555274653]
This paper proposes to learn an effective salient object detection model from manual annotations on only a few training images.
We name this task as the few-cost salient object detection and propose an adversarial-paced learning (APL)-based framework to facilitate the few-cost learning scenario.
arXiv Detail & Related papers (2021-04-05T14:15:49Z) - Self-Supervised Person Detection in 2D Range Data using a Calibrated
Camera [83.31666463259849]
We propose a method to automatically generate training labels (called pseudo-labels) for 2D LiDAR-based person detectors.
We show that self-supervised detectors, trained or fine-tuned with pseudo-labels, outperform detectors trained using manual annotations.
Our method is an effective way to improve person detectors during deployment without any additional labeling effort.
arXiv Detail & Related papers (2020-12-16T12:10:04Z) - Monitoring War Destruction from Space: A Machine Learning Approach [1.0149624140985478]
Existing data on building destruction in conflict zones rely on eyewitness reports or manual detection.
This article introduces an automated method of measuring destruction in high-resolution satellite images using deep learning techniques.
We apply this method to the Syrian civil war, tracking the evolution of damage in major cities across the country.
arXiv Detail & Related papers (2020-10-12T19:01:20Z) - Deep Traffic Sign Detection and Recognition Without Target Domain Real
Images [52.079665469286496]
We propose a novel database generation method that requires no real images from the target domain, only arbitrary natural images and templates of the traffic signs.
The method does not aim to outperform training with real data, but to provide a viable alternative when real data is not available.
On large data sets, training with a fully synthetic data set almost matches the performance of training with a real one (see the sketch after this entry).
arXiv Detail & Related papers (2020-07-30T21:06:47Z)