Generative Damage Learning for Concrete Aging Detection using
Auto-flight Images
- URL: http://arxiv.org/abs/2006.15257v2
- Date: Wed, 19 Aug 2020 19:47:57 GMT
- Title: Generative Damage Learning for Concrete Aging Detection using
Auto-flight Images
- Authors: Takato Yasuno, Akira Ishii, Junichiro Fujii, Masazumi Amakata, Yuta
Takahashi
- Abstract summary: We propose an anomaly detection method using unpaired image-to-image translation mapping from damaged images to reverse-aging fakes that approximate healthy conditions.
We apply our method to field studies and examine its usefulness for health monitoring of concrete damage.
- Score: 0.7612218105739107
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To monitor the state of large-scale infrastructure, image
acquisition by autonomous flight drones is efficient, providing stable viewing
angles and high-quality images. Supervised learning requires a large dataset
of images and annotation labels, and accumulating such images, including
identifying the damaged regions of interest (ROIs), takes a long time. In
recent years, unsupervised deep learning approaches to anomaly detection, such
as generative adversarial networks (GANs), have progressed. When a damaged
image is fed to the generator, the generated image tends to be reversed from
the damaged state to a healthy state. Using the distance between the
distributions of the real damaged image and the generated reverse-aging,
healthy-state fake image, concrete damage can be detected automatically by
unsupervised learning. This paper proposes an anomaly detection method using
unpaired image-to-image translation that maps damaged images to reverse-aging
fakes approximating healthy conditions. We apply our method to field studies
and examine its usefulness for health monitoring of concrete damage.
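As a rough illustration of the scoring step described in the abstract, the sketch below (PyTorch; not the authors' released code) feeds a damaged image to a damaged-to-healthy generator and scores anomalies from the residual between the real input and the generated "reverse aging" fake. The tiny generator and the per-pixel L1 residual are assumptions for illustration only: the paper trains an unpaired image-to-image translator (CycleGAN-style) and measures a distance of distribution, so the actual network and distance may differ.
```python
# Minimal sketch of the abstract's scoring idea: a generator trained (unpaired,
# CycleGAN-style) to map damaged concrete images to healthy-looking "reverse
# aging" fakes, plus an anomaly score from the distance between the real
# damaged input and its generated healthy counterpart.
# The generator below is a tiny placeholder, NOT the paper's architecture.
import torch
import torch.nn as nn


class TinyHealthyGenerator(nn.Module):
    """Placeholder damaged->healthy generator (stand-in for a CycleGAN G)."""

    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def anomaly_map_and_score(generator: nn.Module, damaged: torch.Tensor):
    """Residual between the damaged input and its healthy fake.

    The abstract speaks of a "distance of distribution"; the per-pixel L1
    residual here is only one simple, illustrative choice of distance.
    """
    generator.eval()
    with torch.no_grad():
        healthy_fake = generator(damaged)       # reverse-aging fake image
    residual = (damaged - healthy_fake).abs()   # B x C x H x W
    anomaly_map = residual.mean(dim=1)          # B x H x W heatmap
    score = anomaly_map.flatten(1).mean(dim=1)  # one scalar per image
    return anomaly_map, score


if __name__ == "__main__":
    g = TinyHealthyGenerator()
    crops = torch.rand(2, 3, 128, 128) * 2 - 1  # stand-in drone image crops in [-1, 1]
    heatmap, score = anomaly_map_and_score(g, crops)
    print(heatmap.shape, score.shape)           # torch.Size([2, 128, 128]) torch.Size([2])
```
In practice, the per-image score would be thresholded (or the heatmap inspected) to flag damaged regions in auto-flight survey images.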
Related papers
- RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection [60.960988614701414]
RIGID is a training-free and model-agnostic method for robust AI-generated image detection.
RIGID significantly outperforms existing training-based and training-free detectors.
arXiv Detail & Related papers (2024-05-30T14:49:54Z)
- Spatial-aware Attention Generative Adversarial Network for Semi-supervised Anomaly Detection in Medical Image [63.59114880750643]
We introduce a novel Spatial-aware Attention Generative Adversarial Network (SAGAN) for one-class semi-supervised generation of health images.
SAGAN generates high-quality health images corresponding to unlabeled data, guided by the reconstruction of normal images and restoration of pseudo-anomaly images.
Extensive experiments on three medical datasets demonstrate that the proposed SAGAN outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2024-05-21T15:41:34Z)
- GenDet: Towards Good Generalizations for AI-Generated Image Detection [27.899521298845357]
Existing methods can effectively detect images generated by seen generators, but it is challenging to detect those generated by unseen generators.
This paper addresses the unseen-generator detection problem by considering this task from the perspective of anomaly detection.
Our method encourages smaller output discrepancies between the student and the teacher models for real images while aiming for larger discrepancies for fake images.
arXiv Detail & Related papers (2023-12-12T11:20:45Z)
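As a concrete reading of the teacher-student discrepancy idea in the GenDet summary above, here is a minimal, hypothetical sketch: the student is pulled toward a frozen teacher on real images and pushed away on fakes, so a large feature discrepancy at test time suggests a generated image. The tiny encoders, the hinge loss, and all names are assumptions, not GenDet's actual networks or training recipe.
```python
# Hypothetical teacher-student discrepancy sketch (not GenDet's actual code):
# small discrepancy on real images, large discrepancy on generated images.
import torch
import torch.nn as nn


def make_encoder(out_dim: int = 64) -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, out_dim, 3, stride=2, padding=1),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )


teacher, student = make_encoder(), make_encoder()
for p in teacher.parameters():  # the teacher stays frozen
    p.requires_grad_(False)


def discrepancy(x: torch.Tensor) -> torch.Tensor:
    """Per-image squared L2 distance between teacher and student features."""
    return (teacher(x) - student(x)).pow(2).sum(dim=1)


def loss(real: torch.Tensor, fake: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Minimize discrepancy on real images; push it above a margin on fakes."""
    return discrepancy(real).mean() + torch.relu(margin - discrepancy(fake)).mean()
```
At inference, thresholding discrepancy(x) gives the real-versus-generated decision.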
- DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network with a connection to the stable diffusion's denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z)
- Detecting Generated Images by Real Images Only [64.12501227493765]
Existing generated image detection methods detect visual artifacts in generated images or learn discriminative features from both real and generated images by massive training.
This paper approaches the generated image detection problem from a new perspective: Start from real images.
By finding the commonality of real images and mapping them to a dense subspace in feature space, the goal is that generated images, regardless of their generative model, are then projected outside the subspace.
arXiv Detail & Related papers (2023-11-02T03:09:37Z)
- Supervised Anomaly Detection Method Combining Generative Adversarial Networks and Three-Dimensional Data in Vehicle Inspections [0.0]
External visual inspections of rolling stock's underfloor equipment are currently performed by human inspectors.
In this study, we propose a new method that uses style conversion via generative adversarial networks on three-dimensional computer graphics.
arXiv Detail & Related papers (2022-12-22T06:39:52Z)
- RestoreX-AI: A Contrastive Approach towards Guiding Image Restoration via Explainable AI Systems [8.430502131775722]
Weather corruptions can hinder object detectability and pose a serious threat to navigation and reliability.
We propose a contrastive approach towards mitigating this problem, by evaluating images generated by restoration models during and post training.
Our approach achieves an average 178 percent increase in mAP between the input and restored images under adverse weather conditions.
arXiv Detail & Related papers (2022-04-03T12:45:00Z)
- Detecting Adversaries, yet Faltering to Noise? Leveraging Conditional Variational AutoEncoders for Adversary Detection in the Presence of Noisy Images [0.7734726150561086]
Conditional Variational AutoEncoders (CVAE) are surprisingly good at detecting imperceptible image perturbations.
We show how CVAEs can be effectively used to detect adversarial attacks on image classification networks.
arXiv Detail & Related papers (2021-11-28T20:36:27Z)
- Beyond the Spectrum: Detecting Deepfakes via Re-Synthesis [69.09526348527203]
Deep generative models have led to highly realistic media, known as deepfakes, that are often indistinguishable from real media to the human eye.
We propose a novel fake detection method that re-synthesizes testing images and extracts visual cues for detection.
We demonstrate the improved effectiveness, cross-GAN generalization, and robustness against perturbations of our approach in a variety of detection scenarios.
arXiv Detail & Related papers (2021-05-29T21:22:24Z)
- CutPaste: Self-Supervised Learning for Anomaly Detection and Localization [59.719925639875036]
We propose a framework for building anomaly detectors using normal training data only.
We first learn self-supervised deep representations and then build a generative one-class classifier on learned representations.
Our empirical study on the MVTec anomaly detection dataset demonstrates that the proposed algorithm is general enough to detect various types of real-world defects.
arXiv Detail & Related papers (2021-04-08T19:04:55Z)
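The CutPaste entry above describes a two-stage recipe: learn self-supervised representations, then build a generative one-class classifier on them. The sketch below is an assumption-laden illustration of that recipe, not the paper's implementation: pseudo-anomalies come from a cut-and-paste augmentation, and a simple Gaussian fit to normal-image embeddings yields a Mahalanobis-style anomaly score.
```python
# Illustrative sketch of a CutPaste-style pipeline (hypothetical, simplified):
# (1) a cut-and-paste augmentation creates pseudo-anomalies for the
#     self-supervised proxy task, (2) a Gaussian over embeddings of normal
#     images acts as the generative one-class classifier.
import torch


def cutpaste(image: torch.Tensor, patch_frac: float = 0.15) -> torch.Tensor:
    """Cut a random patch and paste it at another random location (C x H x W)."""
    c, h, w = image.shape
    ph, pw = max(1, int(h * patch_frac)), max(1, int(w * patch_frac))
    sy = torch.randint(0, h - ph + 1, (1,)).item()
    sx = torch.randint(0, w - pw + 1, (1,)).item()
    dy = torch.randint(0, h - ph + 1, (1,)).item()
    dx = torch.randint(0, w - pw + 1, (1,)).item()
    out = image.clone()
    out[:, dy:dy + ph, dx:dx + pw] = image[:, sy:sy + ph, sx:sx + pw]
    return out


def fit_gaussian(embeddings: torch.Tensor):
    """Fit mean and inverse covariance of normal-image embeddings (N x D)."""
    mean = embeddings.mean(dim=0)
    cov = torch.cov(embeddings.T) + 1e-4 * torch.eye(embeddings.shape[1])
    return mean, torch.linalg.inv(cov)


def mahalanobis_score(z: torch.Tensor, mean: torch.Tensor, cov_inv: torch.Tensor) -> torch.Tensor:
    """Higher score = more anomalous embedding."""
    d = z - mean
    return (d @ cov_inv * d).sum(dim=-1)
```
An encoder trained to separate normal crops from their cutpaste() versions would supply the embeddings passed to fit_gaussian and mahalanobis_score.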
- Pixel-wise Dense Detector for Image Inpainting [34.721991959357425]
Recent GAN-based image inpainting approaches adopt an averaging strategy to discriminate the generated image and output a single scalar.
We propose a novel detection-based generative framework for image inpainting, which adopts the min-max strategy in an adversarial process.
Experiments on multiple public datasets show the superior performance of the proposed framework.
arXiv Detail & Related papers (2020-11-04T13:45:27Z)