Corrosion Detection for Industrial Objects: From Multi-Sensor System to 5D Feature Space
- URL: http://arxiv.org/abs/2205.07075v1
- Date: Sat, 14 May 2022 14:45:58 GMT
- Title: Corrosion Detection for Industrial Objects: From Multi-Sensor System to 5D Feature Space
- Authors: Dennis Haitz, Boris Jutzi, Patrick Huebner, Markus Ulrich
- Abstract summary: Corrosion is a form of damage that often appears on the surface of metal-made objects used in industrial applications.
We provide a testing setup consisting of a rotary table which rotates the object by 360 degrees.
Our multi-sensor system comprises industrial RGB cameras and laser triangulation sensors for the acquisition of 2D and 3D data.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Corrosion is a form of damage that often appears on the surface of metal-made
objects used in industrial applications. Such damage can be critical, depending
on the purpose of the object. Optical testing systems
provide a form of non-contact data acquisition, where the acquired data can
then be used to analyse the surface of an object. In the field of industrial
image processing, this is called surface inspection. We provide a testing setup
consisting of a rotary table which rotates the object by 360 degrees, as well
as industrial RGB cameras and laser triangulation sensors for the acquisition
of 2D and 3D data as our multi-sensor system. These sensors acquire data while
the object under test completes a full rotation. Furthermore, data augmentation
is applied to generate new data or enhance already acquired data. To evaluate
the impact of the laser triangulation sensor on corrosion detection, the data
of both domains must first be fused. After this data fusion, five different
channels can be utilized to create a 5D feature
space. Besides the red, green and blue channels of the image (1-3), additional
range data from the laser triangulation sensor is incorporated (4). As a fifth
channel, the same sensor provides intensity data (5). In multi-channel image
classification, the 5D feature space leads to slightly better results than a 3D
feature space composed of only the RGB channels of the image.
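The channel stacking described in the abstract can be sketched as follows. This is an illustrative assumption of how the fused data might be assembled, not the authors' code; the function name, shapes, and the premise that both domains are already registered pixel-for-pixel are all hypothetical.

```python
import numpy as np

def build_5d_features(rgb, rng, intensity):
    """Stack RGB, range and intensity into one (H, W, 5) feature image.

    rgb:       (H, W, 3) array from the industrial RGB camera (channels 1-3)
    rng:       (H, W)    range data from the laser triangulation sensor (4)
    intensity: (H, W)    intensity data from the same sensor (5)

    Assumes the 2D and 3D data have already been fused, i.e. registered
    onto a common pixel grid.
    """
    return np.concatenate(
        [rgb, rng[..., None], intensity[..., None]], axis=-1
    )

h, w = 4, 6
features = build_5d_features(
    np.zeros((h, w, 3)), np.zeros((h, w)), np.zeros((h, w))
)
print(features.shape)  # (4, 6, 5)
```

A multi-channel classifier then consumes this (H, W, 5) image directly, instead of the (H, W, 3) RGB-only input of the 3D feature space.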
Related papers
- Towards Scalable Spatial Intelligence via 2D-to-3D Data Lifting [64.64738535860351]
We present a scalable pipeline that converts single-view images into comprehensive, scale- and appearance-realistic 3D representations. Our method bridges the gap between the vast repository of imagery and the increasing demand for spatial scene understanding. By automatically generating authentic, scale-aware 3D data from images, we significantly reduce data collection costs and open new avenues for advancing spatial intelligence.
arXiv Detail & Related papers (2025-07-24T14:53:26Z) - Multimodal Object Detection using Depth and Image Data for Manufacturing Parts [1.0819408603463427]
This work proposes a multi-sensor system combining a red-green-blue (RGB) camera and a 3D point cloud sensor.
A novel multimodal object detection method is developed to process both RGB and depth data.
The results show that the multimodal model significantly outperforms the depth-only and RGB-only baselines on established object detection metrics.
arXiv Detail & Related papers (2024-11-13T22:43:15Z) - MAROON: A Framework for the Joint Characterization of Near-Field High-Resolution Radar and Optical Depth Imaging Techniques [4.816237933371206]
We take on the unique challenge of characterizing depth imagers from both the optical and the radio-frequency domains.
We provide a comprehensive evaluation of their depth measurements with respect to distinct object materials, geometries, and object-to-sensor distances.
All object measurements will be made public in the form of a multimodal dataset called MAROON.
arXiv Detail & Related papers (2024-11-01T11:53:10Z) - Performance Assessment of Feature Detection Methods for 2-D FS Sonar Imagery [11.23455335391121]
Key challenges include non-uniform lighting and poor visibility in turbid environments.
High-frequency forward-look sonar cameras address these issues.
We evaluate a number of feature detectors using real sonar images from five different sonar devices.
arXiv Detail & Related papers (2024-09-11T04:35:07Z) - DIDLM:A Comprehensive Multi-Sensor Dataset with Infrared Cameras, Depth Cameras, LiDAR, and 4D Millimeter-Wave Radar in Challenging Scenarios for 3D Mapping [7.050468075029598]
This study presents a comprehensive multi-sensor dataset designed for 3D mapping in challenging indoor and outdoor environments.
The dataset comprises data from infrared cameras, depth cameras, LiDAR, and 4D millimeter-wave radar.
Various SLAM algorithms are employed to process the dataset, revealing performance differences among algorithms in different scenarios.
arXiv Detail & Related papers (2024-04-15T09:49:33Z) - Sparse Points to Dense Clouds: Enhancing 3D Detection with Limited LiDAR Data [68.18735997052265]
We propose a balanced approach that combines the advantages of monocular and point cloud-based 3D detection.
Our method requires only a small number of 3D points, that can be obtained from a low-cost, low-resolution sensor.
The accuracy of 3D detection improves by 20% compared to the state-of-the-art monocular detection methods.
arXiv Detail & Related papers (2024-04-10T03:54:53Z) - Joint object detection and re-identification for 3D obstacle multi-camera systems [47.87501281561605]
This research paper introduces a novel modification to an object detection network that uses camera and lidar information.
It incorporates an additional branch designed for the task of re-identifying objects across adjacent cameras within the same vehicle.
The results underscore the superiority of this method over traditional Non-Maximum Suppression (NMS) techniques.
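For context on the baseline mentioned above, traditional greedy Non-Maximum Suppression can be sketched as below. This is the generic textbook procedure on 2D boxes, not the paper's method; all names and thresholds are illustrative.

```python
def nms(boxes, scores, iou_thr=0.5):
    """Greedy NMS. boxes: list of (x1, y1, x2, y2); returns kept indices."""
    def iou(a, b):
        # Overlap rectangle, clamped to zero when the boxes are disjoint.
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter)

    # Visit boxes in descending score order; keep a box, then discard
    # every remaining box that overlaps it above the IoU threshold.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thr]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
print(nms(boxes, [0.9, 0.8, 0.7]))  # [0, 2]
```

The second box overlaps the first with IoU 0.81 and is suppressed; the third is disjoint and kept.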
arXiv Detail & Related papers (2023-10-09T15:16:35Z) - Towards a Robust Sensor Fusion Step for 3D Object Detection on Corrupted Data [4.3012765978447565]
This work presents a novel fusion step that addresses data corruptions and makes sensor fusion for 3D object detection more robust.
We demonstrate that our method performs on par with state-of-the-art approaches on normal data and outperforms them on misaligned data.
arXiv Detail & Related papers (2023-06-12T18:06:29Z) - On the Importance of Accurate Geometry Data for Dense 3D Vision Tasks [61.74608497496841]
Training on inaccurate or corrupt data induces model bias and hampers generalisation capabilities.
This paper investigates the effect of sensor errors for the dense 3D vision tasks of depth estimation and reconstruction.
arXiv Detail & Related papers (2023-03-26T22:32:44Z) - Radar Voxel Fusion for 3D Object Detection [0.0]
This paper develops a low-level sensor fusion network for 3D object detection.
The radar sensor fusion proves especially beneficial in inclement conditions such as rain and night scenes.
arXiv Detail & Related papers (2021-06-26T20:34:12Z) - Deep Continuous Fusion for Multi-Sensor 3D Object Detection [103.5060007382646]
We propose a novel 3D object detector that can exploit both LIDAR and cameras to perform very accurate localization.
We design an end-to-end learnable architecture that exploits continuous convolutions to fuse image and LIDAR feature maps at different levels of resolution.
arXiv Detail & Related papers (2020-12-20T18:43:41Z) - Expandable YOLO: 3D Object Detection from RGB-D Images [64.14512458954344]
This paper aims to construct a lightweight object detector that takes a depth image and a color image from a stereo camera as input.
By extending the network architecture of YOLOv3 to 3D in its middle layers, it is possible to produce outputs along the depth direction.
Intersection over Union (IoU) in 3D space is introduced to confirm the accuracy of region extraction results.
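The 3D IoU metric mentioned above has a simple closed form for axis-aligned boxes. The sketch below gives this generic definition as an assumption of what is meant; the paper's actual box parameterization may differ.

```python
def iou_3d(a, b):
    """IoU of two axis-aligned 3D boxes (xmin, ymin, zmin, xmax, ymax, zmax)."""
    # Overlap extent along each axis, zero if the boxes are disjoint.
    dx = max(0.0, min(a[3], b[3]) - max(a[0], b[0]))
    dy = max(0.0, min(a[4], b[4]) - max(a[1], b[1]))
    dz = max(0.0, min(a[5], b[5]) - max(a[2], b[2]))
    inter = dx * dy * dz
    vol_a = (a[3] - a[0]) * (a[4] - a[1]) * (a[5] - a[2])
    vol_b = (b[3] - b[0]) * (b[4] - b[1]) * (b[5] - b[2])
    # Union = sum of volumes minus the intersection counted twice.
    return inter / (vol_a + vol_b - inter)

print(iou_3d((0, 0, 0, 2, 2, 2), (1, 1, 1, 3, 3, 3)))  # 1/15 ≈ 0.0667
```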
arXiv Detail & Related papers (2020-06-26T07:32:30Z) - Learning Camera Miscalibration Detection [83.38916296044394]
This paper focuses on a data-driven approach to learn the detection of miscalibration in vision sensors, specifically RGB cameras.
Our contributions include a proposed miscalibration metric for RGB cameras and a novel semi-synthetic dataset generation pipeline based on this metric.
By training a deep convolutional neural network, we demonstrate the effectiveness of our pipeline to identify whether a recalibration of the camera's intrinsic parameters is required or not.
arXiv Detail & Related papers (2020-05-24T10:32:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.