MINet: Multi-scale Interactive Network for Real-time Salient Object Detection of Strip Steel Surface Defects
- URL: http://arxiv.org/abs/2405.16096v1
- Date: Sat, 25 May 2024 07:09:11 GMT
- Title: MINet: Multi-scale Interactive Network for Real-time Salient Object Detection of Strip Steel Surface Defects
- Authors: Kunye Shen, Xiaofei Zhou, Zhi Liu
- Abstract summary: We devise a multi-scale interactive (MI) module, which employs depthwise convolution (DWConv) and pointwise convolution (PWConv) to independently extract and interactively fuse features of different scales.
We propose a lightweight Multi-scale Interactive Network (MINet) to conduct real-time salient object detection of strip steel surface defects.
- Score: 10.550627100986894
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated surface defect detection is a fundamental task in industrial production, and existing saliency-based works cope with challenging scenes and give promising detection results. However, the cutting-edge efforts often suffer from large parameter size, heavy computational cost, and slow inference speed, which heavily limit their practical application. To this end, we devise a multi-scale interactive (MI) module, which employs depthwise convolution (DWConv) and pointwise convolution (PWConv) to independently extract and interactively fuse features of different scales, respectively. In particular, the MI module can provide satisfactory characterization of defect regions with fewer parameters. Building on this module, we propose a lightweight Multi-scale Interactive Network (MINet) to conduct real-time salient object detection of strip steel surface defects. Comprehensive experimental results on the SD-Saliency-900 dataset, which contains three kinds of strip steel surface defect images (i.e., inclusion, patches, and scratches), demonstrate that the proposed MINet delivers detection accuracy comparable to state-of-the-art methods while running at 721 FPS on a GPU and 6.3 FPS on a CPU for 368×368 images with only 0.28M parameters. The code is available at https://github.com/Kunye-Shen/MINet.
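The abstract only sketches the MI module at a high level: depthwise convolutions extract features per scale, and pointwise convolution fuses them interactively. The snippet below is a minimal, hypothetical PyTorch reading of that idea; the kernel sizes, channel width, residual connection, and the MultiScaleInteractive name are illustrative assumptions, not the authors' released design (see the linked repository for that).

```python
# Illustrative sketch only: one plausible reading of the MI module described in
# the abstract (per-scale depthwise convolutions + pointwise fusion). Kernel
# sizes, channel widths, and naming are assumptions; the official code is at
# https://github.com/Kunye-Shen/MINet.
import torch
import torch.nn as nn


class MultiScaleInteractive(nn.Module):
    def __init__(self, channels: int, kernel_sizes=(3, 5, 7)):
        super().__init__()
        # Depthwise branches: each one extracts features at a different
        # receptive field, with parameters growing only linearly in channels.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2, groups=channels)
            for k in kernel_sizes
        )
        # Pointwise (1x1) convolution: mixes the concatenated scales so the
        # branches interact, at low parameter cost.
        self.fuse = nn.Conv2d(channels * len(kernel_sizes), channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = [branch(x) for branch in self.branches]
        out = self.fuse(torch.cat(multi_scale, dim=1))
        return self.act(out + x)  # residual keeps the block lightweight


if __name__ == "__main__":
    feats = torch.randn(1, 32, 92, 92)             # e.g. a 368/4-resolution feature map
    print(MultiScaleInteractive(32)(feats).shape)  # torch.Size([1, 32, 92, 92])
```

Splitting a dense k×k convolution into a depthwise k×k followed by a pointwise 1×1 is the standard way such modules keep parameter counts low, which is consistent with the 0.28M total reported in the abstract.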
Related papers
- M3DM-NR: RGB-3D Noisy-Resistant Industrial Anomaly Detection via Multimodal Denoising [63.39134873744748]
Existing industrial anomaly detection methods primarily concentrate on unsupervised learning with pristine RGB images.
This paper proposes a novel noise-resistant M3DM-NR framework to leverage the strong multi-modal discriminative capabilities of CLIP.
Extensive experiments show that M3DM-NR outperforms state-of-the-art methods in 3D-RGB multi-modal noisy anomaly detection.
arXiv Detail & Related papers (2024-06-04T12:33:02Z)
- Cross-Cluster Shifting for Efficient and Effective 3D Object Detection in Autonomous Driving [69.20604395205248]
We present a new 3D point-based detector model, named Shift-SSD, for precise 3D object detection in autonomous driving.
We introduce an intriguing Cross-Cluster Shifting operation to unleash the representation capacity of the point-based detector.
We conduct extensive experiments on the KITTI and nuScenes datasets, and the results demonstrate the state-of-the-art performance of Shift-SSD in both detection accuracy and runtime.
arXiv Detail & Related papers (2024-03-10T10:36:32Z)
- Global Context Aggregation Network for Lightweight Saliency Detection of Surface Defects [70.48554424894728]
We develop a Global Context Aggregation Network (GCANet) for lightweight saliency detection of surface defects on the encoder-decoder structure.
First, we introduce a novel transformer encoder on the top layer of the lightweight backbone, which captures global context information through a novel Depth-wise Self-Attention (DSA) module.
The experimental results on three public defect datasets demonstrate that the proposed network achieves a better trade-off between accuracy and running efficiency compared with 17 other state-of-the-art methods.
arXiv Detail & Related papers (2023-09-22T06:19:11Z)
- MSECNet: Accurate and Robust Normal Estimation for 3D Point Clouds by Multi-Scale Edge Conditioning [8.177876899141486]
Estimating surface normals from 3D point clouds is critical for various applications, including surface reconstruction and rendering.
We propose a novel approach called MSECNet, which improves estimation in normal varying regions by treating normal variation modeling as an edge detection problem.
arXiv Detail & Related papers (2023-08-04T10:26:59Z)
- EasyNet: An Easy Network for 3D Industrial Anomaly Detection [49.26348455493123]
3D anomaly detection is an emerging and vital computer vision task in industrial manufacturing.
We propose an easy and deployment-friendly network (called EasyNet) without using pre-trained models and memory banks.
Experiments show that EasyNet achieves an anomaly detection AUROC of 92.6% without using pre-trained models and memory banks.
arXiv Detail & Related papers (2023-07-26T02:46:50Z)
- Multi-Modal 3D Object Detection by Box Matching [109.43430123791684]
We propose a novel Fusion network by Box Matching (FBMNet) for multi-modal 3D detection.
With the learned assignments between 3D and 2D object proposals, the fusion for detection can be effectively performed by combining their ROI features.
arXiv Detail & Related papers (2023-05-12T18:08:51Z)
- Multimodal Industrial Anomaly Detection via Hybrid Fusion [59.16333340582885]
We propose a novel multimodal anomaly detection method with a hybrid fusion scheme.
Our model outperforms the state-of-the-art (SOTA) methods in both detection and segmentation precision on the MVTec 3D-AD dataset.
arXiv Detail & Related papers (2023-03-01T15:48:27Z)
- DPNet: Dual-Path Network for Real-time Object Detection with Lightweight Attention [15.360769793764526]
This paper presents a dual-path network, named DPNet, with a lightweight attention scheme for real-time object detection.
DPNet achieves a state-of-the-art trade-off between detection accuracy and implementation efficiency.
arXiv Detail & Related papers (2022-09-28T09:11:01Z)
- CAMO-MOT: Combined Appearance-Motion Optimization for 3D Multi-Object Tracking with Camera-LiDAR Fusion [34.42289908350286]
3D Multi-object tracking (MOT) ensures consistency during continuous dynamic detection.
It can be challenging to accurately track the irregular motion of objects for LiDAR-based methods.
We propose a novel camera-LiDAR fusion 3D MOT framework based on Combined Appearance-Motion Optimization (CAMO-MOT).
arXiv Detail & Related papers (2022-09-06T14:41:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.