Robust Component Detection for Flexible Manufacturing: A Deep Learning Approach to Tray-Free Object Recognition under Variable Lighting
- URL: http://arxiv.org/abs/2507.00852v1
- Date: Tue, 01 Jul 2025 15:23:54 GMT
- Title: Robust Component Detection for Flexible Manufacturing: A Deep Learning Approach to Tray-Free Object Recognition under Variable Lighting
- Authors: Fatemeh Sadat Daneshmand
- Abstract summary: We implement and evaluate a Mask R-CNN-based approach on a complete pen manufacturing line at ZHAW.
Our system achieves 95% detection accuracy across diverse lighting conditions while eliminating the need for structured component placement.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Flexible manufacturing systems in Industry 4.0 require robots capable of handling objects in unstructured environments without rigid positioning constraints. This paper presents a computer vision system that enables industrial robots to detect and grasp pen components in arbitrary orientations without requiring structured trays, while maintaining robust performance under varying lighting conditions. We implement and evaluate a Mask R-CNN-based approach on a complete pen manufacturing line at ZHAW, addressing three critical challenges: object detection without positional constraints, robustness to extreme lighting variations, and reliable performance with cost-effective cameras. Our system achieves 95% detection accuracy across diverse lighting conditions while eliminating the need for structured component placement, demonstrating a 30% reduction in setup time and significant improvement in manufacturing flexibility. The approach is validated through extensive testing under four distinct lighting scenarios, showing practical applicability for real-world industrial deployment.
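To make the detection stage concrete, the sketch below shows how a Mask R-CNN detector of the kind described above can be run with torchvision. This is a minimal sketch, not the authors' released code: the COCO-pretrained weights stand in for a model fine-tuned on pen-component classes, and the 0.5 confidence threshold is an assumption.

```python
# Minimal sketch, assuming a Mask R-CNN fine-tuned on pen-component classes;
# the pretrained weights and the 0.5 threshold are stand-ins, not the paper's setup.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

def detect_components(image_path: str, score_threshold: float = 0.5) -> dict:
    # Pretrained COCO weights are used here purely for illustration.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        prediction = model([image])[0]
    keep = prediction["scores"] >= score_threshold
    return {
        "boxes": prediction["boxes"][keep],    # (N, 4) pixel coordinates per instance
        "masks": prediction["masks"][keep],    # (N, 1, H, W) soft instance masks
        "labels": prediction["labels"][keep],  # predicted class indices
        "scores": prediction["scores"][keep],  # confidence of each kept detection
    }
```

A grasp planner could then derive each component's position and orientation from the returned masks, for example by fitting an oriented bounding box to every instance.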
Related papers
- Verification of Visual Controllers via Compositional Geometric Transformations [49.81690518952909]
We introduce a novel verification framework for perception-based controllers that can generate outer-approximations of reachable sets.
We provide theoretical guarantees on the soundness of our method and demonstrate its effectiveness across benchmark control environments.
arXiv Detail & Related papers (2025-07-06T20:22:58Z)
- Adaptive Contextual Embedding for Robust Far-View Borehole Detection [2.206623168926072]
In blasting operations, accurately detecting densely distributed tiny boreholes from far-view imagery is critical for operational safety and efficiency.
We propose an adaptive detection approach that builds upon existing architectures (e.g., YOLO) by explicitly leveraging consistent embedding representations derived through exponential moving average (EMA)-based statistical updates.
Our method introduces three synergistic components: (1) adaptive augmentation utilizing dynamically updated image statistics to robustly handle illumination and texture variations; (2) embedding stabilization to ensure consistent and reliable feature extraction; and (3) contextual refinement leveraging spatial context for improved detection accuracy.
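As a rough illustration of the EMA-based statistical updates mentioned above (not the authors' implementation; the decay factor and per-channel normalization are assumptions), one could maintain running image statistics like this:

```python
# Hedged sketch of EMA-based image statistics for adaptive normalization;
# the decay value and the per-channel scheme are illustrative assumptions.
import numpy as np

class EMAImageStats:
    def __init__(self, decay: float = 0.99):
        self.decay = decay
        self.mean = None   # running per-channel mean
        self.var = None    # running per-channel variance

    def update(self, image: np.ndarray) -> None:
        # image: float array of shape (H, W, C) with values in [0, 1]
        batch_mean = image.mean(axis=(0, 1))
        batch_var = image.var(axis=(0, 1))
        if self.mean is None:
            self.mean, self.var = batch_mean, batch_var
        else:
            self.mean = self.decay * self.mean + (1 - self.decay) * batch_mean
            self.var = self.decay * self.var + (1 - self.decay) * batch_var

    def normalize(self, image: np.ndarray) -> np.ndarray:
        # Stabilizes inputs against illumination drift using the running statistics.
        return (image - self.mean) / np.sqrt(self.var + 1e-6)
```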
arXiv Detail & Related papers (2025-05-08T07:25:42Z)
- Grasping Partially Occluded Objects Using Autoencoder-Based Point Cloud Inpainting [50.4653584592824]
Real-world applications often come with challenges that might not be considered in grasping solutions tested in simulation or lab settings.
In this paper, we present an algorithm to reconstruct the missing information.
Our inpainting solution facilitates the real-world utilization of robust object matching approaches for grasping point calculation.
arXiv Detail & Related papers (2025-03-16T15:38:08Z)
- A Hybrid Framework for Statistical Feature Selection and Image-Based Noise-Defect Detection [55.2480439325792]
This paper presents a hybrid framework that integrates both statistical feature selection and classification techniques to improve defect detection accuracy.
We present around 55 distinct features extracted from industrial images, which are then analyzed using statistical methods.
By integrating these methods with flexible machine learning applications, the proposed framework improves detection accuracy and reduces false positives and misclassifications.
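As a hedged sketch of how statistical feature selection of this kind can be wired up (the framework's actual features, tests, and classifier are not specified here; the synthetic data, ANOVA F-test, and random forest below are assumptions):

```python
# Illustrative pipeline: keep the statistically strongest features, then classify.
# The 55-feature count comes from the summary above; everything else is assumed.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 55))      # 55 hand-crafted image features per sample (synthetic)
y = rng.integers(0, 2, size=200)    # 0 = good part, 1 = defect (synthetic labels)

# Keep the 15 features with the highest ANOVA F-score, then fit a classifier.
model = make_pipeline(SelectKBest(f_classif, k=15),
                      RandomForestClassifier(n_estimators=100))
model.fit(X, y)
print(model.score(X, y))
```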
arXiv Detail & Related papers (2024-12-11T22:12:21Z)
- Vision-based Manipulation of Transparent Plastic Bags in Industrial Setups [0.37187295985559027]
This paper addresses the challenges of vision-based manipulation for autonomous cutting and unpacking of transparent plastic bags in industrial setups.
The proposed solution employs advanced machine learning algorithms, particularly Convolutional Neural Networks (CNNs).
Tracking algorithms and depth sensing technologies are utilized for 3D spatial awareness during pick-and-place operations.
arXiv Detail & Related papers (2024-11-14T17:47:54Z)
- A Complete System for Automated 3D Semantic-Geometric Mapping of Corrosion in Industrial Environments [0.6749750044497731]
We propose a complete system for semi-automated corrosion identification and mapping in industrial environments.
We leverage recent advances in LiDAR-based methods for localization and mapping, combined with vision-based deep learning techniques for semantic segmentation.
A set of experiments in an indoor laboratory environment quantitatively demonstrates the high accuracy of the employed LiDAR-based 3D mapping and localization system.
arXiv Detail & Related papers (2024-04-21T15:40:32Z)
- LAECIPS: Large Vision Model Assisted Adaptive Edge-Cloud Collaboration for IoT-based Embodied Intelligence System [22.779285672925425]
Embodied intelligence (EI) enables manufacturing systems to flexibly perceive, reason, adapt, and operate within dynamic shop floor environments.
We propose LAECIPS, a large vision model-assisted adaptive edge-cloud collaboration framework for IoT-based embodied intelligence systems.
LAECIPS decouples large vision models in the cloud from lightweight models on the edge, enabling plug-and-play model adaptation and continual learning.
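A minimal sketch of the edge-cloud decoupling idea, under the assumption that uncertain cases are routed to the cloud by a confidence threshold; the threshold, model interfaces, and routing rule are illustrative, not LAECIPS internals.

```python
# Illustrative edge-cloud routing: the lightweight edge model answers confident
# cases locally and defers uncertain ones to a larger cloud model (assumed design).
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Prediction:
    label: Any
    confidence: float

def edge_cloud_infer(
    image: Any,
    edge_model: Callable[[Any], Prediction],
    cloud_model: Callable[[Any], Prediction],
    confidence_threshold: float = 0.8,
) -> Prediction:
    """Run the edge model first; offload to the cloud model when uncertain."""
    edge_pred = edge_model(image)
    if edge_pred.confidence >= confidence_threshold:
        return edge_pred        # confident: keep the cheap, low-latency answer
    return cloud_model(image)   # uncertain: pay the network cost for accuracy
```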
arXiv Detail & Related papers (2024-04-16T12:12:06Z)
- Object Detectors in the Open Environment: Challenges, Solutions, and Outlook [95.3317059617271]
The dynamic and intricate nature of the open environment poses novel and formidable challenges to object detectors.
This paper aims to conduct a comprehensive review and analysis of object detectors in open environments.
We propose a framework that includes four quadrants (i.e., out-of-domain, out-of-category, robust learning, and incremental learning) based on the dimensions of the data / target changes.
arXiv Detail & Related papers (2024-03-24T19:32:39Z)
- Robo3D: Towards Robust and Reliable 3D Perception against Corruptions [58.306694836881235]
We present Robo3D, the first comprehensive benchmark for probing the robustness of 3D detectors and segmentors under out-of-distribution scenarios.
We consider eight corruption types stemming from severe weather conditions, external disturbances, and internal sensor failure.
We propose a density-insensitive training framework along with a simple flexible voxelization strategy to enhance the model resiliency.
arXiv Detail & Related papers (2023-03-30T17:59:17Z)
- Machine vision for vial positioning detection toward the safe automation of material synthesis [0.4893345190925178]
We report a novel deep learning (DL)-based object detector, namely, DenseSSD.
DenseSSD achieved a mean average precision (mAP) over 95% based on a complex dataset involving both empty and solution-filled vials.
This work demonstrates that DenseSSD is useful for enhancing safety in an automated material synthesis environment.
arXiv Detail & Related papers (2022-06-15T03:19:25Z)
- Cognitive Visual Inspection Service for LCD Manufacturing Industry [80.63336968475889]
This paper discloses a novel visual inspection system for liquid crystal displays (LCDs), currently a dominant type in the flat panel display (FPD) industry.
The system is built on two cornerstones: a robust, high-performance defect recognition model and a cognitive visual inspection service architecture.
arXiv Detail & Related papers (2021-01-11T08:14:35Z)
- Automatic LiDAR Extrinsic Calibration System using Photodetector and Planar Board for Large-scale Applications [110.32028864986918]
This study proposes a new concept of a target board with embedded photodetector arrays, named the PD-target system, to find the precise positions of the corresponding laser beams on the target surface.
Experimental evaluation of the proposed system on low-resolution LiDAR showed that the LiDAR offset pose can be estimated to within 0.1 degree and 3 mm of precision.
arXiv Detail & Related papers (2020-08-24T16:28:40Z)