Physical Annotation for Automated Optical Inspection: A Concept for In-Situ, Pointer-Based Training Data Generation
- URL: http://arxiv.org/abs/2506.05026v2
- Date: Thu, 17 Jul 2025 07:47:21 GMT
- Title: Physical Annotation for Automated Optical Inspection: A Concept for In-Situ, Pointer-Based Training Data Generation
- Authors: Oliver Krumpek, Oliver Heimann, Jörg Krüger
- Abstract summary: This paper introduces a novel physical annotation system designed to generate training data for automated optical inspection. The system uses pointer-based in-situ interaction to transfer the valuable expertise of trained inspection personnel directly into a machine learning (ML) training pipeline.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a novel physical annotation system designed to generate training data for automated optical inspection. The system uses pointer-based in-situ interaction to transfer the valuable expertise of trained inspection personnel directly into a machine learning (ML) training pipeline. Unlike conventional screen-based annotation methods, our system captures physical trajectories and contours directly on the object, providing a more intuitive and efficient way to label data. The core technology uses calibrated, tracked pointers to accurately record user input and transform these spatial interactions into standardised annotation formats that are compatible with open-source annotation software. Additionally, a simple projector-based interface projects visual guidance onto the object to assist users during the annotation process, ensuring greater accuracy and consistency. The proposed concept bridges the gap between human expertise and automated data generation, enabling non-IT experts to contribute to the ML training pipeline and preventing the loss of valuable training samples. Preliminary evaluation results confirm the feasibility of capturing detailed annotation trajectories and demonstrate that integration with CVAT streamlines the workflow for subsequent ML tasks. This paper details the system architecture, calibration procedures and interface design, and discusses its potential contribution to future ML data generation for automated optical inspection.
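The workflow described in the abstract (capturing a tracked pointer's trajectory on the object, projecting it into the inspection camera's image plane, and exporting it in a CVAT-compatible format) can be sketched roughly as follows. This is a minimal illustration under assumed details: the pinhole intrinsics, the example stroke, and the polyline attributes are invented for the sketch, not taken from the paper.

```python
# Hypothetical sketch: project tracked 3D pointer samples (camera frame,
# metres) into pixel coordinates and emit a CVAT-style <polyline> element.
# Intrinsics and the stroke below are illustrative values only.
import xml.etree.ElementTree as ET

def project(point3d, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0):
    """Pinhole projection of a camera-frame 3D point to pixel coordinates."""
    x, y, z = point3d
    return (fx * x / z + cx, fy * y / z + cy)

def trajectory_to_cvat_polyline(points3d, label="defect"):
    """Wrap a projected pointer trajectory as a CVAT-style polyline."""
    pts = ";".join(f"{u:.2f},{v:.2f}" for u, v in map(project, points3d))
    poly = ET.Element("polyline", label=label, points=pts, occluded="0")
    return ET.tostring(poly, encoding="unicode")

# A short pointer stroke recorded in the camera frame (metres)
stroke = [(0.01, 0.02, 0.50), (0.02, 0.02, 0.50), (0.03, 0.03, 0.51)]
print(trajectory_to_cvat_polyline(stroke))
```

In a real system the camera would be calibrated against the tracked pointer's coordinate frame, and the XML fragment would be embedded in a full CVAT annotation file rather than emitted standalone.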
Related papers
- A Visual Tool for Interactive Model Explanation using Sensitivity Analysis [0.0]
We present SAInT, a Python-based tool for exploring and understanding the behavior of Machine Learning (ML) models. Our system supports Human-in-the-Loop (HITL) attribution by enabling users to configure, train, evaluate, and explain models. We demonstrate the system on a classification task predicting survival on the Titanic dataset and show how sensitivity information can guide feature selection and data refinement.
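The sensitivity analysis mentioned above can be illustrated with a minimal one-at-a-time perturbation scheme; this is a generic sketch, not SAInT's actual implementation, and the toy linear model is invented for illustration.

```python
# Generic one-at-a-time sensitivity analysis (illustrative, not SAInT code):
# score each input feature by how much a small perturbation of it
# changes the model's output.
def sensitivity(model, x, delta=1e-3):
    """Return per-feature output change rates for small input perturbations."""
    base = model(x)
    scores = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += delta
        scores.append(abs(model(xp) - base) / delta)
    return scores

# Toy linear "model": output = 2*x0 + 0*x1 - 5*x2
model = lambda x: 2 * x[0] + 0 * x[1] - 5 * x[2]
print(sensitivity(model, [1.0, 1.0, 1.0]))  # approximately [2, 0, 5]
```

Features with near-zero scores (here x1) are candidates for removal, which is how such sensitivity information can guide feature selection.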
arXiv Detail & Related papers (2025-08-06T09:53:31Z)
- Learning Before Filtering: Real-Time Hardware Learning at the Detector Level [0.0]
This paper presents a digital hardware architecture designed for real-time neural network training. The architecture is both scalable and adaptable, representing a significant advancement toward integrating learning directly within detector systems.
arXiv Detail & Related papers (2025-06-13T17:38:16Z)
- Parameter-Efficient and Memory-Efficient Tuning for Vision Transformer: A Disentangled Approach [87.8330887605381]
We show how to adapt a pre-trained Vision Transformer to downstream recognition tasks with only a few learnable parameters.
We synthesize a task-specific query with a learnable and lightweight module, which is independent of the pre-trained model.
Our method achieves state-of-the-art performance under memory constraints, showcasing its applicability in real-world situations.
arXiv Detail & Related papers (2024-07-09T15:45:04Z)
- Development of an NLP-driven computer-based test guide for visually impaired students [0.28647133890966986]
This paper presents an NLP-driven Computer-Based Test guide for visually impaired students.
It employs pre-trained speech technology methods to provide real-time assistance and support to visually impaired students.
arXiv Detail & Related papers (2024-01-22T21:59:00Z)
- Towards Automatic Translation of Machine Learning Visual Insights to Analytical Assertions [23.535630175567146]
We present our vision for developing an automated tool capable of translating visual properties observed in Machine Learning (ML) visualisations into Python assertions.
The tool aims to streamline the process of manually verifying these visualisations in the ML development cycle, which is critical as real-world data and assumptions often change post-deployment.
arXiv Detail & Related papers (2024-01-15T14:11:59Z)
- Model-Driven Engineering Method to Support the Formalization of Machine Learning using SysML [0.0]
This work introduces a method supporting the collaborative definition of machine learning tasks by leveraging model-based engineering.
The method supports the identification and integration of various data sources, the required definition of semantic connections between data attributes, and the definition of data processing steps.
arXiv Detail & Related papers (2023-07-10T11:33:46Z)
- Benchmarking Automated Machine Learning Methods for Price Forecasting Applications [58.720142291102135]
We show the possibility of substituting manually created ML pipelines with automated machine learning (AutoML) solutions.
Based on the CRISP-DM process, we split the manual ML pipeline into a machine learning and non-machine learning part.
We show in a case study for the industrial use case of price forecasting, that domain knowledge combined with AutoML can weaken the dependence on ML experts.
arXiv Detail & Related papers (2023-04-28T10:27:38Z)
- Planning for Learning Object Properties [117.27898922118946]
We formalize the problem of automatically training a neural network to recognize object properties as a symbolic planning problem.
We use planning techniques to produce a strategy for automating the training dataset creation and the learning process.
We provide an experimental evaluation in both a simulated and a real environment.
arXiv Detail & Related papers (2023-01-15T09:37:55Z)
- Information-Theoretic Odometry Learning [83.36195426897768]
We propose a unified information theoretic framework for learning-motivated methods aimed at odometry estimation.
The proposed framework provides an elegant tool for performance evaluation and understanding in information-theoretic language.
arXiv Detail & Related papers (2022-03-11T02:37:35Z)
- Feature Visualization within an Automated Design Assessment leveraging Explainable Artificial Intelligence Methods [0.0]
Automated capability assessment, mainly leveraged by deep learning systems driven from 3D CAD data, has been presented. Current assessment systems may be able to assess CAD data with regard to abstract features, but without any geometrical indicator of the reasons for the system's decision.
Within the NeuroCAD Project, xAI methods are used to identify geometrical features which are associated with a certain abstract feature.
arXiv Detail & Related papers (2022-01-28T13:31:42Z)
- SOLIS -- The MLOps journey from data acquisition to actionable insights [62.997667081978825]
Existing approaches, however, do not supply the needed procedures and pipelines for the actual deployment of machine learning capabilities in real production-grade systems. In this paper we present a unified deployment pipeline and freedom-to-operate approach that supports all requirements while using basic cross-platform tensor framework and script language engines.
arXiv Detail & Related papers (2021-12-22T14:45:37Z)
- Visual Transformer for Task-aware Active Learning [49.903358393660724]
We present a novel pipeline for pool-based Active Learning.
Our method exploits accessible unlabelled examples during training to estimate their correlation with the labelled examples.
Visual Transformer models non-local visual concept dependency between labelled and unlabelled examples.
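As a rough illustration of pool-based selection (a generic diversity heuristic, not the paper's transformer-based method), one can score unlabelled examples by their feature-space distance to the labelled set and query the farthest ones:

```python
# Illustrative pool-based active-learning step: pick the k unlabelled
# examples least similar to the current labelled set. The feature
# vectors below are toy values invented for the sketch.
def select_batch(labelled, pool, k=2):
    """Return indices of the k pool items farthest from the labelled set."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    # Score each pool item by its distance to the nearest labelled item.
    scores = [min(dist(p, l) for l in labelled) for p in pool]
    return sorted(range(len(pool)), key=lambda i: -scores[i])[:k]

labelled = [[0.0, 0.0], [1.0, 0.0]]
pool = [[0.1, 0.0], [5.0, 5.0], [0.9, 0.1], [3.0, 0.0]]
print(select_batch(labelled, pool))  # farthest-first picks: [1, 3]
```

The paper's contribution is to replace such hand-crafted distance heuristics with learned, task-aware relations between labelled and unlabelled examples.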
arXiv Detail & Related papers (2021-06-07T17:13:59Z)
- Federated Self-Supervised Learning of Multi-Sensor Representations for Embedded Intelligence [8.110949636804772]
Smartphones, wearables, and Internet of Things (IoT) devices produce a wealth of data that cannot be accumulated in a centralized repository for learning supervised models.
We propose a self-supervised approach termed scalogram-signal correspondence learning, based on the wavelet transform, to learn useful representations from unlabeled sensor inputs.
We extensively assess the quality of learned features with our multi-view strategy on diverse public datasets, achieving strong performance in all domains.
arXiv Detail & Related papers (2020-07-25T21:59:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.