Tactile Gesture Recognition with Built-in Joint Sensors for Industrial Robots
- URL: http://arxiv.org/abs/2508.12435v1
- Date: Sun, 17 Aug 2025 17:04:58 GMT
- Title: Tactile Gesture Recognition with Built-in Joint Sensors for Industrial Robots
- Authors: Deqing Song, Weimin Yang, Maryam Rezayati, Hans Wernher van de Venn
- Abstract summary: This paper explores deep learning methods relying solely on a robot's built-in joint sensors, eliminating the need for external sensors. We evaluated various convolutional neural network (CNN) architectures and collected two datasets to study the impact of data representation and model architecture on recognition accuracy.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While gesture recognition using vision or robot skins is an active research area in Human-Robot Collaboration (HRC), this paper explores deep learning methods relying solely on a robot's built-in joint sensors, eliminating the need for external sensors. We evaluated various convolutional neural network (CNN) architectures and collected two datasets to study the impact of data representation and model architecture on the recognition accuracy. Our results show that spectrogram-based representations significantly improve accuracy, while model architecture plays a smaller role. We also tested generalization to new robot poses, where spectrogram-based models performed better. Implemented on a Franka Emika Research robot, two of our methods, STFT2DCNN and STT3DCNN, achieved over 95% accuracy in contact detection and gesture classification. These findings demonstrate the feasibility of external-sensor-free tactile recognition and promote further research toward cost-effective, scalable solutions for HRC.
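The abstract names the two best-performing models, STFT2DCNN and STT3DCNN, but this listing gives no architectural details. As a rough illustration of the spectrogram-based pipeline it describes, the sketch below computes a short-time Fourier transform per joint channel with SciPy and classifies the stacked magnitude spectrograms with a small 2D CNN in PyTorch. The joint count (7, matching a Franka arm), sampling rate, window size, and layer widths are all illustrative assumptions, not the paper's configuration.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import stft

# Illustrative assumptions: 7 joints, 1 kHz sampling, 1-second windows;
# the listing does not specify these values.
N_JOINTS, FS, WIN = 7, 1000, 1000

def joint_spectrograms(signal: np.ndarray) -> torch.Tensor:
    """signal: (N_JOINTS, WIN) array of joint sensor samples.
    Returns (N_JOINTS, freq_bins, time_bins) magnitude spectrograms."""
    _, _, z = stft(signal, fs=FS, nperseg=128, noverlap=64)  # per-row STFT
    return torch.from_numpy(np.abs(z)).float()

class SpectrogramCNN(nn.Module):
    """Small 2D CNN over stacked per-joint spectrograms (one input channel per joint)."""
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(N_JOINTS, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(64 * 4 * 4, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# Usage: one fake window of joint sensor data -> class logits.
window = np.random.randn(N_JOINTS, WIN)
logits = SpectrogramCNN()(joint_spectrograms(window).unsqueeze(0))
print(logits.shape)  # torch.Size([1, 5])
```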
Related papers
- Efficient Surgical Robotic Instrument Pose Reconstruction in Real World Conditions Using Unified Feature Detection [21.460727996614704]
MIS robots have long kinematic chains and partial visibility of their degrees of freedom in the camera. We propose a novel framework that unifies the detection of geometric primitives through a shared encoding. This architecture detects both keypoints and edges in a single inference and is trained on large-scale synthetic data with projective labeling.
arXiv Detail & Related papers (2025-10-03T22:03:28Z)
- What Matters for Active Texture Recognition With Vision-Based Tactile Sensors [17.019982978396122]
We formalize the active sampling problem in the context of tactile fabric recognition.
We investigate which components are crucial for quick and reliable texture recognition.
Our best approach reaches 90.0% accuracy in under 5 touches, highlighting that vision-based tactile sensors are highly effective for fabric texture recognition.
arXiv Detail & Related papers (2024-03-20T16:06:01Z)
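The entry above reports high accuracy within a handful of touches but does not spell out the sampling strategy here. Purely as a toy illustration of active touch selection with an early-stopping confidence criterion, the sketch below uses a fabricated sense() signal and a stand-in classify() posterior; both are hypothetical placeholders, not the paper's components.

```python
import numpy as np

def sense(location: int) -> np.ndarray:
    """Simulated tactile feature from touching at `location` (placeholder)."""
    rng = np.random.default_rng(location)
    return rng.normal(size=8)

def classify(history: list[np.ndarray], n_classes: int = 4) -> np.ndarray:
    """Toy posterior over fabric classes from all touches so far (placeholder)."""
    logits = np.sum(history, axis=0)[:n_classes]
    p = np.exp(logits - logits.max())
    return p / p.sum()

posterior, touches = np.full(4, 0.25), []
while posterior.max() < 0.9 and len(touches) < 5:   # stop early once confident
    touches.append(sense(len(touches)))             # take the next touch
    posterior = classify(touches)                   # update the class posterior
print(f"{len(touches)} touches, prediction={posterior.argmax()}")
```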
- Online Recognition of Incomplete Gesture Data to Interface Collaborative Robots [0.0]
This paper introduces an HRI framework to classify large vocabularies of interwoven static gestures (SGs) and dynamic gestures (DGs) captured with wearable sensors.
The recognized gestures are used to teleoperate a robot in a collaborative process that consists of preparing a breakfast meal.
arXiv Detail & Related papers (2023-04-13T18:49:08Z)
- Deep Transfer Learning with Graph Neural Network for Sensor-Based Human Activity Recognition [12.51766929898714]
We devise a graph-inspired deep learning approach to sensor-based HAR tasks.
We present a multi-layer residual graph convolutional neural network (ResGCNN) for these tasks.
Experimental results on the PAMAP2 and mHealth data sets demonstrate that our ResGCNN is effective at capturing the characteristics of actions.
arXiv Detail & Related papers (2022-03-14T07:57:32Z)
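ResGCNN's exact layer design isn't given in this listing. The sketch below shows one generic reading of "residual graph convolution": standard GCN propagation over a symmetrically normalized adjacency, wrapped in a skip connection, with body-worn sensors as graph nodes. The sizes and the toy adjacency are assumptions.

```python
import torch
import torch.nn as nn

class ResidualGCNLayer(nn.Module):
    """One graph-convolution layer with a residual connection (generic sketch)."""
    def __init__(self, dim: int):
        super().__init__()
        self.lin = nn.Linear(dim, dim)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # x: (nodes, dim); adj_norm: normalized adjacency (nodes, nodes)
        return x + self.act(self.lin(adj_norm @ x))   # residual skip

def normalize_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """D^{-1/2} (A + I) D^{-1/2}, the standard GCN propagation matrix."""
    a = adj + torch.eye(adj.size(0))
    d = a.sum(1).rsqrt().diag()
    return d @ a @ d

# Usage: treat each wearable sensor as a node carrying a feature vector.
adj = torch.tensor([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])  # 3 sensors (toy)
x = torch.randn(3, 16)
out = ResidualGCNLayer(16)(x, normalize_adjacency(adj))
print(out.shape)  # torch.Size([3, 16])
```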
- Object recognition for robotics from tactile time series data utilising different neural network architectures [0.0]
This paper investigates the use of Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) neural network architectures for object classification on tactile data.
We compare these methods using data from two different fingertip sensors (namely the BioTac SP and WTS-FT) in the same physical setup.
The results show that the proposed method improves the maximum accuracy from 82.4% (BioTac SP fingertips) and 90.7% (WTS-FT fingertips) with complete time-series data to about 94% for both sensor types.
arXiv Detail & Related papers (2021-09-09T22:05:45Z)
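As a minimal sketch of the LSTM route compared in the entry above: an LSTM reads per-timestep tactile features and the final hidden state is classified. The taxel count, hidden size, and class count are placeholders; the paper's BioTac SP / WTS-FT feature layouts aren't reproduced here.

```python
import torch
import torch.nn as nn

class TactileLSTM(nn.Module):
    """LSTM over per-timestep tactile features (sizes are assumptions)."""
    def __init__(self, n_taxels: int = 24, hidden: int = 64, n_classes: int = 10):
        super().__init__()
        self.lstm = nn.LSTM(n_taxels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_taxels) pressure readings from one fingertip
        _, (h, _) = self.lstm(x)           # h: (1, batch, hidden), final state
        return self.head(h[-1])            # classify from the last hidden state

# Usage: a batch of 2 grasps, 50 timesteps, 24 taxels (placeholder dimensions).
logits = TactileLSTM()(torch.randn(2, 50, 24))
print(logits.shape)  # torch.Size([2, 10])
```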
- Cross-Modal Analysis of Human Detection for Robotics: An Industrial Case Study [7.844709223688293]
We conduct a systematic cross-modal analysis of sensor-algorithm combinations typically used in robotics.
We compare the performance of state-of-the-art person detectors for 2D range data, 3D lidar, and RGB-D data.
We extend a strong image-based RGB-D detector to provide cross-modal supervision for lidar detectors in the form of weak 3D bounding box labels.
arXiv Detail & Related papers (2021-08-03T13:33:37Z)
- Domain and Modality Gaps for LiDAR-based Person Detection on Mobile Robots [91.01747068273666]
This paper studies existing LiDAR-based person detectors with a particular focus on mobile robot scenarios.
Experiments revolve around the domain gap between driving and mobile robot scenarios, as well as the modality gap between 3D and 2D LiDAR sensors.
Results provide practical insights into LiDAR-based person detection and facilitate informed decisions for relevant mobile robot designs and applications.
arXiv Detail & Related papers (2021-06-21T16:35:49Z)
- Visionary: Vision architecture discovery for robot learning [58.67846907923373]
We propose a vision-based architecture search algorithm for robot manipulation learning, which discovers interactions between low dimension action inputs and high dimensional visual inputs.
Our approach automatically designs architectures while training on the task, discovering novel ways of combining and attending to image feature representations together with actions and features from previous layers.
arXiv Detail & Related papers (2021-03-26T17:51:43Z)
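Visionary's discovered architectures aren't described in this listing. To make "combining and attending image features with actions" concrete, the sketch below shows one plausible fusion pattern, FiLM-style conditioning, where a low-dimensional action vector predicts a per-channel scale and shift for convolutional features. This is an illustrative pattern, not the paper's search result.

```python
import torch
import torch.nn as nn

class ActionConditionedBlock(nn.Module):
    """Fuse a low-dimensional action vector into high-dimensional image
    features via predicted per-channel scale/shift (FiLM-style sketch)."""
    def __init__(self, action_dim: int = 4, channels: int = 32):
        super().__init__()
        self.conv = nn.Conv2d(3, channels, 3, padding=1)
        self.film = nn.Linear(action_dim, 2 * channels)

    def forward(self, image: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        feat = self.conv(image)                           # (B, C, H, W)
        scale, shift = self.film(action).chunk(2, dim=1)  # (B, C) each
        return feat * scale[:, :, None, None] + shift[:, :, None, None]

# Usage: a batch of 2 images conditioned on 4-dimensional actions.
out = ActionConditionedBlock()(torch.randn(2, 3, 64, 64), torch.randn(2, 4))
print(out.shape)  # torch.Size([2, 32, 64, 64])
```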
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have proven to be suitable tools for this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- Domain Adaptive Robotic Gesture Recognition with Unsupervised Kinematic-Visual Data Alignment [60.31418655784291]
We propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from simulator to real robot.
It remedies the domain gap with enhanced transferable features, using temporal cues in videos and inherent correlations across modalities for gesture recognition.
Results show that our approach recovers performance with substantial gains, up to 12.91% in accuracy and 20.16% in F1 score, without using any annotations on the real robot.
arXiv Detail & Related papers (2021-03-06T09:10:03Z)
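The entry above doesn't name its alignment mechanism here, so the sketch below shows a standard unsupervised domain-adaptation device for comparison: a gradient-reversal layer that trains a domain classifier (simulator vs. real robot) while pushing the shared features toward domain invariance. This is a generic technique, not necessarily the paper's; all layer sizes and data are toy placeholders.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, negated gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad

features = nn.Linear(32, 16)       # shared feature extractor (toy)
domain_head = nn.Linear(16, 2)     # predicts: simulator vs. real robot
x = torch.randn(8, 32)             # a batch of kinematic features (toy)
f = features(x)
domain_logits = domain_head(GradReverse.apply(f))
# Training the domain head while reversing gradients into `features`
# pushes the features toward being domain-invariant.
loss = nn.functional.cross_entropy(domain_logits, torch.randint(0, 2, (8,)))
loss.backward()
```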
- Where is my hand? Deep hand segmentation for visual self-recognition in humanoid robots [129.46920552019247]
We propose the use of a Convolutional Neural Network (CNN) to segment the robot hand from an image in an egocentric view.
We fine-tuned the Mask R-CNN network for the specific task of segmenting the hand of the humanoid robot Vizzy.
arXiv Detail & Related papers (2021-02-09T10:34:32Z)
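Fine-tuning Mask R-CNN for a two-class (background + hand) segmentation task follows a standard torchvision recipe: swap the box and mask predictor heads for ones with the right class count. The sketch below shows that recipe with dummy data; the paper's actual training configuration for Vizzy isn't specified here.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

# Replace the heads of a pretrained Mask R-CNN for 2 classes (background + hand).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
in_feat = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes=2)
in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256, num_classes=2)

# One dummy training step: images are a list of tensors, targets hold boxes/masks.
model.train()
images = [torch.rand(3, 240, 320)]
targets = [{
    "boxes": torch.tensor([[30.0, 40.0, 120.0, 200.0]]),
    "labels": torch.tensor([1]),
    "masks": torch.zeros(1, 240, 320, dtype=torch.uint8),
}]
losses = model(images, targets)     # dict of classification/box/mask losses
sum(losses.values()).backward()
```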
- AutoOD: Automated Outlier Detection via Curiosity-guided Search and Self-imitation Learning [72.99415402575886]
Outlier detection is an important data mining task with numerous practical applications.
We propose AutoOD, an automated outlier detection framework, which aims to search for an optimal neural network model.
Experimental results on various real-world benchmark datasets demonstrate that the deep model identified by AutoOD achieves the best performance.
arXiv Detail & Related papers (2020-06-19T18:57:51Z)
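AutoOD searches for an outlier-detection network automatically. As a hand-written example of the kind of model in that search space, the sketch below scores outliers by autoencoder reconstruction error; the architecture is a fixed toy, not AutoOD's search procedure or its discovered model.

```python
import torch
import torch.nn as nn

class AEOutlierScorer(nn.Module):
    """Autoencoder-based outlier scoring: reconstruction error as the score."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 4), nn.ReLU(), nn.Linear(4, dim))

    def score(self, x: torch.Tensor) -> torch.Tensor:
        # Higher reconstruction error = more anomalous sample.
        return ((self.net(x) - x) ** 2).mean(dim=1)

# Usage: score 100 random 16-dimensional points and list the top outliers.
x = torch.randn(100, 16)
scores = AEOutlierScorer().score(x)
print(scores.topk(5).indices)  # indices of the 5 most outlier-like rows
```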
This list is automatically generated from the titles and abstracts of the papers on this site.