A multi-modal table tennis robot system
- URL: http://arxiv.org/abs/2310.19062v2
- Date: Sat, 25 Nov 2023 15:29:59 GMT
- Title: A multi-modal table tennis robot system
- Authors: Andreas Ziegler, Thomas Gossard, Karl Vetter, Jonas Tebbe, Andreas Zell
- Abstract summary: We present an improved table tennis robot system with high-accuracy vision detection and fast robot reaction.
Based on previous work, our system contains a KUKA robot arm with 6 DOF, four frame-based cameras, and two additional event-based cameras.
- Score: 12.590158763556186
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, robotic table tennis has become a popular research challenge for perception and robot control. Here, we present an improved table tennis robot system with high-accuracy vision detection and fast robot reaction. Based on previous work, our system contains a KUKA robot arm with 6 DOF, four frame-based cameras, and two additional event-based cameras. We developed a novel calibration approach to calibrate this multimodal perception system. Because spin estimation is crucial in table tennis, we also introduce a novel, more accurate spin estimation approach. Finally, we show how combining an event-based camera with a Spiking Neural Network (SNN) enables accurate ball detection.
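The abstract gives no implementation details, but the claim that spin estimation is crucial follows from basic ball-flight physics: the Magnus force of a spinning ball bends its trajectory, so the robot's predicted interception point depends on the spin estimate. Below is a minimal sketch of this effect, not the paper's method; the drag and Magnus coefficients are assumed placeholder values.

```python
import numpy as np

# Illustrative ball-flight model: gravity + quadratic drag + Magnus force.
# K_DRAG and K_MAGNUS are assumed placeholder coefficients, not values from
# the paper; a real system would fit them to measured trajectories.
G = np.array([0.0, 0.0, -9.81])    # gravity (m/s^2)
K_DRAG = 0.1                       # lumped drag coefficient (assumed)
K_MAGNUS = 0.01                    # lumped Magnus coefficient (assumed)

def step(pos, vel, spin, dt=0.002):
    """Advance the ball state by one Euler step; spin is angular velocity (rad/s)."""
    accel = G - K_DRAG * np.linalg.norm(vel) * vel + K_MAGNUS * np.cross(spin, vel)
    return pos + dt * vel, vel + dt * accel

# Same launch, different spin: the predicted landing point shifts, which is
# why the robot needs an accurate spin estimate to intercept the ball.
for spin in (np.zeros(3), np.array([0.0, 150.0, 0.0])):   # no spin vs. topspin
    pos, vel = np.array([0.0, 0.0, 0.3]), np.array([5.0, 0.0, 1.0])
    while pos[2] > 0.0:
        pos, vel = step(pos, vel, spin)
    print(f"spin {spin} rad/s -> lands at x = {pos[0]:.2f} m")
```

Running the sketch shows the landing point shifting noticeably between the no-spin and topspin cases; that shift is the error a racket controller would inherit from a poor spin estimate.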
Related papers
- Open-TeleVision: Teleoperation with Immersive Active Visual Feedback [17.505318269362512]
Open-TeleVision allows operators to actively perceive the robot's surroundings in a stereoscopic manner.
The system mirrors the operator's arm and hand movements on the robot, creating an immersive experience.
We validate the effectiveness of our system by collecting data and training imitation learning policies on four long-horizon, precise tasks.
arXiv Detail & Related papers (2024-07-01T17:55:35Z) - Table tennis ball spin estimation with an event camera [11.735290341808064]
In table tennis, the combination of high velocity and spin renders traditional low-frame-rate cameras inadequate.
We present the first method for table tennis spin estimation using an event camera.
We achieve a spin magnitude mean error of $10.7 \pm 17.3$ rps and a spin axis mean error of $32.9 \pm 38.2^\circ$ in real time for a flying ball.
arXiv Detail & Related papers (2024-04-15T15:36:38Z) - Exploring 3D Human Pose Estimation and Forecasting from the Robot's Perspective: The HARPER Dataset [52.22758311559]
We introduce HARPER, a novel dataset for 3D body pose estimation and forecasting in dyadic interactions between users and Spot.
The key novelty is the focus on the robot's perspective, i.e., on the data captured by the robot's sensors.
The scenario underlying HARPER includes 15 actions, of which 10 involve physical contact between the robot and users.
arXiv Detail & Related papers (2024-03-21T14:53:50Z) - AnyTeleop: A General Vision-Based Dexterous Robot Arm-Hand Teleoperation System [51.48191418148764]
Vision-based teleoperation can endow robots with human-level intelligence to interact with the environment.
Current vision-based teleoperation systems are designed and engineered towards a particular robot model and deployment environment.
We propose AnyTeleop, a unified and general teleoperation system to support multiple different arms, hands, realities, and camera configurations within a single system.
arXiv Detail & Related papers (2023-07-10T14:11:07Z) - Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
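As a rough illustration of the masked-prediction idea this entry describes, here is a toy setup; the dimensions, masking ratio, and MSE reconstruction target are assumptions for the sketch, not RPT's published choices.

```python
import torch
import torch.nn as nn

# Toy masked-prediction setup over a sequence of sensorimotor tokens.
# D_MODEL, SEQ_LEN, the 50% masking ratio, and the reconstruction loss
# are assumptions for this sketch, not RPT's published architecture.
D_MODEL, SEQ_LEN = 64, 32

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True),
    num_layers=2,
)
mask_token = nn.Parameter(torch.zeros(D_MODEL))   # learned placeholder embedding
head = nn.Linear(D_MODEL, D_MODEL)                # predicts the original token

# Stand-in for embedded camera / proprioception / action tokens.
tokens = torch.randn(1, SEQ_LEN, D_MODEL)
mask = torch.rand(1, SEQ_LEN) < 0.5               # hide half the sequence
inputs = torch.where(mask.unsqueeze(-1), mask_token, tokens)

pred = head(encoder(inputs))
loss = ((pred - tokens) ** 2)[mask].mean()        # score only masked positions
loss.backward()
print(f"masked-token reconstruction loss: {loss.item():.3f}")
```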
arXiv Detail & Related papers (2023-06-16T17:58:10Z) - Hierarchical Reinforcement Learning for Precise Soccer Shooting Skills using a Quadrupedal Robot [76.04391023228081]
We address the problem of enabling quadrupedal robots to perform precise shooting skills in the real world using reinforcement learning.
We propose a hierarchical framework that leverages deep reinforcement learning to train a robust motion control policy.
We deploy the proposed framework on an A1 quadrupedal robot and enable it to accurately shoot the ball to random targets in the real world.
arXiv Detail & Related papers (2022-08-01T22:34:51Z) - Neural Scene Representation for Locomotion on Structured Terrain [56.48607865960868]
We propose a learning-based method to reconstruct the local terrain for a mobile robot traversing urban environments.
Using a stream of depth measurements from the onboard cameras and the robot's trajectory, the method estimates the topography in the robot's vicinity.
We propose a 3D reconstruction model that faithfully reconstructs the scene, despite the noisy measurements and large amounts of missing data coming from the blind spots of the camera arrangement.
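The summary stays at the architecture level; as a hedged baseline for the data structure involved, here is a robot-centric 2.5D elevation map rasterized from depth points (grid extent and resolution are assumed values), with NaN cells marking the blind spots the paper's learned model is designed to fill in.

```python
import numpy as np

# Rasterize a noisy point cloud into a robot-centric 2.5D elevation map.
# Grid extent and resolution are assumed values; NaN cells mark the camera
# blind spots that the paper's learned model is meant to fill in.
RES, HALF_CELLS = 0.05, 40                    # 5 cm cells, 4 m x 4 m window

def elevation_map(points_xyz, robot_xy):
    grid = np.full((2 * HALF_CELLS, 2 * HALF_CELLS), np.nan)
    ij = np.floor((points_xyz[:, :2] - robot_xy) / RES).astype(int) + HALF_CELLS
    keep = (ij >= 0).all(axis=1) & (ij < 2 * HALF_CELLS).all(axis=1)
    for (i, j), z in zip(ij[keep], points_xyz[keep, 2]):
        # Keep the highest return per cell; a real pipeline would filter noise.
        grid[i, j] = z if np.isnan(grid[i, j]) else max(grid[i, j], z)
    return grid

points = np.random.rand(1000, 3) * [4.0, 4.0, 0.2] - [2.0, 2.0, 0.0]
grid = elevation_map(points, robot_xy=np.zeros(2))
print(f"observed cells: {np.isfinite(grid).sum()} / {grid.size}")
```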
arXiv Detail & Related papers (2022-06-16T10:45:17Z) - Single-view robot pose and joint angle estimation via render & compare [40.05546237998603]
We introduce RoboPose, a method to estimate the joint angles and the 6D camera-to-robot pose of a known articulated robot from a single RGB image.
This is an important problem to grant mobile and itinerant autonomous systems the ability to interact with other robots.
arXiv Detail & Related papers (2021-04-19T14:48:29Z) - Event Camera Based Real-Time Detection and Tracking of Indoor Ground Robots [2.471139321417215]
This paper presents a real-time method to detect and track multiple mobile ground robots using event cameras.
The method uses density-based spatial clustering of applications with noise (DBSCAN) to detect the robots and a single k-dimensional (k-d) tree to accurately keep track of them as they move in an indoor arena.
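Both named building blocks are standard library components; below is a minimal sketch of the detect-then-associate loop using scikit-learn and SciPy, where the synthetic event coordinates, the DBSCAN parameters, and the prior track positions are assumed for illustration.

```python
import numpy as np
from scipy.spatial import KDTree
from sklearn.cluster import DBSCAN

# Cluster a window of event (x, y) coordinates into robot detections, then
# associate detections with the previous tracks via a k-d tree lookup.
# The synthetic events, eps/min_samples, and track positions are assumed.
events = np.vstack([
    np.random.randn(300, 2) * 3 + [120, 210],   # events around robot A
    np.random.randn(300, 2) * 3 + [410, 340],   # events around robot B
    np.random.rand(200, 2) * 640,               # background noise events
])
labels = DBSCAN(eps=5.0, min_samples=20).fit_predict(events)
centroids = np.array([events[labels == k].mean(axis=0)
                      for k in set(labels) if k != -1])   # -1 marks noise

prev_tracks = np.array([[100.0, 200.0], [400.0, 350.0]])  # last known positions
if len(centroids):
    dist, idx = KDTree(prev_tracks).query(centroids)      # nearest-track match
    for c, d, i in zip(centroids, dist, idx):
        print(f"detection at {c.round(1)} -> track {i} ({d:.1f} px away)")
```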
arXiv Detail & Related papers (2021-02-23T19:50:17Z) - Performance Evaluation of Low-Cost Machine Vision Cameras for Image-Based Grasp Verification [0.0]
In this paper, we propose a vision-based grasp verification system using machine vision cameras.
Our experiments demonstrate that the selected machine vision camera and the deep learning models can robustly verify grasps with 97% per-frame accuracy.
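As a hedged sketch of what per-frame verification can look like, here is a toy pipeline; the tiny CNN and the majority vote over a clip are illustrative assumptions, not the paper's models or decision rule.

```python
import torch
import torch.nn as nn

# Per-frame binary grasp classifier plus a majority vote over a short clip.
# The tiny CNN and the voting rule are illustrative assumptions, not the
# paper's models or its decision logic.
net = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),                       # logit > 0 means "object grasped"
)

frames = torch.randn(8, 3, 64, 64)          # stand-in for a clip of camera frames
with torch.no_grad():
    per_frame = net(frames).squeeze(1) > 0  # per-frame verification
grasp_verified = per_frame.float().mean() > 0.5   # majority vote over the clip
print(f"votes: {per_frame.tolist()} -> grasp verified: {bool(grasp_verified)}")
```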
arXiv Detail & Related papers (2020-03-23T10:34:27Z) - Morphology-Agnostic Visual Robotic Control [76.44045983428701]
MAVRIC is an approach that works with minimal prior knowledge of the robot's morphology.
We demonstrate our method on visually-guided 3D point reaching, trajectory following, and robot-to-robot imitation.
arXiv Detail & Related papers (2019-12-31T15:45:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information (including all content) and is not responsible for any consequences of its use.