Robot Kinematics: Motion, Kinematics and Dynamics
- URL: http://arxiv.org/abs/2211.15093v1
- Date: Mon, 28 Nov 2022 06:42:14 GMT
- Title: Robot Kinematics: Motion, Kinematics and Dynamics
- Authors: Jiawei Zhang
- Abstract summary: This is a follow-up tutorial article to our previous article entitled "Robot Basics: Representation, Rotation and Velocity".
Specifically, in this article, we will cover some more advanced topics on robot kinematics.
Similar to the previous article, math and formulas will also be heavily used in this article.
- Score: 10.879701971582502
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This is a follow-up tutorial article to our previous article entitled "Robot
Basics: Representation, Rotation and Velocity". For a better understanding of the
topics covered in this article, we recommend that readers first read our
previous tutorial article on robot basics. Specifically, in this article, we
will cover some more advanced topics on robot kinematics, including robot
motion, forward kinematics, inverse kinematics, and robot dynamics. The
topics, terminologies and notations introduced in the previous article will be
used directly, without re-introduction. As in the previous article, math and
formulas will be used heavily (we hope the readers are well prepared for the
upcoming math bomb). After reading this article, readers should have a deeper
understanding of how robot motion, kinematics and dynamics work. More
advanced topics on robot control will be introduced in the following
tutorial articles.
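As a concrete illustration of the forward-kinematics topic listed in the abstract, here is a minimal sketch (not taken from the article itself) that computes the end-effector pose of a hypothetical two-link planar arm by composing homogeneous transforms in SE(2). The link lengths and joint names are assumptions for the example:

```python
import numpy as np

def forward_kinematics(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm (illustrative sketch).

    Each link contributes a joint rotation followed by a translation of
    the link length along the new x-axis, composed as 3x3 homogeneous
    transforms in SE(2).
    """
    def link_transform(theta, length):
        c, s = np.cos(theta), np.sin(theta)
        # Rot(theta) followed by Trans(length, 0)
        return np.array([[c, -s, length * c],
                         [s,  c, length * s],
                         [0,  0, 1.0]])

    T = link_transform(theta1, l1) @ link_transform(theta2, l2)
    x, y = float(T[0, 2]), float(T[1, 2])
    phi = theta1 + theta2  # end-effector orientation
    return x, y, phi

# Fully stretched along x: the end-effector sits at (l1 + l2, 0).
print(forward_kinematics(0.0, 0.0))  # → (2.0, 0.0, 0.0)
```

Inverse kinematics, also covered in the article, would invert this map: recover `(theta1, theta2)` from a desired `(x, y)`, which in general has zero, one, or multiple solutions.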
Related papers
- Hand-Object Interaction Pretraining from Videos [77.92637809322231]
We learn general robot manipulation priors from 3D hand-object interaction trajectories.
We do so by representing both the human hand and the manipulated object in a shared 3D space and mapping human motions to robot actions.
We empirically demonstrate that finetuning this policy, with both reinforcement learning (RL) and behavior cloning (BC), enables sample-efficient adaptation to downstream tasks and simultaneously improves robustness and generalizability compared to prior approaches.
arXiv Detail & Related papers (2024-09-12T17:59:07Z)
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
- Open-World Object Manipulation using Pre-trained Vision-Language Models [72.87306011500084]
For robots to follow instructions from people, they must be able to connect the rich semantic information in human vocabulary to their sensory observations and actions.
We develop a simple approach, MOO, which leverages a pre-trained vision-language model to extract object-identifying information.
In a variety of experiments on a real mobile manipulator, we find that MOO generalizes zero-shot to a wide range of novel object categories and environments.
arXiv Detail & Related papers (2023-03-02T01:55:10Z)
- Robot Basics: Representation, Rotation and Velocity [10.879701971582502]
Key topics of classic robotics will be introduced, including robot representation, robot rotational motion, coordinates transformation and velocity transformation.
Most of the materials covered in this article are based on the rigid-body kinematics that the readers probably have learned from the physics course at high-school or college.
arXiv Detail & Related papers (2022-11-05T00:03:53Z)
- ClipBot: an educational, physically impaired robot that learns to walk via genetic algorithm optimization [0.0]
We propose ClipBot, a low-cost, do-it-yourself robot whose skeleton is made of two paper clips.
An Arduino nano microcontroller actuates two servo motors that move the paper clips.
Students at the high school level were asked to implement a genetic algorithm to optimize the movements of the robot.
arXiv Detail & Related papers (2022-10-26T13:31:43Z)
- GenLoco: Generalized Locomotion Controllers for Quadrupedal Robots [87.32145104894754]
We introduce a framework for training generalized locomotion (GenLoco) controllers for quadrupedal robots.
Our framework synthesizes general-purpose locomotion controllers that can be deployed on a large variety of quadrupedal robots.
We show that our models acquire more general control strategies that can be directly transferred to novel simulated and real-world robots.
arXiv Detail & Related papers (2022-09-12T15:14:32Z)
- Synthesis and Execution of Communicative Robotic Movements with Generative Adversarial Networks [59.098560311521034]
We focus on how to transfer, across two different robotic platforms, the same kinematics modulation that humans adopt when manipulating delicate objects.
We choose to modulate the velocity profile adopted by the robots' end-effector, inspired by what humans do when transporting objects with different characteristics.
We exploit a novel Generative Adversarial Network architecture, trained with human kinematics examples, to generalize over them and generate new and meaningful velocity profiles.
arXiv Detail & Related papers (2022-03-29T15:03:05Z)
- Know Thyself: Transferable Visuomotor Control Through Robot-Awareness [22.405839096833937]
Training visuomotor robot controllers from scratch on a new robot typically requires generating large amounts of robot-specific data.
We propose a "robot-aware" solution paradigm that exploits readily available robot "self-knowledge".
Our experiments on tabletop manipulation tasks in simulation and on real robots demonstrate that these plug-in improvements dramatically boost the transferability of visuomotor controllers.
arXiv Detail & Related papers (2021-07-19T17:56:04Z)
- Make Bipedal Robots Learn How to Imitate [3.1981440103815717]
We propose a method to train a bipedal robot to perform some basic movements with the help of imitation learning (IL).
An ingeniously written Deep Q Network (DQN) is trained with experience replay to make the robot learn to perform the movements as similarly as possible to the instructor.
arXiv Detail & Related papers (2021-05-15T10:06:13Z)
- Future Frame Prediction for Robot-assisted Surgery [57.18185972461453]
We propose a ternary prior guided variational autoencoder (TPG-VAE) model for future frame prediction in robotic surgical video sequences.
Besides content distribution, our model learns motion distribution, which is novel to handle the small movements of surgical tools.
arXiv Detail & Related papers (2021-03-18T15:12:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.