Intelligent Motion Planning for a Cost-effective Object Follower Mobile
Robotic System with Obstacle Avoidance
- URL: http://arxiv.org/abs/2109.02700v1
- Date: Mon, 6 Sep 2021 19:19:47 GMT
- Title: Intelligent Motion Planning for a Cost-effective Object Follower Mobile
Robotic System with Obstacle Avoidance
- Authors: Sai Nikhil Gona, Prithvi Raj Bandhakavi
- Abstract summary: We propose a robotic system which uses robot vision and deep learning to get the required linear and angular velocities.
The novel methodology that we are proposing is accurate in detecting the position of the unique coloured object in any kind of lighting.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A few industries use manually controlled robots for carrying
material, but such robots cannot be used everywhere at all times. It is
therefore convenient to have robots that can follow a specific human by
tracking a uniquely coloured object held by that person. We propose a robotic
system that uses robot vision and deep learning to compute the required linear
and angular velocities, ν and ω respectively, which in turn make the robot
avoid obstacles while following the uniquely coloured object held by the human.
The novel methodology we propose accurately detects the position of the
uniquely coloured object under any kind of lighting, reports the horizontal
pixel value at which the object appears, and indicates whether the object is
close to or far from the robot. Moreover, the artificial neural networks used
in this problem yielded only a small error in linear and angular velocity
prediction, and the PI controller used to regulate the linear and angular
velocities, and thereby the robot's position, produced impressive results;
this methodology outperforms the other methodologies evaluated.
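The abstract's control step can be sketched as follows. This is a minimal illustrative PI loop, not the authors' implementation: the gains, frame width, and the area-based distance proxy are all assumptions. It maps the tracked object's horizontal pixel position and apparent size to the linear and angular velocities ν and ω.

```python
# Hypothetical sketch of the follower's PI control loop. Gains, the frame
# width, and the area-based distance proxy are illustrative assumptions,
# not values from the paper.

FRAME_WIDTH = 640                # assumed camera resolution
KP_ANG, KI_ANG = 0.004, 0.0005   # illustrative PI gains for omega
KP_LIN, KI_LIN = 0.8, 0.05       # illustrative PI gains for nu

class PIController:
    """Discrete PI controller with a simple anti-windup clamp."""
    def __init__(self, kp, ki, limit):
        self.kp, self.ki, self.limit = kp, ki, limit
        self.integral = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        # anti-windup: keep the accumulated integral bounded
        self.integral = max(-self.limit, min(self.limit, self.integral))
        out = self.kp * error + self.ki * self.integral
        # saturate the command so the robot's actuators are never over-driven
        return max(-self.limit, min(self.limit, out))

def control(object_px, object_area, target_area, dt, ang_pi, lin_pi):
    """Turn toward the object's horizontal pixel position and close the gap."""
    ang_error = (FRAME_WIDTH / 2) - object_px              # >0: object left of center
    lin_error = (target_area - object_area) / target_area  # crude distance proxy
    omega = ang_pi.step(ang_error, dt)
    nu = lin_pi.step(lin_error, dt)
    return nu, omega
```

With the object centered and at the target apparent size, both commands are zero; when the object drifts off-center or shrinks (moves away), ω and ν grow until the errors close again.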
Related papers
- Learning Object Properties Using Robot Proprioception via Differentiable Robot-Object Interaction [52.12746368727368]
Differentiable simulation has become a powerful tool for system identification.
Our approach calibrates object properties by using information from the robot, without relying on data from the object itself.
We demonstrate the effectiveness of our method on a low-cost robotic platform.
arXiv Detail & Related papers (2024-10-04T20:48:38Z)
- Unifying 3D Representation and Control of Diverse Robots with a Single Camera [48.279199537720714]
We introduce Neural Jacobian Fields, an architecture that autonomously learns to model and control robots from vision alone.
Our approach achieves accurate closed-loop control and recovers the causal dynamic structure of each robot.
arXiv Detail & Related papers (2024-07-11T17:55:49Z) - ImitationNet: Unsupervised Human-to-Robot Motion Retargeting via Shared Latent Space [9.806227900768926]
This paper introduces a novel deep-learning approach for human-to-robot motion.
Our method does not require paired human-to-robot data, which facilitates its translation to new robots.
Our model outperforms existing works regarding human-to-robot similarity in terms of efficiency and precision.
arXiv Detail & Related papers (2023-09-11T08:55:04Z) - Giving Robots a Hand: Learning Generalizable Manipulation with
Eye-in-Hand Human Video Demonstrations [66.47064743686953]
Eye-in-hand cameras have shown promise in enabling greater sample efficiency and generalization in vision-based robotic manipulation.
Videos of humans performing tasks, on the other hand, are much cheaper to collect since they eliminate the need for expertise in robotic teleoperation.
In this work, we augment narrow robotic imitation datasets with broad unlabeled human video demonstrations to greatly enhance the generalization of eye-in-hand visuomotor policies.
arXiv Detail & Related papers (2023-07-12T07:04:53Z) - Image-based Pose Estimation and Shape Reconstruction for Robot
Manipulators and Soft, Continuum Robots via Differentiable Rendering [20.62295718847247]
State estimation from measured data is crucial for robotic applications as autonomous systems rely on sensors to capture the motion and localize in the 3D world.
In this work, we achieve image-based robot pose estimation and shape reconstruction from camera images.
We demonstrate that our method of using geometrical shape primitives can achieve high accuracy in shape reconstruction for a soft continuum robot and pose estimation for a robot manipulator.
arXiv Detail & Related papers (2023-02-27T18:51:29Z) - Robots with Different Embodiments Can Express and Influence Carefulness
in Object Manipulation [104.5440430194206]
This work investigates the perception of object manipulations performed with a communicative intent by two robots.
We designed the robots' movements to communicate carefulness or not during the transportation of objects.
arXiv Detail & Related papers (2022-08-03T13:26:52Z) - REvolveR: Continuous Evolutionary Models for Robot-to-robot Policy
Transfer [57.045140028275036]
We consider the problem of transferring a policy across two different robots with significantly different parameters such as kinematics and morphology.
Existing approaches that train a new policy by matching the action or state transition distribution, including imitation learning methods, fail because the optimal action and/or state distributions are mismatched across robots.
We propose a novel method named $REvolveR$ of using continuous evolutionary models for robotic policy transfer implemented in a physics simulator.
arXiv Detail & Related papers (2022-02-10T18:50:25Z) - Embedded Computer Vision System Applied to a Four-Legged Line Follower
Robot [0.0]
This project aims to drive a robot using an automated computer vision embedded system, connecting the robot's vision to its behavior.
The robot is applied on a typical mobile robot's issue: line following.
Decision making of where to move next is based on the line center of the path and is fully automated.
arXiv Detail & Related papers (2021-01-12T23:52:53Z) - Deep Reinforcement learning for real autonomous mobile robot navigation
in indoor environments [0.0]
We present our proof of concept for autonomous self-learning robot navigation in an unknown environment for a real robot without a map or planner.
The input for the robot is only the fused data from a 2D laser scanner and a RGB-D camera as well as the orientation to the goal.
The output actions of an Asynchronous Advantage Actor-Critic network (GA3C) are the linear and angular velocities for the robot.
arXiv Detail & Related papers (2020-05-28T09:15:14Z) - Morphology-Agnostic Visual Robotic Control [76.44045983428701]
MAVRIC is an approach that works with minimal prior knowledge of the robot's morphology.
We demonstrate our method on visually-guided 3D point reaching, trajectory following, and robot-to-robot imitation.
arXiv Detail & Related papers (2019-12-31T15:45:10Z)
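The line-follower entry above bases its steering decision on the line's center in the image. A minimal illustrative sketch of that idea (not the paper's code; the binarized pixel row and centroid formulation are assumptions):

```python
# Illustrative sketch: steering a line follower from the line's centroid in a
# binarized image row (1 = line pixel, 0 = background). Not the paper's code.

def line_center(row):
    """Return the centroid column of line pixels, or None if no line is seen."""
    total = sum(row)
    if total == 0:
        return None
    return sum(i * v for i, v in enumerate(row)) / total

def steering_error(row):
    """Positive error means the line lies right of the image center."""
    center = line_center(row)
    if center is None:
        return 0.0  # no line detected: hold course
    return center - (len(row) - 1) / 2
```

In a full system this error would feed a controller much like the PI loop sketched earlier, closing the loop between the camera and the robot's behavior.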
This list is automatically generated from the titles and abstracts of the papers in this site.