Embedded Computer Vision System Applied to a Four-Legged Line Follower Robot
- URL: http://arxiv.org/abs/2101.04804v1
- Date: Tue, 12 Jan 2021 23:52:53 GMT
- Title: Embedded Computer Vision System Applied to a Four-Legged Line Follower Robot
- Authors: Beatriz Arruda Asfora
- Abstract summary: This project aims to drive a robot using an automated computer vision embedded system, connecting the robot's vision to its behavior.
The robot is applied to a typical mobile-robot problem: line following.
Deciding where to move next is based on the line center of the path and is fully automated.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Robotics can be defined as the connection of perception to action. Taking
this further, this project aims to drive a robot using an automated embedded
computer vision system, connecting the robot's vision to its behavior. To
implement a color recognition system on the robot, open-source tools are
chosen, such as the Processing language, the Android system, the Arduino
platform and the Pixy camera. The constraints are clear: simplicity,
replicability and financial viability. In order to integrate Robotics, Computer
Vision and Image Processing, the robot is applied to a typical mobile-robot
problem: line following. The problem of distinguishing the path from the
background is analyzed through different approaches: the popular Otsu's Method,
thresholding based on color combinations found through experimentation, and
color tracking via hue and saturation (the first sketch below illustrates these
segmentation approaches). Deciding where to move next is based on the line
center of the path and is fully automated (the second sketch below illustrates
this steering logic). Using a four-legged robot as the platform and a camera as
its only sensor, the robot is capable of successfully following a line. From
capturing the image to moving the robot, it is evident how integrative Robotics
can be: this problem alone involves knowledge of Mechanical Engineering,
Electronics, Control Systems and Programming. Everything related to this work
was documented and made available on an open-source online page, so it can be
useful for learning and experimenting with robotics.
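Example: segmenting the line. The two segmentation approaches named in the
abstract, Otsu's Method and hue/saturation color tracking, reduce to a few
lines of code. This is a minimal sketch in Python with OpenCV rather than the
paper's Processing/Pixy toolchain; the HSV bounds are illustrative assumptions,
not the paper's calibrated values.

```python
import cv2
import numpy as np

def segment_otsu(bgr):
    """Binarize a frame with Otsu's Method on the grayscale image."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Otsu picks the global threshold that minimizes intra-class variance.
    # THRESH_BINARY_INV assumes a dark line on a lighter background.
    _, mask = cv2.threshold(gray, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return mask

def segment_hue_saturation(bgr, lo=(20, 80, 50), hi=(35, 255, 255)):
    """Track a colored line by a hue/saturation range (hypothetical bounds
    for yellow tape; tune lo/hi to the actual line color)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
```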
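Example: steering from the line center. The decision rule described in the
abstract, move toward the center of the detected line, might look like the
following sketch. The command names and deadband are hypothetical placeholders
for whatever the four-legged gait controller accepts.

```python
import cv2

def line_center_x(mask):
    """Return the x-coordinate of the line's centroid, or None if no line."""
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None  # no line pixels detected
    return m["m10"] / m["m00"]

def decide(mask, deadband=0.1):
    """Map the centroid's horizontal offset to a discrete movement command."""
    cx = line_center_x(mask)
    if cx is None:
        return "STOP"  # line lost
    half_width = mask.shape[1] / 2
    offset = (cx - half_width) / half_width  # -1 (far left) .. +1 (far right)
    if offset < -deadband:
        return "LEFT"
    if offset > deadband:
        return "RIGHT"
    return "FORWARD"
```

For instance, decide(segment_hue_saturation(frame)) returns "LEFT" whenever the
line centroid falls in roughly the left 45% of the image, i.e. beyond the
deadband around the image center.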
Related papers
- Unifying 3D Representation and Control of Diverse Robots with a Single Camera [48.279199537720714]
We introduce Neural Jacobian Fields, an architecture that autonomously learns to model and control robots from vision alone.
Our approach achieves accurate closed-loop control and recovers the causal dynamic structure of each robot.
arXiv Detail & Related papers (2024-07-11T17:55:49Z)
- RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation [77.41969287400977]
This paper presents RobotScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z)
- Giving Robots a Hand: Learning Generalizable Manipulation with Eye-in-Hand Human Video Demonstrations [66.47064743686953]
Eye-in-hand cameras have shown promise in enabling greater sample efficiency and generalization in vision-based robotic manipulation.
Videos of humans performing tasks, on the other hand, are much cheaper to collect since they eliminate the need for expertise in robotic teleoperation.
In this work, we augment narrow robotic imitation datasets with broad unlabeled human video demonstrations to greatly enhance the generalization of eye-in-hand visuomotor policies.
arXiv Detail & Related papers (2023-07-12T07:04:53Z)
- Open-World Object Manipulation using Pre-trained Vision-Language Models [72.87306011500084]
For robots to follow instructions from people, they must be able to connect the rich semantic information in human vocabulary to their sensory observations and actions.
We develop a simple approach, MOO (Manipulation of Open-World Objects), which leverages a pre-trained vision-language model to extract object-identifying information.
In a variety of experiments on a real mobile manipulator, we find that MOO generalizes zero-shot to a wide range of novel object categories and environments.
arXiv Detail & Related papers (2023-03-02T01:55:10Z)
- Image-based Pose Estimation and Shape Reconstruction for Robot Manipulators and Soft, Continuum Robots via Differentiable Rendering [20.62295718847247]
State estimation from measured data is crucial for robotic applications, as autonomous systems rely on sensors to capture their motion and localize themselves in the 3D world.
In this work, we achieve image-based robot pose estimation and shape reconstruction from camera images.
We demonstrate that our method of using geometrical shape primitives can achieve high accuracy in shape reconstruction for a soft continuum robot and pose estimation for a robot manipulator.
arXiv Detail & Related papers (2023-02-27T18:51:29Z)
- Intelligent Motion Planning for a Cost-effective Object Follower Mobile Robotic System with Obstacle Avoidance [0.2062593640149623]
We propose a robotic system that uses robot vision and deep learning to compute the required linear and angular velocities.
The proposed methodology accurately detects the position of the uniquely coloured object under any lighting conditions.
arXiv Detail & Related papers (2021-09-06T19:19:47Z)
- Know Thyself: Transferable Visuomotor Control Through Robot-Awareness [22.405839096833937]
Training visuomotor robot controllers from scratch on a new robot typically requires generating large amounts of robot-specific data.
We propose a "robot-aware" solution paradigm that exploits readily available robot "self-knowledge"
Our experiments on tabletop manipulation tasks in simulation and on real robots demonstrate that these plug-in improvements dramatically boost the transferability of visuomotor controllers.
arXiv Detail & Related papers (2021-07-19T17:56:04Z)
- Single-view robot pose and joint angle estimation via render & compare [40.05546237998603]
We introduce RoboPose, a method to estimate the joint angles and the 6D camera-to-robot pose of a known articulated robot from a single RGB image.
Solving this problem is important for granting mobile and itinerant autonomous systems the ability to interact with other robots.
arXiv Detail & Related papers (2021-04-19T14:48:29Z)
- Projection Mapping Implementation: Enabling Direct Externalization of Perception Results and Action Intent to Improve Robot Explainability [62.03014078810652]
Existing research on non-verbal cues, e.g., eye gaze or arm movement, may not accurately present a robot's internal states.
Projecting the states directly onto a robot's operating environment has the advantages of being direct, accurate, and more salient.
arXiv Detail & Related papers (2020-10-05T18:16:20Z)
- Morphology-Agnostic Visual Robotic Control [76.44045983428701]
MAVRIC is an approach that works with minimal prior knowledge of the robot's morphology.
We demonstrate our method on visually-guided 3D point reaching, trajectory following, and robot-to-robot imitation.
arXiv Detail & Related papers (2019-12-31T15:45:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.