Autonomous bot with ML-based reactive navigation for indoor environment
- URL: http://arxiv.org/abs/2111.12542v1
- Date: Wed, 24 Nov 2021 15:24:39 GMT
- Title: Autonomous bot with ML-based reactive navigation for indoor environment
- Authors: Yash Srivastava, Saumya Singh, S.P. Syed Ibrahim
- Abstract summary: This paper aims to develop a robot that balances cost and accuracy by using machine learning to predict the best obstacle avoidance move.
The underlying hardware consists of an Arduino Uno and a Raspberry Pi 3B.
The system is mounted on a 2-WD robot chassis and tested in a cluttered indoor setting, with impressive results.
- Score: 0.7519872646378835
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Local or reactive navigation is essential for autonomous mobile robots which
operate in an indoor environment. Techniques such as SLAM and computer vision
require significant computational power, which increases cost. Conversely,
rudimentary methods make the robot susceptible to inconsistent behavior. This
paper aims to develop a robot that balances cost and accuracy by using machine
learning to predict the best obstacle avoidance move based on distance inputs
from four ultrasonic sensors that are strategically mounted on the front,
front-left, front-right, and back of the robot. The underlying hardware
consists of an Arduino Uno and a Raspberry Pi 3B. The machine learning model is
first trained on the data collected by the robot. Then the Arduino continuously
polls the sensors and calculates the distance values; when avoidance is
critically needed, the Arduino performs a suitable maneuver directly. In other
scenarios, sensor data is sent to the Raspberry Pi over a USB connection, the
machine learning model generates the best navigation move, and the move is sent
back to the Arduino to drive the motors accordingly. The system is mounted on a
2-WD robot chassis and tested in a cluttered indoor setting, with impressive
results.
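Illustrative sketch (not from the paper): the abstract describes a split pipeline in which the Arduino handles time-critical avoidance locally, while the Raspberry Pi runs the trained model for all other cases. The Python sketch below shows, under stated assumptions, what the Raspberry Pi side of that loop could look like: read the four ultrasonic distances from the Arduino over USB serial, predict a move with a pre-trained classifier, and write the move back. The serial port name, message format, move labels, and model file (model.pkl) are hypothetical and not taken from the paper.

# Hypothetical sketch of the Raspberry Pi side of the pipeline described above.
# Port name, message format, and move labels are assumptions, not from the paper.
import joblib          # loads a trained scikit-learn classifier
import serial          # pyserial, for the USB link to the Arduino

MODEL_PATH = "model.pkl"        # classifier trained on logged sensor data (assumed)
SERIAL_PORT = "/dev/ttyUSB0"    # Arduino USB serial device (assumed)

def main():
    model = joblib.load(MODEL_PATH)
    link = serial.Serial(SERIAL_PORT, baudrate=9600, timeout=1)

    while True:
        line = link.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue  # timeout or empty read, poll again

        try:
            # Expected message: "front,front_left,front_right,back" distances in cm
            distances = [float(v) for v in line.split(",")]
        except ValueError:
            continue  # malformed message, skip it

        if len(distances) != 4:
            continue

        # Predict the best obstacle-avoidance move from the four distances
        move = model.predict([distances])[0]
        link.write((str(move) + "\n").encode("ascii"))

if __name__ == "__main__":
    main()

A model for this sketch could be trained offline on the (distances, move) pairs logged by the robot, using any lightweight classifier (e.g. a decision tree) serialized with joblib; the paper does not specify the exact model or message format.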
Related papers
- Taccel: Scaling Up Vision-based Tactile Robotics via High-performance GPU Simulation [50.34179054785646]
We present Taccel, a high-performance simulation platform that integrates IPC and ABD to model robots, tactile sensors, and objects with both accuracy and unprecedented speed.
Taccel provides precise physics simulation and realistic tactile signals while supporting flexible robot-sensor configurations through user-friendly APIs.
These capabilities position Taccel as a powerful tool for scaling up tactile robotics research and development.
arXiv Detail & Related papers (2025-04-17T12:57:11Z)
- RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation [77.41969287400977]
This paper presents RobotScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z)
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
- Grasping Student: semi-supervised learning for robotic manipulation [0.7282230325785884]
We design a semi-supervised grasping system that takes advantage of images of products to be picked, which are collected without any interactions with the robot.
In the regime of a small number of robot training samples, taking advantage of the unlabeled data allows us to match the performance of a dataset ten times larger.
arXiv Detail & Related papers (2023-03-08T09:03:11Z)
- ClipBot: an educational, physically impaired robot that learns to walk via genetic algorithm optimization [0.0]
We propose ClipBot, a low-cost, do-it-yourself robot whose skeleton is made of two paper clips.
An Arduino nano microcontroller actuates two servo motors that move the paper clips.
Students at the high school level were asked to implement a genetic algorithm to optimize the movements of the robot.
arXiv Detail & Related papers (2022-10-26T13:31:43Z)
- See What the Robot Can't See: Learning Cooperative Perception for Visual Navigation [11.943412856714154]
We train the sensors to encode and communicate relevant viewpoint information to the mobile robot.
We overcome the challenge of enabling all the sensors to predict the direction along the shortest path to the target.
Our results show that by using communication between the sensors and the robot, we achieve up to 2.0x improvement in SPL.
arXiv Detail & Related papers (2022-08-01T11:37:01Z)
- CNN-based Omnidirectional Object Detection for HermesBot Autonomous Delivery Robot with Preliminary Frame Classification [53.56290185900837]
We propose an algorithm for optimizing a neural network for object detection using preliminary binary frame classification.
An autonomous mobile robot with 6 rolling-shutter cameras on the perimeter providing a 360-degree field of view was used as the experimental setup.
arXiv Detail & Related papers (2021-10-22T15:05:37Z)
- Domain and Modality Gaps for LiDAR-based Person Detection on Mobile Robots [91.01747068273666]
This paper studies existing LiDAR-based person detectors with a particular focus on mobile robot scenarios.
Experiments revolve around the domain gap between driving and mobile robot scenarios, as well as the modality gap between 3D and 2D LiDAR sensors.
Results provide practical insights into LiDAR-based person detection and facilitate informed decisions for relevant mobile robot designs and applications.
arXiv Detail & Related papers (2021-06-21T16:35:49Z)
- Task-relevant Representation Learning for Networked Robotic Perception [74.0215744125845]
This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model's ultimate objective.
Our algorithm aggressively compresses robotic sensory data by up to 11x more than competing methods.
arXiv Detail & Related papers (2020-11-06T07:39:08Z)
- Projection Mapping Implementation: Enabling Direct Externalization of Perception Results and Action Intent to Improve Robot Explainability [62.03014078810652]
Existing research on non-verbal cues, e.g., eye gaze or arm movement, may not accurately present a robot's internal states.
Projecting the states directly onto a robot's operating environment has the advantages of being direct, accurate, and more salient.
arXiv Detail & Related papers (2020-10-05T18:16:20Z)
- Deep Reinforcement learning for real autonomous mobile robot navigation in indoor environments [0.0]
We present our proof of concept for autonomous self-learning robot navigation in an unknown environment for a real robot without a map or planner.
The input for the robot is only the fused data from a 2D laser scanner and a RGB-D camera as well as the orientation to the goal.
The output actions of an Asynchronous Advantage Actor-Critic network (GA3C) are the linear and angular velocities for the robot.
arXiv Detail & Related papers (2020-05-28T09:15:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.