Autonomous Intruder Detection Using a ROS-Based Multi-Robot System
Equipped with 2D-LiDAR Sensors
- URL: http://arxiv.org/abs/2011.03838v1
- Date: Sat, 7 Nov 2020 19:49:07 GMT
- Authors: Mashnoon Islam, Touhid Ahmed, Abu Tammam Bin Nuruddin, Mashuda Islam,
Shahnewaz Siddique
- Abstract summary: This paper proposes a multi-robot system for intruder detection in a single-range-sensor-per-robot scenario with centralized processing of detections from all robots by our central bot MIDNet.
This work is aimed at providing an autonomous multi-robot security solution for a warehouse in the absence of human personnel.
- Score: 0.5512295869673147
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The application of autonomous mobile robots in robotic security platforms is
becoming a promising field of innovation due to their adaptive capability of
responding to potential disturbances perceived through a wide range of sensors.
Researchers have proposed systems that either focus on utilizing a single
mobile robot or a system of cooperative multiple robots. However, very few of
the proposed works, particularly in the field of multi-robot systems, are
completely dependent on LiDAR sensors for achieving various tasks. This is
essential when other sensors on a robot fail to provide peak performance in
particular conditions, such as a camera operating in the absence of light. This
paper proposes a multi-robot system that is developed using ROS (Robot
Operating System) for intruder detection in a single-range-sensor-per-robot
scenario with centralized processing of detections from all robots by our
central bot MIDNet (Multiple Intruder Detection Network). This work is aimed at
providing an autonomous multi-robot security solution for a warehouse in the
absence of human personnel.
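The centralized pipeline the abstract describes — one range sensor per robot, with a central node fusing all robots' detections — can be sketched roughly as follows. This is an illustrative sketch, not the authors' MIDNet code; the robot pose format, the (range, bearing) detection format, and the merge threshold are all assumptions made for the example.

```python
import math

def to_global(pose, detection):
    """Transform a (range, bearing) LiDAR detection from a robot's
    local frame into the global frame, given the robot pose (x, y, theta)."""
    x, y, theta = pose
    r, bearing = detection
    return (x + r * math.cos(theta + bearing),
            y + r * math.sin(theta + bearing))

def fuse_detections(robot_poses, robot_detections, merge_dist=0.5):
    """Centralized fusion (the MIDNet role in this sketch): collect
    detections from all robots, map them into the global frame, and
    merge points that likely refer to the same intruder."""
    points = [to_global(pose, d)
              for pose, dets in zip(robot_poses, robot_detections)
              for d in dets]
    intruders = []
    for p in points:
        for i, q in enumerate(intruders):
            if math.dist(p, q) < merge_dist:
                # average duplicate observations of the same target
                intruders[i] = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
                break
        else:
            intruders.append(p)
    return intruders
```

In this sketch, two robots observing the same intruder from different positions report a single fused global position rather than two separate detections.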
Related papers
- EMOS: Embodiment-aware Heterogeneous Multi-robot Operating System with LLM Agents [33.77674812074215]
We introduce a novel multi-agent framework designed to enable effective collaboration among heterogeneous robots.
We propose a self-prompted approach, where agents comprehend robot URDF files and call robot kinematics tools to generate descriptions of their physics capabilities.
The Habitat-MAS benchmark is designed to assess how a multi-agent framework handles tasks that require embodiment-aware reasoning.
arXiv Detail & Related papers (2024-10-30T03:20:01Z) - Unifying 3D Representation and Control of Diverse Robots with a Single Camera [48.279199537720714]
We introduce Neural Jacobian Fields, an architecture that autonomously learns to model and control robots from vision alone.
Our approach achieves accurate closed-loop control and recovers the causal dynamic structure of each robot.
arXiv Detail & Related papers (2024-07-11T17:55:49Z) - RoboScript: Code Generation for Free-Form Manipulation Tasks across Real
and Simulation [77.41969287400977]
This paper presents RobotScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z) - AutoRT: Embodied Foundation Models for Large Scale Orchestration of Robotic Agents [109.3804962220498]
AutoRT is a system to scale up the deployment of operational robots in completely unseen scenarios with minimal human supervision.
We demonstrate AutoRT proposing instructions to over 20 robots across multiple buildings and collecting 77k real robot episodes via both teleoperation and autonomous robot policies.
We experimentally show that such "in-the-wild" data collected by AutoRT is significantly more diverse, and that AutoRT's use of LLMs allows for instruction-following data-collection robots that can align to human preferences.
arXiv Detail & Related papers (2024-01-23T18:45:54Z) - Regularized Deep Signed Distance Fields for Reactive Motion Generation [30.792481441975585]
Distance-based constraints are fundamental for enabling robots to plan their actions and act safely.
We propose Regularized Deep Signed Distance Fields (ReDSDF), a single neural implicit function that can compute smooth distance fields at any scale.
We demonstrate the effectiveness of our approach in representative simulated tasks for whole-body control (WBC) and safe Human-Robot Interaction (HRI) in shared workspaces.
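The core idea here — a smooth distance field queried as a safety constraint for reactive control — can be illustrated with a plain analytic signed distance function. ReDSDF itself learns a neural implicit function; this sketch substitutes a hand-written sphere SDF, and the margin and gain values are arbitrary assumptions.

```python
import math

def sphere_sdf(point, center, radius):
    """Signed distance to a sphere: negative inside, zero on the
    surface, positive outside."""
    return math.dist(point, center) - radius

def repulsive_velocity(point, center, radius, margin=0.2, gain=1.0):
    """Reactive use of a distance field: command a corrective velocity
    that pushes the robot away once it enters the safety margin."""
    d = sphere_sdf(point, center, radius)
    if d >= margin:
        return (0.0, 0.0, 0.0)  # far enough away: no correction
    # gradient of the sphere SDF points radially outward from the center
    direction = tuple((p - c) / (d + radius) for p, c in zip(point, center))
    scale = gain * (margin - d)
    return tuple(scale * u for u in direction)
```

A learned field like ReDSDF would replace `sphere_sdf` with a network query, keeping the same query-distance-then-react control pattern.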
arXiv Detail & Related papers (2022-03-09T14:21:32Z) - Multi-Robot Collaborative Perception with Graph Neural Networks [6.383576104583731]
We propose a general-purpose Graph Neural Network (GNN) with the main goal of increasing single-robot perception accuracy in multi-robot perception tasks.
We show that the proposed framework can address multi-view visual perception problems such as monocular depth estimation and semantic segmentation.
arXiv Detail & Related papers (2022-01-05T18:47:07Z) - CNN-based Omnidirectional Object Detection for HermesBot Autonomous
Delivery Robot with Preliminary Frame Classification [53.56290185900837]
We propose an algorithm for optimizing a neural network for object detection using preliminary binary frame classification.
An autonomous mobile robot with 6 rolling-shutter cameras on the perimeter providing a 360-degree field of view was used as the experimental setup.
arXiv Detail & Related papers (2021-10-22T15:05:37Z) - AuraSense: Robot Collision Avoidance by Full Surface Proximity Detection [3.9770080498150224]
AuraSense is the first system to realize no-dead-spot proximity sensing for robot arms.
It requires only a single pair of piezoelectric transducers, and can easily be applied to off-the-shelf robots.
arXiv Detail & Related papers (2021-08-10T18:37:54Z) - SABER: Data-Driven Motion Planner for Autonomously Navigating
Heterogeneous Robots [112.2491765424719]
We present an end-to-end online motion planning framework that uses a data-driven approach to navigate a heterogeneous robot team towards a global goal.
We use stochastic model predictive control (SMPC) to calculate control inputs that satisfy robot dynamics, and consider uncertainty during obstacle avoidance with chance constraints.
Recurrent neural networks are used to provide a quick estimate of future state uncertainty considered in the SMPC finite-time-horizon solution.
A Deep Q-learning agent is employed to serve as a high-level path planner, providing the SMPC with target positions that move the robots towards a desired global goal.
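The hierarchy described above — a learned high-level planner handing target positions to a low-level tracking controller — can be sketched with simple stand-ins. This is not SABER's implementation: the geometric waypoint picker stands in for the Deep Q-learning agent, and the proportional step stands in for the SMPC tracker; the step size and gain are assumptions.

```python
def high_level_waypoint(robot_pos, goal, step=1.0):
    """Stand-in for the Deep Q-learning planner: pick the next target
    position one step toward the global goal. (In SABER this choice is
    learned, not geometric.)"""
    dx, dy = goal[0] - robot_pos[0], goal[1] - robot_pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= step:
        return goal
    return (robot_pos[0] + step * dx / dist, robot_pos[1] + step * dy / dist)

def low_level_step(robot_pos, waypoint, gain=0.5):
    """Stand-in for the SMPC tracker: move a fraction of the way to the
    waypoint, mimicking one step of feedback control."""
    return (robot_pos[0] + gain * (waypoint[0] - robot_pos[0]),
            robot_pos[1] + gain * (waypoint[1] - robot_pos[1]))

def navigate(start, goal, steps=50):
    """Run the two-level loop: replan a waypoint, then track it."""
    pos = start
    for _ in range(steps):
        pos = low_level_step(pos, high_level_waypoint(pos, goal))
    return pos
```

The point of the split is that the high-level layer only reasons about where to go next, while the low-level layer absorbs dynamics and uncertainty at each step.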
arXiv Detail & Related papers (2021-08-03T02:56:21Z) - Domain and Modality Gaps for LiDAR-based Person Detection on Mobile
Robots [91.01747068273666]
This paper studies existing LiDAR-based person detectors with a particular focus on mobile robot scenarios.
Experiments revolve around the domain gap between driving and mobile robot scenarios, as well as the modality gap between 3D and 2D LiDAR sensors.
Results provide practical insights into LiDAR-based person detection and facilitate informed decisions for relevant mobile robot designs and applications.
arXiv Detail & Related papers (2021-06-21T16:35:49Z) - DEVI: Open-source Human-Robot Interface for Interactive Receptionist
Systems [0.8972186395640678]
"DEVI" is an open-source robot receptionist intelligence core.
This paper presents details on a prototype implementation of a physical robot using DEVI.
Experiments conducted with DEVI show the effectiveness of the proposed system.
arXiv Detail & Related papers (2021-01-02T17:08:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.