Neuro-LIFT: A Neuromorphic, LLM-based Interactive Framework for Autonomous Drone FlighT at the Edge
- URL: http://arxiv.org/abs/2501.19259v1
- Date: Fri, 31 Jan 2025 16:17:03 GMT
- Title: Neuro-LIFT: A Neuromorphic, LLM-based Interactive Framework for Autonomous Drone FlighT at the Edge
- Authors: Amogh Joshi, Sourav Sanyal, Kaushik Roy
- Abstract summary: We present Neuro-LIFT, a real-time neuromorphic navigation framework implemented on a Parrot Bebop2 quadrotor.
Our framework translates human speech into high-level planning commands, which are then autonomously executed using event-based neuromorphic vision and physics-driven planning.
Our framework demonstrates its capabilities by navigating a dynamic environment, avoiding obstacles, and adapting to human instructions in real time.
- Score: 9.461346539158475
- License:
- Abstract: The integration of human-intuitive interactions into autonomous systems has been limited. Traditional Natural Language Processing (NLP) systems struggle with context and intent understanding, severely restricting human-robot interaction. Recent advancements in Large Language Models (LLMs) have transformed this dynamic, allowing for intuitive and high-level communication through speech and text, and bridging the gap between human commands and robotic actions. Additionally, autonomous navigation has emerged as a central focus in robotics research, with artificial intelligence (AI) increasingly being leveraged to enhance these systems. However, existing AI-based navigation algorithms face significant challenges in latency-critical tasks where rapid decision-making is critical. Traditional frame-based vision systems, while effective for high-level decision-making, suffer from high energy consumption and latency, limiting their applicability in real-time scenarios. Neuromorphic vision systems, combining event-based cameras and spiking neural networks (SNNs), offer a promising alternative by enabling energy-efficient, low-latency navigation. Despite their potential, real-world implementations of these systems, particularly on physical platforms such as drones, remain scarce. In this work, we present Neuro-LIFT, a real-time neuromorphic navigation framework implemented on a Parrot Bebop2 quadrotor. Leveraging an LLM for natural language processing, Neuro-LIFT translates human speech into high-level planning commands which are then autonomously executed using event-based neuromorphic vision and physics-driven planning. Our framework demonstrates its capabilities in navigating in a dynamic environment, avoiding obstacles, and adapting to human instructions in real-time.
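As an illustration of the pipeline the abstract describes, the sketch below mocks up its three stages: an LLM turns a spoken instruction into a structured planning command, a spiking network consumes event-camera output to estimate obstacles, and a physics-driven planner converts both into a velocity command for the quadrotor. All class names, message formats, and gains here are illustrative assumptions; the paper's actual interfaces are not reproduced.

```python
# Hypothetical sketch of a Neuro-LIFT-style pipeline; names and formats are
# illustrative assumptions, not the authors' implementation.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PlanCommand:
    goal: Tuple[float, float, float]   # target position (x, y, z) in metres
    max_speed: float                   # speed cap requested by the user

@dataclass
class Obstacle:
    position: Tuple[float, float, float]
    velocity: Tuple[float, float, float]

def llm_parse(utterance: str) -> PlanCommand:
    """Stand-in for the LLM step: map free-form speech to a structured
    high-level planning command (a real system would prompt an LLM here)."""
    # e.g. "fly slowly to the far gate" -> goal position plus a speed limit
    return PlanCommand(goal=(5.0, 0.0, 1.5), max_speed=0.5)

def snn_detect_obstacles(events: List[Tuple[int, int, float, int]]) -> List[Obstacle]:
    """Stand-in for event-based neuromorphic perception: a spiking neural
    network consuming (x, y, timestamp, polarity) events from an event camera."""
    return []  # no obstacles detected in this stub

def physics_plan(cmd: PlanCommand, obstacles: List[Obstacle],
                 state: Tuple[float, float, float]) -> Tuple[float, float, float]:
    """Toy physics-driven planner: steer toward the goal, repelled by obstacles."""
    vx, vy, vz = (cmd.goal[i] - state[i] for i in range(3))
    for ob in obstacles:
        # simple repulsive term pushing away from each obstacle (illustrative only)
        vx += 0.5 * (state[0] - ob.position[0])
        vy += 0.5 * (state[1] - ob.position[1])
    norm = max((vx**2 + vy**2 + vz**2) ** 0.5, 1e-6)
    scale = min(cmd.max_speed / norm, 1.0)
    return (vx * scale, vy * scale, vz * scale)

if __name__ == "__main__":
    cmd = llm_parse("fly slowly to the far gate")
    obstacles = snn_detect_obstacles(events=[])
    velocity = physics_plan(cmd, obstacles, state=(0.0, 0.0, 1.0))
    print("velocity command sent to the quadrotor:", velocity)
```

The stubs stand in for the LLM call and the SNN inference; on the real platform these would run against a speech front end and an event-camera stream on the drone's onboard hardware.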
Related papers
- Towards Probabilistic Inference of Human Motor Intentions by Assistive Mobile Robots Controlled via a Brain-Computer Interface [0.0]
A Brain-Computer Interface (BCI) is a highly user-friendly control option that does not require physical movement.
Current BCI systems can understand whether users want to accelerate or decelerate, but they implement these changes in discrete speed steps.
The authors aim to address this limitation by redesigning the perception-action cycle in a BCI-controlled robotic system.
arXiv Detail & Related papers (2025-01-09T23:18:38Z)
- Artificial General Intelligence (AGI)-Native Wireless Systems: A Journey Beyond 6G [58.440115433585824]
Building future wireless systems that support services like digital twins (DTs) is difficult to achieve through advances in conventional technologies such as meta-surfaces.
While artificial intelligence (AI)-native networks promise to overcome some limitations of wireless technologies, developments still rely on AI tools like neural networks.
This paper revisits the concept of AI-native wireless systems, equipping them with the common sense necessary to transform them into artificial general intelligence (AGI)-native systems.
arXiv Detail & Related papers (2024-04-29T04:51:05Z)
- Embodied Neuromorphic Artificial Intelligence for Robotics: Perspectives, Challenges, and Research Development Stack [7.253801704452419]
Recent advances in neuromorphic computing with Spiking Neural Networks (SNNs) have demonstrated the potential to enable embodied intelligence for robotics.
This paper discusses, from the authors' perspective, how embodied neuromorphic AI can be enabled for robotic systems.
arXiv Detail & Related papers (2024-04-04T09:52:22Z)
- LPAC: Learnable Perception-Action-Communication Loops with Applications to Coverage Control [80.86089324742024]
We propose a learnable Perception-Action-Communication (LPAC) architecture for the problem.
A convolutional neural network (CNN) processes localized perception, while a graph neural network (GNN) facilitates inter-robot communication.
Evaluations show that the LPAC models outperform standard decentralized and centralized coverage control algorithms.
arXiv Detail & Related papers (2024-01-10T00:08:00Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Bio-inspired spike-based Hippocampus and Posterior Parietal Cortex models for robot navigation and environment pseudo-mapping [52.77024349608834]
This work proposes a spike-based robotic navigation and environment pseudo-mapping system.
The hippocampus model maintains a representation of the environment state map, while the posterior parietal cortex (PPC) model handles local decision-making.
This is the first implementation of an environment pseudo-mapping system with dynamic learning based on a bio-inspired hippocampal memory.
arXiv Detail & Related papers (2023-05-22T10:20:34Z)
- Learning Deep Sensorimotor Policies for Vision-based Autonomous Drone Racing [52.50284630866713]
Existing systems often require hand-engineered components for state estimation, planning, and control.
This paper tackles the vision-based autonomous-drone-racing problem by learning deep sensorimotor policies.
arXiv Detail & Related papers (2022-10-26T19:03:17Z)
- A toolbox for neuromorphic sensing in robotics [4.157415305926584]
We introduce a ROS (Robot Operating System) toolbox to encode and decode input signals coming from any type of sensor available on a robot.
This initiative is meant to stimulate and facilitate robotic integration of neuromorphic AI (an illustrative spike-encoding sketch appears after this list).
arXiv Detail & Related papers (2021-03-03T23:22:05Z)
- Neural Dynamic Policies for End-to-End Sensorimotor Learning [51.24542903398335]
The current dominant paradigm in sensorimotor control, whether imitation or reinforcement learning, is to train policies directly in raw action spaces.
We propose Neural Dynamic Policies (NDPs) that make predictions in trajectory distribution space.
NDPs outperform the prior state-of-the-art in terms of either efficiency or performance across several robotic control tasks.
arXiv Detail & Related papers (2020-12-04T18:59:32Z)
- A Spiking Neural Network Emulating the Structure of the Oculomotor System Requires No Learning to Control a Biomimetic Robotic Head [0.0]
A neuromorphic oculomotor controller is placed at the heart of our in-house biomimetic robotic head prototype.
The controller is unique in the sense that all data are encoded and processed by a spiking neural network (SNN).
We report the robot's target tracking ability, demonstrate that its eye kinematics are similar to those reported in human eye studies, and show that biologically constrained learning can be used to further refine its performance.
arXiv Detail & Related papers (2020-02-18T13:03:06Z)
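To make the neuromorphic sensing entry above more concrete, here is a minimal send-on-delta spike encoder and decoder for a generic scalar sensor channel. It is an illustrative sketch under stated assumptions: the function names and the delta-threshold scheme are not the ROS toolbox's actual interface.

```python
# Minimal, illustrative spike encoding/decoding for a generic sensor channel,
# in the spirit of the neuromorphic sensing toolbox entry above.
# NOT the toolbox's API; names and the threshold scheme are assumptions.
from typing import List, Tuple

Spike = Tuple[float, int]  # (timestamp in seconds, polarity: +1 or -1)

def delta_encode(samples: List[Tuple[float, float]], threshold: float) -> List[Spike]:
    """Send-on-delta encoding: emit a signed spike whenever the signal moves
    by more than `threshold` from the last encoded value."""
    spikes: List[Spike] = []
    if not samples:
        return spikes
    reference = samples[0][1]
    for t, value in samples:
        while value - reference >= threshold:
            spikes.append((t, +1))
            reference += threshold
        while reference - value >= threshold:
            spikes.append((t, -1))
            reference -= threshold
    return spikes

def delta_decode(spikes: List[Spike], initial: float, threshold: float) -> List[Tuple[float, float]]:
    """Reconstruct a piecewise-constant estimate of the signal from its spikes."""
    value = initial
    trace: List[Tuple[float, float]] = []
    for t, polarity in spikes:
        value += polarity * threshold
        trace.append((t, value))
    return trace

if __name__ == "__main__":
    # A slowly rising, then falling, range reading (time in s, distance in m).
    readings = [(0.0, 1.00), (0.1, 1.07), (0.2, 1.16), (0.3, 1.12), (0.4, 0.95)]
    spikes = delta_encode(readings, threshold=0.05)
    print(spikes)                                       # sparse, event-like representation
    print(delta_decode(spikes, initial=1.00, threshold=0.05))
```

The same signed-spike idea underlies event cameras, where per-pixel brightness changes beyond a threshold emit polarity events that downstream SNNs can consume directly.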
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.