Olfactory Inertial Odometry: Methodology for Effective Robot Navigation by Scent
- URL: http://arxiv.org/abs/2506.02373v1
- Date: Tue, 03 Jun 2025 02:21:12 GMT
- Title: Olfactory Inertial Odometry: Methodology for Effective Robot Navigation by Scent
- Authors: Kordel K. France, Ovidiu Daescu
- Abstract summary: Olfactory navigation is one of the most primitive mechanisms of exploration used by organisms. This work defines olfactory inertial odometry (OIO) to enable navigation by scent, analogous to visual inertial odometry (VIO). We demonstrate OIO with three different odour localization algorithms on a real 5-DoF robot arm in an odour-tracking scenario that resembles real applications in agriculture and food quality control.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Olfactory navigation is one of the most primitive mechanisms of exploration used by organisms. Navigation by machine olfaction (artificial smell) is a very difficult task to both simulate and solve. With this work, we define olfactory inertial odometry (OIO), a framework for using inertial kinematics and fast-sampling olfaction sensors to enable navigation by scent, analogous to visual inertial odometry (VIO). We establish how principles from SLAM and VIO can be extrapolated to olfaction to enable real-world robotic tasks. We demonstrate OIO with three different odour localization algorithms on a real 5-DoF robot arm over an odour-tracking scenario that resembles real applications in agriculture and food quality control. Our results indicate success in establishing a baseline framework for OIO from which other research in olfactory navigation can build, and we note performance enhancements that can be made to address more complex tasks in the future.
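The abstract describes fusing inertially estimated motion with fast-sampling odour readings. As a loose illustration of that idea only (not the authors' implementation), the following sketch chases an odour gradient by comparing successive concentration samples along a dead-reckoned displacement; the Gaussian plume model, step size, and all names below are assumptions made for this sketch.

```python
import numpy as np

def concentration(pos, source=np.array([2.0, 1.0])):
    """Toy Gaussian odour plume centred at `source` (assumed model)."""
    return np.exp(-np.sum((pos - source) ** 2))

def oio_step(pose, prev_pose, prev_c, step=0.1):
    """One gradient-chasing update: compare the current gas reading with
    the previous one along the inertially estimated displacement."""
    c = concentration(pose)
    disp = pose - prev_pose  # displacement from inertial odometry
    norm = np.linalg.norm(disp)
    if norm < 1e-9 or c <= prev_c:
        # Reading dropped: cast in a perpendicular direction.
        direction = np.array([-disp[1], disp[0]]) if norm > 1e-9 else np.array([1.0, 0.0])
    else:
        direction = disp  # reading improved: keep heading
    direction = direction / max(np.linalg.norm(direction), 1e-9)
    return pose + step * direction, c

# Run the agent from the origin; it should settle near the plume source.
pose, prev = np.array([0.0, 0.0]), np.array([-0.1, 0.0])
c = concentration(prev)
for _ in range(200):
    new_pose, c = oio_step(pose, prev, c)
    prev, pose = pose, new_pose

print(np.round(pose, 2))  # ends within a step or two of the source at (2, 1)
```

In a real OIO system the displacement would come from integrated IMU data rather than ground-truth poses, and the casting behaviour would have to contend with sensor drift and turbulent plumes, which is precisely the gap the paper's calibration and fusion framework addresses.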
Related papers
- LOVON: Legged Open-Vocabulary Object Navigator [9.600429521100041]
We propose a novel framework that integrates large language models for hierarchical task planning with open-vocabulary visual detection models. To tackle real-world challenges including visual jittering, blind zones, and temporary target loss, we design dedicated solutions. We also develop a functional execution logic for the robot that guarantees LOVON's capabilities in autonomous navigation, task adaptation, and robust task completion.
arXiv Detail & Related papers (2025-07-09T11:02:46Z)
- Olfactory Inertial Odometry: Sensor Calibration and Drift Compensation [0.0]
Olfactory inertial odometry (OIO) fuses signals from gas sensors with inertial data to help a robot navigate by scent. Gas dynamics and environmental factors introduce disturbances into olfactory navigation tasks that can make OIO difficult. We demonstrate our process for OIO calibration on a real robotic arm and show how this calibration improves performance over a cold-start olfactory navigation task.
arXiv Detail & Related papers (2025-06-05T01:16:39Z)
- EDEN: Entorhinal Driven Egocentric Navigation Toward Robotic Deployment [1.5190286092106713]
EDEN is a biologically inspired navigation framework that integrates learned entorhinal-like grid cell representations and reinforcement learning to enable autonomous navigation. Inspired by the mammalian entorhinal-hippocampal system, EDEN allows agents to perform path integration and vector-based navigation using visual and motion sensor data.
arXiv Detail & Related papers (2025-06-03T16:28:33Z)
- Diffusion Models for Increasing Accuracy in Olfaction Sensors and Datasets [0.0]
We introduce a novel machine learning method using diffusion-based molecular generation to enhance odour localization accuracy. Our framework enhances the ability of olfaction-vision models on robots to accurately associate odours with their correct sources.
arXiv Detail & Related papers (2025-05-31T08:22:09Z)
- ForesightNav: Learning Scene Imagination for Efficient Exploration [57.49417653636244]
We propose ForesightNav, a novel exploration strategy inspired by human imagination and reasoning. Our approach equips robotic agents with the capability to predict contextual information, such as occupancy and semantic details, for unexplored regions. We validate our imagination-based approach using the Structured3D dataset, demonstrating accurate occupancy prediction and superior performance in anticipating unseen scene geometry.
arXiv Detail & Related papers (2025-04-22T17:38:38Z)
- CogNav: Cognitive Process Modeling for Object Goal Navigation with LLMs [33.123447047397484]
Object goal navigation (ObjectNav) is a fundamental task in embodied AI, requiring an agent to locate a target object in previously unseen environments. We propose CogNav, a framework designed to mimic the cognitive process using large language models. CogNav improves the success rate of ObjectNav by at least a relative 14% over the state of the art.
arXiv Detail & Related papers (2024-12-11T09:50:35Z)
- A transformer-based deep reinforcement learning approach to spatial navigation in a partially observable Morris Water Maze [0.0]
This work applies a transformer-based architecture using deep reinforcement learning to navigate a 2D version of the Morris Water Maze.
We demonstrate that the proposed architecture enables the agent to efficiently learn spatial navigation strategies.
This work suggests promising avenues for future research in artificial agents whose behavior resembles that of biological agents.
arXiv Detail & Related papers (2024-10-01T13:22:56Z)
- Efficient Real-time Smoke Filtration with 3D LiDAR for Search and Rescue with Autonomous Heterogeneous Robotic Systems [56.838297900091426]
Smoke and dust degrade the performance of any mobile robotic platform that relies on onboard perception systems.
This paper proposes a novel modular computation filtration pipeline based on intensity and spatial information.
arXiv Detail & Related papers (2023-08-14T16:48:57Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- Virtual Reality via Object Poses and Active Learning: Realizing Telepresence Robots with Aerial Manipulation Capabilities [39.29763956979895]
This article presents a novel telepresence system for advancing aerial manipulation in dynamic and unstructured environments.
The proposed system not only features a haptic device, but also a virtual reality (VR) interface that provides real-time 3D displays of the robot's workspace.
We show over 70 robust executions of pick-and-place, force application, and peg-in-hole tasks with the DLR cable-Suspended Aerial Manipulator (SAM).
arXiv Detail & Related papers (2022-10-18T08:42:30Z)
- Pushing it out of the Way: Interactive Visual Navigation [62.296686176988125]
We study the problem of interactive navigation where agents learn to change the environment to navigate more efficiently to their goals.
We introduce the Neural Interaction Engine (NIE) to explicitly predict the change in the environment caused by the agent's actions.
By modeling the changes while planning, we find that agents exhibit significant improvements in their navigational capabilities.
arXiv Detail & Related papers (2021-04-28T22:46:41Z)
- ViNG: Learning Open-World Navigation with Visual Goals [82.84193221280216]
We propose a learning-based navigation system for reaching visually indicated goals.
We show that our system, which we call ViNG, outperforms previously-proposed methods for goal-conditioned reinforcement learning.
We demonstrate ViNG on a number of real-world applications, such as last-mile delivery and warehouse inspection.
arXiv Detail & Related papers (2020-12-17T18:22:32Z)
- Learning to Move with Affordance Maps [57.198806691838364]
The ability to autonomously explore and navigate a physical space is a fundamental requirement for virtually any mobile autonomous agent.
Traditional SLAM-based approaches for exploration and navigation largely focus on leveraging scene geometry.
We show that learned affordance maps can be used to augment traditional approaches for both exploration and navigation, providing significant improvements in performance.
arXiv Detail & Related papers (2020-01-08T04:05:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.