Data-Efficient Collaborative Decentralized Thermal-Inertial Odometry
- URL: http://arxiv.org/abs/2209.06588v1
- Date: Wed, 14 Sep 2022 12:13:36 GMT
- Title: Data-Efficient Collaborative Decentralized Thermal-Inertial Odometry
- Authors: Vincenzo Polizzi, Robert Hewitt, Javier Hidalgo-Carrió, Jeff Delaune and Davide Scaramuzza
- Abstract summary: We propose a system to achieve data-efficient, decentralized state estimation for a team of flying robots.
Each robot can fly independently, and exchange data when possible to refine its state estimate.
Our results show that the proposed method improves trajectory estimation by up to 46% relative to an individual-agent approach.
- Score: 37.23164397188061
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a system solution to achieve data-efficient, decentralized state
estimation for a team of flying robots using thermal images and inertial
measurements. Each robot can fly independently, and exchange data when possible
to refine its state estimate. Our system front-end applies an online
photometric calibration to refine the thermal images so as to enhance feature
tracking and place recognition. Our system back-end uses a
covariance-intersection fusion strategy to neglect the cross-correlation
between agents so as to lower memory usage and computational cost. The
communication pipeline uses Vector of Locally Aggregated Descriptors (VLAD) to
construct a request-response policy that requires low bandwidth usage. We test
our collaborative method on both synthetic and real-world data. Our results
show that the proposed method improves trajectory estimation by up to 46%
relative to an individual-agent approach, while reducing communication
exchange by up to 89%. Datasets and code are released to the public, extending
the already-public JPL xVIO library.
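For intuition on the back-end, here is a minimal covariance-intersection sketch in plain NumPy (not the released xVIO code): two estimates are fused without knowing their cross-correlation by weighting their information matrices, with the weight chosen by a coarse grid search to minimize the trace of the fused covariance. All variable names are illustrative.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=100):
    """Fuse estimates (x1, P1) and (x2, P2) whose cross-correlation is
    unknown. The weight omega is found by a grid search that minimizes
    the trace of the fused covariance."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for omega in np.linspace(1e-3, 1.0 - 1e-3, n_grid):
        P = np.linalg.inv(omega * I1 + (1.0 - omega) * I2)
        if best is None or np.trace(P) < best[0]:
            x = P @ (omega * I1 @ x1 + (1.0 - omega) * I2 @ x2)
            best = (np.trace(P), x, P)
    return best[1], best[2]

# Example: fuse a local estimate with one received from another agent.
x_loc, P_loc = np.array([1.0, 2.0]), np.diag([0.5, 1.0])
x_rx,  P_rx  = np.array([1.2, 1.8]), np.diag([1.0, 0.4])
x_fused, P_fused = covariance_intersection(x_loc, P_loc, x_rx, P_rx)
```

Because covariance intersection never underestimates uncertainty, each agent can absorb estimates from teammates without tracking the cross-correlations that a joint filter would require, which is what keeps memory and computation low.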
Related papers
- FedScalar: A Communication-Efficient Federated Learning [0.0]
Federated learning (FL) has gained considerable popularity for distributed machine learning.
FedScalar enables agents to communicate updates using a single scalar.
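The abstract does not spell out how a full update fits into one scalar. One standard way to do it, shown here purely as an assumed illustration and not as the actual FedScalar protocol, is a shared random projection: each agent transmits the inner product of its update with a common random vector, and the server forms an unbiased (if high-variance) estimate of the average update.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_agents = 10_000, 8

# Shared random direction, reproducible from a common seed so it never
# has to be transmitted.
v = rng.standard_normal(dim)

def encode(update):
    # Each agent sends a single scalar: its update projected onto v.
    return float(v @ update)

def decode(scalars):
    # E[v v^T] = I for v ~ N(0, I), so this is an unbiased estimate of
    # the average update (high variance per round, averaging over time).
    return np.mean(scalars) * v

updates = [0.01 * rng.standard_normal(dim) for _ in range(n_agents)]
avg_update_estimate = decode([encode(u) for u in updates])
```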
arXiv Detail & Related papers (2024-10-03T07:06:49Z)
- Decentralized Federated Learning with Gradient Tracking over Time-Varying Directed Networks [42.92231921732718]
We propose a consensus-based algorithm called DSGTm-TV.
It incorporates gradient tracking and heavy-ball momentum to optimize a global objective function.
Under DSGTm-TV, agents will update local model parameters and gradient estimates using information exchange with neighboring agents.
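Below is a simplified sketch of a gradient-tracking-with-momentum update on a fixed, doubly stochastic ring network; the paper itself handles time-varying directed graphs with separate row- and column-stochastic weights. The local objectives and step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, alpha, beta, T = 4, 3, 0.01, 0.3, 2000

# Local quadratics f_i(x) = 0.5 * ||A_i x - b_i||^2.
A = [rng.standard_normal((5, d)) for _ in range(n)]
b = [rng.standard_normal(5) for _ in range(n)]
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])

# Fixed doubly stochastic ring; DSGTm-TV allows time-varying directed
# graphs instead.
W = np.zeros((n, n))
for i in range(n):
    W[i, i], W[i, (i + 1) % n], W[i, (i - 1) % n] = 0.5, 0.25, 0.25

x = np.zeros((n, d)); x_prev = x.copy()
y = np.array([grad(i, x[i]) for i in range(n)])   # gradient trackers

for _ in range(T):
    # Mix with neighbors, step along the tracked gradient, add momentum.
    x_new = W @ x - alpha * y + beta * (x - x_prev)
    # Tracker update keeps each y_i following the network-average gradient.
    y = W @ y + np.array([grad(i, x_new[i]) - grad(i, x[i])
                          for i in range(n)])
    x_prev, x = x, x_new
```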
arXiv Detail & Related papers (2024-09-25T06:23:16Z)
- Distributed and Rate-Adaptive Feature Compression [25.36842869638915]
We study the problem of distributed and rate-adaptive feature compression for linear regression.
We propose a distributed compression scheme which works by quantizing a one-dimensional projection of the sensor data.
We also propose a simple adaptive scheme for handling changes in communication constraints.
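A minimal sketch of the core idea as described, with assumed names and value ranges: quantize a one-dimensional projection of the sensor vector, and let the bit budget change from round to round as the communication constraint changes.

```python
import numpy as np

def compress(x, w, bits, lo=-1.0, hi=1.0):
    """Encode the scalar projection w @ x with `bits` bits; `bits` can
    change every round as the communication budget changes."""
    s = float(np.clip(w @ x, lo, hi))
    levels = 2 ** bits
    return int(round((s - lo) / (hi - lo) * (levels - 1)))

def decompress(idx, bits, lo=-1.0, hi=1.0):
    levels = 2 ** bits
    return lo + idx * (hi - lo) / (levels - 1)

rng = np.random.default_rng(2)
w = rng.standard_normal(16); w /= np.linalg.norm(w)   # projection direction
x = 0.1 * rng.standard_normal(16)                     # sensor reading
for budget in (8, 4, 2):                              # shrinking rate budget
    s_hat = decompress(compress(x, w, budget), budget)
```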
arXiv Detail & Related papers (2024-04-02T03:21:06Z)
- Online Distributed Learning with Quantized Finite-Time Coordination [0.4910937238451484]
In our setting a set of agents need to cooperatively train a learning model from streaming data.
We propose a distributed algorithm that relies on a quantized, finite-time coordination protocol.
We analyze the performance of the proposed algorithm in terms of the mean distance from the online solution.
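The exact finite-time protocol is not reproduced here; as a stand-in, this sketch runs quantized average consensus for a fixed round budget, so agents only ever exchange quantized messages.

```python
import numpy as np

def quantize(v, step=0.05):
    # Uniform quantizer applied to every transmitted message.
    return np.round(v / step) * step

rng = np.random.default_rng(3)
n, rounds = 6, 20
x = rng.standard_normal(n)          # local values to be agreed upon

# Doubly stochastic weights on a ring graph.
W = np.zeros((n, n))
for i in range(n):
    W[i, i], W[i, (i + 1) % n], W[i, (i - 1) % n] = 0.5, 0.25, 0.25

for _ in range(rounds):             # fixed communication budget
    x = W @ quantize(x)             # only quantized values are exchanged

# x now sits near the initial average, up to quantization error.
```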
arXiv Detail & Related papers (2023-07-13T08:36:15Z)
- UnLoc: A Universal Localization Method for Autonomous Vehicles using LiDAR, Radar and/or Camera Input [51.150605800173366]
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets.
arXiv Detail & Related papers (2023-07-03T04:10:55Z)
- Data-heterogeneity-aware Mixing for Decentralized Learning [63.83913592085953]
We characterize the dependence of convergence on the relationship between the mixing weights of the graph and the data heterogeneity across nodes.
We propose a metric that quantifies the ability of a graph to mix the current gradients.
Motivated by our analysis, we propose an approach that periodically and efficiently optimizes the metric.
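The proposed metric itself is not reproduced here; the sketch below only shows the mixing step of decentralized SGD, where the mixing matrix W is the object the paper analyzes and optimizes.

```python
import numpy as np

def decentralized_sgd_step(x, grads, W, lr=0.1):
    """One round of decentralized SGD: every node takes a local gradient
    step on its own (heterogeneous) data, then averages with neighbors
    through the mixing weights W. How quickly W averages out the
    disagreement between local gradients is what the paper's metric
    captures."""
    return W @ (x - lr * grads)   # x, grads: (n_nodes, dim); W: (n, n)
```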
arXiv Detail & Related papers (2022-04-13T15:54:35Z)
- Federated Stochastic Gradient Descent Begets Self-Induced Momentum [151.4322255230084]
Federated learning (FL) is an emerging machine learning method that can be applied in mobile edge systems.
We show that running stochastic gradient descent (SGD) in such a setting can be viewed as adding a momentum-like term to the global aggregation process.
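As a reference point, here is a generic federated round with local SGD steps and server-side model averaging, the kind of aggregation whose analysis reveals the momentum-like term; the quadratic local losses are an assumption made to keep the example runnable, not the paper's setup.

```python
import numpy as np

def fed_round(w, client_data, lr=0.05, local_steps=5):
    """One communication round: every client runs a few SGD steps from
    the current global model, and the server averages the results. The
    interplay between local steps and averaging is where a momentum-like
    term appears in the analysis."""
    local_models = []
    for A, b in client_data:                 # loss 0.5 * ||A w - b||^2
        wi = w.copy()
        for _ in range(local_steps):
            wi -= lr * A.T @ (A @ wi - b)
        local_models.append(wi)
    return np.mean(local_models, axis=0)
```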
arXiv Detail & Related papers (2022-02-17T02:01:37Z)
- Communication-Efficient Hierarchical Federated Learning for IoT Heterogeneous Systems with Imbalanced Data [42.26599494940002]
Federated learning (FL) is a distributed learning methodology that allows multiple nodes to cooperatively train a deep learning model.
This paper studies the potential of hierarchical FL in IoT heterogeneous systems.
It proposes an optimized solution for user assignment and resource allocation on multiple edge nodes.
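A minimal sketch of two-level (client, edge node, cloud) aggregation under assumed quadratic losses and one local step; the paper's actual contribution, optimizing which users are assigned to which edge node and how resources are allocated, is not reproduced.

```python
import numpy as np

def hierarchical_round(w, edge_groups, lr=0.05):
    """Two-level aggregation: clients -> edge node -> cloud. Each entry
    of `edge_groups` is the list of (A, b) datasets assigned to one edge
    node. A real system would weight the means by sample counts, which
    matters under imbalanced data."""
    edge_models = []
    for clients in edge_groups:
        local = [w - lr * A.T @ (A @ w - b) for A, b in clients]  # 1 step
        edge_models.append(np.mean(local, axis=0))   # edge aggregation
    return np.mean(edge_models, axis=0)              # cloud aggregation
```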
arXiv Detail & Related papers (2021-07-14T08:32:39Z)
- Unsupervised Metric Relocalization Using Transform Consistency Loss [66.19479868638925]
Training networks to perform metric relocalization traditionally requires accurate image correspondences.
We propose a self-supervised solution, which exploits a key insight: localizing a query image within a map should yield the same absolute pose, regardless of the reference image used for registration.
We evaluate our framework on synthetic and real-world data, showing our approach outperforms other supervised methods when a limited amount of ground-truth information is available.
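A sketch of that consistency idea with assumed 4x4 homogeneous transforms and an assumed composition convention; a trainable version would use a differentiable pose discrepancy rather than this NumPy illustration.

```python
import numpy as np

def pose_discrepancy(T_a, T_b):
    """Rotation angle plus translation distance between two poses."""
    dT = np.linalg.inv(T_a) @ T_b
    ang = np.arccos(np.clip((np.trace(dT[:3, :3]) - 1.0) / 2.0, -1.0, 1.0))
    return ang + np.linalg.norm(dT[:3, 3])

def consistency_loss(T_q_r1, T_r1_map, T_q_r2, T_r2_map):
    """The query localized against reference 1 and against reference 2
    should land at the same absolute pose in the map frame."""
    T_abs_1 = T_r1_map @ T_q_r1
    T_abs_2 = T_r2_map @ T_q_r2
    return pose_discrepancy(T_abs_1, T_abs_2)
```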
arXiv Detail & Related papers (2020-11-01T19:24:27Z)
- Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge Computing [113.52575069030192]
Big data, including applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones and vehicles.
Due to the limitations of communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data to a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
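For reference, here is a plain consensus-ADMM loop for distributed least squares; the coded, mini-batch stochastic variant studied in the paper builds on this template. All names and the quadratic losses are assumptions.

```python
import numpy as np

def consensus_admm(A_list, b_list, rho=1.0, iters=100):
    """Consensus ADMM for distributed least squares: each node solves a
    regularized local subproblem, a shared variable z enforces
    agreement, and dual variables u accumulate the disagreement."""
    n, d = len(A_list), A_list[0].shape[1]
    x, u, z = np.zeros((n, d)), np.zeros((n, d)), np.zeros(d)
    for _ in range(iters):
        for i in range(n):
            # x_i = argmin 0.5||A_i x - b_i||^2 + (rho/2)||x - z + u_i||^2
            H = A_list[i].T @ A_list[i] + rho * np.eye(d)
            rhs = A_list[i].T @ b_list[i] + rho * (z - u[i])
            x[i] = np.linalg.solve(H, rhs)
        z = (x + u).mean(axis=0)        # consensus (averaging) step
        u += x - z                      # dual update
    return z
```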
arXiv Detail & Related papers (2020-10-02T10:41:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.