Online Estimation and Coverage Control with Heterogeneous Sensing Information
- URL: http://arxiv.org/abs/2106.14984v1
- Date: Mon, 28 Jun 2021 20:59:57 GMT
- Title: Online Estimation and Coverage Control with Heterogeneous Sensing Information
- Authors: Andrew McDonald, Lai Wei, Vaibhav Srivastava
- Abstract summary: Heterogeneous multi-robot sensing systems are able to characterize physical processes more comprehensively than homogeneous systems.
Access to multiple modalities of sensory data allows such systems to fuse information between complementary sources.
Low-fidelity data may be more plentiful, while high-fidelity data may be more trustworthy.
- Score: 4.350783459690612
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Heterogeneous multi-robot sensing systems are able to characterize physical
processes more comprehensively than homogeneous systems. Access to multiple
modalities of sensory data allows such systems to fuse information between
complementary sources and learn richer representations of a phenomenon of
interest. Often, these data are correlated but vary in fidelity, i.e., accuracy
(bias) and precision (noise). Low-fidelity data may be more plentiful, while
high-fidelity data may be more trustworthy. In this paper, we address the
problem of multi-robot online estimation and coverage control by combining low-
and high-fidelity data to learn and cover a sensory function of interest. We
propose two algorithms for this task of heterogeneous learning and coverage --
namely Stochastic Sequencing of Multi-fidelity Learning and Coverage (SMLC) and
Deterministic Sequencing of Multi-fidelity Learning and Coverage (DMLC) -- and
prove that they converge asymptotically. In addition, we demonstrate the
empirical efficacy of SMLC and DMLC through numerical simulations.
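To make the setting concrete, the sketch below fuses many noisy low-fidelity samples with a few accurate high-fidelity samples of a sensory function via inverse-variance-weighted kernel smoothing, then takes one Lloyd-style coverage step toward density-weighted Voronoi centroids. This is a generic illustration of the two ingredients (multi-fidelity estimation and coverage control), not the paper's SMLC or DMLC algorithms; the grid resolution, noise levels, and kernel bandwidth are assumptions.

```python
# Illustrative sketch only: multi-fidelity estimation + one Lloyd coverage step.
import numpy as np

rng = np.random.default_rng(0)

# Discretize a unit-square workspace.
res = 50
xs, ys = np.meshgrid(np.linspace(0, 1, res), np.linspace(0, 1, res))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)            # (res*res, 2)

def true_phi(p):
    # Unknown sensory function, used here only to simulate measurements.
    return np.exp(-10.0 * np.sum((p - np.array([0.7, 0.3]))**2, axis=1))

# Heterogeneous data: many noisy low-fidelity and few accurate high-fidelity samples.
lo_idx = rng.choice(len(grid), 400, replace=False)
hi_idx = rng.choice(len(grid), 40, replace=False)
sigma_lo, sigma_hi = 0.30, 0.05                               # assumed noise levels
y_lo = true_phi(grid[lo_idx]) + rng.normal(0.0, sigma_lo, lo_idx.size)
y_hi = true_phi(grid[hi_idx]) + rng.normal(0.0, sigma_hi, hi_idx.size)

def weights(points, samples, sigma, h=0.1):
    # Distance-based kernel weight scaled by inverse noise variance (fidelity).
    d2 = ((points[:, None, :] - samples[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2.0 * h**2)) / sigma**2

w_lo = weights(grid, grid[lo_idx], sigma_lo)
w_hi = weights(grid, grid[hi_idx], sigma_hi)
phi_hat = (w_lo @ y_lo + w_hi @ y_hi) / (w_lo.sum(1) + w_hi.sum(1) + 1e-9)

# One Lloyd-style coverage step: each robot moves to the phi_hat-weighted
# centroid of its Voronoi cell over the grid.
robots = rng.random((5, 2))
owner = np.argmin(((grid[:, None, :] - robots[None, :, :])**2).sum(-1), axis=1)
for i in range(len(robots)):
    cell, wts = grid[owner == i], phi_hat[owner == i]
    if wts.sum() > 0:
        robots[i] = (cell * wts[:, None]).sum(0) / wts.sum()
print(robots)
```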
Related papers
- Efficient Federated Learning with Heterogeneous Data and Adaptive Dropout [62.73150122809138]
Federated Learning (FL) is a promising distributed machine learning approach that enables collaborative training of a global model using multiple edge devices.
We propose the FedDHAD FL framework, which comes with two novel methods: Dynamic Heterogeneous model aggregation (FedDH) and Adaptive Dropout (FedAD).
The combination of these two methods makes FedDHAD significantly outperform state-of-the-art solutions in terms of accuracy (up to 6.7% higher), efficiency (up to 2.02 times faster), and cost (up to 15.0% smaller).
arXiv Detail & Related papers (2025-07-14T16:19:00Z)
- Robust Multi-View Learning via Representation Fusion of Sample-Level Attention and Alignment of Simulated Perturbation [61.64052577026623]
Real-world multi-view datasets are often heterogeneous and imperfect.
We propose a novel robust MVL method (namely RML) with simultaneous representation fusion and alignment.
In experiments, we employ it in unsupervised multi-view clustering, noise-label classification, and as a plug-and-play module for cross-modal hashing retrieval.
arXiv Detail & Related papers (2025-03-06T07:01:08Z)
- Lightweight Multi-System Multivariate Interconnection and Divergence Discovery [0.0]
This study presents a lightweight interconnection and divergence discovery mechanism (LIDD) to identify abnormal behavior in multi-system environments.
Our experiment on the readout systems of the Hadron Calorimeter of the Compact Muon Solenoid (CMS) experiment at CERN demonstrates the effectiveness of the proposed method.
arXiv Detail & Related papers (2024-04-12T13:02:33Z)
- Physics-informed and Unsupervised Riemannian Domain Adaptation for Machine Learning on Heterogeneous EEG Datasets [53.367212596352324]
We propose an unsupervised approach leveraging EEG signal physics.
We map EEG channels to fixed positions using field interpolation, facilitating source-free domain adaptation.
Our method demonstrates robust performance in brain-computer interface (BCI) tasks and potential biomarker applications.
arXiv Detail & Related papers (2024-03-07T16:17:33Z)
- Disentangled Multi-Fidelity Deep Bayesian Active Learning [19.031567953748453]
Multi-fidelity active learning aims to learn a direct mapping from input parameters to simulation outputs at the highest fidelity.
Deep learning-based methods often impose a hierarchical structure in hidden representations, which only supports passing information from low-fidelity to high-fidelity.
We propose a novel framework called Disentangled Multi-fidelity Deep Bayesian Active Learning (D-MFDAL), which learns the surrogate models conditioned on the distribution of functions at multiple fidelities.
arXiv Detail & Related papers (2023-05-07T23:14:58Z)
- HFN: Heterogeneous Feature Network for Multivariate Time Series Anomaly Detection [2.253268952202213]
We propose a novel semi-supervised anomaly detection framework based on a heterogeneous feature network (HFN) for MTS.
We first combine the embedding-similarity subgraph generated from sensor embeddings with the feature-value-similarity subgraph generated from sensor values to construct a time-series heterogeneous graph.
This approach fuses the state-of-the-art technologies of heterogeneous graph structure learning (HGSL) and representation learning.
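As a rough illustration of the graph-construction step described above, the toy sketch below builds a value-similarity and an embedding-similarity k-NN subgraph over sensors and merges them into a single adjacency matrix. It is not the HFN/HGSL implementation; the cosine-similarity k-NN construction, embedding dimension, and union-style fusion are assumptions.

```python
# Toy sketch (not the HFN implementation): merge two similarity subgraphs over sensors.
import numpy as np

rng = np.random.default_rng(1)
n_sensors, T, d_emb, k = 8, 100, 16, 3
values = rng.normal(size=(n_sensors, T))        # raw sensor time series
embeds = rng.normal(size=(n_sensors, d_emb))    # assumed learned sensor embeddings

def knn_adjacency(feats, k):
    # Symmetric k-NN adjacency from cosine similarity.
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)               # exclude self-loops
    adj = np.zeros_like(sim)
    nbrs = np.argsort(-sim, axis=1)[:, :k]
    rows = np.repeat(np.arange(len(feats)), k)
    adj[rows, nbrs.ravel()] = 1.0
    return np.maximum(adj, adj.T)                # symmetrize

A_value = knn_adjacency(values, k)
A_embed = knn_adjacency(embeds, k)
A_hetero = np.maximum(A_value, A_embed)          # union of the two subgraphs
print(int(A_hetero.sum()) // 2, "undirected edges in the merged graph")
```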
arXiv Detail & Related papers (2022-11-01T05:01:34Z)
- Differentiable Agent-based Epidemiology [71.81552021144589]
We introduce GradABM: a scalable, differentiable design for agent-based modeling that is amenable to gradient-based learning with automatic differentiation.
GradABM can quickly simulate million-size populations in a few seconds on commodity hardware, integrate with deep neural networks, and ingest heterogeneous data sources.
arXiv Detail & Related papers (2022-07-20T07:32:02Z)
- CCLF: A Contrastive-Curiosity-Driven Learning Framework for Sample-Efficient Reinforcement Learning [56.20123080771364]
We develop a model-agnostic Contrastive-Curiosity-Driven Learning Framework (CCLF) for reinforcement learning.
CCLF fully exploits sample importance and improves learning efficiency in a self-supervised manner.
We evaluate this approach on the DeepMind Control Suite, Atari, and MiniGrid benchmarks.
arXiv Detail & Related papers (2022-05-02T14:42:05Z)
- Coupled Support Tensor Machine Classification for Multimodal Neuroimaging Data [28.705764174771936]
A Coupled Support Tensor Machine (C-STM) is built upon the latent factors estimated from Advanced Coupled Matrix Tensor Factorization (ACMTF).
C-STM combines individual and shared latent factors with multiple kernels and estimates a maximal-margin classifier for coupled matrix tensor data.
The classification risk of C-STM is shown to converge to the optimal Bayes risk, making it a statistically consistent rule.
arXiv Detail & Related papers (2022-01-19T16:13:09Z)
- Unsupervised Deep Anomaly Detection for Multi-Sensor Time-Series Signals [10.866594993485226]
We propose a novel deep learning-based anomaly detection algorithm called Deep Convolutional Autoencoding Memory network (CAE-M).
We first build a Deep Convolutional Autoencoder to characterize the spatial dependence of multi-sensor data with a Maximum Mean Discrepancy (MMD) penalty.
Then, we construct a Memory Network consisting of linear (Autoregressive Model) and non-linear predictions (Bidirectional LSTM with Attention) to capture temporal dependence in the time-series data.
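The MMD term mentioned above is a kernel two-sample statistic; the snippet below shows a generic biased Gaussian-kernel estimator of MMD^2, not the CAE-M code, with the bandwidth and sample shapes chosen arbitrarily for illustration.

```python
# Generic (biased) Gaussian-kernel MMD^2 estimator between two sample sets;
# illustrative only, not the CAE-M implementation.
import numpy as np

def mmd2(X, Y, bandwidth=1.0):
    # MMD^2(X, Y) = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)] with a Gaussian kernel.
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
        return np.exp(-d2 / (2.0 * bandwidth**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
same = mmd2(rng.normal(size=(200, 8)), rng.normal(size=(200, 8)))
diff = mmd2(rng.normal(size=(200, 8)), rng.normal(2.0, 1.0, size=(200, 8)))
print(f"same-distribution MMD^2 ~ {same:.4f}, shifted MMD^2 ~ {diff:.4f}")
```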
arXiv Detail & Related papers (2021-07-27T06:48:20Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for mean-field control (MFC).
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- General stochastic separation theorems with optimal bounds [68.8204255655161]
The phenomenon of separability was revealed and used in machine learning to correct errors of Artificial Intelligence (AI) systems and analyze AI instabilities.
Errors or clusters of errors can be separated from the rest of the data.
The ability to correct an AI system also opens up the possibility of an attack on it, and the high dimensionality induces vulnerabilities caused by the same separability.
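A minimal numerical illustration of this separability phenomenon is sketched below, assuming i.i.d. Gaussian data and a simple Fisher-type linear test; the paper's theorems give optimal bounds for much broader distribution classes, so the dimensions, sample size, and threshold here are arbitrary choices.

```python
# Illustration of high-dimensional separability: a single point x (e.g., an error)
# is split from a large random sample by the linear rule <z, x> <= alpha * ||x||^2.
import numpy as np

rng = np.random.default_rng(0)
d, n, alpha = 500, 5000, 0.8
data = rng.normal(size=(n, d))      # "the rest of the data"
x = rng.normal(size=d)              # the point to separate

separated = np.all(data @ x <= alpha * np.dot(x, x))
print(f"point linearly separated from all {n} samples: {separated}")
```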
arXiv Detail & Related papers (2020-10-11T13:12:41Z)
- Multilinear Compressive Learning with Prior Knowledge [106.12874293597754]
The Multilinear Compressive Learning (MCL) framework combines Multilinear Compressive Sensing and Machine Learning into an end-to-end system.
The key idea behind MCL is the assumption that there exists a tensor subspace which can capture the essential features of the signal for the downstream learning task.
In this paper, we propose a novel solution to address both of the aforementioned requirements, i.e., how to find tensor subspaces in which the signals of interest are highly separable.
arXiv Detail & Related papers (2020-02-17T19:06:05Z)