Data-Driven Distributed State Estimation and Behavior Modeling in Sensor Networks
- URL: http://arxiv.org/abs/2009.10827v2
- Date: Thu, 24 Sep 2020 15:12:45 GMT
- Title: Data-Driven Distributed State Estimation and Behavior Modeling in Sensor Networks
- Authors: Rui Yu, Zhenyuan Yuan, Minghui Zhu, Zihan Zhou
- Abstract summary: We formulate the problem of simultaneous state estimation and behavior learning in a sensor network.
We propose a simple yet effective solution by extending the Gaussian process-based Bayes filters (GP-BayesFilters) to an online, distributed setting.
The effectiveness of the proposed method is evaluated on tracking objects with unknown movement behaviors using both synthetic data and data collected from a multi-robot platform.
- Score: 5.817715558396024
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays, the prevalence of sensor networks has enabled tracking of the
states of dynamic objects for a wide spectrum of applications from autonomous
driving to environmental monitoring and urban planning. However, tracking
real-world objects often faces two key challenges: First, due to the limitation
of individual sensors, state estimation needs to be solved in a collaborative
and distributed manner. Second, the objects' movement behavior is unknown, and
needs to be learned using sensor observations. In this work, for the first
time, we formally formulate the problem of simultaneous state estimation and
behavior learning in a sensor network. We then propose a simple yet effective
solution to this new problem by extending the Gaussian process-based Bayes
filters (GP-BayesFilters) to an online, distributed setting. The effectiveness
of the proposed method is evaluated on tracking objects with unknown movement
behaviors using both synthetic data and data collected from a multi-robot
platform.
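The core idea above can be illustrated with a minimal, self-contained sketch (not the authors' implementation): a Gaussian process learned online from observed state transitions serves as the motion model inside a simple scalar Kalman-style filter, and a single averaging step stands in for consensus among distributed sensors. The kernel parameters, noise levels, and scalar state are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, length=1.0, var=1.0):
    # Squared-exponential kernel between row vectors in A (m,d) and B (n,d).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length ** 2)

class GPMotionModel:
    """GP regression x_{t+1} ~ f(x_t), trained online from observed transitions."""
    def __init__(self, noise=1e-2):
        self.X, self.Y, self.noise = [], [], noise

    def add(self, x_t, x_next):
        # Accumulate a transition pair (online learning of the behavior model).
        self.X.append(x_t)
        self.Y.append(x_next)

    def predict(self, x):
        # Standard GP posterior mean and variance at query point x.
        X = np.array(self.X).reshape(-1, 1)
        Y = np.array(self.Y)
        K = rbf_kernel(X, X) + self.noise * np.eye(len(X))
        k = rbf_kernel(np.array([[x]]), X)
        Kinv = np.linalg.inv(K)
        mean = float(k @ Kinv @ Y)
        var = float(1.0 - k @ Kinv @ k.T)  # prior variance is 1.0 here
        return mean, max(var, 1e-9)

def gp_bayes_filter_step(mu, P, z, gp, R=0.05):
    # Predict: GP mean as the (learned) motion model, GP variance as process noise.
    mu_pred, Q = gp.predict(mu)
    P_pred = P + Q
    # Update: direct noisy observation z of the scalar state (Kalman update).
    K = P_pred / (P_pred + R)
    return mu_pred + K * (z - mu_pred), (1 - K) * P_pred

def consensus(estimates):
    # One round of averaging of neighboring sensors' estimates; a stand-in
    # for an iterative distributed consensus protocol.
    return float(np.mean(estimates))
```

For instance, training the GP on transitions of a decaying trajectory x_{t+1} = 0.9 x_t and then filtering noisy observations of that trajectory tracks the true state closely, without the dynamics ever being specified analytically.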
Related papers
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z)
- DynST: Dynamic Sparse Training for Resource-Constrained Spatio-Temporal Forecasting [24.00162014044092]
Earth science systems rely heavily on the extensive deployment of sensors.
Traditional approaches to sensor deployment utilize specific algorithms to design and deploy sensors.
In this paper, we introduce for the first time the concept of dynamic sparse training and are committed to adaptively, dynamically filtering important sensor data.
arXiv Detail & Related papers (2024-03-05T12:31:24Z)
- Physical-Layer Semantic-Aware Network for Zero-Shot Wireless Sensing [74.12670841657038]
Device-free wireless sensing has recently attracted significant interest due to its potential to support a wide range of immersive human-machine interactive applications.
Data heterogeneity in wireless signals and data privacy regulation of distributed sensing have been considered as the major challenges that hinder the wide applications of wireless sensing in large area networking systems.
We propose a novel zero-shot wireless sensing solution that allows models constructed in one or a limited number of locations to be directly transferred to other locations without any labeled data.
arXiv Detail & Related papers (2023-12-08T13:50:30Z)
- Know Thy Neighbors: A Graph Based Approach for Effective Sensor-Based Human Activity Recognition in Smart Homes [0.0]
We propose a novel graph-guided neural network approach for Human Activity Recognition (HAR) in smart homes.
We accomplish this by learning a more expressive graph structure representing the sensor network in a smart home.
Our approach maps discrete input sensor measurements to a feature space through the application of attention mechanisms.
arXiv Detail & Related papers (2023-11-16T02:43:13Z)
- Simultaneous Clutter Detection and Semantic Segmentation of Moving Objects for Automotive Radar Data [12.96486891333286]
Radar sensors are an important part of the environment perception system of autonomous vehicles.
One of the first steps during the processing of radar point clouds is often the detection of clutter.
Another common objective is the semantic segmentation of moving road users.
We show that our setup is highly effective and outperforms every existing network for semantic segmentation on the RadarScenes dataset.
arXiv Detail & Related papers (2023-11-13T11:29:38Z)
- Point Cloud Forecasting as a Proxy for 4D Occupancy Forecasting [58.45661235893729]
One promising self-supervised task is 3D point cloud forecasting from unannotated LiDAR sequences.
We show that this task requires algorithms to implicitly capture (1) sensor extrinsics (i.e., the egomotion of the autonomous vehicle), (2) sensor intrinsics (i.e., the sampling pattern specific to the particular LiDAR sensor), and (3) the shape and motion of other objects in the scene.
We render point cloud data from 4D occupancy predictions given sensor extrinsics and intrinsics, allowing one to train and test occupancy algorithms with unannotated LiDAR sequences.
arXiv Detail & Related papers (2023-02-25T18:12:37Z)
- Because Every Sensor Is Unique, so Is Every Pair: Handling Dynamicity in Traffic Forecasting [32.354251863295424]
Traffic forecasting is a critical task to extract values from cyber-physical infrastructures.
In this paper, we first analyze real-world traffic data to show that each sensor has a unique dynamic.
Next, we propose a novel module called Spatial Graph Transformers (SGT) to leverage the self-attention mechanism.
Finally, we present Graph Self-attention WaveNet (G-SWaN) to address the complex, non-linear temporal traffic dynamics.
arXiv Detail & Related papers (2023-02-20T12:57:31Z)
- Feeling of Presence Maximization: mmWave-Enabled Virtual Reality Meets Deep Reinforcement Learning [76.46530937296066]
This paper investigates the problem of providing ultra-reliable and energy-efficient virtual reality (VR) experiences for wireless mobile users.
To ensure reliable ultra-high-definition (UHD) video frame delivery to mobile users, a coordinated multipoint (CoMP) transmission technique and millimeter wave (mmWave) communications are exploited.
arXiv Detail & Related papers (2021-06-03T08:35:10Z)
- On the Role of Sensor Fusion for Object Detection in Future Vehicular Networks [25.838878314196375]
We evaluate how using a combination of different sensors affects the detection of the environment in which the vehicles move and operate.
The final objective is to identify the optimal setup that would minimize the amount of data to be distributed over the channel.
arXiv Detail & Related papers (2021-04-23T18:58:37Z)
- TRiPOD: Human Trajectory and Pose Dynamics Forecasting in the Wild [77.59069361196404]
TRiPOD is a novel method for predicting body dynamics based on graph attentional networks.
To incorporate a real-world challenge, we learn an indicator representing whether an estimated body joint is visible/invisible at each frame.
Our evaluation shows that TRiPOD outperforms all prior work and state-of-the-art specifically designed for each of the trajectory and pose forecasting tasks.
arXiv Detail & Related papers (2021-04-08T20:01:00Z)
- IntentNet: Learning to Predict Intention from Raw Sensor Data [86.74403297781039]
In this paper, we develop a one-stage detector and forecaster that exploits both 3D point clouds produced by a LiDAR sensor as well as dynamic maps of the environment.
Our multi-task model achieves better accuracy than the respective separate modules while saving computation, which is critical to reducing reaction time in self-driving applications.
arXiv Detail & Related papers (2021-01-20T00:31:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.