Hybrid Model and Data Driven Algorithm for Online Learning of Any-to-Any
Path Loss Maps
- URL: http://arxiv.org/abs/2107.06677v1
- Date: Wed, 14 Jul 2021 13:08:25 GMT
- Title: Hybrid Model and Data Driven Algorithm for Online Learning of Any-to-Any
Path Loss Maps
- Authors: M. A. Gutierrez-Estevez, Martin Kasparick, Renato L. G. Cavalcante,
  Sławomir Stańczak
- Abstract summary: Learning any-to-any path loss maps might be a key enabler for applications that rely on device-to-device (D2D) communication.
Model-based methods have the advantage that they can generate reliable estimations with low computational complexity.
Pure data-driven methods can achieve good performance without assuming any physical model.
We propose a novel hybrid model and data-driven approach that fuses information obtained from datasets and models in an online fashion.
- Score: 19.963385352536616
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning any-to-any (A2A) path loss maps, where the objective is the
reconstruction of path loss between any two given points in a map, might be a
key enabler for many applications that rely on device-to-device (D2D)
communication. Such applications include machine-type communications (MTC) or
vehicle-to-vehicle (V2V) communications. Current approaches for learning A2A
maps are either model-based methods, or pure data-driven methods. Model-based
methods have the advantage that they can generate reliable estimations with low
computational complexity, but they cannot exploit information coming from data.
Pure data-driven methods can achieve good performance without assuming any
physical model, but their complexity and their lack of robustness are not
acceptable for many applications. In this paper, we propose a novel hybrid
model and data-driven approach that fuses information obtained from datasets
and models in an online fashion. To that end, we leverage the framework of
stochastic learning to deal with the sequential arrival of samples and propose
an online algorithm that alternatively and sequentially minimizes the original
non-convex problem. A proof of convergence is presented, along with experiments
based firstly on synthetic data, and secondly on a more realistic dataset for
V2X, with both experiments showing promising results.
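For intuition only, here is a minimal Python sketch of the kind of online, alternating update the abstract describes: a simple log-distance model serves as the physical prior, a per-link residual table serves as the data-driven part, and each new measurement triggers one model step and one data step. The model form, residual structure, learning rates, and all names are hypothetical illustrations, not the algorithm proposed in the paper.

```python
import numpy as np

# Hypothetical sketch only: a log-distance path loss model acts as the
# model-based prior, and a per-link residual table acts as the data-driven
# part. Each incoming measurement triggers one alternating update: first
# the model parameters, then the residual (with the other held fixed).

class HybridA2APathLoss:
    def __init__(self, lr_model=1e-4, lr_resid=0.1):
        self.pl0 = 40.0      # reference loss at 1 m (dB), initial guess
        self.n = 3.0         # path loss exponent, initial guess
        self.resid = {}      # data-driven residual per (tx, rx) link (dB)
        self.lr_model = lr_model
        self.lr_resid = lr_resid

    def _model(self, d):
        return self.pl0 + 10.0 * self.n * np.log10(max(d, 1.0))

    def predict(self, tx, rx, d):
        return self._model(d) + self.resid.get((tx, rx), 0.0)

    def update(self, tx, rx, d, pl_measured):
        # (1) Model step: stochastic gradient of the squared error w.r.t.
        #     (pl0, n), keeping the residual fixed.
        err = self.predict(tx, rx, d) - pl_measured
        self.pl0 -= self.lr_model * 2.0 * err
        self.n -= self.lr_model * 2.0 * err * 10.0 * np.log10(max(d, 1.0))
        # (2) Data step: recompute the error and update the per-link
        #     residual, keeping the model fixed.
        err = self.predict(tx, rx, d) - pl_measured
        r = self.resid.get((tx, rx), 0.0)
        self.resid[(tx, rx)] = r - self.lr_resid * 2.0 * err


# Toy usage: stream synthetic measurements drawn from a ground-truth
# log-distance model with shadowing and update the estimator online.
rng = np.random.default_rng(0)
est = HybridA2APathLoss()
for _ in range(20000):
    tx, rx = (int(v) for v in rng.integers(0, 20, size=2))
    d = float(rng.uniform(10.0, 500.0))
    pl = 46.0 + 35.0 * np.log10(d) + rng.normal(0.0, 4.0)
    est.update(tx, rx, d, pl)
print(f"fitted model: PL(d) ~ {est.pl0:.1f} + 10*{est.n:.2f}*log10(d) dB")
```

The point of the split in this toy is that the model part can extrapolate to unseen links from distance alone, while the residual only refines links that have actually been observed, which loosely mirrors the motivation for combining models and data.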
Related papers
- Truncated Consistency Models [57.50243901368328]
Training consistency models requires learning to map all intermediate points along PF ODE trajectories to their corresponding endpoints.
We empirically find that this training paradigm limits the one-step generation performance of consistency models.
We propose a new parameterization of the consistency function and a two-stage training procedure that prevents the truncated-time training from collapsing to a trivial solution.
arXiv Detail & Related papers (2024-10-18T22:38:08Z)
- Conformal Trajectory Prediction with Multi-View Data Integration in Cooperative Driving [4.628774934971078]
Current research on trajectory prediction primarily relies on data collected by onboard sensors of an ego vehicle.
We introduce V2INet, a novel trajectory prediction framework designed to model multi-view data by extending existing single-view models.
Our results demonstrate superior performance in terms of Final Displacement Error (FDE) and Miss Rate (MR) using a single GPU.
arXiv Detail & Related papers (2024-08-01T08:32:03Z)
- Multiple data sources and domain generalization learning method for road surface defect classification [2.9109581496560044]
We propose a method for classifying road surface defects using camera images.
We present a domain generalization training algorithm for developing a generalized model.
The results show that our method can efficiently classify road surface defects on previously unseen data.
arXiv Detail & Related papers (2024-07-14T13:37:47Z)
- Automatic AI Model Selection for Wireless Systems: Online Learning via Digital Twinning [50.332027356848094]
AI-based applications are deployed at intelligent controllers to carry out functionalities like scheduling or power control.
The mapping between context and AI model parameters is ideally done in a zero-shot fashion.
This paper introduces a general methodology for the online optimization of AMS mappings.
arXiv Detail & Related papers (2024-06-22T11:17:50Z)
- DiffusionEngine: Diffusion Model is Scalable Data Engine for Object Detection [41.436817746749384]
Diffusion Model is a scalable data engine for object detection.
DiffusionEngine (DE) provides high-quality detection-oriented training pairs in a single stage.
arXiv Detail & Related papers (2023-09-07T17:55:01Z)
- VertiBayes: Learning Bayesian network parameters from vertically partitioned data with missing values [2.9707233220536313]
Federated learning makes it possible to train a machine learning model on decentralized data.
We propose a novel method called VertiBayes to train Bayesian networks on vertically partitioned data.
We experimentally show our approach produces models comparable to those learnt using traditional algorithms.
arXiv Detail & Related papers (2022-10-31T11:13:35Z)
- IDM-Follower: A Model-Informed Deep Learning Method for Long-Sequence Car-Following Trajectory Prediction [24.94160059351764]
Most car-following models are generative and only consider the inputs of the speed, position, and acceleration of the last time step.
We implement a novel structure with two independent encoders and a self-attention decoder that could sequentially predict the following trajectories.
Numerical experiments with multiple settings on simulation and NGSIM datasets show that the IDM-Follower can improve the prediction performance.
arXiv Detail & Related papers (2022-10-20T02:24:27Z)
- Learning Phone Recognition from Unpaired Audio and Phone Sequences Based on Generative Adversarial Network [58.82343017711883]
This paper investigates how to learn directly from unpaired phone sequences and speech utterances.
GAN training is adopted in the first stage to find the mapping relationship between unpaired speech and phone sequence.
In the second stage, another HMM model is introduced to train from the generator's output, which boosts the performance.
arXiv Detail & Related papers (2022-07-29T09:29:28Z)
- Few-Shot Non-Parametric Learning with Deep Latent Variable Model [50.746273235463754]
We propose Non-Parametric learning by Compression with Latent Variables (NPC-LV).
NPC-LV is a learning framework for any dataset with abundant unlabeled data but very few labeled ones.
We show that NPC-LV outperforms supervised methods on all three datasets on image classification in low data regime.
arXiv Detail & Related papers (2022-06-23T09:35:03Z)
- AutoSimulate: (Quickly) Learning Synthetic Data Generation [70.82315853981838]
We propose an efficient alternative for optimal synthetic data generation based on a novel differentiable approximation of the objective.
We demonstrate that the proposed method finds the optimal data distribution faster (up to $50\times$), with significantly reduced training data generation (up to $30\times$) and better accuracy ($+8.7\%$) on real-world test datasets than previous methods.
arXiv Detail & Related papers (2020-08-16T11:36:11Z)
- An Online Method for A Class of Distributionally Robust Optimization with Non-Convex Objectives [54.29001037565384]
We propose a practical online method for solving a class of online distributionally robust optimization (DRO) problems.
Our studies demonstrate important applications in machine learning for improving the robustness of networks.
arXiv Detail & Related papers (2020-06-17T20:19:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.