An Overview of Machine Learning Techniques for Radiowave Propagation Modeling
- URL: http://arxiv.org/abs/2101.11760v1
- Date: Thu, 28 Jan 2021 00:55:11 GMT
- Title: An Overview of Machine Learning Techniques for Radiowave Propagation Modeling
- Authors: Aristeidis Seretis, Costas D. Sarris
- Abstract summary: We identify the input and output specification and the architecture of the model as the main challenges associated with machine learning-driven propagation models.
Papers are discussed and categorized based on their approach to each of these challenges.
- Score: 0.6853165736531939
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We give an overview of recent developments in the modeling of radiowave
propagation, based on machine learning algorithms. We identify the input and
output specification and the architecture of the model as the main challenges
associated with machine learning-driven propagation models. Relevant papers are
discussed and categorized based on their approach to each of these challenges.
Emphasis is placed on presenting the prospects and open problems in this
promising and rapidly evolving area.
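To make the input/output specification and architecture questions concrete, the sketch below trains a small regressor that maps hand-picked link features (Tx-Rx distance, carrier frequency, a line-of-sight flag) to path loss in dB. The synthetic log-distance data, the feature set, and the two-layer MLP are assumptions chosen for illustration, not the models surveyed in the paper.

```python
# Minimal sketch (not from the paper): a data-driven path-loss model with an
# explicit input/output specification. Inputs: Tx-Rx distance [m], carrier
# frequency [GHz], and a LOS/NLOS flag; output: path loss [dB]. The synthetic
# log-distance data and the small MLP architecture are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 2000
d = rng.uniform(10, 500, n)        # Tx-Rx distance in meters
f = rng.uniform(1.0, 6.0, n)       # carrier frequency in GHz
los = rng.integers(0, 2, n)        # 1 = line of sight, 0 = obstructed

# Assumed ground truth: log-distance model with a larger exponent and shadowing for NLOS links.
exponent = np.where(los == 1, 2.0, 3.2)
path_loss = 32.4 + 20 * np.log10(f) + 10 * exponent * np.log10(d) + rng.normal(0, 4, n)

X = np.column_stack([np.log10(d), np.log10(f), los])          # input specification
X_tr, X_te, y_tr, y_te = train_test_split(X, path_loss, test_size=0.2, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
print(f"held-out RMSE: {rmse:.2f} dB")
```

Swapping the hand-picked features for richer inputs (e.g., map or geometry data) and the MLP for a different architecture is exactly the design space the survey discusses.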
Related papers
- A Survey of Model Architectures in Information Retrieval [64.75808744228067]
We focus on two key aspects: backbone models for feature extraction and end-to-end system architectures for relevance estimation.
We trace the development from traditional term-based methods to modern neural approaches, particularly highlighting the impact of transformer-based models and subsequent large language models (LLMs).
We conclude by discussing emerging challenges and future directions, including architectural optimizations for performance and scalability, handling of multimodal, multilingual data, and adaptation to novel application domains beyond traditional search paradigms.
arXiv Detail & Related papers (2025-02-20T18:42:58Z)
- Unified Explanations in Machine Learning Models: A Perturbation Approach [0.0]
Inconsistencies between XAI and modeling techniques can have the undesirable effect of casting doubt upon the efficacy of these explainability approaches.
We propose a systematic, perturbation-based analysis against a popular, model-agnostic XAI method, SHapley Additive exPlanations (SHAP).
We devise algorithms to generate relative feature importance in settings of dynamic inference amongst a suite of popular machine learning and deep learning methods, and metrics that allow us to quantify how well explanations generated under the static case hold.
arXiv Detail & Related papers (2024-05-30T16:04:35Z)
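A heavily simplified version of the perturbation idea described in the entry above is permutation-style feature importance: perturb one input at a time and measure how much the model's error degrades. The sketch below is a generic illustration on assumed synthetic data; it is neither the paper's algorithm nor the SHAP implementation.

```python
# Simplified sketch of perturbation-based feature importance (illustrative only;
# not the paper's algorithm and not SHAP): permute one feature at a time and
# record how much the model's error grows relative to the unperturbed baseline.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=1000, n_features=5, n_informative=3,
                       noise=5.0, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

rng = np.random.default_rng(0)
baseline = mean_squared_error(y, model.predict(X))
for j in range(X.shape[1]):
    X_pert = X.copy()
    X_pert[:, j] = rng.permutation(X_pert[:, j])   # break the feature-target relationship
    increase = mean_squared_error(y, model.predict(X_pert)) - baseline
    print(f"feature {j}: MSE increase {increase:.1f}")
```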
- Generative AI in Vision: A Survey on Models, Metrics and Applications [0.0]
Generative AI models have revolutionized various fields by enabling the creation of realistic and diverse data samples.
Among these models, diffusion models have emerged as a powerful approach for generating high-quality images, text, and audio.
This survey paper provides a comprehensive overview of generative AI diffusion and legacy models, focusing on their underlying techniques, applications across different domains, and their challenges.
arXiv Detail & Related papers (2024-02-26T07:47:12Z)
- Physics-informed Generalizable Wireless Channel Modeling with Segmentation and Deep Learning: Fundamentals, Methodologies, and Challenges [26.133092114053472]
We show that PINN-based approaches in channel modeling exhibit promising attributes such as generalizability, interpretability, and robustness.
A case-study of our recent work on precise indoor channel prediction with semantic segmentation and deep learning is presented.
arXiv Detail & Related papers (2024-01-02T16:56:13Z)
- Masked Modeling for Self-supervised Representation Learning on Vision and Beyond [69.64364187449773]
Masked modeling has emerged as a distinctive approach that involves predicting parts of the original data that are proportionally masked during training.
We elaborate on the details of techniques within masked modeling, including diverse masking strategies, recovering targets, network architectures, and more.
We conclude by discussing the limitations of current techniques and point out several potential avenues for advancing masked modeling research.
arXiv Detail & Related papers (2023-12-31T12:03:21Z)
- Learning from models beyond fine-tuning [78.20895343699658]
Learn From Model (LFM) focuses on the research, modification, and design of foundation models (FM) based on the model interface.
The study of LFM techniques can be broadly categorized into five major areas: model tuning, model distillation, model reuse, meta-learning, and model editing.
This paper gives a comprehensive review of the current methods based on FM from the perspective of LFM.
arXiv Detail & Related papers (2023-10-12T10:20:36Z)
- Model-Based Machine Learning for Communications [110.47840878388453]
We review existing strategies for combining model-based algorithms and machine learning from a high-level perspective.
We focus on symbol detection, which is one of the fundamental tasks of communication receivers.
arXiv Detail & Related papers (2021-01-12T19:55:34Z)
- Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures that learn to operate from data and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
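One common way to combine a known mathematical model with a data-driven component, in the spirit of the entry above, is deep unfolding: a classical iterative algorithm is unrolled for a fixed number of steps and some of its parameters are learned from data. The sketch below uses an assumed synthetic linear model and illustrative dimensions; it is not the architecture proposed in the paper.

```python
# Sketch in the spirit of model-based deep learning (not the paper's architecture):
# gradient-descent iterations for the known linear model y = A x + noise are
# unrolled for a fixed number of steps, and the per-iteration step sizes are
# learned from data. All dimensions and training settings are illustrative.
import torch

torch.manual_seed(0)
m, n, K = 80, 50, 10                     # measurements, unknowns, unrolled iterations
A = torch.randn(m, n) / m ** 0.5         # known forward model

class UnrolledGD(torch.nn.Module):
    def __init__(self, A, num_iters):
        super().__init__()
        self.A = A
        self.steps = torch.nn.Parameter(0.1 * torch.ones(num_iters))  # learned step sizes

    def forward(self, y):
        x = torch.zeros(self.A.shape[1])
        for eta in self.steps:
            x = x - eta * self.A.T @ (self.A @ x - y)   # model-based gradient step
        return x

net = UnrolledGD(A, K)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(300):
    x_true = torch.randn(n)
    y = A @ x_true + 0.01 * torch.randn(m)
    loss = torch.mean((net(y) - x_true) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"reconstruction MSE after training: {loss.item():.4f}")
```

The forward operator A carries the physical/mathematical knowledge, while only the iteration parameters are learned, which is the general trade-off these hybrid methods explore.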
- A Perspective on Machine Learning Methods in Turbulence Modelling [0.0]
This work presents a review of the current state of research in data-driven turbulence closure modeling.
We stress that consistency of the training data, the model, the underlying physics and the discretization is a key issue that needs to be considered for a successful ML-augmented modeling strategy.
arXiv Detail & Related papers (2020-10-23T08:19:30Z)
- Graph signal processing for machine learning: A review and new perspectives [57.285378618394624]
We review a few important contributions made by GSP concepts and tools, such as graph filters and transforms, to the development of novel machine learning algorithms.
We discuss exploiting data structure and relational priors, improving data and computational efficiency, and enhancing model interpretability.
We provide new perspectives on future development of GSP techniques that may serve as a bridge between applied mathematics and signal processing on one side, and machine learning and network science on the other.
arXiv Detail & Related papers (2020-07-31T13:21:33Z)
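As a concrete illustration of the graph filters mentioned in the entry above, the sketch below applies a low-pass polynomial filter, built from a normalized adjacency matrix, to a noisy signal on a small synthetic graph. The graph construction, the signal, and the filter taps are assumptions for illustration only, not taken from the paper.

```python
# Sketch of a basic graph signal processing operation (illustrative, not from the
# paper): denoise a smooth graph signal with a low-pass polynomial filter built
# from the normalized adjacency matrix of a small synthetic graph.
import numpy as np

rng = np.random.default_rng(0)
N = 30
pos = rng.uniform(0, 1, (N, 2))                        # 2-D node layout
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
A = ((dist < 0.35) & (dist > 0)).astype(float)         # adjacency: connect nearby nodes

deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
S = D_inv_sqrt @ A @ D_inv_sqrt                        # normalized adjacency (graph shift operator)

x_true = np.sin(2 * np.pi * pos[:, 0])                 # smooth signal over the layout
x_noisy = x_true + 0.5 * rng.normal(size=N)

taps = [0.5, 0.3, 0.2]                                 # polynomial filter H = sum_k h_k S^k
H = sum(h * np.linalg.matrix_power(S, k) for k, h in enumerate(taps))
x_filtered = H @ x_noisy

print(f"noisy MSE:    {np.mean((x_noisy - x_true) ** 2):.3f}")
print(f"filtered MSE: {np.mean((x_filtered - x_true) ** 2):.3f}")
```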