A Disentangled Representation Learning Framework for Low-altitude Network Coverage Prediction
- URL: http://arxiv.org/abs/2507.14186v1
- Date: Sun, 13 Jul 2025 05:31:35 GMT
- Title: A Disentangled Representation Learning Framework for Low-altitude Network Coverage Prediction
- Authors: Xiaojie Li, Zhijie Cai, Nan Qi, Chao Dong, Guangxu Zhu, Haixia Ma, Qihui Wu, Shi Jin
- Abstract summary: The expansion of the low-altitude economy has underscored the significance of Low-Altitude Network Coverage (LANC) prediction for designing aerial corridors. We introduce a dual strategy comprising expert knowledge-based feature compression and disentangled representation learning. Experimental evaluation confirms the efficacy of our framework, yielding a 7% reduction in error compared to the best baseline algorithm.
- Score: 42.71119573226285
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The expansion of the low-altitude economy has underscored the significance of Low-Altitude Network Coverage (LANC) prediction for designing aerial corridors. While accurate LANC forecasting hinges on the antenna beam patterns of Base Stations (BSs), these patterns are typically proprietary and not readily accessible. Operational parameters of BSs, which inherently contain beam information, offer an opportunity for data-driven low-altitude coverage prediction. However, collecting extensive low-altitude road test data is cost-prohibitive, often yielding only sparse samples per BS. This scarcity results in two primary challenges: imbalanced feature sampling due to limited variability in high-dimensional operational parameters against the backdrop of substantial changes in low-dimensional sampling locations, and diminished generalizability stemming from insufficient data samples. To overcome these obstacles, we introduce a dual strategy comprising expert knowledge-based feature compression and disentangled representation learning. The former reduces feature space complexity by leveraging communications expertise, while the latter enhances model generalizability through the integration of propagation models and distinct subnetworks that capture and aggregate the semantic representations of latent features. Experimental evaluation confirms the efficacy of our framework, yielding a 7% reduction in error compared to the best baseline algorithm. Real-network validations further attest to its reliability, achieving practical prediction accuracy with MAE at the 5 dB level.
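The abstract describes the framework only at a high level, so the sketch below is an assumption-laden illustration of how its two components might be wired together in PyTorch. The names `expert_feature_compression`, `pathloss_prior`, and `DisentangledCoveragePredictor` are hypothetical, the free-space path-loss formula stands in for the paper's unspecified propagation model, and the geometry/power feature split is an invented example of compressing operational parameters with communications expertise; none of this is the authors' released implementation.

```python
# Illustrative sketch only: names, feature choices, and the free-space prior
# below are assumptions for exposition, not the authors' implementation.
import math
import torch
import torch.nn as nn


def expert_feature_compression(op_params: torch.Tensor,
                               rx_xyz: torch.Tensor,
                               bs_xyz: torch.Tensor) -> torch.Tensor:
    """Compress raw operational parameters into a few physically meaningful
    features (hypothetical choice): 3-D distance, azimuth/elevation offsets
    from the BS boresight to the receiver, and transmit power."""
    tx_power, azimuth_deg, tilt_deg = op_params[:, 0], op_params[:, 1], op_params[:, 2]
    delta = rx_xyz - bs_xyz
    dist = delta.norm(dim=1).clamp(min=1.0)          # metres
    azi_rx = torch.atan2(delta[:, 1], delta[:, 0])   # rad, bearing to receiver
    ele_rx = torch.asin(delta[:, 2] / dist)          # rad, elevation to receiver
    d_azi = azi_rx - torch.deg2rad(azimuth_deg)
    d_ele = ele_rx - torch.deg2rad(-tilt_deg)
    return torch.stack([dist, d_azi, d_ele, tx_power], dim=1)


def pathloss_prior(dist_m: torch.Tensor, f_hz: float = 3.5e9) -> torch.Tensor:
    """Free-space path loss in dB, used here as a stand-in propagation prior."""
    return 20 * torch.log10(dist_m) + 20 * math.log10(f_hz) - 147.55


class DisentangledCoveragePredictor(nn.Module):
    """Separate subnetworks encode geometry- and power-related features into
    latent factors; an aggregation head predicts a residual on the prior."""

    def __init__(self, hidden: int = 32, latent: int = 8):
        super().__init__()
        self.geom_net = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                      nn.Linear(hidden, latent))
        self.power_net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                       nn.Linear(hidden, latent))
        self.head = nn.Sequential(nn.Linear(2 * latent, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        dist, d_azi, d_ele, tx_power = feats.unbind(dim=1)
        z_geom = self.geom_net(torch.stack([dist.log(), d_azi, d_ele], dim=1))
        z_power = self.power_net(tx_power.unsqueeze(1))
        residual = self.head(torch.cat([z_geom, z_power], dim=1)).squeeze(1)
        # Predicted received power [dBm] = Tx power - prior path loss + learned residual.
        return tx_power - pathloss_prior(dist) + residual
```

Anchoring the prediction to a physics prior and letting small per-factor subnetworks learn only a residual is one common way to keep a model generalizable when each BS contributes only sparse samples, which mirrors the motivation stated in the abstract.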
Related papers
- Debiasing Machine Learning Predictions for Causal Inference Without Additional Ground Truth Data: "One Map, Many Trials" in Satellite-Driven Poverty Analysis [3.4137115855910762]
We introduce and evaluate two correction methods that substantially reduce prediction bias without relying on newly collected labeled data. We demonstrate that the proposed methods meet or outperform existing approaches that either require (a) adjustments to training pipelines or (b) additional labeled data.
arXiv Detail & Related papers (2025-08-02T12:26:26Z) - Conformal Prediction for Zero-Shot Models [20.94974284175104]
We investigate the capabilities of CLIP models under the split conformal prediction paradigm. We propose Conf-OT, a transfer learning setting that operates transductively over the combined calibration and query sets.
arXiv Detail & Related papers (2025-05-30T15:16:19Z) - Soft ascent-descent as a stable and flexible alternative to flooding [6.527016551650139]
We propose a softened, pointwise mechanism called SoftAD that downweights points on the borderline, limits the effects of outliers, and retains the ascent-descent effect of flooding.
We demonstrate how SoftAD can realize classification accuracy competitive with flooding while enjoying a much smaller loss generalization gap and model norm.
arXiv Detail & Related papers (2023-10-16T02:02:56Z) - Small Object Detection via Coarse-to-fine Proposal Generation and Imitation Learning [52.06176253457522]
We propose a two-stage framework tailored for small object detection based on the Coarse-to-fine pipeline and Feature Imitation learning.
CFINet achieves state-of-the-art performance on the large-scale small object detection benchmarks, SODA-D and SODA-A.
arXiv Detail & Related papers (2023-08-18T13:13:09Z) - A Bayesian approach to quantifying uncertainties and improving generalizability in traffic prediction models [0.0]
We propose a Bayesian recurrent neural network framework for uncertainty quantification in traffic prediction with higher generalizability.
We show that normalization alters the training process of deep neural networks by controlling the model's complexity.
Our findings are especially relevant to traffic management applications, where predicting traffic conditions across multiple locations is the goal.
arXiv Detail & Related papers (2023-07-12T06:23:31Z) - WSSOD: A New Pipeline for Weakly- and Semi-Supervised Object Detection [75.80075054706079]
We propose a weakly- and semi-supervised object detection framework (WSSOD).
An agent detector is first trained on a joint dataset and then used to predict pseudo bounding boxes on weakly-annotated images.
The proposed framework demonstrates strong results on the PASCAL-VOC and MSCOCO benchmarks, achieving performance comparable to that obtained in fully-supervised settings.
arXiv Detail & Related papers (2021-05-21T11:58:50Z) - Robustness to Pruning Predicts Generalization in Deep Neural Networks [29.660568281957072]
We introduce prunability: the smallest fraction of a network's parameters that can be kept while pruning without adversely affecting its training loss.
We show that this measure is highly predictive of a model's generalization performance across a large set of convolutional networks trained on CIFAR-10.
arXiv Detail & Related papers (2021-03-10T11:39:14Z) - Towards Overcoming False Positives in Visual Relationship Detection [95.15011997876606]
We investigate the cause of the high false positive rate in Visual Relationship Detection (VRD).
This paper presents Spatially-Aware Balanced negative pRoposal sAmpling (SABRA) as a robust VRD framework that alleviates the influence of false positives.
arXiv Detail & Related papers (2020-12-23T06:28:00Z) - Uncertainty-Aware Deep Calibrated Salient Object Detection [74.58153220370527]
Existing deep neural network based salient object detection (SOD) methods mainly focus on pursuing high network accuracy.
These methods overlook the gap between network accuracy and prediction confidence, known as the confidence uncalibration problem.
We introduce an uncertainty-aware deep SOD network, and propose two strategies to prevent deep SOD networks from being overconfident.
arXiv Detail & Related papers (2020-12-10T23:28:36Z) - Characterising Bias in Compressed Models [20.901725971127973]
Pruning and quantization techniques achieve high levels of compression with negligible impact on top-line metrics; however, a small subset of examples bears a disproportionate share of the resulting error.
We call this subset Compression Identified Exemplars (CIE).
Although CIE is a relatively small subset, it accounts for an outsized share of the model's errors.
We propose its use as a human-in-the-loop auditing tool to surface a tractable subset of the dataset.
arXiv Detail & Related papers (2020-10-06T22:02:46Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)