Automated Model Design using Gated Neuron Selection in Telecom
- URL: http://arxiv.org/abs/2602.10854v1
- Date: Wed, 11 Feb 2026 13:40:48 GMT
- Title: Automated Model Design using Gated Neuron Selection in Telecom
- Authors: Adam Orucu, Marcus Medhage, Farnaz Moradi, Andreas Johnsson, Sarunas Girdzijauskas
- Abstract summary: This paper introduces TabGNS (Tabular Gated Neuron Selection), a novel gradient-based Neural Architecture Search (NAS) method for telecommunications networks. We demonstrate improvements in prediction performance while reducing the architecture size by 51-82% and reducing the search time by up to 36x. Integrating TabGNS into model lifecycle management enables automated design of neural networks throughout the lifecycle.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The telecommunications industry is experiencing rapid growth in adopting deep learning for critical tasks such as traffic prediction, signal strength prediction, and quality of service optimisation. However, designing neural network architectures for these applications remains challenging and time-consuming, particularly when targeting compact models suitable for resource-constrained network environments. Therefore, there is a need to automate the model design process to create high-performing models efficiently. This paper introduces TabGNS (Tabular Gated Neuron Selection), a novel gradient-based Neural Architecture Search (NAS) method specifically tailored for tabular data in telecommunications networks. We evaluate TabGNS across multiple telecommunications and generic tabular datasets, demonstrating improvements in prediction performance while reducing the architecture size by 51-82% and reducing the search time by up to 36x compared to state-of-the-art tabular NAS methods. Integrating TabGNS into model lifecycle management enables automated design of neural networks throughout the lifecycle, accelerating deployment of ML solutions in telecommunications networks.
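The core gated-neuron-selection idea can be sketched as follows. This is a minimal illustration of the general technique (learnable per-neuron gates that are trained jointly with the weights, after which low-gate neurons are pruned), not the paper's actual implementation; the class and parameter names are hypothetical.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class GatedLayer:
    """Toy dense layer where every neuron carries a learnable gate.

    In a gradient-based search, the gates would be trained jointly with
    the weights under a sparsity penalty; gates driven towards "closed"
    mark neurons that can be removed from the final architecture.
    """

    def __init__(self, in_dim, out_dim, gate_init=0.0):
        self.weights = [[0.1] * in_dim for _ in range(out_dim)]
        self.gates = [gate_init] * out_dim  # one scalar gate per neuron

    def forward(self, x):
        out = []
        for w, g in zip(self.weights, self.gates):
            pre = sum(wi * xi for wi, xi in zip(w, x))
            out.append(sigmoid(g) * pre)  # gate scales the neuron output
        return out

    def prune(self, threshold=0.5):
        """Keep only neurons whose (sigmoided) gate exceeds the threshold."""
        keep = [i for i, g in enumerate(self.gates) if sigmoid(g) > threshold]
        self.weights = [self.weights[i] for i in keep]
        self.gates = [self.gates[i] for i in keep]
        return keep

layer = GatedLayer(in_dim=4, out_dim=8)
# Pretend the search has driven half of the gates towards "closed".
layer.gates = [2.0, -3.0, 2.0, -3.0, 2.0, -3.0, 2.0, -3.0]
kept = layer.prune(threshold=0.5)
print(len(kept))  # 4 neurons survive, i.e. a 50% size reduction
```

Applied layer by layer, this kind of pruning is what yields the compact architectures the paper targets for resource-constrained network environments.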
Related papers
- Predicting Large-scale Urban Network Dynamics with Energy-informed Graph Neural Diffusion [51.198001060683296]
Networked urban systems facilitate the flow of people, resources, and services. Current models such as graph neural networks have shown promise but face a trade-off between efficacy and efficiency. This paper addresses this trade-off by drawing inspiration from physical laws to inform essential model designs.
arXiv Detail & Related papers (2025-07-31T01:24:01Z) - Improving Wi-Fi Network Performance Prediction with Deep Learning Models [0.9632663495317711]
This study makes use of machine learning techniques to predict channel quality in a Wi-Fi network in terms of the frame delivery ratio. Predictions can be used proactively to adjust communication parameters at runtime and optimize network operations for industrial applications.
arXiv Detail & Related papers (2025-07-15T10:18:32Z) - World Model-Based Learning for Long-Term Age of Information Minimization in Vehicular Networks [53.98633183204453]
In this paper, a novel world model-based learning framework is proposed to minimize packet-completeness-aware age of information (CAoI) in a vehicular network. A world model framework is proposed to jointly learn a dynamic model of the mmWave V2X environment and use it to imagine trajectories for learning how to perform link scheduling. In particular, the long-term policy is learned in differentiable imagined trajectories instead of environment interactions.
arXiv Detail & Related papers (2025-05-03T06:23:18Z) - Efficient Federated Learning Tiny Language Models for Mobile Network Feature Prediction [13.32608465848856]
In telecommunications, Autonomous Networks (ANs) automatically adjust configurations based on specific requirements (e.g. bandwidth, available resources). Here, Federated Learning (FL) allows multiple AN cells - each equipped with Neural Networks (NNs) - to collaboratively train models while preserving data privacy. We investigate NNCodec, an implementation of the ISO/IEC Neural Network Coding (NNC) standard, within a novel FL framework that integrates tiny language models (TLMs). Our experimental results on the Berlin V2X dataset demonstrate that NNCodec achieves transparent compression while reducing communication overhead to below 1%.
arXiv Detail & Related papers (2025-04-02T17:54:06Z) - Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z) - Improving the Real-Data Driven Network Evaluation Model for Digital Twin Networks [0.2499907423888049]
Digital Twin Networks (DTN) technology is expected to become the foundation technology for autonomous networks.
DTN has the advantage of being able to operate and system networks based on real-time collected data in a closed-loop system.
Various AI research and standardization work is ongoing to optimize the use of DTN.
arXiv Detail & Related papers (2024-05-14T09:55:03Z) - Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch [72.26822499434446]
Auto-Train-Once (ATO) is an innovative network pruning algorithm designed to automatically reduce the computational and storage costs of DNNs.
We provide a comprehensive convergence analysis as well as extensive experiments, and the results show that our approach achieves state-of-the-art performance across various model architectures.
arXiv Detail & Related papers (2024-03-21T02:33:37Z) - FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using in total around the 40% of the available hardware resources.
It reduces the classification time by three orders of magnitude, with a small 4.5% impact on accuracy, compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z) - Communication-Efficient Separable Neural Network for Distributed Inference on Edge Devices [2.28438857884398]
We propose a novel method of exploiting model parallelism to separate a neural network for distributed inferences.
Under proper specifications of devices and configurations of models, our experiments show that the inference of large neural networks on edge clusters can be distributed and accelerated.
arXiv Detail & Related papers (2021-11-03T19:30:28Z) - MS-RANAS: Multi-Scale Resource-Aware Neural Architecture Search [94.80212602202518]
We propose Multi-Scale Resource-Aware Neural Architecture Search (MS-RANAS).
We employ a one-shot architecture search approach in order to obtain a reduced search cost.
We achieve state-of-the-art results in terms of accuracy-speed trade-off.
arXiv Detail & Related papers (2020-09-29T11:56:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.